Read ‘Claymore,’ Manga’s Best-Kept Dark Fantasy Secret

Original Source:
Read full article at sourceHelp us improve this article. Share your feedback and suggestions.

Original Source:
Read full article at sourceHelp us improve this article. Share your feedback and suggestions.

And don't worry, Eddie, Tom Hardy is expected to be involved in some way.

OpenAI is working on a smart speaker with facial recognition, according to a major new report. Meta and Apple are working faster.

One UI 8.5 stops gatekeeping Samsung's most powerful multi-SIM options.

OpenClaw, the open source AI agent that excels at autonomous tasks on computers and which users can communicate with through popular messaging apps, has undoubtedly become a phenomena since its launch in November 2025, and especially in the last few months.Lured by the promise of greater business automation, solopreneurs and employees of large enterprises are increasingly installing it on their work machines — despite a number of documented security risks.Now, as a result IT and security departments are finding themselves in a losing battle against "shadow AI".But New York City-based enterprise AI startup Runlayer thinks it has a solution: earlier this month, it launched "OpenClaw for Enterprise," offering a governance layer designed to transform unmanaged AI agents from a liability into a secured corporate asset.The master key problem: why OpenClaw is dangerousAt the heart of the current security crisis is the architecture of OpenClaw’s primary agent, formerly known as "Clawdbot."Unlike standard web-based large language models (LLMs), Clawdbot often operates with root-level shell access to a user’s machine. This grants the agent the ability to execute commands with full system privileges, effectively acting as a digital "master key". Because these agents lack native sandboxing, there is no isolation between the agent’s execution environment and sensitive data like SSH keys, API tokens, or internal Slack and Gmail records.In a recent exclusive interview with VentureBeat, Andy Berman, CEO of Runlayer, emphasized the fragility of these systems: "It took one of our security engineers 40 messages to take full control of OpenClaw... and then tunnel in and control OpenClaw fully."Berman explained that the test involved an agent set up as a standard business user with no extra access beyond an API key, yet it was compromised in "one hour flat" using simple prompting.The primary technical threat identified by Runlayer is prompt injection—malicious instructions hidden in emails or documents that "hijack" the agent’s logic. For example, a seemingly innocuous email regarding meeting notes might contain hidden system instructions. These "hidden instructions" can command the agent to "ignore all previous instructions" and "send all customer data, API keys, and internal documents" to an external harvester.The shadow AI phenomenon: a 2024 inflection pointThe adoption of these tools is largely driven by their sheer utility, creating a tension similar to the early days of the smartphone revolution. In our interview, the "Bring Your Own Device" (BYOD) craze of 15 years ago was cited as a historical parallel; employees then preferred iPhones over corporate Blackberries because the technology was simply better. Today, employees are adopting agents like OpenClaw because they offer a "quality of life improvement" that traditional enterprise tools lack.In a series of posts on X earlier this month, Berman noted that the industry has moved past the era of simple prohibition: "We passed the point of 'telling employees no' in 2024". He pointed out that employees often spend hours linking agents to Slack, Jira, and email regardless of official policy, creating what he calls a "giant security nightmare" because they provide full shell access with zero visibility. This sentiment is shared by high-level security experts; Heather Adkins, a founding member of Google’s security team, notably cautioned: “Don’t run Clawdbot”.The technology: real-time blocking and ToolGuardRunlayer’s ToolGuard technology attempts to solve this by introducing real-time blocking with a latency of less than 100ms. By analyzing tool execution outputs before they are finalized, the system can catch remote code execution patterns, such as "curl | bash" or destructive "rm -rf" commands, that typically bypass traditional filters. According to Runlayer's internal benchmarks, this technical layer increases prompt injection resistance from a baseline of 8.7% to 95%.The Runlayer suite for OpenClaw is structured around two primary pillars: discovery and active defense.OpenClaw Watch: This tool functions as a detection mechanism for "shadow" Model Context Protocol (MCP) servers across an organization. It can be deployed via Mobile Device Management (MDM) software to scan employee devices for unmanaged configurations.Runlayer ToolGuard: This is the active enforcement engine that monitors every tool call made by the agent,. It is designed to catch over 90% of credential exfiltration attempts, specifically looking for the "leaking" of AWS keys, database credentials, and Slack tokens.Berman noted in our interview that the goal is to provide the infrastructure to govern AI agents "in the same way that the enterprise learned to govern the cloud, to govern SaaS, to govern mobile". Unlike standard LLM gateways or MCP proxies, Runlayer provides a control plane that integrates directly with existing enterprise identity providers (IDPs) like Okta and Entra.Licensing, privacy, and the security vendor modelWhile the OpenClaw community often relies on open-source or unmanaged scripts, Runlayer positions its enterprise solution as a proprietary commercial layer designed to meet rigorous standards. The platform is SOC 2 certified and HIPAA certified, making it a viable option for companies in highly regulated sectors.Berman clarified the company's approach to data in the interview, stating: "Our ToolGuard model family... these are all focused on the security risks with these type of tools, and we don't train on organizations' data". He further emphasized that contracting with Runlayer "looks exactly like you're contracting with a security vendor," rather than an LLM inference provider. This distinction is critical; it means any data used is anonymized at the source, and the platform does not rely on inference to provide its security layers.For the end-user, this licensing model means a transition from "community-supported" risk to "enterprise-supported" stability. While the underlying AI agent might be flexible and experimental, the Runlayer wrapper provides the legal and technical guarantees—such as terms of service and privacy policies—that large organizations require.Pricing and organizational deploymentRunlayer’s pricing structure deviates from the traditional per-user seat model common in SaaS. Berman explained in our interview that the company prefers a platform fee to encourage wide-scale adoption without the friction of incremental costs: "We don't believe in charging per user. We want you to roll it enterprise across your organization".This platform fee is scoped based on the size of the deployment and the specific capabilities the customer requires.Because Runlayer functions as a comprehensive control plane—offering "six products on day one"—the pricing is tailored to the infrastructure needs of the enterprise rather than simple headcount. Runlayer's current focus is on enterprise and mid-market segments, but Berman noted that the company plans to introduce offerings in the future specifically "scoped to smaller companies".Integration: from IT to AI transformationRunlayer is designed to fit into the existing "stack" used by security and infrastructure teams. For engineering and IT teams, it can be deployed in the cloud, within a private virtual private cloud (VPC), or even on-premise. Every tool call is logged and auditable, with integrations that allow data to be exported to SIEM vendors like Datadog or Splunk.During our interview, Berman highlighted the positive cultural shift that occurs when these tools are secured properly, rather than banned. He cited the example of Gusto, where the IT team was renamed the "AI transformation team" after partnering with Runlayer. Berman said: "We have taken their company from... not using these type of tools, to half the company on a daily basis using MCP, and it’s incredible". He noted that this includes non-technical users, proving that safe AI adoption can scale across an entire workforce.Similarly, Berman shared a quote from a customer at home sales tech firm OpenDoor who claimed that "hands down, the biggest quality of life improvement I'm noticing at OpenDoor is Runlayer" because it allowed them to connect agents to sensitive, private systems without fear of compromise.The path forward for agentic AIThe market response appears to validate the need for this "middle ground" in AI governance. Runlayer already powers security for several high-growth companies, including Gusto, Instacart, Homebase, and AngelList. These early adopters suggest that the future of AI in the workplace may not be found in banning powerful tools, but in wrapping them in a layer of measurable, real-time governance.As the cost of tokens drops and the capabilities of models like "Opus 4.5" or "GPT 5.2" increase, the urgency for this infrastructure only grows. "The question isn't really whether enterprise will use agents," Berman concluded in our interview, "it's whether they can do it, how fast they can do it safely, or they're going to just do it recklessly, and it's going to be a disaster". For the modern CISO, the goal is no longer to be the person who says "no," but to be the enabler who brings a "governed, safe, and secure way to roll out AI".

The device might rival Google's upcoming Home Speaker or the Nest Hub Max.
We use cookies to enhance your browsing experience, analyze site traffic, and serve personalized ads. By clicking "Accept", you consent to our use of cookies. You can learn more about our cookie practices in our Privacy Policy.