22 Feb 2026, Sun

Runlayer Unveils Enterprise Solution to Tame "Shadow AI" Driven by OpenClaw’s Explosive Growth

The landscape of artificial intelligence in the workplace has been dramatically reshaped since November 2025 with the meteoric rise of OpenClaw, an open-source AI agent designed for autonomous tasks on computers and accessible via popular messaging applications. In recent months, its adoption has surged, transforming from a niche tool into a widespread phenomenon, particularly among solopreneurs and employees within large enterprises eager for enhanced business automation. This widespread adoption, however, comes with a significant caveat: a growing number of documented security risks, creating a burgeoning "shadow AI" problem that is leaving IT and security departments struggling to maintain control. In response, New York City-based enterprise AI startup Runlayer has launched "OpenClaw for Enterprise," a comprehensive governance layer aimed at converting these unmanaged AI agents from potential liabilities into secure, valuable corporate assets.

At the core of the current security predicament lies the fundamental architecture of OpenClaw’s primary agent, formerly known as "Clawdbot." Unlike traditional web-based large language models (LLMs) that operate within contained environments, Clawdbot frequently possesses root-level shell access to a user’s machine. This elevated privilege level grants the agent the ability to execute commands with the full authority of the system, essentially acting as a digital "master key." The critical vulnerability stems from the absence of native sandboxing within these agents, which creates a dangerous lack of isolation between the agent’s operational environment and highly sensitive data. This includes, but is not limited to, SSH keys, API tokens, and internal records from communication platforms like Slack and Gmail, leaving them exposed to potential compromise.

Andy Berman, CEO of Runlayer, highlighted the inherent fragility of these systems in an exclusive interview with VentureBeat, detailing a stark demonstration of this vulnerability. "It took one of our security engineers 40 messages to take full control of OpenClaw… and then tunnel in and control OpenClaw fully," Berman recounted. He explained that even when configured as a standard business user with only an API key, the agent was compromised within an astonishing "one hour flat" through the use of simple prompting techniques. The primary technical threat identified by Runlayer is prompt injection, a sophisticated attack vector where malicious instructions are embedded within seemingly innocuous communications, such as emails or documents. These hidden commands can effectively "hijack" the agent’s intended logic, instructing it to disregard prior commands and exfiltrate sensitive information, including customer data, API keys, and internal documents, to an external harvesting server.

The current surge in the adoption of these advanced AI agents can be attributed to their undeniable utility, mirroring a trend reminiscent of the early days of the smartphone revolution. Berman drew a parallel to the "Bring Your Own Device" (BYOD) craze that emerged approximately 15 years ago, where employees favored the superior functionality of iPhones over corporate Blackberries. Today, employees are embracing agents like OpenClaw because they offer a tangible "quality of life improvement" and enhanced productivity that traditional enterprise tools often lack. In a series of posts on X earlier this month, Berman observed that the industry has reached an inflection point where outright prohibition is no longer a viable strategy. "We passed the point of ‘telling employees no’ in 2024," he stated, emphasizing that employees often dedicate significant time to integrating these agents with their work tools, such as Slack, Jira, and email, irrespective of official IT policies. This widespread, unmanaged integration creates what Berman describes as a "giant security nightmare" due to the inherent lack of visibility and the agents’ full shell access capabilities. This concern is echoed by prominent security experts; Heather Adkins, a foundational member of Google’s security team, notably issued a caution: "Don’t run Clawdbot."

Runlayer’s proprietary ToolGuard technology is engineered to address these security concerns by implementing real-time blocking with an impressive latency of less than 100 milliseconds. This advanced system scrutinizes tool execution outputs before they are finalized, enabling it to detect and intercept malicious remote code execution patterns, such as the dangerous "curl | bash" command or destructive "rm -rf" commands, which often bypass conventional security filters. Internal benchmarks conducted by Runlayer demonstrate that this technical layer dramatically enhances prompt injection resistance, elevating it from a baseline of 8.7% to a robust 95%. The Runlayer suite for OpenClaw is strategically structured around two core pillars: discovery and active defense. Berman elaborated on the company’s mission to provide the necessary infrastructure for governing AI agents, stating, "The goal is to provide the infrastructure to govern AI agents in the same way that the enterprise learned to govern the cloud, to govern SaaS, to govern mobile." Crucially, unlike standard LLM gateways or MCP proxies, Runlayer offers a control plane that integrates seamlessly with existing enterprise identity providers (IDPs) such as Okta and Entra, ensuring a unified and secure access management approach.

While the OpenClaw community typically relies on open-source or unmanaged scripting solutions, Runlayer positions its enterprise offering as a proprietary commercial layer designed to meet the stringent requirements of modern businesses. The platform has achieved SOC 2 and HIPAA certifications, making it a suitable and compliant solution for organizations operating in highly regulated sectors. Berman clarified Runlayer’s data handling practices, emphasizing, "Our ToolGuard model family… these are all focused on the security risks with these type of tools, and we don’t train on organizations’ data." He further stressed that engaging with Runlayer is akin to contracting with a security vendor, rather than an LLM inference provider. This critical distinction means that any data processed is anonymized at the source, and the platform does not depend on inference for its security capabilities. For end-users, this licensing model signifies a transition from managing community-supported risks to benefiting from enterprise-supported stability. While the underlying AI agent may offer flexibility and experimental capabilities, the Runlayer wrapper provides the essential legal and technical assurances, including comprehensive terms of service and robust privacy policies, that large organizations demand.

Runlayer’s pricing structure intentionally diverges from the conventional per-user seat model prevalent in SaaS offerings. Berman explained the company’s preference for a platform fee, designed to foster widespread adoption without the impediment of incremental costs. "We don’t believe in charging per user. We want you to roll it enterprise across your organization," he stated. This platform fee is meticulously scaled based on the size of the deployment and the specific suite of capabilities a customer requires. Functioning as a comprehensive control plane that delivers "six products on day one," Runlayer’s pricing is tailored to the intricate infrastructure needs of the enterprise rather than a simple headcount metric. While Runlayer’s current strategic focus is on the enterprise and mid-market segments, Berman indicated plans to introduce offerings specifically "scoped to smaller companies" in the future, broadening its accessibility.

Runlayer is meticulously engineered to integrate seamlessly into the existing technology stack utilized by security and infrastructure teams. For engineering and IT departments, it offers flexible deployment options, including cloud-based solutions, private virtual private clouds (VPCs), and even on-premise installations. Every tool call made by an AI agent is meticulously logged and auditable, with robust integrations that facilitate the seamless export of this data to leading SIEM (Security Information and Event Management) vendors such as Datadog and Splunk. During the interview, Berman highlighted the transformative cultural shift that occurs when these powerful AI tools are secured and managed effectively, rather than being outright banned. He cited the example of Gusto, a payroll and HR platform, where the IT department was rebranded as the "AI transformation team" following their partnership with Runlayer. "We have taken their company from… not using these type of tools, to half the company on a daily basis using MCP, and it’s incredible," Berman enthused, noting the widespread adoption across both technical and non-technical users, proving that safe AI adoption can indeed scale across an entire workforce. Similarly, Berman shared a powerful endorsement from a customer at OpenDoor, a home sales tech firm, who claimed, "Hands down, the biggest quality of life improvement I’m noticing at OpenDoor is Runlayer" because it empowered them to connect AI agents to sensitive, private systems without the pervasive fear of compromise.

The market’s enthusiastic response appears to validate Runlayer’s approach as the crucial "middle ground" in AI governance. The company is already providing security infrastructure for several high-growth companies, including prominent names like Gusto, Instacart, Homebase, and AngelList. These early adopters strongly suggest that the future of AI integration in the workplace will not be characterized by the prohibition of powerful tools, but rather by their intelligent wrapping within a framework of measurable, real-time governance. As the cost of AI tokens continues to decline and the capabilities of advanced models such as "Opus 4.5" or "GPT 5.2" rapidly increase, the urgency for robust AI governance infrastructure only intensifies. "The question isn’t really whether enterprise will use agents," Berman concluded, "it’s whether they can do it, how fast they can do it safely, or they’re going to just do it recklessly, and it’s going to be a disaster." For the modern Chief Information Security Officer (CISO), the paradigm is shifting; the objective is no longer to be the gatekeeper who says "no," but rather to become the enabler who facilitates a "governed, safe, and secure way to roll out AI."

By admin

Leave a Reply

Your email address will not be published. Required fields are marked *