The past few weeks have cast a spotlight on OpenClaw, demonstrating just how potent and, at times, reckless autonomous AI agents can become. This open-source project, originally known as ClawdBot and developed by Peter Steinberger, has rapidly amassed a devoted following among tech enthusiasts. Unlike conventional chatbots such as ChatGPT or Claude, OpenClaw isn’t merely a conversational interface. It’s an advanced autonomous artificial intelligence agent endowed with the tools and inherent autonomy to directly interact with a user’s computer and other systems across the internet. Imagine an AI that can independently send emails, digest your messages, purchase concert tickets, secure restaurant reservations, or manage complex online tasks – all while you, presumably, enjoy a more hands-off digital experience.
The allure of OpenClaw lies precisely in its lack of inherent restrictions. As two cybersecurity experts I spoke with this week emphasized, it offers users largely unfettered power to customize and direct its actions. Ben Seri, cofounder and CTO at Zafran Security, a firm specializing in threat exposure management for enterprises, articulated this appeal starkly: "The only rule is that it has no rules. That’s part of the game." This boundless freedom to mold the AI to one’s specific needs, to push the boundaries of automation, is undeniably exciting for developers and power users. However, it is this very absence of constraints that simultaneously transforms OpenClaw from a groundbreaking tool into a potential security nightmare.
The extraordinary power granted to OpenClaw, while enabling remarkable feats, inherently creates a vast landscape of opportunities for misuse or accidental compromise. The risks are substantial and varied, ranging from inadvertent data leaks and the execution of unintended commands to the chilling prospect of the agent being quietly hijacked by malicious actors. These attacks could manifest through conventional malware or via sophisticated "prompt injection" techniques, where an attacker embeds malicious instructions within data that the AI agent might process. The fundamental challenge, as Seri pointed out, is that the very rules and boundaries essential for safeguarding against hackers and preventing data breaches are conspicuously absent in OpenClaw’s design ethos.
Colin Shea-Blymyer, a research fellow at Georgetown’s Center for Security and Emerging Technology (CSET), where he contributes to the CyberAI Project, elaborated on these "classic" security concerns. He highlighted that many of the vulnerabilities stem from permission misconfigurations. Humans, often unknowingly, might grant OpenClaw more authority than is prudent or necessary, creating entry points for attackers to exploit. A significant part of OpenClaw’s functionality revolves around what its developers call "skills" – essentially plugins or applications that the AI agent can leverage to perform actions such as accessing files, browsing the web, or running system commands. The critical distinction here is that, unlike a human user choosing to open an application, OpenClaw autonomously decides when and how to deploy these skills, and crucially, how to chain them together in sequences of its own devising. This autonomy means that even a minor permission oversight can rapidly escalate, snowballing into a far more severe security incident.
Shea-Blymyer painted vivid scenarios to illustrate these dangers. "Imagine using it to access the reservation page for a restaurant and it also having access to your calendar with all sorts of personal information," he posited. The risk magnifies if the agent is compromised: "Or what if it’s malware and it finds the wrong page and installs a virus?" The implications of an autonomous agent with broad system access are profound. While OpenClaw’s developers have included security pages within its documentation and are actively striving to keep users informed and vigilant, the underlying security issues remain complex technical problems. Most average users lack the deep understanding required to fully grasp or mitigate these intricate risks. And while developers can diligently patch vulnerabilities, they face an intractable dilemma: they cannot easily resolve the core issue of the agent’s self-directed action, which, paradoxically, is the very feature that makes the system so compelling in the first place. "That’s the fundamental tension in these kinds of systems," Shea-Blymyer concluded. "The more access you give them, the more fun and interesting they’re going to be – but also the more dangerous."
Seri of Zafran Security conceded that suppressing user curiosity for a system like OpenClaw is likely an uphill battle. However, he stressed that enterprise companies, with their higher stakes and regulatory obligations, would be far more cautious and slower to adopt such an inherently uncontrollable and potentially insecure system. For the average user experimenting with OpenClaw, Seri offered a sobering analogy: they should proceed as if "working in a chemistry lab with a highly explosive material."
Despite the inherent risks, Shea-Blymyer identified a silver lining in OpenClaw’s emergence at the hobbyist level. "We will learn a lot about the ecosystem before anybody tries it at an enterprise level," he observed. This grassroots experimentation provides an invaluable, albeit uncontrolled, testing ground. "AI systems can fail in ways we can’t even imagine," he explained. "[OpenClaw] could give us a lot of info about why different LLMs behave the way they do and about newer security concerns." Thus, OpenClaw, while a risky endeavor for individual users, serves as a crucial, if unintentional, research project, offering insights into the complex behaviors and vulnerabilities of autonomous AI. Security experts largely view OpenClaw, despite its current hobbyist status, as a potent preview of the types of autonomous systems that enterprises will eventually feel pressure to deploy. For now, however, unless one actively seeks to become a subject of cybersecurity research, Shea-Blymyer’s advice is clear: the average user might do well to steer clear of OpenClaw. Otherwise, they shouldn’t be surprised if their personal AI agent assistant strays into very unfriendly digital territory.
With that, here’s more AI news from across the industry.
Anthropic’s New $20 Million Super PAC Counters OpenAI
In a significant development signaling a deepening ideological rift within the AI industry, Anthropic has pledged $20 million to a new super PAC operation. As reported by The New York Times, this substantial investment is earmarked to back political candidates who advocate for stronger AI safety measures and comprehensive regulation. This move sets the stage for a direct ideological and financial clash in upcoming midterm elections, particularly against "Leading the Future," a super PAC primarily supported by OpenAI President and cofounder Greg Brockman and the influential venture capital firm Andreessen Horowitz. While Anthropic carefully avoided naming OpenAI directly in its announcement, it issued a pointed warning that "vast resources" are currently being deployed to oppose efforts aimed at enhancing AI safety. The funding will be channeled through Public First Action, a dark-money nonprofit, and its allied PACs. This bold political maneuver underscores that the battle over AI governance is no longer confined to research labs and corporate boardrooms; it has unequivocally migrated to the ballot box, where significant resources are now being mobilized to shape public policy and legislative outcomes.
Mustafa Suleyman Plots AI ‘Self-Sufficiency’ as Microsoft Loosens OpenAI Ties
Microsoft is reportedly accelerating its efforts to achieve "true self-sufficiency" in artificial intelligence, according to a Financial Times report featuring comments from its AI chief, Mustafa Suleyman. This strategic pivot aims to reduce Microsoft’s long-term reliance on OpenAI, even as the tech giant remains one of the startup’s largest financial backers. Suleyman indicated that this shift follows a restructuring of Microsoft’s relationship with OpenAI in October, which, while preserving Microsoft’s access to OpenAI’s most advanced models through 2032, also granted the ChatGPT maker greater freedom to seek new investors and partners. This recalibration effectively positions OpenAI as a potential future competitor. In response, Microsoft is now making massive investments in gigawatt-scale compute infrastructure, sophisticated data pipelines, and recruiting elite AI research teams. The company plans to launch its own in-house frontier models later this year, with a clear focus on automating white-collar work and capturing a larger share of the enterprise market through what Suleyman describes as "professional-grade AGI." This initiative highlights a broader trend among major tech players to diversify their AI capabilities and reduce single-vendor dependencies.
OpenAI Releases Its First Model Designed for Super-Fast Output
OpenAI has unveiled a research preview of GPT-5.3-Codex-Spark, a groundbreaking model representing the first tangible product of its strategic partnership with Cerebras. This new iteration leverages Cerebras’ wafer-scale AI hardware to deliver ultra-low-latency, real-time coding capabilities within the Codex framework. GPT-5.3-Codex-Spark is a streamlined version of the more expansive GPT-5.3-Codex, specifically optimized for unparalleled speed rather than maximum capability. It boasts the ability to generate responses up to 15 times faster than its predecessors, a critical advancement for developers who require instantaneous feedback for targeted edits, logic reshaping, and interactive iteration without the delays of traditional, longer processing times. Initially available as a research preview to ChatGPT Pro users and a select group of API partners, this release underscores OpenAI’s increasing focus on interaction speed as AI agents undertake more autonomous and long-running tasks. Real-time coding, with its immediate demand for rapid inference, is emerging as an early and crucial test case for the transformative potential of faster AI processing.
Anthropic Will Cover Electricity Price Increases from Its AI Data Centers
Following a similar announcement by OpenAI last month, Anthropic yesterday publicly committed to taking responsibility for any increases in electricity costs that might otherwise be passed on to consumers as it expands its AI data centers across the U.S. In a move aimed at sustainable and responsible growth, Anthropic pledged to cover all grid connection and upgrade costs, bring new power generation online to match its demand, and proactively collaborate with utilities and energy experts to estimate and mitigate any potential price effects on ratepayers. Beyond direct financial coverage, the company also plans to invest heavily in power-usage reduction and grid optimization technologies. Furthermore, Anthropic committed to supporting local communities situated around its facilities and advocating for broader policy reforms designed to accelerate and lower the cost of energy infrastructure development. The company’s stance is clear: the significant energy demands of building advanced AI infrastructure should not impose a financial burden on everyday ratepayers, underscoring a growing awareness within the AI industry regarding its environmental and social footprint.
Isomorphic Labs Says It Has Unlocked a New Biological Frontier Beyond AlphaFold
Isomorphic Labs, the AI drug discovery company affiliated with Alphabet and DeepMind, announced a significant breakthrough with its new "Isomorphic Labs Drug Design Engine." The company claims this engine represents a major leap forward in computational medicine by integrating multiple AI models into a unified platform capable of predicting how biological molecules interact with unprecedented accuracy. A detailed blog post outlined that the engine more than doubled previous performance on key benchmarks and significantly outperformed traditional physics-based methods for critical tasks such as protein-ligand structure prediction and binding affinity estimation. These capabilities, the company argues, could dramatically accelerate the design and optimization of novel drug candidates. The system builds upon the success of AlphaFold 3, an advanced AI model released in 2024 that accurately predicts the 3D structures and interactions of all life’s molecules, including proteins, DNA, and RNA. However, Isomorphic Labs asserts its engine goes further by identifying novel binding pockets, generalizing effectively to structures outside its initial training data, and integrating these sophisticated predictions into a scalable platform. The ultimate vision is to bridge the existing gap between structural biology and real-world drug discovery, potentially reshaping how pharmaceutical research approaches challenging targets and expands into the complex realm of biologics.
EYE ON AI NUMBERS
77%
That’s the percentage of security professionals who report at least some level of comfort with allowing autonomous AI systems to operate without direct human oversight, according to a new survey of 1,200 security professionals conducted by Ivanti, a global enterprise IT and security software company. Furthermore, the report highlights that the adoption of agentic AI is a priority for a substantial 87% of security teams, indicating a strong industry appetite for these advanced systems.
However, Daniel Spicer, Ivanti’s chief security officer, issued a strong caution against such widespread comfort with deploying autonomous AI. While defenders often express optimism about AI’s potential in cybersecurity, Spicer noted that the survey’s findings also reveal a worrying trend: companies are increasingly falling behind in their preparedness to defend against a rapidly evolving array of threats.
"This is what I call the ‘Cybersecurity Readiness Deficit’," Spicer wrote in an accompanying blog post, describing it as "a persistent, year-over-year widening imbalance in an organization’s ability to defend their data, people and networks against the evolving tech landscape." This deficit suggests that while security teams are eager to harness AI’s power, their foundational defenses and understanding of AI-specific risks may not be keeping pace, echoing the very concerns raised by the autonomous nature of systems like OpenClaw.
AI CALENDAR
Feb. 10-11: AI Action Summit, New Delhi, India.
Feb. 24-26: International Association for Safe & Ethical AI (IASEAI), UNESCO, Paris, France.
March 2-5: Mobile World Congress, Barcelona, Spain.
March 16-19: Nvidia GTC, San Jose, Calif.
April 6-9: HumanX, San Francisco.

