18 Apr 2026, Sat

White House chief of staff to meet with Anthropic CEO about dangerous new Mythos model, official says | Fortune

A White House official, speaking on condition of anonymity to discuss the planned engagement, confirmed that the administration has been proactively engaging with advanced AI laboratories about their latest models and the security protocols underpinning their software. The official emphasized a key caveat: any new technology proposed for federal government use would require an extensive technical evaluation. That scrutiny reflects a cautious approach to integrating tools with unprecedented capabilities, particularly in areas as sensitive as cybersecurity and national intelligence. The administration's outreach signals a pragmatic pivot from past tensions, prioritizing direct dialogue with the companies at the forefront of AI development.

The upcoming meeting carries significant historical weight, following a period of pronounced friction between the Trump administration and Anthropic, a company known for its safety-conscious philosophy. Anthropic has consistently advocated for robust guardrails in AI development, aiming to mitigate potentially catastrophic risks while maximizing the technology's economic and national security benefits for the United States. That commitment, however, became a flashpoint, illustrating the challenge of balancing innovation, national interest, and corporate responsibility in a rapidly evolving technological landscape.

The tension traces back to a specific dispute that escalated dramatically. President Donald Trump, in an unprecedented move, attempted to prohibit all federal agencies from using Anthropic's flagship chatbot, Claude. The directive, issued via a social media post in February, stemmed from the company's contract disagreements with the Pentagon. Trump's post bluntly declared that the administration "will not do business with them again!" — a stark illustration of executive displeasure spilling into procurement decisions. At the core of the disagreement was Anthropic's insistence on ethical stipulations governing the use of its AI.

Defense Secretary Pete Hegseth further amplified the conflict by seeking to declare Anthropic a supply chain risk, an extraordinary and potentially damaging designation rarely applied to a U.S. technology company. Anthropic vigorously challenged this classification in two federal courts, asserting its right to dictate terms of use for its proprietary technology. The company’s primary concern revolved around ensuring the Pentagon would not deploy its AI in fully autonomous weapons systems or for the surveillance of American citizens without appropriate safeguards. Anthropic’s leadership argued that without such assurances, the dual-use nature of advanced AI—its potential for both immense good and profound harm—made a cautious approach imperative. Hegseth, conversely, maintained that the company must permit any uses the Pentagon deemed lawful, underscoring a fundamental divergence in philosophy regarding military application of AI.

The legal battle provided a crucial check on executive power. In March, U.S. District Judge Rita Lin issued a significant ruling that effectively blocked the enforcement of Trump’s social media directive. Her decision served as a reminder that even in matters of national security and advanced technology, executive actions are subject to judicial review and must adhere to established legal processes, preventing unilateral bans based on social media pronouncements. This legal victory for Anthropic cleared the path, at least partially, for renewed federal engagement, setting the stage for discussions like the one planned with Susie Wiles. Anthropic, maintaining a discreet stance ahead of the critical dialogue, declined to comment publicly on the impending meeting, reflecting the sensitive nature of these high-level discussions.

The focal point of the upcoming conversation, Anthropic's new Mythos model, announced on April 7, is generating significant buzz and concern. The San Francisco-based AI company has described Mythos as "strikingly capable" — so capable that its distribution is currently limited to a select group of customers. The reason for the restricted access is startling: Mythos reportedly can surpass human cybersecurity experts at identifying and exploiting computer vulnerabilities. That claim elevates Mythos beyond an iterative improvement, positioning it as a potentially transformative, dual-use technology. On one hand, it could be an invaluable asset for defensive cybersecurity, identifying weaknesses before malicious actors can exploit them. On the other, its capabilities raise serious questions about misuse in offensive cyber operations, underscoring the ethical dilemmas inherent in advanced AI development.

While some industry observers and commentators have expressed skepticism, questioning whether Anthropic’s claims of overly powerful AI technology might be a strategic marketing maneuver, even the company’s staunchest critics acknowledge the potential for a genuine breakthrough. David Sacks, an influential figure who previously served as the White House’s AI and crypto czar, urged serious consideration of Anthropic’s assertions. "Anytime Anthropic is scaring people, you have to ask, ‘Is this a tactic? Is this part of their Chicken Little routine? Or is it real?’" Sacks mused on the "All-In" podcast he co-hosts with other prominent tech investors. "With cyber, I actually would give them credit in this case and say this is more on the real side." Sacks elaborated on the logical progression of AI capabilities: "It just makes sense that as the coding models become more and more capable, they are more capable at finding bugs. That means they’re more capable at finding vulnerabilities. That means they’re more capable at stringing together multiple vulnerabilities and creating an exploit." His analysis lends credence to Anthropic’s claims, highlighting the natural evolution of AI in complex problem-solving domains.

The implications of Mythos extend far beyond U.S. borders, capturing international attention. The United Kingdom’s AI Security Institute, a body at the forefront of evaluating AI risks, conducted its own assessment of the new model. Their findings reinforced the gravity of Anthropic’s claims, describing Mythos as a "step up" over previous models, which were already rapidly advancing. The institute’s report warned, "Mythos Preview can exploit systems with weak security posture, and it is likely that more models with these capabilities will be developed." This global recognition underscores the universal challenge posed by increasingly potent AI systems and the urgent need for international collaboration in governance and safety. Furthermore, Anthropic has been engaged in discussions with the European Union, a region actively pursuing comprehensive AI regulation, about its advanced models, including those not yet released in Europe, as confirmed by European Commission spokesman Thomas Regnier. These multinational dialogues reflect a collective understanding that AI development is a global endeavor with global consequences.

The initial report of the scheduled meeting between Wiles and Amodei came from Axios, highlighting the keen interest from political and tech observers alike. Recognizing the profound societal implications of Mythos, Anthropic announced a parallel initiative called Project Glasswing. This collaborative endeavor brings together a formidable alliance of technology giants, including Amazon, Apple, Google, and Microsoft, alongside major financial institutions like JPMorgan Chase. The ambitious goal of Project Glasswing is to fortify the world’s critical software infrastructure against the "severe" fallout that models like Mythos could potentially unleash on public safety, national security, and the global economy. This proactive industry-led effort signifies a shared acknowledgment of the immense power and responsibility accompanying advanced AI.

Jack Clark, Anthropic’s co-founder and policy chief, further expounded on the company’s strategy at the Semafor World Economy conference. "We’re releasing it to a subset of some of the world’s most important companies and organizations so they can use this to find vulnerabilities," Clark explained. He also provided a sobering perspective on the future trajectory of AI development. Mythos, while currently at the leading edge, is not an anomaly. "There will be other systems just like this in a few months from other companies, and in a year to a year-and-a-half later, there will be open-weight models from China that have these capabilities," Clark warned. "So the world is going to have to get ready for more powerful systems that are going to exist within it." This statement underscores the rapid pace of AI innovation, the competitive global landscape, and the imminent widespread availability of highly capable models, necessitating a proactive and adaptive approach from governments and industries worldwide.

The convergence of cutting-edge AI capabilities, past political tensions, and pressing national security concerns makes the Wiles-Amodei meeting a pivotal moment. It offers the White House firsthand insight into a technology poised to redefine cybersecurity, and it tests the waters for a new kind of partnership between government and the tech sector — one built on frank dialogue and a shared commitment to harnessing AI for the public good while guarding against its risks. The outcome of this and subsequent engagements will shape U.S. AI policy, influencing everything from defense strategy to economic regulation and setting precedents for how powerful AI is integrated into society.


O’Brien reported from Providence, R.I. AP business reporter Kelvin Chan contributed to this report from London.
