San Francisco, CA – A stark rift has emerged between two leading artificial intelligence companies, Anthropic and OpenAI, following a controversial deal with the U.S. Department of Defense (DoD). Anthropic co-founder and CEO Dario Amodei has sent a scathing internal memo, obtained by The Information, directly attacking OpenAI’s agreement with the military, labeling it "safety theater" and accusing OpenAI chief Sam Altman of misleading the public. The dispute centers on the ethical implications of deploying AI in sensitive military applications, particularly domestic surveillance and autonomous weaponry.
The clash ignited when Anthropic, despite a pre-existing $200 million contract with the DoD, refused to ink a new agreement. Amodei’s company insisted on stringent safeguards, demanding that the Pentagon explicitly commit not to use Anthropic’s AI for domestic mass surveillance or the development of autonomous weapons. That principled stance did not deter OpenAI, which shortly thereafter announced its own deal with the DoD, with Sam Altman proclaiming that the contract included technical safeguards addressing the very red lines Anthropic had drawn.
Amodei, in his leaked memo, did not mince words. He stated flatly that OpenAI’s assurances were "straight up lies" and that Altman was falsely portraying himself as a "peacemaker and dealmaker." The Anthropic CEO drew a sharp distinction between the two companies’ actions, writing, "The main reason [OpenAI] accepted [the DoD’s deal] and we did not is that they cared about placating employees, and we actually cared about preventing abuses." The charge points to a fundamental divergence in corporate philosophy: Anthropic, in Amodei’s telling, prioritizes ethical considerations over commercial expediency, while OpenAI is more concerned with public perception and internal employee satisfaction.
The core of the disagreement lies in the interpretation of "any lawful use." Anthropic took issue with the DoD’s insistence on this broad clause, which could permit a wide range of applications, including some with significant privacy and ethical concerns. OpenAI, for its part, said publicly that its contract with the DoD allows its AI systems to be used for "all lawful purposes," and asserted that the DoD had clarified its intentions, indicating that mass domestic surveillance is considered illegal and is not part of its planned usage. OpenAI’s blog post stated explicitly, "We ensured that the fact that it is not covered under lawful use was made explicit in our contract."
Critics, and Anthropic itself, have raised valid concerns about the fluidity of legal definitions. Laws are not static; they can be amended or reinterpreted over time, raising the specter that what is "unlawful" today could become permissible tomorrow and leaving a loophole for the misuse of AI technology. That ambiguity is precisely what Anthropic sought to preempt through its more restrictive contract demands. The DoD was notably rebranded the Department of War under the Trump administration, a detail that adds a layer of gravity to the current negotiations and to the potential implications of AI in national security.
Public reaction appears to be overwhelmingly in favor of Anthropic’s more cautious approach. Reports show a 295% surge in ChatGPT uninstalls following OpenAI’s announcement of the DoD deal, suggesting a public backlash against the perceived ethical compromises. Amodei acknowledged this sentiment in his memo, noting that the public and media largely view OpenAI’s deal as "sketchy or suspicious" while casting Anthropic as the "heroes," and he pointed to Anthropic’s rise to #2 in the App Store as evidence of that support.
Amodei’s memo also revealed his strategic concerns: "I think this attempted spin/gaslighting is not working very well on the general public or the media… It is working on some Twitter morons, which doesn’t matter, but my main worry is how to make sure it doesn’t work on OpenAI employees." The passage underscores Amodei’s belief that OpenAI is deliberately working to manipulate public opinion, and that his immediate priority is countering that narrative within OpenAI’s own ranks, potentially to foster dissent or prompt a re-evaluation of the company’s ethical stance among its workforce.
The controversy unfolds against a backdrop of increasing governmental interest in harnessing the power of AI for national security and defense. Both the U.S. military and intelligence agencies are actively exploring the integration of AI across various domains, from intelligence gathering and analysis to logistics and operational planning. This burgeoning relationship between the tech sector and the defense establishment raises profound questions about the ethical boundaries of AI development and deployment. Companies like Anthropic and OpenAI, at the forefront of AI innovation, are increasingly finding themselves at the nexus of technological advancement and geopolitical imperatives.
Anthropic, founded by former OpenAI researchers, has consistently positioned itself as a company that prioritizes AI safety and ethical development; its stated mission is to build reliable, interpretable, and steerable AI systems. The confrontation with the DoD, and the subsequent public critique of OpenAI, further solidifies that image. Anthropic’s insistence on explicit contractual clauses against misuse, rather than reliance on broad interpretations of "lawful use," underscores its commitment to a more robust and proactive approach to AI governance. That stance is not merely ideological; it responds to escalating concerns about AI’s potential for unintended consequences and malicious application.
OpenAI, on the other hand, has navigated a more complex path. While publicly advocating responsible AI development, the company’s recent actions suggest a willingness to engage in partnerships that carry ethical risks, perhaps driven by a desire to maintain market leadership and secure lucrative government contracts. Sam Altman, a prominent figure in the AI landscape, has often spoken about both the transformative potential of AI and the existential risks it poses; the gap between those pronouncements and the company’s actions in this instance has drawn scrutiny and accusations of hypocrisy.
The "lawful use" clause, in particular, has become a focal point for debate. Legal scholars and ethicists have pointed out that national security interests can sometimes lead to the redefinition or relaxation of laws. For instance, during times of heightened national security, legislative frameworks governing surveillance or data collection can be expanded, potentially encompassing activities that were previously considered intrusive or illegal. This makes contractual agreements that rely solely on the current definition of "lawful use" inherently vulnerable to future changes, especially when dealing with powerful technologies like AI that can fundamentally alter the landscape of what is possible.
The public reaction, as evidenced by the surge in ChatGPT uninstalls, indicates a growing awareness and concern among ordinary citizens regarding the ethical implications of AI. This sentiment suggests that the narrative around AI is no longer confined to the technical elite but has permeated into broader public discourse. Companies that are perceived as prioritizing profit or strategic advantage over ethical considerations risk alienating a significant portion of their user base and facing reputational damage.
Amodei’s frustration with OpenAI’s public relations strategy is palpable. By framing OpenAI’s deal as "safety theater," he implies that the company is engaging in performative gestures of safety without genuine commitment. His desire to influence OpenAI employees suggests a belief that internal pressure could be a more effective lever for change than public outcry alone. This approach hints at a deeper understanding of corporate dynamics and the potential for internal dissent to shape a company’s direction.
The event, "Techcrunch event," scheduled for October 13-15, 2026, in San Francisco, CA, will likely be a crucial venue for further discussion and debate on these critical issues. As AI technology continues its rapid evolution, the ethical frameworks governing its development and deployment will become increasingly important. The divergence between Anthropic and OpenAI serves as a stark reminder of the challenges and complexities involved in ensuring that AI is developed and used for the benefit of humanity, rather than its detriment. The ongoing dialogue, fueled by such public disputes, is essential for shaping a future where AI innovation is guided by robust ethical principles and a genuine commitment to safety and accountability. The stakes are incredibly high, as the decisions made today will shape the trajectory of artificial intelligence and its impact on society for generations to come.

