Boston, MA – June 9, 2026 – OpenAI CEO Sam Altman announced late Friday that his company has reached a landmark agreement allowing the Department of Defense (DoD) – rebranded the Department of War (DoW) under the Trump administration – to use its advanced artificial intelligence models within the department's highly sensitive classified network. The development comes on the heels of a high-profile and contentious standoff between the Pentagon and OpenAI's chief competitor, Anthropic, over the ethical deployment of AI in military applications.
The dispute centered on the Pentagon's push for AI companies to allow their models to be used for "all lawful purposes." While this broad mandate was intended to ensure maximal utility of cutting-edge technology for national security, Anthropic sought to establish firm ethical boundaries, drawing a clear red line around mass domestic surveillance and fully autonomous weapons systems. This stance, lauded by many AI ethicists and employees within the tech sector, placed Anthropic at odds with the DoD's immediate operational requirements.
In a detailed statement released on Thursday, Anthropic CEO Dario Amodei articulated the company's position, asserting that Anthropic "never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner." However, Amodei emphasized a principled concern, arguing that "in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values." The statement underscored Anthropic's commitment to aligning technological advancement with fundamental societal principles, even at the cost of friction with defense procurement.
The ethical debate surrounding AI in warfare and surveillance resonated deeply within the AI development community. This week, an open letter signed by more than 60 OpenAI employees and roughly 300 Google employees urged their respective employers to align with Anthropic's stand. The letter underscored growing internal dissent within major AI firms over the potential for government misuse of their technologies, particularly in areas with profound implications for civil liberties and international stability.
Following the impasse in negotiations between Anthropic and the Pentagon, the situation escalated dramatically with direct intervention from the highest levels of the Trump administration. President Donald Trump publicly lambasted Anthropic in a social media post, characterizing the company's leadership as "Leftwing nut jobs." The condemnation was coupled with a directive for federal agencies to cease using Anthropic's products, with a six-month phase-out period — a clear signal of intent to leverage executive power to enforce the administration's preferred approach to AI in defense.
Adding further weight to the administration’s stance, Secretary of Defense Pete Hegseth took to social media to accuse Anthropic of attempting to "seize veto power over the operational decisions of the United States military." Hegseth’s statement was not merely rhetorical; he declared Anthropic a "supply-chain risk," issuing a sweeping decree: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." This designation effectively ostracized Anthropic from the defense industrial base, a severe blow to a company seeking to influence the ethical direction of AI in national security.
In response to these escalating developments, Anthropic issued a statement on Friday, indicating that it had "not yet received direct communication from the Department of War or the White House on the status of our negotiations." Despite the lack of direct engagement, the company remained resolute in its opposition to the supply chain risk designation, firmly stating its intention to "challenge any supply chain risk designation in court." This legal challenge underscored Anthropic’s commitment to defending its position and its belief in the importance of due process in such critical matters.
In a move that many observers found unexpected given the preceding conflict, Sam Altman took to X (formerly Twitter) to announce OpenAI's agreement with the DoD. What made the announcement particularly noteworthy were Altman's claims that OpenAI's new defense contract incorporated explicit protections addressing the very ethical concerns that had become a flashpoint for Anthropic. "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems," Altman stated in his post. He elaborated that "The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman further detailed OpenAI’s commitment to technical safeguards, asserting that the company "will build technical safeguards to ensure our models behave as they should, which the DoW also wanted." This included a commitment to deploy OpenAI engineers alongside Pentagon personnel "to help with our models and to ensure their safety." This integrated approach suggested a collaborative effort to embed ethical considerations directly into the operational deployment of AI.
Moreover, Altman articulated a broader ambition, stating, "We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept." He expressed a desire for de-escalation, advocating a shift "away from legal and governmental actions and towards reasonable agreements." The plea for industry-wide adoption of similar ethical frameworks positioned OpenAI as a would-be standard-setter for the entire sector.
Further details emerged from reporting by Fortune’s Sharon Goldman, who indicated that Altman informed OpenAI employees during an internal all-hands meeting that the government had indeed agreed to allow the company to develop its own "safety stack" designed to prevent misuse. Crucially, Altman reportedly conveyed that "if the model refuses to do a task, then the government would not force OpenAI to make it do that task." This provision appears to grant OpenAI a significant degree of control over the AI’s refusal capabilities, a critical element in ensuring human oversight and preventing unintended or unethical actions.
Altman's announcement of the Pentagon deal came shortly before news broke of a major escalation in geopolitical tensions: reports confirmed that the United States and Israeli governments had initiated bombing campaigns against Iran, with President Trump publicly advocating for the overthrow of the Iranian government. The agreement between OpenAI and the DoD therefore arrives at a moment of heightened global instability, underscoring the stakes of establishing robust ethical guardrails for AI in the defense sector. Whether its ethical framework will be adopted by other AI companies, as Altman hopes, remains to be seen amid a rapidly evolving geopolitical and technological landscape.