The case marks a first: the Department of Defense, which the Trump administration has informally styled the Department of War (DOW) since its recent renaming initiative, has labeled a prominent U.S.-based company a supply-chain risk to national security. That severe designation is traditionally reserved for foreign adversaries, state-sponsored entities, or organizations with demonstrable ties to hostile powers, not a domestic innovator like Anthropic, a leader in the burgeoning field of AI. The dispute grew out of a contentious contract negotiation that rapidly escalated from a disagreement over terms to a government-wide ban, casting a long shadow over the relationship between the U.S. government and its domestic tech sector.
At the heart of the contention was the DOW’s insistence on a blanket “all lawful use” clause within its contracts with Anthropic. This clause would have granted the military unfettered access to Anthropic’s advanced Claude AI tool for any legal purpose, a demand that the AI firm found deeply problematic. Anthropic, co-founded by Dario Amodei and known for its commitment to safe and ethical AI development, expressed grave reservations about the potential for its technology to be deployed in lethal autonomous warfare or extensive mass surveillance of American citizens. The company argued that its Claude tool had not been thoroughly tested for such applications and that its safety and reliability in these critical, high-stakes scenarios could not be guaranteed. Consequently, Anthropic attempted to embed explicit provisions into the contract forbidding such uses, aiming to establish clear ethical guardrails for its technology.
However, the Department of War deemed these proposed guardrails unacceptable. Military commanders, represented by the DOW, asserted that they required maximum latitude to make mission determinations, particularly in rapidly evolving operational environments where AI could play a crucial role. This fundamental clash between the military's demand for unconstrained utility and Anthropic's insistence on ethical limitations proved an insurmountable barrier in the negotiations.
The dispute quickly transcended mere contract terms, escalating into a public and governmental confrontation. On February 27, President Trump took to Truth Social, his preferred social media platform, to issue a direct and unequivocal directive: “EVERY federal agency” was instructed to “IMMEDIATELY CEASE” all use of Anthropic’s tools. Later that same day, Secretary of War Pete Hegseth amplified the administration’s stance, posting on X (formerly Twitter) to formally label Anthropic a “supply-chain risk.” Hegseth’s post went further, declaring that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” This sweeping ban effectively blacklisted Anthropic from any federal engagement and threatened to isolate it from its existing commercial partners who also work with the government.
In response to what it perceived as a punitive and retaliatory action, Anthropic filed a lawsuit on March 9. The company’s legal challenge asserted that the government had “retaliated against it” for expressing its views on safety guardrails for AI, thereby violating its rights under the First Amendment, which protects freedom of speech. Furthermore, Anthropic argued that the government’s actions circumvented the established procedures of the Administrative Procedure Act (APA), which mandates a fair and transparent process for federal agency actions, and infringed upon its Fifth Amendment right to due process, particularly concerning its business reputation and ability to operate.
During Tuesday’s hearing in the California federal court, the government, represented by Deputy Assistant Attorney General Eric Hamilton, defended the administration’s actions. Hamilton contended that the government’s decisions were a direct response to Anthropic’s refusal to agree to specific contractual terms. He argued vehemently that the government possesses unrestricted power to determine which companies it chooses to contract with, framing the issue as a matter of sovereign discretion rather than a suppression of free speech. Hamilton also raised a national security concern, suggesting that Anthropic’s stance and its control over its software could potentially lead to a “kill switch” scenario, where future software updates or changes in company policy might disable or impede the AI’s functionality during critical military operations. This, he argued, presented an unacceptable risk to national security and military readiness.
However, Judge Rita F. Lin expressed considerable skepticism regarding the government's broad claims of authority and the proportionality of its response. In her opening remarks, she characterized the case as a "fascinating public policy debate" over the fundamental tension between Anthropic's ethical position on AI development and the government's perceived military needs. Yet, she quickly clarified her judicial role: "My role isn't to decide who is right in that debate." Instead, Judge Lin emphasized that the court's primary task was to determine "whether the government violated the law" when it moved beyond simply choosing not to use Anthropic's AI services and instead implemented a far-reaching ban and a damaging "supply-chain risk" designation.
Judge Lin pointedly questioned the government's escalation, noting, "After Anthropic went public with this contracting dispute, defendants seemed to have a pretty big reaction to that." She highlighted the extreme nature of the government's measures, which included banning Anthropic from any government contract, meaning that even non-military entities like the National Endowment for the Arts could not use its tools for something as innocuous as designing a website. Furthermore, Secretary Hegseth's directive compelled any entity wishing to do business with the U.S. military to sever all commercial ties with Anthropic, effectively isolating the company from a significant portion of the economy. The designation as a supply-chain risk, usually reserved for entities posing existential threats, struck Judge Lin as particularly troubling.
“What is troubling to me about these reactions is that they don’t really seem to be tailored to the stated national security concern,” Judge Lin observed. She pondered aloud that if the concern was genuinely about chain of command or the reliability of Claude, the DOW could simply stop using the AI tool and seek out another vendor. The sweeping nature of the ban, she implied, suggested a motivation beyond mere risk mitigation. Drawing a stark analogy from one of the submitted amicus briefs, she remarked, “One of the amicus briefs used the term ‘attempted corporate murder.’ I don’t know if it’s murder, but it looks like an attempt to cripple Anthropic. And specifically my concern is whether Anthropic is being punished for criticizing the government’s contracting position in the press.” Her comments underscored a deep concern about potential government overreach and the chilling effect such actions could have on free speech and open debate within the tech industry.
The gravity of the case has drawn significant attention from a diverse array of stakeholders, reflected in the numerous amicus, or “friend-of-the-court,” briefs filed. These briefs have come from prominent tech companies, civil liberties groups, industry associations, and even former government officials, nearly all of whom support Anthropic’s position and its quest for an injunction against the supply-chain risk designation.
One particularly pointed brief, which Judge Lin referenced, was submitted by a coalition of investors and the "Freedom Economy Business Association." This brief echoed the sentiment of "attempted corporate murder," specifically citing an X post by Dean Ball, President Trump's former senior policy advisor for AI and emerging technology. Ball's post starkly warned, "Nvidia, Amazon, Google will have to divest from Anthropic if Hegseth gets his way. This is simply attempted corporate murder. I could not possibly recommend investing in American AI to any investor; I could not possibly recommend starting an AI company in the United States." This perspective highlights the profound economic implications of the government's actions, suggesting a severe chilling effect on investment and innovation in the critical U.S. AI sector. If the government can summarily blacklist a leading AI company for expressing ethical concerns, the argument goes, building a robust, competitive, and ethically minded domestic AI industry will become markedly harder.
The American Federation of Government Employees (AFGE), a union representing 800,000 federal workers, also filed an amicus brief. Their brief argued that the Trump administration has a discernible pattern of using national security concerns as a pretext for retaliation against free speech, lending weight to Anthropic’s First Amendment claims. This suggests a broader concern within the federal workforce about the government’s approach to dissent and its potential to stifle open discourse.
Microsoft, a tech giant with extensive ties to both the defense industry and the broader AI ecosystem, also weighed in. Its brief contended that a ban on Anthropic would not only harm Microsoft’s own business interests, given the interconnectedness of the tech supply chain, but would also create a significant chilling effect on future defense-industry investment and engagement with AI innovators. Microsoft’s intervention underscores the potential for this case to disrupt the delicate balance between national security and technological advancement, possibly deterring other companies from collaborating with the government on sensitive projects if they fear similar reprisals for raising ethical questions.
Engineers and researchers from other leading AI firms, including OpenAI and Google, also submitted briefs, largely in support of Anthropic. While their specific arguments varied, their collective voice signaled a shared concern within the AI research community about the precedent this case could set. Many in the field believe that ethical considerations are paramount in developing powerful AI, and government actions that punish companies for attempting to implement safety guardrails could undermine efforts to build responsible AI systems globally. This case could, therefore, profoundly influence the trajectory of AI ethics and governance, both domestically and internationally.
Finally, the Human Rights and Technology Justice Organization submitted a brief that, while not taking a definitive stance on who should prevail in court, strongly argued against the broad militarization of AI. Their brief articulated concerns that the unrestricted use of AI in warfare could lead to catastrophic human rights risks, including increased civilian casualties, reduced accountability, and the erosion of international humanitarian law. This perspective introduces a critical ethical dimension, urging the court and policymakers to consider the broader societal and global implications of unchecked military AI development.
As the tech industry, legal community, and policymakers await Judge Lin's forthcoming opinion, the case stands as a pivotal moment in defining the boundaries of governmental power, corporate responsibility, and the ethical development of artificial intelligence. The ruling, expected within days, will not only determine Anthropic's immediate fate but will also likely set a significant precedent for how U.S. technology companies, particularly those at the forefront of AI innovation, can engage with the government while upholding their ethical commitments and protecting their constitutional rights. The outcome will resonate far beyond the California courtroom, shaping the landscape for national security, technological innovation, and civil liberties in the digital age.

