In a significant legal maneuver, artificial intelligence leader Anthropic has submitted two sworn declarations to a California federal court, directly challenging the Pentagon’s assertion that the company poses an "unacceptable risk to national security." The declarations, filed late Friday afternoon alongside Anthropic’s reply brief in its ongoing lawsuit against the Department of Defense, aim to dismantle the government’s case by highlighting what the AI firm describes as technical misunderstandings and arguments that were never presented during months of prior negotiations. This critical submission comes just ahead of a pivotal hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco.
The dispute erupted in late February when President Trump and Defense Secretary Pete Hegseth publicly announced the termination of ties with Anthropic. This drastic step was reportedly triggered by the company’s refusal to grant unrestricted military use of its advanced AI technology. The Pentagon’s subsequent "supply-chain risk designation," the first ever applied to an American company, has been framed by Anthropic as retaliatory and a violation of its First Amendment rights, stemming from its public stances on AI safety.
The two individuals who have provided sworn testimony are Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the company’s Head of Public Sector. Their declarations offer a detailed counter-narrative to the government’s claims, emphasizing a disconnect between the Pentagon’s public pronouncements and the substance of their prior discussions.
Sarah Heck, a seasoned figure with a background in national security policy, brings a unique perspective to the legal battle. Her career includes service at the White House on the National Security Council under the Obama administration, followed by roles at Stripe before joining Anthropic. At Anthropic, Heck spearheads the company’s engagement with governmental bodies and its policy initiatives. Crucially, she was personally present at a key meeting on February 24, where Anthropic CEO Dario Amodei met with Defense Secretary Hegseth and Under Secretary of Defense for Acquisition and Sustainment, Emil Michael.
In her declaration, Heck directly refutes what she identifies as a "central falsehood" in the government’s legal filings: the claim that Anthropic sought an "approval role" over military operations. "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," Heck explicitly stated in her sworn testimony. This assertion directly challenges the Pentagon’s narrative that Anthropic’s demands constituted an impediment to national security operations.
Furthermore, Heck highlighted another significant point of contention: the Pentagon’s concern that Anthropic could potentially disable or alter its technology mid-operation. According to Heck, this specific concern was never raised during the extensive negotiation period. Instead, she claims, it materialized for the first time in the government’s court filings, leaving Anthropic with no prior opportunity to address or rectify the perceived issue. This late introduction of a critical argument, she implies, undermines the legitimacy of the government’s claims.
Adding a layer of complexity and potential contradiction to the Pentagon’s stance, Heck’s declaration also brings to light a communication from Under Secretary Michael dated March 4. This was the day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. In this email to CEO Dario Amodei, Michael reportedly stated that the two parties were "very close" on resolving the very issues the government now cites as evidence of Anthropic being a national security threat: the company’s positions on autonomous weapons and the mass surveillance of Americans.
Heck has attached this email as an exhibit to her declaration, and its contents stand in stark contrast to public statements made by Pentagon officials in the subsequent days. On March 5, Amodei published a statement on Anthropic’s website, describing the company’s interactions with the Pentagon as "productive conversations." However, on March 6, Under Secretary Michael posted on X (formerly Twitter), stating, "there is no active Department of War negotiation with Anthropic." A week later, Michael reiterated this stance in an interview with CNBC, declaring there was "no chance" of renewed talks.
Heck’s presentation of this timeline strongly suggests a disconnect, if not an outright contradiction, between the Pentagon’s internal assessments of progress and its public declarations. The implication is clear: if Anthropic’s stance on autonomous weapons and mass surveillance was indeed the core reason for its designation as a national security threat, why was a senior Pentagon official signaling significant progress on those exact issues just after the designation was finalized? Heck stops short of explicitly accusing the government of using the designation as a bargaining chip, but the timeline she lays out places that question squarely before the court.
Complementing Heck’s policy and negotiation-focused testimony, Thiyagu Ramasamy brings a deep technical and operational expertise to Anthropic’s defense. Prior to joining Anthropic in 2025, Ramasamy spent six years at Amazon Web Services (AWS), where he was instrumental in managing AI deployments for government clients, including those operating within highly classified environments. At Anthropic, he has been a driving force behind the team responsible for integrating the company’s Claude models into national security and defense sectors. His leadership was crucial in securing the significant $200 million contract with the Pentagon, announced last summer, which aimed to advance responsible AI in defense operations.
In his declaration, Ramasamy directly tackles the government’s technical assertions regarding Anthropic’s potential to interfere with military operations. The Pentagon has claimed that Anthropic could theoretically disable the technology or alter its behavior mid-operation. Ramasamy unequivocally dismisses this as technically infeasible. He explains that once Anthropic’s Claude models are deployed within a government-secured, "air-gapped" system managed by a third-party contractor, Anthropic loses all direct access. There is no remote kill switch, no hidden backdoor, and no mechanism for pushing unauthorized updates. Ramasamy argues that any notion of an "operational veto" by Anthropic is a fabrication, as any modification to the AI model would necessitate the Pentagon’s explicit approval and active implementation.
Furthermore, Ramasamy clarifies that Anthropic has no visibility into the data processed by its models when deployed in such secure government environments. He states that the company cannot even ascertain what government users are typing into the system, let alone extract or misuse that sensitive data. This technical explanation directly counters the government’s implied concern about data exfiltration or unauthorized intelligence gathering facilitated by Anthropic’s technology.
Ramasamy also addresses the government’s claim that Anthropic’s hiring of foreign nationals constitutes a security risk. He counters this by emphasizing that Anthropic employees working on sensitive projects have undergone rigorous U.S. government security clearance vetting – the same comprehensive background check process required for access to classified information. He further notes, "to my knowledge," Anthropic is unique among AI companies in that its cleared personnel are directly involved in building the AI models designed for deployment within classified environments. This suggests a proactive and robust approach to security vetting and personnel management within Anthropic, directly challenging the notion that foreign national employment inherently compromises national security.
Anthropic’s lawsuit itself centers on the argument that the supply-chain risk designation, unprecedented for a U.S. company, represents government retaliation for the company’s vocal advocacy for AI safety principles. The company contends that this action, disguised as a security measure, infringes upon its First Amendment rights to free speech and expression.
The Department of Defense, in a comprehensive 40-page filing submitted earlier in the week, vehemently rejected Anthropic’s framing of the dispute. The Pentagon asserted that Anthropic’s refusal to permit all lawful military uses of its technology was a purely business decision, not an exercise of protected speech. They maintained that the supply-chain risk designation was a straightforward national security determination, entirely divorced from any punitive intent or reaction to the company’s public views on AI safety.
The upcoming hearing before Judge Rita Lin is expected to delve deeply into these competing narratives. The sworn declarations from Heck and Ramasamy provide Anthropic with a powerful set of arguments and evidence to counter the Pentagon’s claims, emphasizing technical realities and challenging the timeline and veracity of the government’s stated motivations. The court will now weigh these declarations against the government’s assertions as it determines the course of this high-stakes legal battle between a leading AI company and the U.S. Department of Defense.
The broader context of this dispute highlights the complex and evolving relationship between the defense establishment and cutting-edge AI technologies. As militaries worldwide seek to leverage the power of artificial intelligence for strategic advantage, the ethical considerations, safety protocols, and control mechanisms surrounding these powerful tools have become paramount. Anthropic’s stance, rooted in its commitment to AI safety and responsible development, has placed it at the forefront of this debate, pushing the boundaries of how AI can and should be integrated into sensitive government operations. The outcome of this lawsuit could set significant precedents for future collaborations and regulatory frameworks governing AI in national security.