Hundreds of technology workers have signed an open letter asking the Department of Defense (DOD) to retract its designation of Anthropic, the artificial intelligence research company, as a "supply chain risk," and urging Congress to investigate the administration's use of extraordinary authorities against an American technology firm. The signatories span the tech ecosystem, including employees of OpenAI, Slack, IBM, Cursor, Salesforce Ventures, and other companies and venture capital firms. The letter follows a high-stakes dispute between the DOD and Anthropic that escalated after the AI lab declined to grant the military unrestricted access to its AI systems.
At the heart of the conflict are two conditions Anthropic refuses to drop: its technology may not be used for mass surveillance of American citizens, and it may not power autonomous weapons that make lethal targeting and firing decisions without direct human oversight. The DOD said it had no immediate plans for either application, but maintained that it should not be bound by a vendor's usage terms. That disagreement proved insurmountable in negotiations.
The dispute escalated when, after Anthropic CEO Dario Amodei refused to yield to pressure reportedly coming from Defense Secretary Pete Hegseth, President Donald Trump issued a directive on Friday ordering federal agencies to stop using Anthropic's technology, with a six-month transition period. Hegseth added to the pressure in a post on X on Friday: "Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic." The statement signaled an attempt to cut Anthropic off from the Pentagon's entire procurement pipeline.
The open letter notes, however, that a social media post does not by itself confer the legal status of "supply chain risk." The government must conduct a formal risk assessment and notify Congress before imposing such measures, which would effectively bar Anthropic from working with any company that does business with the Pentagon. Anthropic has contested the move, writing in a blog post that the designation is "legally unsound" and that the company is prepared to "challenge any supply chain risk designation in court" — a sign of how seriously it views the action as an overreach of governmental power.
Many in the technology industry see the administration's move as retaliation for Anthropic's adherence to its principles. The open letter makes the concern explicit: "When two parties cannot agree on terms, the normal course is to part ways and work with a competitor. This situation sets a dangerous precedent. Punishing an American company for declining to accept changes to a contract sends a clear message to every technology company in America: accept whatever terms the government demands, or face retaliation." The worry is that such actions will chill both innovation and ethical development across the AI sector.
Beyond Anthropic's immediate situation, much of the industry's unease centers on the potential for government misuse of AI, particularly for surveillance. Boaz Barak, a researcher at OpenAI, wrote in a social media post on Monday that preventing governments from using AI for mass surveillance is his "personal red line" and that it "should be all of ours" — a sign of growing recognition within the AI community that robust safeguards are needed against misuse of AI by state actors.
In a twist that underscores the competitive landscape, OpenAI announced shortly after President Trump's public criticism of Anthropic that it had negotiated its own agreement to deploy its models in the Pentagon's classified environments. The announcement, made on March 1st, 2026, reflects the intense demand for advanced AI within national security circles. OpenAI CEO Sam Altman has nonetheless said publicly that his company shares the same ethical "red lines" as Anthropic, signaling a commitment to responsible deployment even as it secures government contracts.
Barak expressed hope that the week's controversies might push the industry toward a more proactive stance. "If anything good can come out of the events of the last week, it would be if we in the AI industry start treating the issue of using AI for government abuse and surveilling its own people as a catastrophic risk of its own right," he wrote, urging the industry to apply the same rigor here as it does elsewhere: "We have done a good job of evaluations, mitigations, and processes, for risks such as bioweapons and cyber security. Let's use similar processes here."
The open letter is an intervention in a fast-moving debate over AI, national security, and civil liberties. It defends Anthropic, but it also asserts a principle of fair dealing between the government and its domestic technology partners, calling for transparency and collaboration in place of coercion. That the signatories include employees of OpenAI, which has itself secured a DOD contract, adds nuance: the industry is not monolithic in its approach to government work, but it is united in its concern for ethical boundaries and fair practices.
A "supply chain risk" designation is not merely a bureaucratic label; it can trigger cascading restrictions on partnerships, investment, and market access. For a company that has built its identity around safety and responsible AI development, the move strikes directly at its core values and business model. The signatories argue the government's action is both disproportionate and counterproductive, with the risk of pushing AI development toward compliance at the expense of principle.
The reference to "Hegseth" in the context of the DOD’s actions suggests the involvement of a high-ranking official within the Department of Defense, likely an Undersecretary or Assistant Secretary with oversight of technology acquisition or national security policy. The specific individuals and departments involved will be crucial in understanding the full scope of this policy shift and its potential long-term ramifications. The mention of President Trump’s direct involvement further elevates the political stakes, indicating that this is not solely a departmental decision but one with presidential backing.
The timing of OpenAI's DOD deal, announced amid the pressure on Anthropic, also raises questions about market dynamics and strategic maneuvering in the AI sector. Both companies profess the same ethical commitments, but their different postures toward government engagement could carry significant competitive consequences, and the industry will be watching whether the broader push for ethical AI can be sustained under these commercial and national security pressures.
The open letter, then, is more than a plea on Anthropic's behalf; it is a statement of principles for the relationship between AI developers and government. It calls for clear legal frameworks, robust oversight, and ethical commitments that are not subordinated to immediate national security imperatives. The appeal to Congress signals a desire for a more democratic and transparent process for deciding how AI is integrated into national security and public life, so that the pursuit of innovation does not come at the expense of fundamental rights. Further developments are likely in the coming weeks and months as the debate unfolds.