9 Mar 2026, Mon

A Pro-Human Declaration Emerges as Government Lags on AI Regulation

The recent high-profile fallout between Washington and AI firm Anthropic has exposed a critical void in governmental oversight: the absence of any cohesive regulatory framework for artificial intelligence development. Into that vacuum has stepped a bipartisan coalition of leading thinkers, scientists, and public figures, who have produced precisely what policymakers have so far failed to deliver: a comprehensive blueprint for responsible AI development. The initiative, known as the Pro-Human Declaration, was finalized shortly before the Pentagon’s public dispute with Anthropic, timing that, for those involved, underscored the urgency of its proposals.

Max Tegmark, the MIT physicist and AI researcher who played a pivotal role in organizing the Pro-Human Declaration, described a seismic shift in public sentiment in a recent conversation with this editor. "There’s something quite remarkable that has happened in America just in the last four months," Tegmark observed, referencing a dramatic surge in public awareness and concern. "Polling suddenly [is showing] that 95% of all Americans oppose an unregulated race to superintelligence." That widespread apprehension, Tegmark noted, signals that the nation is ready for serious consideration of AI’s societal implications.

The newly published Pro-Human Declaration, now endorsed by hundreds of distinguished experts, former government officials, and prominent public figures, begins with an unvarnished assessment of the precipice on which humanity now stands. It posits that civilization faces two divergent paths. The first, which the declaration starkly labels "the race to replace," foresees a future in which humans are progressively sidelined, first as laborers and then as decision-makers, as power consolidates within unaccountable institutions and their increasingly sophisticated artificial intelligence systems. The alternative path offers a vision of AI as a profound force for augmenting human potential, leading to unprecedented advancements and societal progress.

To navigate towards this more optimistic future, the Pro-Human Declaration outlines five foundational pillars for responsible AI development. Foremost among these is the principle of "keeping humans in charge," ensuring that ultimate control and oversight remain firmly in human hands. This is complemented by a commitment to "avoiding the concentration of power," preventing any single entity or group from wielding undue influence over AI technologies. Crucially, the declaration emphasizes the need to "protect the human experience," safeguarding the qualitative aspects of human life and consciousness from being diminished or exploited by AI. Furthermore, it champions the preservation of "individual liberty," ensuring that AI development does not infringe upon fundamental human rights and freedoms. Finally, it mandates that AI companies be held "legally accountable" for the impacts of their creations.

Among the declaration’s more robust and forward-thinking provisions are several specific proposals designed to preemptively address the most significant risks associated with advanced AI. It advocates for an outright prohibition on the development of superintelligence until a robust scientific consensus confirms its safety and until genuine democratic consensus has been established regarding its deployment. This cautious approach recognizes the profound, potentially existential risks that uncontrolled superintelligence could pose. Furthermore, the declaration calls for the mandatory implementation of "off-switches" on all powerful AI systems, providing a critical mechanism for immediate deactivation in the event of unforeseen or harmful behavior. It also proposes a ban on AI architectures capable of self-replication, autonomous self-improvement, or exhibiting resistance to shutdown, features that could rapidly escalate their capabilities and potential for unintended consequences.

The release of the Pro-Human Declaration arrives at a particularly salient moment, one that amplifies the urgency of its message. Just days before its publication, on the final Friday of February, the cost of governmental inertia on AI regulation was dramatically illustrated. Defense Secretary Pete Hegseth designated Anthropic, whose AI systems are already integrated into classified military platforms, as a "supply chain risk." The designation, typically reserved for companies with suspected ties to adversarial nations like China, was imposed after Anthropic refused to grant the Pentagon unrestricted access to its technology. In a swift and somewhat overshadowed development, OpenAI then secured its own agreement with the Department of Defense, an arrangement that legal experts have questioned for its enforceability and its potential for meaningful oversight. Together, these events starkly exposed the escalating costs of Congressional inaction in establishing clear rules for AI.

Dean Ball, a senior fellow at the Foundation for American Innovation, provided a concise and insightful analysis of the situation in remarks to The New York Times following these developments. "This is not just some dispute over a contract," Ball stated. "This is the first conversation we have had as a country about control over AI systems." His observation underscores that the Pentagon-Anthropic standoff was far more than a contractual disagreement; it represented a nascent national dialogue on who controls the future of artificial intelligence and under what conditions.

To illustrate the need for regulatory oversight, Tegmark offered a relatable analogy. "You never have to worry that some drug company is going to release some other drug that causes massive harm before people have figured out how to make it safe," he explained, drawing a parallel to the pharmaceutical industry’s established regulatory processes. "Because the FDA won’t allow them to release anything until it’s safe enough." The comparison highlights the stark contrast between the safety-first approach taken with medicines and the largely unregulated Wild West of AI development today.

While political maneuvering and turf wars in Washington rarely generate the kind of broad public pressure needed to enact significant legislative change, Tegmark identified a potentially powerful catalyst for action: child safety. He believes that focusing on the protection of children could be the pressure point that finally breaks the current legislative impasse. Indeed, the Pro-Human Declaration explicitly calls for mandatory pre-deployment testing of AI products, particularly those designed for younger users, such as chatbots and companion apps. This testing would specifically assess risks including increased suicidal ideation, the exacerbation of existing mental health conditions, and the potential for emotional manipulation.

Tegmark articulated this concern with a potent rhetorical question: "If some creepy old man is texting an 11-year-old pretending to be a young girl and trying to persuade this boy to commit suicide, the guy can go to jail for that," he stated. "We already have laws. It’s illegal. So why is it different if a machine does it?" This framing powerfully connects existing legal precedents for protecting children from malicious human actors to the urgent need for comparable safeguards against AI.

He expressed confidence that once the principle of mandatory pre-release testing is established for AI products aimed at children, its application will inevitably broaden. "People will come along and be like – let’s add a few other requirements," Tegmark predicted. "Maybe we should also test that this can’t help terrorists make bioweapons. Maybe we should test to make sure that superintelligence doesn’t have the ability to overthrow the U.S. government." This incremental approach, starting with a widely accepted concern like child safety, could pave the way for more comprehensive AI safety regulations across the board.

The breadth of support for the Pro-Human Declaration is particularly noteworthy, signifying a rare moment of bipartisan consensus on a critical technological issue. The document bears the signatures of individuals from across the political spectrum, including former Trump advisor Steve Bannon and Susan Rice, President Obama’s National Security Advisor. Their agreement alongside figures like former Joint Chiefs Chairman Mike Mullen and prominent progressive faith leaders underscores the declaration’s unifying message.

"What they agree on, of course, is that they’re all human," Tegmark remarked, highlighting the fundamental common ground that unites such diverse individuals. "If it’s going to come down to whether we want a future for humans or a future for machines, of course they’re going to be on the same side." This statement encapsulates the core ethos of the Pro-Human Declaration: a commitment to ensuring that artificial intelligence serves humanity’s best interests, rather than posing a threat to its future.

The Pro-Human Declaration, with its detailed framework and broad-based support, represents a significant step forward in the ongoing national conversation about AI governance. As the government grapples with the complex and rapidly evolving landscape of artificial intelligence, this citizen-led initiative offers a crucial roadmap, emphasizing the paramount importance of human control, accountability, and the preservation of fundamental human values in the age of intelligent machines.

This article was reported by Connie Loizos, who has been a fixture in Silicon Valley journalism since the late 1990s, beginning her career at the original Red Herring magazine. Previously the Silicon Valley Editor for TechCrunch, she was named Editor in Chief and General Manager in September 2023. Loizos is also the founder of StrictlyVC, a widely read daily e-newsletter and lecture series that was acquired by Yahoo in August 2023 and now operates as a sub-brand of TechCrunch. For verification or outreach, Connie can be contacted via email at [email protected] or [email protected], or through encrypted message on Signal at ConnieLoizos.53.
