In a remarkable shift of public sentiment toward ethical AI practices, Claude, the conversational AI developed by Anthropic, has seen a significant surge in daily active users and new mobile app installations. The upward trend follows the company’s principled stand against potential misuse of its technology by the Pentagon, a dispute that led the U.S. Department of Defense to label Anthropic a "supply-chain risk." Anthropic CEO Dario Amodei firmly refused to allow the government to use the company’s AI systems for mass surveillance of Americans or for the development of fully autonomous weapons. Data from leading app intelligence firms suggests the fallout has, paradoxically, galvanized consumer support.
Appfigures, a prominent provider of app market intelligence, reports that Claude’s U.S. mobile app downloads are now consistently outpacing those of its closest competitor, ChatGPT. As of March 2nd, Claude was estimated to be achieving approximately 149,000 daily downloads, compared with ChatGPT’s estimated 124,000 daily downloads over the same period. Download numbers, however, capture only initial user acquisition; the truer measure of an application’s engagement is its active user base.
On this critical metric, Similarweb, another market intelligence firm, offers compelling evidence of Claude’s growing popularity. Its data indicates that Claude’s apps on iOS and Android together registered 11.3 million daily active users on March 2nd. That represents a 183% increase from the beginning of the year, when usage hovered around 4 million daily active users, and a substantial jump from approximately 5 million at the start of February. This escalation in engagement coincides with the public discourse surrounding Anthropic’s ethical stance and its dealings with the Pentagon.

Claude’s growth in daily active users has positioned it ahead of several other prominent AI applications, including Perplexity and Microsoft Copilot. It has yet to surpass the overall daily active user count of ChatGPT, which, as of March 2nd, commanded 250.5 million daily active users across its iOS and Android apps. The timing of Claude’s surge, beginning in late February and coinciding with news of Anthropic’s contentious negotiations with the Pentagon, suggests the ethical controversy is a likely driver of the new adoption, though the data cannot establish causation outright. Should these trends persist through March, Claude is poised to climb further up the rankings and challenge its more established rivals.
Beyond mobile app engagement, Similarweb’s analysis also points to a significant uptick in Claude’s web traffic. While still trailing some of the industry’s leading AI providers in sheer volume, Claude’s web traffic grew a robust 43% month-over-month in February; year-over-year, the growth is even more pronounced at 297.7%. This expansion appears to be coming, at least in part, at ChatGPT’s expense: over the same February period, ChatGPT’s web traffic declined 6.5% month-over-month. Even Gemini, Google’s AI offering, managed only a modest 2.1% bump in web traffic, slower than its previous pace, suggesting a broader shift in user preference.
Anthropic itself has been actively promoting Claude’s momentum. The company says its AI chatbot is now attracting over 1 million new sign-ups per day, a figure reached shortly after Claude took the number one position on the U.S. App Store over the preceding weekend, a rank it continues to hold. Claude’s popularity extends beyond the United States: it currently tops the app stores of 15 other countries, including Australia, Austria, Belgium, Canada, Finland, France, Germany, Ireland, Italy, New Zealand, Norway, Portugal, Singapore, Switzerland, and the United Kingdom. The company also reports that Claude has broken its own daily signup records across all available markets every day since early last week, underscoring the sustained momentum of its user acquisition.
In stark contrast to Claude’s upward trajectory, earlier reports indicated a surge in ChatGPT app uninstalls. The divergence underscores how ethical considerations have moved to the forefront of public discourse around AI development and deployment.

The Pentagon’s decision to classify Anthropic as a "supply-chain risk" stemmed from a series of high-stakes discussions and negotiations that underscored a fundamental disagreement over the ethical boundaries of AI application. At the heart of the dispute was the Pentagon’s interest in leveraging advanced AI capabilities for potentially sensitive military operations, including intelligence gathering and the deployment of autonomous weapon systems. However, Anthropic, under the leadership of CEO Dario Amodei, maintained a steadfast commitment to its ethical principles, refusing to participate in applications that could lead to the indiscriminate surveillance of civilians or the development of AI-powered weaponry capable of lethal action without human oversight.
Amodei’s refusal, publicly documented in late February, was not an impulsive reaction but a calculated decision rooted in the company’s foundational values. Anthropic has consistently positioned itself as a proponent of "AI safety" and "responsible AI development," emphasizing the need for human control and ethical considerations in the advancement of artificial intelligence. This stance, while potentially costing the company immediate government contracts, has resonated with a growing segment of the public concerned about the societal implications of unchecked AI proliferation.
The designation of "supply-chain risk" by the Pentagon implies that the U.S. Department of Defense views Anthropic’s AI technology as a potential vulnerability within its operational infrastructure. This could encompass concerns about data security, the reliability of the AI’s decision-making processes in critical scenarios, or the potential for the technology to be influenced or compromised in ways that could undermine national security objectives. However, the public’s interpretation of this designation appears to have been different. Rather than viewing it as a genuine security concern, many consumers have perceived it as a testament to Anthropic’s integrity and its refusal to compromise its ethical framework for governmental or military expediency.
This perception has been amplified by the way the news has been framed. The narrative that has emerged is one of a principled AI company standing its ground against a powerful government entity that may have been seeking to exploit its technology for potentially harmful purposes. This David-and-Goliath dynamic has likely contributed to a surge in goodwill towards Anthropic and its AI assistant, Claude. Users, empowered by the growing awareness of AI’s potential impact, are increasingly seeking out technologies that align with their own ethical values, and Claude has emerged as a compelling choice.

The comparison with ChatGPT, while still showing OpenAI’s product with a larger overall user base, highlights a crucial nuance in user acquisition and retention. While ChatGPT benefits from early market entry and widespread brand recognition, Claude’s recent surge indicates a growing segment of users actively seeking alternatives that prioritize ethical development. The reported increase in ChatGPT app uninstalls further suggests that some users may be re-evaluating their choices, potentially influenced by the very controversies that have propelled Claude into the spotlight.
The sustained growth of Claude’s web traffic, coupled with its mobile app’s dominance in downloads and active users, paints a picture of a company that is not only technologically capable but also adept at navigating the complex ethical landscape of AI. Anthropic’s commitment to responsible AI development, exemplified by its stance against the Pentagon’s demands, appears to be translating into tangible market gains. As the public becomes more informed and discerning about the AI technologies it adopts, companies that, like Anthropic, prioritize ethical considerations and user trust are likely to find themselves in a stronger competitive position.
The implications of this trend extend beyond market share. It signals a potential shift in how consumers and governments alike perceive and interact with AI. The notion that ethical considerations can be a significant driver of user adoption, even to the point of influencing defense-related technology assessments, is a powerful indicator of evolving societal values. As AI integrates further into daily life, the companies that champion transparency, accountability, and ethical responsibility are likely to be the ones that build lasting trust and achieve sustainable success. Anthropic’s recent surge, catalyzed by its principled stand, serves as a compelling case study in this emerging reality.
Anthropic did not immediately respond to a request for comment at the time of publication. Even so, the data from Appfigures and Similarweb tells a clear story: a commitment to ethical AI is proving to be not just a moral imperative but a powerful driver of user growth and market influence in the rapidly evolving landscape of artificial intelligence. The fallout with the Pentagon, rather than hindering Anthropic, has seemingly served as an unexpected catalyst, propelling Claude into a new era of user engagement and public recognition.

