15 Feb 2026, Sun

OpenAI and Anthropic spark coding revolution as developers abandon traditional programming | Fortune

Last week marked a pivotal moment, with OpenAI and Anthropic releasing their respective coding models, GPT-5.3-Codex and Claude Opus 4.6, both of which represented significant leaps in AI coding capability. OpenAI’s GPT-5.3-Codex posted markedly higher scores across a comprehensive suite of coding benchmarks, generating correct, efficient, and well-documented code for complex problems. It moved beyond simple code snippets, showing a deeper understanding of architectural patterns, API integrations, and system-level design, and its improved reasoning let it tackle multi-file projects and abstract problems with a degree of autonomy previously out of reach. Anthropic’s Claude Opus 4.6, meanwhile, introduced a feature that lets users deploy autonomous AI agent teams: the agents decompose a complex project into smaller, manageable tasks, with each agent specializing in a different aspect, from front-end development and database design to API integration and security auditing, all working in parallel to accelerate the development cycle. Both models can write, test, and debug code with minimal human intervention, iterating on their own work and refining features before presenting polished results to developers, effectively closing the loop on much of the traditional software development lifecycle. In practice, this means AI can not only generate code but also proactively identify and fix errors, optimize performance, and suggest improvements grounded in the project’s goals.
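To make the agent-team idea concrete, here is a minimal, purely illustrative Python sketch of how a project might be decomposed into specialized tasks and fanned out to parallel agents. The AgentTask class, the run_agent helper, and the task breakdown are assumptions made for this example; they are not Anthropic’s actual API or the feature’s real interface.

```python
# Illustrative sketch only: a toy orchestrator that decomposes a project into
# specialized sub-tasks and runs one "agent" per task in parallel. The task
# list, AgentTask class, and run_agent behavior are hypothetical, not
# Anthropic's API.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

@dataclass
class AgentTask:
    role: str          # e.g. "frontend", "database", "api", "security-audit"
    instructions: str

def run_agent(task: AgentTask) -> str:
    # In a real system this would call a coding model/agent with the task's
    # instructions and return generated code, test results, or a report.
    return f"[{task.role}] completed: {task.instructions}"

def orchestrate(project_spec: str, tasks: list[AgentTask]) -> list[str]:
    # Fan the specialized tasks out to concurrent agents and collect results.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(run_agent, tasks))

if __name__ == "__main__":
    spec = "Build an internal dashboard for support-ticket analytics"
    tasks = [
        AgentTask("frontend", "Implement the dashboard UI"),
        AgentTask("database", "Design the ticket/metrics schema"),
        AgentTask("api", "Expose aggregation endpoints"),
        AgentTask("security-audit", "Review auth and input handling"),
    ]
    for result in orchestrate(spec, tasks):
        print(result)
```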

These releases, particularly the impressive capabilities of GPT-5.3-Codex, ignited an online existential crisis among software engineers, largely playing out on platforms like Twitter (now X), LinkedIn, and developer forums such as Hacker News. At the heart of this swirling debate was a viral essay written by Matt Shumer, CEO of OthersideAI, a company focused on AI-powered writing solutions. Shumer’s perspective, coming from a leader in the AI application space, resonated deeply. He described a moment where "something clicked" following the model releases, detailing how AI models are now capable of handling the entire development cycle autonomously. In his vision, developers merely describe desired outcomes, and the AI takes over—writing tens of thousands of lines of code, opening applications, rigorously testing features, and iterating until it deems the solution satisfactory, requiring little to no human oversight. Shumer starkly proposed that these advances meant AI could disrupt jobs more severely than the COVID-19 pandemic, which primarily shifted work locations rather than fundamentally altering the nature of work itself. The implication was clear: if AI could automate the core function of a software engineer, what professions were truly safe?

The essay drew intensely mixed reactions, reflecting the deep divisions within the tech community regarding AI’s immediate and long-term impact. Some influential tech leaders, including Reddit co-founder Alexis Ohanian, openly agreed with Shumer’s assessment, envisioning a future where human developers transition into roles of high-level architects and AI overseers, leveraging the immense productivity gains. Ohanian’s venture capitalist background likely predisposes him to see the disruptive potential and efficiency benefits of such technological leaps. However, others, notably NYU professor Gary Marcus, a prominent AI skeptic and author, sharply criticized it as "weaponized hype." Marcus, known for his consistent warnings about AI’s limitations and the tendency to overstate its capabilities, pointed out that Shumer provided no empirical data or rigorous studies supporting claims that AI can reliably write complex applications without errors, bugs, or security vulnerabilities, especially for novel, real-world problems. Marcus often argues that current AI models excel at pattern matching but lack true understanding, common sense, or the ability to reason about the world in a human-like way, making fully autonomous, error-free development a distant dream.

Adding another layer to the discussion, Fortune’s Jeremy Kahn also weighed in, arguing that coding’s unique characteristics—like the prevalence of automated testing, version control, and clearly defined success metrics—made it inherently easier to fully automate compared to other knowledge-work fields. For instance, testing a piece of code for functionality or performance is often an objective, quantifiable task, allowing AI to quickly receive feedback and iterate. In contrast, automating fields like legal drafting, scientific research, or creative writing involves subjective judgment, nuanced interpretation, and a lack of clear, universally agreed-upon "correct" answers, making full automation significantly more elusive. This perspective suggests that while coding might be on the front lines of AI disruption, it doesn’t necessarily set a precedent for all intellectual professions.

Despite the ongoing debate, for many engineers, some of Shumer’s warnings merely reflect their current reality. A growing number of engineers openly admit they have either significantly reduced their direct coding efforts or stopped coding entirely, instead relying on AI to generate, refine, and debug code under their direction. This shift isn’t a sudden phenomenon; developers acknowledge that the industry has been undergoing a slow but steady transformation over the past year. As AI models became increasingly capable of handling complex tasks autonomously, their integration into daily workflows deepened. While developers at leading tech companies haven’t stopped building software, they’ve evolved into "directors of AI systems" that do the actual typing. The fundamental skill has transformed from meticulous line-by-line coding to architecting comprehensive solutions, providing clear specifications, critically evaluating AI-generated output, and guiding the AI tools through iterative refinement. The new models, some argue, mainly "burst the bubble" around AI coding by making the general public and those outside the immediate development circles aware of a trend engineers have been experiencing for months, if not longer.

Concrete examples underscore this shift. During its earnings call this week, Spotify co-CEO Gustav Söderström made a striking revelation: the company’s best developers "have not written a single line of code since December." That is not a testament to idleness but to hyper-efficiency. The streaming giant’s internal system leverages advanced AI, including aspects of Claude Code, for remote deployment, letting engineers instruct the AI to fix bugs or add entirely new features with natural-language prompts, often sent through internal channels such as Slack from their phones during the commute. The AI then handles the entire process, merging completed work to production before the engineers even reach the office. Söderström said Spotify shipped over 50 new features in 2025 using these AI-powered workflows, a dramatic increase in development velocity and product iteration speed.
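For readers curious what such a chat-driven workflow might look like under the hood, below is a minimal, hypothetical sketch of a Slack slash-command webhook that forwards a natural-language request to a coding agent. The /slack/command route, the dispatch_coding_agent helper, and the repository name are assumptions for illustration; Spotify’s internal system is not public and almost certainly works differently.

```python
# Hypothetical sketch of a chat-to-deployment loop: a webhook receives a
# natural-language request (e.g. from a Slack slash command), hands it to a
# coding agent, and replies immediately while the agent works. Endpoint name,
# dispatch_coding_agent, and payload fields are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)

def dispatch_coding_agent(instruction: str, repo: str) -> dict:
    # Placeholder: a real implementation would start an agent run that edits
    # the repo, runs the test suite, and opens/merges a change if checks pass.
    return {"repo": repo, "status": "queued", "instruction": instruction}

@app.route("/slack/command", methods=["POST"])
def handle_command():
    # Slack slash commands post form-encoded fields; "text" carries the prompt.
    instruction = request.form.get("text", "")
    job = dispatch_coding_agent(instruction, repo="internal/dashboard")
    # Respond right away so the engineer can keep commuting; the agent would
    # post a follow-up message once the work is merged.
    return jsonify({"response_type": "ephemeral",
                    "text": f"Working on it: {job['instruction']}"})

if __name__ == "__main__":
    app.run(port=3000)
```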

The reliance on AI for code generation is even more pronounced within the very companies building these tools. Boris Cherny, head of Claude Code at Anthropic, admitted earlier this month that he hasn’t written code in over two months, a powerful statement from a leader in the field. Anthropic had previously told Fortune that between 70% and 90% of the company’s internal code is now AI-generated, highlighting an almost complete reliance on their own intelligent systems for their core product development. This internal adoption serves as both a proof-of-concept and a competitive advantage, allowing these companies to iterate faster and allocate human talent to higher-level strategic challenges.

Perhaps the most fascinating development is the models themselves reaching a recursive milestone: they are now materially helping to build more advanced iterations of themselves. OpenAI explicitly stated that GPT-5.3-Codex "is our first model that was instrumental in creating itself," marking a significant, potentially exponential, shift in how AI development works. This self-improvement loop could accelerate progress at an unprecedented rate, reducing human bottlenecks in the most complex AI research and engineering. Similarly, Anthropic’s Cherny shared that his team built Claude Cowork—a non-technical version of Claude Code designed for sophisticated file management and operational tasks—in approximately a week and a half, largely using Claude Code itself. This demonstrates not just efficiency in code generation but also the ability of these models to handle complex project specifications and deliver functional software from high-level instructions. Cherny further revealed that about 90% of Claude Code’s own underlying code is now written by Claude Code, illustrating a powerful self-sustaining development ecosystem.

Despite these incredible productivity gains and the promise of a more efficient future, some developers are also sounding alarms about the potential for burnout. Steve Yegge, a veteran engineer with decades of experience at tech giants like Google and Amazon, warned that AI tools, while powerful, were paradoxically draining developers through overwork. In a widely shared blogpost, Yegge vividly described falling asleep suddenly after long AI-assisted coding sessions and noted colleagues considering installing nap pods at their office, a stark indicator of extreme fatigue. He argues that the addictive nature of AI coding tools, which offer instant gratification and a feeling of superhuman productivity, is pushing developers to take on unsustainable workloads. "With a 10x boost, if you give an engineer Claude Code, then once they’re fluent, their work stream will produce nine additional engineers’ worth of value," he wrote. However, this productivity comes at a cost: "building things with AI takes a lot of human energy." Yegge’s "AI Vampire" metaphor suggests that while the tools provide immense leverage, they also demand a constant, intense cognitive engagement from the human operator, transforming the work into a high-pressure, mentally exhausting endeavor. The challenge, therefore, is not merely adapting to new tools but understanding how to integrate them sustainably, ensuring that the promise of increased productivity doesn’t lead to an epidemic of developer exhaustion. The future of coding may not be dead, but it is undeniably transforming, demanding a new breed of human-AI collaboration that prioritizes both efficiency and human well-being.
