This emerging narrative from the titans of the tech industry presents a fascinating, even paradoxical, vision of the future workforce. As artificial intelligence rapidly automates tasks across industries, fueling widespread job displacement and anxiety about employment security, a select group of influential leaders is spotlighting an intrinsically human, often elusive, quality: taste. This isn't merely about aesthetics; it's about discerning judgment, strategic foresight, and an intuitive understanding of what truly matters in a world increasingly capable of generating infinite possibilities.
The conversation gained significant traction with Sam Altman, CEO of OpenAI, the company at the forefront of the generative AI revolution. Just a day before announcing a monumental $110 billion funding round for OpenAI, Altman took to X (formerly Twitter) to offer insights into how even non-technical individuals could contribute meaningfully to the burgeoning field of AI development, particularly within his organization. His advice centered on leveraging a uniquely human attribute that AI, for now, struggles to replicate: judgment.
Altman specifically highlighted "research recruiting" as a prime pathway for non-technical candidates. "We believe the best research teams are built through context, taste and a real feel for where the field is headed next," he asserted. He implied that individuals with discerning "taste" are crucial for such roles because their responsibilities at OpenAI extend beyond simply "filling roles" to "finding people who will move the frontier forward." This suggests a demand for individuals who possess an innate ability to recognize potential, to identify the groundbreaking amidst the mundane, and to intuit the future trajectory of a complex and rapidly evolving domain.
Altman is not alone in this sentiment. He represents a growing chorus of high-profile executives pointing to "taste" as a potential differentiator for job seekers grappling with the increasing prevalence of AI and the associated "AI job anxiety." Just last week, OpenAI president Greg Brockman echoed his CEO’s views, unequivocally stating in an X post, "Taste is a new core skill." This collective emphasis from the leadership of one of the world’s most influential AI companies underscores a potentially significant shift in how human value is perceived in an AI-driven economy.
Further lending weight to this perspective is Y Combinator co-founder Paul Graham, a revered figure in the startup and tech ecosystem. Graham, known for his incisive essays on technology, economics, and entrepreneurship, was arguably one of the earliest proponents of the importance of "taste." In a seminal 2002 essay, he argued that "taste" is not objective but rather a cultivated sensibility, and crucially, that "we need good taste to make good things." Two decades later, his insights feel more prescient than ever.
Earlier this month, Graham expanded on his original thesis in an X post, predicting, "In the AI age, taste will become even more important. When anyone can make anything, the big differentiator is what you choose to make." This statement encapsulates the core challenge and opportunity presented by advanced AI. As AI tools democratize creation, allowing individuals to generate text, images, code, and even entire systems with unprecedented ease and speed, the bottleneck shifts from execution to conception and discernment. The ability to identify truly valuable ideas, to curate effective solutions from a sea of AI-generated possibilities, and to guide creative processes with a clear vision becomes paramount.
This view resonated strongly across the tech community. Dane Knecht, Chief Technology Officer at Cloudflare, a global leader in internet security and performance, publicly agreed with Graham. In a reply on X, Knecht reiterated a prediction he had made earlier in the year, stating that "taste" would be the primary differentiator in engineering by 2026. "Building is easy now. Knowing what to build, and what not to, is the hard part," Knecht added. This perspective highlights a critical pivot in the engineering profession: from the mastery of technical implementation to the strategic acumen of problem identification and solution design. In an era where AI can write code, the value lies in defining the right problem to solve and envisioning the optimal solution, a task that demands a sophisticated blend of technical understanding, market intuition, and, indeed, taste.
However, not everyone is convinced that "taste" and "judgment" will remain exclusively human domains. Matt Shumer, co-founder and CEO of OthersideAI, offers a compelling counter-argument. In his widely discussed viral essay on the future of AI earlier this month, Shumer recounted an experience with OpenAI’s GPT-5.3 Codex model that, to him, "felt, for the first time, like judgment. Like taste." This observation challenges the very premise that these qualities are inherently beyond AI’s grasp.
Shumer further articulated his skepticism in a subsequent X post: "I don’t see why ‘taste’ and direction are uniquely human, like many people say. If an AI can train on it, it can learn it." This perspective opens a profound debate about the nature of taste itself. Is it an irreducible human characteristic, born from consciousness, empathy, and lived experience? Or is it a highly complex pattern recognition task, one that can eventually be mastered by sufficiently advanced AI models trained on vast datasets of human preferences, cultural artifacts, and successful innovations? If taste can be quantified, categorized, and learned through exposure, then Shumer’s argument suggests that AI may eventually develop its own form of "good taste," potentially blurring the lines between human intuition and algorithmic discernment.
This ongoing conversation about "taste" is particularly salient given the pervasive anxiety surrounding the future of AI and its profound implications for the global job market. Warnings from AI leaders such as Anthropic CEO Dario Amodei have highlighted widespread concern about AI's potential to displace workers across various sectors. The fear is not unfounded, as evidenced by recent announcements from major corporations.
On Thursday, Jack Dorsey, CEO of Block (formerly Square), revealed the company was laying off 4,000 of its more than 10,000 employees, explicitly citing AI as a contributing factor. Block has developed an internal AI agent named "Goose," which can leverage various AI models and seamlessly integrate with computer files, cloud storage, and online databases. Wired reported that this tool is already empowering both programmers and non-programmers within the company to accelerate idea development and prototype creation.
Dorsey’s statement on X underscored the transformative impact of AI on organizational structures: "We’re already seeing that the intelligence tools we’re creating and using, paired with smaller and flatter teams, are enabling a new way of working which fundamentally changes what it means to build and run a company. And that’s accelerating rapidly." This candid admission from a prominent tech leader illustrates the tangible effects of AI on workforce reduction and the restructuring of operational models.
The paradox here is stark: executives are simultaneously championing AI for its efficiency gains, which lead to layoffs, while advising the remaining workforce (or aspiring entrants) to cultivate a quality, "taste," that they claim is uniquely human. This raises critical questions about the future of work, the nature of human capital, and the widening chasm between those who design and deploy AI and those whose livelihoods are impacted by it.
What does "taste" truly encompass in a professional context? It is likely a multifaceted skill that spans:
- Strategic Acumen: The ability to discern which problems are worth solving and which opportunities hold the most potential.
- Aesthetic Judgment: An intuitive understanding of design, user experience, and overall quality that resonates with human preferences.
- Cultural Nuance: The capacity to understand unspoken social cues, market trends, and contextual relevance.
- Ethical Discernment: The judgment to navigate complex ethical dilemmas and ensure AI development aligns with societal values.
- Curatorial Expertise: The skill to select, refine, and present the best outputs from a vast array of AI-generated content.
Cultivating "taste" in this expanded sense requires not just experience but also critical thinking, empathy, and a deep engagement with diverse fields of knowledge. It implies a shift from valuing purely technical execution to valuing higher-order cognitive functions that guide and refine AI’s immense capabilities.
The "taste" argument, therefore, can be viewed in several ways. It could be a genuine insight into the evolving demands of a technologically advanced workplace, where human creativity and discernment become increasingly valuable. Alternatively, it might be a strategic narrative to reassure a nervous workforce, offering a seemingly unassailable human attribute as a shield against automation. Regardless of its underlying motivation, it signals a critical juncture in the human-AI partnership. As AI takes over repetitive and even complex tasks, the unique contributions of humans will increasingly lie in defining purpose, setting direction, and imbuing creations with a touch of the ineffable—the "taste" that makes something not just functional, but truly resonant and valuable. The challenge for individuals and educational systems will be to identify, cultivate, and valorize these inherently human skills in an accelerating AI age.