20 Feb 2026, Fri

Mark Zuckerberg’s entourage threatened with contempt for wearing Meta AI glasses into a no-recording courtroom | Fortune

The incident drew the immediate and stern rebuke of Judge Carolyn B. Kuhl, who presides over the high-stakes case. With palpable gravity, Judge Kuhl threatened to hold those wearing the devices in contempt of court, a serious charge that can carry fines or even imprisonment. "If you have done that, you must delete that, or you will be held in contempt of the court," Kuhl declared, her voice underscoring the severity of the violation. "This is very serious." The judge’s intervention highlighted the fundamental principle of courtroom decorum and the sanctity of judicial proceedings, where unauthorized recording is strictly prohibited to maintain fairness, protect witness privacy, and prevent the dissemination of potentially unverified or manipulated content.

The individuals identified as wearing the controversial eyewear were Andrea Besmehn, Zuckerberg’s executive assistant, and another unnamed man, both of whom were observed sporting the Meta glasses as they entered the Los Angeles courthouse. The Ray-Ban Meta Smart Glasses, a collaboration between Meta and Luxottica (Ray-Ban’s parent company), are designed to look like regular eyeglasses but feature integrated cameras, microphones, and speakers, allowing users to capture photos, record videos, listen to audio, and make calls hands-free. While marketed as a tool for capturing life’s moments, their recording capabilities present a clear conflict with stringent courtroom rules designed to ensure a controlled and confidential environment. The irony of Meta, a company at the epicenter of privacy and data concerns, having its own employees breach courtroom rules with its proprietary recording devices was not lost on observers, briefly overshadowing the substantive legal battle at hand.

At the core of this landmark trial, which sees Meta (the parent company of Facebook and Instagram) and Google (owner of YouTube) as the remaining defendants after TikTok and Snap settled prior to proceedings, is a profound and increasingly urgent question: whether social media companies deliberately designed their platforms to exploit human psychology and foster addiction among young people. The plaintiff, a 20-year-old identified only by the initials “KGM” or “Kaley,” alleges that her prolonged and intensive use of these platforms led directly to severe mental health issues, including anxiety, depression, and eating disorders. Her case is not isolated; its outcome could serve as a bellwether, potentially influencing thousands of similar lawsuits filed across the United States. These cases often leverage legal theories ranging from product liability and negligence to consumer protection, arguing that social media platforms are "defective" by design due to their alleged addictive qualities and the harm they inflict on young users.

The trial is a significant moment in the broader public discourse surrounding social media and youth mental health. Numerous reports, including those from the U.S. Surgeon General, have warned about the potential detrimental effects of excessive social media use on adolescents, citing links to poor body image, sleep disturbances, cyberbullying, and exacerbation of mental health conditions. Critics argue that features like infinite scroll, algorithmic recommendations, push notifications, and engagement-driven metrics are engineered to maximize screen time, thereby fostering dependency, particularly in developing brains.

Zuckerberg Admits to Trouble with Public Appearances

Beyond the initial product placement kerfuffle, Zuckerberg’s testimony itself provided moments of unexpected candor and sharp exchanges. The plaintiff’s lawyer homed in on Zuckerberg’s carefully cultivated public image, presenting an internal document from Meta’s communications staffers. This document, intended for media training, explicitly urged Zuckerberg to appear more "authentic, direct, human, insightful and real" in public, and "not try hard, fake, robotic, corporate or cheesy."

For years, Zuckerberg has been a subject of public fascination and frequent mockery for his often-perceived stiff, awkward, or "robotic" demeanor during public appearances, particularly in high-pressure situations like congressional hearings. Memes and criticisms abound regarding his seemingly uncomfortable body language and sometimes unconvincing attempts at relatability. When confronted with the document, Zuckerberg denied being "coached," dismissing the directives as mere "feedback." In a rare moment of self-deprecating humor, he conceded, "I think I’m actually well-known to be very bad at this," eliciting some laughter in the courtroom. This admission, while seemingly disarming, also underscored the intense scrutiny placed on the public face of Meta, especially when the company is defending itself against accusations of psychological manipulation. It highlighted the tension between presenting a corporate leader as relatable and the strategic management of a public persona under legal fire.

Zuckerberg Doesn’t Think Addiction "Applies Here"

Perhaps one of the most contentious exchanges revolved around the very concept of addiction. When asked by the plaintiff’s lawyer, who appeared to be John C. Lanier, if people tend to use something more if it’s addictive, Zuckerberg responded with a cautious, "I’m not sure what to say to that. I don’t think that applies here." This denial of the applicability of "addiction" to his platforms represents a core defense strategy for Meta, aiming to decouple prolonged usage from pathological dependency.

Lanier then pressed Zuckerberg on a previous statement made during a congressional hearing, where the CEO asserted that Instagram employees are not given goals to increase the amount of time people spend on the platform. Zuckerberg reiterated his pushback against the idea that user time spent on the app was a primary company goal. However, Lanier swiftly countered this claim by presenting internal documents that appeared to directly contradict Zuckerberg’s testimony. These documents, reportedly referenced in earlier testimony from Meta’s Head of Instagram, Adam Mosseri, outlined explicit corporate objectives: actively increasing users’ daily engagement time on Instagram to 40 minutes in 2023 and to 46 minutes in 2026.

This revelation struck at the heart of the plaintiff’s argument – that Meta intentionally designs its platforms to maximize engagement, which critics argue is indistinguishable from fostering addiction, especially when considering the company’s advertising-driven business model. More time spent on the platform means more opportunities for ad impressions, directly translating to higher revenue. Zuckerberg, attempting to reconcile the discrepancy, responded that Instagram "previously had time engagement goals" but had since "moved away from those targets to focus on utility." He explained this shift by positing a "basic assumption" that "if something is valuable, people will use it more because it’s useful to them." This semantic pivot from "time spent" to "utility" is a crucial aspect of Meta’s defense, seeking to reframe prolonged engagement as a natural outcome of providing value rather than a result of manipulative design. However, critics argue that "utility" and "addictive design" are not mutually exclusive and that features maximizing "utility" can still be engineered to keep users hooked.

Questions Over Safety for Young Users

Another significant portion of the plaintiff’s lawyers’ questioning focused on Instagram’s efforts – or perceived lack thereof – to prevent and remove users under the age of 13, who are technically prohibited from having accounts. Zuckerberg maintained that Meta includes age limits in its terms during the sign-up process and that the company makes efforts to remove all identified underage users. He repeatedly shifted responsibility, asserting his belief that companies like Apple and Google, which control the operating systems and app stores through which users access Instagram, are "better suited" to handle robust age verification.

This argument was met with pointed skepticism by the plaintiff’s lawyer. "You expect a 9-year-old to read all of the fine print?" the lawyer reportedly asked Zuckerberg, according to CNBC, highlighting the perceived absurdity of relying on a child’s comprehension of legal terms and conditions. "That’s your basis for swearing under oath that children under 13 are not allowed?" The exchange underscored the immense challenge, and often-criticized inadequacy, of age verification systems across the internet, particularly for platforms heavily used by minors. Critics argue that tech companies often place the burden of responsibility on users or third parties rather than implementing stringent, proactive measures themselves, thereby benefiting from the expanded user base even when it includes underage individuals. Studies have frequently shown that a significant percentage of social media users are in fact under the age of 13, often having gained access by lying about their birthdate, a practice that platforms struggle to prevent, or are accused of not genuinely trying to prevent.

A Meta spokesperson, speaking to The Associated Press, reiterated the company’s official stance, stating that they "strongly disagree with the allegations in the lawsuit" and are "confident the evidence will show our longstanding commitment to supporting young people." This statement reflects the company’s broader defense strategy, emphasizing their efforts in content moderation, parental controls, and mental health resources, rather than conceding to claims of deliberate harm.

The legal challenges for Meta extend beyond this California trial. The company currently faces another consumer protection trial in New Mexico, initiated by the state’s attorney general, which alleges that Meta failed to adequately prevent child sexual exploitation on its platforms. These accumulating lawsuits and ongoing regulatory scrutiny from governments worldwide signal a rapidly changing landscape for social media companies. Lawmakers and regulators are increasingly pushing for stricter age verification, greater transparency in algorithms, and more robust protections for minors online, with proposals like the Kids Online Safety Act (KOSA) in the U.S. gaining traction.

In essence, Zuckerberg’s testimony, punctuated by the bizarre smart glasses incident, served as a microcosm of the larger battle: a tech giant defending its foundational business model and design philosophy against accusations of causing widespread harm, particularly to its youngest users. The outcome of this landmark trial will undoubtedly set a precedent, potentially reshaping the future of social media, influencing corporate accountability, and redefining the responsibility tech companies bear for the well-being of their vast global user base. It underscores a pivotal moment where the digital realm intersects with public health, legal precedent, and ethical corporate governance.
