AI safety shake-up: Top researchers quit OpenAI and Anthropic, warning of risks

AI researchers are warning against AI products designed for widespread engagement, sometimes at the expense of consumer protections.

In the past week, some of the researchers tasked with building safety guardrails inside the world’s most powerful AI labs publicly walked away, raising fresh questions over whether commercial pressures are beginning to outweigh long-term safety commitments.

At OpenAI, former researcher Zoë Hitzig announced her resignation in a guest essay published Tuesday in The New York Times titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”

Hitzig warned that OpenAI’s reported exploration of advertising inside ChatGPT risks repeating what she views as social media’s central error: optimizing for engagement at scale.

ChatGPT, she wrote, now contains an unprecedented “archive of human candor,” with users sharing everything from medical fears to relationship struggles and career anxieties. Building an advertising business on top of that data, she argued, could create incentives to subtly shape user behavior in ways “we don’t have the tools to understand, let alone prevent.”

“The erosion of OpenAI’s principles to maximize engagement may already be underway,” she wrote, adding that such optimization “can make users feel more dependent on A.I. for support in their lives.”

OpenAI has previously said it is exploring sustainable revenue models as competition intensifies across the AI sector, though the company did not respond to Scripps News’ request for comment.

RELATED NEWS | Amid rising power bills, Anthropic vows to cover costs tied to its data centers

Meanwhile, at Anthropic, the company’s head of Safeguards Research, Mrinank Sharma, also resigned, publishing a letter on X that read in part: “I continuously find myself reckoning with our situation. The world is in peril.”

While Sharma’s note referenced broader existential risks tied to advanced AI systems, he also suggested tension between corporate values and real-world decision-making, writing that it had become difficult to ensure that organizational principles were truly guiding actions.

Anthropic has positioned itself as a safety-first AI lab and was founded by former OpenAI researchers who cited governance concerns in their own departures.

The exits come amid broader turbulence in the AI industry; xAI has faced backlash over outputs from its Grok chatbot, including explicit and antisemitic content generated shortly after product updates.

MORE AI NEWS | Musk’s X and Grok AI hit with raids, fines, and multinational investigations

Taken together, the developments underscore a mounting tension inside AI labs: how to move quickly in a fiercely competitive market while maintaining robust guardrails around systems that are becoming more powerful, and more integrated into everyday life.

The departures also coincide with the release of the 2026 International AI Safety Report, which highlights risks to human autonomy and potential labor market impacts stemming from AI development.