Anthropic’s Safeguards Research Team leader has left the company in a dramatic fashion—submitting a resignation letter that reads less like a professional farewell and more like a manifesto. Mrinank Sharma, whose team focused on AI safety and ethical risks, has departed to pursue poetry and what he describes as ‘courageous speech,’ framing his exit as a response to what he calls an ‘interconnected crisis’ threatening humanity.

The letter, posted publicly, stands out not just for its content but for its structure—complete with footnotes referencing philosophical texts and a closing poem. Sharma’s concerns extend beyond AI risks; he frames broader societal challenges as the backdrop for his decision. While the letter dwells on the difficulty of aligning values with action, it also hints at internal pressures within Anthropic to prioritize competitive goals over ethical principles.

Sharma’s departure follows a pattern of high-profile exits from Anthropic, including engineers and AI researchers who have largely remained within the industry. Unlike those earlier departures, however, his resignation is marked by its unusual tone, blending existential dread with a rejection of corporate structures. The letter concludes with a plan to return to the UK, where he intends to ‘become invisible’ for a period before pursuing further studies in poetry.


Industry observers have noted the theatrical quality of Sharma’s departure, particularly given the growing trend of dramatic resignation letters in AI ethics circles. Some have suggested the genre may soon resemble a literary movement, with Sharma’s letter joining a growing collection of reflections on the challenges of working in AI safety. Whether his concerns reflect broader issues within Anthropic or a personal philosophical shift remains unclear, but his exit underscores the tension between ambition and ethics in the rapidly evolving AI landscape.

The resignation also raises questions about the future of Anthropic’s Safeguards Research Team. With Sharma’s departure, the company may face scrutiny over its commitment to AI safety principles, particularly as it navigates competitive pressures in the AI market. His letter, while unconventional, serves as a stark reminder of the complexities involved in balancing innovation with ethical responsibility.

For now, Sharma’s next steps remain abstract—focused on poetry, reflection, and what he describes as ‘the practice of courageous speech.’ Whether this marks a turning point for AI ethics or simply an individual’s philosophical pivot, his resignation adds another layer to the ongoing debate about the direction of AI development and the role of safeguards in shaping its future.