What's Up With Anthropic?
Updates (or lack thereof) from Anthropic's long-term benefit trust
The race to perfect artificial intelligence is in full force. In January, OpenAI announced a new for-profit structure in an effort to “raise more capital,” following comments from CEO Sam Altman that the company was losing money on its $200 subscription tier. The New York Times reported that OpenAI lost about five billion dollars in 2024. The new structure removes the non-profit board’s control and grants Altman equity, calling into question the motives driving the company’s pursuit of artificial intelligence. Altman frames the change as a critical step for a company that cannot cover its own costs, while critics worry about the erosion of checks on OpenAI. Given the uncertainty and future impact of artificial intelligence, maintaining transparency about these companies’ governance structures is essential.
Anthropic, the creator of rival AI chatbot Claude, was founded in 2021 and established a long-term benefit trust (LTBT) “to address the unique challenges and long-term opportunities” that AI presents. Anthropic’s corporate governance structure deviates from that of other AI companies, stressing safety and societal well-being rather than prioritizing the production of shareholder value. Check out Tiffany’s insight into the key components of Anthropic’s LTBT. So where are we now?
Anthropic was started by seven former OpenAI employees who had a different vision for the “AI wars.” CEO Dario Amodei left OpenAI to prioritize safety and to make his own mark on the AI ecosystem.
In September 2023, we learned some of Amodei’s vision through an announcement detailing the LTBT and Anthropic’s governance structure. Key components include:
Independent Ownership: The LTBT has five trustees with “backgrounds and expertise in AI safety, national security, public policy, and social enterprise” to protect the interests of the public
Board of Directors: The trust holds the authority to elect and remove members of Anthropic’s board of directors
Public Benefit Corporation: Anthropic is organized as a Public Benefit Corporation (PBC), which legally requires the company to weigh impacts on society and the environment alongside stockholder interests
Mission: Overall, the structure serves the long-term mission of developing AI for the benefit of humanity, with safeguards against potential harms to the economy and national security
Since the 2023 announcement, there have been no public updates on the maintenance or enforcement of Anthropic’s LTBT. As AI companies continue to evolve, it’s worth considering how major investments from tech firms might intersect with the company’s long-term governance. Critics have raised concerns about the LTBT’s transparency, citing, for example, the uncertainty surrounding the election of a fifth trustee. Billy Perrigo has similarly noted that the LTBT’s terms can be rewritten by a supermajority of shareholders without the consent of the five trustees. For its part, Anthropic describes this provision as a “failsafe” for fixing existing structural issues, and notes that its different classes of stock carry different voting rights in such decisions.
Anthropic’s dedication to responsibility in the AI field is evident: last October, its Responsible Scaling Policy went into effect, implementing safeguards to mitigate the risk of AI models causing harm. Additionally, its Internal and External Risk Assessments suggest (to me) that the company is doubling down on its commitment to public safety as AI advances. However, greater transparency around governance structures like the LTBT would further strengthen confidence in its long-term commitment to responsible AI development.
More to come (hopefully)!