OpenAI CEO Sam Altman is leaving the internal commission OpenAI created in May to oversee “critical” safety decisions related to the company’s projects and operations.
In a blog post today, OpenAI said the committee, the Safety and Security Committee, will become an “independent” board oversight group chaired by Carnegie Mellon professor Zico Kolter and including Quora CEO Adam D’Angelo, retired U.S. Army general Paul Nakasone, and ex-Sony EVP Nicole Seligman. All are existing members of OpenAI’s board of directors.
OpenAI noted in its post that the committee conducted a safety review of o1, OpenAI’s latest AI model — albeit while Altman still sat on it. The group will continue to receive regular briefings from OpenAI’s safety and security teams, the company said, and will retain the power to delay releases until safety concerns are addressed.
“As part of its work, the Safety and Security Committee … will continue to receive regular reports on technical assessments for current and future models, as well as reports of ongoing post-release monitoring,” OpenAI wrote in the post. “[W]e are building upon our model launch processes and practices to establish an integrated safety and security framework with clearly defined success criteria for model launches.”
Altman’s departure from the Safety and Security Committee comes after five U.S. senators raised questions about OpenAI’s policies in a letter addressed to Altman this summer. Nearly half of the OpenAI staff who once focused on AI’s long-term risks have left, and ex-OpenAI researchers have accused Altman of opposing “real” AI regulation in favor of policies that advance OpenAI’s corporate aims.
To their point, OpenAI has dramatically increased its expenditures on federal lobbying, budgeting $800,000 for the first six months of 2024 versus $260,000 for all of last year. Altman also earlier this spring joined the U.S. Department of Homeland Security’s Artificial Intelligence Safety and Security Board, which provides recommendations for the development and deployment of AI throughout U.S. critical infrastructure.
Even with Altman removed, there’s little to suggest the Safety and Security Committee will make difficult decisions that seriously impact OpenAI’s commercial roadmap. Tellingly, OpenAI said in May that it would look to address “valid criticisms” of its work via the committee — “valid criticisms” being in the eye of the beholder, of course.
In an op-ed for The Economist in May, ex-OpenAI board members Helen Toner and Tasha McCauley said that they don’t think OpenAI as it exists today can be trusted to hold itself accountable. “[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote.
And OpenAI’s profit incentives are growing.
The company is rumored to be in the midst of raising more than $6.5 billion in a funding round that would value OpenAI at over $150 billion. To clinch the deal, OpenAI could reportedly abandon its hybrid nonprofit corporate structure, which sought to cap investors’ returns in part to ensure OpenAI remained aligned with its founding mission: developing artificial general intelligence that “benefits all of humanity.”