Yet another researcher from OpenAI has packed their things and left over safety concerns.
In a series of X posts this week, former employee Steven Adler revealed that he had departed from the prominent AI company in mid-November. He said his decision was driven primarily by concerns about the dangers of artificial general intelligence (AGI): AI capable of performing tasks at a human level of intellect.
“Honestly, I’m pretty terrified by the pace of AI development these days,” he said. “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”
Adler described the global race toward AGI as a “very risky gamble” with major downsides.
“Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously,” he added.
Adler is not the only one in San Francisco who fears a Terminator-Skynet or I, Robot scenario. Several other employees of Sam Altman’s company have headed out the door recently over similar fears.
Some personal news: After four years working on safety across @openai, I left in mid-November. It was a wild ride with lots of chapters – dangerous capability evals, agent safety/control, AGI and online identity, etc. – and I'll miss many parts of it.
— Steven Adler (@sjgadler) January 27, 2025
At least 20 have left OpenAI over the past year
Many of them cited similar concerns about the safety of AGI development.
Notable among this sizeable group were Ilya Sutskever, co-founder and former chief scientist; Jan Leike, a safety leader who joined the AI developer Anthropic after his departure; and Daniel Kokotajlo, a former member of the company’s governance team.
“OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board,” Kokotajlo said in an interview in May.
“This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care.”
Concerns like these prompted a group of company insiders to publish an open letter last year, a development covered in an early-June New York Times feature titled “OpenAI Insiders Warn of a ‘Reckless’ Race for Dominance.”
They blew the whistle
OpenAI whistleblowers filed a complaint with the United States Securities and Exchange Commission (SEC) in 2024, claiming that their non-disclosure agreements violated the SEC’s Whistleblower Incentives and Protection rules.
“Given the risks associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement authorities,” the letter stated.
Many other prominent figures have voiced safety concerns as well.
Geoffrey Hinton, often referred to as the “Godfather of AI,” resigned from Google in 2023, citing moral qualms about the technology and regret over his contributions to something that could eventually become dangerous.
Others, including Elon Musk, Bill Gates and Canadian AI pioneer Yoshua Bengio, have likewise expressed fears about the dangers AI could pose.
rowan@mugglehead.com
