
AI and Autonomy

OpenAI researcher calls it quits, says he’s ‘terrified’ of artificial intelligence

Steven Adler has been thinking about raising kids and retirement, but wonders if we’ll even make it that far

Yet another researcher from OpenAI has packed their things and left over safety concerns.

In a series of X posts this week, former employee Steven Adler revealed that he had left the prominent AI company at the end of November. His decision was driven primarily by concerns about the dangers of artificial general intelligence (AGI), meaning AI capable of performing tasks with the same level of intellect as humans.

“Honestly, I’m pretty terrified by the pace of AI development these days,” he said. “When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: will humanity even make it to that point?”

Adler described the global race toward AGI as a “very risky gamble” and said it had major downsides.

“Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously,” he added.

Adler is not the only one in San Francisco who fears a Terminator-Skynet or I, Robot sort of scenario in the years to come. Several other employees of Sam Altman's company have headed out the door recently over similar fears.

Read more: Median Technologies and European Investment Bank agree to €37.5M loan

Read more: Why is DeepSeek causing widespread market disruption?

At least 20 have left OpenAI over the past year

Many of them were likewise concerned primarily with the safety of AGI development.

Notable among this sizeable group were Ilya Sutskever, co-founder and former chief scientist; Jan Leike, a safety leader who joined the AI developer Anthropic after his departure; and Daniel Kokotajlo, a former member of the company’s governance team.

“OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board,” Kokotajlo said in an interview in May.

“This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care.”

Concerns like these prompted a group of company insiders to publish an open letter outlining their worries last year. The letter was the subject of a New York Times feature story titled "OpenAI Insiders Warn of a 'Reckless' Race for Dominance" in early June.

Read more: OpenAI co-founder leaves for competitor, Anthropic

They blew the whistle

OpenAI whistleblowers filed a complaint with the United States Securities and Exchange Commission (SEC) in 2024, claiming that the nature of their non-disclosure agreements violated the SEC’s Whistleblower Incentives and Protection rules.

“Given the risks associated with the advancement of AI, there is an urgent need to ensure that employees working on this technology understand that they can raise complaints or address concerns to federal regulatory or law enforcement authorities,” the letter stated.

Many other prominent figures have voiced safety concerns as well.

Geoffrey Hinton, often referred to as the Godfather of AI, resigned from Google in 2023, citing moral qualms about AI technology and regret over his contributions to what he fears may eventually evolve into something dangerous.

Others, including Elon Musk, Bill Gates and Canadian AI pioneer Yoshua Bengio, have likewise expressed fears about the dangers AI could pose.

 


Follow Rowan Dunne on X

rowan@mugglehead.com

