Former OpenAI Researcher Who Predicted There's A 50% Chance AI Could Kill Us All Is Now Heading US AI Safety Institute


U.S. Secretary of Commerce Gina Raimondo announced that former OpenAI researcher Paul Christiano will head the U.S. AI Safety Institute, a division of the National Institute of Standards and Technology (NIST).

What Happened: Christiano is best known for his work at OpenAI on reinforcement learning from human feedback (RLHF), a foundational AI safety technique.

However, he has also been vocal about the potential dangers of AI development, estimating as high as a 50% chance that it leads to a catastrophic outcome for humanity.

Christiano’s appointment has reportedly met resistance inside NIST. A VentureBeat report last month said some staffers opposed the decision, with a few scientists threatening to resign.

In an earlier appearance on the Bankless podcast, Christiano estimated a 10% to 20% chance that an AI takeover could leave "many [or] most humans dead."


He added, however, that once AI reaches the "human level," the risk increases substantially.

"Overall, maybe you’re getting more up to a 50/50 chance of doom shortly after you have AI systems that are human level," he said.


Despite this, Christiano’s appointment aligns with NIST’s mission to advance science and promote U.S. innovation. As head of the institute, Christiano will be responsible for monitoring and mitigating current and emerging AI risks.

Why It Matters: Christiano’s appointment comes at a time when the potential risks of AI are a topic of global concern. His views on the potential dangers of AI development are not unique.

Other prominent figures, such as tech billionaire and xAI founder Elon Musk, have also expressed concerns about the potential threats AI could pose to humanity.


Christiano’s appointment also raises questions about the role of AI safety institutes in addressing these concerns. Critics argue that focusing on hypothetical existential AI risks could divert attention from current ethical issues related to AI, such as environmental, privacy, and bias concerns.

Despite the controversy, Christiano’s appointment reflects the growing importance of AI safety in the U.S.



Disclaimer: This content was partially produced with the help of Benzinga Neuro and was reviewed and published by Benzinga editors.

