Paul Tudor Jones Warns AI Could 'Kill 50% Of Humanity' Without Proper Oversight: 'We're Creating Something That's Really Dangerous'

Zinger Key Points

Although billionaire hedge fund manager Paul Tudor Jones believes AI can be a force for good, he also issued a stark warning on Tuesday suggesting it has the potential to wipe out 50% of humanity.

What Happened: On Tuesday’s episode of CNBC’s “Squawk Box,” Jones, the founder and chief investment officer of Tudor Investment Corporation, warned of the negative potential of AI, citing takeaways from a technology conference he attended two weeks ago.

Although he acknowledged that he’s “not a tech expert,” he said the conference featured a panel of four of the “leading modelers” behind the top AI models in use today, all of whom agreed that there is at least a small chance AI could cause significant harm to the human race.

There were three big takeaways from the panel, he said. First, AI can be a force for good, and its benefits will start showing up in education and healthcare in the very near future. Second, these AI models are improving their efficiency and performance by 25% to 500% every three or four quarters. Lastly, AI “clearly poses an imminent threat” to humanity.

When asked what they were doing to prepare for the security threat AI poses, one panelist said he was buying 100 acres in the Midwest and raising chickens and cattle. According to Jones, the panelist also said the world is unlikely to take the threat of AI seriously until an “accident” occurs in which “50 to 100 million people die.” He added that no one else on the panel pushed back against that idea.

“Afterwards, we had a breakout session, which was really interesting. All 40 people got up in a room like this and they had a series of propositions and you had to either agree with or disagree with the proposition,” Jones said.

“One of the propositions was there’s a 10% chance in the next 20 years that AI will kill 50% of humanity … the vast majority of the room moved to the disagree side … all four modelers were on the agree side.”

Jones told CNBC that the room then debated the proposition. One of the modelers suggested that it’s possible that someone could “bio hack” a weapon that could take out half of humanity, given how quickly the models are growing and commoditizing knowledge.

“I don’t know, 10% seems reasonable to me,” Jones said.

He emphasized that he’s not an expert in technology, but noted that he’s spent his entire life managing risk.

“We just have to realize, to their credit, all these folks in AI are telling us we’re creating something that’s really dangerous — it’s going to be really great too — but we’re helpless to do anything about it. That’s, to their credit, what they’re telling us, and yet we’re doing nothing right now and it’s really disturbing,” Jones said.

Jones told CNBC that about $250 billion was spent on AI development among the Magnificent Seven tech giants in 2024, while, according to the four modelers at the tech conference, spending on AI security was less than $1 billion.

To mitigate the potential downside of AI, Jones said, the leading AI companies need to dramatically increase spending on AI security, and President Donald Trump needs to tighten regulations on AI development.

“I just want to say one last thing. I’m really concerned about these open-source models and how they are commoditizing and making what were previously indecipherable pockets of knowledge easily accessible,” he said.

“You can have a bad actor like Osama bin Laden take these things with his cult following somewhere down the road, or you can have innocent actors, like hopefully those researchers in the Wuhan laboratory who made a mistake, and they can be real threats to humanity.”
