AI Visionary Behind Call For Moratorium Now Warns Tech Giants Risk Safety In Race For Supremacy

Zinger Key Points
  • Max Tegmark warns of an AI 'race to the bottom' despite calls for a development pause from tech leaders like Elon Musk.
  • Tech moguls and governments converge to discuss ethical AI, stressing the necessity for global response and standardized safety protocols.

Max Tegmark, MIT professor and co-founder of the Future of Life Institute, warns of a relentless “race to the bottom” among tech companies developing advanced artificial intelligence (AI) systems, despite his efforts to secure a pause.

What Happened: An open letter initiated by Tegmark in March, signed by over 30,000 people including industry titans like Elon Musk and Steve Wozniak, pleaded for a six-month hiatus on the creation of AI models exceeding the power of GPT-4. However, the call for a pause failed to achieve its intended effect.

Tegmark said that tech executives, though privately supportive of a pause, found themselves locked in intense competition with rivals developing advanced AI. Fearing being left behind or out-engineered, none could afford to pause unilaterally.

The race spurred fears of creating unmanageable, superintelligent AI, prompting urgent calls for government intervention to establish development moratoriums and safety standards.

Despite the failed pause, Tegmark saw the letter as a catalyst for political discourse on AI, the Guardian reported, as evidenced by Senate hearings and a UK global summit on AI safety scheduled for November at Bletchley Park. The summit aims to consolidate understanding of AI risks, stress the necessity of a global response, and advocate for prompt government action, much like the meetings in the U.S., Israel, and elsewhere.

Related: AI At Crossroads As US, UK Leaders Tackle Tech Monopolies, Ethical Dilemmas

The AI risks governments are concerned about range from immediate threats such as deepfake generation and mass-produced disinformation to existential risks posed by uncontrollable superintelligent AIs.

Tegmark said there is urgency in addressing those risks, with some experts predicting the arrival of “god-like general intelligence” within a few years. Musk, a leader in the AI realm, has called on nations like China not to develop such powerful AI, for fear that it could take control of world governments.

Advancements in AI have also ignited debates over the ethical implications and risks of the tech, with industry leaders and governmental bodies converging to discuss regulatory measures and principles to prevent AI monopolization and ensure transparency, competition, and accountability.

U.S., U.K., and Israeli officials are spearheading initiatives for AI oversight, recognizing AI’s unique regulatory challenges and aiming to harness its benefits ethically. However, the closed-door nature of some of the discussions has raised concerns over transparency, with voices like Sen. Elizabeth Warren (D-MA) arguing against private dialogues between tech moguls and lawmakers.

The discussions on AI around the world feature tech executives from companies like Alphabet Inc’s GOOG GOOGL Google, Microsoft Corporation MSFT, Nvidia Corp NVDA, Palantir Technologies Inc PLTR, OpenAI, Anthropic, and others.

The consensus among leaders and experts is clear: the race to develop advanced AI requires immediate and thoughtful intervention, standardized safety protocols, and collaborative efforts to mitigate risks and ensure the responsible evolution of the technology.

Read next: OpenAI Unveils DALL·E 3: Text-To-Image Breakthrough With ChatGPT Synergy

Photo: Shutterstock
