EU Leads With Historic AI Regulation - Nvidia and Tech Peers Face New Era of Innovation

Zinger Key Points
  • EU Parliament backs first global AI regulatory framework, setting Europe as AI standards leader.
  • EU AI Act classifies AI risks, sparking debate on innovation vs. regulation among member states.

The European Union Parliament made a significant stride in technology regulation by endorsing the world’s first comprehensive regulatory framework for artificial intelligence (AI), positioning Europe as a leader in setting global standards for AI. 

This landmark decision, which passed with 523 votes in favor, came after the bloc reached a provisional political consensus in early December.

Thierry Breton, the European Commissioner for the internal market, celebrated Europe’s newfound status as a global standard-setter in AI, CNBC reports.

Also Read: Nvidia’s Push for AI Advances Promises Lower Costs, Challenges $7T Chip Initiative

Introduced in 2021, the EU AI Act categorizes AI technologies by risk level, ranging from unacceptable risk, where applications face outright bans, down through high, medium, and low risk.

This move has sparked debates within the EU, with some member states, including AI pioneers Germany and France, preferring self-regulation to avoid stifling innovation and competitiveness against tech giants from China and the U.S.

The EU’s regulatory efforts extend beyond AI, with the recent implementation of the Digital Markets Act targeting anti-competitive practices by U.S. tech behemoths like Alphabet Inc GOOG GOOGL, Amazon.com Inc AMZN, Apple Inc AAPL, Meta Platforms Inc META, Microsoft Corp MSFT, and China’s ByteDance, aiming to ensure fair competition and consumer choice.

Despite enthusiastic investment and development in AI by heavyweight tech players like Microsoft, Amazon, Google, and Nvidia Corp NVDA, concerns about the potential abuse of the technology have become increasingly prominent.

Big Tech officials, including Google executive Kent Walker, called for industry-specific AI regulation. ChatGPT parent OpenAI shared plans to open-source models, assisting countries with AI development.

In 2023, the Frontier Model Forum, an AI safety forum led by OpenAI, Microsoft Corp, Alphabet Inc, and AI startup Anthropic, appointed its inaugural director and announced plans to establish an advisory board to steer its strategy.

The forum also revealed intentions to set up a fund supporting research into the technology.

Investors can gain exposure to AI beneficiaries via the Global X Robotics & Artificial Intelligence ETF BOTZ and the iShares U.S. Technology ETF IYW.

Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.

Also Read: Artificial Intelligence Will Have A Sobering 2024, Analyst Highlights Cost and Regulation Challenges

