On Monday, Governor Gavin Newsom (D-Calif.) signed a landmark law requiring artificial intelligence giants such as OpenAI, Alphabet Inc.'s (NASDAQ: GOOG) (NASDAQ: GOOGL) Google, Meta Platforms, Inc. (NASDAQ: META) and Nvidia Corporation (NASDAQ: NVDA) to disclose how they plan to prevent their most advanced models from causing catastrophic harm.
California Takes Lead On AI Regulation
Newsom described the new law, SB 53, as a critical step in ensuring that AI innovation thrives while protecting public safety.
"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," he said in a press release.
"AI is the new frontier in innovation, and California is not only here for it – but stands strong as a national leader by enacting the first-in-the-nation frontier AI safety legislation," the statement read.
Newsom's office called the law a potential model for the rest of the U.S. If Congress enacts national standards, California lawmakers are expected to align state rules while maintaining the "high bar established by SB 53," noted Reuters.
What The Law Requires
SB 53 applies to AI companies with annual revenues exceeding $500 million.
These firms must conduct public risk assessments, detailing how their technology could spiral out of human control or be misused to create bioweapons.
Violations carry penalties of up to $1 million.
The law comes after Newsom vetoed an earlier bill that sought annual third-party audits of companies investing more than $100 million in AI models.
That proposal faced heavy industry pushback over the potential compliance burden.
Industry Pushes Back On Patchwork Rules
Jack Clark, co-founder of Anthropic, welcomed the move, saying, "Anthropic is proud to have supported this bill."
Sen. Scott Wiener (D-Calif.) supported the bill and took to X, formerly Twitter, to say, "It's an exciting step for responsible scaling of AI innovation."
However, Collin McCune, head of government affairs at Andreessen Horowitz, warned that SB 53 risks creating "a patchwork of 50 compliance regimes that startups don’t have the resources to navigate."
Global Context: AI Rules Take Shape Worldwide
California's law follows similar efforts abroad. The EU's AI Act also imposes strict requirements on high-risk systems, from risk assessments to bias controls.
Meanwhile, China has called for a global body to coordinate AI governance, highlighting the fragmented state of international rules.
Benzinga's Edge Stock Rankings indicate that NVDA continues to trend upward across short, medium and long-term horizons.
Photo Courtesy: Sheila Fitzgerald on Shutterstock.com
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
© 2025 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.