Meta Chief AI Scientist Yann LeCun Slams AI Hype: 'Scaling Systems More' Won't Make Them Smarter – 'We Need AI That Understands The World'

“You cannot just assume that more data and more compute means smarter AI,” said Yann LeCun, Meta's (NASDAQ:META) chief AI scientist, during a keynote on April 28 at the National University of Singapore. 

LeCun pushed back on what he described as the "religion of scaling," arguing that AI progress will stall unless systems are taught to understand the physical world — not just crunch more data.

Why Bigger Isn't Always Smarter

LeCun's criticism challenges the prevailing industry belief that scaling (boosting the number of parameters, the size of the dataset, and the amount of compute) is the surest path to better AI.

"The mistake is that very simple systems, when they work for simple problems, people extrapolate them to think that they’ll work for complex problems," he said. Meta has historically leaned into scaling, but LeCun believes smarter AI won't emerge from brute force alone.

The idea that "model performance depends most strongly on scale," as outlined in the 2020 paper "Scaling Laws for Neural Language Models" by researchers at OpenAI, has guided substantial investments into scaling up model architectures and training data.
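The relationship that paper describes is a power law: predicted loss falls smoothly as model size grows, but with diminishing absolute returns at every order of magnitude. The sketch below illustrates the idea; the constants are approximate values in the spirit of those reported in the paper and should be treated as illustrative, not exact.

```python
# Illustrative sketch of a power-law scaling relationship of the kind
# described in "Scaling Laws for Neural Language Models" (2020).
# The constants below are approximate and for illustration only.

def loss_from_params(n_params: float,
                     n_c: float = 8.8e13,    # critical parameter scale (approx.)
                     alpha_n: float = 0.076  # power-law exponent (approx.)
                     ) -> float:
    """Predicted test loss as a function of model size alone."""
    return (n_c / n_params) ** alpha_n

# Diminishing returns: each 10x jump in parameters shaves off an
# ever-smaller absolute amount of loss.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {loss_from_params(n):.3f}")
```

Each tenfold increase in parameters buys a smaller absolute loss reduction than the last, which is exactly why critics like LeCun argue that scaling alone eventually runs out of road.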

LeCun Says Today's AI Struggles With Ambiguity

LeCun argued that recent breakthroughs appear impressive only because the tasks were relatively easy. 

"When you deal with real-world problems with ambiguity and uncertainty, it's not just about scaling anymore," he said at an event in Singapore. He compared modern large language models to a child’s brain: they're trained on roughly the amount of information stored in the visual cortex of a four-year-old.

As LeCun put it on the Lex Fridman podcast in 2019, the key missing piece is a "world model": a system that can predict how the environment evolves in response to an action, a critical step toward actual reasoning.

"The extra component of a world model is something that can predict how the world is going to evolve as a consequence of an action you might take," he said.

Industry Leaders Are Starting to Shift

LeCun isn't the only prominent AI voice rethinking scale. During the Cerebral Valley AI Summit in November, Alexandr Wang, CEO of Scale AI, called scaling "the biggest question in the industry." Meanwhile, Aidan Gomez, CEO of Cohere, labeled it "the dumbest" way to improve AI models in a February TechCrunch Disrupt session.

There's also a data ceiling problem. According to an Epoch AI study, most high-quality public data has already been used to train existing large models, meaning returns from future scaling may diminish.

Understanding The World Is The Next Big Leap

LeCun's alternative? Teach machines to learn the way humans do: with physical intuition, common sense, and the ability to pick up new tasks quickly.

"We need AI systems that can learn new tasks really quickly," he said last week. "They need to understand the physical world — not just text and language but the real world — have some level of common sense, and abilities to reason and plan, have persistent memory."

Market News and Data brought to you by Benzinga APIs
