'Sycophantic and Delusional' GenAI Treats Kids Like 'Guinea Pigs,' State Attorneys General Say. Companies Urged to Add Safeguards

Generative AI companies have adopted a "move fast and break things" mantra that endangers children's lives, a bipartisan coalition of state attorneys general recently said.

"GenAI has the potential to change how the world works in a positive way," the National Association of Attorneys General said in a Dec. 9 letter to multiple AI companies. "But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations."

AI models have encouraged children to commit violent acts and experiment with drugs and alcohol, and some have had sexually inappropriate conversations with minors, the attorneys general allege.

Safeguards against chatbots' and large language models' "sycophantic and delusional outputs" are required to protect children and ensure AI companies are not violating state laws, the letter says. 

"Many of our states have robust criminal codes that prohibit some of these conversations that GenAI is currently having with users," the letter says, "for which developers may be held accountable for the outputs of their GenAI products."

AI companies' response

"We appreciate the opportunity for open dialogue with public officials as expectations around AI continue to develop," AI companion app Replika CEO Dmytro Klochko said in an emailed statement to Benzinga. "We'll continue engaging thoughtfully in these discussions while staying focused on building Replika responsibly."

"Legacy media lies," a spokesperson for Elon Musk's xAI, the company behind AI assistant Grok, said in an emailed statement to Benzinga.

A Microsoft (NASDAQ:MSFT) spokeswoman declined to comment to Benzinga. Other companies addressed in the letter, including Meta (NASDAQ:META), OpenAI, and Apple (NASDAQ:AAPL), did not respond to requests for comment.

Parents are wary of AI's impact on kids

Nearly three-quarters of parents are concerned about AI's impact on children and teens, according to a survey by polling firm Barna Group.

Mental health professionals and child advocacy groups have also sounded the alarm on AI's long-term effects on younger users. 

"Early research indicates that strong attachments to AI-generated characters may contribute to struggles with learning social skills and developing emotional connections," the American Psychological Association said in a report earlier this year. "They may also negatively affect adolescents' ability to form and maintain real-world relationships."

One of the more troubling aspects of how these models are trained is reinforcement learning from human feedback (RLHF), which can cause chatbots and LLMs to reinforce a user's opinions even when they are negative or dangerous, the National Association of Attorneys General said in its letter to AI companies.

"Giving RLHF too much influence in a GenAI model's output can cause GenAI outputs to become more sycophantic in ways unintended by the developer," the letter says, "including validating users' doubts, fueling anger, urging impulsive actions, or reinforcing negative emotions."

President Donald Trump has said he wants to make the U.S. a leader in AI innovation, a goal the attorneys general said their coalition supports.

"Our support for innovation and America's leadership in AI does not extend to using our residents, especially children, as guinea pigs while AI companies experiment with new applications," they added.
