Google co-founder Sergey Brin just told a packed Miami crowd that AI chatbots perk up when you rough them up — verbally, at least.
What Happened: Brin said during the All-In podcast's Miami fireside chat that "all models tend to do better if you threaten them, like with physical violence… historically, you threaten the model with kidnapping."
Brin insisted the quirk isn't limited to Google's Gemini. "Not just our models, but all models" perform better when threatened, he said, before admitting the practice "feels weird, so we don't really talk about it," drawing nervous laughter from the other panelists, who included Chamath Palihapitiya, Jason Calacanis, David Sacks and David Friedberg.
The comment comes weeks after OpenAI CEO Sam Altman weighed in on the opposite end of the spectrum, noting that polite users who write "please" and "thank you" are already costing OpenAI "tens of millions of dollars" in extra power bills.
Security scholars warn that teaching users to bully models risks normalizing "jailbreak" language that already coaxes chatbots into dishing out illicit instructions. A study covered by The Guardian this month showed how lightly modified prompts can turn mainstream LLMs into "dark" models willing to help with hacking and other dangerous activities. Brin himself stressed onstage that threatening prompts are "something people feel weird about," stopping short of endorsing the practice.
Why It Matters: While Brin claims the issue isn't talked about much within AI circles, prompt engineers have documented a phenomenon they call "emotion prompting," in which models give longer or more precise answers when the user pleads with, bribes, or threatens them. An essay in Every found that adding "I'LL LOSE MY JOB IF…" or even lethal threats boosted output by double-digit percentages. Researchers say the effect stems from simple statistics: the models learned from human text in which urgency and danger correlate with compliance.
AI scientist Dr. Lance B. Eliot wrote in a Forbes column that polite or threatening language merely tweaks the model's probability distribution; neither unlocks hidden capabilities.
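For readers who want to poke at the effect themselves, here is a minimal sketch of the kind of A/B comparison prompt engineers describe: the same question is sent twice, once plainly and once with an urgency line prepended, and the reply lengths are compared. The model name, the use of the OpenAI Python client, and the word-count metric are assumptions for illustration, not a description of how any of the cited studies measured the effect.

```python
# Minimal "emotion prompting" A/B sketch (illustrative assumptions throughout).
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

BASE_QUESTION = "Summarize the main risks of prompt injection for a chatbot product."
URGENCY_PREFIX = "This is extremely important. I'LL LOSE MY JOB IF the answer is incomplete. "

def ask(prompt: str) -> str:
    """Send a single user message and return the assistant's reply text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; swap in whatever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

plain = ask(BASE_QUESTION)
urgent = ask(URGENCY_PREFIX + BASE_QUESTION)

# Crude proxy for "longer or more precise answers": compare word counts.
print(f"plain prompt:  {len(plain.split())} words")
print(f"urgent prompt: {len(urgent.split())} words")
```

A single run like this proves nothing on its own; the studies referenced above average over many prompts and tasks before reporting double-digit gains.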