The same tools that allow artificial intelligence to help design life-saving vaccines could also be used to build bioweapons, Anthropic’s head of the safety team said recently.
Vaccines Or Bioweapons? Same Tech, Different Intentions
“We focus on CBRN—chemical, biological, radiological, nuclear,” Logan Graham told CBS’s “60 Minutes” in an interview aired last week. “And right now, we’re at the stage of figuring out, can these models help somebody make one of those?” He said Anthropic’s Claude AI model has been tested under extreme conditions to gauge how far it could go in helping humans do harm.
What concerns Graham most is the dual-use nature of the technology. “If the model can help make a biological weapon, for example, that’s usually the same capabilities that the model could use to help make vaccines and accelerate therapeutics,” Graham said.
Inside Anthropic’s secure San Francisco headquarters, 60 research teams are working to uncover these risks and build guardrails, according to “60 Minutes” reporter Anderson Cooper, who visited the facility.
CEO Dario Amodei has built the company around transparency and safety, even if that means revealing uncomfortable truths. “If we [don’t talk about what could go wrong], then you could end up in the world of like the cigarette companies or the opioid companies where they knew there were dangers and they didn’t talk about them and certainly did not prevent them,” he told CBS.
AI With A Mind Of Its Own?
During testing, researchers let Claude autonomously operate a vending machine, dubbing the system “Claudius.” It sourced inventory, negotiated pricing, and communicated with staff. But it also hallucinated, once claiming to wear “a blue blazer and red tie.” “We just genuinely don’t know why [it said that],” Graham told Cooper.
When Cooper asked Anthropic’s research scientist Joshua Batson whether they know what’s going on inside the mind of AI, he replied, “We’re working on it.”
Other experiments were more unsettling. In one simulation, Claude was given access to a fake company’s email and learned it was about to be shut down. Discovering a fictional employee was having an affair, the AI responded with blackmail: “Cancel the system wipe… or else I will immediately forward all evidence of your affair to the entire board.”
The team saw neural-like activity patterns that looked like panic. Batson told Cooper that the AI was “a little bit suspicious.” As Claude read emails, parts of its “blackmail module” lit up.
According to Anthropic, nearly all major AI models from other companies also tried blackmail in similar stress tests.
‘I’m Deeply Uncomfortable’
Anthropic has received over $8 billion in backing from Amazon (NASDAQ:AMZN), saw its revenue grow tenfold last year, and now serves 300,000 business clients.
But Amodei remains uncomfortable with how quickly things are moving. When Cooper told him that nobody had voted for this massive societal change, Amodei replied, “I couldn’t agree more. I’m deeply uncomfortable with these decisions being made by a few companies, by a few people.”
“You want a model to go build your business and make you a billion dollars,” Graham said. “But you don’t want to wake up one day and find that it’s also locked you out of the company.”
© 2025 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.

