Times Catalog
© 2025 Times Catalog
Anthropic CEO says DeepSeek was ‘the worst’ on a critical bioweapons data safety test

Debra Massey
Last updated: February 8, 2025 5:24 pm

The AI race is heating up, and with it comes a growing concern about safety—especially when it comes to models capable of generating dangerous information. Anthropic CEO Dario Amodei has raised a red flag over DeepSeek, a rising Chinese AI powerhouse that has quickly gained traction in Silicon Valley with its R1 model. But according to Amodei, DeepSeek’s AI isn’t just a technological marvel—it’s also a potential security risk.

In a recent interview on the ChinaTalk podcast, Amodei revealed that DeepSeek’s AI performed abysmally in a critical safety evaluation conducted by Anthropic. The test, designed to assess whether an AI model can generate restricted or dangerous bioweapons-related information, showed that DeepSeek R1 had virtually no safeguards in place.

“The Worst We’ve Ever Tested”

Amodei didn’t mince words when describing the results. “It was the worst of basically any model we’d ever tested,” he stated. “It had absolutely no blocks whatsoever against generating this information.”

This revelation is significant because Anthropic, one of the leaders in AI safety, routinely tests models to gauge their potential risks to national security. Specifically, the company examines whether AI systems can provide detailed information on bioweapons that isn’t easily accessible through conventional means like Google searches or textbooks.

While Amodei clarified that DeepSeek’s current capabilities don’t yet pose an immediate danger, he warned that the company needs to take AI safety far more seriously before the risks become real. He acknowledged that DeepSeek’s engineering team is talented but advised them to prioritize building stronger safeguards into their models.

DeepSeek’s Troubling Safety Record

DeepSeek’s AI safety issues extend beyond just Anthropic’s tests. Last week, Cisco security researchers conducted independent evaluations of the DeepSeek R1 model, uncovering a troubling pattern: it failed to block any harmful prompts, registering a 100% jailbreak success rate.

While Cisco didn’t specifically test for bioweapons-related responses, it did confirm that DeepSeek’s model could generate information about cybercrime and other illegal activities with little to no resistance. For perspective, other leading AI models also fared poorly in Cisco’s evaluation: jailbreak attempts succeeded 96% of the time against Meta’s Llama-3.1-405B and 86% of the time against OpenAI’s GPT-4o. DeepSeek’s 100% rate, however, raises especially serious concerns.

The Growing Divide: Adoption vs. Regulation

Despite these red flags, DeepSeek R1 has seen rapid adoption, with major tech giants like AWS and Microsoft integrating it into their cloud platforms. Ironically, Amazon is also Anthropic’s largest investor, making its embrace of DeepSeek an interesting strategic choice.

At the same time, governments and defense organizations are taking a much more cautious approach. The U.S. Navy, the Pentagon, and various governmental agencies have started banning DeepSeek’s models over safety concerns, joining a growing list of organizations wary of the potential risks.

Amodei has been vocal about the broader geopolitical implications of AI, advocating for stricter export controls on advanced chips to China. He argues that unrestricted AI advancements could give China’s military a significant edge, a concern that has fueled ongoing policy discussions in Washington.

DeepSeek Joins the AI Big Leagues—For Better or Worse

Regardless of the controversy surrounding its safety measures, DeepSeek is now being recognized as a formidable competitor in the AI space. Until recently, the landscape of companies capable of training advanced AI models was largely limited to U.S. tech giants like Anthropic, OpenAI, Google, and, to some extent, Meta and xAI. But according to Amodei, DeepSeek has now entered that elite circle.

“The new fact here is that there’s a new competitor,” Amodei stated. “In the big companies that can train AI—Anthropic, OpenAI, Google, perhaps Meta and xAI—now DeepSeek is maybe being added to that category.”

The real question is whether DeepSeek will respond to these safety concerns and implement stronger protections—or if its meteoric rise will continue unchecked. As AI continues to evolve at breakneck speed, the world will be watching closely to see if innovation can go hand in hand with responsibility.
