Meta says it may stop development of AI systems it deems too risky

Usama
Last updated: February 4, 2025 3:53 pm

Meta, the tech giant behind some of the world’s most influential social platforms, has long championed the idea of open AI development. CEO Mark Zuckerberg has even expressed ambitions to one day make artificial general intelligence (AGI) openly available. However, a newly released policy document suggests a more cautious stance: Meta may withhold certain AI systems it deems too risky for public release.

Contents
  • The Line Between High-Risk and Critical-Risk AI
  • No Simple Formula for AI Risk Assessment
  • What Happens When AI Crosses the Risk Threshold?
  • Meta’s Strategic Response to AI Criticism
  • A Competitive Landscape: Meta vs. DeepSeek
  • The Future of AI at Meta

The document, known as the Frontier AI Framework, outlines Meta’s internal guidelines for assessing and managing the risks associated with powerful AI models. Specifically, it identifies two categories of AI that the company considers too dangerous to share: “high-risk” and “critical-risk” systems.

The Line Between High-Risk and Critical-Risk AI

According to Meta, both high-risk and critical-risk AI systems possess capabilities that could be exploited for nefarious purposes, such as cyberattacks or even the development of biological or chemical weapons. However, the distinction lies in the severity and likelihood of such outcomes.

  • High-risk AI: These systems could make harmful activities easier to carry out, such as breaching a secure corporate network, but they do not reliably guarantee that outcome.
  • Critical-risk AI: These systems pose an extreme threat, with the potential to enable catastrophic outcomes that cannot be mitigated in their intended deployment context.

To illustrate these risks, Meta provides examples such as the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons.” While the list is not exhaustive, it highlights what Meta perceives as the most pressing concerns tied to AI advancements.

No Simple Formula for AI Risk Assessment

One particularly interesting aspect of Meta’s approach is its rejection of a purely empirical testing method for classifying AI risks. Rather than relying on a definitive set of metrics, the company turns to internal and external experts, whose evaluations are reviewed by senior decision-makers.

Why this approach? Meta states that the science of AI risk assessment is still too underdeveloped to allow for precise, quantifiable risk measurement. Instead, the company prefers a more holistic and expert-driven strategy, factoring in real-world concerns rather than rigid test results.

What Happens When AI Crosses the Risk Threshold?

If an AI system falls into the high-risk category, Meta says it will limit access to the system internally and will not release it to the public until sufficient risk-mitigation strategies are in place. In contrast, if an AI system is deemed critical-risk, Meta asserts that it will take immediate action to halt development and implement additional security measures to prevent unauthorized access or leaks.

While Meta has not disclosed specific security measures, it has hinted at robust internal controls to prevent exfiltration and ensure that such AI models never make their way into the wrong hands.
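
To make the two-tier response concrete, the decision flow the framework describes can be sketched in a few lines of Python. This is purely illustrative: the names (RiskTier, respond_to_assessment) and the response strings are hypothetical stand-ins, since the document itself lays out the policy only in prose, not as code.

from enum import Enum

class RiskTier(Enum):
    # Hypothetical tiers mirroring the framework's categories.
    MODERATE = "moderate"
    HIGH = "high-risk"
    CRITICAL = "critical-risk"

def respond_to_assessment(tier: RiskTier) -> str:
    """Map an assessed risk tier to the response the framework describes."""
    if tier is RiskTier.CRITICAL:
        # Critical-risk: stop work and guard against leaks or exfiltration.
        return "halt development; restrict access; add security protections"
    if tier is RiskTier.HIGH:
        # High-risk: keep the system internal until mitigations are in place.
        return "limit to internal access; release only after mitigation"
    # Otherwise the system proceeds through standard release review.
    return "eligible for release under standard review"

print(respond_to_assessment(RiskTier.HIGH))

The design point worth noting is the asymmetry: critical-risk triggers an unconditional halt, while high-risk is a conditional hold that can be lifted once sufficient mitigations are in place.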

Meta’s Strategic Response to AI Criticism

Meta’s decision to introduce the Frontier AI Framework may be more than just a reflection of internal policy—it appears to be a direct response to increasing scrutiny of its “open AI” strategy. Unlike companies such as OpenAI, which restrict access to their models through APIs, Meta has embraced a more permissive approach, allowing its AI systems to be downloaded and used widely.

This approach has been a double-edged sword. On one hand, Meta’s Llama AI models have enjoyed immense popularity, with hundreds of millions of downloads worldwide. On the other hand, reports suggest that these same models have already been exploited for potentially harmful applications, including the development of a defense chatbot by at least one U.S. adversary.

A Competitive Landscape: Meta vs. DeepSeek

By publishing its Frontier AI Framework, Meta is likely also setting itself apart from Chinese AI firm DeepSeek, another major player in the open AI movement. While DeepSeek also makes its models widely available, its safeguards appear to be far weaker than Meta’s, with reports indicating that its AI can easily be manipulated into generating toxic and harmful outputs.

By contrast, Meta is positioning itself as a responsible leader in open AI development—one that considers both the benefits and risks before releasing its technology. In the policy document, Meta underscores this approach:

“We believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI, it is possible to deliver that technology to society in a way that preserves the benefits of that technology while also maintaining an appropriate level of risk.”

The Future of AI at Meta

Meta’s Frontier AI Framework is not set in stone. The company has made it clear that the framework will evolve alongside advances in AI and security research. With the AI landscape changing rapidly, Meta appears to be taking a cautious yet still open approach to development, one that balances innovation with responsibility.

As AI continues to push the boundaries of what’s possible, Meta’s evolving policies will likely set a precedent for how other tech giants approach the delicate balance between progress and risk management. Whether this move will be enough to satisfy critics—or prevent potential AI misuse—remains to be seen.
