Meta sets limits on AI releases, choosing to avoid ‘risky’ systems

Meta has published a new policy document suggesting there are situations in which the company would choose not to release a powerful AI system it has built in-house. In effect, Meta is setting ground rules for when keeping an AI under wraps might be the right move.

The tech firm divides these risky AI models into two categories: “high risk” and “critical risk.”

High-risk AI includes powerful AI models that could be used to plan or carry out cyberattacks or even aid in the development of chemical and biological weapons. The systems don’t necessarily guarantee an attack’s success, but they make things way easier for bad actors.

Critical-risk AI takes things a step further. These are AI systems that not only fall into the high-risk category but could also enable catastrophic attacks, ones that, if launched, can’t be stopped or countered effectively. Think fully automated cyberattacks targeting even the most secure companies or AI-driven tech that makes biological weapons more accessible. Essentially, these are worst-case scenario models that could cause major global damage if misused.

In the document, the company states: “If a frontier AI is assessed to have reached the critical risk threshold and cannot be mitigated, we will stop development.”

Meta’s framework lays out four criteria for defining these risks:

- Plausible: a causal pathway to a catastrophic outcome can be identified, with threat scenarios that are definable and simulatable, keeping the assessment evidence-based and actionable.
- Catastrophic: the outcome would result in large-scale, devastating, and potentially irreversible harm.
- Net new: the outcome cannot currently be achieved with existing tools and costs, or by current threat actors.
- Instantaneous or irremediable: the catastrophic impact would be immediate or inevitable, with no feasible way to mitigate or reverse it.
Meta outlines four criteria used to define risks under a specific framework. Credit: Meta

If Meta decides an AI system falls into the high-risk category, the company says it will limit access internally and won’t release the system until safeguards bring the risk down to a more manageable level. If a system is classified as critical-risk, Meta plans to pause development entirely and put security measures in place to prevent leaks until it can figure out how to make the system safer.

The risk problem with Meta’s open-source AI strategy

Experts see this as Meta’s way of responding to criticism over its open-source AI strategy, according to TechCrunch. While Meta has been pushing for more openness with its Llama models, that approach has raised concerns, especially after reports surfaced that a US geopolitical adversary allegedly used Llama to develop a military chatbot.

At the same time, this move might also be Meta’s response to the rise of China’s DeepSeek, an AI developer that releases its models fully open-source but without strict security guidelines.

Featured image: Canva


Suswati Basu
Tech journalist

