Meta just released a new policy document, the Frontier AI Framework, which suggests there are situations where the company might choose not to release a powerful AI system it has built in-house. The document sets out ground rules for when keeping an AI model under wraps is the right move.
The company sorts risky AI models into two categories: "high risk" and "critical risk."
High-risk AI covers powerful models that could be used to plan or carry out cyberattacks, or to aid in the development of chemical and biological weapons. These systems don't guarantee an attack's success, but they would make one significantly easier for bad actors to pull off.
Critical-risk AI takes things a step further. These are AI systems that not only fall into the high-risk category but could also enable catastrophic attacks, ones that, if launched, can’t be stopped or countered effectively. Think fully automated cyberattacks targeting even the most secure companies or AI-driven tech that makes biological weapons more accessible. Essentially, these are worst-case scenario models that could cause major global damage if misused.
In the document, the company states: “If a frontier AI is assessed to have reached the critical risk threshold and cannot be mitigated, we will stop development.”
If Meta decides an AI system falls into the high-risk category, it will limit access internally and won't release the system until it can implement safeguards that bring the risk down to a moderate level. If a system is classified as critical-risk, Meta plans to pause development entirely and put security measures in place to prevent leaks until it finds a way to make the system safer.
The risks of Meta's open-source AI strategy
Experts see this as Meta's response to criticism of its open-source AI strategy, according to TechCrunch. While Meta has pushed for openness with its Llama models, that approach has drawn scrutiny, especially after reports surfaced that researchers linked to the Chinese military allegedly used Llama to develop a military chatbot.
At the same time, the move may also be Meta's answer to the rise of China's DeepSeek, an AI developer that releases its models fully open-source but with far fewer safety guardrails.