Meta CEO Mark Zuckerberg has committed to creating artificial general intelligence (AGI), roughly defined as AI that can accomplish any task a human can. However, a new policy document suggests there are certain scenarios in which Meta may not release a highly capable AI system it has developed internally.
The document, which Meta calls its Frontier AI Framework, identifies two types of AI systems the company considers too risky to release: “high-risk” and “critical-risk” systems.
As Meta defines them, both high-risk and critical-risk systems are capable of aiding in cybersecurity, chemical, and biological attacks. The difference is that critical-risk systems could lead to a catastrophic outcome that “cannot be mitigated in the proposed deployment context.” High-risk systems, by contrast, might make an attack easier to carry out, but not as reliably or dependably as a critical-risk system.
What kind of attacks are we talking about here? Meta gives a few examples, such as the “automated end-to-end compromise of a best-practice-protected corporate-scale environment” and the “proliferation of high-impact biological weapons.” The company admits that the list of possible catastrophes in its document is far from exhaustive, but says it includes those Meta believes to be the most urgent and most likely to arise as a direct result of releasing a powerful AI system.
According to the document, Meta classifies system risk based on input from internal and external researchers, which is subject to review by senior-level decision-makers. Why? Meta says it does not believe the science of evaluation is “robust enough to provide definitive quantitative metrics” for determining a system’s riskiness.
If Meta determines a system is high-risk, the company says it will limit access to the system internally and will not release it until it implements mitigations that reduce the risk to a moderate level. If, on the other hand, a system is deemed critical-risk, Meta says it will implement unspecified security protections to prevent the system from being exfiltrated and will pause development until the system can be made less dangerous.
Meta’s Frontier AI Framework, which the company says will evolve with the changing AI landscape and which it had previously committed to publishing ahead of the French AI Action Summit this month, appears to be a response to criticism of the company’s “open” approach to system development. Meta has embraced a strategy of making its AI technology openly available, albeit not open source by the commonly understood definition, in contrast to companies like OpenAI that choose to gate their systems behind an API.
For Meta, the open-release approach has proven to be both a blessing and a curse. The company’s family of AI models, known as Llama, has racked up hundreds of millions of downloads. But Llama has also reportedly been used by at least one U.S. adversary to develop a defense chatbot.
In publishing its Frontier AI Framework, Meta may also be aiming to contrast its open AI strategy with that of the Chinese AI company DeepSeek. DeepSeek also makes its systems openly available, but its AI has few safeguards and can easily be steered to produce toxic and harmful outputs.
“[W]e believe that by considering both benefits and risks in making decisions about how to develop and deploy advanced AI,” Meta writes in the document, “it is possible to deliver that technology to society in a way that preserves the benefits of that technology to society while also maintaining an appropriate level of risk.”