xAI, Elon Musk's AI company, has missed a self-imposed deadline to publish a finalized AI safety framework, as noted by the watchdog group The Midas Project.
xAI isn't exactly known for its strong commitment to AI safety. A recent report found that Grok, the company's AI chatbot, would undress photos of women when asked. Grok can also be considerably cruder than chatbots like Gemini and ChatGPT, cursing without much restraint.
Nevertheless, in February at the AI Seoul Summit, a global gathering of AI leaders and stakeholders, xAI published a draft framework outlining the company's approach to AI safety. The eight-page document laid out xAI's safety priorities and philosophy, including its benchmarking protocols and considerations for deploying AI models.
However, as The Midas Project noted in a blog post on Tuesday, the draft applied only to unspecified future AI models "not currently under development." Moreover, it failed to articulate how xAI would identify and implement risk mitigations, a core component of a document the company signed at the AI Seoul Summit.
In the draft, xAI said it planned to release a revised version of its safety policy "within three months," putting the deadline at May 10. That date has now come and gone without a word from the company.
Despite Musk's frequent warnings about the dangers of unchecked AI, xAI has a poor AI safety track record. A recent study by SaferAI, a nonprofit aimed at improving the accountability of AI labs, found that xAI ranks poorly among its peers, owing to its "very weak" risk management practices.
That's not to suggest that other AI labs are dramatically better. In recent months, xAI rivals including Google and OpenAI have rushed safety testing and been slow to publish model safety reports (or have skipped publishing them altogether). Some experts worry that this apparent deprioritization of safety efforts comes at a time when AI is more capable, and thus potentially more dangerous, than ever.