xAI's promised AI safety report is MIA

Elon Musk's AI venture, xAI, is in the spotlight again, and not for good reasons. According to the independent watchdog group The Midas Project, the company has missed its self-imposed deadline to publish a finalized AI safety framework.

xAI has previously been criticized for its lax approach to AI safety. One alarming report found that the company's chatbot, Grok, would undress photos of women on request. Grok is also considerably cruder than rival chatbots, using profanity with little restraint.

That said, xAI did try to counter this image by presenting a preliminary AI safety framework at the AI Seoul Summit in February. The eight-page document laid out the company's approach to AI safety, including how it gauges the safety of its models and what it weighs before deploying them.

However, as The Midas Project noted in a blog post on Tuesday, the draft applies only to unspecified future AI models that are not yet in development. It also fails to spell out how xAI would identify and implement risk mitigations, a core component of the document the company signed at the AI Seoul Summit.

In the draft, xAI said it would release a revised version of its safety policy "within three months," which works out to May 10. That date has come and gone with no updated policy appearing on any of xAI's official channels.

Elon Musk has long been vocal about the perils of unchecked AI, warning of grim consequences if the technology goes unmanaged. Yet that rhetoric hasn't translated into practice at xAI. A recent study by SaferAI, an organization committed to holding AI labs accountable, found that xAI ranks poorly among its peers because of its weak risk management practices.

xAI isn't the only lab stumbling on safety, though. In recent months, heavyweights like Google and OpenAI have also prioritized speed over safety, publishing AI safety reports slowly and grudgingly, or not publishing them at all. Industry experts worry about this retreat from safety safeguards coming just as AI is growing more capable, and therefore potentially more dangerous.

by rayyan