Investigation launched against xAI over Grok images
An official investigation has been launched in California against Elon Musk's xAI, prompted by reports that the Grok chatbot generates sexually explicit images.

The California Attorney General has initiated an official investigation into xAI, the company owned by Elon Musk. The probe was prompted by reports that Grok, a chatbot developed by xAI, was capable of generating sexually explicit images, including those involving minors. The news sparked widespread public outcry and once again raised questions about ethical boundaries and safety in the field of generative artificial intelligence.
The context is this: generative models such as Grok are trained on massive datasets of text and images from the internet. During training, they absorb not only useful information but also biases, stereotypes, and content that may be unacceptable or illegal. The problem is that developers cannot always fully control what knowledge and capabilities a neural network acquires, or reliably prevent the generation of unwanted content.
According to reports, Grok produced images that were deemed non-consensual and exploitative, which constitutes a serious violation of both law and ethical standards. Elon Musk stated that he was unaware of the problem and that the company would take steps to address it. However, his statement failed to reassure the public and did not prevent the investigation from being launched.
The investigation conducted by the California Attorney General could have serious consequences for xAI. If it is proven that the company failed to take adequate measures to prevent the generation of unacceptable content, it could face hefty fines, lawsuits, and even a ban on operating in the state. Moreover, the incident could negatively impact the company's reputation and public trust in its products.
This case underscores the need for stricter rules and standards in generative artificial intelligence. Developers must be held accountable for the content their models produce and must take measures to prevent the generation of unacceptable or illegal material. It is equally important that users be able to report violations and receive protection from abuse.
In the future, more advanced methods for controlling and filtering content generated by neural networks will likely be developed. Specialized regulatory bodies may be established to oversee compliance with ethical standards and rules in this area. It is crucial that the development of artificial intelligence goes hand in hand with ensuring safety and protecting human rights. Otherwise, we risk creating technologies that, instead of bringing benefits, cause harm.