Australian Prime Minister Anthony Albanese has voiced strong concern over the use of artificial intelligence on Elon Musk’s social media platform, X, citing recent allegations that its chatbot, Grok, has generated exploitative sexual content. The eSafety Office of Australia reported a small but notable increase in complaints about the misuse of Grok to create sexualized or exploitative imagery.
On March 15, 2024, the eSafety Office said it would use its powers, including removal notices, to address any material that meets the thresholds set out in the Online Safety Act. In a statement, the watchdog emphasized that X, Grok, and similar services are bound by systemic safety obligations under Australia’s stringent industry codes and standards to detect and remove child sexual exploitation material and other unlawful content.
Albanese’s comments formed part of a broader condemnation from international leaders, including British Prime Minister Keir Starmer, of practices associated with the platform. During a press briefing in Canberra, Albanese stated, “The use of generative AI to exploit or sexualize people without their consent is abhorrent.” He also criticized the misuse of Grok’s image-creation function, calling it a failure of social media platforms to exercise social responsibility.
The global backlash against Grok has prompted X to restrict the AI’s capabilities, limiting image creation and editing to paying subscribers. As of March 15, the chatbot responded to requests for image alterations with a message stating that those features are available only to subscribers.
While most complaints received by the eSafety Office concern adult content, some have raised alarms over potential child sexual exploitation material. A spokesperson for the office said the reports of image-based abuse were recent and still under assessment, and that the illegal and restricted content reviewed so far did not meet the classification threshold for class 1 child sexual exploitation material, so no removal notices or enforcement actions have been taken at this time.
Despite this, concerns regarding the increasing use of AI for sexual exploitation persist, especially when children are involved. The eSafety Office highlighted the need for online services to implement robust systems and processes to protect Australians.
This situation underscores the necessity of “Safety by Design,” which calls for incorporating appropriate safeguards into generative AI products at every stage of their development. Without such safeguards, these technologies can be misused and weaponized before harm is detected.
As discussions surrounding the ethical use of AI continue, the Australian government remains committed to ensuring that platforms like X adhere to strict standards that prioritize user safety and well-being.