Australia’s eSafety Commissioner has issued a stark warning about the rising misuse of artificial intelligence (AI) tools to generate sexualized and exploitative images without consent. At the forefront of this issue is Grok, an AI chatbot developed by Elon Musk’s xAI startup, which has recently faced international backlash over its controversial image-generation features.
The Rise in AI Image Abuse Complaints
Reports of non-consensual AI-generated sexual images have surged, with complaints doubling since late 2025, according to eSafety Commissioner Julie Inman Grant. Alarmingly, some complaints involve potential child exploitation content, while others concern adults targeted by image-based abuse. These developments come amid global concern about the ethical implications of generative AI tools like Grok.
Unlike other AI platforms such as ChatGPT, Grok has cultivated an “edgy” reputation, launching features such as “Spicy Mode,” designed to generate explicit content. This has raised regulatory red flags globally, with the European Union going as far as declaring the feature illegal. “AI’s ability to create hyper-realistic content is exacerbating the challenges faced by regulators and law enforcement,” Inman Grant emphasized.
Australia’s Leading Role in AI Regulation
Australia has taken a firm stance on this issue through enforceable industry codes that require online platforms to guard against child sexual exploitation material, whether real or AI-generated. The eSafety Commissioner has previously targeted similar tools, successfully forcing the withdrawal of “nudify” services from the Australian market.
Inman Grant highlighted that companies have a responsibility to incorporate safeguards throughout the product lifecycle. With new legislation on the horizon, her regulatory body is prepared to investigate and impose penalties on platforms that fail to comply. Australia has already penalized individuals: in 2023, a court imposed a groundbreaking fine of $212,000 (A$343,500) on a man who distributed AI-generated pornographic images of public figures.
Proposed Legislation for Tackling Deepfake Abuse
The push for stronger protections against non-consensual deepfakes is gaining momentum. Independent Senator David Pocock has introduced the “My Face, My Rights” Bill, which would authorize fines of up to $510,000 (A$825,000) for companies that fail to comply with removal notices. Individuals could face fines of up to $102,000 (A$165,000).
“We are now living in a world where anyone can create a deepfake and weaponize it,” Pocock commented, calling for urgent government action against AI misuse.
Protecting Yourself Against AI Misuse
As AI tools become more accessible, individuals are encouraged to take steps to protect themselves online. Identity verification services and watermarking of authentic digital content can help reduce the risks of image manipulation, as the sketch below illustrates. For an extra layer of protection, cybersecurity-focused products such as Norton 360 offer privacy features to monitor and secure your personal data.
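For readers curious what visible watermarking involves in practice, the following is a minimal sketch in Python using the open-source Pillow imaging library; the file names and watermark text are illustrative placeholders, not part of any service mentioned above, and a tiled semi-transparent overlay is just one simple approach rather than a guarantee against misuse.

```python
# Minimal visible-watermark sketch using Pillow (pip install Pillow).
# File names and watermark text below are hypothetical examples.
from PIL import Image, ImageDraw, ImageFont

def add_watermark(input_path: str, output_path: str, text: str) -> None:
    """Tile a semi-transparent text watermark across an image."""
    base = Image.open(input_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)

    # Pillow's built-in default font keeps the sketch self-contained.
    font = ImageFont.load_default()

    # Repeat the text in a grid so cropping cannot easily remove it.
    step = 120
    for y in range(0, base.height, step):
        for x in range(0, base.width, step):
            draw.text((x, y), text, fill=(255, 255, 255, 96), font=font)

    # Merge the overlay onto the original and save as JPEG.
    watermarked = Image.alpha_composite(base, overlay)
    watermarked.convert("RGB").save(output_path, "JPEG")

add_watermark("photo.jpg", "photo_watermarked.jpg", "© Jane Doe 2025")
```

A visible watermark does not prevent an image from being manipulated, but it makes stripped or altered copies easier to identify and dispute.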
Governments and tech companies must work together to ensure that generative AI prioritizes safety, accountability, and ethical innovation as we navigate this emerging digital landscape.