

The UK’s data watchdog has opened a formal investigation into Elon Musk-owned AI company X.AI and X Internet Unlimited Company over the use of personal data in the Grok artificial intelligence system, a move that could have far-reaching implications for how the advertising and marketing industry uses generative AI.
The Information Commissioner’s Office (ICO) confirmed on February 3rd that it is examining whether Grok has breached UK data protection law following reports the tool has been used to generate non-consensual sexualised images and videos of individuals, including children. The investigation will assess whether personal data was processed lawfully, fairly and transparently, and whether sufficient safeguards were embedded in the system’s design to prevent harmful outputs.
For the ad industry, which increasingly relies on generative AI for ideation, content creation, image generation and personalisation, the probe is a stark reminder that regulatory scrutiny of how AI tools are trained, deployed and governed is tightening. The ICO’s focus on privacy by design also raises questions about brand and agency liability when third-party AI platforms are used in commercial work.
William Malcolm, the ICO’s executive director for regulatory risk and innovation, said the reported misuse of personal data to generate intimate imagery represents “immediate and significant harm”, particularly where children are involved. He added that the regulator is working closely with Ofcom and international counterparts to ensure a coordinated response to AI-driven risks.
The investigation follows an earlier ICO intervention in January, when the regulator sought urgent information from both companies, and lands amid growing regulatory momentum around AI governance in the UK and Europe. For advertisers, agencies and production partners, the case underscores the need for stricter due diligence on AI tools, clearer consent frameworks and robust internal policies governing synthetic content, especially as regulators move from guidance to enforcement.
As AI continues to reshape creative workflows, the Grok investigation signals that data protection compliance is no longer a theoretical concern, but a material business risk for brands operating at the intersection of creativity, technology and personal data.