Renewed scrutiny has fallen on Elon Musk’s social media platform X following reports that its AI chatbot Grok can be used to generate realistic deepfake images. Critics say the tool could be misused to create misleading or harmful content, including manipulated images of public figures.
Experts warn that while generative AI tools have legitimate applications, weak guardrails can make it easier for users to produce deceptive material that spreads rapidly online. They argue that deepfakes pose challenges for trust, consent and public understanding, particularly during periods of political or social tension.
X has said it is reviewing its policies and technical controls to limit abuse, while emphasising that responsibility also lies with users. Regulators and digital safety advocates say the case highlights how difficult it is for oversight to keep pace with fast-moving AI development, and are calling for clearer standards governing how such technologies are deployed.
The debate reflects broader concerns about how artificial intelligence is reshaping online spaces, as governments, companies and platforms weigh innovation against the risks of misinformation and abuse.