Reports from August 2025 indicated that over 370,000 private conversations with the Grok AI chatbot had been leaked and made publicly accessible through Google and other search engines. The leak, which stemmed from Grok’s “share” feature, exposed a wide range of sensitive data that users believed was private.

How the chat leak happened

The leak was caused by an insecure sharing feature on the Grok chatbot, developed by Elon Musk’s xAI. 

  • Publicly indexed URLs: When a user clicked the “share” button to create a unique URL for a conversation, that link was automatically published on the Grok website.
  • Search engine discovery: These public URLs were then indexed by search engines like Google, Bing, and DuckDuckGo, making them searchable by anyone online.
  • Lack of user warning: Crucially, users were not clearly informed that using the share feature would make their conversations public and searchable. 

What was exposed in the leaked chats

The exposed chats contained a vast amount of sensitive, personal, and potentially dangerous information, including: 

Violent threats: Disturbingly, a Forbes investigation uncovered conversations where the chatbot was pushed to generate a “meticulous and executable” plan for assassinating Elon Musk. 

Personal information: Private medical questions, psychological queries, passwords, and other identifying details were made public.

Confidential business information: Some users disclosed business details and uploaded confidential spreadsheets and images.

Illegal content: Transcripts also included shocking content, such as instructions on how to manufacture illegal drugs and explosives.
