This illustration, taken on March 11, 2024, shows the xAI Grok chatbot and the ChatGPT logo.
Dado Ruvic | Reuters
Be careful what you tell your chatbot: your conversations may be used to improve the artificial intelligence system that underpins the chatbot.
If you ask ChatGPT for advice about an embarrassing medical condition, or upload a confidential company report to Google's Gemini to be summarized for a conference, be aware that anything you disclose may be used to tweak the algorithms underlying those AI models.
It’s no secret that the AI models underlying popular chatbots are trained on vast amounts of information gleaned from the internet – blog posts, news articles, social media comments – and generate responses by predicting the most likely next words in a sequence.
This training often happens without consent, raising copyright concerns, and experts say it's probably too late to remove any data that may have been used, given the opaque nature of the AI models.
But what you can do going forward is to ensure that your chatbot interactions aren't used to train the AI. While this isn't always possible, some companies are giving users the option.
Google Gemini
Google stores conversations with the Gemini chatbot to train its machine learning systems. For users over 18, chats are stored for 18 months by default, though this can be adjusted in settings. Human reviewers also have access to conversations to improve the quality of the generative AI models that power Gemini. Google warns users not to share sensitive information with Gemini, or any data they wouldn't want a human reviewer to see.
To opt out of this, go to Gemini's website and click on the Activity tab. Once you click the Turn Off button, a drop-down menu will let you choose to stop recording all future chats or delete all previous conversations. The company warns that any conversations selected for human review won't be deleted, but will be stored separately. Whether you turn activity off or on, Google also says that all chats with Gemini will be stored for 72 hours to “provide our services and process your feedback.”
Gemini's help page also details the process for iPhone and Android app users.
Meta AI
Meta has an AI chatbot that has been barging into conversations on Facebook, WhatsApp, and Instagram, built on the company's open-source AI language models. The company says these models are trained on information shared on its platforms, such as social media posts, photos, and caption information, but not private messages with friends and family. The models are also trained on publicly available information collected by “third parties” from other parts of the web.
Not everyone can opt out. People in the 27 European Union countries and the UK, which have strict privacy regulations, have the right to object to their information being used to train Meta's AI systems. On Facebook's privacy page, click “More policies and articles” from the list on the left, then click the Generative AI section. Scroll down to find a link to the form to object.
There is a box to provide additional information to support your request, but it doesn't spell out exactly what to include. I wrote that, as a UK resident, I was exercising my right to revoke consent to my personal information being used for AI training. I received an almost immediate email saying Meta had considered my request and would respect my objection. “This means your request will apply going forward,” it said.
People in the United States or other countries without national data privacy laws do not have this option.
Meta's Privacy Hub has a link to a form where users can request that their data collected by third parties not be used to “develop and improve Meta's AI.” However, the company cautions that it won't automatically fulfill requests and will review them based on local law. The process itself is cumbersome: users must submit an example of a prompt that led the chatbot to return their personal information, along with screenshots of the response.
Microsoft Copilot
If you're an individual user, there is no way to opt out. Your best bet is to delete your interactions with the Copilot chatbot by visiting your Microsoft account settings and privacy page. To find the delete button, look for a drop-down menu that says “Copilot interaction history” or “Copilot activity history.”
ChatGPT by OpenAI
If you have an OpenAI account, log in through your web browser, open the settings menu, navigate to the data controls section, and disable the “Improve the model for everyone” setting. If you don't have an account, you can find the same option to opt out of AI training by clicking the small question mark at the bottom right of the web page, then clicking settings. Mobile users can make the same choice on the ChatGPT Android and iOS apps.
On its data management help page, OpenAI says that when a user opts out, their conversations will still appear in chat history but won't be used for training. Temporary chats, which don't appear in history, are stored for 30 days and reviewed only if necessary to monitor for abuse.
Grok
Elon Musk's X has quietly enabled a setting that allows Grok, the billionaire's AI chatbot, to train on data from the social media platform. The setting is on by default, allowing Grok to use data such as users' posts, “interactions, inputs, and results” for training and “fine-tuning.”
The change was not publicly announced and only came to light when X users noticed it in July. To opt out, users need to go to the settings of the desktop browser version of X, click on “Privacy and Safety,” scroll down to “Grok” and uncheck the box. They can also delete their conversation history with Grok, if they have one. Unfortunately, there's no way to do this from the X mobile app.
Claude
Anthropic says its chatbot, Claude, is not trained on personal data, and by default no user questions or requests are used to train its AI models. However, users can give “explicit permission” for certain responses to be used for training, by giving a thumbs up or down or by emailing the company. Conversations flagged for safety review may also be used to train the company's systems to better enforce its rules.