Anthropic to train AI on your chats: How to opt out
29 Aug 2025


Anthropic, the artificial intelligence (AI) company behind the Claude family of chatbots, has announced a major shift in its data policy.

The firm will now use user data, such as chat transcripts and coding sessions, to train its AI models. However, this will only happen if users do not opt out of the process.

The company also plans to extend its data retention period to five years for users who don't opt out.


Users have until September 28 to make a decision
Deadline


All Anthropic users will have to make a decision on this new policy by September 28.

If they click "Accept," the company will start using their data for training its models and retain it for up to five years.

The updated policy applies to "new or resumed chats and coding sessions." However, previous chats or coding sessions that haven't been resumed won't be used for training AI models.


Updated policy applies to all consumer subscription tiers of Claude
Scope


The updated policy will apply to all consumer subscription tiers of Claude, including Free, Pro, and Max. It also covers use of Claude Code from accounts associated with those plans.

However, it won't apply to commercial tiers such as Claude Gov, Claude for Work, and Claude for Education, or to API usage via third parties like Amazon Bedrock and Google Cloud's Vertex AI.


New users will have to choose their preference during signup
Signup process


New users of Claude will have to choose their preference during the signup process.

Existing users, on the other hand, will have to decide via a pop-up. They can defer the decision by clicking a "Not now" button, but must choose by September 28.

The pop-up informs users about updates to the Consumer Terms and Privacy Policy, effective September 28, 2025.


How to opt out?
User control


Users who want to opt out can do so by toggling the switch to "Off" when they see the pop-up.

If they accepted by mistake and want to reverse their decision, they can go to Settings > Privacy tab > Privacy Settings section and toggle "Off" under "Help improve Claude."

Consumers can change their decision anytime via privacy settings, but the change will apply only to future data.


Anthropic uses tools to filter sensitive data in chats
Privacy assurance


In its blog post, Anthropic assured users that it uses a combination of tools and automated processes to filter or obfuscate sensitive data.

The company also clarified that it does not sell user data to third parties.

This is part of the company's stated commitment to protecting user privacy while still using the data to improve and develop its models.
