Anthropic will begin training its artificial intelligence models using user chats and extend data retention to five years, with options for users and businesses to limit data use. According to Seeking Alpha, the company outlined controls for opting out of training and managing retention settings.
Training on chats with opt-out controls
The report states Anthropic will use user conversations to improve its models. Users will be able to opt out of having their chats used for training, and the company described settings through which accounts can manage their data preferences. The move is presented as part of an effort to enhance model performance while giving users who wish to restrict data processing the means to do so.
Business settings and data handling
Seeking Alpha notes that Anthropic detailed settings for business and enterprise customers to control whether employee interactions contribute to model training. The company also described how retention choices can be configured, allowing organizations to align data handling with their internal policies. The update emphasizes configurable controls around data use.
Data retention expanded to five years
Anthropic will extend its data retention period to five years, according to the report. The company explained that users and businesses can adjust certain retention settings, including options to limit training usage even when data is retained for the longer window. The report characterizes the changes as an expansion of retention paired with user-managed preferences.
Seeking Alpha’s coverage highlights Anthropic’s framing of these policy updates as balancing model improvement with customer-controlled options. The article points to mechanisms for opting out of training and for administering data policies across different account types. As conveyed in the report, Anthropic’s updated approach centers on leveraging user chats for model development while offering ways to reduce or decline participation.
The report does not include a timeline beyond the described policy update and focuses on Anthropic’s stated controls for both individual and organizational users. According to Seeking Alpha, the company frames the use of chats and the extended retention period within a structure of adjustable preferences.