OpenAI CEO Sam Altman Warns: ChatGPT Conversations Not Legally Confidential

Sam Altman (Image: X)
Your most personal ChatGPT chats may not be private—OpenAI's CEO reveals they can be disclosed during lawsuits.

As artificial intelligence tools become more entwined with daily life, many users have turned to platforms like ChatGPT for emotional guidance, life coaching, and even informal therapy. However, OpenAI CEO Sam Altman has now raised an important red flag: your private conversations with ChatGPT might not be as private as you think.

In a candid discussion on This Past Weekend, a popular YouTube podcast hosted by comedian Theo Von, Altman shared that conversations users have with ChatGPT—no matter how sensitive or personal—are not protected by legal confidentiality. This means that, under certain circumstances such as a lawsuit, these interactions could be made public.

“People talk about the most personal sh*t in their lives to ChatGPT,” Altman remarked. “People use it – young people, especially, use it – as a therapist, a life coach; having these relationship problems and [asking] what should I do? And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

This acknowledgment has sparked fresh concerns among users and privacy advocates, especially considering how deeply integrated these AI tools have become in everyday emotional and mental wellness routines. Altman emphasized the urgency of developing policies that extend similar confidentiality protections to AI interactions.

“So, if you go talk to ChatGPT about your most sensitive stuff and then there’s like a lawsuit or whatever, we could be required to produce that, and I think that’s very screwed up,” he said.

This revelation underscores a major gap in current legal frameworks, especially as AI is rapidly being adopted across sectors. Unlike end-to-end encrypted platforms like WhatsApp or Signal, which are designed to keep user communications private and unreadable by third parties, ChatGPT's infrastructure allows OpenAI to access conversations. This access, according to the company, helps improve the model and monitor for potential misuse.

Although OpenAI has stated that it deletes free-tier user chats within 30 days, the company has also noted that some interactions may be retained for legal or safety-related purposes. The issue has become even more pressing as OpenAI faces a high-profile lawsuit filed by The New York Times, in which a court order has compelled the company to preserve user conversations—excluding those from enterprise accounts—for legal scrutiny.

As AI becomes more trusted with intimate parts of our lives, Altman’s remarks make it clear that the industry has yet to address the need for legal and ethical protections that match those of human professionals. Until then, users are urged to exercise caution and understand the limitations of confidentiality when engaging with AI chatbots for personal or mental health matters.
