OpenAI CEO Sam Altman has warned that conversations with ChatGPT are not protected by legal privilege and could be admitted as evidence in court, raising urgent privacy concerns as the AI tool continues to be widely used for personal and emotional support.
Altman made these comments during an appearance on comedian Theo Von’s This Past Weekend podcast, where he expressed concern about the volume of sensitive and personal information users share with the chatbot, particularly young people who often turn to it for emotional guidance.
“People talk about the most personal shit in their lives to ChatGPT,” Altman said. “People use it, young people especially, use it as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’” he said.
Confidentiality laws do not protect ChatGPT conversations
Altman highlighted the lack of confidentiality protections surrounding AI interactions, which contrasts with well-established legal privileges for communications with doctors, lawyers, and therapists.
“And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT,” he added.
Legal repercussions
Altman cautioned that this gap in protections could have serious consequences.
“If you go talk to ChatGPT about the most sensitive stuff and then there’s a lawsuit or whatever, we could be required to produce that,” he said. “And that’s a real problem.”
Altman called for urgent action to establish privacy frameworks for AI tools.
“I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever, and no one had to think about that even a year ago,” Altman concluded.
GPT-5 launch in August
Altman’s remarks come as OpenAI prepares to launch GPT-5, its next-generation AI language model, anticipated in August 2025. Techpression earlier reported that GPT-5 will offer improved accuracy, reduced errors, and enhanced understanding of multimodal inputs such as images and audio.
As AI technologies become increasingly integrated into daily life, Altman’s comments highlight the urgent need for clear legal frameworks to protect user privacy while balancing transparency and accountability.
According to multiple reports, GPT-5 has been in development since late 2023 and is undergoing final safety training and internal evaluations before its release.