No Legal Shield: Sam Altman Flags ChatGPT Privacy Risks
Key Takeaways
- ChatGPT chats are not legally protected like those with doctors or lawyers, Sam Altman warns;
- OpenAI may be forced to share user data from ChatGPT if it is requested in a lawsuit;
- Altman said AI privacy laws are lacking and fears governments could overstep with surveillance.
OpenAI CEO Sam Altman has warned that anything users say to ChatGPT could potentially be made public in a legal case.
Unlike conversations with doctors, therapists, or lawyers, chats with the AI chatbot do not have any legal protection.
In a July 24 interview with podcaster Theo Von, Altman said, "And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s like legal privilege for it. And we haven’t figured that out yet for when you talk to ChatGPT".
He added that if someone shares personal information with ChatGPT and ends up in a lawsuit, OpenAI might be required to hand over that data.
Altman noted that conversations with AI should be treated the same way as those with licensed professionals, at least when it comes to privacy. According to him, several policymakers agree and believe the issue needs to be addressed soon.
Because the legal rules around AI tools remain unclear, even Altman admitted that he feels uneasy using them for anything too personal. He said:
That’s one of the reasons I get scared sometimes to use certain AI stuff.
Altman also stated that as AI tools become more common, governments may want to keep tighter control to prevent misuse. He said, "History is that the government takes that way too far, and I’m really nervous about that".
On July 25, OpenAI stated that a new ChatGPT feature called “agent mode” might pose a risk to personal data. How? Read the full story.