Tengrinews.kz - OpenAI has confirmed that it actively monitors user chats in ChatGPT for harmful content, according to Futurism.
What information gets passed to the police?
The company explained that it focuses on conversations that may pose a threat to other people or organizations. Such cases are reviewed and, if necessary, forwarded to law enforcement.
“When we detect users who are planning to harm others, we route their conversations to specialized pipelines where they are reviewed by a small team trained on our usage policies and who are authorized to take action, including banning accounts. If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement,” OpenAI stated.
What remains confidential?
At the same time, OpenAI clarified that conversations involving self-harm are not shared with police. The company said this decision reflects respect for users' privacy and personal autonomy in matters of their own mental health.
Is there really privacy?
OpenAI also outlined prohibited uses of ChatGPT, which include:
- encouraging suicide or self-harm,
- developing or using weapons,
- harming people or destroying property,
- engaging in unauthorized activities that compromise the safety of services or systems.
These statements have sparked confusion because they appear contradictory: some user conversations remain confidential, while others may be flagged, reviewed, and even passed on to the police.
The controversy is deepened by OpenAI’s public stance on privacy. The company resisted legal demands from The New York Times and other publishers, who sought access to ChatGPT transcripts to determine whether copyrighted material had been used to train AI models. OpenAI firmly rejected those requests, citing the need to protect user privacy.
Earlier, OpenAI introduced a new generation of the artificial intelligence model that powers its popular ChatGPT chatbot.