Allow ChatGPT to identify public figures from images — this limitation makes no sense #2495
Summary
Right now, ChatGPT refuses to identify clearly recognizable public figures from images — even when those faces are widely visible across the media, social networks, public appearances, and official accounts.
This restriction might have been intended to protect user privacy, but in reality, it’s counterproductive and often absurd.
Problem
When a user uploads an image of a public official, influencer, or other well-known personality and asks, "Who is this?", ChatGPT replies:
"Sorry, I can't identify people in images for privacy reasons."
This response ignores the fact that:
- These individuals have voluntarily made their image public.
- Their names, faces, and content are already indexed by search engines.
- Tools like Google Lens, Yandex, FaceCheck, and even free web scrapers can identify them instantly — often without restrictions.
- The current policy protects impostors and scammers more than it protects real users.
What should change?
Let ChatGPT do what other public tools already do — with clear ethical boundaries.
Suggested solution:
- If the person in the image is a widely recognized public figure, allow ChatGPT to provide their name and basic public info.
- If the image is of a private citizen, maintain the current restrictions.
This isn’t about stalking or invasion of privacy. It’s about verifying publicly available information — safely, ethically, and without forcing users to rely on shady or overpriced alternatives.
Why it matters
Users have a right to verify identities — especially when facing impostors, scams, or misleading content. By refusing to identify even the most public of figures, ChatGPT undermines trust and utility.
This isn’t privacy.
It’s fear disguised as ethics.
Let’s fix this.