
Allow ChatGPT to identify public figures from images — this limitation makes no sense #2495

danybranding started this conversation in Ideas

Summary

Right now, ChatGPT refuses to identify clearly recognizable public figures from images — even when those faces are widely visible across the media, social networks, public appearances, and official accounts.

This restriction may have been intended to protect user privacy, but in practice it is counterproductive and often absurd.

Problem

When a user uploads an image of a public official, influencer, or other well-known personality and asks, "Who is this?", ChatGPT replies:

"Sorry, I can't identify people in images for privacy reasons."

This response ignores the fact that:

  • These individuals have voluntarily made their image public.
  • Their names, faces, and content are already indexed by search engines.
  • Tools like Google Lens, Yandex, FaceCheck, and even free web scrapers can identify them instantly — often without restrictions.
  • The current policy protects impostors and scammers more than it protects real users.

What should change?

Let ChatGPT do what other public tools already do — with clear ethical boundaries.

Suggested solution:

  1. If the person in the image is a widely recognized public figure, allow ChatGPT to provide their name and basic public info.
  2. If the image is of a private citizen, maintain the current restrictions.

This isn’t about stalking or invading privacy. It’s about verifying publicly available information — safely, ethically, and without forcing users to rely on shady or overpriced alternatives.

Why it matters

Users have a right to verify identities — especially when facing impostors, scams, or misleading content. By refusing to identify even the most public of figures, ChatGPT undermines trust and utility.

This isn’t privacy.
It’s fear disguised as ethics.

Let’s fix this.
