By Touradj Ebrahimi, AI Expert, Professor at EPFL, and Standards Leader
Trust in digital content isn’t just a technology issue – it’s a societal risk. According to the World Economic Forum (WEF)’s Global Risks Report 2025, misinformation and disinformation rank as the top global short-term risks, ahead of even conflict and environmental crises. These risks have the potential to erode trust in institutions, destabilize societies and derail cooperation on urgent global challenges.
That’s why technical solutions – anchored in strong International Standards – are more important than ever.
This week, as AI experts from around the world gather at the AI for Good Summit in Geneva, Switzerland, the conversation is no longer only about the power of artificial intelligence. It’s about trust.
We live in an era where video, audio and images can be fabricated with stunning realism. Deepfakes and AI-generated synthetic content have moved beyond text and can now be all but indistinguishable from authentic recordings. This revolution offers incredible opportunities – but also serious risks.
The trust challenge: not so black and white
How do we know what to believe? Trust is not a simple yes-or-no proposition. It is probabilistic – a question of degree and context. A news clip might be authentic in one sense but altered in another. A voice recording might be real but taken out of context. This means any solution to verify authenticity must account for complexity, nuance and the possibility of error.
The danger isn’t theoretical. We’ve already seen deepfakes and manipulated media undermine reputations, incite violence and threaten the credibility of journalism. These AI-generated forgeries have been linked to misinformation campaigns, fraud and the erosion of public trust, and their sophistication is outpacing many detection methods, leaving experts scrambling for answers.
As deepfakes become increasingly convincing, the challenge of distinguishing fact from fiction grows ever more urgent. Without clear tools for authentication, the information ecosystem becomes vulnerable to manipulation, leading to confusion, division and harm.
Enter JPEG Trust, a new International Standard aimed at restoring confidence in digital images and videos by embedding verifiable “trust indicators” directly into the media itself.
Standards as a roadmap for trust
JPEG Trust offers a promising solution: a secure, standardized way to tag media with information about its origin, authenticity and any changes it has undergone. These cryptographically sealed “tags” allow viewers to verify whether content is genuine or altered, and to what degree it can be trusted.
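How might such a seal work in practice? The standard defines its own manifest format; purely as an illustration of the underlying idea, here is a minimal Python sketch that checks two things: that the pixels still match the hash recorded in a provenance manifest, and that the manifest carries a valid signature from its issuer. Every name here (verify_trust_tag, content_sha256) is hypothetical, and Ed25519 simply stands in for whatever signature scheme a real implementation would use.

```python
# Illustrative sketch only: the actual JPEG Trust manifest format and
# signature scheme differ; all names here are hypothetical.
import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def verify_trust_tag(image_bytes: bytes, manifest: dict,
                     signature: bytes, signer_pubkey: bytes) -> bool:
    """Check (a) the manifest matches the pixels and (b) the seal is intact."""
    # (a) The manifest binds itself to the content through a hash.
    if hashlib.sha256(image_bytes).hexdigest() != manifest["content_sha256"]:
        return False  # pixels were altered after the seal was applied

    # (b) The seal is a signature over the canonical form of the manifest.
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        Ed25519PublicKey.from_public_bytes(signer_pubkey).verify(signature, payload)
    except InvalidSignature:
        return False  # the provenance record itself was tampered with
    return True
```

Either check failing tells the viewer something specific: a hash mismatch means the content changed after sealing, while a bad signature means the provenance record itself cannot be trusted.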
The newly published standard, ISO/IEC 21617, lays a strong foundation, and what sets it apart is its broad, open framework: it was developed by ISO and IEC through a global consensus process designed to ensure consistency and interoperability across devices, platforms and borders.
JPEG Trust introduces not only a technical framework, but a new philosophy of contextual trust. It defines “trust profiles”, allowing different users – journalists, regulators or everyday viewers – to assign their own weight to authenticity indicators. Trust becomes layered, not binary.
In practice, this means you could see a video online accompanied by cryptographic seals showing it came from a verified camera, hasn’t been modified, and has passed several independent AI authenticity checks. Based on your own “trust profile”, you decide how much confidence to place in that content.
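To make that concrete, here is a deliberately simplified sketch of how two audiences might weight the very same evidence differently. The indicator names, the weights and the weighted-average scoring rule are all assumptions chosen for illustration; the standard's actual trust profiles are far richer than a three-entry dictionary.

```python
# Hypothetical trust-profile weighting; not the normative JPEG Trust model.

# Trust indicators attached to one piece of content (values in [0, 1]).
indicators = {
    "verified_capture_device": 1.0,   # cryptographic seal from the camera
    "unmodified_since_capture": 1.0,  # no edits found in the provenance chain
    "ai_detector_consensus": 0.8,     # e.g. 4 of 5 independent checks passed
}

# Different audiences weight the same indicators differently.
profiles = {
    "journalist": {"verified_capture_device": 0.5,
                   "unmodified_since_capture": 0.4,
                   "ai_detector_consensus": 0.1},
    "casual_viewer": {"verified_capture_device": 0.2,
                      "unmodified_since_capture": 0.2,
                      "ai_detector_consensus": 0.6},
}

def trust_score(indicators: dict, profile: dict) -> float:
    """Weighted average of the indicators under a given trust profile."""
    total = sum(profile.values())
    return sum(w * indicators.get(k, 0.0) for k, w in profile.items()) / total

for name, profile in profiles.items():
    print(f"{name}: {trust_score(indicators, profile):.2f}")
# journalist: 0.98, casual_viewer: 0.88; same evidence, different confidence
```

The point is not the arithmetic but the shape of the answer: a score on a continuum rather than a verdict, which is exactly the layered trust the standard is after.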
Why proprietary solutions fall short
We cannot rely on closed, proprietary systems to safeguard truth in the digital age. The challenge of trust is too urgent – and too global – for fragmented solutions. Instead, we need harmonized International Standards like JPEG Trust, developed through inclusive collaboration.
One of the most meaningful milestones of the 2025 AI for Good Summit is the launch of the AI & Multimedia Authenticity Standards (AMAS) Mapping. This comprehensive overview charts over 30 international standardization initiatives focused on the challenges of synthetic media and deepfakes, providing a clearer picture of where global action is already underway and where greater coordination is still needed.
This body of work, developed under the AMAS collaboration between ISO, IEC and ITU, represents a foundational step toward harmonizing global efforts and reducing duplication in one of the most urgent areas of AI governance – multimedia authenticity.
In my three decades of working on standards – from the creation of JPEG to today’s work on JPEG Trust – I’ve seen first-hand how international cooperation can lay the foundation for interoperability, scalability and global trust. The AMAS mapping builds on that principle. It allows policymakers, technologists and industry leaders to see where efforts converge and where they must coordinate to be effective.
This is why it’s a pleasure and a privilege for me to moderate a dialogue among leaders in standards development, regulation, academia and industry at the AI for Good Summit.
Trust can’t wait
Restoring trust in the age of artificial intelligence will not come from technical fixes alone. It requires cooperation that crosses borders, sectors and disciplines – a global infrastructure grounded in shared values like transparency, accountability and fairness.
At the heart of this effort stand the international standards-setting bodies ISO, IEC and ITU, where experts in artificial intelligence, multimedia and digital trust converge to shape the rules of engagement for a rapidly evolving landscape.
Through the AMAS collaboration, these organizations have begun the painstaking work of aligning fragmented efforts. Governments, companies and researchers are coming together to create a common language for authenticity – a way to tell what’s real from what’s been altered.
That’s what makes the AI for Good Summit more than a showcase of innovation. It is, in effect, a staging ground for the architecture of trust in the AI era. Here, amid the promises of generative models and autonomous systems, we are also deciding what kind of future we want to build.
And who we can believe in.