
Testing to build trustworthy AI

30 November 2025

AI has the potential to solve many of the world’s challenges, driving innovation and providing insights that can transform industries and the planet, but can we trust it? Studies show that while most of us use AI, trust in the technology remains low. This is not surprising given reports of errors and ‘hallucinations’. Recent research by the BBC and the European Broadcasting Union (EBU), for example, revealed that AI assistants misrepresent news content in nearly half of their responses to user queries. At a time when many people use AI to source their news, this is a matter of societal concern.

What’s more, there have been many other incidents of chatbots going awry, such as the case in which a chatbot on the social media platform X gave detailed instructions on how to break into a politician’s home and assault him.

Hallucinations, bias, discrimination and misinformation are just some of the risks associated with AI systems, risks that can erode trust in AI and harm society. Yet robust data and trustworthy systems are exactly what AI needs in order to fulfil its potential to do good.

IEC has been developing standards aimed at building trustworthy AI systems for many years. An example is the management system standard ISO/IEC 42001, which helps organizations develop, provide or use AI systems responsibly. It provides guidance for establishing, implementing, maintaining and continually improving an AI management system. Because it is a management system standard, organizations can be certified against it, providing reassurance to governments and stakeholders that its requirements have been implemented correctly.

IEC is also part of the AI and Multimedia Authenticity Standards Collaboration (AMAS), a global, multistakeholder initiative led by the World Standards Cooperation aimed at combating global disinformation, misinformation and the misuse of AI-generated content.

Comprehensive and relevant testing of AI systems is essential for reducing these risks. The first in a series of standards to specifically support the testing of AI systems has just been published. ISO/IEC TS 42119-2, Testing of AI — Part 2: Overview of testing AI systems, gives an overview of the concepts and approaches used in the series, which is aimed at helping AI developers identify appropriate test practices, approaches and techniques applicable to AI systems.

Specifically, it provides the requirements and guidance on the application of the ISO/IEC/IEEE 29119 series to the testing of AI systems. The ISO/IEC/IEEE 29119 series is the industry benchmark for software testing.

Future standards in the series include:

  • ISO/IEC TS 42119-3: approaches and guidance on processes for the verification and validation analysis of AI systems;

  • ISO/IEC TS 42119-7: technology-agnostic guidance for conducting red teaming assessments on AI systems;

  • ISO/IEC TS 42119-8: definitions, concepts, requirements and guidance related to assessing prompt-based text-to-text AI systems that utilize generative AI.

How to build trustworthy AI systems is a key item on the agenda of the first International AI Standards Summit, held on 2-3 December in Seoul, Korea. The event will bring together senior leaders from both the public and private sectors to collaborate and exchange ideas in a bid to drive the trusted, beneficial use of AI through International Standards.

Leading up to the event, IEC Academy is running a series of public webinars to provide an introduction to many of the issues that the Summit will explore in greater detail. The series will feature speakers from government, industry and the standards community offering insights and perspectives on how current and future standards can help shape a more sustainable AI future.

Learn more about the International AI Standards Summit.

Register here for one or more pre-Summit webinars.