
Conference Paper

Hallucination Detection in Large Language Models Using Diversion Decoding

Published: June 24, 2025

Author(s)

Basel Abdeen (University of Texas at Dallas), S. M. Tahmid Siddiqui (University of Texas at Dallas), Meah Tahmeed Ahmed (University of Texas at Dallas), Anoop Singhal (NIST), Latifur Khan (University of Texas at Dallas), Punya Parag Modi (University of Texas at Dallas), Ehab Al-Shaer (CMU)

Conference

Name: 39th IFIP WG 11.3 Annual Conference on Data and Applications Security and Privacy, DBSec 2025
Dates: 06/23/2025 - 06/24/2025
Location: Gjøvik, Norway
Citation: Data and Applications Security and Privacy XXXIX, vol. 15722, pp. 116-133

Abstract

Large language models (LLMs) have emerged as a powerful tool for retrieving knowledge through seamless, human-like interactions. Despite their advanced text generation capabilities, LLMs exhibit hallucination tendencies, where they generate factually incorrect statements and fabricate knowledge, undermining their reliability and trustworthiness. Multiple studies have explored methods to evaluate LLM uncertainty and detect hallucinations. However, existing approaches are often probabilistic and computationally expensive, limiting their practical applicability.

In this paper, we introduce diversion decoding, a novel method for developing an LLM uncertainty heuristic by actively challenging model-generated responses during the decoding phase. Through diversion decoding, we extract features that capture the LLM’s resistance to producing alternative answers and use these features to train a machine-learning model that serves as a heuristic measure of the LLM’s uncertainty. Our experimental results demonstrate that diversion decoding outperforms existing methods at significantly lower computational cost, making it an efficient and robust solution for hallucination detection.
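
The abstract describes the method only at a high level, so the sketch below is a minimal, hypothetical illustration of the general idea rather than the authors' implementation: it challenges a model's answer with a diversion prompt, measures how strongly the model resists switching to an alternative token (a log-probability gap), and feeds those features to a simple classifier. The model name (gpt2), the challenge-prompt wording, the choice of two features, and the logistic-regression classifier are all illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only -- not the paper's implementation.
# "Resistance" is approximated as the log-probability gap between the model's
# own answer and the best competing token when the prompt pushes it to divert.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "gpt2"  # placeholder model; the paper's evaluated LLMs may differ
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()


def diversion_features(question: str, answer: str) -> list[float]:
    """Crude 'resistance' features: how firmly the model sticks to `answer`
    when the prompt explicitly challenges it toward an alternative."""
    challenge = (
        f"Question: {question}\n"
        f"Proposed answer: {answer}\n"
        f"Are you sure? The correct answer is"
    )
    ids = tokenizer(challenge, return_tensors="pt").input_ids
    answer_tok = tokenizer(" " + answer, add_special_tokens=False).input_ids[0]

    with torch.no_grad():
        next_logits = model(ids).logits[0, -1]        # next-token distribution
    log_probs = torch.log_softmax(next_logits, dim=-1)

    top2 = log_probs.topk(2)
    # Best competing token: the top token if it differs from the answer token,
    # otherwise the runner-up.
    alt_lp = top2.values[1] if top2.indices[0].item() == answer_tok else top2.values[0]
    ans_lp = log_probs[answer_tok]
    # Feature 1: confidence in re-asserting the answer under challenge.
    # Feature 2: margin over the most tempting diversion.
    return [ans_lp.item(), (ans_lp - alt_lp).item()]


# Toy training step: in practice, labels would come from a fact-checked QA set.
examples = [("What is the capital of France?", "Paris", 0),
            ("What is the capital of Australia?", "Sydney", 1)]  # 1 = hallucination
X = [diversion_features(q, a) for q, a, _ in examples]
y = [label for _, _, label in examples]
clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X))  # heuristic hallucination scores
```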


Keywords

large language models; hallucination detection; diversion decoding

Control Families

None selected

Documentation

Publication:
https://doi.org/10.1007/978-3-031-96590-6_7
Preprint (pdf)

Supplemental Material:
None available

Document History:
06/24/25: Conference Paper (Final)

Topics

Security and Privacy

assurance

Technologies

artificial intelligence
