Uncertainty Quantification and Mitigation
in Large Language Models

2025 ICDM Tutorial

Thursday, November 13, Afternoon (4:10–6:00 PM, Room: California)

Location: Capital Hilton, 1001 16th Street NW, Washington, DC 20036, USA

Abstract

Large Language Models (LLMs) have revolutionized numerous applications with their impressive capabilities, but their reliability remains a concern due to the lack of robust uncertainty quantification (UQ) methods. This tutorial addresses this gap by providing a comprehensive overview of UQ techniques tailored for LLMs. We explore the theoretical foundations of UQ, categorize UQ tasks, datasets, and evaluations, and delve into the sources of uncertainty in LLMs, including input, reasoning, parameter, and prediction uncertainties. Recent advances in UQ, such as novel frameworks for decomposing uncertainty into aleatoric and epistemic components, supervised approaches for uncertainty estimation, and multi-dimensional UQ frameworks, will be discussed.

The tutorial is designed for researchers and practitioners from both machine learning and LLM communities, aiming to enhance the trustworthiness and reliability of LLMs in high-stakes applications. By integrating UQ into LLMs, we can improve their decision-making capabilities, address ethical considerations, and foster more reliable AI systems.
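As a concrete illustration of the aleatoric/epistemic decomposition mentioned in the abstract, the sketch below shows one common recipe: sample several predictive distributions (e.g., from repeated stochastic LLM generations over a fixed answer set), then split the entropy of the averaged prediction into an expected-entropy (aleatoric) term and a mutual-information (epistemic) term. This is a minimal sketch, not the method of any specific paper covered in the tutorial; the function names and the assumption of softmax probabilities over a fixed label set are illustrative.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (nats) along the last axis, with clipping for log(0)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def decompose_uncertainty(sample_probs):
    """Decompose predictive uncertainty from Monte Carlo samples.

    sample_probs: array of shape (n_samples, n_classes), each row a
    predictive distribution from one stochastic forward pass / generation.
    Returns (total, aleatoric, epistemic), where
      total     = H[ E[p] ]            (entropy of the mean prediction)
      aleatoric = E[ H[p] ]            (mean per-sample entropy)
      epistemic = total - aleatoric    (mutual information; model disagreement)
    """
    mean_p = sample_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = entropy(sample_probs).mean()
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

When all samples agree, the epistemic term vanishes and any remaining uncertainty is aleatoric; when samples disagree sharply, the epistemic term dominates.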

Schedule

Tutorial Materials

Presenters

Longchao Da

Longchao Da is a PhD candidate at Arizona State University. His research interests are Reinforcement Learning and Trustworthy AI (Sim-to-Real). He has publications in top venues such as NeurIPS, ICML, KDD, AAAI, IJCAI, ECML-PKDD, CIKM, CDC, CASE, IJMLC, Machine Learning, and SDM. He hosted an in-person, hands-on tutorial at ITSC 2023 in Spain, a leading interdisciplinary venue, with more than 60 participants.

Xiaoou Liu

Xiaoou Liu is a second-year Ph.D. student in Computer Science at Arizona State University. Her research focuses on trustworthy machine learning, with an emphasis on explainable graph neural networks and uncertainty quantification in large language models. Her work has been published at venues such as KDD and ICCPS.

Hua Wei

Hua Wei is an Assistant Professor in the School of Computing and Augmented Intelligence at Arizona State University. His research focuses on data mining, reinforcement learning, and uncertainty quantification. He has received an Amazon Research Award for LLM uncertainty quantification and multiple Best Paper Awards at top conferences in machine learning, artificial intelligence, and data mining. He has also actively organized events related to uncertainty and LLMs, such as the Workshop on Uncertainty Reasoning and Quantification in Decision Making and the Agent4IR workshops at CIKM and KDD.
