Reliable Uncertainty Quantification in Foundation Models
As foundation models (large language models) are increasingly deployed in high-stakes domains, the risk of generating incorrect or low-quality content has become a major concern. This risk is often amplified after fine-tuning for downstream tasks, where domain shift and limited task-specific data can substantially degrade reliability. In this poster presentation, we present a novel evidential formulation that reliably quantifies an uncertainty score to detect low-quality content from a foundation model.
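The abstract does not include the formulation itself. As a rough illustration of the general evidential deep learning idea (not necessarily the authors' specific method), a Dirichlet-based vacuity score can be computed from per-class evidence; all function and variable names below are illustrative.

```python
import torch
import torch.nn.functional as F

def evidential_vacuity(logits: torch.Tensor) -> torch.Tensor:
    """Minimal sketch of a Dirichlet-based uncertainty score.

    Non-negative evidence e_k is derived from model outputs,
    alpha_k = e_k + 1 parameterizes a Dirichlet distribution, and
    vacuity u = K / sum(alpha) approaches 1 when total evidence is
    low, signaling potentially unreliable (low-quality) output.
    """
    evidence = F.softplus(logits)          # map logits to non-negative evidence
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1)           # Dirichlet strength: total evidence + K
    num_classes = logits.shape[-1]
    return num_classes / strength          # vacuity in (0, 1]; higher = more uncertain
```

A high vacuity score flags predictions backed by little total evidence, which is the kind of low-quality-content signal the abstract describes.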
Exhibitor
Spandan Pyakurel
Xumin Liu
Advisor(s)
Qi Yu and Xumin Liu
Organization
Dissertation research