CHAI Advanced PhD Student Talk: Deep Pandey, Advanced PhD Student, Computer Science, RIT

Event image: Deep Pandey, Graduate Student

CHAI Fall 2024 Seminar Series Advanced PhD Student Talk
                       Refreshments will be served

DATE:                  Monday, October 7, 2024, 12:00-1:00 PM

SPEAKER:          Deep Pandey, Advanced PhD Student, Computing and Information Sciences, RIT

TITLE:                Regularized Evidential Deep Learning Models for Uncertainty-Aware Learning from Limited Data

IN PERSON:    Golisano Hall (070), Room CYB-1710/1720

ADVISOR:       Qi Yu, PhD, Associate Director, CHAI; Professor, School of Information; Director, Graduate Program; Director, Machine Learning and Data Intensive Computing (MINING) Lab

ABSTRACT:  Deep learning models have achieved great success in data-rich fields ranging from computer vision and natural language processing to digital arts and robotics. However, their effectiveness is challenged by many real-world limited-data problems (e.g., medicine, healthcare, and security intelligence) where data for model training is scarce. In such problems, the model needs accurate uncertainty awareness to ensure that its predictions are trustworthy. To this end, evidential deep learning models, based on subjective logic, introduce a principled and computationally efficient way to make deterministic deep learning models uncertainty-aware. The resulting evidential models can quantify fine-grained uncertainty using the learned evidence. However, the subjective-logic-based theoretical framework constrains the output evidence to be non-negative, requiring specially designed activation functions for model training and inference. This constraint often leads to inferior predictive performance compared to standard softmax models, making it challenging to extend evidential models to many large-scale datasets. We investigate evidential models and identify a fundamental limitation of existing evidential deep learning approaches: existing evidential activation functions create zero-evidence regions, which prevent the model from learning from training samples falling into or near such regions. We design a family of evidential regularizers that address this limitation, enabling evidential models to learn from all training data points. Extensive experiments over multiple benchmark datasets and settings demonstrate the effectiveness of the developed regularized evidential deep learning models.
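As background for the abstract, the subjective-logic mapping it describes can be sketched as follows. This is a minimal illustration, not the speaker's implementation: the function name is hypothetical, and a plain ReLU is assumed as the evidence activation, which is what produces the zero-evidence regions mentioned above.

```python
import numpy as np

def evidential_prediction(logits):
    """Map raw network logits to Dirichlet evidence and
    subjective-logic uncertainty (illustrative sketch only)."""
    # A common evidential activation clamps logits to be non-negative;
    # inputs pushed below zero yield exactly zero evidence (and zero
    # gradient), which is the limitation the talk addresses.
    evidence = np.maximum(logits, 0.0)
    alpha = evidence + 1.0           # Dirichlet concentration parameters
    strength = alpha.sum()           # total Dirichlet strength S
    k = logits.shape[0]              # number of classes K
    belief = evidence / strength     # per-class belief mass
    uncertainty = k / strength       # vacuity uncertainty: K / S
    probs = alpha / strength         # expected class probabilities
    return belief, uncertainty, probs

# A sample falling in a zero-evidence region: all logits are negative,
# so evidence vanishes everywhere, beliefs are zero, and uncertainty
# is maximal (1.0), with uniform expected probabilities.
belief, u, probs = evidential_prediction(np.array([-2.0, -1.0, -0.5]))
```

In this regime the ReLU also blocks gradients, so such samples contribute nothing to training; the regularizers discussed in the talk are designed to restore a learning signal there.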

BIO:  Deep Shankar Pandey is a sixth-year PhD candidate in Computing and Information Sciences at RIT and a member of the Machine Learning and Data Intensive Computing research lab led by his advisor, Dr. Qi Yu. His research focuses on building novel, trustworthy, uncertainty-aware deep learning models that can effectively learn from limited data. He has published papers in top machine learning conferences including CVPR, AAAI, and ICML. More details of his work are available on his website.

NOTE: To schedule an interpreter and/or other services for this event, please contact Susan A. Brightman.
Event Snapshot
When and Where
October 07, 2024
12:00 pm - 1:00 pm
Room/Location: CYB-1710/1720

Open to the Public



artificial intelligence
student experience