AIR Lab researchers combine AR, VR and AI to advance accessibility
Faculty and students develop tools to support job training processes for neurodivergent individuals.
Researchers in RIT’s Accessible and Immersive Realities Lab (AIR Lab) are integrating AR/VR technologies enhanced with large language models (LLMs) into job training for neurodivergent individuals. The projects include developing LLM-powered virtual avatars that support soft-skill and communication-skill training for autistic individuals, and leveraging LLMs and vision-language models (VLMs) to provide real-time, context-aware guidance to autistic individuals in customer service roles.
Led by Director Roshan Peiris, an assistant professor in the School of Information, the team includes faculty and student collaborators from Rochester Institute of Technology, University of Maine, University of Moratuwa, and Heritage Christian Services. Its core research focuses on “Immersive Accessibility,” and its research agenda covers two main themes: (1) making immersive technologies accessible and (2) using immersive technologies for accessibility applications. A third theme, understanding, augmenting, and enhancing the human, involves research into broader human-computer interaction topics.
“The goal of our research is to understand human abilities and develop innovative technologies and methods that enable, empower, and enhance human experiences and capabilities,” says Peiris.
Current projects include Virtual Reality for Job Training, Spatially Aware VR-LLM Agents, and VR-LLM Agents for Public Speaking. As part of this research, the AIR Lab is actively collaborating with several job training organizations for people with disabilities across New York State to deploy and evaluate their approaches in real-world settings.
You can learn more about the AIR Lab’s ongoing projects below:
Virtual Reality for Job Training
This project examines the use of virtual reality in job training for autistic people. We examine several aspects of job training, such as developing trainees’ soft skills and hard skills. Our efforts primarily focus on developing tools for job coaches and integrating these tools directly into job training curricula.
Spatially Aware VR-LLM Agents
In this project, we integrated a large language model into virtual reality as a VR chatbot. We explored several application scenarios, such as using the VR chatbot to train communication skills for job training, practicing presentation Q&A sessions, and exploring presentation body language with LLMs.
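To make the idea concrete, the sketch below shows one way a VR front end could hand a trainee’s transcribed speech, along with scene context from the virtual environment, to an LLM and receive the avatar’s next line. This is an illustrative sketch only, not the AIR Lab’s implementation: the OpenAI client, the model name, and the scene-context fields are all assumptions.

```python
# Minimal sketch of a spatially aware VR-LLM chatbot turn (assumptions: OpenAI
# Python client, placeholder model name, hypothetical scene-context string).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a virtual job-training coach in a VR customer-service scenario. "
    "Respond briefly and conversationally, and use the scene context when relevant."
)

def avatar_reply(history, user_utterance, scene_context):
    """Return the avatar's next line given conversation history and scene context."""
    messages = (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user",
            "content": f"Scene context: {scene_context}\nTrainee said: {user_utterance}"}]
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=messages,
    )
    return response.choices[0].message.content

# Example turn: a VR engine would supply real speech-to-text output and
# nearby-object data; here both are hard-coded for illustration.
reply = avatar_reply(
    history=[],
    user_utterance="Hi, I'd like to return this jacket.",
    scene_context="Trainee is at the returns counter; a receipt is on the desk.",
)
print(reply)
```

In practice, the VR application would append each exchange to the conversation history and pass the reply to a text-to-speech and avatar-animation pipeline, but those pieces are outside the scope of this sketch.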
VR-LLM Agents for Public Speaking
This work explores the potential of an LLM-driven chatbot training tool to mitigate public speaking anxiety (PSA), addressing the negative perceptions of public speaking that are common among students. While most existing studies focus on anxiety during scripted presentations, this formative diary study examines the effectiveness of two versions of a chatbot application, one in virtual reality (VR) and one on desktop, in reducing anxiety during question-and-answer (Q&A) sessions. Our study aims to assess the impact of the chatbot training tool on graduate students’ PSA, explore its practical applications, and compare the effectiveness of the two versions. The findings offer insights for developing more effective and sustainable public speaking training interventions.
https://www.ritairlab.org/projects-1/vr-llm-agents-for-public-speaking