Student researchers pave the way for human-centered AI advances in the College of Liberal Arts Computational Linguistics and Speech Processing Lab.
“Humans are great at pattern recognition. This project aims to tap into that tendency and improve the understanding of a given experiment’s results, which can lead to more advances in the future,” explains Isabelle Arthur (Computer Science BS, mathematics immersion, RIT ’26).
In a project guided by Ph.D. students Michael Peechatt and Rajesh Titung and two faculty mentors, Drs. Cecilia Alm and Reynold Bailey, Arthur is working alongside other student researchers this summer at the College of Liberal Arts Computational Linguistics and Speech Processing Lab, a lab dedicated to research involving text, speech, dialogue, and multimodal data.
Here Arthur and researchers Jordan Quinn (Computer Science BS with a mathematics minor, RIT ’25) and Gabriel Park (Data Science MS, RIT ’24) offer a glimpse of their work and its potential impact on data inspection and artificial intelligence.
What is the focus of your research?
Jordan: Creating an application to assist in visualizing multimodal data and its use in AI. This resource will allow scientists to inspect and visually interact with the data collected from multiple sessions and multiple datasets.
Isabelle: I contribute to a research project that aims to help other researchers more easily inspect and digest the data they have collected in unique experiments. With another undergraduate researcher, I am creating an online application that displays multiple time-aligned data streams and analytical graphs in one place. Presenting this information in a format that allows side-by-side analysis helps researchers recognize patterns in their data, and it can also support research communication. While this system is being built for one group of researchers in particular, we plan to make it generic enough for broader use.
Gabriel: How can we develop a mobile application that engages multiple language modalities through video and sound capture, to collect human feedback for model improvements? My current research focuses on developing a mobile application for human-centered interactive machine learning. The area of application is affective computing, combined with artificial intelligence focused on language and interactive communication. We seek to improve the way humans and computer systems partner to solve tasks.
What is innovative about this research?
Jordan: This project helps researchers visualize, analyze, and inspect highly multimodal data. We are also exploring how to design the visualization interface for high usability, and which data visualizations are most helpful.
Gabriel: We are utilizing mobile technology to engage a broad spectrum of users. The software application and front-end interface that I am working on aim to enable interaction between humans and AI models, toward making our models more comprehensive and accurate.
How do you see this research impacting the field?
Isabelle: Humans are great at pattern recognition, and this project aims to tap into that tendency. Putting all the data of an experiment in one place will enable researchers to visually inspect and identify otherwise hidden patterns in their data. This will improve the understanding of a given experiment’s results, which can lead to more advances in the future.
What types of students (majors, grad vs. undergrad, etc.) and in what ways are other students involved in this research?
Gabriel: Our research lab engages both undergraduate and graduate students from various disciplines. They bring unique skill sets in computing and the cognitive sciences, contributing to research across the lab.
As a graduate research student in the College of Liberal Arts CLaSP Lab, I’m focusing on AI, human-computer interaction (HCI), and affective computing. I’m excited about this research opportunity, which allows me to explore the interface of data science and human experiences. I aim to acquire research skills that prepare me to make positive contributions to society.
Isabelle: This research is guided by Ph.D. students Michael Peechatt and Rajesh Titung and two faculty mentors, Drs. Cecilia Alm and Reynold Bailey.
What gives you a competitive edge in this research area?
Gabriel: This research effort combines my interest in human-centered AI with the software engineering and data science skills that I am acquiring at RIT. My education and research experience at RIT are equipping me with new skills across these three areas, and my enthusiasm for new methods keeps me committed and up to date on advancements in the field.
What advice for success would you give to others considering research in this field?
Gabriel: Stay curious, constantly update your knowledge, and always be open to new learning opportunities. It is also important to remember that challenges are part of the process, so don’t get discouraged; just learn and move forward.
About the CLaSP Lab
The College of Liberal Arts Computational Linguistics and Speech Processing (CLaSP) Lab is dedicated to advancing applied and theoretical research involving text, speech, dialogue, and multimodal data.
With a focus on linguistic and multimodal sensing, CLaSP provides research opportunities for graduate (Ph.D. and MS) and undergraduate students, and hosts a regular student research discussant series. Lab researchers collaborate closely with other research groups on campus.
Recent graduates have gone on to jobs in artificial intelligence, software and user interface development, game development, human factors, and more for employers such as EyeGaze Inc, ISI, Amazon, Digital Ocean, Apple, SpaceX, Qualcomm, Cogent Labs (Japan), UL-Wiklund, Athena Health, Commonwealth Care Alliance, Nordstrom Technologies, Facebook, Kensho, Constant Contact, Knewton, Kodak Alaris, Microsoft, IBM, Thomson Reuters, Smartvid, Interactive Brokers, and Pronology.