Researching at the intersection of computing and accessibility
Ph.D. student Saad Hassan receives Duolingo grant for thesis on sign-language search systems
Saad Hassan believes that accessibility should be a primary focus of technological innovation, not an afterthought. He recently received a grant from language-learning company Duolingo to fund his doctoral thesis on look-up systems for unfamiliar signs in languages like American Sign Language. He has also researched live captioning, biases in natural language processing systems, language learning, and mental health.
“A lot of people are creating technology, and AI is also booming, but, at the same time, a lot of that I feel is happening without regard for individuals with disabilities,” Hassan said.
Hassan is a computing and information sciences Ph.D. student from Lahore, Pakistan, who works on campus at the Center for Accessibility and Inclusion Research (CAIR) and the Linguistic and Assistive Technologies Laboratory, and off campus as a research scientist intern for Meta’s Reality Labs.
Summarizing the look-up systems at the center of his doctoral research, Hassan explained that they are “basically advanced dictionary systems that allow people to either sign something in front of a webcam or select a span in a video that contains signing to look up unfamiliar signs.”
Hassan’s award is one of the Duolingo Dissertation Grants in Language Learning with Technology, which support master’s and doctoral degree candidates whose research centers on using technology to support language learning and teaching. In Duolingo’s blog post about the grant winners, Hassan explains that his research “examines the challenges learners of sign languages face when searching for an unfamiliar sign, since there is no standard writing system to facilitate searching.”
Matt Huenerfauth, dean of the Golisano College of Computing and Information Sciences, is Hassan’s doctoral advisor and has worked with him on much of his research.
“I was delighted, but not surprised, to see that Saad had been a recipient of this award, given the strength of his work, and I am excited for his bright future in the computing field,” he said.
Hassan’s work with Google and Meta has also emphasized accessibility. As a research intern with Google AI’s Perception Team in 2021, he worked on designing and evaluating sign language recognition technologies. In his current role with Meta’s Reality Labs, Hassan works with the audio team on captioning.
Hassan and his colleagues will be presenting research at the upcoming ASSETS 2022 conference in Athens, Greece, Oct. 23-26. Huenerfauth explains that ASSETS is “the top research conference in the field of computing accessibility.” Hassan will present the paper “Support in the Moment: Benefits and use of video-span selection and search for sign-language video comprehension among ASL learners,” and the poster “Understanding ASL Learners’ Preferences for a Sign Language Recording and Automatic Feedback System to Support Self-Study.”
“I think the sense of satisfaction that I get is from actually creating technologies that people around me can directly benefit from,” Hassan said. “I want to take into account people with all sorts of backgrounds and communities that were often ignored.”