Improving ASL communication

Matt Huenerfauth, right, professor of information sciences and technologies, is developing animations of American Sign Language. (Photo: Myers Creative Imaging)

One day, Siri might be able to respond to questions using sign language in the same way she now speaks using voice output.

That may happen through work by Matt Huenerfauth and his research team, which is developing animations of American Sign Language (ASL)—a language that requires precise control of hand and body movement as well as facial expressions.

“Those are larger, fundamental aspects of how you make things look natural,” said Huenerfauth, professor in the B. Thomas Golisano College of Computing and Information Sciences and co-lead of the human-centered AI pillar. “Technology for automatically producing animations of ASL could make it easier for companies to provide information in sign language on websites, since updating these animations may be easier than re-recording videos of human ASL signers.”

The machine-learning approach still requires human linguistic intuition to tell the new system which characteristics are important. For example, machine-learning algorithms are trained to predict where to pause—the spaces between words—so that each pause reflects the actual end of a sentence in a conversation.
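The article does not describe the team's actual models, but the idea of learning where to insert pauses can be illustrated with a toy sketch. The sign glosses, labels, and threshold below are all invented for illustration: from hand-labeled boundaries, the sketch estimates how often signers pause after a given sign and inserts pause markers when that estimate is high.

```python
from collections import Counter

# Toy training data (invented for illustration): for each boundary
# between two signs, the preceding gloss and whether signers paused there.
training = [
    ("BOOK", True), ("READ", False), ("FINISH", True),
    ("STORE", True), ("GO", False), ("FINISH", True),
    ("READ", False), ("BOOK", False),
]

pause_counts = Counter(gloss for gloss, paused in training if paused)
total_counts = Counter(gloss for gloss, _ in training)

def pause_probability(gloss):
    """Estimated probability that signers pause after this gloss."""
    if total_counts[gloss] == 0:
        return 0.0  # unseen gloss: assume no pause
    return pause_counts[gloss] / total_counts[gloss]

def insert_pauses(glosses, threshold=0.5):
    """Insert a pause marker after glosses that usually precede pauses."""
    out = []
    for gloss in glosses:
        out.append(gloss)
        if pause_probability(gloss) > threshold:
            out.append("<PAUSE>")
    return out

print(insert_pauses(["READ", "BOOK", "FINISH"]))
# → ['READ', 'BOOK', 'FINISH', '<PAUSE>']
```

A real system would use far richer features (timing, facial expression, syntax) and a trained classifier rather than raw frequencies, but the pipeline shape is the same: learn from expert-annotated recordings, then predict pause placement for new sentences.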

The research team has thousands of hours of recordings of native signers, analyzed by expert linguists, which together form a distinctive data set. The team applies different machine-learning models to this data to learn patterns in how humans move during sign language.

“There is a hesitancy among people who do natural language processing—also called computational linguistics—to apply their methods to sign language because it feels unfamiliar. We hope we are reducing a bit of this mystery and trying to create useful tools,” he said.

“A focus of our lab is to take an imperfect AI system and try to do something with it to help people now, even though we know it is not perfect; we wrap it up in some application that someone can derive a benefit from.”

Who is involved?

The Center for Human-Aware Artificial Intelligence (CHAI) was formed last year after more than 200 faculty, students and staff attended a retreat called Move 78 to talk about how RIT can further distinguish itself in new AI discoveries. Center leaders are:

Brain-inspired computing: Dhireesha Kudithipudi and Andreas Savakis, both professors of computer engineering;

Machine learning and perception: Christopher Kanan, assistant professor of imaging science and CHAI’s associate director, and Reynold Bailey, associate professor of computer science;

Automation and robotics: Ray Ptucha, assistant professor of computer engineering, and Ferat Sahin, professor of electrical engineering;

Human-centered AI: Cecilia Ovesdotter Alm, associate professor of computational linguistics and language science, and Matt Huenerfauth, professor of information sciences and technologies.
