Signing Together: Collaborative Learning Technologies to Reduce Language Deprivation for DHH Children, a Cognitive Science Speaker Series presentation

Event Image: Graphic promoting the Fall 2025 "Cognitive Science Speaker Series," with the title displayed in blue and white text on an orange banner over a light abstract background of hexagonal molecular patterns.

Speaker: Zhen Bai, Ph.D.

Title: Signing Together: Collaborative Learning Technologies to Reduce Language Deprivation for DHH Children

Short Bio: Dr. Zhen Bai is an Assistant Professor co-leading the ROCHCI research group in the Department of Computer Science at the University of Rochester. She received her Ph.D. in Computer Science from the University of Cambridge and was a postdoctoral fellow at CMU's HCII and LTI before joining Rochester. Dr. Bai's research focuses on creating embodied and intelligent user interfaces that foster socio-emotional learning (e.g., imagination, social understanding, curiosity, communication) and STEM learning (e.g., AI literacy, scientific inquiry) for children with diverse backgrounds (e.g., autism, deaf and hard of hearing). She is the recipient of an NSF CAREER Award, a Google Inclusion Research Award, and the Asaro Biggar Family Fellowship, and she is a co-PI of the NSF interdisciplinary graduate training program in Augmented and Virtual Reality.

Abstract: Parent-child interaction is irreplaceable for language acquisition and social bonding in early childhood. Over 90% of Deaf and Hard of Hearing (DHH) children in the US are born to hearing families, putting them at severe risk of language deprivation due to the lack of an immersive sign language environment. Such deprivation can have a lifelong impact on linguistic, cognitive, and socio-emotional growth. My work adopts a family-centered approach to designing and developing playful, collaborative American Sign Language (ASL) learning and communication technologies.

In this talk, I will share ongoing projects that explore the design space of Augmented and Virtual Reality (AR/VR) and AI-mediated assistance to help hearing parents produce ASL on the fly during face-to-face interaction with their DHH children and learn non-manual signs through collaborative gameplay. I would like to engage the audience in a discussion of future technologies that combine cognitive science theories of human learning and cognition with cultural capital from the Deaf community to better bridge learning and communication for hearing families with DHH children.

ASL-English interpreters have been requested. Light refreshments will be provided.


Contact
Frances Cooley
This event may be photographed/videoed by RIT. By attending, you grant Rochester Institute of Technology (“RIT”) permission to use photographs and/or audio-visual recordings in which you appear, now and in the future, for illustration, promotion, art, editorial, advertising, and trade, or any other purpose.
Event Snapshot
When and Where
September 12, 2025
11:00 am - 12:00 pm
Room/Location: WAL-3440
Who

Open to the Public

Interpreter Requested?

Yes

Topics
imaging science
research