Resources
Active Grants
Semantic-based auditory attention decoder using Large Language Model
- Funding: NIH (National Institutes of Health), Support for Research Excellence (SuRE) Program (R16), Award R16GM159795
- Award amount: $740,000; Person-months: 6
- Investigators: Hwan Shim (Principal Investigator), Sungyoung Kim (Co-Investigator)
Facilities
Immersive Audio Lab
The primary purpose of this laboratory is to provide a customizable space for developing and implementing innovative auditory and crossmodal participant experiments. Specific research areas include advanced applied psychoacoustics experiments for open-space extended reality (XR), audio-visual congruence for immersive telecommunication applications, spatial masking in speech-in-noise understanding, auditory training and critical listening, and more. The laboratory is also used to teach the latest spatial and XR audio technologies, including Higher Order Ambisonics (HoA), Dolby Atmos, Auro-3D, Yamaha Active Field Control (AFC), and the NHK 22.2 system.
- 22.2-channel surround reproduction system capable of reproducing Higher Order Ambisonics (HoA), Dolby Atmos, Auro-3D, Yamaha Active Field Control (AFC), and the NHK 22.2 standard
- 22 loudspeakers (Genelec 8020B) with adaptors and stands
- 2 JBL M2 loudspeakers and Crown IT4X3500HDB amplifier
- 2 JBL LSR310S Active Subwoofers
- 250” wrap-around screen for immersive visual rendering with a 120° horizontal field-of-view (FOV) coverage
- Brüel & Kjær Head-And-Torso-Simulator (HATS dummy-head binaural microphones)
- Yamaha Active Field Control (AFC) 4 System
- Audio interface (RME Fireface UFX)
- Audio digital signal processing (DSP) software (Cycling74 Max)
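To illustrate the kind of spatial rendering this loudspeaker array supports, the sketch below shows first-order Ambisonic encoding and a basic sampling (projection) decoder for an arbitrary loudspeaker layout. This is a minimal textbook example, not the lab's actual rendering chain (which runs higher-order decoders and object-based formats in dedicated software); the ACN channel ordering and SN3D normalization are assumptions chosen for simplicity.

```python
import math

def encode_fo(az, el):
    """First-order Ambisonic encoding of a plane wave from (az, el) radians.
    Channels in ACN order with SN3D normalization: W, Y, Z, X."""
    return [
        1.0,                          # W: omnidirectional component
        math.sin(az) * math.cos(el),  # Y: left-right
        math.sin(el),                 # Z: up-down
        math.cos(az) * math.cos(el),  # X: front-back
    ]

def sampling_decoder(b_format, speaker_dirs):
    """Basic sampling decoder: each loudspeaker gain is the inner product of
    the B-format signal with the encoding of that loudspeaker's direction,
    scaled by 1/N. speaker_dirs is a list of (az, el) tuples in radians."""
    n = len(speaker_dirs)
    gains = []
    for az, el in speaker_dirs:
        d = encode_fo(az, el)
        gains.append(sum(bi * di for bi, di in zip(b_format, d)) / n)
    return gains

# Usage: a horizontal ring of 8 loudspeakers at 45-degree spacing.
ring = [(math.radians(a), 0.0) for a in range(0, 360, 45)]
# A source encoded at 45 degrees azimuth decodes with its largest gain
# at the loudspeaker closest to that direction.
gains = sampling_decoder(encode_fo(math.radians(45), 0.0), ring)
```

For a horizontal source, each gain reduces to (1 + cos(Δaz)) / N, where Δaz is the angle between the source and the loudspeaker, so energy falls off smoothly away from the source direction; production decoders for layouts like the 22-loudspeaker array above instead use optimized (e.g. mode-matching or energy-preserving) decoding matrices.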