Psychoacoustics, Physiology of Hearing, and Auditory Modelling, from the Ear to the Brain
19–24 June 2022, Lyon (France)
Assessing spatial listening skills using virtual acoustics
Marina Salorio-Corbetto 1, 2, *, Ben Williges 1, Wiebke Lamping 1, Lorenzo Picinali 2, Deborah Vickers 1
1 : University of Cambridge, Department of Clinical Neurosciences, SOUND Lab, Cambridge Hearing Group
2 : Imperial College London, Audio Experience Design Group, Dyson School of Design Engineering
* : Corresponding author

Aim: Although there is a need for a better understanding of the functional limitations that users of bilateral cochlear implants face in daily communication, clinical assessment of spatial hearing has been limited by the lack of suitable equipment and of large clinical spaces. We have implemented a version of the Spatial Speech in Noise test (SSiN), which simultaneously assesses speech discrimination and relative localisation in the presence of multi-talker babble, on a virtual-acoustics platform that delivers complex listening environments via headphones. The aims of this work were to determine: 1) whether the patterns of responses for speech discrimination and relative localisation obtained with virtual audio resemble those obtained with loudspeaker setups; 2) how the location of the multi-talker babble affects the patterns of responses.

Methods: The Virtual Acoustics (VA) version of the Spatial Speech in Noise test, the SSiN-VA, was implemented using the 3D Tune-In Toolkit. Seven loudspeaker locations, from –90° to 90° azimuth at 30° intervals, were simulated. In Experiment 1, twelve normal-hearing participants were tested. Their response patterns were characterised by fitting binary logistic generalised mixed-effects models, estimated by maximum likelihood, separately for relative localisation and speech discrimination. Additionally, the SSiN-VA outcomes were compared with an existing dataset of responses from normal-hearing participants tested with a loudspeaker array in an anechoic chamber. In Experiment 2 (ongoing), twelve normal-hearing participants and twelve cochlear-implant users are carrying out the task with two babble-location configurations: symmetrical (babble at –60°, –30°, 30°, and 60°) and asymmetrical (babble at –60° and –30°, or at 30° and 60°). In addition to comparing measured performance across these conditions, modelling will be used to predict the differences between them.
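Illustrative sketch (not part of the original study description): the Python fragment below shows one way a mono word token could be rendered at the seven simulated loudspeaker azimuths over headphones. The study used the 3D Tune-In Toolkit for binaural rendering; because that toolkit's API is not reproduced here, the sketch substitutes crude interaural-time- and level-difference impulse responses for measured HRIRs. The sample rate, head radius, and maximum level difference are illustrative assumptions, not parameters of the SSiN-VA.

# Minimal sketch of headphone-based virtual-loudspeaker rendering.
# The study used the 3D Tune-In Toolkit; this stand-alone example instead
# builds crude ITD/ILD impulse-response pairs so it runs without HRTF data.
import numpy as np
from scipy.signal import fftconvolve

FS = 44100                        # sample rate in Hz (assumed)
AZIMUTHS = range(-90, 91, 30)     # seven virtual loudspeakers, as in the SSiN-VA

def toy_hrir_pair(azimuth_deg, fs=FS, head_radius=0.0875, c=343.0):
    """Very rough stand-in for an HRIR pair: a pure interaural delay plus a
    broadband level difference. Real rendering would use measured HRIRs."""
    az = np.deg2rad(azimuth_deg)
    itd = head_radius / c * (az + np.sin(az))       # Woodworth ITD approximation
    n = int(round(abs(itd) * fs))                   # interaural delay in samples
    ild_gain = 10 ** (-6 * abs(np.sin(az)) / 20)    # up to ~6 dB attenuation (assumed)
    near = np.zeros(n + 1)
    near[0] = 1.0                                   # impulse at the nearer ear
    far = np.zeros(n + 1)
    far[n] = ild_gain                               # delayed, attenuated impulse at the far ear
    # Positive azimuth = source on the right, so the right ear is the near ear.
    return (far, near) if azimuth_deg > 0 else (near, far)   # (left, right)

def spatialise(mono, azimuth_deg):
    """Render a mono stimulus at one virtual loudspeaker azimuth."""
    h_l, h_r = toy_hrir_pair(azimuth_deg)
    return np.stack([fftconvolve(mono, h_l), fftconvolve(mono, h_r)], axis=1)

if __name__ == "__main__":
    word = np.random.randn(FS)                      # 1-s noise token as a stand-in stimulus
    for az in AZIMUTHS:
        print(f"{az:+4d} deg -> stereo buffer {spatialise(word, az).shape}")

In the SSiN-VA itself the rendering is done with measured HRTFs via the 3D Tune-In Toolkit; the toy filters above merely illustrate the virtual-loudspeaker concept.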

Results: Results are available for Experiment 1. The patterns of responses as a function of azimuth were similar to those obtained with loudspeaker setups, for both relative localisation and speech discrimination. Relative-localisation performance was significantly better at the highest SNR tested than at the lowest, and shifts towards the right were associated with a higher likelihood of a correct response. For word discrimination, performance improved at higher SNRs, and there was an interaction between SNR and word group (the type of discrimination contrast assessed).

Conclusion: These outcomes support the use of virtual audio as an alternative to loudspeaker setups for the clinical evaluation of spatial listening skills. 

Acknowledgements: MSC was funded by Imperial Confidence in Concept (ICiC). MSC, BW, WL, and DAV are funded by the Medical Research Council (MRC) UK, Grant code MR/S002537/1. MSC and DAV are funded by a Programme Grant for Applied Research (NIHR201608). The views expressed are those of the authors and not necessarily those of the NIHR or the Department of Health and Social Care. Lorenzo Picinali and Deborah Vickers are co-senior authors.

