Adam Hill presenting his research on safe listening at live music events
Adam started the talk with an update on his work surrounding safe listening at live music events. After providing some background on his contributions with the Audio Engineering Society and the World Health Organization, he detailed a recently funded pilot study conducted in collaboration with the University of Nottingham and Nottingham Trent University. The study aimed to develop an adequately accurate and practical method for monitoring sound exposure at small and medium-sized music venues.
The central challenge was separating the sound exposure due to the sound system from that due to the musical instruments and monitors on stage. This is particularly difficult because the stage configuration typically changes between musical acts, making system calibration nearly impossible. The proposed system requires a minimum of two sound level meters: one above the stage and one above the audience, along with calibration measurements to define the transfer functions between points across the audience and stage and the monitoring locations on the ceiling.
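The two-meter idea can be sketched as a small linear system: if calibration gives the energy transfer gains from each source group (stage vs. sound system) to each ceiling meter, the combined readings during a show can be unpicked by inverting that matrix. A minimal illustration with made-up gain values, not numbers from the study:

```python
import numpy as np

# Hypothetical calibration matrix (made-up energy transfer gains, not
# values from the study): rows = ceiling meters (above stage, above
# audience), columns = source groups (stage, sound system).
G = np.array([[0.80, 0.30],
              [0.25, 0.70]])

def separate_sources(meter_levels_db):
    """Estimate per-source-group levels (dB) from combined ceiling readings."""
    measured_energy = 10.0 ** (np.asarray(meter_levels_db, dtype=float) / 10.0)
    source_energy = np.linalg.solve(G, measured_energy)  # invert the calibration
    return 10.0 * np.log10(source_energy)

# During the show, each meter reads the sum of both contributions:
stage_db, system_db = separate_sources([100.0, 98.0])
```

In practice the same calibration idea would then map the estimated source energies back out to positions across the audience and stage via their own measured transfer functions.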
Birds-eye view of the venue configuration for the pilot study
The results indicate that such a system is largely successful in estimating the stage and sound system contributions to sound exposure, with a few notable limitations relating to the unknowns surrounding the stage configuration. Analysis of data from the example event indicates that all audience members and stage personnel received more than the recommended daily noise dose (85 dBA over 8 hours), with audience members at the front of the stage receiving close to 900% of the recommended dose.
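The recommended daily dose quoted here follows the usual 85 dBA / 8 hour criterion with a 3 dB exchange rate; a dose in percent can be sketched as follows (the example level and duration are illustrative assumptions, not the event's measured values):

```python
def noise_dose_percent(level_dba, hours):
    """Daily noise dose as % of the recommended limit, using the 85 dBA /
    8 hour criterion with a 3 dB exchange rate (NIOSH-style dosimetry)."""
    allowed_hours = 8.0 * 2.0 ** ((85.0 - level_dba) / 3.0)
    return 100.0 * hours / allowed_hours

baseline = noise_dose_percent(85.0, 8.0)  # exactly 100% by definition
# Illustrative only: roughly 97.5 dBA sustained over a 4-hour show
# already approaches 900% of the recommended dose.
front_of_stage = noise_dose_percent(97.5, 4.0)
```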
Estimated sound levels due to the sound system and the stage sources for two of the four bands measured
Estimated noise dose across the venue for the duration of the 4-band musical showcase
Adam highlighted how moderate reductions in exposure (3, 6, or 9 dB) would likely bring most attendees and musicians to within safe listening levels. This could be achieved by addressing room acoustics, sound system configuration, stage levels, provision of hearing protection, and (as a last resort) turning down the sound system. The full AES paper detailing this research can be found here.
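Because of the 3 dB exchange rate, each 3 dB of attenuation halves the dose, which is why such modest reductions go a long way. A quick illustration, starting from the roughly 900% front-of-stage dose mentioned above:

```python
def dose_after_attenuation(dose_percent, reduction_db, exchange_rate_db=3.0):
    """Scale a noise dose after a broadband level reduction: with a 3 dB
    exchange rate, every 3 dB of attenuation halves the dose."""
    return dose_percent * 2.0 ** (-reduction_db / exchange_rate_db)

# The 3, 6 and 9 dB reductions considered in the talk:
doses = [dose_after_attenuation(900.0, cut) for cut in (3.0, 6.0, 9.0)]
# -> [450.0, 225.0, 112.5], so a 9 dB cut lands close to the safe limit
```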
Potential noise dose reduction due to small to moderate sound level attenuation during the event
Bruce Wiggins presenting his research on very high-order ambisonics
Bruce followed Adam’s talk with an exploration of his recent work on auralisation using very high-order ambisonics, research which he conducted with EARLab’s own Mark Dring. The talk began with an overview of the situation: headphone-based auralisations without head movement are typically uninteresting, inaccurate, and lacking in immersion, whereas incorporating head tracking provides a more natural and immersive experience. The question, then, is how far such auralisation can be pushed, and at what point increased accuracy becomes inaudible.
An area that is often overlooked is maintaining alignment between the direct sound and the reverberant field in a virtual space. Preserving accurate binaural cues such as interaural level difference and interaural time difference is equally important for an accurate rendering of an aural soundscape. Bruce demonstrated how a given ambisonic order naturally limits this accuracy above a certain frequency.
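A common rule of thumb (an assumption here, not a figure from the talk) is that order-M ambisonics reconstructs the sound field accurately at the listener's head only up to kr ≈ M, where k is the wavenumber and r the head radius, giving an upper frequency of f = M·c/(2πr):

```python
import math

def max_accurate_freq_hz(order, head_radius_m=0.0875, c_m_s=343.0):
    """Upper frequency for accurate reconstruction at the listener's head,
    from the kr <= M rule of thumb: f = M * c / (2 * pi * r).
    The head radius and the criterion itself are assumptions, not talk data."""
    return order * c_m_s / (2.0 * math.pi * head_radius_m)

# The three orders compared in the ILD-error figure:
limits = {M: max_accurate_freq_hz(M) for M in (3, 8, 17)}
# roughly 1.9 kHz, 5.0 kHz and 10.6 kHz respectively
```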
Interaural level difference errors vs. frequency for 3rd, 8th and 17th order ambisonics
This was demonstrated by examining HRTFs at various listening orientations and ambisonic orders, looking at both the time and frequency domains. Subjective evaluations, led by Mark, have shown that above around 7th to 9th order ambisonics, there is no further noticeable improvement to the spatialised sound field.
Ambisonic order detection limit vs. source signal
Lastly, Bruce detailed the development of WHAM: Webcam Head-tracked Ambisonics, which grew out of necessity during the pandemic, when he and Mark had to carry on their research without bringing participants into the lab. The system combines readily available software packages/plugins with bespoke code, using people’s webcams to track head movement and deliver spatial audio up to 31st-order ambisonics. Findings using this system indicate that lower-order ambisonics can suffice for simple symmetrical spaces, but higher orders are needed for more realistic, complex spaces.
Thank you to the ISVR for hosting Adam and Bruce. It was great to reconnect and learn about the exciting projects currently underway down in Southampton!