Do join us for our next instalment of the HearMus seminar series.
HearMus 8. Wednesday 8th October 14:00-16:00 BST (15:00-17:00 CEST)
Speakers:
Vinzenz Schönfelder (Mimi Hearing Technologies GmbH)
Dr Pauline Mouawad (UCL Ear Institute)
Zoom details:
Topic: HearMus Seminar 8 (Wed 8th Oct 3-5pm CEST)
Time: Oct 8, 2025 02:00 PM London
Join Zoom Meeting
https://universityofleeds.zoom.us/j/82380577236
Meeting ID: 823 8057 7236
Hands-on experiences with measuring subjective benefit of music processing strategies for HI listeners
Vinzenz Schönfelder, Mimi Hearing Technologies GmbH
Reliably quantifying the benefit of signal processing strategies is an essential part of developing products for hearing-impaired listeners. However, unlike in the speech-processing context, there is no standard method or metric for determining the extent to which the enhancement of music signals improves the individual listening experience. Beyond the specific differences between speech and music stimuli, this is primarily due to differences in the purpose of speech vs. music processing: preference for one music signal over another is highly subjective and individual, and thus inherently hard to quantify.
In this presentation, we will provide an overview of the specific challenges and potential solutions in measuring subjective preference for processing strategies aimed at music signals. We will present and discuss study designs and data-analysis methods that we have used to quantify processing benefit, and share some study results along with our hands-on experience in the general context of music processing for hearing-impaired listeners.
Musical complexity governs a tradeoff between reliability and dimensionality in the neural code
Dr Pauline Mouawad
Previous studies have explored neural responses to simple musical sounds, but the neural coding of complex music remains unexplored. We addressed this gap by analyzing multi-unit activity (MUA) recorded from the inferior colliculus (IC) of normal-hearing (NH) and hearing-impaired (HI) gerbils in response to a range of music types at multiple sound levels. The music types included individual stems (vocals, drums, bass, and other) as well as mixtures in which the stems were combined.
Using coherence analysis, we assessed how reliably music is encoded in the IC across repeated presentations and the degree to which individual stems are distorted when presented in a mixture. To explore neural activity patterns at the network level, we used PCA to identify the signal manifold, the subspace where reliable musical information is embedded. To model neural transformations, we developed a deep neural network (DNN) to generate MUA from sound. To assess the impact of hearing loss, we compared results for NH and HI at equal sound and sensation levels.
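The two quantities at the heart of this analysis, trial-to-trial reliability and the dimensionality of a PCA-derived signal manifold, can be illustrated with a minimal sketch. This is not the authors' code: the simulated data, the correlation-based reliability measure, and the 90%-variance dimensionality threshold are all assumptions chosen for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated multi-unit activity (hypothetical shapes): a shared stimulus-locked
# "signal" plus independent per-trial noise, for n_trials repeats of one stimulus.
n_trials, n_units, n_time = 10, 20, 500
signal = rng.standard_normal((n_units, n_time))
responses = signal[None] + 0.5 * rng.standard_normal((n_trials, n_units, n_time))

def trial_reliability(x):
    """Mean correlation between each trial and the average of the other trials."""
    r = []
    for i in range(x.shape[0]):
        others = np.delete(x, i, axis=0).mean(axis=0)
        r.append(np.corrcoef(x[i].ravel(), others.ravel())[0, 1])
    return float(np.mean(r))

def manifold_dim(x, frac=0.9):
    """Number of principal components explaining `frac` of the variance
    of the trial-averaged response (an assumed dimensionality criterion)."""
    avg = x.mean(axis=0)                         # units x time
    avg = avg - avg.mean(axis=1, keepdims=True)  # center each unit over time
    cov = avg @ avg.T / avg.shape[1]             # units x units covariance
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
    cum = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cum, frac) + 1)

print(trial_reliability(responses))  # high when per-trial noise is low
print(manifold_dim(responses))
```

Increasing the noise amplitude in the simulation lowers the reliability score, mirroring the abstract's observation that more complex (less repeatable) responses are less reliably encoded.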
We identified strong nonlinear interactions between stems, affecting both the reliability and geometry of neural coding. The reliability of responses and the dimensionality of the signal manifold varied widely across music types, with reliability decreasing and dimensionality increasing with increasing musical complexity. The leading modes in the signal manifold were reliable and shared across all music types, but as musical complexity increased, new neural modes emerged, though these were increasingly unreliable. Our DNN successfully synthesized MUA from music with high fidelity. After hearing loss, neural coding was strongly distorted at equal sound level but these distortions were largely corrected at equal sensation level.
Music processing in the early auditory pathway involves nonlinear interactions that shape the neural representation in complex ways. The signal manifold contains a fixed set of leading modes that are invariant across music types. As music becomes more complex the manifold is not reconfigured; instead, new, less reliable modes are added. These new modes reflect a fundamental trade-off between fidelity and complexity in the neural code. The fact that suitable amplification restores near-normal neural coding suggests that mild-to-moderate hearing loss primarily affects audibility rather than the brainstem’s capacity to process music.