‘Hearing Impairment and the Enjoyment and Performance of Music’

Conference at Institute of Acoustics, Kingston University, 9 July 2015

Only six months into our project, we were pleased to be invited to talk at an Institute of Acoustics (IoA) conference on the topic of hearing aids and music. Delegates included audiologists, music therapists, technical consultants, acousticians, psychologists and, of course, hearing aid users themselves.

Opening the conference was Mike Wright, Chair of the Musical Acoustics Group at the IoA, who began by reminding us that there is a longstanding need to understand music listening for people with hearing impairments. He cited a famous quote by the deaf percussionist, Evelyn Glennie, about the nature of our senses:

“…in the Italian language… the verb ‘sentire’ means to hear and the same verb in the reflexive form ‘sentirsi’ means to feel. Deafness does not mean that you can’t hear, only that there is something wrong with the ears. Even someone who is totally deaf can still hear/feel sounds”. (Glennie, 2010)

In the first talk, Graham Frost, an audiologist and technical consultant, introduced the principles of how deafness affects music perception. Deafness affects the perception of intensity, or ‘loudness’, but it also affects how we perceive frequency (‘pitch’) and temporal aspects such as rhythm and timing. For example, we recognise different instruments because they have different harmonic profiles and onset rise times. Music can also cause people with hearing impairments to experience tinnitus (ringing or buzzing in the ears), hyperacusis (extreme sensitivity to sounds) or diplacusis (hearing different pitches or timings in each ear). Graham argued that standard audiometry only measures thresholds of sound intensity, so it cannot by itself predict the effects of deafness on music perception. Every hearing loss is unique and can only be partially compensated for.

Next, acoustician Peter Mapp (Peter Mapp Associates) talked about assistive listening devices (ALDs) for music and speech. At home, listening to the TV or radio, many people can simply turn up the volume. In live settings, however, this is not possible, and many people experience problems with reverberation and background noise. The good news is that many venues use T-loop, infrared and newer Wi-Fi technology to feed sound directly to the hearing aid. The bad news is that not all venues use the best microphone technology, and this can affect the intelligibility of both speech and music. Manufacturers may also be wary of Noise Regulations, as they do not want to be sued for causing hearing damage through over-amplification!

Acoustician Carl Hopkins (University of Liverpool) then talked about a project that aimed to help musicians with hearing impairments access music by feeling vibrations. Carl showed that our sense of touch is far more limited than our hearing. For this reason, the team identified the best pitch range for feeling vibrations on the skin of the hands and feet, and also showed that hearing people are no less sensitive to vibrations than deaf people. Perhaps vibrations could be used to help all musicians in group performance? The researchers demonstrated the musical power of vibrations by having musicians play the Beatles song ‘Day Tripper’ in separate, acoustically isolated rooms where, instead of hearing each other, they could feel the vibrations of each other’s instruments. Click here to see the video.

After lunch, Music Therapist and Educator, Christine Rocca (Nordoff Robbins / Mary Hare Schools), presented a number of case studies from her work with children with cochlear implants (CIs), some of whom also wear hearing aids. The children at Mary Hare explore pitch glides and interval imitation in the context of familiar songs like ‘Humpty Dumpty’, often working on major and minor intervals. The therapists and teachers use both recorded and live accompaniments, which helps the children pick out their own parts. Christine highlighted that music is ‘multi-sensory’ and that even very young babies can learn social ‘turn-taking’ and imitation skills by playing musical games.

On a similar theme, Janet McKenzie, a Speech and Language Therapist (SLT) at the Cambridge Hearing Implant Centre, spoke about musical development in children and adults with CIs. For CI users, music is often a positive and unexpected outcome: people who had never previously experienced music are suddenly able to access musical rhythms and melodic shapes. In fact, due to changes in candidacy criteria, some people are now being implanted specifically so that they can access music and environmental sounds.

Stephen Dance from London South Bank University shared his research on the hearing acuity of music students at the Royal Academy of Music. So far, 2,576 students have completed audiometric tests, and the team have identified patterns of hearing loss attributable to noise-induced hearing loss (NIHL) in musical settings. The findings showed that players of certain instruments, such as the organ, percussion and brass, are most at risk of hearing loss. There were also some lateralised effects: violinists’ and horn players’ left ears are affected more than their right, and piano accompanists have worse hearing in the right ear, potentially as a result of working with singers. For musicians, the ‘notch’ in the audiogram resulting from music-induced hearing loss seems to sit at 6 kHz rather than the more usual 4 kHz, although this could be consistent with different kinds of hearing loss.

Finally, we presented initial findings from our patient survey of two UK audiology clinics, one NHS and one private. The short questionnaire asked hearing aid (HA) users about their music listening experiences, the effects on their quality of life, and the extent to which they had discussed music listening with their audiologist. So far, the results show that HA users frequently experience problems with music listening, and almost half of the sample reported that this negatively affects their quality of life. The most common problems reported were a lack of fidelity, difficulty hearing the words in songs, and difficulty hearing at live music performances. The data also suggested that most participants had never talked with their audiologist about music listening, and for those who had, the outcomes had rarely been successful.

The closing discussion raised many issues. Hearing aid manufacturers have responded to demand from users to make HAs small and discreet. This has meant that batteries are also smaller and less powerful (< 3 volts), while at the same time the digital signal processing inside the aids, optimised for speech amplification, has become more and more complex. Multi-channel compression requires a lot of processing power, and therefore battery power. Perhaps simpler processing would be better for music? One idea would be to set up a person’s HA for each instrument, or for the type of music they listen to most. But, as Evelyn Glennie once told me, every acoustical situation is different and even the same instrument never sounds the same.
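For readers curious why multi-channel compression is so hungry for processing (and therefore battery) power, here is a minimal sketch in Python, using NumPy and SciPy, of what such a compressor does: the signal is split into frequency bands, each band gets its own level detector and gain calculation, and the bands are summed again, so the work of a simple volume control is repeated for every channel. This is not any manufacturer’s actual algorithm; the band edges, threshold, ratio and time constants are illustrative assumptions only.

```python
# Minimal multi-band dynamic range compression sketch (illustrative only).
import numpy as np
from scipy import signal

FS = 16_000  # sample rate in Hz (assumed)

# Illustrative band edges (Hz) and compression settings, not real HA values.
BANDS = [(125, 500), (500, 2000), (2000, 6000)]
THRESHOLD_DB = -40.0   # level above which gain is reduced
RATIO = 3.0            # 3:1 compression above the threshold

def envelope_db(x, fs, attack_ms=5.0, release_ms=50.0):
    """One-pole level detector: fast attack, slow release, output in dB."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = np.zeros_like(x)
    level = 0.0
    for i, v in enumerate(np.abs(x)):
        a = a_att if v > level else a_rel
        level = a * level + (1.0 - a) * v
        env[i] = level
    return 20.0 * np.log10(np.maximum(env, 1e-9))

def compress_band(x, fs, lo, hi):
    """Band-pass one channel, then reduce gain when its level exceeds the threshold."""
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    band = signal.sosfilt(sos, x)
    over = np.maximum(envelope_db(band, fs) - THRESHOLD_DB, 0.0)
    gain_db = -over * (1.0 - 1.0 / RATIO)   # static compression curve
    return band * 10.0 ** (gain_db / 20.0)

def multiband_compress(x, fs=FS):
    """Every band repeats the filter/detect/gain work, then the bands are summed."""
    return sum(compress_band(x, fs, lo, hi) for lo, hi in BANDS)

if __name__ == "__main__":
    t = np.arange(FS) / FS
    tone = 0.5 * np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone
    out = multiband_compress(tone)
    print(out.shape)
```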

The group also discussed the problem of uptake among hearing aid users. In the UK alone, 6 million people would benefit from hearing aids, but only 1.4 million wear them regularly. Many people find it difficult to take the time to adjust to the new sound world provided by their new HAs, even though audiologists know that doing so is beneficial in the longer term. Perhaps if HAs were designed to amplify music as well as speech, more people would wear them?

If you have any thoughts or questions, or would like to share your experiences of listening to music with hearing aids, please email us at musicandhearingaids@leeds.ac.uk or join our Discussion Forum.

RF