An organist’s perspective

In this blog post, Brian Henderson describes the trajectory of his hearing loss and how this has affected his experiences of playing the organ over time.

“I am a 70 year old church organist, now with moderate hearing loss in both ears.  I have played the organ from the age of 18.  Other relevant personal information includes a career of Physics teaching up to A level and a 10-year spell of helping a local organ builder after retiring from full time teaching.

“My first experience of hearing loss was sudden and traumatic.  I was making a mobile phone call in 2011 in a busy shopping street and put the phone firmly to my ear to hear it answered.  At that exact instant someone called me and the phone rang while against my ear.  My head seemed to explode.  Luckily I was with family members who helped me to a seat and half an hour later I felt able to move on, but with the realisation that hearing in my left ear was damaged.  A visit to my GP the next day brought the news that my hearing might or might not recover.  It didn’t.  Hospital ENT consultation and an MRI scan followed but produced no answers, and an NHS hearing aid was soon supplied.  The loss was worst from 1kHz upwards, so consonants were missing from speech and the organ upperwork (the higher pitched stops) lost from my left ear.  But I still had a good right ear and I thought life was still mostly fine in spite of the chance in a million that had deafened me.

“I used the aid for conversation, but I took it out for playing as it distorted the organ sounds badly.  I read that one of the consequences of sudden hearing loss could be hyperacusis, increased sensitivity to some sounds.  This explained why organ notes in the tenor octave were now sounding thick and unpleasant, with tenor A and B booming out from what had been a well-regulated quiet flute stop.  After a year or so I realised I was not hearing this.  This was my first taste of the ability of the brain to gradually improve an initially troubling situation but little did I know that I would come to rely on this property of the brain to help me a few years on down the line.

“In mid 2015 I became aware that my right ear was not hearing as well as before.  It showed up most on the organ, where I could no longer hear the highest notes of a 2 foot stop (a stop which plays 2 octaves above piano pitch), and I realised that sounds around 6 kHz and above were gone.  There was also a strange blocked feeling in my right ear, with intermittent popping, and I was aware that this right deafness did not feel the same as the left deafness.  A GP investigation started; I tried a nasal spray and inhalation.  I used olive oil and later sodium bicarbonate solution, but the blocked feeling persisted even though the ear drum was visible.  I had an audiogram and a right aid was supplied for what was then described as slight deafness.

“The GP investigation into the blocked feeling continued (now 6 months after it started) and in February 2016 microsuction was performed to remove the small amount of wax that was visible.  Initially all seemed well, but 3 hours later I realised my right hearing had gone the same way as my left.  Organ experimentation showed a fall-off at 2 kHz, not quite as bad as the left but bad enough to make the organ sound dreadful.  The hearing loss was now described as moderate in both ears and I found conversation difficult and TV listening often unintelligible.  My life seemed to collapse around me.  To lose the left hearing had been an accident, but the right deafness seemed the direct result of a GP procedure.  I felt bitter and defeated.  And my greatest relaxation and my defining role – as church organist – was lost.

“There were two separate but overlapping strands to my life with hearing loss.  One was searching for advice about hearing loss and music.  The other was an NHS investigation into my sudden right hearing loss.  This investigation took the form of two hospital ENT consultations, several audiograms and an MRI scan.  The noisy MRI machine accentuated the hyperacusis now present in the right ear but revealed no reasons for my problems.  The three audiograms were wildly inconsistent, one even showing normal hearing in the right ear, possibly because my tinnitus and hyperacusis were masking the true situation.  This was a time of fear and frustration until in May I was finally passed on to a wonderful senior audiologist who listened intently to my descriptions.  I could tell from the way she conducted my hearing test  (with quick repetitions and surprising frequency jumps) that she was using her considerable experience to “catch me out”.  I was delighted that she produced an audiogram that matched the view I had gleaned from listening note by note on the organ.  The aids were reprogrammed and at least speech in a quiet space became easily intelligible.  Furthermore the senior audiologist understood the importance of music in my life and ordered for me a pair of Phonak Nathos S+ MW aids which she said had better musical capabilities than the standard NHS aids.

“I felt I was making real progress now with audiology, but frustration soon set in with delays in the delivery of the aids, the substitution by management of a locum at one appointment resulting in a mis-setting of the aids, and repeated difficulties in ensuring that future appointments were made with the senior audiologist who had rescued me (and who had actually asked that all appointments be made with her).  There is a real personal difficulty here – does one complain and risk alienating the organisation that is trying to help?  In the end I have been quietly persistent and have eventually seen the person I need, but the missed opportunities and time lost have led to a roller coaster of hopes and disappointments lasting over 6 months.

“On the musical side things have at least been more under my control.  When the right hearing loss occurred the unpleasant sound of the organ made it impossible to continue playing for services.  With hearing aids (even the later Phonak pair described above) the distortion was more than I could bear, and I tried playing with no aids.  Quiet music on 8’ flutes [1] was similar to what I remembered.  Louder music on 8’ and 4’ diapasons [2] sounded thick and muddy, and adding further upperwork (2’ stops and mixtures) was simply frustrating because there was no change.  All the majesty of the brighter sounds was lost, but I persisted in playing to myself frequently and for short times using only the quiet foundation stops.  Over a period of time my musical memory and the adaptability of the brain enabled me to hear (or imagine) brighter sounds as the higher pitched stops were added.  Separately these stops were almost inaudible, and different notes had no discernible pitch difference.  But in the chorus there was an unmistakable element of brightness that enabled me to get some enjoyment from my playing and even contemplate returning to service playing a month or so after the sudden loss.  At this time I was still taking my aids out as I approached the organ, so my initial service playing showed up the inevitable problem – I did not know what was going on in the service.  I sometimes only knew when to play a hymn after a gesture from my wife in the front row of the congregation!

“This situation could not continue.  I persisted without aids but with the help of a small loudspeaker placed as close to my ear as possible.  It was driven from the church microphone system using its own amplifier with bass turned down and treble up to max.  In this way I played for some services although most were covered by pianists in the church doing a good job on the organ in spite of their initial fears.

“As time went on I got more used to the various programmes in the aids, and began to play to myself with aids in, music programme selected, volume set almost to lowest.  The distortions were many and varied.  All sounds above 500 Hz had a strange edge to them.  Soft flutes (with an almost pure sine waveform) had a curious repetitive hiccup caused, I believe, by the digital signal processing.  Rapid passages of music did not sound too bad, as individual notes did not last long enough for the distortion to offend, but slow sustained notes were horrid.  It was almost impossible to balance a solo stop with a suitable accompaniment on a different manual.  For instance an oboe stop did not sound as it used to, because so much of its energy is in the upper harmonics.  The accompanying flute stop has much of its energy in the fundamental, and the differing amplification of high and low frequencies, intended to correct my hearing, is not done precisely enough for me to judge the balance between a distorted oboe and a distorted flute.  Much of my playing is done by remembering combinations that used to work, but when I try a new piece (or harder still a different organ) it is almost impossible to judge whether I am producing reasonable sounds.

“There are some other aspects that make practising harder work than before.  Hyperacusis presents itself in odd and initially unsettling ways.  Treble F on a stopped flute is hugely louder than its neighbouring notes.  It actually shouts out and unbalances any chord containing it.  The same note played on a stop with a differing harmonic make up, such as a diapason rank, or an open flute, fits perfectly with its neighbouring notes.  Pitch discrimination has suffered.  Any given note sounds slightly sharper in one ear than in the other!  Chords which contain close harmonies can now set up a beating effect, presumably because of this discrepancy.  So all practice is now punctuated by repeated checks of strange out of tune sounds.  They are often caused by wrong notes, but they are equally often caused by my wrong ears.  In some cases repeated playing of a nasty sounding chord has taught my brain to accept it, and I can even return to a piece several weeks later and find that the chord I battled with and beat into submission has stayed reasonable.

“I do not expect the lost hearing to magically return, but I do hope that somehow in the future I will find better settings for the existing aids, or perhaps better aids, that might help me hear more of the organ as it really sounds.  The present problems are still considerable.  So should I have given in and stopped playing?  My answer is an emphatic no.  I am back to playing for about 3 services a month. I have dispensed with my local treble enhanced loudspeaker.  I use my aids on the music setting and have found a volume setting which is reasonably appropriate for organ sounds and much of the spoken word, and clergy have helped by giving clear announcement of hymns.  I do get satisfaction from playing the right notes in the right order, even if the practice has taken longer and even though the sound of the instrument has lost a lot of beauty and majesty.  I can still get my excitement from a loud conclusion with several ranks of mixture and pedal reeds.  And above all I once again get a buzz from leading a congregation which sings with enthusiasm and sensitivity as I play.

“And of course there is more to musical life than playing the organ.  I enjoy singing (although sometimes with difficulty) in a community choir.  Pitching notes is far more uncertain than it used to be, and the trick of checking by putting a finger in an ear is not possible with a hearing aid in the way!   I nearly stopped attending concerts in Birmingham Symphony Hall after a couple of disappointments, but then experimented with different seating positions.  Concerts have once more become enjoyable provided I pay for the best seats in the house.  Now that I can see the full orchestra clearly I find that I can hear and recognise individual instruments much better; another example of the brain’s remarkable ability to adapt and improve distressing situations.”

  1. The organist’s term 8’ means organ pipes at piano pitch (i.e. the middle C key plays a middle C sound).
  2. 4’ refers to pipes sounding one octave above piano pitch. Diapasons are the family of open metal pipes which give the basic organ tone, and they have more harmonic development than organ flutes.

 

Brian Henderson

Bromsgrove

March 2017   

From a Musician with a Hearing Loss

Rick Ledbetter is a professional musician and composer based in the US. In this blog post, he talks about his experiences using and programming hearing aids for music, and his advice for other musicians with a hearing loss.

“I have been a musician, a bass player and composer / arranger, for over 50 years, and I have a profound bilateral hearing loss. I have played professionally in all types of situations from small clubs to arenas, and in recording studios from coast to coast. I have my own computer based music production studio, and I have been programming my own aids for over a decade.

“Around 1989, at a recording session, the engineer told me that he had my headphones up very loud, and suggested I get my hearing tested. The test revealed my worst fear – I was losing my hearing, and subsequent tests revealed an increasing loss. It was harder and harder for me to hear conversation at rehearsals, some soft musical passages were hard to hear, my pitch perception fell, and some musicians got upset with me when I couldn’t hear what they were saying. I began to lose work.

“I bought my first pair of analog aids and they sounded terrible for music – tinny, harsh, and loud. They distorted easily, and they had no low end, so they went straight into the drawer. Then I went digital, only to encounter the same issues at a greater purchase price. My audiologist tried hard, but unsuccessfully, to help me find a setting for live performance. I endured months of “try this and come back in two weeks”. With my background and ability to focus on a particular sound and know its frequency, I could better describe what I heard, and while this made his chore of solving the problems a bit easier, it was still trial and error. At each visit, I watched him operate the software, and I saw how it was much like digital audio production software. I wondered why I couldn’t do this myself, so I got the software and interface, and off I went. I learned the software and made improvements in the sound of my aids, using basic music production principles. So far, as my loss has progressed, I have had 5 sets of aids, and I have programmed them all.

“The journey hasn’t been easy. While, for me, the various hearing aid apps were fairly easy to learn, each make of aid had its own set of issues. Some didn’t have enough input stage headroom to handle on stage volume levels, so they produced the nasty, buzzy sound of digital distortion. All of them suffered from over-reliance on sound processing: anti-feedback, noise reduction, speech enhancers, environmental adapters, directional microphone switching, and more. All of these adversely affect the sound of music. For example, anti-feedback does not know the difference between feedback and the sound of a sustained flute. And any type of sound processor that is active, that is, listening and trying to compensate in real time, gets totally confused by music. So, to properly adjust an aid for best music quality, all of that has to be turned off first.

“Traditionally, aids have a “music” program, a program that usually is a single EQ curve. While this may work for sitting and listening to recorded music, it does not work well for live music or on stage performance because it does not have enough dynamic range to accommodate both music and speech. Musicians need to be able to talk to one another in between playing, and they need a single program to work in all situations. We can’t be distracted by switching programs, so a single program must be created to address our needs.

“I think I have managed fairly well. At least in my case, I have found that reworking the traditional three EQ curve program produces much better results for live music. I could go into detail about this, but that’s another subject. Sometimes I also use a Bluetooth wireless device that sends audio directly into my aids. It’s marketed as a TV Streamer. Fortunately, its input requirements happen to be the same as a mixing desk, so I can use my aids as in-ear monitors, and use the cell phone app to mix between the signal and the sound from the aids’ microphone. A nice thing to have.

“I was asked to include a bit about working with audiologists, but I must be frank: in my experience there are audiologists who don’t know how to fit aids for musicians. So ask your prospective audiologist about their experience fitting for musicians before you buy – choosing the right audiologist is just as important as choosing the right hearing aid. Hearing professionals must understand that our professional reputation, our performance, and our livelihood, not to mention our stress levels, depend on our aids, and they must be right from the beginning. We cannot go through weeks of “try this and see”. We need our aids to work properly from day one.

“The audiologist would ideally have quality sound amplification gear capable of on stage volume levels. Sorry, computer speakers won’t do the job. A real time analyzer is a valuable tool to test the aids’ performance in the ear. A collection of sound samples is handy, but note that recorded music is compressed, so you will not hear the full dynamic range of the music; the samples are still useful, though. If possible, you should bring your instrument to the office and play it, while adjustments are made, until it sounds right to you.

“But this is highly critical: to properly adjust an aid for best music quality, all of the sound processing must be turned off. You cannot get good sound quality if the aid’s sound processors are active while you are hearing music. For example, anti-feedback thinks a flute is feedback, so it will reduce the volume of a sustained flute note, and “hunt” while the flute is being played, in an attempt to stop what it thinks is feedback. So you will hear treble sounds warble and drop out. Of course, this is unacceptable.

“But let me offer some solutions for the meantime:

“Miscommunication is a big problem in the process of getting the aids set right. The patient and the audiologist need to establish a common language to describe and understand what the hearing aid wearer is experiencing. To use colour as a comparison, your definition of red may not be the same as another’s. So if you tell an audiologist “too screechy”, what gets adjusted could be 3000Hz, when the problem is actually at 1500Hz. Or worse, an adjustment is made without regard to how the various sound processors may be causing the problem, or affecting the adjustment. So a standard language is needed. To that end, a few tools help:

1 – A good bar graph real time analyzer with screenshot capture capability for your cell phone. Most of them have a snapshot feature, so get one that has this. This allows you to save the readout for recall at a later time. The bar graph type is easier to read to determine at what frequency and at what volume level the problem occurs. Note that Android phones have a lower audio ceiling than iPhones do, but they can still be relied upon up to 85dB. Many are free, and many are low cost. The professional apps will, of course, give better results at a greater purchase price.

2 – A pitch-to-frequency chart, to translate what is off on your musical instrument into numbers. There are several on the internet, some that lay out a piano, others that include other instruments. Here is a link to one I like:

http://obiaudio.com/eq-chart/

A chart of the frequencies of speech is a good thing to have, too. Audiologists have them.

3 – A list of the frequencies of everyday noisemakers. For instance, a coffee grinder is about 750Hz, dropping a metal fork or spoon into a steel sink is about 1000Hz, flushing the toilet (yes, I’m a Yank) is 500Hz to 750Hz, harsh sibilants are about 4000Hz, and the sound of your voice through your aids is about 500Hz.

“The cell phone real time analyzer takes a lot of guesswork out of the process. It lets you see the frequencies of what you are hearing. When you have problems hearing, open it, take a readout, then save it. The readout will show what you are hearing, at what frequency, and how loud it is. Hearing aids have three EQ curves to adjust, each for a different volume level: soft (50dB), normal (65dB), and loud (86dB). It is very important to know at what volume level something is too loud or too soft, so the audiologist can make the exact adjustment. In other words, while conversation at soft levels may sound just fine to you, music, which is much louder, may not, so it requires adjusting the loud EQ curve.
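As an illustration (this is a sketch, not one of Rick's own tools), the bar-graph readout boils down to estimating a relative level for each frequency band of whatever the microphone picks up. The Python sketch below assumes a buffer of mono samples and uses illustrative octave-band edges; a phone app calibrated to its microphone reports approximate dB SPL, whereas these levels are uncalibrated.

```python
# Rough, uncalibrated sketch of what a bar-graph real time analyzer computes:
# relative levels (dB) per octave band from a short buffer of mono samples.
import numpy as np

def octave_band_levels(samples, sample_rate=44100):
    """Return (centre frequency Hz, relative level dB) pairs per octave band."""
    windowed = samples * np.hanning(len(samples))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    centres = [63, 125, 250, 500, 1000, 2000, 4000, 8000]  # Hz
    levels = []
    for fc in centres:
        lo, hi = fc / np.sqrt(2), fc * np.sqrt(2)   # octave band edges
        band_power = np.sum(spectrum[(freqs >= lo) & (freqs < hi)] ** 2)
        levels.append((fc, 10 * np.log10(band_power + 1e-12)))
    return levels

# Example: a 1 kHz test tone shows up as a peak in the 1000 Hz bar.
t = np.arange(0, 0.5, 1 / 44100)
print(octave_band_levels(0.5 * np.sin(2 * np.pi * 1000 * t)))
```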

“The pitch-to-frequency charts work great, too. Just sit at a piano and play each note, pay attention to which notes are too loud and which are too soft, and write down those notes. Then look at the chart and translate each note to a corresponding frequency. The audiologist can use this information to make the proper adjustments to your aids.
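If you would rather compute the translation than read it off a chart, the standard equal-temperament formula does the job. The sketch below is illustrative rather than part of Rick's workflow (the note names, helper function and A4 = 440 Hz reference are assumptions): the A above middle C comes out at 440 Hz and the F above that at roughly 698 Hz, which are the kinds of numbers an audiologist can act on directly.

```python
# Translate piano note names into frequencies using 12-tone equal temperament,
# assuming the common A4 = 440 Hz reference. Illustrative sketch only.
NOTE_OFFSETS = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
                "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
                "A#": 10, "Bb": 10, "B": 11}

def note_to_frequency(name, octave, a4=440.0):
    """Fundamental frequency in Hz of a note such as ('F#', 4). C4 is middle C."""
    midi = 12 * (octave + 1) + NOTE_OFFSETS[name]   # MIDI numbering: A4 = 69
    return a4 * 2 ** ((midi - 69) / 12)

# e.g. notes that sounded too loud at the piano, translated for the audiologist
for name, octave in [("A", 4), ("F", 5), ("C", 6)]:
    print(f"{name}{octave}: {note_to_frequency(name, octave):.0f} Hz")
```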

“In conclusion, I hope this article will help to clear up some things. While the technology has greatly improved over the years, a lot of problems still remain to be solved. I trust that the hearing aid business will rise to the challenge and meet the needs of musicians. After all, whatever is learned and addressed will go far towards improving aids for the average user.”

RL

If you would like to correspond with Rick, please send us your email and we will forward it to him.

And please do continue to email the project team with your ideas and experiences: musicandhearingaids@leeds.ac.uk

Networking November

November has been a busy month of presentations, meetings and networking!


In the first week, we presented a poster summarising the main themes from our interview study at the British Academy of Audiology Annual Conference in Glasgow. To access the poster, click here.

 


We also took part in a webinar organised by Wendy Cheng, Founder of the Association of Adult Musicians with Hearing Loss. We heard talks by Marshall Chasin who described some of the limitations of hearing aid technology for listening to and performing music, and Brian Fligor who focused on ways in which musicians could optimise their experiences with their audiologists. Musicians Nancy Williams, Adam Schwalje and Charles Mokotoff presented personal stories, which led into a Q&A session for musicians to share their experiences and seek advice from the panel.

 


Last week, at the Hearing Steering Committee, it was wonderful to hear that the Musicians Hearing Health Scheme is off to a flying start! Well over 1,000 applications have been received since the scheme started on 1st August and all agreed that this was a fine example of applied research that will benefit musicians for years to come. If you are a professional musician who would like to have access to specialist hearing assessment and bespoke hearing protection, click here.


 

Yesterday, we had the pleasure of being part of Music and the Deaf’s FREQUALISE dissemination event. Frequalise is a project to enable deaf children and young people to explore the potential that technology offers in creating, performing and sharing music. We heard talks by Danny Lane (MatD CEO), Ros Rowe (Project Manager), Ros Hawley (Project Evaluator), and Liz Dobson (Senior Lecturer in Music Technology, University of Huddersfield); demonstrations from the workshop leaders and participants (Danny Chadwin, Mohsin Ahmed); and a live musical performance from project participant Adam Butler. The event highlighted some of the challenges of the project, including delivering workshops to children and young people of different ages and with differing levels of hearing loss, and finding accessible and affordable technologies (e.g. Etherpad, GarageBand) that participants could continue to use at home. The day closed with a discussion about developing collaborations to secure further funding to support this important work, and with the recognition that a network of people interested in improving access to music for deaf children and young people needs to be established.

Watch this space…

AG

 

HAFM Online survey


We are conducting research into the music listening behaviour of people (aged 18 and over) with hearing loss and who wear hearing aids for a minimum of one hour a day. As part of this study we have developed an online survey and would like to recruit as many participants as possible to take part.

To participate, you must:

  • have a confirmed hearing loss (e.g. mild, moderate, severe or profound),
  • wear hearing aid(s) (but NOT a cochlear implant),
  • be between 18 and 90 years old.

A BSL version of all the information and questions is available in the questionnaire.

We will ask you about your

  • experiences of music in everyday life,
  • musical preferences,
  • hearing,
  • hearing aids.

It should take about 30 minutes to complete and you will remain anonymous. If you leave your contact details, you will be entered into a prize draw to win one of three £75 cash prizes. Winners will be selected at random and notified in JANUARY 2017.

All the information we collect about you will be kept confidential and you will not be identifiable in any reports or publications.

The survey is available by clicking here

If you have any questions, please contact us at:

Email: musicandhearingaids@leeds.ac.uk

Text mobile: 07763648802

If you would be willing to follow us on Twitter @musicndeafness and retweet information about the survey that would be greatly appreciated.

If you do not wear hearing aids yourself, but know someone who does who might be willing to take part, please forward the following link: http://tinyurl.com/musicandhearingaids

Many thanks

‘Music and Hearing Aids team’

‘Hearing Aids for Music’ at ICMPC14 in San Francisco



Earlier this month, I flew to San Francisco to attend and present at the 14th International Conference on Music Perception and Cognition, a biennial conference covering fields such as acoustics and psychophysics, aesthetic perception and response, musical development, music education, and music, health and well-being.


I presented findings from our first study (clinical questionnaire), which explored the extent of music listening issues and the frequency and success of discussions with audiologists about music. Data from 176 hearing aid (HA) users, aged 21-93 years, showed that challenges with music listening were often experienced, and almost half reported that this negatively affected their quality of life. Participants described issues listening to live music performances, hearing words in songs, the loss of music from their lives and associated social exclusion. The majority of participants had not discussed music with their audiologist. For those who had, some reported positive experiences wherein increased HA tailoring by the audiologist had enhanced music appreciation. Other experiences were less positive, with no improvements reported. Results suggest that more could be done to help audiologists fit HAs for music and to inform HA users of their options. An overview of the results is available here.


I then discussed preliminary findings from our second study (in-depth interviews, with collection of audiometric data). Data from 22 HA users, aged 24-82 years, with varying levels of hearing impairment, highlighted the complexities of listening to music with hearing aids. Some of the problems encountered mirrored those found in our first study and in previous work (e.g. Chasin & Hockley, 2014; Madsen & Moore, 2014), such as distortion (particularly at higher frequencies), a reduction in tone quality, and challenges listening to music in live contexts. However, there were fewer problems with feedback and distortion than anticipated, and, positively, several interviewees reported that they did not experience any difficulties when listening to music with their hearing aids. These individuals tended to be non-musicians with milder levels of hearing loss, but were nonetheless highly engaged with music in everyday life.

Results show differences in hearing aid use according to people’s level of hearing impairment, level of musical engagement and training, the musical style(s) being listened to, and the context(s) in which the music is being heard. This supports theorising by Hargreaves and colleagues (e.g. Hargreaves et al., 2006) which stipulates that responses to music are a result of interactions between listener, music and contextual variables. However, our data provide new insights into how levels of hearing impairment, and the type and functionality of the HA technology affect musical experiences. There were differences in interviewees’ understanding of their HA technology (musicians stood out as being the most informed) and in the process of acclimatising to the new sound world. Problems experienced appear to be mediated by general attitudes towards the HA technology. Some were proactive in adjusting, adapting, and experimenting, whereas others were less inclined to explore the possibilities. Across all participants, the use of Assistive Listening Devices (ALDs) was low which suggests that HA users are not as aware as they could be about what tools are out there that could help. These are just some preliminary findings. We are conducting an in-depth analysis of the dataset and will be able to report a fuller analysis in due course.

This event was attended by Alinka Greasley.

Team engage public at ‘Be Curious’ event


On Saturday 19th March, the Hearing Aids for Music team took part in the ‘Be Curious’ Festival, which gave the general public an opportunity to learn about research projects being undertaken at the University of Leeds through talks and interactive activities.

The theme of the Wellcome Trust-funded, university-wide event was ‘Health and Well-being’, and it was aimed at those curious about how the human body works and the factors affecting health and well-being. We focused on conveying information about how we hear, how easily our hearing can be damaged, and what speech (conversation) and music (classical, popular) sound like with differing levels of hearing loss. We also set up a booth so that people could take an online hearing test.

How we hear

How loud is too loud?

Hearing loss – what it sounds like (conversation)

Hearing loss – what it sounds like (music)

Hearing test

We’d like to thank audiology@leeds for providing us with model ears, Alex Santos for designing our hearing awareness posters, and Action on Hearing Loss and the Hear the World Foundation for supplying us with leaflets and online resources.


Feedback

As part of the event, feedback was collected from visitors. Respondents included children and adults (age range 4-66 years old) and their responses indicated that our activities were effective in raising awareness of the prevalence and causes of hearing loss, and of healthy hearing behaviour.

What did you like best?

“Ear workshop” [Aged 12]

“Ears!” [Aged 4]

Did you learn anything new today?

“Hearing aids info” [Aged 39]

“Hearing – how it is damaged.” [Aged 44]

“Lots about hearing impairments and how to prevent hearing loss” [Aged 45]

“Extent and causes of hearing loss” [Aged 35]

Will it change anything you do? If so, in what way(s)?

“It will change how loud I listen to music through headphones” [Aged 14]

“Yes, iPads will be turned down and will buy ear defenders for my son playing drums” [Aged 44]

“Get my hearing checked more regularly!” [Aged 50]

How likely are you to tell someone else what you’ve learnt?

64% reported that they were ‘Very Likely’ to tell someone else what they had learnt.

Visitors were intrigued by the microscopic pictures of hair cells, and were surprised to learn how easily hair cells can be damaged. The hearing simulations, including the opportunity to listen to Sting’s Fields of Gold and Eros Ramazzotti’s Sei Un Pensiero Speciale with different severities of hearing impairment, were popular with both younger and older visitors as they contemplated what their lives would be like with hearing loss. Several visitors who tried the test in our booth reported that it had prompted them to book a hearing check with a professional. Overall, feedback suggested that the activities were very informative!

This event was led by Alinka Greasley and Jackie Salter.

‘Effects of Advanced Hearing aid settings on Music Perception’

Cardiff event 21st Jan

Some practical tips for audiologists and listeners

In January 2016 we attended a seminar on the effects of advanced hearing aid features at Cardiff Metropolitan University.  This was a useful opportunity to hear world-renowned speakers on the science behind the challenges of listening to music with hearing aids, feedback and practical tips from the clinical world, and insights into the benefits and limitations of hearing aid technology.

We heard from Professor Brian Moore on the effects of both hearing loss and hearing aids on music perception, and from Marshall Chasin on fitting aids for musicians.  We were reminded that damage to the inner ear is not always obvious from the audiogram.  The audiogram (a hearing test) is a very broad way of testing hearing; with noise-induced hearing loss (NIHL), a person may even have a normal audiogram but underlying damage to the inner ear that causes difficulties in discriminating sounds (for more on hidden hearing loss, see Chris Plack’s recent BSA seminar).  To perceive music well we need to be able to discriminate a much wider range of frequencies than is tested with an average hearing test.

Another relevant point for listening is that with hearing loss, as well as losing the ability to pick out specific sounds, we also have poorer localisation skills, or the ability to tell where sound is coming from.  For music this can be really important in separating sounds out from a mixed musical signal of several instruments or voices.

Specifically with hearing aids, multi-channel aids can flatten the spectrum of the musical signal, which can make it harder to identify instruments.  A recent paper by Madsen, Stone, McKinney, Fitz & Moore (2015) explored the effects of wide dynamic range compression (WDRC) on identifying instruments and found lower ratings of clarity with WDRC than with linear amplification.  The effects of slow versus fast compression are more complex and may relate to the type of music being listened to.

There are pros and cons to both fast-acting and slow-acting compression.  Slow-acting compression can make it easier to pick out the main tune or instrument when louder backing sounds are present, which might otherwise cause the hearing aid to cut sound levels down too quickly.  However, it does not restore loudness perception to ‘normal’ and is not good if the various sources are at different levels.  In the time the compressor takes to recover, we can miss dynamic changes in the music.  Overall, the consensus was that slow compression tends to be preferred over fast-acting compression for music, but this is very dependent on the setting and the type of music being listened to (Moore & Sek, forthcoming).
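To make the fast/slow distinction concrete, here is a minimal single-band sketch of how a compressor's gain follows the signal. It is not how any particular hearing aid works: real WDRC is multi-band, with prescribed gains per input level, and the threshold, ratio and time constants below are arbitrary.

```python
# Minimal single-band compressor gain computer, to show how attack and release
# times change the way gain follows the signal. Illustrative only: real hearing
# aid WDRC is multi-band, with prescribed gain per input level; all parameter
# values here are arbitrary. Expects a float signal scaled to roughly [-1, 1].
import numpy as np

def compress(signal, sample_rate, threshold_db=-30.0, ratio=3.0,
             attack_ms=5.0, release_ms=500.0):
    attack = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env_db = -120.0                                  # smoothed level estimate
    out = np.empty_like(signal)
    for i, x in enumerate(signal):
        level_db = 20 * np.log10(abs(x) + 1e-9)
        coeff = attack if level_db > env_db else release
        env_db = coeff * env_db + (1 - coeff) * level_db
        excess = max(env_db - threshold_db, 0.0)     # dB above threshold
        gain_db = -excess * (1 - 1 / ratio)          # e.g. 3:1 keeps one third
        out[i] = x * 10 ** (gain_db / 20)
    return out

# A long release ("slow" compression) holds the gain steady through brief loud
# backing sounds; a short release ("fast") pumps the gain up and down with them.
```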

Other Considerations for fitting aids:

In terms of microphones, directional microphones can be useful and can help to pick out specific instruments in the presence of competing sounds.  However, they can also make things worse by reducing the ability to hear the separation of sounds (where sounds are located, and that they are coming from separate sources); again, this depends on the listening setting.

Low frequency (LF) gain: the limited LF response within the hearing aid bandwidth can also be a problem, as we don’t get amplification of the lower pitches, and the LF range of music extends well below the typical range we are concerned with for speech.  The LFs are limited on purpose for speech, to prevent LF masking, where low-frequency sounds can cover over the speech sounds.

With this in mind, consider an open fitting where possible, so that the low frequencies are heard naturally and acoustically where hearing at those frequencies is good.  Music tends to be louder than speech, so even with some mild LF loss we may well still hear the LF cues effectively without needing amplification from the hearing aid.  Go for as wide a bandwidth as possible in the aid, again because the range of musical sounds tends to exceed that of speech.

Many aids have frequency-lowering technology available, but this can introduce inharmonicity, where high and low harmonics are out of tune with one another.  This was considered manageable above 2 kHz, as listeners with high frequency (HF) hearing loss may be unlikely to detect the mistuning of high harmonics.
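To see where the inharmonicity comes from, the sketch below applies one illustrative frequency-lowering scheme (linear compression of frequencies above a cut-off) to the harmonic series of an A at 440 Hz. The 2 kHz cut-off and 2:1 ratio are arbitrary, and manufacturers' actual algorithms differ, but the effect is the same in kind: remapped harmonics stop being whole-number multiples of the fundamental.

```python
# Illustrative only: linear compression of frequencies above a cut-off, one of
# several frequency-lowering schemes. Harmonics remapped above the cut-off are
# no longer whole-number multiples of the fundamental, hence inharmonicity.
def lowered(freq_hz, cutoff_hz=2000.0, compression_ratio=2.0):
    if freq_hz <= cutoff_hz:
        return freq_hz
    return cutoff_hz + (freq_hz - cutoff_hz) / compression_ratio

fundamental = 440.0  # A above middle C
for n in range(1, 11):
    harmonic = n * fundamental
    print(f"harmonic {n:2d}: {harmonic:7.1f} Hz -> {lowered(harmonic):7.1f} Hz")
# Harmonic 6 (2640 Hz) maps to 2320 Hz, which is not a multiple of 440 Hz,
# so the upper part of the tone no longer lines up with the lower part.
```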

Smoothing the peaks in the frequency response during the fitting may help, though more evidence is needed for this.  Feedback cancellation can also be problematic, as it can mistake musical tones for feedback.  Where frequency shifting is involved, it may potentially alter the perception of pitch and/or harmonics.

The peak input limiting level of aids is a significant problem; music typically has a wider and higher dynamic range than speech, and peak input limiting levels below 105dB simply mean we lose some of the input signal for music, resulting in poor sound quality.  We were played examples of this in the seminar, down to a 92dB peak input limit, and the effects were very obvious.  Whilst for speech anything above 85dB is unlikely to be problematic, this is not the case for music, and an awful lot is cut out by the aid being optimised for speech (to hear for yourself, click here).
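As a crude illustration of why the peak input limit matters, the sketch below hard-limits a few example peak levels. The numbers are illustrative and real input stages behave more gracefully than a hard clip, but the point stands: a 92dB ceiling leaves conversational speech untouched while flattening the loudest moments of live music.

```python
# Crude illustration: peaks above the aid's input limit are simply flattened.
# The numbers are illustrative, and real input stages are more graceful than a
# hard clip, but the lost information cannot be recovered further down the chain.
import numpy as np

def clip_at_limit(peaks_db_spl, peak_input_limit_db):
    return np.minimum(peaks_db_spl, peak_input_limit_db)

speech_peaks = np.array([70, 78, 85])        # dB SPL, conversational speech
music_peaks = np.array([85, 96, 104, 110])   # dB SPL, live/acoustic music

print(clip_at_limit(speech_peaks, 92))       # [70 78 85]    -> unchanged
print(clip_at_limit(music_peaks, 92))        # [85 92 92 92] -> peaks lost
```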

One issue with these factors in hearing aid fittings is that we don’t always have transparent access to them in the fitting software or on the specification sheet.  In some cases it is hard to know exactly what the aid is doing to the input, or which algorithms are in use.  Changes to compression that used to be more obvious may now sit in the fitting tools without specific parameters, and clinicians may need to ask manufacturers more about what the aid is doing so that it can be optimised for individual listeners.

Strategies for fitting:

NB: make these changes in the music program, not the speech program.

  • Consider slow compression
  • Use a higher peak input limiting level
  • Turn off the feedback manager
  • Use an open fitting where possible
  • Turn off frequency transpositions
  • Turn off noise reduction algorithms
  • Set OSPL90 6dB lower than for speech
  • If possible, play some musical scales in the clinic and check the listener can hear each note
  • Choose the widest available bandwidth for mild losses; consider using a narrower HF bandwidth for losses >60dB HL and for steeper slopes, and test for cochlear dead regions where patients report specific discrimination problems

Strategies for listening

  • When listening to recorded music, lower the volume on the sound source and increase the volume on the aid
  • Consider the use of Assistive Listening Devices (ALDs), such as an FM system used as input, streamers, a loop, or direct audio input (DAI). Connevans have a range of ALDs that may be helpful
  • Use scotch tape to cover the hearing aid microphone (this provides 10-12dB attenuation up to 4,000Hz)
  • Also consider whether a listener with a lower degree of loss is actually better off without hearing aids for music listening, given the overall louder dynamics of music

This event was attended by Harriet Crook and Alinka Greasley.

References

Chasin, M. & Hockley, N. S. (2014). Some characteristics of amplified music through hearing aids. Hearing Research, 308, 2-12.

Madsen, S. M. K., Stone, M. A., McKinney, M. F., Fitz, K. & Moore, B. C. J. (2015). Effects of wide dynamic-range compression on the perceived clarity of individual musical instruments. Journal of the Acoustical Society of America, 137, 1867-1876.

Moore, B.C.J.  & Sek, A. (2015). Comparison of the CAM2A and NAL-NL2 hearing-aid fitting methods for participants with a wide range of hearing losses. International Journal of Audiology, 55(2), 1-8.

Get in touch

If you have any thoughts, please email the project team:

musicandhearingaids@leeds.ac.uk

You can also get updates about the project and information about music and deafness on our twitter feed @musicndeafness.

‘Hearing aids for music’ project secures £250k AHRC funding

The School of Music at the University of Leeds is celebrating as Dr Alinka Greasley has secured a significant research grant from the Arts and Humanities Research Council.

Dr Alinka Greasley is a Music Psychologist who has secured an award of £247,295 for a project that will explore the music listening behaviour of people with hearing impairments.

The three year interdisciplinary project will bring together researchers in the fields of music psychology and clinical audiology and represents the first large-scale systematic investigation of how music listening experiences are affected by deafness and the use of hearing aids.

She will lead the project alongside Co-Investigator Dr Harriet Crook, from Sheffield Teaching Hospitals NHS Foundation Trust, and Dr Robert Fulford, Postdoctoral Research Fellow in Music Psychology at the University of Leeds.

The research will benefit from the input of a highly esteemed advisory panel consisting of academics and practitioners with expertise in auditory processing, signal processing, HA fitting, HA manufacturing, hearing therapy and deaf education.