The Sound of Music

How Does Music Sound To Me?

Ben Bennetts

Background

I have listened to and enjoyed music all my life.  The interest was sparked by the early rock and roll greats of the late ‘50s, early ‘60s.  Performers such as Buddy Holly, Gene Vincent, Elvis Presley (his early music), the Everly Brothers, Bill Haley, Fats Domino and so on got me started.   I was then introduced to classical music at my boarding school in the late ‘50s when I joined an after-class music appreciation society.  From these two vantage points, rock and roll and classical music, I expanded my musical interests and as I travelled around the world during my professional years, I collected all sorts of music.  I estimate that I have around 1,400 CDs in my collection; half classical, the rest a mix of jazz, world, ambient, percussion, trance, hip-hop, popular and other forms of progressive and modern music.

And then tragedy struck.  As I entered my sixties, my hearing started to decline.  I accepted it as a sign of advancing years and started wearing behind-the-ear (BTE) aids.  They helped, but to my great disappointment, I discovered I could no longer enjoy music, with or without the aids.  All music, live or recorded, became distorted.  I could no longer whistle a simple tune, such as Happy Birthday.  I could no longer enjoy any of my CDs.  I began investigating the cause.  I had an MRI scan.  Maybe something was wrong inside my head—dead frequency zones in the cochlea, a damaged auditory nerve, a brain tumour?  Nothing serious was revealed.  As part of my investigation, I read about research going on at Cambridge University, contacted the group, and ultimately made three visits.  Here’s what happened.

The Experiment

In 2012, I visited a PhD student, Marina Salorio-Corbetto, in the Auditory Perception Group in the Department of Experimental Psychology at Cambridge University.  At the lab, I was introduced to Professor Brian Moore, the leader of the Auditory Perception Group, and during our brief discussion he asked me to describe how music sounds to me—“What do you hear?” he asked.  This was a tough question and my reply at the time was flippant—“A cacophony”—and devoid of any meaningful content.  Subsequently, I pondered how to answer the question more accurately and, on my return, I conducted an experiment.  I selected around twenty of my CDs and played samples from them on my Bose Lifestyle 20 music system.  I kept my hearing aids in place (Phonak Nathos MW behind-the-ear aids) rather than listening either without them (I would not be able to hear much unless the volume was at full blast) or through headphones (which would again have meant removing the aids).  Here’s a summary of the results.

Results: Classical

I tried old favourites such as a couple of Beethoven symphonies: the 5th (the Allegro, first movement) and the 9th (the Presto ‘Ode to Joy’, last movement).  I followed this with lighter music—Vivaldi’s mandolin concertos, classical guitar music composed by Fernando Sor and Robert de Visée, Telemann’s trumpet concertos, Dvořák’s From the New World, some Spanish Renaissance music (lute, vihuela, guitar and female voice), and something more savage: Stravinsky’s Rite of Spring, an all-time favourite of mine.  In all cases, I would not have recognised the music in a ‘blind’ hearing.  I tried hard to recreate the music in my head from memory but it was extremely difficult to match memory with what was being received and processed by whatever part of the brain does this.  Notes did not go up and down as they should and in many cases the bass lines dominated anything that was happening at higher frequencies.  Then I turned to other styles of music.

Results: Non-Classical

Drums and drum music have long been a passion of mine and I dredged up several CDs containing predominantly drum music, starting with a Dutch percussion group called Slagerij van Kampen performing on an album called Tan.  Here, I had more success.  I could make out the beat and also the transitions and juxtaposition of the drums.

Moving on from this, I tried other drum music—a Japanese group called Ondekoza; Mission into Drums, a compilation of different trance-ambient artists who focus their music around a repetitive drum beat; and Transfer Station Blue, an early (1986) electronic-style recording by Michael Shrieve, who used an electronic drum to create a sharp rhythmic beat, rich in pulse-punctuated, full-frontal higher frequencies.

From here I moved on to more lush electronic music—albums such as Suave by B-Tribe (flamenco mixed with trip-hop and ambient); Suzuki by Tosca (Dorfmeister and Huber, two early exponents of what we nowadays call ambient music); Big Calm by Morcheeba; Moodfood by Moodswings (featuring the pure voice of Chrissie Hynde); Sanchez and Mouquet’s Deep Forest; Gevecht met de Engel (Battle with the Angel) by another Dutch group, Flairck; and, finally, one of my jazz CDs, an album called Blue Camel by the Lebanese oud player Rabih Abou-Khalil.

All were lost on me.  Even with my musical memory, I could no longer appreciate the music.

Conclusion

I tried to be objective in my answer to the question, “What do you hear?”  By its very nature, the answer is difficult to express in unambiguous objective scientific terms.  We hear what we hear and if it’s pleasurable, fine.  If it’s not pleasurable, also fine.  We just don’t like it.  But, if it’s not pleasurable, whereas once it was, not fine.   I would like to know why listening to music is no longer pleasurable and why I can no longer recall complex tunes in my head.  Even if I can’t fix the problem, it would be good to know its root cause.

Marina, the PhD student working in Professor Brian Moore’s department, did send me a summary of her findings.  Basically, she said that my hearing loss had affected my ability to use temporal fine structure information which, in turn, could affect pitch perception in both speech and music.  She also suggested that my hearing loss had modified my frequency selectivity and that my auditory filters were wider than normal.  Apparently, both deficits are common with a cochlear hearing loss but, contrary to my own suspicions, she could not find any dead regions in my cochleas.

So there you have it.  I am still working at understanding the physiological behaviour that underpins temporal fine structure and auditory filters but this is taking me deep into the realms of how the ear works and how aural data is processed by the brain and, quite frankly, is currently beyond me.  I accept that my ability to hear and enjoy music correctly will never return; there is no cure for what ails me, more’s the pity.

Footnote

In my initial answer to Brian Moore, I also said that music, to me, these days sounds just as if it had been played by the now-defunct Portsmouth Sinfonia.  This orchestra, founded in 1970, comprised people who were either non-musicians or, if they were musicians, were asked to play an instrument with which they were unfamiliar.  They were also asked to do the best they could rather than deliberately play out of tune.  Their first recording became a surprise hit and they continued playing until they disbanded in 1979.  You can sample the sound of this orchestra on YouTube.  Enter ‘Portsmouth Sinfonia’ into Google and see where it takes you.

If you do this, the renditions you will hear are extremely close to what I now hear when I play correctly performed classical and non-classical music.  But there’s an interesting paradox here.  When I listen to the Portsmouth Sinfonia on YouTube, am I hearing what they actually played, or am I hearing a distorted version of what they played?  A distorted version of music that is already distorted?  I will never know the answer to this question, but I have to admit I collapsed with laughter today when I listened to Also sprach Zarathustra, Richard Strauss’s stirring music used to herald the start of Stanley Kubrick’s movie 2001: A Space Odyssey.  The Portsmouth Sinfonia’s version of this piece encapsulates what I meant when I described my listening experience as a cacophony.

Note: this article is a summary of a much longer article with the same title that delves more into what I heard when I conducted my experiment.  You can download the full article here.

If you would like to contact Ben about his experiences, email ben@ben-bennetts.com

If you would like to share your own experiences of listening to music with a hearing loss, do get in touch with us on: musicandhearingaids@leeds.ac.uk

 

New Year Update

Dear all,

Happy New Year to you!

Here at HAFM HQ we’ve been busy reflecting on our conference, looking at some initial findings from the online survey, and drafting various documents, including a revision of our patient leaflet, and drafts of a practitioner leaflet, glossary of terms and stakeholder report.

Conference materials

All of the abstracts, many of the PowerPoint slides and some of the videos from the conference can now be downloaded from our conference webpage. This includes the concert by the FORTE Ensemble.

We are interested to hear your reflections on the conference four months on, so if you have two minutes, please answer three short questions here.

 

Extension until August 2018

Numerous NHS Trusts across the country have expressed a desire to be involved in the HAFM research, and so we have secured an extension to the project (until the end of August) to be able to do this. We have just started working with 20 Trusts, including  Aintree, Airedale, Birmingham, Durham and Darlington, Harrogate, Kingston, Manchester, Sheffield, South Tees, Southend, Sunderland, Tameside and Glossop, and Western Sussex to name a few! We are adding further sites this week. This is very exciting for us because it ensures that we can survey hearing aid users from all over the UK.

Working with the first N=1,000

We are currently analysing the first 1,000 responses to the online survey, and results summaries will be available in due course. Thanks to all who have taken the time to participate in this survey – your time is greatly appreciated.

If you know others who would be willing to complete the survey, do send on the link: http://tinyurl.com/musicandhearingaidssurvey

Your thoughts!

If you have any other reflections, or you would like to write a blogpost for our website, please do get in touch with us on musicandhearingaids@leeds.ac.uk

Best wishes from

Alinka, Harriet and Amy

University of Leeds, 23.01.18

Hearing Aids in Brass Bands

In this blog post, Professor Pete Thomas describes how he and his wife use accessories alongside their hearing aids to help them enjoy playing in a brass band.

Despite being branded as tone deaf in my schooldays, and suffering from moderate hearing loss too, seven years ago I started to play the trombone. Now, aged 69, I play in a brass band. My wife, Carol, a lifelong musician who plays the euphonium, also has a severe hearing loss, a long-term condition combined with severe tinnitus.

Playing music in a brass band presents a range of challenges for hearing aid users. Some opt in despair to abandon aids and make do with whatever residual hearing they have, or else to give up playing altogether. Sound levels may exceed 105dBA, although this can depend on seating position within the band, and most hearing aids do not work well in these conditions.

Players need to be able to hear neighbouring players clearly, so as to play in time and in tune with one another. They also need to hear other sections of the band to maintain the overall tuning and timing of the music. As a trombone player, I need to be able to hear the euphonium and baritone horns immediately in front, to perceive the higher pitched sounds of the cornets from the far side of the band, and to be aware of the horns, all whilst not forgetting the basses (tubas), which are hard to ignore. And in rehearsal, it is of course important to hear the instructions from the conductor!

For me, playing without hearing aids is not a realistic option. With my high frequency loss, I am barely aware of the cornet sounds and much of the articulation is lost. The resulting dead and rather woolly musical environment, with no perception of commands from the conductor, would preclude participation.

My aids were initially prescribed following diagnosis and treatment for benign positional vertigo; as a result the world became a more interesting place in which many of the sounds that had diminished over the years were reinforced. Percussive sounds and cornet sounds became so much clearer and more vibrant.

However, with those first digital aids there was a major drawback. When the cornets played certain higher notes, this tended to excite the feedback cancellation in the aids, and it seemed as though some of the feature-recognition aspects of those aids distorted the balance of the sound. Exploring this, I found that even on what was supposed to be the music setting, if I sat at home listening to my wife playing the piano, the ticking clock, which is normally barely perceptible, would become a loud clacking, clearly audible above the piano. The squealing with the cornets was clearly a problem, especially as, when it occurred, it would take some considerable time for the aids to settle back to normal operation. I found the condition could be reproduced in a quiet environment with a tone generator – a tone of approximately 2 kHz from a loudspeaker would trigger it. I discussed the problem with audiologists who attempted changes of settings, changed ear moulds and generally puzzled over the problem, before concluding I was seeking the impossible.

Fortunately I encountered a more determined audiologist who appreciated the objective feedback and wanted to be of help. She prescribed some alternative aids for evaluation and following some adjustments they have proved remarkably effective. Initially, things were very confusing, as each aid made my custom-made trombone sound rather different and unpleasant to my ears. Fortunately, this was a transient situation as my brain adapted to the different hearing aids and within a few days I could switch between aids without any unpleasant perception of the sound. The new aids, whilst providing the necessary high frequency compensation, appeared less intrusive, such that apart from the useful improvement in music and comprehension of speech, I could be unaware of using them. Most importantly they were far less susceptible to excitement from those higher pitched cornet sounds!

Initially the aids were sometimes apparently overloaded by the mellow tones of the euphonium, but, presumably due to the adaptive capabilities of the aids, even this problem rapidly diminished. The aids are not perfect: I will sometimes have to ask whether the conductor wants to play from rehearsal mark ‘M’ or ‘N’, and it can be frustrating to miss out on the punch line of jokes from around the band. This leads on to the consideration of Carol’s more challenging problems playing the euphonium.

Carol has played church pipe organ and piano since childhood, but during the last three or four years she has succumbed to pressure to join the band, playing a euphonium. The expectation was that this would be easy for someone of her musical experience, but with brass band pitch being transposed from concert pitch such that notes written as C sound as B-flat, there were additional challenges for her hearing. The single line of euphonium music might have paled into insignificance compared to the complexities of Bach and Buxtehude, with pedals and multiple manuals to cope with; however, in the brass band there is someone else (the conductor) setting the tempo and all those other players to fit in with.

Carol uses two Phonak Nathos SP aids with features such as frequency translation of higher pitched sounds, enabling her to comprehend some of those missing high frequency sounds. Early experience with these aids suggested that she had trouble pitching precisely and playing in tune, and this was particularly evident when playing in a small ensemble. Fortunately, enabling a music program, which disables some features of the aids, made a dramatic difference, and in the small ensemble she was able to play far more reliably in tune. However, in the band a major problem unfolded: when playing her euphonium, especially when accompanied by a neighbouring euphonium, she could hear virtually nothing of the rest of the band. This problem intrigued me and I set about trying to find out why the euphonium was so troublesome!

All brass instruments have characteristic spectral properties: the fundamental of a note, with a particular set of overtones, gives the instrument its sound. The instruments differ in size (from the tiny soprano cornet to the large B-flat tuba) and in construction, particularly the size and degree of taper in the bore. The trombone is a parallel-bore instrument, and this is reflected in a sound that is rich in overtones – an FFT analysis shows peaks extending to the 15th or even 20th harmonic of the note being played. Curiously, the trombone can seemingly be very light on the fundamental of the note being played. In contrast, the euphonium, with its tapered bore, is very strong in the fundamental, with the overtones rapidly dying away. The similarly pitched baritone horn, with a less tapered bore, has a spectrum more like that of the trombone.
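For anyone curious to try this at home, here is a minimal Python sketch of the kind of FFT analysis described above. It is an illustration only: the filename and the assumed fundamental frequency are hypothetical, and it presumes a short mono WAV recording of a single sustained note, with numpy and scipy available.

import numpy as np
from scipy.io import wavfile

# Hypothetical recording of one sustained note (mono WAV file).
rate, samples = wavfile.read("euphonium_Bb2.wav")
samples = samples.astype(float)

window = np.hanning(len(samples))            # reduce spectral leakage
spectrum = np.abs(np.fft.rfft(samples * window))
freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)

f0 = 116.5  # assumed fundamental (roughly B-flat2) for this illustration
for n in range(1, 21):                       # inspect harmonics 1 to 20
    idx = np.argmin(np.abs(freqs - n * f0))
    level_db = 20 * np.log10(spectrum[idx] / spectrum.max() + 1e-12)
    print(f"harmonic {n:2d} ({n * f0:7.1f} Hz): {level_db:6.1f} dB rel. peak")

A long, slowly decaying series of peaks corresponds to the bright, trombone-like spectrum; a dominant first peak with the rest falling away quickly corresponds to the euphonium.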

It was therefore no surprise that, when my wife encountered this problem with the euphonium and the musical director of the band suggested trying the baritone horn for a while, Carol found it a great benefit. This enabled her to play in the band and still hear much of the music from other sections. However, it did not help with hearing direction from the conductor.

We therefore looked into the potential use of a microphone and loop system. Whilst this could have worked within the band room, it was clearly not a practical solution for performance venues. It was around this time that we discovered the Phonak Roger pen system. Initial enquiries with a supplier were far from optimistic about its usefulness, but discussion with CamTAD suggested it might be worth further exploration. With a musical director keen to cooperate, a microphone and receivers were sourced. Given Carol’s severe high frequency hearing loss, the bandwidth limitations of the telecoil loop interface were not seen as a problem compared with the convenience of implementation without assistance from the NHS audiologist. However, with my own helpful NHS audiologist happy to enable such things, we got some Roger receivers for my aids in the hope that they might be a benefit.

The Roger pen proved to be a major benefit for Carol. She was able to hear instruction from the musical director (wearing the microphone) clearly and, as a bonus, could hear more of the cornet section, which often provides the lead in the music. Somewhat disappointingly, although I could hear the conductor more clearly with the microphone, the bandwidth limitations of the Roger pen and my open ear moulds meant that I found the rather tinny overlay of music unhelpful, even with the wireless receivers feeding audio directly to my aids. However, I have at times found the mixed mode of normal aids and the Roger input useful. The wider benefits of the Roger pen are obvious.

Recently, with more experience, Carol has returned to playing the euphonium. From our experience we have to conclude that hearing aids and appropriate accessories can be a real benefit and can enable hearing impaired players to participate successfully in a brass band. The recent HAFM conference at Leeds has inspired us to take this further.

 

An organist’s perspective

In this blog post, Brian Henderson describes the trajectory of his hearing loss and how this has affected his experiences of playing the organ over time.

“I am a 70 year old church organist, now with moderate hearing loss in both ears.  I have played the organ from the age of 18.  Other relevant personal information includes a career of Physics teaching up to A level and a 10-year spell of helping a local organ builder after retiring from full time teaching.

“My first experience of hearing loss was sudden and traumatic.  I was making a mobile phone call in 2011 in a busy shopping street and put the phone firmly to my ear to hear it answered.  At that exact instant someone called me and the phone rang while against my ear.  My head seemed to explode.  Luckily I was with family members who helped me to a seat and half an hour later I felt able to move on, but with the realisation that hearing in my left ear was damaged.  A visit to my GP the next day brought the news that my hearing might or might not recover.  It didn’t.  Hospital ENT consultation and an MRI scan followed but produced no answers, and an NHS hearing aid was soon supplied.  The loss was worst from 1kHz upwards, so consonants were missing from speech and the organ upperwork (the higher pitched stops) lost from my left ear.  But I still had a good right ear and I thought life was still mostly fine in spite of the chance in a million that had deafened me.

“I used the aid for conversation, but I took it out for playing as it distorted the organ sounds badly.  I read that one of the consequences of sudden hearing loss could be hyperacusis, an increased sensitivity to some sounds.  This explained why organ notes in the tenor octave were now sounding thick and unpleasant, with tenor A and B booming out from what had been a well-regulated quiet flute stop.  After a year or so I realised I was no longer hearing this.  This was my first taste of the brain’s ability to gradually improve an initially troubling situation, but little did I know that I would come to rely on this property of the brain a few years on down the line.

“In mid-2015 I became aware that my right ear was not hearing as well as before.  It showed up most on the organ, where I could no longer hear the highest notes of a 2 foot stop (a stop which plays 2 octaves above piano pitch), and I realised that sounds around 6 kHz and above were gone.  There was also a strange blocked feeling in my right ear, with intermittent popping, and I was aware that this right deafness did not feel the same as the left deafness.  A GP investigation started: I tried a nasal spray and inhalation.  I used olive oil and later sodium bicarbonate solution, but the blocked feeling persisted even though the ear drum was visible.  I had an audiogram and a right aid was supplied for what was then described as slight deafness.

“The GP investigation into the blocked feeling continued (now 6 months after it started) and in February 2016 microsuction was performed to remove the small amount of wax that was visible.  Initially all seemed well, but 3 hours later I realised my right hearing had gone the same way as my left.  Organ experimentation showed a fall-off at 2 kHz, not quite as bad as the left but bad enough to make the organ sound dreadful.  The hearing loss was now described as moderate in both ears and I found conversation difficult and TV listening often unintelligible.  My life seemed to collapse around me.  To lose the left hearing had been an accident, but the right deafness seemed the direct result of a GP procedure.  I felt bitter and defeated.  And my greatest relaxation and my defining role – as church organist – was lost.

“There were two separate but overlapping strands to my life with hearing loss.  One was searching for advice about hearing loss and music.  The other was an NHS investigation into my sudden right hearing loss.  This investigation took the form of two hospital ENT consultations, several audiograms and an MRI scan.  The noisy MRI machine accentuated the hyperacusis now present in the right ear but revealed no reasons for my problems.  The three audiograms were wildly inconsistent, one even showing normal hearing in the right ear, possibly because my tinnitus and hyperacusis were masking the true situation.  This was a time of fear and frustration until in May I was finally passed on to a wonderful senior audiologist who listened intently to my descriptions.  I could tell from the way she conducted my hearing test  (with quick repetitions and surprising frequency jumps) that she was using her considerable experience to “catch me out”.  I was delighted that she produced an audiogram that matched the view I had gleaned from listening note by note on the organ.  The aids were reprogrammed and at least speech in a quiet space became easily intelligible.  Furthermore the senior audiologist understood the importance of music in my life and ordered for me a pair of Phonak Nathos S+ MW aids which she said had better musical capabilities than the standard NHS aids.

“I felt I was making real progress now with audiology, but frustration soon set in with delay in the delivery of the aids, the substitution by management of a locum at one appointment resulting in a mis-setting of the aids, and repeated difficulties in ensuring that future appointments were made with the senior audiologist who had rescued me (and actually asked that all appointments be made with her).  There is a real personal difficulty here – does one complain and risk alienating the organisation that is trying to help?  In the end I have been quietly persistent and have eventually seen the person I need, but the missed opportunities and time lost have led to a roller coaster of hopes and disappointments lasting over 6 months.

“On the musical side things have at least been more under my control.  When the right hearing loss occurred, the unpleasant sound of the organ made it impossible to continue playing for services.  With hearing aids (even the later Phonak pair described above) the distortion was more than I could bear, and I tried playing with no aids.  Quiet music on 8’ flutes [1] was similar to what I remembered.  Louder music on 8’ and 4’ diapasons [2] sounded thick and muddy, and adding further upperwork (2’ stops and mixtures) was simply frustrating because there was no change.  All the majesty of the brighter sounds was lost, but I persisted in playing to myself frequently and for short periods using only the quiet foundation stops.  Over a period of time my musical memory and the adaptability of the brain enabled me to hear (or imagine) brighter sounds as the higher pitched stops were added.  Separately these stops were almost inaudible, and different notes had no discernible pitch difference.  But in the chorus there was an unmistakable element of brightness that enabled me to get some enjoyment from my playing and even contemplate returning to service playing a month or so after the sudden loss.  At this time I was still taking my aids out as I approached the organ, so my initial service playing showed up the inevitable problem – I did not know what was going on in the service.  I sometimes only knew when to play a hymn after a gesture from my wife in the front row of the congregation!

“This situation could not continue.  I persisted without aids but with the help of a small loudspeaker placed as close to my ear as possible.  It was driven from the church microphone system using its own amplifier with bass turned down and treble up to max.  In this way I played for some services although most were covered by pianists in the church doing a good job on the organ in spite of their initial fears.

“As time went on I got more used to the various programmes in the aids, and began to play to myself with aids in, the music programme selected, and the volume set almost to its lowest.  The distortions were many and varied.  All sounds above 500 Hz had a strange edge to them.  Soft flutes (with an almost pure sine waveform) had a curious repetitive hiccup caused, I believe, by the digital signal processing.  Rapid passages of music did not sound too bad, as individual notes did not last long enough for the distortion to offend, but slow sustained notes were horrid.  It was almost impossible to balance a solo stop with a suitable accompaniment on a different manual.  For instance, an oboe stop did not sound as it used to because so much of its energy is in the upper harmonics.  The accompanying flute stop has much of its energy in the fundamental, and the differing amplification of high and low frequencies, intended to correct my hearing, is not done precisely enough for me to judge the balance between a distorted oboe and a distorted flute.  Much of my playing is done by remembering combinations that used to work, but when I try a new piece (or, harder still, a different organ) it is almost impossible to judge whether I am producing reasonable sounds.

“There are some other aspects that make practising harder work than before.  Hyperacusis presents itself in odd and initially unsettling ways.  Treble F on a stopped flute is hugely louder than its neighbouring notes.  It actually shouts out and unbalances any chord containing it.  The same note played on a stop with a differing harmonic make up, such as a diapason rank, or an open flute, fits perfectly with its neighbouring notes.  Pitch discrimination has suffered.  Any given note sounds slightly sharper in one ear than in the other!  Chords which contain close harmonies can now set up a beating effect, presumably because of this discrepancy.  So all practice is now punctuated by repeated checks of strange out of tune sounds.  They are often caused by wrong notes, but they are equally often caused by my wrong ears.  In some cases repeated playing of a nasty sounding chord has taught my brain to accept it, and I can even return to a piece several weeks later and find that the chord I battled with and beat into submission has stayed reasonable.

“I do not expect the lost hearing to magically return, but I do hope that somehow in the future I will find better settings for the existing aids, or perhaps better aids, that might help me hear more of the organ as it really sounds.  The present problems are still considerable.  So should I have given in and stopped playing?  My answer is an emphatic no.  I am back to playing for about 3 services a month.  I have dispensed with my local treble-enhanced loudspeaker.  I use my aids on the music setting and have found a volume setting which is reasonably appropriate for organ sounds and much of the spoken word, and the clergy have helped by giving clear announcements of hymns.  I do get satisfaction from playing the right notes in the right order, even if the practice has taken longer and even though the sound of the instrument has lost a lot of its beauty and majesty.  I can still get my excitement from a loud conclusion with several ranks of mixture and pedal reeds.  And above all I once again get a buzz from leading a congregation which sings with enthusiasm and sensitivity as I play.

“And of course there is more to musical life than playing the organ.  I enjoy singing (although sometimes with difficulty) in a community choir.  Pitching notes is far more uncertain than it used to be, and the trick of checking by putting a finger in an ear is not possible with a hearing aid in the way!   I nearly stopped attending concerts in Birmingham Symphony Hall after a couple of disappointments, but then experimented with different seating positions.  Concerts have once more become enjoyable provided I pay for the best seats in the house.  Now that I can see the full orchestra clearly I find that I can hear and recognise individual instruments much better; another example of the brain’s remarkable ability to adapt and improve distressing situations.”

  1. The organist’s term 8’ means organ pipes at piano pitch (i.e. the middle C key plays a middle C sound).
  2. 4’ refers to pipes sounding one octave above piano pitch. Diapasons are the family of open metal pipes which give the basic organ tone, and they have more harmonic development than organ flutes.
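As a minimal sketch of the arithmetic behind these footnotes (assuming the usual convention that an 8’ stop sounds at written pitch and that halving the nominal pipe length raises the pitch by one octave):

import math

def octaves_above_written_pitch(stop_feet: float) -> float:
    # 8' -> 0 octaves, 4' -> 1 octave, 2' -> 2 octaves above written pitch
    return math.log2(8.0 / stop_feet)

for feet in (8, 4, 2):
    print(f"{feet}' stop: {octaves_above_written_pitch(feet):.0f} octave(s) above written pitch")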

 

Brian Henderson

Bromsgrove

March 2017   

From a Musician with a Hearing Loss

Rick Ledbetter is a professional musician and composer based in the US. In this blog post, he talks about his experiences using and programming hearing aids for music, and his advice for other musicians with a hearing loss.

“I have been a musician, a bass player and composer/arranger, for over 50 years, and I have a profound bilateral hearing loss. I have played professionally in all types of situations, from small clubs to arenas, and in recording studios from coast to coast. I have my own computer-based music production studio, and I have been programming my own aids for over a decade.

“Around 1989, at a recording session, the engineer told me that he had my headphones up very loud, and suggested I get my hearing tested. The test revealed my worst fear – I was losing my hearing, and subsequent tests revealed an increasing loss. It was harder and harder for me to hear conversation at rehearsals, some soft musical passages were hard to hear, my pitch perception fell, and some musicians got upset with me when I couldn’t hear what they were saying. I began to lose work.

“I bought my first pair of analog aids and they sounded terrible for music – tinny, harsh, and loud. They distorted easily and they had no low end, so they went straight into the drawer. Then I went digital, but I encountered the same issues at a greater purchase price. My audiologist tried hard, but unsuccessfully, to help me find a setting for live performance. I endured months of “try this and come back in two weeks”. With my background and my ability to focus on a particular sound and know its frequency, I could describe what I heard more precisely, and while this made his chore of solving the problems a bit easier, it was still trial and error. At each visit, I watched him operate the software, and I saw how much it was like digital audio production software. I wondered why I couldn’t do this myself, so I got the software and interface, and off I went. I learned the software and made improvements in the sound of my aids, using basic music production principles. So far, as my loss has progressed, I have had 5 sets of aids, and I have programmed them all.

“The journey hasn’t been easy. While, for me, the various hearing aid apps were fairly easy to learn, each make of aid had its own set of issues. Some didn’t have enough input-stage headroom to handle on-stage volume levels, so they produced the nasty, buzzy sound of digital distortion. All of them suffered from an over-reliance on sound processing: anti-feedback, noise reduction, speech enhancers, environmental adapters, directional microphone switching, and more. All of these adversely affect the sound of music. For example, anti-feedback does not know the difference between feedback and the sound of a sustained flute. And any type of sound processor that is active – that is, listening and trying to compensate in real time – gets totally confused by music. So, to properly adjust an aid for best music quality, all of that has to be turned off first.

“Traditionally, aids have a “music” program, which is usually just a single EQ curve. While this may work for sitting and listening to recorded music, it does not work well for live music or on-stage performance because it does not have enough dynamic range to accommodate both music and speech. Musicians need to be able to talk to one another in between playing, and they need a single program to work in all situations. We can’t be distracted by switching programs, so a single program must be created to address our needs.

“I think I have managed fairly well. At least in my case, I have found that reworking the traditional three-EQ-curve program produces much better results for live music. I could go into detail about this, but that’s another subject. Sometimes I also use a Bluetooth wireless device that sends audio directly into my aids. It’s marketed as a TV Streamer. Fortunately, its input requirements happen to be the same as a mixing desk’s, so I can use my aids as in-ear monitors, and use the cell phone app to mix between that signal and the sound from the aids’ microphones. A nice thing to have.

“I was asked to include a bit about working with audiologists, but I must be frank: in my experience there are audiologists who don’t know how to fit aids for musicians. So ask your prospective audiologist about their experience fitting for musicians before you buy – choosing the right audiologist is just as important as choosing the right hearing aid. Hearing professionals must understand that our professional reputation, our performance, and our livelihood, not to mention our stress levels, depend on our aids, and they must be right from the beginning. We cannot go through weeks of “try this and see”. We need our aids to work properly from day one.

“The audiologist would ideally have quality sound amplification gear capable of on-stage volume levels. Sorry, computer speakers won’t do the job. A real time analyzer is a valuable tool to test the aids’ performance in the ear. A collection of sound samples is handy, but note that recorded music is compressed, so you will not hear the full dynamic range of the samples, though they are still useful. If possible, you should bring your instrument to the office and play it while adjustments are made, until it sounds right to you.

“But this is highly critical: to properly adjust an aid for best music quality, all of the sound processing must be turned off. You cannot get good sound quality if the aids’ sound processors are active while you are hearing music. For example, anti-feedback thinks a flute is feedback, so it will reduce the volume of a sustained flute note and “hunt” while the flute is being played, in an attempt to stop what it thinks is feedback. So you will hear treble sounds warble and drop out. Of course, this is unacceptable.

“But in the meantime, let me offer some solutions:

“Miscommunication is a big problem in the process of getting the aids set right. The patient and the audiologist need to establish a common language to describe and understand what the hearing aid wearer is experiencing. To use colour as a comparison, your definition of red may not be the same as another’s. So if you tell an audiologist “too screechy”, what gets adjusted could be 3000Hz, when what you meant is actually at 1500Hz. Or worse, an adjustment is made without regard to how the various sound processors may be causing the problem, or affecting the adjustment. So a standard language is needed. To that end, a few tools are useful:

1 – A good bar-graph real time analyzer app with screenshot capture capability for your cell phone. Most of them have a snapshot feature, so get one that has this. This allows you to save the readout for recall at a later time. The bar-graph type is easier to read when determining at what frequency and at what volume level the problem occurs. Note that Android phones have a lower audio ceiling than iPhones do, but they can still be relied upon up to 85dB. Many are free, and many are low cost. The professional apps will, of course, give better results at a greater purchase price.

2 – A pitch-to-frequency chart, to translate what is off on your musical instrument into numbers. There are several on the internet, some that lay out a piano, others that include other instruments. Here is a link to one I like:

http://obiaudio.com/eq-chart/

A chart of the frequencies of speech is a good thing to have, too. Audiologists have them.

3 – A list of the frequencies of everyday noisemakers. For instance, a coffee grinder is about 750Hz, dropping a metal fork or spoon into a steel sink is about 1000Hz, flushing the toilet (yes, I’m a Yank) is 500Hz to 750Hz, harsh sibilants are about 4000Hz, and the sound of your own voice through your aids is about 500Hz.

“The cell phone real time analyzer takes a lot of guesswork out of the process. It lets you see the frequencies of what you are hearing. When you have problems hearing, open it, take a readout, then save it. The readout will show what you are hearing, at what frequency, and how loud it is. Hearing aids have three EQ curves to adjust, each for a different volume level (dB): soft – 50dB, normal – 65dB, and loud – 86dB. It is very important to know at what volume level something is too loud or too soft, so the audiologist can make the exact adjustment. In other words, while conversation at soft levels may sound just fine to you, music, which is much louder, may not, so it requires adjusting the loud EQ curve.
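As a minimal sketch of that last point, here is one way a level read off the analyzer could be mapped onto the three EQ curves described above. This is an illustration only: the function name is hypothetical, and the 50/65/86 dB anchors are simply taken from the paragraph above.

def eq_curve_for_level(level_db: float) -> str:
    """Pick the nearest input-level EQ curve for a measured sound level."""
    anchors = {"soft (50 dB)": 50.0, "normal (65 dB)": 65.0, "loud (86 dB)": 86.0}
    return min(anchors, key=lambda name: abs(anchors[name] - level_db))

# Example: a problem passage measured at about 82 dB on the analyzer
print(eq_curve_for_level(82))   # -> "loud (86 dB)", so ask for the loud curve to be adjusted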

“The pitch-to-frequency charts work great, too. Just sit at a piano and play each note, pay attention to which notes are too loud and which are too soft, and write down those notes. Then look at the chart and translate them into the corresponding frequencies. The audiologist can use this information to make the proper adjustments to your aids.
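For those who would rather compute the numbers than read them from a printed chart, here is a minimal sketch, assuming equal temperament with A4 = 440 Hz (the function and example note are illustrative only):

NOTE_OFFSETS = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def note_to_frequency(name: str, octave: int) -> float:
    """Frequency in Hz, e.g. ("A", 4) -> 440.0, ("C", 4) -> about 261.6."""
    semitones_from_a4 = (octave - 4) * 12 + NOTE_OFFSETS[name] - NOTE_OFFSETS["A"]
    return 440.0 * 2 ** (semitones_from_a4 / 12)

# Example: a note that always sounds too loud, say F5
print(round(note_to_frequency("F", 5), 1))   # about 698.5 Hz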

“In conclusion, I hope this article will help to clear up some things. While the technology has greatly improved over the years, a lot of problems still remain to be solved. I trust that the hearing aid business will rise to the challenge and meet the needs of musicians. After all, whatever is learned and addressed will go far towards improving aids for the average user.”

RL

If you would like to correspond with Rick, please send us your email and we will forward it to him.

And please do continue to email the project team with your ideas and experiences: musicandhearingaids@leeds.ac.uk

Hearing aids at the Thackray Medical Museum

Going back in time: hearing aids through the decades

In March this year, just a few weeks into our project ‘Hearing aids for music’, the project team visited the Thackray Medical Museum in Leeds.

We wanted to learn something about the history of hearing aids from the important collection of amplification and audiology equipment housed there.

But we also had another agenda… We wanted to see if there was any evidence in the collection of hearing aids having been used to amplify music – not just speech.
