
      Detecting sound is easy - the hard bit is getting a machine to know what it means

      Whether it’s the sound of a doorbell, or complex speech patterns, sound recognition technology will transform everyone’s lives - including people with hearing loss and deafness. Kevin Taylor, our Product Technologist, tells us more.

      Scientists estimate that airborne hearing evolved over 350 million years ago when animals first colonised the land. However, creatures that lived in the oceans may have been sensitive to vibrations long before that.

      It is likely that hearing came about as a survival strategy - a wild animal has a greater chance of survival if it knows the sound of a predator - and hearing has evolved in all sorts of ways. Take a bat, for example: its finely tuned hearing has evolved to detect ultrasonic sounds that bounce off objects. That way, it can navigate and detect prey. We humans use sound to warn of danger, for communication, and for pleasure.

      We’re good at sound recognition and so are animals, but could our technology ever match nature’s hearing abilities? And if it could, would there be benefits for people with hearing loss and deafness?

      German-born Emile Berliner solved the easy bit over a hundred years ago. He invented the microphone, a mechanical equivalent of the ear that converts sound into an electrical signal.

      We are now making inroads into the hard part: getting machines to make sense of what they hear. Or are we?

      Can technology recognise sounds critical to our safety?

      How do we get a machine to ‘know’ that a sound picked up through its microphone is a siren, a doorbell or conversation? We may not have to - what we can do is create an illusion. We can easily get hardware to flash a light when it detects the sound of a doorbell, by checking the sound’s level (loudness), duration, and frequency (pitch). The problem is that the hardware may flash the light for other sounds too.
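      To make the idea concrete, here is a minimal sketch of that level-duration-frequency check in Python. Everything in it is illustrative - the 8 kHz sample rate, the thresholds and the doorbell’s assumed 600-800 Hz pitch are made-up values, not taken from any real product:

```python
import math

SAMPLE_RATE = 8000  # Hz, an assumed rate for this sketch

def rms(samples):
    """Root-mean-square level: a simple measure of loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def dominant_frequency(samples):
    """Crude pitch estimate from upward zero crossings (fine for a pure tone)."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / SAMPLE_RATE
    return crossings / duration

def looks_like_doorbell(samples, min_level=0.1,
                        freq_range=(600, 800), min_seconds=0.5):
    """'Flash the light' only if duration, loudness and pitch all match."""
    if len(samples) / SAMPLE_RATE < min_seconds:
        return False
    if rms(samples) < min_level:
        return False
    return freq_range[0] <= dominant_frequency(samples) <= freq_range[1]

# A one-second 700 Hz tone at moderate level passes all three checks.
bell = [0.5 * math.sin(2 * math.pi * 700 * t / SAMPLE_RATE)
        for t in range(SAMPLE_RATE)]
print(looks_like_doorbell(bell))  # True

# A 300 Hz tone has the wrong pitch, so the light stays off.
hum = [0.5 * math.sin(2 * math.pi * 300 * t / SAMPLE_RATE)
       for t in range(SAMPLE_RATE)]
print(looks_like_doorbell(hum))  # False
```

      The weakness the paragraph describes is visible here: any loud, half-second sound near 700 Hz - a telephone, a kettle alarm - would also trigger it.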

      A step further is a machine that can respond to a particular sound either through a complex learning process, or by matching it against a database of stored sounds. There are already devices and applications that use this sort of technology to identify sounds in the environment.
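      One simple way to match a sound against a database of stored sounds is to compare coarse spectral fingerprints. The sketch below is purely illustrative and not any particular product’s method: it measures the energy in a handful of frequency bands (using the Goertzel algorithm) and returns the stored sound whose fingerprint is nearest:

```python
import math

SAMPLE_RATE = 8000  # Hz, an assumed rate for this sketch

def band_energy(samples, freq):
    """Goertzel algorithm: signal energy at one target frequency."""
    w = 2 * math.pi * freq / SAMPLE_RATE
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def fingerprint(samples, bands=(500, 700, 1000, 1500)):
    """Normalised energy in a few frequency bands: a crude signature."""
    energies = [band_energy(samples, f) for f in bands]
    total = sum(energies) or 1.0
    return [e / total for e in energies]

def identify(samples, database):
    """Return the label of the stored fingerprint nearest to this sound."""
    fp = fingerprint(samples)
    return min(database, key=lambda label: sum(
        (a - b) ** 2 for a, b in zip(fp, database[label])))

def tone(freq, seconds=0.5):
    """Generate a pure test tone."""
    return [math.sin(2 * math.pi * freq * t / SAMPLE_RATE)
            for t in range(int(seconds * SAMPLE_RATE))]

# Enrol two known sounds, then identify a fresh recording.
db = {"doorbell": fingerprint(tone(700)),
      "smoke alarm": fingerprint(tone(1500))}
print(identify(tone(1500), db))  # smoke alarm
```

      Real systems use far richer features than four frequency bands, but the shape is the same: enrol known sounds, then classify new audio by similarity.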

      One example is the Braci Sound Alert app. It responds to nearby sounds, such as a doorbell, and can be set up to give a vibration alert on the phone, or relay the alert to another device. The app is not smart enough to tell you if the sound is too far away to pick up and process. So whether you’d want to rely on it for safety-critical sounds, such as a fire alarm, is a good question.

      Sonicco has a different approach. Their soon-to-be-launched Orb is a dedicated device that can learn a particular alarm sound. Once it recognises the sound, it activates a wirelessly connected vibrating pad that goes under a pillow. A typical use is listening out for fire alarms in hotel rooms: the Orb learns the sound of the room’s fire alarm, which regulations require to be above a certain loudness level.

      Developments in speech-to-text technology

      Human speech is the most complex sound, and we learn to make sense of it at an early age. Technology can now respond to these complex sounds with a high degree of accuracy. Although speech-to-text technology has been around for a while, it’s improving all the time, to the point where it can work out context in a conversation and automatically add punctuation. Try this with the Geemarc TextHear app (free on Android). Download the app and say this into your device’s microphone: ‘This year I am going to Greece for a holiday’. Then say ‘I need to grease the door hinges’. TextHear ‘knows’ from context whether you mean ‘Greece’ or ‘grease’ in each sentence.

      Speech-to-text technology is set to have huge benefits for people with hearing loss and deafness, from translating ad-hoc announcements at train stations in real time to automated text relay over phone networks. But will each word, sentence and paragraph ever have real meaning to a machine, and would we want it to?

      When we think of the word ‘Greece’, we know it’s a country in southern Europe. But a word can conjure up so much more - a thought, a feeling, or a memory. Can a machine ever go that deep? For sound and speech to be truly meaningful, might the machine need to be self-aware? If that ever comes about, then human ingenuity is truly remarkable. And ‘it’ probably will ask us to stop calling it a machine.

      By: Kevin Taylor | 17 September 2018
      A futuristic robotic ear
