      Detecting sound is easy - the hard bit is getting a machine to know what it means

Whether it’s the sound of a doorbell or the complex patterns of speech, sound recognition technology will transform everyone’s lives - including the lives of people with hearing loss and deafness. Kevin Taylor, our Product Technologist, tells us more.

      Scientists estimate that airborne hearing evolved over 350 million years ago when animals first colonised the land. However, creatures that lived in the oceans may have been sensitive to vibrations long before that.

It is likely that hearing came about as a survival strategy - a wild animal has a greater chance of survival if it knows the sound of a predator - and hearing has evolved in all sorts of ways. Take a bat, for example: its finely tuned hearing has evolved to detect ultrasonic sounds that bounce off objects. That way, it can navigate and detect prey. We humans use sound to warn of danger, for communication, and for pleasure.

      We’re good at sound recognition and so are animals, but could our technology ever match nature’s hearing abilities? And if it could, would there be benefits for people with hearing loss and deafness?

      German-born Emile Berliner solved the easy bit over a hundred years ago. He invented the microphone, a mechanical equivalent of the ear that converts sound into an electrical signal.

      We are now making inroads into the hard part: getting machines to make sense of what they hear. Or are we?

      Can technology recognise sounds critical to our safety?

How do we get a machine to ‘know’ that a sound picked up through its microphone is a siren, a doorbell or conversation? We may not have to – what we can do is create an illusion. We can easily get hardware to flash a light when it detects the sound of a doorbell, simply by checking sound level (loudness), duration, and frequency (pitch). The problem is that the hardware may flash the light for other sounds too.
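To make that concrete, here is a minimal sketch in Python (using NumPy) of the naive approach: treat a buffer of audio as a doorbell whenever it is loud enough and its dominant pitch sits in a doorbell-like band. The thresholds and the 700 Hz ‘chime’ are illustrative assumptions, not figures from any real product - and, as noted above, plenty of other sounds would trigger it just as readily.

```python
# Naive 'doorbell' detector: loudness plus dominant frequency only.
# All thresholds below are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 16_000          # samples per second
LOUDNESS_THRESHOLD = 0.1      # RMS level above which the sound counts as "loud"
FREQ_BAND = (600.0, 800.0)    # assumed doorbell chime band, in Hz

def looks_like_doorbell(buffer: np.ndarray) -> bool:
    """True if the buffer is loud and its dominant pitch falls in FREQ_BAND."""
    rms = np.sqrt(np.mean(buffer ** 2))                      # loudness
    if rms < LOUDNESS_THRESHOLD:
        return False
    spectrum = np.abs(np.fft.rfft(buffer))                   # frequency content
    freqs = np.fft.rfftfreq(len(buffer), d=1.0 / SAMPLE_RATE)
    dominant = freqs[np.argmax(spectrum)]                    # loudest pitch
    return FREQ_BAND[0] <= dominant <= FREQ_BAND[1]

# Demo with a synthetic half-second 700 Hz chime: the detector fires,
# but it would fire for any other loud sound in the same band too.
t = np.linspace(0, 0.5, int(SAMPLE_RATE * 0.5), endpoint=False)
chime = 0.3 * np.sin(2 * np.pi * 700 * t)
print(looks_like_doorbell(chime))   # True
```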

A machine that responds to a particular sound because it has learned it, or because it matches the sound against a database of stored sounds, is a step further. And there are already devices and applications that use this sort of technology to identify sounds in the environment.
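As a rough illustration of the ‘database of stored sounds’ idea, the sketch below reduces each known sound to a crude spectral fingerprint and matches an incoming clip to the closest stored one by cosine similarity. Real products use much richer features and learned models; the labels, threshold and synthetic tones here are assumptions for the example only.

```python
# Toy sound identification by matching against stored spectral fingerprints.
import numpy as np

SAMPLE_RATE = 16_000

def fingerprint(clip: np.ndarray) -> np.ndarray:
    """Normalised magnitude spectrum, used here as a crude sound fingerprint."""
    spectrum = np.abs(np.fft.rfft(clip))
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def identify(clip: np.ndarray, database: dict, threshold: float = 0.8) -> str:
    """Return the label of the closest stored sound, or 'unknown' if none match."""
    probe = fingerprint(clip)
    best_label, best_score = "unknown", threshold
    for label, stored in database.items():
        score = float(np.dot(probe, stored))     # cosine similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Build a tiny database from synthetic example sounds and query it.
t = np.linspace(0, 0.5, int(SAMPLE_RATE * 0.5), endpoint=False)
database = {
    "doorbell": fingerprint(np.sin(2 * np.pi * 700 * t)),
    "smoke alarm": fingerprint(np.sin(2 * np.pi * 3100 * t)),
}
print(identify(0.5 * np.sin(2 * np.pi * 3100 * t), database))   # smoke alarm
```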

      One example is the Braci Sound Alert app. It responds to nearby sounds, such as a doorbell, and can be set up to give a vibration alert on the phone, or relay the alert to another device. The app is not smart enough to tell you if the sound is too far away to pick up and process. So whether you’d want to rely on it for safety-critical sounds, such as a fire alarm, is a good question.

Sonicco has a different approach. Their soon-to-be-launched Orb is a dedicated device that can learn a particular alarm sound. Once it recognises the sound, it activates a wirelessly connected vibrating pad that goes under a pillow. A typical use is to listen out for fire alarms in hotel rooms: the Orb learns to recognise the sound of the fire alarm, which regulations state must be above a certain loudness level in the hotel room.

      Developments in speech-to-text technology

Human speech is the most complex sound of all, and we learn to make sense of it at an early age. Technology can now respond to these complex sounds with a high degree of accuracy. Although speech-to-text technology has been around for a while, it’s improving all the time, to the point where it can work out context in a conversation and automatically add punctuation. Try this with the Geemarc TextHear app (free on Android). Download the app and say this into your device’s microphone: ‘This year I am going to Greece for a holiday’. Then say: ‘I need to grease the door hinges’. TextHear ‘knows’ from the context whether you mean ‘Greece’ or ‘grease’ in each sentence.
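If you’d rather try the ‘Greece’/‘grease’ experiment from a script than a phone app, the sketch below uses the third-party Python package speech_recognition with Google’s free web recogniser. This is not TextHear’s own engine, just a stand-in for the same kind of speech-to-text; it needs a microphone and an internet connection.

```python
# Minimal speech-to-text example using the speech_recognition package.
import speech_recognition as sr

recognizer = sr.Recognizer()

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)   # calibrate to room noise
    print("Say: 'This year I am going to Greece for a holiday'")
    audio = recognizer.listen(source)

try:
    # The recogniser picks 'Greece' or 'grease' from the surrounding words.
    print(recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Speech was not recognised")
```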

Speech-to-text technology is set to have huge benefits for people with hearing loss and deafness, from transcribing ad-hoc announcements at train stations in real time to automated text relay over phone networks. But will each word, sentence and paragraph ever have real meaning to a machine, and would we want it to?

      When we think of the word ‘Greece’, we know it’s a country in southern Europe. But a word can conjure up so much more - a thought, a feeling, or a memory. Can a machine ever go that deep? For sound and speech to be truly meaningful, might the machine need to be self-aware? If that ever comes about, then human ingenuity is truly remarkable. And ‘it’ probably will ask us to stop calling it a machine.

      By: Kevin Taylor | 17 September 2018
A futuristic robotic ear
