Current hearing aids can monitor the sound of the environment around them and automatically adapt their settings to what they calculate to be the best sound for that environment. However, they do this using generalised preferences and assumptions, not an individual user's needs. Nor can a hearing aid always work out what a user wants from a given environment. For example, you could be walking along a noisy street on your own, or with someone you want to talk to. The hearing aid will struggle to choose the right setting when the only information it receives is the sound of a noisy street. Without the social context, it is difficult for it to determine which settings would help most.
Hearing aid users can select specific programmes to get around this lack of social context. For example, you can have a programme for group conversations, so the hearing aid knows to listen for speech in noise rather than to reduce all noise, which might be what you want when you're walking along a noisy street. However, having to remember to change the programme, and then actually doing it, can distract the user and increase the effort needed to follow a conversation.
Machine learning is a sub-field of artificial intelligence, which aims to simulate human intelligence to solve problems. With machine learning, a machine (or computer) copies the processes people use to learn, improving how it performs a task through experience, just as we learn from ours. Given a task such as learning to play a video game, it can analyse the game, try different strategies, observe the outcomes and learn from them in order to achieve the best result.
Machine learning for hearing aids
Using machine learning in hearing aids has a number of potential benefits. Until now, researchers and developers have tried to separate speech from noise by identifying the gaps in people's speech, working out which background sounds fill those gaps, and then using that information to filter the same noise out of the speech itself.
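The gap-based approach described above can be sketched in a few lines of code. This is a simplified illustration, not how any real hearing aid works: the sound is entirely synthetic, and the thresholds and frame sizes are invented for the example. The idea is the same, though: find the quiet gaps, measure the noise there, and subtract that noise everywhere.

```python
import numpy as np

# Illustrative sketch only: estimate background noise from the "gaps"
# (low-energy frames) in a recording, then subtract that estimate from
# every frame. All signals here are synthetic.

rng = np.random.default_rng(0)
frame_len = 256
n_frames = 40

# Synthetic recording: constant background noise throughout, with a
# "speech" tone present only in the second half of the frames.
frames = []
for i in range(n_frames):
    noise = 0.1 * rng.standard_normal(frame_len)
    t = np.arange(frame_len)
    speech = np.sin(2 * np.pi * 0.05 * t) if i >= n_frames // 2 else 0.0
    frames.append(speech + noise)
frames = np.array(frames)

# 1. Find the quiet frames -- the gaps between speech.
energy = (frames ** 2).mean(axis=1)
gaps = energy < np.median(energy)

# 2. Average the spectrum of the gap frames to estimate the noise.
spectra = np.fft.rfft(frames, axis=1)
noise_mag = np.abs(spectra[gaps]).mean(axis=0)

# 3. Subtract the noise estimate from every frame (never going below
#    zero), keeping each frame's original phase.
clean_mag = np.maximum(np.abs(spectra) - noise_mag, 0.0)
clean = np.fft.irfft(clean_mag * np.exp(1j * np.angle(spectra)), axis=1)

print(f"noise energy before: {frames[gaps].var():.4f}")
print(f"noise energy after:  {clean[gaps].var():.4f}")
```

Running this shows the gap frames ending up much quieter after subtraction, while the speech frames keep most of their signal.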
Machine learning could take this much further. A computer programme could be loaded with a large number of real-life sounds, including thousands of speech sounds and background noises, and then be taught how to use a hearing aid's digital processing capabilities to "filter" the noise. To do this, the machine learning programme would develop ways to separate out different sounds and give them markers so that it can distinguish between them. The system would then learn to recognise these markers and separate the noise from the speech, improving through its own experience as it listens to similar sound samples.
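The "markers" idea above can be illustrated with a toy classifier. Everything here is invented for the example: the feature vectors stand in for whatever measurements a real system would take of each sound, and each class's marker is simply the average of its labelled examples. New sounds are labelled by whichever marker they fall closest to.

```python
import numpy as np

# Toy illustration of learning "markers" for different kinds of sound.
# The features are made up; a real system would measure real audio.

rng = np.random.default_rng(1)

def make_samples(centre, n):
    """Synthetic feature vectors scattered around a class centre."""
    return centre + 0.3 * rng.standard_normal((n, len(centre)))

# Invented feature centres for "speech" and "noise" sound samples.
speech_centre = np.array([1.0, 0.2, 0.8])
noise_centre = np.array([0.1, 1.0, 0.1])

train = {
    "speech": make_samples(speech_centre, 50),
    "noise": make_samples(noise_centre, 50),
}

# Learning step: the marker for each class is the mean of its examples.
markers = {label: samples.mean(axis=0) for label, samples in train.items()}

def classify(sample):
    """Label a sound by its closest learned marker."""
    return min(markers, key=lambda lbl: np.linalg.norm(sample - markers[lbl]))

new_sound = make_samples(speech_centre, 1)[0]
print(classify(new_sound))
```

The more labelled examples the system hears, the better its markers become, which is the "learning through its own experience" described above.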
Early-stage research suggests that machine learning could improve a hearing aid user's ability to understand words obscured by noise from 10 per cent to 90 per cent.
Where else can this kind of machine learning help people with hearing loss?
This kind of technology isn't limited to hearing aids, either. Monitoring and manipulating sounds in this way has the potential to improve speech recognition on mobiles and smartphones, and to help staff on noisy factory floors communicate without having to remove their ear protection. It would also go a long way towards protecting the hearing of military personnel, who risk noise damage because there is currently no easy way for them to communicate while wearing hearing protection.
Find out more
You can find out more about how technology is improving the lives of people with hearing loss by signing up for our email newsletter, Soundbite. It contains all the latest news from the worlds of hearing technology and research. It only takes a minute to sign up!