Hearing aids are a great tool that helps many people recover much of their lost hearing and enjoy a fuller life, but they are not without their problems. Hearing aid wearers are well aware of the difficulties caused by background noise. It’s not unusual for people to avoid social settings entirely because of how hard it is to distinguish conversation from unwanted noise.

There is hope, however. A team of hearing scientists and engineers at Ohio State University believe they’ve made a breakthrough: a new algorithm based on a technique called machine learning. DeLiang “Leon” Wang, professor of computer science and engineering at Ohio State, and doctoral student Yuxuan Wang are training the algorithm to pick out speech by exposing it to different words in the midst of background noise. The processing is done by a special type of neural network called a “deep neural network,” so named because its learning takes place in a deep, layered structure inspired by the human brain.

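For readers curious about what “training a network to separate speech from noise” actually looks like, here is a minimal sketch in Python. It is not the researchers’ code: their system works on much richer acoustic features with far deeper networks. This toy version learns a binary mask over the frequency bins of a simulated noisy spectrum, keeping the bins dominated by speech and silencing the rest, which is the general idea behind mask-based separation in this line of research. All sizes, the toy data, and the training details are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BINS = 64   # frequency bins per time frame (illustrative)
HIDDEN = 128  # hidden-layer width (illustrative)
LR = 0.05     # learning rate (illustrative)

# Toy data: "speech" energy occupies a random band of bins;
# low-level noise covers every bin.
def make_frame():
    speech = np.zeros(N_BINS)
    start = rng.integers(0, N_BINS - N_BINS // 4)
    speech[start : start + N_BINS // 4] = rng.uniform(0.5, 1.0, N_BINS // 4)
    noise = rng.uniform(0.0, 0.4, N_BINS)
    noisy = speech + noise
    mask = (speech > noise).astype(float)  # 1 = speech-dominated bin
    return noisy, mask

# One hidden layer: noisy spectrum in, per-bin mask probability out.
W1 = rng.normal(0.0, 0.1, (N_BINS, HIDDEN)); b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, N_BINS)); b2 = np.zeros(N_BINS)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Train with plain stochastic gradient descent on a cross-entropy loss.
for step in range(3000):
    x, y = make_frame()
    h = np.tanh(x @ W1 + b1)       # hidden representation
    p = sigmoid(h @ W2 + b2)       # predicted mask probabilities
    dp = p - y                     # gradient at the output
    dh = (dp @ W2.T) * (1.0 - h**2)
    W2 -= LR * np.outer(h, dp); b2 -= LR * dp
    W1 -= LR * np.outer(x, dh); b1 -= LR * dh

# "Denoise" a fresh frame: keep only the bins the network labels as speech.
x, y = make_frame()
pred = sigmoid(np.tanh(x @ W1 + b1) @ W2 + b2) > 0.5
cleaned = x * pred
print("mask accuracy on a fresh frame:", (pred == y.astype(bool)).mean())
```

In this sketch the network simply learns which frequency bins carry more speech energy than noise energy; the real systems tackle the much harder case where speech and noise overlap and shift from moment to moment.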
In the Journal of the Acoustical Society of America, they describe how these latest developments in neural networks boosted test subjects’ recognition of spoken words from as low as 10 percent to as high as 90 percent.

Researchers are hopeful that this work will lead to a new generation of hearing aids that eliminate, or greatly diminish, the problem of background noise for hearing aid users.

Tests showed that hearing-impaired people who had the benefit of the algorithm could make out speech in noise better than listeners with no hearing loss at all. For a mind-boggling audio sample of this technology, visit our blog and click on the link in this article. It sounds a little artificial in my opinion, but the technology is still brand new, and it is already truly amazing.