When Alexa Can’t Understand You

[Image: Someone looking frustrated while talking to an Amazon Echo.]

Excerpt from this article:

In the United States, 7.5 million people have trouble using their voice and more than 3 million people stutter, which can make it difficult for them to fully benefit from voice-enabled technology. Speech disabilities can stem from a wide variety of causes, including Parkinson's disease, cerebral palsy, traumatic brain injuries, and even age. In many cases, those with speech disabilities also have limited mobility and motor skills. This makes voice-enabled technology especially beneficial for them, as it doesn't involve pushing buttons or tapping a screen. For disabled people, this technology can provide independence, making speech recognition that works for everyone all the more important.

Yet voice-enabled tech developers struggle to meet their needs. People with speech disabilities use the same language and grammar as everyone else. But their speech musculature—the tongue and jaw, for example—is affected, causing consonants to slur and vowels to blend together, says Frank Rudzicz, an associate professor of computer science at the University of Toronto who studies speech and machine learning. These differences present a challenge in developing voice-enabled technologies.
