Voice Squatting Attacks Impact Amazon Alexa and Google Home Assistants

Amazon said Wednesday that developers can now apply to test the new voices, which were built to sound like U.S. English speakers. In an adventure story or game, for example, developers can give distinct voices to different characters.

The free voices are provided through Polly, the Amazon Web Services text-to-speech service.

Specifically, the company has released a developer preview that will let skill developers choose from a wider range of Alexa voices for use in their projects.
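In a skill's response, a Polly voice is typically selected with an SSML voice tag. The sketch below builds such a response for two story characters; the voice names used here (Matthew, Joanna) are illustrative examples of Polly voices, not a confirmed list of the preview voices:

```python
# Build an SSML response that gives two story characters distinct voices.
# The voice names below are illustrative; the actual preview voice list
# comes from Amazon's developer documentation.

def character_line(voice, text):
    """Wrap one character's line in an SSML <voice> tag."""
    return f'<voice name="{voice}">{text}</voice>'

def build_ssml(lines):
    """Combine (voice, text) pairs into a single SSML document."""
    body = " ".join(character_line(v, t) for v, t in lines)
    return f"<speak>{body}</speak>"

ssml = build_ssml([
    ("Matthew", "Who goes there?"),
    ("Joanna", "A traveler, seeking shelter."),
])
print(ssml)
```

The skill returns this SSML string as its output speech, and the assistant renders each tagged span in the named voice.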

Developers can already use multiple voices in their skills, but the existing methods, such as uploading MP3 files, are more cumbersome and rigid. Amazon will provide more information to those who are selected for the preview.

This means that developers can now customize voice apps using eight distinct voices in addition to Alexa's own. You can cache and replay Amazon Polly's speech output to prompt callers through interactive voice response (IVR) systems, such as Amazon Connect. Amazon Polly can generate speech in dozens of languages, making it easy to add speech to applications with a global audience, such as RSS feeds, websites, or videos.
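Caching Polly output, as described above, usually means keying the audio on the text and voice so a repeated prompt never triggers a second synthesis call. A minimal sketch, where `synthesize` stands in for a wrapper around Polly's SynthesizeSpeech API and the cache directory name is illustrative:

```python
import hashlib
import os

CACHE_DIR = "polly_cache"  # illustrative local cache location

def cache_key(text, voice):
    """Deterministic filename for a (text, voice) pair."""
    digest = hashlib.sha256(f"{voice}:{text}".encode("utf-8")).hexdigest()
    return f"{digest}.mp3"

def get_speech(text, voice, synthesize):
    """Return audio bytes for a prompt, calling `synthesize`
    (e.g. a wrapper around Polly's SynthesizeSpeech API)
    only on a cache miss."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, cache_key(text, voice))
    if os.path.exists(path):
        with open(path, "rb") as f:
            return f.read()
    audio = synthesize(text, voice)
    with open(path, "wb") as f:
        f.write(audio)
    return audio
```

An IVR system would call `get_speech` for each prompt; only the first request for a given prompt and voice reaches Polly, and replays are served from disk.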

The entire idea of a voice masquerading attack is to prolong the interaction time of a running app without telling the user. A malicious skill can pick from the same handful of available voices (made possible by Polly, Amazon's text-to-speech service) to keep the impersonation convincing.

  • Terrell Bush