ATLANTA — Virtual assistants, like Apple’s Siri and Google Home, use machine learning to make our lives easier, but new research suggests that convenience could have a cost.
Researchers at the University of California, Berkeley discovered that virtual assistants may respond to more than a human voice. Malicious attacks can be hidden in music, white noise, even speech, and trick your smart device.
“I was surprised that it was possible,” explained Nicholas Carlini, who helped discover the targeted attacks on speech-to-text systems while he was a Ph.D. student at UC Berkeley. “You wouldn’t expect that you can play a piece of music and have some device be absolutely convinced that what you were doing was playing some person talking.”
Carlini said small changes to sounds, like manipulating the pitch and loudness, can change how the machines translate sounds, but not how humans hear them. He said he was surprised at how quickly he was able to trick the translation software.
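For readers curious about the mechanics, the sketch below illustrates the core idea in code: a perturbation far quieter than the underlying audio can still be enough to change what a speech-to-text model transcribes. This is a simplified illustration, not Carlini’s actual attack code; the `transcribe` function is a hypothetical stand-in for a real speech-to-text model, and the perturbation here is random rather than optimized.

```python
# Illustrative sketch only: shows how small an "inaudible" perturbation can be.
# `transcribe` is a hypothetical placeholder, not a real speech-to-text API.
import numpy as np

def transcribe(audio: np.ndarray) -> str:
    """Hypothetical speech-to-text model (placeholder)."""
    raise NotImplementedError

sample_rate = 16_000                          # 16 kHz, typical for speech models
t = np.arange(sample_rate) / sample_rate      # one second of audio
original = 0.5 * np.sin(2 * np.pi * 440 * t)  # stand-in for a music clip

# Add a tiny perturbation, a fraction of a percent of full scale.
rng = np.random.default_rng(0)
delta = 0.002 * rng.standard_normal(original.shape)
adversarial = np.clip(original + delta, -1.0, 1.0)

# How loud is the change relative to the original signal?
signal_power = np.mean(original ** 2)
noise_power = np.mean((adversarial - original) ** 2)
snr_db = 10 * np.log10(signal_power / noise_power)
print(f"Perturbation sits {snr_db:.1f} dB below the signal -- imperceptible to most listeners.")

# In the real attacks, delta is not random: it is optimized with gradients so that
# transcribe(adversarial) returns a command the attacker chooses, while a human
# listener still hears only the original music or speech.
```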
"It really only took us a couple of days to actually make these attacks work on the systems we tried to attack," Carlini said. He showed Channel 2 Action News examples of how a potentially malicious attack could be hidden. He said hiding commands in an online video, or in music would be likely scenarios a criminal could use to get access to a virtual assistant.
Some virtual assistants have built-in security features to mitigate attacks. A Google spokesperson said by email that Google Home has a Voice Match feature designed to prevent the assistant from responding to requests relating to actions such as shopping and accessing personal information.
Privacy concerns
Atlanta privacy attorney Bess Hinson said Carlini’s research is the latest example of the dangers of virtual assistants.
“There are very sophisticated criminals who work every day to find the vulnerabilities in the devices we use,” Hinson said. She is so skeptical of the devices collecting information on users, legally or illegally, that she refuses to keep one in her home.
“You are literally inviting companies into your home to listen to conversations that you may be having,” Hinson said. She said consumers should be aware of companies using virtual assistants to collect information on buyer habits.
Hinson said some smart devices, like Bluetooth-connected dolls, have been banned in some countries because of concerns criminals could use them to reach children.
“We like this technology because it's convenient, and because it's convenient we think there's no price, when in fact there is a price,” Hinson said.
Pervasiveness of machine learning
“I think it’s really transforming our lives, not just for convenience,” said Emory University computer science professor Li Xiong. She said the machine learning behind virtual assistants also fuels the artificial intelligence-based medicine she researches.
“What we want to do is harness the value of the big data without compromising the privacy and confidentiality of the data,” Xiong said. From a computational perspective, she said, collecting data without putting people at risk is an optimization problem.
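One common way researchers formalize that tradeoff is differential privacy, where calibrated noise is added to results so individuals cannot be singled out. The sketch below is a generic textbook example of that idea (the Laplace mechanism for a simple count), offered only as an illustration of the privacy-versus-accuracy knob; it is not a description of Xiong’s own methods, and the dataset is simulated.

```python
# Generic illustration of the privacy/utility tradeoff: the Laplace mechanism
# from differential privacy. Not specific to any researcher's system.
import numpy as np

def laplace_count(data: np.ndarray, epsilon: float, rng=None) -> float:
    """Return a noisy count of positive entries.

    Laplace noise with scale 1/epsilon makes the released count
    epsilon-differentially private: smaller epsilon means stronger privacy
    (more noise) but lower accuracy -- the optimization knob in question.
    """
    rng = rng or np.random.default_rng()
    true_count = float(np.sum(data > 0))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Example: 10,000 simulated records, about 37% flagged with some condition.
rng = np.random.default_rng(1)
records = (rng.random(10_000) < 0.37).astype(int)

for eps in (0.01, 0.1, 1.0):
    noisy = laplace_count(records, eps, rng)
    print(f"epsilon={eps:>4}: noisy count = {noisy:.1f} (true count = {records.sum()})")
```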
Rarity of attacks
Carlini said that if criminals started using hidden audio commands to attack virtual assistants, it would be easy to spot.
“One of the nice things about these types of attacks is that they’re sort of very wide scale, and if it were to happen, a bunch of people would report ‘Why did my Alexa buy this thing on Amazon?’” Carlini said.
Carlini said there is currently little financial incentive to hack a virtual assistant, but that could change as the devices’ popularity rises.
Channel 2 Action News asked several manufacturers of popular virtual assistants about the UC Berkeley research and the vulnerability of their products. By email, a Google spokesperson said:
“Security is an ongoing focus for our teams and they constantly test and improve security features for the Assistant and Google Home devices. The Google Assistant has several features which will mitigate aggressive actions such as the use of undetectable audio commands. For example, users can enable Voice Match, which is designed to prevent the Assistant from responding to requests relating to Actions such as shopping, accessing personal information, and similarly sensitive Actions unless the device recognizes the user’s voice.”