Defending Against Microphone-Based Attacks with Personalized Noise

Voice-activated commands have become a key feature of popular devices such as smartphones, home assistants, and wearables. For convenience, many people configure their devices to be ‘always on’ and listening for voice commands from the user using a trigger phrase such as “Hey Siri,” “Okay Google,” or “Alexa.” However, false positives for these triggers often result in privacy violations, with conversations being inadvertently uploaded to the cloud. In addition, malware that can record one’s conversations remains a significant threat to privacy. Unlike with cameras, which people can physically obscure and be assured of their privacy, people have no way of knowing whether their microphone is indeed off and are left with no tangible defenses against voice-based attacks. We envision a general-purpose physical defense that uses a speaker to inject specialized obfuscating ‘babble noise’ into the microphones of devices to protect against automated and human-based attacks. We present a comprehensive study of how specially crafted, personalized ‘babble’ noise (‘MyBabble’) can be effective at moderate signal-to-noise ratios and can provide a viable defense against microphone-based eavesdropping attacks.

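The abstract evaluates obfuscating noise "at moderate signal-to-noise ratios." As a rough illustration of what SNR-controlled mixing means — not the paper's actual MyBabble noise-generation method — the sketch below scales a noise signal so that the speech-to-noise power ratio hits a target value in decibels. The function name `mix_at_snr` and the synthetic stand-in signals are hypothetical, for illustration only.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db` (dB),
    then return the mixture. Generic SNR mixing, not the paper's method."""
    # Match lengths by tiling the noise and trimming to the speech length.
    reps = int(np.ceil(len(speech) / len(noise)))
    noise = np.tile(noise, reps)[: len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Target noise power satisfies p_speech / p_noise_target = 10^(snr_db / 10).
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example with synthetic signals (1 s of a 220 Hz tone as stand-in "speech").
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)
babble = rng.standard_normal(8000)  # stand-in "babble" noise
mixture = mix_at_snr(speech, babble, snr_db=0.0)  # 0 dB: equal power
```

Lower `snr_db` values make the noise louder relative to the speech, which is the axis along which the paper's defense is evaluated.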
Bibliographic Details
Main Authors: Liu Yuchen, Xiang Ziyu, Seong Eun Ji, Kapadia Apu, Williamson Donald S.
Format: Article
Language: English
Published: Sciendo, 2021-04-01
Series: Proceedings on Privacy Enhancing Technologies
Subjects: privacy, audio, microphones, obfuscating, noise
Online Access:https://doi.org/10.2478/popets-2021-0021
ISSN: 2299-0984
Volume/Issue, Pages: 2021, issue 2, pp. 130–150
Author affiliations: Yuchen Liu, Eun Ji Seong, Apu Kapadia, and Donald S. Williamson (Indiana University Bloomington); Ziyu Xiang (Stanford University; work conducted while at Indiana University Bloomington)