Nonsense attacks on Google Assistant and missense attacks on Amazon Alexa

This paper presents novel attacks on voice-controlled digital assistants using nonsensical word sequences. We present the results of a small-scale experiment demonstrating that malicious actors can gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. Several nonsensical word sequences were identified which triggered a target command in a voice-controlled digital assistant but which were incomprehensible to humans, as shown in tests with human experimental subjects. Our work confirms the potential for hiding malicious voice commands to voice-controlled digital assistants, or other speech-controlled devices, in speech sounds that humans perceive as nonsensical. This paper also develops a novel attack concept which involves gaining unauthorised access to a voice-controlled system using apparently unrelated utterances. We present the results of a proof-of-concept study showing that it is possible to trigger actions in a voice-controlled digital assistant using utterances which the system accepts as a target command despite having a different meaning from that command in terms of human understanding.

Overview

Bibliographic Details
Main Authors: Bispham, M, Agrafiotis, I, Goldsmith, M
Format: Conference item
Published: SciTePress 2019
author Bispham, M
Agrafiotis, I
Goldsmith, M
author_sort Bispham, M
collection OXFORD
description This paper presents novel attacks on voice-controlled digital assistants using nonsensical word sequences. We present the results of a small-scale experiment demonstrating that malicious actors can gain covert access to a voice-controlled system by hiding commands in apparently nonsensical sounds whose meaning is opaque to humans. Several nonsensical word sequences were identified which triggered a target command in a voice-controlled digital assistant but which were incomprehensible to humans, as shown in tests with human experimental subjects. Our work confirms the potential for hiding malicious voice commands to voice-controlled digital assistants, or other speech-controlled devices, in speech sounds that humans perceive as nonsensical. This paper also develops a novel attack concept which involves gaining unauthorised access to a voice-controlled system using apparently unrelated utterances. We present the results of a proof-of-concept study showing that it is possible to trigger actions in a voice-controlled digital assistant using utterances which the system accepts as a target command despite having a different meaning from that command in terms of human understanding.
format Conference item
id oxford-uuid:22a26ee8-9c36-4dda-878f-09f3ec32cc9c
institution University of Oxford
publishDate 2019
publisher SciTePress
record_format dspace
title Nonsense attacks on Google Assistant and missense attacks on Amazon Alexa