TapSnoop: Leveraging Tap Sounds to Infer Tapstrokes on Touchscreen Devices

Bibliographic Details
Main Authors: Hyosu Kim, Byunggill Joe, Yunxin Liu
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Acoustic signal processing; acoustic sensors; mobile computing; privacy; side-channel attack; tapstroke inference
Online Access: https://ieeexplore.ieee.org/document/8957506/
author Hyosu Kim
Byunggill Joe
Yunxin Liu
collection DOAJ
description We propose a novel tapstroke inference attack, called TapSnoop, that precisely recovers what a user types on touchscreen devices. Inferring tapstrokes is challenging owing to 1) the low intensity of tapstrokes and 2) dynamically changing noise. We address these challenges by revealing the unique characteristics of tapstrokes in audio recordings, which TapSnoop exploits as a side channel. In particular, we develop tapstroke detection and localization algorithms that collectively leverage audio features obtained from multiple microphones and designed to reflect the core properties of tapstrokes. Furthermore, we improve TapSnoop's robustness against environmental changes by developing environment-adaptive classification and noise-subtraction algorithms. Extensive experiments with ten real-world users on both numeric and QWERTY keyboards show that TapSnoop achieves inference accuracies of 85.4% and 75.6%, respectively (96.2% and 90.8% in the best case), in stable environments. TapSnoop also achieves reasonable accuracy under varying noise; for example, it shows inference accuracies of 84.8% and 72.7% on a numeric keyboard when the noise level varies from 37.9 to 51.2 dBA and from 46.7 to 60.0 dBA, respectively.
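The description above gives only a high-level picture of the pipeline (multi-microphone audio features, tapstroke detection, noise subtraction, and classification). As a purely illustrative sketch, not TapSnoop's actual algorithms, the following Python snippet shows one simple way such a pipeline could be wired together: short-time energy with a crude noise-floor subtraction for detection, and a toy inter-microphone energy ratio fed to a nearest-centroid classifier for localization. All function names, thresholds, and per-key centroids here are assumptions made for illustration.

```python
# Illustrative sketch only: a toy tap detector over two-microphone audio,
# loosely inspired by the ideas summarized in the abstract. All names,
# thresholds, and the nearest-centroid "localizer" are assumptions for this
# example, not TapSnoop's actual algorithms.
import numpy as np

def frame_energy(x, frame_len=256, hop=128):
    """Short-time energy of a 1-D signal, framed with the given hop."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    frames = np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])
    return (frames ** 2).mean(axis=1)

def subtract_noise_floor(energy, percentile=20.0):
    """Crude noise subtraction: remove an estimate of the ambient floor."""
    floor = np.percentile(energy, percentile)
    return np.clip(energy - floor, 0.0, None)

def detect_taps(stereo, frame_len=256, hop=128, k=8.0):
    """Return frame indices whose noise-subtracted energy (summed over both
    microphones) exceeds k times the median energy -- a stand-in for a real
    tapstroke detector."""
    e = sum(subtract_noise_floor(frame_energy(ch, frame_len, hop)) for ch in stereo)
    thresh = k * (np.median(e) + 1e-12)
    return np.flatnonzero(e > thresh), e

def intermic_feature(stereo, idx, frame_len=256, hop=128):
    """Log energy ratio between the two microphones around a detected frame;
    taps at different screen positions shift this ratio."""
    s = idx * hop
    top, bottom = (ch[s:s + frame_len] for ch in stereo)
    return np.log((top ** 2).sum() + 1e-12) - np.log((bottom ** 2).sum() + 1e-12)

def nearest_centroid(feature, centroids):
    """Assign the tap to the key whose training centroid is closest."""
    keys, values = zip(*centroids.items())
    return keys[int(np.argmin(np.abs(np.asarray(values) - feature)))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fs, dur = 16000, 1.0
    stereo = rng.normal(0, 0.01, size=(2, int(fs * dur)))            # ambient noise
    stereo[:, 8000:8064] += np.array([[0.5], [0.2]]) * rng.normal(size=64)  # a "tap"
    taps, _ = detect_taps(stereo)
    centroids = {"1": 1.0, "5": 0.0, "9": -1.0}  # hypothetical per-key training means
    for idx in taps:
        print(idx, nearest_centroid(intermic_feature(stereo, idx), centroids))
```

In the published system, these toy steps correspond to the feature design, environment-adaptive classification, and noise-subtraction algorithms described in the linked paper.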
format Article
id doaj.art-c4e55e1341c14b5e97c6e503a2dd65f8
institution Directory Open Access Journal
issn 2169-3536
language English
publishDate 2020-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling IEEE Access, vol. 8, pp. 14737-14748, published 2020-01-01; ISSN 2169-3536; DOI 10.1109/ACCESS.2020.2966263; IEEE Xplore document 8957506.
Hyosu Kim (https://orcid.org/0000-0001-5612-2988), School of Computer Science and Engineering, Chung-Ang University, Seoul, South Korea
Byunggill Joe (https://orcid.org/0000-0001-5360-5271), School of Computing, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
Yunxin Liu (https://orcid.org/0000-0001-7352-8955), Microsoft Research Asia, Beijing, China
https://ieeexplore.ieee.org/document/8957506/
title TapSnoop: Leveraging Tap Sounds to Infer Tapstrokes on Touchscreen Devices
topic Acoustic signal processing
acoustic sensors
mobile computing
privacy
side-channel attack
tapstroke inference
url https://ieeexplore.ieee.org/document/8957506/