Echo-ID: Smart User Identification Leveraging Inaudible Sound Signals
In this article, we present a novel user identification mechanism for smart spaces called Echo-ID (referred to as E-ID). Our solution relies on inaudible sound signals to capture a user's behavioral tapping/typing characteristics while they type a PIN on a PIN-pad, and uses them to identify the corresponding user from a set of ${N}$ enrolled inhabitants. E-ID proposes an all-inclusive pipeline that generates and transmits appropriate sound signals and extracts a user-specific imprint (E-Sign) from the recorded signals. To accurately identify the user behind a given E-Sign sample, E-ID combines deep learning (a CNN for feature extraction) with an SVM classifier (for the identification decision). We implemented a proof-of-concept of E-ID using a commodity speaker and microphone. Our evaluations revealed that E-ID identifies users with average accuracies of 93% down to 78% for enrolled groups of 2 to 5 subjects, respectively.
Main Authors: | Syed Wajid Ali Shah, Arash Shaghaghi, Salil S. Kanhere, Jin Zhang, Adnan Anwar, Robin Doss |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Smart-spaces; user identification; sound-signals |
Online Access: | https://ieeexplore.ieee.org/document/9229201/ |
author | Syed Wajid Ali Shah, Arash Shaghaghi, Salil S. Kanhere, Jin Zhang, Adnan Anwar, Robin Doss |
author_sort | Syed Wajid Ali Shah |
collection | DOAJ |
description | In this article, we present a novel user identification mechanism for smart spaces called Echo-ID (referred to as E-ID). Our solution relies on inaudible sound signals to capture a user's behavioral tapping/typing characteristics while they type a PIN on a PIN-pad, and uses them to identify the corresponding user from a set of ${N}$ enrolled inhabitants. E-ID proposes an all-inclusive pipeline that generates and transmits appropriate sound signals and extracts a user-specific imprint (E-Sign) from the recorded signals. To accurately identify the user behind a given E-Sign sample, E-ID combines deep learning (a CNN for feature extraction) with an SVM classifier (for the identification decision). We implemented a proof-of-concept of E-ID using a commodity speaker and microphone. Our evaluations revealed that E-ID identifies users with average accuracies of 93% down to 78% for enrolled groups of 2 to 5 subjects, respectively. |
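The abstract describes a three-stage pipeline: emit a near-inaudible probe signal, derive a user-specific imprint (E-Sign) from the recording, and classify it. The sketch below is illustrative only and not the paper's implementation: the 18-20 kHz probe band, the spectrogram-based "imprint", and the nearest-centroid matcher (a stand-in for the paper's CNN feature extractor and SVM classifier) are all assumptions introduced here for illustration.

```python
import numpy as np

FS = 48_000                  # assumed sample rate of a commodity speaker/microphone
F0, F1 = 18_000, 20_000      # assumed near-inaudible probe band (the paper's band is not given here)

def inaudible_chirp(duration=0.1, fs=FS, f0=F0, f1=F1):
    """Linear chirp sweeping the near-ultrasonic probe band."""
    t = np.arange(int(duration * fs)) / fs
    # instantaneous phase of a linear sweep from f0 to f1
    phase = 2 * np.pi * (f0 * t + (f1 - f0) * t**2 / (2 * duration))
    return np.sin(phase)

def esign_features(recording, fs=FS, frame=1024, hop=512):
    """Toy 'E-Sign' imprint: mean log-magnitude spectrum restricted to the probe band."""
    frames = []
    for start in range(0, len(recording) - frame + 1, hop):
        windowed = recording[start:start + frame] * np.hanning(frame)
        frames.append(np.abs(np.fft.rfft(windowed)))
    S = np.log1p(np.array(frames))
    freqs = np.fft.rfftfreq(frame, 1 / fs)
    band = (freqs >= F0) & (freqs <= F1)
    return S[:, band].mean(axis=0)          # one summary vector per recording

def identify(sample_vec, centroids):
    """Nearest-centroid matcher: a crude stand-in for the CNN+SVM identification stage."""
    names = list(centroids)
    dists = [np.linalg.norm(sample_vec - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]
```

In the real system the per-user models would be trained on many PIN-entry recordings per enrolled inhabitant; here `centroids` would simply map each enrolled user's name to the mean of their imprint vectors.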
first_indexed | 2024-12-17T21:50:13Z |
format | Article |
id | doaj.art-e4b143c050044495b6e5c50949842975 |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-17T21:50:13Z |
publishDate | 2020-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-e4b143c050044495b6e5c50949842975 | 2022-12-21T21:31:20Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2020-01-01 | vol. 8, pp. 194508-194522 | DOI 10.1109/ACCESS.2020.3031899 | article 9229201 | Echo-ID: Smart User Identification Leveraging Inaudible Sound Signals |
Syed Wajid Ali Shah (https://orcid.org/0000-0001-5420-5499), School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW, Australia |
Arash Shaghaghi (https://orcid.org/0000-0001-6630-9519), School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW, Australia |
Salil S. Kanhere (https://orcid.org/0000-0002-1835-3475), School of Computer Science and Engineering, The University of New South Wales, Sydney, NSW, Australia |
Jin Zhang (https://orcid.org/0000-0001-9001-1931), Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China |
Adnan Anwar (https://orcid.org/0000-0003-0070-182X), Centre for Cyber Security Research and Innovation (CSRI), Deakin University, Geelong, VIC, Australia |
Robin Doss (https://orcid.org/0000-0001-6143-6850), Centre for Cyber Security Research and Innovation (CSRI), Deakin University, Geelong, VIC, Australia |
title | Echo-ID: Smart User Identification Leveraging Inaudible Sound Signals |
topic | Smart-spaces user identification sound-signals |
url | https://ieeexplore.ieee.org/document/9229201/ |