Common evaluation pitfalls in touch-based authentication systems

In this paper, we investigate common pitfalls affecting the evaluation of authentication systems based on touch dynamics. We consider different factors that lead to misrepresented performance, are incompatible with stated system and threat models, or impede reproducibility and comparability with previous work. Specifically, we investigate the effects of (i) small sample sizes (both number of users and recording sessions), (ii) using different phone models in training data, (iii) selecting non-contiguous training data, (iv) inserting attacker samples in training data, and (v) swipe aggregation. We perform a systematic review of 30 touch dynamics papers, showing that all of them overlook at least one of these pitfalls. To quantify each pitfall's effect, we design a set of experiments and collect a new longitudinal dataset of touch dynamics from 470 users over 31 days, comprising 1,166,092 unique swipes. We make this dataset and our code available online. Our results show significant percentage-point changes in reported mean EER for several pitfalls: including attacker data (2.55%), non-contiguous training data (3.8%), and phone model mixing (3.2%-5.8%). We show that, in a common evaluation setting, the cumulative effects of these evaluation choices result in a combined difference of 8.9% EER. We largely observe these effects across the entire ROC curve. Furthermore, we validate the pitfalls on four distinct classifiers: SVM, Random Forest, Neural Network, and kNN. Based on these insights, we propose a set of best practices that, if followed, will lead to more realistic and comparable reporting of results in the field.
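All reported effect sizes are expressed as changes in the equal error rate (EER), the operating point at which the false accept rate equals the false reject rate. As a rough illustration only (a minimal sketch, not the authors' released code; the function name and synthetic scores below are ours), EER can be estimated from per-swipe classifier scores like this:

    # Minimal sketch: estimating the equal error rate (EER) from classifier scores.
    import numpy as np
    from sklearn.metrics import roc_curve

    def equal_error_rate(y_true, scores):
        # y_true: 1 for genuine-user swipes, 0 for impostor swipes.
        # scores: classifier confidence that a swipe belongs to the genuine user.
        fpr, tpr, _ = roc_curve(y_true, scores)
        fnr = 1 - tpr
        # The EER sits where the false accept (fpr) and false reject (fnr)
        # curves cross; take the point where they are closest.
        idx = np.nanargmin(np.abs(fpr - fnr))
        return (fpr[idx] + fnr[idx]) / 2

    # Hypothetical usage with synthetic score distributions.
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.7, 0.15, 500)   # scores for the enrolled user
    impostor = rng.normal(0.4, 0.15, 500)  # scores for other users
    labels = np.concatenate([np.ones(500), np.zeros(500)])
    scores = np.concatenate([genuine, impostor])
    print(f"EER: {equal_error_rate(labels, scores):.3f}")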
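Pitfall (iii), non-contiguous training data, arises when training swipes are sampled from across the whole collection period, so test swipes end up being temporal neighbours of training swipes; a deployed system would instead enroll on an unbroken block of early data. A hedged sketch of the contrast between the two split strategies, assuming a per-user list of swipes already sorted by timestamp (all names here are ours, not the paper's):

    # Sketch: contiguous vs. non-contiguous train/test splits for one user.
    import random

    def contiguous_split(swipes, train_frac=0.5):
        # Realistic setting: train on an unbroken prefix of the timeline,
        # test on everything that follows (enrollment precedes authentication).
        cut = int(len(swipes) * train_frac)
        return swipes[:cut], swipes[cut:]

    def non_contiguous_split(swipes, train_frac=0.5, seed=0):
        # The pitfall: training swipes are drawn at random from the whole
        # timeline, so test swipes interleave with training swipes in time,
        # which tends to understate the EER.
        idx = list(range(len(swipes)))
        random.Random(seed).shuffle(idx)
        cut = int(len(swipes) * train_frac)
        train_idx, test_idx = sorted(idx[:cut]), sorted(idx[cut:])
        return [swipes[i] for i in train_idx], [swipes[i] for i in test_idx]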


Bibliographic details
Main Authors: Georgiev, M, Eberz, S, Turner, H, Lovisotto, G, Martinovic, I
Format: Conference item
Language: English
Published: Association for Computing Machinery, 2022