Cognitive reasoning and trust in human-robot interactions


Bibliographic Details
Main Author: Kwiatkowska, M
Format: Conference item
Published: Springer 2017
Subjects:
Description: We are witnessing accelerating technological advances in autonomous systems, of which driverless cars and home-assistive robots are prominent examples. As mobile autonomy becomes embedded in our society, we increasingly often depend on decisions made by mobile autonomous robots and interact with them socially. Key questions that need to be asked are how to ensure safety and trust in such interactions. How do we know when to trust a robot? How much should we trust? And how much should the robots trust us? This paper gives an overview of a probabilistic logic for expressing trust between human or robotic agents, such as “agent A has 99% trust in agent B’s ability or willingness to perform a task”, and the role it can play in explaining trust-based decisions and agents’ dependence on one another. The logic is founded on a probabilistic notion of belief, supports cognitive reasoning about goals and intentions, and admits quantitative verification via model checking, which can be used to evaluate trust in human-robot interactions. The paper concludes by summarising future challenges for modelling and verification in this important field.
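A quantified trust statement like “agent A has 99% trust in agent B’s ability to perform a task” can be illustrated with a toy belief model. The sketch below is an assumption for illustration only, not the probabilistic logic the paper defines: it maintains a Beta-Bernoulli belief over agent B’s task-success probability and reads trust off as the posterior mean. The class and method names (`TrustBelief`, `observe`, `trust`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class TrustBelief:
    """Agent A's belief about agent B's probability of completing a task.

    Modelled as a Beta(successes, failures) distribution, starting from
    a uniform Beta(1, 1) prior.
    """
    successes: int = 1
    failures: int = 1

    def observe(self, task_succeeded: bool) -> None:
        # Bayesian update: each observed outcome increments one count.
        if task_succeeded:
            self.successes += 1
        else:
            self.failures += 1

    def trust(self) -> float:
        # Posterior mean of B's success probability.
        return self.successes / (self.successes + self.failures)


if __name__ == "__main__":
    belief = TrustBelief()          # uniform prior: trust = 0.5
    for _ in range(98):
        belief.observe(True)        # B succeeds 98 times in a row
    print(f"trust = {belief.trust():.2f}")   # trust = 0.99
```

Under this (simplistic) model, 98 consecutive successes on top of the uniform prior yield a posterior mean of 99/100, i.e. the “99% trust” of the abstract’s example; a richer logic would additionally account for goals, intentions, and willingness rather than raw success frequency.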
Identifier: oxford-uuid:89715914-3b41-43c4-aa33-ed2162d7fffa
Institution: University of Oxford