From spoken thoughts to automated driving commentary: Predicting and explaining intelligent vehicles' actions

Full description
In commentary driving, drivers verbalise their observations, assessments and intentions. By speaking their thoughts aloud, both learner and expert drivers build a better understanding and awareness of their surroundings. In the intelligent vehicle context, automated driving commentary can provide intelligible explanations about driving actions, thereby assisting a driver or an end-user during driving operations in challenging and safety-critical scenarios. In this paper, we conducted a field study in which we deployed a research vehicle in an urban environment to obtain data. While collecting sensor data of the vehicle's surroundings, we obtained driving commentary from a driving instructor using the think-aloud protocol. We analysed the driving commentary and uncovered an explanation style: the driver first announces his observations, then announces his plans, and finally makes general remarks; he also makes counterfactual comments. We demonstrated how factual and counterfactual natural language explanations that follow this style can be automatically generated using a transparent tree-based approach. Generated explanations for longitudinal actions (e.g., stop and move) were deemed more intelligible and plausible by human judges than those for lateral actions, such as lane changes. We discuss how our approach can be built on in future work to realise more robust and effective explainability for driver assistance as well as partial and conditional automation of driving functions.
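The record does not include the paper's implementation, but the transparent tree-based explanation approach mentioned in the description can be illustrated with a small sketch. The snippet below is a hypothetical example, not the authors' code: it fits a scikit-learn DecisionTreeClassifier on invented features (distance_to_lead_vehicle_m, traffic_light_red, pedestrian_ahead) and invented actions (move, stop), then verbalises the decision path for one observation as a factual explanation plus a simple counterfactual.

# Hypothetical sketch of a transparent, tree-based explanation generator.
# Feature names, actions and training data are invented for illustration;
# this is not the implementation evaluated in the paper.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["distance_to_lead_vehicle_m", "traffic_light_red", "pedestrian_ahead"]
ACTIONS = ["move", "stop"]

# Toy observations: [distance (m), red light (0/1), pedestrian ahead (0/1)]
X = np.array([[30.0, 0, 0], [5.0, 1, 0], [8.0, 0, 1], [40.0, 0, 0], [4.0, 1, 1]])
y = np.array([0, 1, 1, 0, 1])  # indices into ACTIONS

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

def explain(sample: np.ndarray) -> str:
    """Verbalise the decision path for one observation as a factual
    explanation, with a simple counterfactual for the first decisive test."""
    tree = clf.tree_
    node_ids = clf.decision_path(sample.reshape(1, -1)).indices
    action = ACTIONS[int(clf.predict(sample.reshape(1, -1))[0])]

    reasons, counterfactual = [], None
    for node in node_ids:
        feat = tree.feature[node]
        if feat < 0:  # leaf node: no test to report
            continue
        name, thr, val = FEATURES[feat], tree.threshold[node], sample[feat]
        if val <= thr:
            reasons.append(f"{name} is {val:g} (at most {thr:g})")
            flipped = f"if {name} were greater than {thr:g}"
        else:
            reasons.append(f"{name} is {val:g} (greater than {thr:g})")
            flipped = f"if {name} were at most {thr:g}"
        if counterfactual is None:
            counterfactual = flipped

    factual = f"I will {action} because " + " and ".join(reasons) + "."
    return f"{factual} However, {counterfactual}, I might act differently."

# Example: approaching a red light with a close lead vehicle.
print(explain(np.array([6.0, 1, 0])))

Because every generated statement is read directly off the fitted tree's decision path, the explanation stays faithful to the model's actual decision rule, which is the appeal of a transparent, tree-based approach over post-hoc rationalisation.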

Bibliographic Details
Main Authors: Omeiza, D; Anjomshoae, S; Webb, H; Jirotka, M; Kunze, L
Format: Conference item
Language: English
Published: IEEE, 2022
Institution: University of Oxford
Collection: OXFORD
Record ID: oxford-uuid:4d0c8492-1d0d-4027-8329-076b2d1b0f25