Learning retrospective knowledge with reverse reinforcement learning
We present a Reverse Reinforcement Learning (Reverse RL) approach for representing retrospective knowledge. General Value Functions (GVFs) have enjoyed great success in representing predictive knowledge, i.e., answering questions about possible future outcomes such as “how much fuel will be consumed in expectation if we drive from A to B?”. GVFs, however, cannot answer questions like “how much fuel do we expect a car to have given it is at B at time t?”. To answer this question, we need to know when that car had a full tank and how that car came to B. Since such questions emphasize the influence of possible past events on the present, we refer to their answers as retrospective knowledge. In this paper, we show how to represent retrospective knowledge with Reverse GVFs, which are trained via Reverse RL. We demonstrate empirically the utility of Reverse GVFs in both representation learning and anomaly detection.
Main authors: Zhang, S; Veeriah, V; Whiteson, S
Material type: Conference item
Language: English
Published: NeurIPS, 2020
author | Zhang, S; Veeriah, V; Whiteson, S |
description | We present a Reverse Reinforcement Learning (Reverse RL) approach for representing retrospective knowledge. General Value Functions (GVFs) have enjoyed great success in representing predictive knowledge, i.e., answering questions about possible future outcomes such as “how much fuel will be consumed in expectation if we drive from A to B?”. GVFs, however, cannot answer questions like “how much fuel do we expect a car to have given it is at B at time t?”. To answer this question, we need to know when that car had a full tank and how that car came to B. Since such questions emphasize the influence of possible past events on the present, we refer to their answers as retrospective knowledge. In this paper, we show how to represent retrospective knowledge with Reverse GVFs, which are trained via Reverse RL. We demonstrate empirically the utility of Reverse GVFs in both representation learning and anomaly detection. |
id | oxford-uuid:ba9a9139-e754-4184-b96f-3ebcc074d78e |
institution | University of Oxford |
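The predictive-vs-retrospective distinction in the abstract can be illustrated with a toy sketch. The example below uses a deterministic 5-state chain in which each step consumes one unit of "fuel": a forward (predictive) value function is learned by bootstrapping from the successor state, and a retrospective value function by bootstrapping from the predecessor state. All names, the chain environment, and the reverse update rule are illustrative assumptions for this sketch, not the paper's exact Reverse RL algorithm.

```python
# Toy chain 0 -> 1 -> 2 -> 3 -> 4; each transition consumes 1 unit of "fuel".
# forward_gvf[s]: expected fuel still to be consumed from state s (predictive).
# reverse_gvf[s]: expected fuel already consumed on arrival at s (retrospective).
# Hypothetical tabular TD sketch, not the paper's algorithm.

N, ALPHA, SWEEPS = 5, 0.1, 2000
forward_gvf = [0.0] * N   # forward_gvf[N-1] stays 0: journey is over at state 4
reverse_gvf = [0.0] * N   # reverse_gvf[0] stays 0: full tank at the start state

for _ in range(SWEEPS):
    for s in range(N - 1):
        cumulant = 1.0  # fuel used on the transition s -> s+1
        # Forward TD update: bootstrap from the successor state s+1.
        forward_gvf[s] += ALPHA * (cumulant + forward_gvf[s + 1] - forward_gvf[s])
        # Reverse TD update: bootstrap from the predecessor state s.
        reverse_gvf[s + 1] += ALPHA * (cumulant + reverse_gvf[s] - reverse_gvf[s + 1])

print([round(v, 2) for v in forward_gvf])  # -> [4.0, 3.0, 2.0, 1.0, 0.0]
print([round(v, 2) for v in reverse_gvf])  # -> [0.0, 1.0, 2.0, 3.0, 4.0]
```

The two value functions are mirror images on this chain: the forward values answer "how much fuel will be consumed from here?", while the reverse values answer "how much fuel has been consumed by the time the car reaches here?", matching the abstract's distinction between predictive and retrospective knowledge.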