Summary: | Anomaly detection in smart environments is important for dealing with rare events, which can be safety-critical to individuals or infrastructure. Safety-critical means, in this case, that such events can threaten the safety of individuals (e.g., a person falling to the ground) or the security of infrastructure (e.g., unauthorized access to protected facilities). However, recognizing abnormal events in smart environments is challenging because of the complex and volatile nature of the data recorded by monitoring sensors. Methodologies proposed in the literature are frequently domain-specific and rest on biased assumptions about the underlying data. In this work, we propose the adaptation of a deep reinforcement learning algorithm, namely double deep Q-learning (DDQN), for anomaly detection in smart environments. Our proposed anomaly detector directly learns a decision-making function that classifies rare events based on multivariate sequential time-series data. With an emphasis on improving performance on rare-event classification tasks, we extend the algorithm with a prioritized experience replay (PER) strategy and show that the PER extension yields an increase in detection performance. Adapting this improved version of the DDQN reinforcement learning algorithm to anomaly detection in smart environments is the major contribution of this work. Empirical studies on publicly available real-world datasets demonstrate the effectiveness of the proposed solution; specifically, we use one dataset for fall detection and one for occupancy detection in our evaluation. Our solution yields detection performance comparable to previous work, with the additional advantages of being adaptable to different environments and capable of online learning.
|
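As a rough illustration of the approach summarized above, the sketch below shows how a DDQN update with proportional prioritized experience replay could be wired together for binary anomaly classification over windowed sensor data. It is a minimal sketch assuming PyTorch and NumPy; all names (AnomalyQNet, PERBuffer, ddqn_update), the network architecture, and the hyperparameters are illustrative assumptions, not the implementation evaluated in this work.

```python
# Illustrative sketch only: DDQN + prioritized experience replay (PER)
# for classifying windowed sensor sequences as "normal" (0) or "anomaly" (1).
from collections import namedtuple

import numpy as np
import torch
import torch.nn as nn

Transition = namedtuple("Transition", "state action reward next_state done")


class AnomalyQNet(nn.Module):
    """Q-network: flattened sensor window in, Q-values for {normal, anomaly} out."""

    def __init__(self, window_len: int, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(window_len * n_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # action 0 = "normal", action 1 = "anomaly"
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(start_dim=1))


class PERBuffer:
    """Proportional prioritized replay: sampling probability ~ |TD error|^alpha."""

    def __init__(self, capacity: int = 10_000, alpha: float = 0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def push(self, transition: Transition, priority: float = 1.0):
        # New transitions enter with a default priority and are re-prioritized
        # after their TD error is known.
        if len(self.data) >= self.capacity:
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(priority)

    def sample(self, batch_size: int, beta: float = 0.4):
        probs = np.asarray(self.priorities) ** self.alpha
        probs /= probs.sum()
        idx = np.random.choice(len(self.data), batch_size, p=probs)
        weights = (len(self.data) * probs[idx]) ** (-beta)  # importance-sampling weights
        weights /= weights.max()
        return idx, [self.data[i] for i in idx], torch.tensor(weights, dtype=torch.float32)

    def update(self, idx, td_errors, eps: float = 1e-3):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(float(err)) + eps


def ddqn_update(online, target, buffer, optimizer, batch_size=32, gamma=0.99):
    """One DDQN step: the online net selects the next action, the target net evaluates it."""
    idx, batch, weights = buffer.sample(batch_size)
    states = torch.stack([t.state for t in batch])
    actions = torch.tensor([t.action for t in batch])
    rewards = torch.tensor([t.reward for t in batch], dtype=torch.float32)
    next_states = torch.stack([t.next_state for t in batch])
    dones = torch.tensor([t.done for t in batch], dtype=torch.float32)

    q = online(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        next_actions = online(next_states).argmax(dim=1, keepdim=True)
        next_q = target(next_states).gather(1, next_actions).squeeze(1)
        targets = rewards + gamma * (1.0 - dones) * next_q

    td_errors = targets - q
    loss = (weights * td_errors.pow(2)).mean()  # PER importance weighting of the loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    buffer.update(idx, td_errors.detach())
    return loss.item()


# Example wiring (illustrative shapes: windows of 50 timesteps, 8 sensor channels)
online = AnomalyQNet(window_len=50, n_features=8)
target = AnomalyQNet(window_len=50, n_features=8)
target.load_state_dict(online.state_dict())
optimizer = torch.optim.Adam(online.parameters(), lr=1e-3)
buffer = PERBuffer()
```

In a setup like this, a natural (assumed) design choice is to shape the reward so that missed anomalies are penalized more heavily than false alarms; together with prioritized replay, this counteracts the rarity of the anomalous class.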