User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning
A well-designed demand response (DR) program is essential in a smart home to optimize energy usage according to user preferences. In this study, we propose a multiobjective reinforcement learning (MORL) algorithm to design a DR program. The proposed approach improves conventional algorithms by mitiga...
Main Authors: | Song-Jen Chen, Wei-Yu Chiu, Wei-Jen Liu |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2021-01-01 |
Series: | IEEE Access |
Subjects: | Energy management system (EMS); reinforcement learning (RL); multiobjective reinforcement learning (MORL); demand response (DR); smart home |
Online Access: | https://ieeexplore.ieee.org/document/9638574/ |
_version_ | 1819101512936194048 |
---|---|
author | Song-Jen Chen Wei-Yu Chiu Wei-Jen Liu |
author_facet | Song-Jen Chen Wei-Yu Chiu Wei-Jen Liu |
author_sort | Song-Jen Chen |
collection | DOAJ |
description | A well-designed demand response (DR) program is essential in a smart home to optimize energy usage according to user preferences. In this study, we propose a multiobjective reinforcement learning (MORL) algorithm to design a DR program. The proposed approach improves conventional algorithms by mitigating the effect of changes in user preferences and addresses the uncertainty induced by future prices and renewable energy generation. Because two Q-tables are used, the proposed algorithm simultaneously considers electricity cost and user dissatisfaction; when user preferences change, the proposed MORL algorithm uses previous experience to customize appliance scheduling and swiftly achieve the best objective value. The proposed algorithm is highly generalizable and can therefore be implemented in a smart home equipped with an energy storage system, a renewable energy source, and various types of appliances, such as inflexible, time-flexible, and power-flexible ones. Numerical analysis using real-world data revealed that, under price and renewable-generation uncertainty, the proposed approach delivers excellent performance after a change in user preference: it achieved an 8.44% cost reduction compared with mixed-integer nonlinear programming-based DR while increasing the dissatisfaction level by only 1.37% on average. |
first_indexed | 2024-12-22T01:19:51Z |
format | Article |
id | doaj.art-1b884287649e438a84fee20eef593fdd |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
last_indexed | 2024-12-22T01:19:51Z |
publishDate | 2021-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
spelling | doaj.art-1b884287649e438a84fee20eef593fdd 2022-12-21T18:43:45Z eng IEEE, IEEE Access, ISSN 2169-3536, 2021-01-01, vol. 9, pp. 161627-161637, DOI 10.1109/ACCESS.2021.3132962, article 9638574. User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning. Song-Jen Chen; Wei-Yu Chiu (https://orcid.org/0000-0003-2450-9314); Wei-Jen Liu. Affiliation (all authors): Department of Electrical Engineering, Multi-Objective Control and Reinforcement Learning (MOCaRL) Laboratory, National Tsing Hua University, Hsinchu, Taiwan. https://ieeexplore.ieee.org/document/9638574/ Energy management system (EMS); reinforcement learning (RL); multiobjective reinforcement learning (MORL); demand response (DR); smart home |
spellingShingle | Song-Jen Chen Wei-Yu Chiu Wei-Jen Liu User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning IEEE Access Energy management system (EMS) reinforcement learning (RL) multiobjective reinforcement learning (MORL) demand response (DR) smart home |
title | User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning |
title_full | User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning |
title_fullStr | User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning |
title_full_unstemmed | User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning |
title_short | User Preference-Based Demand Response for Smart Home Energy Management Using Multiobjective Reinforcement Learning |
title_sort | user preference based demand response for smart home energy management using multiobjective reinforcement learning |
topic | Energy management system (EMS) reinforcement learning (RL) multiobjective reinforcement learning (MORL) demand response (DR) smart home |
url | https://ieeexplore.ieee.org/document/9638574/ |
work_keys_str_mv | AT songjenchen userpreferencebaseddemandresponseforsmarthomeenergymanagementusingmultiobjectivereinforcementlearning AT weiyuchiu userpreferencebaseddemandresponseforsmarthomeenergymanagementusingmultiobjectivereinforcementlearning AT weijenliu userpreferencebaseddemandresponseforsmarthomeenergymanagementusingmultiobjectivereinforcementlearning |
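The abstract describes keeping one Q-table per objective (electricity cost and user dissatisfaction) and re-weighting them when user preferences change, so prior experience is reused rather than discarded. The following is a minimal, hypothetical sketch of that two-Q-table idea; the class name, hyperparameters, and tabular setting are illustrative assumptions, not the authors' implementation:

```python
import random

class TwoObjectiveQLearner:
    """Hypothetical sketch: one Q-table per objective (cost, dissatisfaction),
    combined by a user-preference weight. Both objectives are minimized."""

    def __init__(self, n_states, n_actions, w_cost=0.5,
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.q_cost = [[0.0] * n_actions for _ in range(n_states)]
        self.q_dissat = [[0.0] * n_actions for _ in range(n_states)]
        self.w_cost = w_cost          # preference weight between objectives
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.n_actions = n_actions

    def scalarized(self, s, a):
        # Weighted sum of the two per-objective value estimates.
        return (self.w_cost * self.q_cost[s][a]
                + (1 - self.w_cost) * self.q_dissat[s][a])

    def act(self, s):
        # Epsilon-greedy over the scalarized value (lower is better).
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        return min(range(self.n_actions), key=lambda a: self.scalarized(s, a))

    def update(self, s, a, cost, dissat, s_next):
        # Greedy next action under the current preference weighting.
        a_next = min(range(self.n_actions),
                     key=lambda a2: self.scalarized(s_next, a2))
        # Standard Q-learning update applied to each table separately.
        for q, r in ((self.q_cost, cost), (self.q_dissat, dissat)):
            q[s][a] += self.alpha * (r + self.gamma * q[s_next][a_next] - q[s][a])

    def set_preference(self, w_cost):
        # A preference change only re-weights the existing Q-tables,
        # so previously learned experience is retained.
        self.w_cost = w_cost
```

Because the per-objective tables are left untouched by `set_preference`, a new preference weight immediately re-ranks actions using accumulated experience, which mirrors the paper's claim of swift adaptation after a preference change.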