Reinforcement Learning Techniques for Optimal Power Control in Grid-Connected Microgrids: A Comprehensive Review
Main Authors: | Erick O. Arwa, Komla A. Folly |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Subjects: | Electric vehicle charging station; energy management; Markov decision process; microgrid; reinforcement learning |
Online Access: | https://ieeexplore.ieee.org/document/9261330/ |
id | doaj-179eaf069c384cc7a9f8351154b9fdc0
---|---|
record_format | Article
spelling | IEEE Access, vol. 8, pp. 208992-209007, 2020-01-01. DOI: 10.1109/ACCESS.2020.3038735 (IEEE article no. 9261330). Erick O. Arwa (ORCID: 0000-0002-8278-7972) and Komla A. Folly (ORCID: 0000-0001-8012-9098), Department of Electrical Engineering, University of Cape Town, Cape Town, South Africa. Record last updated 2021-03-30T03:34:58Z.
collection | DOAJ
language | English
format | Article
sources | DOAJ
author | Erick O. Arwa; Komla A. Folly
title | Reinforcement Learning Techniques for Optimal Power Control in Grid-Connected Microgrids: A Comprehensive Review
publisher | IEEE
series | IEEE Access
issn | 2169-3536
publishDate | 2020-01-01
description | Utility grids are undergoing several upgrades. Distributed generators supplied by intermittent renewable energy sources (RES) are being connected to the grids. As RES get cheaper, more customers are opting for peer-to-peer energy interchanges through the smart metering infrastructure. Consequently, power management in grid-tied RES-based microgrids has become a challenging task. This paper reviews the applications of reinforcement learning (RL) algorithms to power management in grid-tied microgrids. Unlike other optimization methods such as numerical and soft computing techniques, RL does not require an accurate model of the optimization environment to arrive at an optimal solution. The paper describes the main challenges associated with power control in grid-tied microgrids and critically reviews how RL techniques have been applied to address them. The review identifies the need to improve and scale multi-agent RL methods to enable seamless distributed power dispatch among interconnected microgrids. Finally, the paper gives directions for future research, e.g., the hybridization of intrinsic and extrinsic reward schemes, the use of transfer learning to improve the learning outcomes of RL in complex power system environments, and the deployment of priority-based experience replay in post-disaster microgrid power flow control. (An illustrative, model-free Q-learning sketch follows this record.)
topic | Electric vehicle charging station; energy management; Markov decision process; microgrid; reinforcement learning
url | https://ieeexplore.ieee.org/document/9261330/
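The abstract above stresses that RL can learn dispatch policies without an explicit model of the optimization environment. The snippet below is a minimal sketch of that idea, assuming a deliberately simplified battery-dispatch problem: a constant hourly load, a made-up time-of-use tariff, a coarsely discretized battery, and tabular Q-learning. Every name and number in it is an assumption made for illustration; the reviewed paper surveys far richer formulations (EV charging stations, multi-agent RL, deep RL).

```python
# Illustrative only: a toy, model-free Q-learning agent for grid-tied microgrid
# battery dispatch. All names, prices, and dynamics here are assumptions made
# for this example; they do not come from the reviewed paper.
import random

HOURS = 24
SOC_LEVELS = 5                       # discretised battery state of charge (0..4)
ACTIONS = ["charge", "idle", "discharge"]
LOAD_KWH = 2.0                       # assumed constant hourly load
STEP_KWH = 2.0                       # energy moved per charge/discharge step
# Assumed time-of-use tariff: cheap at night, expensive in the evening peak.
PRICE = [0.05] * 7 + [0.12] * 10 + [0.30] * 5 + [0.12] * 2

def step(hour, soc, action):
    """Environment transition: returns (next_soc, reward). A model-free agent
    only samples this function; it never needs its equations in closed form."""
    grid_import = LOAD_KWH
    if action == "charge" and soc < SOC_LEVELS - 1:
        soc += 1
        grid_import += STEP_KWH          # extra import to fill the battery
    elif action == "discharge" and soc > 0:
        soc -= 1
        grid_import -= STEP_KWH          # battery serves (part of) the load
    grid_import = max(grid_import, 0.0)  # no export in this toy example
    reward = -PRICE[hour] * grid_import  # negative cost of imported energy
    return soc, reward

# Tabular Q-learning over states (hour, soc).
alpha, gamma, epsilon, episodes = 0.1, 0.95, 0.1, 5000
Q = {(h, s): [0.0] * len(ACTIONS) for h in range(HOURS) for s in range(SOC_LEVELS)}

for _ in range(episodes):
    soc = 2                              # start each episode half charged
    for hour in range(HOURS):
        state = (hour, soc)
        if random.random() < epsilon:    # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = Q[state].index(max(Q[state]))
        next_soc, reward = step(hour, soc, ACTIONS[a])
        next_q = 0.0 if hour == HOURS - 1 else max(Q[(hour + 1, next_soc)])
        # Q-learning update: nudge Q(s, a) toward the sampled Bellman target.
        Q[state][a] += alpha * (reward + gamma * next_q - Q[state][a])
        soc = next_soc

# Greedy policy after learning.
soc = 2
for hour in range(HOURS):
    a = Q[(hour, soc)].index(max(Q[(hour, soc)]))
    print(f"hour {hour:2d} price {PRICE[hour]:.2f} soc {soc} -> {ACTIONS[a]}")
    soc, _ = step(hour, soc, ACTIONS[a])
```

After training, the greedy rollout typically learns to charge during the cheap night hours and discharge through the evening peak. The agent discovers this purely from sampled rewards rather than from an explicit system model, which is the property the abstract highlights as RL's advantage over numerical and soft-computing optimizers.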