Lyapunov-Based Safe Reinforcement Learning for Microgrid Energy Management | LitMetric

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

The rapid development of renewable energy sources (RESs) has led to their increased integration into microgrids (MGs), emphasizing the need for safe and efficient energy management in MG operations. We investigate methods of MG energy management, primarily categorized into model-based and model-free approaches. Because they lack incremental knowledge, model-based methods must be re-engineered for each new scenario during the optimization process, reducing computational efficiency. In contrast, model-free methods acquire incremental knowledge via trial and error in the training phase and can rapidly output energy management schemes. However, ensuring the safety of these schemes during the training phase poses significant challenges. To address them, we propose a safe reinforcement learning (SRL) framework. The proposed SRL framework first includes a safety assessment optimization model (SAOM) to evaluate scheme constraints and refine unsafe schemes, ensuring MG safety. Subsequently, based on the SAOM, the MG energy management problem is formulated as an assess-based constrained Markov decision process (A-CMDP), enabling SRL to be applied to it. We then adopt a Lyapunov-based safety policy optimization for agent policy learning so that policy updates are confined within a safe boundary, theoretically ensuring the safety of the MG throughout the learning process. Numerical studies highlight the superior performance of the proposed method: the SRL framework effectively learns an energy management policy, ensures MG safety, and achieves outstanding outcomes in the economic operation of the MG.
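The core idea of confining policy updates to a safe boundary can be sketched as a projected update: take a reward-ascent step, and if the linearized safety cost of that step would exceed the remaining safety margin, project the step back onto the safe half-space. This is a minimal toy illustration of the general technique, not the authors' implementation; all names (`safe_policy_step`, `cost_margin`) are assumptions for exposition.

```python
import numpy as np

def safe_policy_step(theta, reward_grad, cost_grad, cost_margin, lr=0.1):
    """One projected policy update.

    Takes a gradient-ascent step on the reward, then projects the step
    onto the half-space {s : cost_grad @ s <= cost_margin}, so the
    linearized safety cost never grows past the remaining margin
    (cost_margin >= 0 means the current policy is safe with that slack).
    """
    step = lr * reward_grad
    # Predicted change in safety cost for the proposed step (first order).
    predicted_cost_change = cost_grad @ step
    if predicted_cost_change > cost_margin:
        # Remove the excess component along cost_grad (projection).
        excess = predicted_cost_change - cost_margin
        step = step - excess * cost_grad / (cost_grad @ cost_grad)
    return theta + step

theta = np.zeros(2)
reward_grad = np.array([1.0, 0.0])
cost_grad = np.array([1.0, 0.0])
# Tight margin: the raw step (0.1 along cost_grad) gets clipped to 0.05.
theta_safe = safe_policy_step(theta, reward_grad, cost_grad, cost_margin=0.05)
# Loose margin: the raw step passes through unchanged.
theta_free = safe_policy_step(theta, reward_grad, cost_grad, cost_margin=1.0)
```

The projection mirrors the paper's guarantee in miniature: every accepted update satisfies the (linearized) safety constraint by construction, rather than being checked only after training.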

Source
http://dx.doi.org/10.1109/TNNLS.2024.3496932

Publication Analysis

Top Keywords

energy management: 24
ensuring safety: 12
srl framework: 12
safe reinforcement: 8
reinforcement learning: 8
incremental knowledge: 8
energy: 7
management: 6
safety: 6
lyapunov-based safe: 4