Hydrogen-based electric vehicles such as Fuel Cell Hybrid Electric Vehicles (FCHEVs) play an important role in achieving zero carbon emissions while easing the pressure of the fuel economy crisis. This paper addresses energy management design across several performance metrics: power tracking and system accuracy, fuel cell lifetime, battery lifetime, and reduction of transient and peak currents on the Polymer Electrolyte Membrane Fuel Cell (PEMFC) and the Li-ion battery. The proposed algorithm combines reinforcement learning in the low-level control loops with high-level supervisory control based on fuzzy logic load sharing, implemented in the system under consideration. More specifically, the paper establishes a power system model with three DC-DC converters and a hierarchical energy management framework employing a two-layer control strategy. In the low-level layer, three control loops for the hybrid electric vehicle are designed using reinforcement learning: the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is used with a neural network, and three DRL controllers are designed within the hierarchical energy optimization control architecture. Comparative results between the two strategies, Deep Reinforcement Learning with fuzzy logic supervisory control (DRL-F) and the Super-Twisting algorithm with fuzzy logic supervisory control (STW-F), under the EUDC driving cycle indicate that the proposed DRL-F model reduces the Root Mean Square Error (RMSE) by 21.05% and the Mean Error by 8.31% compared to the STW-F method. The results demonstrate a more robust, accurate, and precise system in the presence of uncertainties and disturbances in the Energy Management System (EMS) of the FCHEV, based on an advanced learning method.
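To make the two-layer structure concrete, the sketch below shows a toy Python version of the idea: a fuzzy-style supervisory layer splits the load demand between the fuel cell and the battery, a low-level loop tracks the resulting fuel cell power reference, and RMSE and mean error are computed over a synthetic driving profile. This is a minimal illustration only; the membership functions, rule base, power limit `P_FC_MAX`, and the simple proportional stand-in for the TD3-based low-level controllers are all hypothetical and are not the controllers or parameters reported in the paper.

```python
# Illustrative sketch of a two-layer EMS: fuzzy-style supervisory load sharing
# plus a low-level power-tracking loop. All rules and parameters are assumed
# placeholders, not the paper's design.
import numpy as np

P_FC_MAX = 30.0  # assumed fuel-cell power limit [kW] (illustrative only)


def tri(x, a, b, c):
    """Triangular membership function on [a, c] with its peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)


def supervisory_fc_share(p_demand_kw, soc):
    """High-level fuzzy-style load sharing: fraction of the demand assigned to
    the fuel cell; the battery covers the remainder (the transients)."""
    d_low = tri(p_demand_kw, -10, 0, 15)
    d_med = tri(p_demand_kw, 5, 15, 25)
    d_high = tri(p_demand_kw, 20, 35, 50)
    soc_low = tri(soc, 0.0, 0.2, 0.5)
    soc_high = tri(soc, 0.4, 0.8, 1.1)
    # Hypothetical rule base: low SOC or high demand pushes load to the FC.
    rules = [
        (min(d_low, soc_high), 0.2),   # light load, healthy battery
        (min(d_med, soc_high), 0.5),
        (min(d_high, soc_high), 0.7),
        (soc_low, 0.9),                # protect the battery at low SOC
    ]
    num = sum(w * share for w, share in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return min(1.0, num / den)


def low_level_tracking(p_ref, p_meas, kp=0.8):
    """Stand-in for the low-level (DRL/TD3) converter loop: a simple
    proportional step toward the reference, just to close the toy loop."""
    return p_meas + kp * (p_ref - p_meas)


# Toy driving-cycle evaluation with the metrics reported in the paper
# (RMSE and mean error of the power-tracking loop).
rng = np.random.default_rng(0)
demand = 20 + 10 * np.sin(np.linspace(0, 6 * np.pi, 300)) + rng.normal(0, 1.5, 300)
soc, p_fc, err = 0.7, 0.0, []
for p_dem in demand:
    share = supervisory_fc_share(p_dem, soc)
    p_fc_ref = min(share * p_dem, P_FC_MAX)   # supervisory layer output
    p_fc = low_level_tracking(p_fc_ref, p_fc)  # low-level converter loop
    p_batt = p_dem - p_fc                      # battery absorbs the transient
    soc = float(np.clip(soc - 1e-4 * p_batt, 0.0, 1.0))  # crude SOC bookkeeping
    err.append(p_fc_ref - p_fc)                # power-tracking error

err = np.asarray(err)
print(f"RMSE = {np.sqrt(np.mean(err**2)):.3f} kW, mean error = {np.mean(err):.3f} kW")
```

In the paper the low-level tracking loops are learned TD3 policies rather than the proportional step used here; the point of the sketch is only to show how the supervisory split, the tracking loops, and the RMSE/mean-error evaluation fit together.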
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11681074 | PMC
http://dx.doi.org/10.1038/s41598-024-81769-1 | DOI Listing