Recent advances in low-light image enhancement (LLIE) have achieved impressive progress. However, the scarcity of paired data has emerged as a significant obstacle to further advancements. In this work, we propose Semi-LLIE, a novel semi-supervised framework that introduces unpaired low- and normal-light images into model training via the mean-teacher paradigm. While the mean-teacher framework is promising, directly applying it to LLIE faces two key challenges. First, pixel-wise consistency losses are insufficient for transferring a realistic illumination distribution from the teacher to the student model. Second, existing image enhancement backbones are not well suited for integration with semi-supervised learning to restore fine-grained details in dark regions. To address these challenges, we propose a semantic-aware contrastive loss that leverages vision-language representations to align illumination semantics and achieve accurate illumination distribution equalization, thereby improving color naturalness in enhanced images. In addition, we design a Mamba-based low-light image enhancement backbone with a multi-scale feature learning scheme that enhances global-local pixel dependency modeling for improved detail restoration. Finally, a novel RAM-based perceptual loss is introduced to guide texture enhancement at the semantic level. Experimental results indicate that Semi-LLIE surpasses existing methods in both quantitative and qualitative metrics. The code and models are available at https://github.com/guanguanboy/Semi-LLIE.
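The mean-teacher paradigm the abstract builds on maintains a teacher model as an exponential moving average (EMA) of the student's weights, and trains the student with a consistency loss between the two models' outputs on unpaired data. A minimal sketch of the EMA update, assuming models are represented as plain parameter dictionaries (the `ema_update` helper and `alpha` value are illustrative, not taken from the paper's code):

```python
def ema_update(teacher_params, student_params, alpha=0.999):
    """EMA step of the mean-teacher paradigm.

    Each teacher parameter moves toward the corresponding student
    parameter: teacher <- alpha * teacher + (1 - alpha) * student.
    """
    return {
        name: alpha * teacher_params[name] + (1.0 - alpha) * student_params[name]
        for name in teacher_params
    }


# Illustrative usage with scalar "weights":
teacher = {"w": 1.0}
student = {"w": 0.0}
teacher = ema_update(teacher, student, alpha=0.9)
```

Because `alpha` is close to 1, the teacher changes slowly, providing stable targets for the student's consistency loss on unpaired low-light images.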
DOI: http://dx.doi.org/10.1016/j.neunet.2025.108010