
Learning Guided Implicit Depth Function With Scale-Aware Feature Fusion

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Recently, single-image super-resolution based on implicit image functions has become a hot topic, as it learns a universal model for arbitrary upsampling scales. By contrast, color-guided depth map super-resolution based on implicit function learning is less explored. The related research faces three questions. First, is it also necessary and applicable to fuse the depth feature and the color feature in the encoder under continuous upsampling scales? Second, is the scale information as important in the encoder as it is in the decoder? Third, how can the affinity of location distance and content similarity across domains be modeled efficiently and effectively in the decoder? This paper proposes a transformer-based network to answer these questions, comprising a depth super-resolution branch and a guidance extraction branch. Specifically, in the encoder, an effective implicit cross transformer is designed to fuse guidance from the color feature with continuous coordinate mapping, and unrelated guidance is filtered out by correlation evaluation in the high-dimensional feature space. Unlike prior work that introduces the scale only in the decoder, this paper additionally embeds the scale into the position encoding and the feed-forward network of the encoder to learn a scale-aware feature representation. In the decoder, the high-resolution depth feature is reconstructed using an internal prior and external guidance: the internal prior is implemented by implicit self-attention in the depth super-resolution branch, while the external guidance is exploited via implicit cross-attention between the two branches. Finally, the decoded features are fused complementarily to generate the high-resolution depth map. Extensive experiments on synthetic and real datasets, for both in-distribution and out-of-distribution upsampling scales, validate the improved performance. The code and models are public at https://github.com/NaNRan13/GIDF.
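To make the scale-aware fusion concrete, below is a minimal, illustrative PyTorch sketch of one encoder block: implicit cross-attention that fuses depth tokens (queries) with color-guidance tokens (keys/values), injecting the continuous coordinates and the upsampling scale into both the position encoding and the feed-forward network, as the abstract describes. All names, shapes, and design choices here are assumptions made for illustration, not the authors' implementation; their actual code is in the linked GIDF repository.

    import torch
    import torch.nn as nn

    class ScaleAwareCrossAttention(nn.Module):
        # Illustrative sketch: fuses depth tokens (queries) with color-guidance
        # tokens (keys/values) under a continuous upsampling scale.
        def __init__(self, dim: int = 64, heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            # Scale-aware position encoding: maps (x, y, 1/scale) to `dim` channels.
            self.pos_mlp = nn.Sequential(
                nn.Linear(3, dim), nn.ReLU(), nn.Linear(dim, dim))
            # Feed-forward network also conditioned on the scale, mirroring the
            # paper's idea of embedding the scale in the encoder FFN.
            self.ffn = nn.Sequential(
                nn.Linear(dim + 1, dim * 2), nn.GELU(), nn.Linear(dim * 2, dim))

        def forward(self, depth_feat, color_feat, coords, scale: float):
            # depth_feat, color_feat: (B, N, dim) flattened feature tokens.
            # coords: (B, N, 2) continuous (x, y) positions in [-1, 1].
            # scale: arbitrary (possibly non-integer) upsampling factor.
            s = torch.full_like(coords[..., :1], 1.0 / scale)   # (B, N, 1)
            pos = self.pos_mlp(torch.cat([coords, s], dim=-1))  # (B, N, dim)
            q = depth_feat + pos                 # queries: depth branch
            kv = color_feat + pos                # keys/values: guidance branch
            fused, _ = self.attn(q, kv, kv)      # implicit cross-attention
            out = self.ffn(torch.cat([fused, s], dim=-1))  # scale-conditioned FFN
            return depth_feat + out              # residual connection

    # Usage: the same weights handle any scale, matching the arbitrary-scale setting.
    block = ScaleAwareCrossAttention()
    depth = torch.randn(2, 256, 64)
    color = torch.randn(2, 256, 64)
    coords = torch.rand(2, 256, 2) * 2 - 1
    hr_feat = block(depth, color, coords, scale=3.7)  # (2, 256, 64)

Note that the correlation-based filtering of unrelated guidance mentioned in the abstract is omitted from this sketch; one plausible variant would mask attention weights whose query-key similarity falls below a learned threshold.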


Source
http://dx.doi.org/10.1109/TIP.2025.3570571

Publication Analysis

Top Keywords (occurrences)

scale-aware feature: 8
based implicit: 8
upsampling scales: 8
depth map: 8
depth feature: 8
color feature: 8
depth super-resolution: 8
super-resolution branch: 8
high-resolution depth: 8
internal prior: 8
