
Depth estimation from light fields via epipolar geometry and an axial attention mechanism

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Light-field depth estimation plays a pivotal role in various applications. This technology facilitates the creation of immersive 3D environments in virtual and augmented reality, and supports real-time environmental perception for enhanced autonomous driving safety. The academic community widely recognizes that the epipolar plane image (EPI) contains essential depth cues. To further explore this characteristic, we analyze the linear texture of EPI patches from the vector-sequence perspective, through which we find that the coupling relationship between sequences can represent the complex morphology of EPI strips. Moreover, we discover that using the horizontal and vertical EPIs of an object point as depth-estimation metadata aligns well with the axial-attention calculation method. Building upon these findings, we design an EGAA model, which combines EPI Geometry and an Axial-Attention mechanism. EGAA's encoder module is designed to process multi-directional image volumes, where directional features are independently extracted before undergoing comprehensive fusion encoding. At the heart of this encoder lies a sophisticated axial attention block, which integrates dual attention mechanisms: horizontal attention and vertical attention. EGAA's decoder is composed of stacked hourglass-shaped decoding blocks. These blocks are implemented by convolutional neural networks and simultaneously receive the skip connections from the encoding layer and the output of the previous decoder layer. We carried out comparative experiments and ablation experiments on both synthetic and real light-field datasets. The experimental results show that the EGAA model exhibits excellent performance in both quantitative and qualitative comparisons.
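The dual horizontal/vertical attention the abstract describes can be illustrated with a minimal NumPy sketch. This is not the authors' EGAA implementation (which uses learned projections inside a CNN encoder); it is a bare axial-attention pass over a hypothetical EPI-like feature map, where each position first attends only to positions in its own row (horizontal attention), then only to positions in its own column (vertical attention), with residual connections:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def row_attention(x):
    """Self-attention restricted to rows of x (H, W, C): each position
    attends only to positions sharing its row (the horizontal axis)."""
    scores = np.einsum('hwc,hvc->hwv', x, x) / np.sqrt(x.shape[-1])
    return np.einsum('hwv,hvc->hwc', softmax(scores), x)

def axial_attention(x):
    """One axial-attention block: horizontal attention, then vertical
    attention (row attention on the transposed map), with residuals."""
    x = x + row_attention(x)
    x_t = np.transpose(x, (1, 0, 2))          # swap H and W
    x = x + np.transpose(row_attention(x_t), (1, 0, 2))
    return x

# Toy EPI-like feature volume: height x width x channels.
feat = np.random.default_rng(0).normal(size=(4, 6, 8))
out = axial_attention(feat)
print(out.shape)  # (4, 6, 8)
```

The design point this sketch captures is the cost argument for axial attention: full 2D self-attention over an H x W map scores all (HW)^2 pairs, whereas attending along one axis at a time scores only HW(H + W) pairs while still letting information propagate across the whole map after the two passes.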


Source: http://dx.doi.org/10.1364/OE.558649

Publication Analysis

Top Keywords

depth estimation (8)
axial attention (8)
egaa model (8)
hourglass-shaped decoding (8)
decoding blocks (8)
light-field datasets (8)
attention (5)
estimation light (4)
light fields (4)
fields epipolar (4)
