Augmented Reality (AR) has long been expected to help users improve their working efficiency. However, due to the absence of intelligent systems, existing AR applications suffer from virtual content interfering with real-world activities. Unlike existing work, which focuses on hiding virtual content to reduce interference, we propose an AR Task Support System in which virtual content actively guides users through task completion. During task execution, our system proactively searches for and tracks key objects in the scene and uses this context information to automatically select appropriate virtual content and display positions. By introducing open-world, prompt-based visual models, our system can effectively retrieve few-shot or even zero-shot objects that are uncommon in training datasets. This extends the AR Task Support System beyond controlled industrial settings to uncontrolled daily scenarios, overcoming the limitations of existing systems, and it significantly reduces development costs. We demonstrate the advantages of our system over traditional virtual content management systems through a series of experiments that more closely reflect users' real usage situations.
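The open-world, prompt-based retrieval the abstract describes can be illustrated with an off-the-shelf zero-shot object detector. The following is a minimal sketch, not the authors' implementation: it assumes the OWL-ViT checkpoint exposed through the Hugging Face transformers zero-shot-object-detection pipeline, and the object prompts and frame source are placeholders.

```python
# Minimal sketch of prompt-based, zero-shot object retrieval for AR task support.
# Assumes the Hugging Face transformers zero-shot-object-detection pipeline with
# an OWL-ViT checkpoint; object prompts and the image path are hypothetical.
from PIL import Image
from transformers import pipeline

detector = pipeline(
    task="zero-shot-object-detection",
    model="google/owlvit-base-patch32",
)

# Text prompts name the task-relevant objects; they need not appear in any
# closed-set training vocabulary (the zero-shot case described in the abstract).
task_objects = ["screwdriver", "coffee grinder", "first-aid kit"]

frame = Image.open("scene_frame.jpg")  # placeholder for a live camera frame
detections = detector(frame, candidate_labels=task_objects, threshold=0.3)

# Each detection carries a label, score, and bounding box; an AR layer could
# anchor guidance content next to the highest-scoring task object.
for det in detections:
    print(det["label"], round(det["score"], 2), det["box"])
```

In a full system, the detected boxes would feed a tracker and a placement policy that chooses where to render guidance, but those components are outside the scope of this sketch.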
Download full-text PDF | Source
---|---
http://dx.doi.org/10.1109/TVCG.2025.3567346 | DOI Listing