Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Drones are extensively used in both military and civilian domains. Eliminating the dependence of drone positioning systems on GNSS while improving their accuracy is of significant research value. This paper presents a novel approach that employs a real-scene 3D model and image-based point cloud reconstruction for autonomous drone positioning, achieving high positioning accuracy. First, the real-scene 3D model constructed in this paper is segmented according to a predetermined format to obtain an image dataset and a 3D point cloud dataset. Next, images are captured in real time by the monocular camera mounted on the drone; a preliminary position estimate is obtained through image matching, and a 3D point cloud is reconstructed from the acquired images. The corresponding real-scene 3D point cloud is then extracted from the point cloud dataset according to the image-matching results. Finally, the reconstructed point cloud is registered against the real-scene 3D point cloud, and the drone's positioning coordinates are obtained by applying a pose estimation algorithm. Experimental results demonstrate that the proposed approach enables precise autonomous positioning of drones in complex urban environments, achieving a positioning accuracy of up to 0.4 m.
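The final step of the pipeline, estimating the drone's pose by aligning the reconstructed point cloud with the real-scene point cloud, can be sketched with the Kabsch algorithm, which recovers the rigid transform between two clouds once correspondences are known. This is a minimal sketch, not the paper's actual method: the paper does not specify its registration algorithm, and in practice correspondences would come from a matching step such as ICP. The example data (`cloud`, `R_true`, `t_true`) is hypothetical.

```python
import numpy as np

def estimate_pose(src, dst):
    """Rigid transform (R, t) with dst ≈ R @ src + t, via the Kabsch algorithm.
    src, dst: (N, 3) arrays of corresponding 3D points."""
    c_src = src.mean(axis=0)
    c_dst = dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections (det = -1 solutions)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical example: a reconstructed cloud expressed in the camera frame,
# and the same points in the georeferenced real-scene (map) frame.
rng = np.random.default_rng(0)
cloud = rng.random((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([10.0, -4.0, 2.5])
map_cloud = cloud @ R_true.T + t_true

R, t = estimate_pose(cloud, map_cloud)
# t is the camera-frame origin expressed in map coordinates,
# i.e. the drone's position in the real-scene model.
```

With noise-free correspondences the recovered `R` and `t` match the ground truth exactly; with real reconstructions, the residual after registration bounds the achievable positioning accuracy.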

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11722939
DOI: http://dx.doi.org/10.3390/s25010209

Publication Analysis

Top Keywords

point cloud: 28
autonomous positioning: 12
positioning systems: 8
real-scene model: 8
cloud reconstruction: 8
positioning drones: 8
positioning accuracy: 8
cloud dataset: 8
cloud data: 8
positioning: 7

Similar Publications

3D Structural Phenotype of the Optic Nerve Head in Glaucoma and Myopia - A Key to Improving Glaucoma Diagnosis in Myopic Populations.

Am J Ophthalmol

September 2025

Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Duke-NUS Graduate Medical School, Singapore; Department of Ophthalmology, Emory University School of Medicine, Emory University; Department of Biomedical Engineering, Georgia Institute of Technology/Emory University, Atlanta

Purpose: To characterize the 3D structural phenotypes of the optic nerve head (ONH) in patients with glaucoma, high myopia, and concurrent high myopia and glaucoma, and to evaluate their variations across these conditions.

Design: Retrospective cross-sectional study.

Participants: A total of 685 optical coherence tomography (OCT) scans from 754 subjects of Singapore-Chinese ethnicity, including 256 healthy (H), 94 highly myopic (HM), 227 glaucomatous (G), and 108 highly myopic with glaucoma (HMG) cases.

Methods: We segmented the retinal and connective tissue layers from OCT volumes, and their boundary edges were converted into 3D point clouds.

View Article and Find Full Text PDF

Inter-modality feature prediction through multimodal fusion for 3D shape defect detection.

Neural Netw

September 2025

School of Automation and Intelligent Sensing, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 200240, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China.

3D shape defect detection plays an important role in autonomous industrial inspection. However, accurate detection of anomalies remains challenging due to the complexity of multimodal sensor data, especially when both color and structural information are required. In this work, we propose a lightweight inter-modality feature prediction framework that effectively utilizes multimodal fused features from the inputs of RGB, depth and point clouds for efficient 3D shape defect detection.

View Article and Find Full Text PDF

This work reports the nanoscale micellar formation in single and mixed surfactant systems by combining an amphiphilic graft copolymer, Soluplus® (primary surfactant), blended with other polyoxyethylene (POE)-based nonionic surfactants such as Kolliphor® HS15, Kolliphor® EL, Tween-80, TPGS®, and Pluronics® P123 in an aqueous solution environment. The solution behaviour of these surfactants as single systems was analyzed over a wide range of surfactant concentrations and temperatures. Rheological measurements revealed distinct solution behaviour in the case of Soluplus®, ranging from low-viscosity, fluid-like behavior at ≤20% w/v to a highly viscous state at ≥90% w/v, where the loss modulus (G″) exceeded the storage modulus (G′).

View Article and Find Full Text PDF

Background And Objectives: Stroke is a leading cause of long-term disability. Etanercept, a competitive tumor necrosis factor-α inhibitor, has been proposed as a potential treatment for post-stroke impairments when given through a perispinal subcutaneous injection. We aimed to evaluate the safety and efficacy of perispinal etanercept in patients with chronic stroke.

View Article and Find Full Text PDF

Multi-modal data fusion plays a critical role in enhancing the accuracy and robustness of perception systems for autonomous driving, especially for the detection of small objects. However, small object detection remains particularly challenging due to sparse LiDAR points and low-resolution image features, which often lead to missed or imprecise detections. Currently, many methods process LiDAR point clouds and visible-light camera images separately, and then fuse them in the detection head.

View Article and Find Full Text PDF