RGB-D cameras have been commercialized, and many applications using them have been proposed. In this paper, we propose a robust registration method for multiple RGB-D cameras. We use the human body tracking system provided by the Azure Kinect SDK to estimate a coarse global registration between cameras. Because this coarse global registration contains some error, we refine it using feature matching. However, the matched feature pairs include mismatches, which hinder performance. We therefore propose a registration refinement procedure that removes these mismatches while exploiting the coarse global registration. In our experiment, the ratio of inliers among the matched features exceeds 95% for all tested feature matchers. We thus experimentally confirm that the proposed method eliminates mismatches even in difficult situations and yields a more precise global registration of RGB-D cameras.
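The pipeline described above (coarse registration, feature matching, mismatch removal, refined estimate) can be illustrated with a residual-threshold inlier filter followed by a Kabsch/SVD re-estimation of the rigid transform. This is a minimal numpy sketch under assumed data shapes and an assumed 5 cm threshold, not the paper's actual procedure:

```python
import numpy as np

def refine_registration(src, dst, coarse_R, coarse_t, inlier_thresh=0.05):
    """Filter matched 3D point pairs using a coarse transform, then
    re-estimate the rigid transform from the surviving inliers.
    src, dst: (N, 3) matched points; coarse_R, coarse_t: coarse pose.
    inlier_thresh (meters) is an assumed value, not from the paper."""
    # Residual of each match under the coarse registration
    residuals = np.linalg.norm(src @ coarse_R.T + coarse_t - dst, axis=1)
    inliers = residuals < inlier_thresh          # drop likely mismatches
    s, d = src[inliers], dst[inliers]
    # Kabsch: optimal rotation between the centered inlier sets
    sc, dc = s - s.mean(0), d - d.mean(0)
    U, _, Vt = np.linalg.svd(sc.T @ dc)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                           # reflection-safe rotation
    t = d.mean(0) - R @ s.mean(0)
    return R, t, inliers
```

With clean correspondences the Kabsch step recovers the exact transform; injected mismatches are rejected by the residual test before re-estimation.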
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC7867328 | PMC
http://dx.doi.org/10.3390/s21031013 | DOI Listing
Front Robot AI
July 2025
Electrical and Computer Engineering Department, Lebanese American University, Byblos, Lebanon.
This paper presents a multi-robot collaborative manipulation framework, implemented in the Gazebo simulation environment, designed to enable mobile manipulators to execute autonomous tasks in dynamic environments with dense obstacles. The system consists of multiple mobile robot platforms, each equipped with a robotic manipulator, a simulated RGB-D camera, and a 2D LiDAR sensor on the mobile base, facilitating task coordination, object detection, and advanced collision avoidance within a simulated warehouse setting. A leader-follower architecture governs collaboration, allowing teams to form dynamically to tackle tasks requiring combined effort, such as transporting heavy objects.
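As a rough illustration of the leader-follower idea, a follower's goal pose can be derived from the leader's pose plus a formation offset expressed in the leader's frame. The function below is a hypothetical 2D sketch with assumed frame conventions, not the paper's controller:

```python
import numpy as np

def follower_target(leader_xy, leader_yaw, offset):
    """Goal position for a follower: the desired offset (given in the
    leader's frame) rotated into the world frame and added to the
    leader's position. Purely illustrative of the formation pattern."""
    c, s = np.cos(leader_yaw), np.sin(leader_yaw)
    R = np.array([[c, -s], [s, c]])  # 2D rotation by the leader's yaw
    return leader_xy + R @ offset
```

For example, a follower holding one meter behind the leader keeps offset `(-1, 0)` regardless of where the leader turns.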
Precision Livestock Farming (PLF) has evolved dramatically from basic monitoring systems to sophisticated artificial intelligence (AI)-driven decision support systems that enhance livestock management efficiency, sustainability, and animal welfare. This review examines the technological evolution of PLF since 2017, highlighting significant advancements in sensing technologies, computer vision, and artificial intelligence. Non-invasive technologies, including RGB-D cameras, 3D imaging systems, and IoT-enabled platforms, now capture detailed biometric and behavioral data in real time, while AI algorithms enable early disease detection, optimize feeding strategies, and improve reproductive management.
View Article and Find Full Text PDFSensors (Basel)
August 2025
Department of Electrical and Electronics Engineering, Izmir Katip Celebi University, Cigli, 35620 Izmir, Türkiye.
High-precision 6D pose estimation for pick-and-place operations remains a critical problem for industrial robot arms in manufacturing. This study introduces an analytics-based solution for 6D pose estimation designed for a real-world industrial application: it enables the Staubli TX2-60L (manufactured by Stäubli International AG, Horgen, Switzerland) robot arm to pick up metal plates from various locations and place them into a precisely defined slot on a brake pad production line. The system uses a fixed eye-to-hand Intel RealSense D435 RGB-D camera (manufactured by Intel Corporation, Santa Clara, California, USA) to capture color and depth data.
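The eye-to-hand setup described here rests on two standard steps: back-projecting a detected pixel with its depth through the camera intrinsics (pinhole model), then mapping the resulting point into the robot base frame via the fixed camera-to-base extrinsic. A minimal sketch; all intrinsics and the transform below are placeholder values, not the system's calibration:

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into a camera-frame
    3D point using the pinhole model."""
    x = (u - cx) / fx * depth_m
    y = (v - cy) / fy * depth_m
    return np.array([x, y, depth_m])

def camera_to_base(p_cam, T_base_cam):
    """Map a camera-frame point into the robot base frame using the
    fixed eye-to-hand extrinsic (a 4x4 homogeneous transform)."""
    return (T_base_cam @ np.append(p_cam, 1.0))[:3]
```

In a fixed eye-to-hand configuration `T_base_cam` is calibrated once and reused, which is what makes single-camera pick poses repeatable across the workspace.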
IEEE Trans Med Robot Bionics
November 2024
Department of Radiology, Harvard Medical School, Boston, MA 02115 USA.
Mastectomy is often coupled with breast reconstruction surgery (BRS) to reconstruct the breast mound. However, BRS is challenging and subject to the surgeon's judgement in determining the amount of tissue to be harvested and the shape of the reconstructed breast. To date, the existing tools aimed at maintaining the symmetry and appearance of the reconstructed breast are costly.
IEEE Trans Pattern Anal Mach Intell
August 2025
Depth completion and super-resolution are crucial tasks for comprehensive RGB-D scene understanding, as they involve reconstructing the precise 3D geometry of a scene from sparse or low-resolution depth measurements. However, most existing methods either rely solely on 2D depth representations or directly incorporate raw 3D point clouds for compensation, which remain insufficient to capture the fine-grained 3D geometry of the scene. In this paper, we introduce a Tri-Perspective View Decomposition (TPVD) framework that can explicitly model 3D geometry.
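As a toy analogue of viewing a scene from three perspectives, a point cloud can be splatted onto three orthogonal depth maps, keeping the nearest sample per cell. This sketch only illustrates the geometric decomposition; the paper's TPVD is a learned network, and the grid size and bounds below are assumptions:

```python
import numpy as np

def tpv_project(points, grid=64, bound=1.0):
    """Splat an (N, 3) point cloud onto three orthogonal planes
    (top, front, side), keeping the minimum depth per cell.
    Illustrative only -- not the TPVD architecture."""
    # Map coordinates in [-bound, bound] to integer grid indices
    idx = np.clip(((points + bound) / (2 * bound) * grid).astype(int),
                  0, grid - 1)
    views = {k: np.full((grid, grid), np.inf) for k in ("top", "front", "side")}
    for (ix, iy, iz), p in zip(idx, points):
        views["top"][ix, iy] = min(views["top"][ix, iy], p[2])     # depth along z
        views["front"][ix, iz] = min(views["front"][ix, iz], p[1])  # depth along y
        views["side"][iy, iz] = min(views["side"][iy, iz], p[0])    # depth along x
    return views
```

Each cell left at infinity marks a region with no observation, which is exactly where a completion model must hallucinate geometry from the other two views and the RGB image.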