Objective: To develop the first known deep learning-based photoacoustic visual servoing system utilizing point source localization and hybrid position-force control to track catheter tips in three dimensions in real time.
Methods: We integrated either object detection- or instance segmentation-based localization with hybrid position-force control to create our novel system. Cardiac catheter tips were then tracked in real time across distances of 40 mm in a plastisol phantom and 25-64 mm in an in vivo swine, across a total of nine visual servoing trials.
Results: Object detection-based localization identified the cardiac catheter tip in 88.0-91.7% and 66.7-70.4% of phantom and in vivo channel data frames, respectively. Instance segmentation-based detection rates ranged from 86.4% to 100.0% in vivo. Catheter tips were tracked with errors as low as 0.5 mm in phantom trials and 0.8 mm in in vivo trials. Mean inference times were ≥145.3 ms with object detection-based and ≥516.3 ms with instance segmentation-based point source localization. The hybrid position-force control system maintained contact with the imaging surface during ≥99.43% of each visual servoing trial.
Conclusion: Our novel deep learning-based photoacoustic visual servoing system was successfully demonstrated. Object detection-based localization operated with inference times more suitable for real-time implementation, while instance segmentation achieved lower tracking errors.
Significance: After the suggested optimizations are implemented, our novel system has the potential to track catheter tips, needle tips, and other surgical tool tips in real time during surgical and interventional procedures.
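To make the control scheme described in the abstract more concrete, the sketch below outlines one plausible iteration of such a servoing loop in Python: a deep learning detector localizes the catheter tip in a photoacoustic channel data frame, in-plane position control re-centers the tip in the field of view, and proportional force regulation maintains probe contact along the axial direction. The `detector`, `robot`, and `force_sensor` interfaces, the gains, and the 2 N force setpoint are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of one hybrid position-force visual servoing step,
# assuming hypothetical interfaces for the detector, robot, and force sensor.
import numpy as np

def servo_step(channel_frame, detector, robot, force_sensor,
               desired_force_n=2.0, kp_lateral=0.5, kf=0.001):
    """One illustrative servoing iteration.

    Lateral/elevational axes: position control toward the detected tip.
    Axial (contact) axis: proportional force regulation against the surface.
    """
    # 1) Deep learning point source localization on the photoacoustic
    #    channel data frame (object detection or instance segmentation).
    detection = detector.predict(channel_frame)      # hypothetical API
    if detection is None:
        return None                                  # no tip found; hold pose

    tip_xy = np.asarray(detection.center_mm)         # tip position in image plane (mm)

    # 2) Position control in the image plane: command motion that brings
    #    the tip back toward the center of the field of view.
    error_xy = -tip_xy                               # image center assumed at (0, 0)
    lateral_cmd = kp_lateral * error_xy              # proportional position command (mm)

    # 3) Force control along the probe axis: keep gentle contact with the
    #    imaging surface at the desired force.
    force_error = desired_force_n - force_sensor.read_axial_n()
    axial_cmd = kf * force_error                     # proportional force-to-displacement

    # 4) Send the combined hybrid command to the robot.
    robot.move_relative(dx=lateral_cmd[0], dy=lateral_cmd[1], dz=axial_cmd)
    return tip_xy
```

In a real system this loop would run at the frame rate of the imaging system, and the paper reports that inference time (≥145.3 ms for object detection, ≥516.3 ms for instance segmentation) is the dominant factor in how fast such a loop can execute.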
DOI: http://dx.doi.org/10.1109/TBME.2025.3584076
Front Plant Sci
August 2025
School of Future Technology, Fujian Agriculture and Forestry University, Fuzhou, China.
With the development of smart agriculture, fruit-picking robots have attracted widespread attention as a key technology for improving agricultural productivity. Visual perception plays a crucial role in these robots, encompassing precise fruit identification, localization, and grasping. This paper reviews research progress in visual perception technology for fruit-picking robots, focusing on key technologies such as the camera types used in picking robots, object detection, picking point recognition and localization, active vision, and visual servoing.
Sensors (Basel)
August 2025
Institute for Mechatronics Engineering and Cyber-Physical Systems (IMECH.UMA), University of Malaga, 29071 Malaga, Spain.
The integration of robotics and mobile networks (5G/6G) through the Internet of Robotic Things (IoRT) is revolutionizing telemedicine, enabling remote physician participation in scenarios where specialists are scarce, where they would face high risk (e.g., conflicts or natural disasters), or where access to a medical facility is not possible. Nevertheless, safely touching a human with a robotic arm in non-engineered or even out-of-hospital environments presents substantial challenges. This article presents a novel IoRT approach for healthcare in or from remote areas, enabling interaction between a specialist's hand and a robotic hand.
Sensors (Basel)
July 2025
School of Automation, Harbin University of Science and Technology, Harbin 150080, China.
Blind and visually impaired (BVI) people face significant challenges in perception, navigation, and safety during travel. Existing infrastructure (e.g.
Sci Robot
July 2025
Department of Computer Science and Engineering, Chinese University of Hong Kong, HKSAR, China.
Surgical robots capable of autonomously performing various tasks could enhance efficiency and augment human productivity in addressing clinical needs. Although current solutions have automated specific actions within defined contexts, they are challenging to generalize across diverse environments in general surgery. Embodied intelligence enables general-purpose robot learning with applications for daily tasks, yet its application in the medical domain remains limited.
IEEE Trans Cybern
September 2025
With the advancement of robotic-assisted minimally invasive surgery, visual servo control has become a crucial technique for improving surgical outcomes. However, traditional visual servo methods often rely on precise kinematic models and camera calibration, limiting their generalizability. To address these limitations, this article proposes a novel uncalibrated, model-free visual servo control scheme.