Category Ranking: 98%

Total Visits: 921

Avg Visit Duration: 2 minutes

Citations: 20

Article Abstract

Objective: To develop the first known deep learning-based photoacoustic visual servoing system utilizing point source localization and hybrid position-force control to track catheter tips in three dimensions in real time.

Methods: We integrated either object detection-based or instance segmentation-based localization with hybrid position-force control to create our novel system. Cardiac catheter tips were then tracked in real time across distances of 40 mm in a plastisol phantom and 25-64 mm in an in vivo swine, in nine visual servoing trials total.

Results: Object detection-based localization identified the cardiac catheter tip in 88.0-91.7% and 66.7-70.4% of phantom and in vivo channel data frames, respectively. Instance segmentation-based detection rates ranged from 86.4% to 100.0% in vivo. These catheter tips were tracked with errors as low as 0.5 mm in the phantom trials and 0.8 mm in the in vivo trials. The mean inference times were ≥145.3 ms and ≥516.3 ms with object detection-based and instance segmentation-based point source localization, respectively. The hybrid position-force control system maintained contact with the imaging surface during ≥99.43% of each visual servoing trial.

Conclusion: Our novel deep learning-based photoacoustic visual servoing system was successfully demonstrated. Object detection-based localization operated with inference times more suitable for real-time implementation, while instance segmentation-based localization achieved lower tracking errors.

Significance: After implementing the suggested optimizations, our novel system has the potential to track catheter tips, needle tips, and other surgical tool tips in real time during surgical and interventional procedures.

Source
http://dx.doi.org/10.1109/TBME.2025.3584076 (DOI listing)

Publication Analysis

Top Keywords

visual servoing: 20
catheter tips: 16
deep learning-based: 12
learning-based photoacoustic: 12
photoacoustic visual: 12
servoing system: 12
localization hybrid: 12
hybrid position-force: 12
position-force control: 12
point source: 8

Similar Publications

With the development of smart agriculture, fruit picking robots have attracted widespread attention as one of the key technologies to improve agricultural productivity. Visual perception technology plays a crucial role in fruit picking robots, involving precise fruit identification, localization, and grasping operations. This paper reviews the research progress in the visual perception technology for fruit picking robots, focusing on key technologies such as camera types used in picking robots, object detection techniques, picking point recognition and localization, active vision, and visual servoing.

The IoRT-in-Hand: Tele-Robotic Echography and Digital Twins on Mobile Devices.

Sensors (Basel)

August 2025

Institute for Mechatronics Engineering and Cyber-Physical Systems (IMECH.UMA), University of Malaga, 29071 Malaga, Spain.

The integration of robotics and mobile networks (5G/6G) through the Internet of Robotic Things (IoRT) is revolutionizing telemedicine, enabling remote physician participation in scenarios where specialists are scarce, where there is a high risk to them, such as in conflicts or natural disasters, or where access to a medical facility is not possible. Nevertheless, touching a human safely with a robotic arm in non-engineered or even out-of-hospital environments presents substantial challenges. This article presents a novel IoRT approach for healthcare in or from remote areas, enabling interaction between a specialist's hand and a robotic hand.

Blind and visually impaired (BVI) people face significant challenges in perception, navigation, and safety during travel. Existing infrastructure (e.g.

Surgical robots capable of autonomously performing various tasks could enhance efficiency and augment human productivity in addressing clinical needs. Although current solutions have automated specific actions within defined contexts, they are challenging to generalize across diverse environments in general surgery. Embodied intelligence enables general-purpose robot learning with applications for daily tasks, yet its application in the medical domain remains limited.

With the advancement of robotic-assisted minimally invasive surgery, visual servo control has become a crucial technique for improving surgical outcomes. However, traditional visual servo methods often rely on precise kinematic models and camera calibration, limiting their generalizability. Considering these, this article proposes a novel uncalibrated model-free visual servo control scheme.
