Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Robot-assisted surgery (RAS) is transforming modern healthcare by enhancing precision, reducing human error, and improving patient outcomes. A crucial step toward fully autonomous robotic surgery is the accurate, real-time recognition of surgical instruments. In this work, we present a comprehensive surgical instrument dataset, SID-RAS, which comprises 6,000 high-resolution images categorized into nine distinct classes: cotton, episiotomy scissors, forceps, gloves, hemostats, mayo, scalpel, stitch scissors, and syringe. To ensure the dataset's diversity and to simulate real-world surgical scenarios, multiple augmentations were applied, including motion blur, varying lighting conditions (low light and high brightness), simulated blood stains, and 360-degree rotation. The dataset was evaluated using YOLOv10 (nano, small, medium) and YOLOv11 (nano, small, medium) object detection models to assess their effectiveness in recognizing and localizing surgical instruments in real time. On average, the models attained a mean Average Precision (mAP) of 99.3% and an F1-score of 99.2%, demonstrating the quality of the SID-RAS dataset for surgical tool detection. These findings contribute to the preliminary development of AI-driven robotic surgical assistance systems, which can be extended to various types of surgeries.
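For orientation, the sketch below shows one way an evaluation like the one described in the abstract could be run with the Ultralytics YOLO API. It is a minimal illustration only: the data config name (sid_ras.yaml), checkpoint choices, and hyperparameters are assumptions, not the authors' published setup.

```python
# Minimal sketch of training and validating YOLOv10 / YOLOv11 detectors on a
# YOLO-format dataset such as SID-RAS. "sid_ras.yaml" and all hyperparameters
# are illustrative assumptions, not the authors' exact configuration.
from ultralytics import YOLO

# Nano checkpoints shown; the small and medium variants follow the same pattern.
for checkpoint in ["yolov10n.pt", "yolo11n.pt"]:
    model = YOLO(checkpoint)

    # Fine-tune on the nine instrument classes defined in the data config.
    model.train(data="sid_ras.yaml", epochs=100, imgsz=640, batch=16)

    # Validate and report detection metrics comparable to those in the abstract.
    metrics = model.val(data="sid_ras.yaml")
    print(checkpoint, "mAP50:", metrics.box.map50, "mAP50-95:", metrics.box.map)
```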


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12269440
DOI: http://dx.doi.org/10.1016/j.dib.2025.111798

Publication Analysis

Top Keywords

dataset surgical (8)
surgical instrument (8)
autonomous robotic (8)
robotic surgery (8)
surgical instruments (8)
nano small (8)
small medium (8)
surgical (7)
robust dataset (4)
instrument detection (4)

Similar Publications

EndoChat: Grounded multimodal large language model for endoscopic surgery.

Med Image Anal

August 2025

The Chinese University of Hong Kong, 999077, Hong Kong Special Administrative Region of China.

Recently, Multimodal Large Language Models (MLLMs) have demonstrated their immense potential in computer-aided diagnosis and decision-making. In the context of robotic-assisted surgery, MLLMs can serve as effective tools for surgical training and guidance. However, there is still a deficiency of MLLMs specialized for surgical scene understanding in endoscopic procedures.


Ultrasonographic Analysis of Site-Specific Plantar Skin Thickness for Melanoma Staging and Excision.

Clin Anat

September 2025

Division in Anatomy and Developmental Biology, Department of Oral Biology, Human Identification Research Institute, BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, South Korea.

Plantar melanomas present unique diagnostic and surgical challenges owing to substantial regional variations in skin thickness. Although the Breslow thickness remains the primary criterion for staging and surgical excision, its application to plantar melanomas is complicated by the inherent thickness of the glabrous plantar epidermis, which may lead to overestimation of tumor depth. Accurate assessment of plantar skin thickness is essential for optimizing staging accuracy and refining surgical margins.


GESur_Net: attention-guided network for surgical instrument segmentation in gastrointestinal endoscopy.

Med Biol Eng Comput

September 2025

Key Laboratory of Mechanism Theory and Equipment Design of Ministry of Education, Tianjin University, Tianjin, 300072, China.

Surgical instrument segmentation plays an important role in autonomous robotic surgical navigation systems because it can accurately locate surgical instruments and estimate their posture, helping surgeons understand the position and orientation of the instruments. However, several problems still limit segmentation accuracy, such as insufficient attention to the edges and centers of surgical instruments and insufficient use of low-level feature details. To address these issues, a lightweight network for surgical instrument segmentation in gastrointestinal (GI) endoscopy (GESur_Net) is proposed.


Cervical cancer remains a significant cause of female mortality worldwide, primarily due to abnormal cell growth in the cervix. This study proposes an automated classification method to enhance detection accuracy and efficiency, addressing contrast and noise issues in traditional diagnostic approaches. The impact of image enhancement on classification performance is evaluated by comparing transfer learning-based Convolutional Neural Network (CNN) models trained on both original and enhanced images.
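As a purely illustrative aid (not the study's code), the sketch below shows the general pattern of comparing a transfer-learning classifier trained on original versus enhanced images; the folder names, backbone, and training settings are hypothetical assumptions.

```python
# Illustrative only: comparing transfer-learning classifiers trained on original
# vs. enhanced cervical-cell images. The directory names ("original/", "enhanced/",
# one subfolder per class), backbone, and epoch count are hypothetical placeholders.
import tensorflow as tf

def build_model(num_classes: int) -> tf.keras.Model:
    # Frozen ImageNet backbone with a small classification head.
    base = tf.keras.applications.EfficientNetB0(include_top=False, pooling="avg")
    base.trainable = False
    return tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

for variant in ["original", "enhanced"]:
    train_ds = tf.keras.utils.image_dataset_from_directory(
        variant, image_size=(224, 224), batch_size=32)
    model = build_model(num_classes=len(train_ds.class_names))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(train_ds, epochs=5)
    print(variant, "final training accuracy:", history.history["accuracy"][-1])
```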


Background: Historically, cosmetic surgery has been primarily utilized by White patients. However, in recent decades, the population in the United States has become increasingly diversified. It is unknown how these national demographic changes have affected the racial and ethnic distribution of those utilizing cosmetic surgical services.
