A large image dataset created to support the development of insect recognition algorithms such as YOLO. The dataset contains more than 25,000 annotations recording the taxonomy of urban insects at the order level and the localization of each insect (as a bounding box) on a scanned image. The images of flying insects were collected using UV light traps placed in food warehouses, manufacturing facilities and grocery stores in urban environments. The traps, equipped with UVA lamps (365 nm), captured a variety of insect species on sticky cards over 7-10 days. The sticky traps with all captured insects were scanned at high resolution (1200 dpi, 48-bit colour), preserving fine morphological details such as the antennae. To prepare the dataset for computer vision and deep learning detection tasks, annotation was performed in CVAT, with bounding boxes labelled by entomology experts at the order level. The dataset is intended to let computer scientists and entomologists compare the performance of deep learning models for building automatic detection systems for urban insect diversity or pest control studies.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12159940 | PMC |
| http://dx.doi.org/10.1016/j.dib.2025.111673 | DOI Listing |
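To train a detector such as YOLO on annotations like those described above, CVAT's absolute-pixel bounding boxes (top-left and bottom-right corners) must be converted to YOLO's normalized center/size format. A minimal sketch of that conversion, assuming the common CVAT corner convention (the dataset's exact export schema is not stated in the abstract):

```python
def to_yolo(xtl, ytl, xbr, ybr, img_w, img_h):
    """Convert an absolute-pixel box (top-left corner xtl, ytl and
    bottom-right corner xbr, ybr, as in CVAT exports) to YOLO's
    normalized (center_x, center_y, width, height) format."""
    cx = (xtl + xbr) / 2 / img_w   # box center, as a fraction of image width
    cy = (ytl + ybr) / 2 / img_h   # box center, as a fraction of image height
    w = (xbr - xtl) / img_w        # box width, normalized
    h = (ybr - ytl) / img_h        # box height, normalized
    return cx, cy, w, h

# e.g. a 100x50 px box at the origin of a 200x100 px scan:
# to_yolo(0, 0, 100, 50, 200, 100) -> (0.25, 0.25, 0.5, 0.5)
```

Each YOLO label line then pairs these four numbers with an integer class index for the insect order.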
BMC Musculoskelet Disord
September 2025
Department of Clinical Sciences at Danderyds Hospital, Department of Orthopedic Surgery, Karolinska Institutet, Stockholm, 182 88, Sweden.
Background: This study evaluates the accuracy of an Artificial Intelligence (AI) system, specifically a convolutional neural network (CNN), in classifying elbow fractures using the detailed 2018 AO/OTA fracture classification system.
Methods: A retrospective analysis of 5,367 radiograph exams visualizing the elbow from adult patients (2002-2016) was conducted using a deep neural network. Radiographs were manually categorized according to the 2018 AO/OTA system by orthopedic surgeons.
Med Eng Phys
October 2025
Biomedical Device Technology, Istanbul Aydın University, Istanbul, 34093, Turkey.
Deep learning approaches have improved disease diagnosis efficiency. However, AI-based decision systems lack sufficient transparency and interpretability. This study aims to enhance the explainability and training performance of deep learning models using explainable artificial intelligence (XAI) techniques for brain tumor detection.
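The abstract does not name the specific XAI techniques used, but one common model-agnostic approach is occlusion sensitivity: mask patches of the input and measure how the model's score drops. A minimal sketch, with all names illustrative:

```python
import numpy as np

def occlusion_map(image, predict, patch=8):
    """Occlusion sensitivity: slide a zero-valued patch over a 2D image
    and record the drop in the model's score, producing a coarse map of
    regions the model relies on. `predict` is any callable mapping an
    image to a scalar score (a hypothetical stand-in for a trained CNN)."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0
            # larger drop => region more important to the prediction
            heat[i // patch, j // patch] = base - predict(occluded)
    return heat
```

With `predict = lambda im: float(im.mean())` on a 16x16 image of ones, each occluded 8x8 patch removes a quarter of the mass, so every heatmap cell equals 0.25.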
AJNR Am J Neuroradiol
September 2025
From the Department of Diagnostic Radiology (E.W., A.D., C.J.M., M.C., M.K.G.) and Department of Pathology (L.Y.B.), MD Anderson Cancer Center, Houston, TX, USA; Department of Radiology and Biomedical Imaging (L.T., J.M.J.), Yale University, New Haven, CT, USA.
Background And Purpose: Brain imaging with MRI or CT is standard in screening for intracranial disease among ambulatory cancer patients. Although MRI offers greater sensitivity, CT is frequently employed due to its accessibility, affordability, and faster acquisition time. However, the necessity of routinely performing a non-contrast CT with the contrast-enhanced study is unknown.
AJNR Am J Neuroradiol
September 2025
From the Department of Radiology, Memorial Sloan Kettering Cancer Center, New York, New York, United States of America (J.S.S., B.M., S.H., A.H., J.S.), and Department of Aerospace Engineering, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India (H.S.).
Background And Purpose: The choroid of the eye is a rare site for metastatic tumor spread, and as small lesions on the periphery of brain MRI studies, these choroidal metastases are often missed. To improve their detection, we aimed to use artificial intelligence to distinguish between brain MRI scans containing normal orbits and choroidal metastases.
Materials And Methods: We present a novel hierarchical deep learning framework for sequential cropping and classification on brain MRI images to detect choroidal metastases.
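The two-stage idea of sequential cropping and classification can be sketched as follows; the function and parameter names are illustrative, and the paper's actual network architecture is not described in the abstract:

```python
import numpy as np

def crop_then_classify(mri_slice, crop_box, classifier):
    """Hierarchical two-stage sketch: first crop the region of interest
    (e.g., the orbits at the periphery of a brain MRI slice), then hand
    the crop to a classifier (e.g., normal orbit vs. choroidal metastasis).
    `classifier` is any callable on a 2D array (a hypothetical model)."""
    y0, y1, x0, x1 = crop_box
    roi = mri_slice[y0:y1, x0:x1]  # localization stage
    return classifier(roi)          # classification stage
```

Cropping first concentrates the classifier's capacity on small peripheral lesions that are easily lost when a network sees the full field of view.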
J R Soc Interface
September 2025
Institute of Intelligent Systems and Robotics, Sorbonne Université, Paris, Île-de-France, France.
A number of techniques have been developed to measure the three-dimensional trajectories of protists, but they require special experimental set-ups, such as a pair of orthogonal cameras. Alternatively, machine learning techniques have been used to estimate the vertical position of spherical particles from the defocus pattern, but they require acquiring a labelled dataset with finely spaced vertical positions. Here, we describe a simple way to build a dataset of images labelled with vertical position from a single 5 min movie, using a tilted-slide set-up.
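The geometric trick behind the tilted-slide set-up is that on a slide tilted by a known angle, a particle's depth grows linearly with its position along the tilt axis, so horizontal position in the image directly labels vertical position. A minimal sketch under that assumption (function name and units are illustrative; the paper's exact calibration is not given in the abstract):

```python
import math

def depth_labels_from_tilt(x_positions, tilt_deg):
    """For a slide tilted by tilt_deg, depth z = x * tan(tilt_deg),
    where x is the position along the tilt axis (same length unit as z).
    Each detected particle's x coordinate thus yields a depth label
    for its defocused image, with no z-stage scan required."""
    slope = math.tan(math.radians(tilt_deg))
    return [x * slope for x in x_positions]
```

For example, at a 45-degree tilt a particle 100 um along the tilt axis sits 100 um deep, so a single movie sweeps out a finely spaced range of labelled depths.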