Article Synopsis

  • Neuromorphic vision sensors, or event cameras, enable ultra-fast visual perception but struggle with capturing edges parallel to motion due to intrinsic limitations.
  • Inspired by human microsaccades—tiny involuntary eye movements—the authors designed a system called the artificial microsaccade-enhanced event camera (AMI-EV) that incorporates a rotating wedge prism to improve texture stability.
  • Testing shows that AMI-EV significantly outperforms standard cameras and other event cameras in real-world scenarios, improving robots' performance on both low-level and high-level vision tasks.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Neuromorphic vision sensors, or event cameras, have made visual perception with extremely low reaction time possible, opening new avenues for high-dynamic robotics applications. The output of these event cameras depends on both motion and texture. However, an event camera fails to capture object edges that are parallel to the camera motion. This problem is intrinsic to the sensor and therefore challenging to solve algorithmically. Human vision deals with perceptual fading through an active mechanism of small involuntary eye movements, the most prominent of which are called microsaccades. By moving the eyes constantly and slightly during fixation, microsaccades substantially maintain texture stability and persistence. Inspired by microsaccades, we designed an event-based perception system capable of simultaneously maintaining a low reaction time and a stable texture. In this design, a rotating wedge prism was mounted in front of the aperture of an event camera to redirect light and trigger events. The geometrical optics of the rotating wedge prism allows for algorithmic compensation of the additional rotational motion, resulting in a stable texture appearance and high informational output independent of external motion. The hardware device and software solution are integrated into a system that we call the artificial microsaccade-enhanced event camera (AMI-EV). Benchmark comparisons validated the superior data quality of AMI-EV recordings in scenarios where both standard cameras and event cameras fail to deliver. Various real-world experiments demonstrated the potential of the system to facilitate robotics perception for both low-level and high-level vision tasks.
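The compensation idea in the abstract can be sketched in a few lines: a wedge prism spinning at a fixed rate deviates the optical axis by a constant angle, so every scene point traces a circle of known radius and angular speed on the sensor, and that circular offset can be subtracted from each event using its timestamp. The sketch below is a minimal illustration of this geometry, not the authors' actual pipeline; the event layout `(x, y, t, polarity)` and the calibration parameters `radius_px`, `omega_rad_s`, and `phase` are assumptions for the example.

```python
import numpy as np

def compensate_prism_motion(events, radius_px, omega_rad_s, phase=0.0):
    """Undo the circular image shift induced by a rotating wedge prism.

    A prism rotating at angular speed `omega_rad_s` deviates the optical
    axis by a fixed angle, so every scene point traces a circle of radius
    `radius_px` on the sensor. Subtracting that known circular offset at
    each event's timestamp yields a motion-stabilized event stream.

    events: array of shape (N, 4) with columns (x, y, t, polarity).
    """
    x, y, t, _ = events.T
    dx = radius_px * np.cos(omega_rad_s * t + phase)
    dy = radius_px * np.sin(omega_rad_s * t + phase)
    out = events.copy()
    out[:, 0] = x - dx
    out[:, 1] = y - dy
    return out

# A stationary scene point observed through the rotating prism produces
# events on a circle; after compensation they collapse to one location.
t = np.linspace(0.0, 0.1, 200)            # 100 ms of events
omega = 2 * np.pi * 50                    # hypothetical 50 Hz prism spin
xs = 120.0 + 8.0 * np.cos(omega * t)      # 8-px deviation circle
ys = 90.0 + 8.0 * np.sin(omega * t)
ev = np.stack([xs, ys, t, np.ones_like(t)], axis=1)

stab = compensate_prism_motion(ev, radius_px=8.0, omega_rad_s=omega)
```

In this toy case the compensated events all land at (120, 90), which is the "stable texture appearance" the abstract describes: the prism-induced motion keeps triggering events, while the compensation removes it from the recorded geometry.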


Source
http://dx.doi.org/10.1126/scirobotics.adj8124

Publication Analysis

Top Keywords

  • event camera (16)
  • event cameras (8)
  • low reaction (8)
  • reaction time (8)
  • stable texture (8)
  • rotating wedge (8)
  • wedge prism (8)
  • event (6)
  • camera (5)
  • microsaccade-inspired event (4)

Similar Publications

Accurate observations at birth and during newborn resuscitation are fundamental for quality improvement initiatives and research. However, manual data collection methods often lack consistency and objectivity, are not scalable, and may raise privacy concerns. The NewbornTime project aims to develop an AI system that generates accurate timelines from birth and newborn resuscitation events by automated video recording and processing, providing a source of objective and consistent data.


Event-based sensors (EBS), with their low latency and high dynamic range, are a promising means for tracking unresolved point-objects. Conventional EBS centroiding methods assume the generated events follow a Gaussian distribution and require long event streams (>1 s) for accurate localization. However, these assumptions are inadequate for centroiding unresolved objects, since the EBS circuitry causes non-Gaussian event distributions, and because using long event streams negates the low-latency advantage of EBS.
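The conventional baseline this snippet critiques can be sketched directly: under the Gaussian assumption, the maximum-likelihood centroid is simply the mean event position over a time window. The sketch below illustrates that baseline on synthetic data, not the paper's improved method; the event layout `(x, y, t)` and the source position are assumptions for the example.

```python
import numpy as np

def event_centroid(events, t_start, t_end):
    """Naive centroid of events in [t_start, t_end): the mean event
    position, which is the maximum-likelihood location estimate when
    events are Gaussian-distributed about the true object position."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    mask = (t >= t_start) & (t < t_end)
    if not mask.any():
        return None
    return float(x[mask].mean()), float(y[mask].mean())

# Synthetic events scattered (Gaussian, by construction) around a point
# source at (64, 32) over a 1-second stream.
rng = np.random.default_rng(0)
n = 5000
ev = np.column_stack([
    64.0 + rng.normal(0.0, 1.5, n),   # x with 1.5-px scatter
    32.0 + rng.normal(0.0, 1.5, n),   # y with 1.5-px scatter
    rng.uniform(0.0, 1.0, n),         # timestamps in seconds
])
cx, cy = event_centroid(ev, 0.0, 1.0)
```

With truly Gaussian scatter and a long window this mean converges on the source; the snippet's point is that real EBS circuitry breaks both assumptions, so a shorter window or non-Gaussian model is needed to keep the latency advantage.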


Background And Objectives: Multiple sclerosis (MS) is common in adults while myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD) is rare. Our previous machine-learning algorithm, using clinical variables, ≤6 brain lesions, and no Dawson fingers, achieved 79% accuracy, 78% sensitivity, and 80% specificity in distinguishing MOGAD from MS but lacked validation. The aim of this study was to (1) evaluate the clinical/MRI algorithm for distinguishing MS from MOGAD, (2) develop a deep learning (DL) model, (3) assess the benefit of combining both, and (4) identify key differentiators using probability attention maps (PAMs).


Background: Patient-specific dosimetry in radiopharmaceutical therapy (RPT) offers a promising approach to optimize the balance between treatment efficacy and toxicity. The introduction of 360° CZT gamma cameras enables the development of personalized dosimetry studies using whole-body single photon emission computed tomography and computed tomography (SPECT/CT) data.

Purpose: This study proposes to validate the collapsed-cone superposition (CCS) approach against Monte Carlo (MC) simulations for whole-body dosimetry of [177Lu]Lu-PSMA-617 therapy in patients with metastatic castration resistant prostate cancer (mCRPC).


An array of photomultiplier tubes (PMTs) provides energy readout for gamma cameras, leading to event selection and positioning. However, operational and environmental changes, such as temperature, can cause PMTs to "drift" away from their nominal energy readouts and, therefore, require a correction procedure to return to their reference energies. We present two methods for determining the energy-scale change of each PMT using data collected on C-SPECT, a dedicated cardiac single photon emission computational tomography (SPECT) scanner.
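The correction procedure described here can be illustrated with the simplest possible model: if a PMT's measured photopeak has drifted from its reference position, a one-point multiplicative rescaling returns its readout to the reference energy scale. This is a generic sketch of that idea, assuming a linear energy response; the snippet does not describe the paper's two actual methods, and the 140 keV Tc-99m photopeak used below is an illustrative choice.

```python
def gain_correction(reference_peak, measured_peak):
    """Multiplicative energy-scale factor for one PMT: rescales its
    readout so the measured photopeak lands back on the reference.
    Assumes a linear (gain-only) drift model."""
    return reference_peak / measured_peak

def correct_energy(raw_energy, factor):
    """Apply a PMT's gain-correction factor to a raw energy readout."""
    return raw_energy * factor

# A PMT whose 140 keV photopeak has drifted down to 133 keV:
f = gain_correction(140.0, 133.0)
restored = correct_energy(133.0, f)   # back on the 140 keV reference
```

In practice each PMT gets its own factor, recomputed as operating conditions (e.g. temperature) change, so that event selection and positioning keep using a consistent energy scale across the array.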
