Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Humans can label and categorize objects in a visual scene with high accuracy and speed, a capacity well characterized with studies using static images. However, motion is another cue that could be used by the visual system to classify objects. To determine how motion-defined object category information is processed by the brain in the absence of luminance-defined form information, we created a novel stimulus set of "object kinematograms" to isolate motion-defined signals from other sources of visual information. Object kinematograms were generated by extracting motion information from videos of 6 object categories and applying the motion to limited-lifetime random dot patterns. Using functional magnetic resonance imaging (fMRI; n = 15, 40% women), we investigated whether category information from the object kinematograms could be decoded within the occipitotemporal and parietal cortex and evaluated whether the information overlapped with category responses to static images from the original videos. We decoded object category for both stimulus formats in all higher-order regions of interest (ROIs). More posterior occipitotemporal and ventral regions showed higher accuracy in the static condition, while more anterior occipitotemporal and dorsal regions showed higher accuracy in the dynamic condition. Further, decoding across the two stimulus formats was possible in all regions. These results demonstrate that motion cues can elicit widespread and robust category responses on par with those elicited by static luminance cues, even in ventral regions of visual cortex that have traditionally been associated with primarily image-defined form processing.

Much research on visual object recognition has focused on recognizing objects in static images. However, motion is a rich source of information that humans might also use to categorize objects. Here, we present the first study to compare neural representations of several animate and inanimate objects when category information is presented in two formats: static cues or isolated dynamic motion cues. Our study shows that, while higher-order brain regions differentially process object categories depending on format, they also contain robust, abstract category representations that generalize across format. These results expand our previous understanding of motion-derived animate and inanimate object category processing and provide useful tools for future research on object category processing driven by multiple sources of visual information.
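To make the cross-format analysis concrete, here is a minimal sketch of within- and cross-format category decoding in Python with scikit-learn. It is not the authors' pipeline: the data are synthetic, and the trial counts, voxel counts, and noise level are hypothetical placeholders. It only illustrates the idea of training a classifier on response patterns from the dynamic (kinematogram) condition and testing it on static-image patterns.

```python
# Minimal sketch of within- and cross-format category decoding.
# NOT the authors' pipeline: data are synthetic and all sizes/parameters
# (trials, voxels, noise level) are hypothetical placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 200, 6   # 6 object categories, as in the study

# Simulated ROI response patterns: a shared category "template" plus noise,
# so that some category signal is common to both stimulus formats.
labels = rng.integers(0, n_categories, n_trials)        # category label per trial
templates = rng.normal(size=(n_categories, n_voxels))   # one pattern per category
dynamic = templates[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))
static = templates[labels] + rng.normal(scale=2.0, size=(n_trials, n_voxels))

# Within-format decoding: cross-validated classification inside the dynamic condition
within = cross_val_score(LinearSVC(dual=False), dynamic, labels, cv=5).mean()

# Cross-format decoding: train on dynamic (kinematogram) patterns, test on static patterns
clf = LinearSVC(dual=False).fit(dynamic, labels)
cross = clf.score(static, labels)

print(f"within-format accuracy: {within:.2f}")
print(f"cross-format accuracy:  {cross:.2f}  (chance = {1 / n_categories:.2f})")
```

Above-chance accuracy on the cross-format test is the signature of a format-general category representation of the kind reported in the abstract.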

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC9888510
DOI: http://dx.doi.org/10.1523/JNEUROSCI.0371-22.2022

Publication Analysis

Top Keywords

object category (20)
static images (12)
category (10)
object (9)
category representations (8)
categorize objects (8)
images motion (8)
sources visual (8)
visual object (8)
object kinematograms (8)

Similar Publications

Visual search relies on the ability to use information about the target in working memory to guide attention and make target-match decisions. The 'attentional' or 'target' template is thought to be encoded within an inferior frontal junction (IFJ)-visual attentional network. While this template typically contains veridical target features, behavioral studies have shown that target-associated information, such as statistically co-occurring object pairs, can also guide attention.

Object identification has been widely used in many applications, relying on data annotated with bounding boxes to specify each object's exact location and category in images and videos. However, relatively little research has been conducted on identifying plant species in their natural environments. Natural habitats play a crucial role in preserving biodiversity, ecological balance, and overall ecosystem health.

The impact of scene inversion on early scene-selective activity.

Biol Psychol

September 2025

Department of Psychology, Wright State University, Dayton, OH.

Category-selectivity is a ubiquitous property of high-level visual cortex manifested in distinct cortical responses to faces, objects, and scenes. These signatures emerge early during visual processing, with each category sensitive to specific types of visual information at different time points. However, it is still not clear what information is extracted during early scene-selective processing, as scenes are rich, complex, and multidimensional stimuli.

Neurophysiological markers of the global/local biases in face perception.

Cortex

August 2025

Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Campus de Cantoblanco, Madrid, Spain.

Global/local biases in the visual processing of structurally complex stimuli occur under certain conditions in the observer. Previous experiments using hierarchical letters (large letters made up of small ones) have reported a global precedence in young adults. Here, we aimed to identify neurophysiological markers of a possible global/local bias during the implicit processing of new faces.

To explore the clinical application value of high-resolution CT (HRCT) in the diagnosis and staging of occupational pneumoconiosis, the relevant imaging manifestations and diagnostic results of HRCT and digital radiography (DR) were compared. A total of 180 pneumoconiosis patients at different stages, diagnosed at Guangzhou Twelfth People's Hospital from January 2022 to May 2023, were selected as research subjects by systematic sampling, and both HRCT and DR examinations were performed. How well the two examination methods displayed the lung imaging features of pneumoconiosis was analyzed, and the chi-square test and rank-sum test were used to compare differences in diagnostic staging results and in the detection of pulmonary complications.
