Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Traditionally, concepts are assumed to be situationally invariant mental knowledge entities (conceptual stability), which are represented in a unitary brain system distinct from sensory and motor areas (amodality). However, accumulating evidence suggests that concepts are embodied in perception and action in that their conceptual features are stored within modality-specific semantic maps in the sensory and motor cortex. Nonetheless, the first traditional assumption of conceptual stability largely remains unquestioned. Here, we tested the notion of flexible concepts using functional magnetic resonance imaging and event-related potentials (ERPs) during the verification of two attribute types (visual, action-related) for words denoting artifactual and natural objects. Functional imaging predominantly revealed crossover interactions between category and attribute type in visual, motor, and motion-related brain areas, indicating that access to conceptual knowledge is strongly modulated by attribute type: Activity in these areas was highest when nondominant conceptual attributes had to be verified. ERPs indicated that these category-attribute interactions emerged as early as 116 msec after stimulus onset, suggesting that they reflect rapid access to conceptual features rather than postconceptual processing. Our results suggest that concepts are situation-dependent mental entities. They are composed of semantic features that are flexibly recruited from distributed, yet localized, semantic maps in modality-specific brain regions depending on contextual constraints.
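
To make the reported crossover pattern concrete, here is a toy illustration of how a category-by-attribute interaction contrast is computed; the brain region and the response values are hypothetical and are not taken from the study.

```python
# Minimal sketch (illustrative values, not study data): a 2x2 crossover
# interaction contrast on per-condition response estimates for one region.
# Conditions cross object category (artifactual, natural) with the verified
# attribute type (visual, action-related).
import numpy as np

# Rows: category (artifactual, natural); columns: attribute (visual, action).
betas = np.array([[0.2, 0.9],   # artifactual words
                  [0.8, 0.3]])  # natural words

# Interaction contrast: (artifactual_action - artifactual_visual)
#                     - (natural_action - natural_visual)
interaction = (betas[0, 1] - betas[0, 0]) - (betas[1, 1] - betas[1, 0])
print(f"crossover interaction estimate: {interaction:.2f}")
# A nonzero value means the attribute effect reverses across categories,
# i.e., the crossover pattern the abstract describes.
```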

Source
http://dx.doi.org/10.1162/jocn.2008.20123

Publication Analysis

Top Keywords

semantic maps (12), visual motor (8), motor motion-related (8), conceptual stability (8), sensory motor (8), conceptual features (8), attribute type (8), access conceptual (8), conceptual (7), conceptual flexibility (4)

Similar Publications

The neuroscience of creativity has proposed that both shared and domain-specific brain mechanisms underlie creative thinking. However, greater nuance is needed in characterizing these mechanisms, and the limited scope of existing neuroimaging analyses, especially regarding the relationship between the Alternative Uses Task (AUT) and other linguistic tasks, has so far prevented a comprehensive understanding of the neural basis of creativity. This paper aims to fill these gaps with a closer examination of the contributions of specific domains and of the deactivations associated with creativity.

In industrial scenarios, semantic segmentation of surface defects is vital for identifying, localizing, and delineating defects. However, new defect types constantly emerge with product iterations or process updates. Existing defect segmentation models lack incremental learning capabilities, and direct fine-tuning (FT) often leads to catastrophic forgetting.

Most existing small object detection methods rely on residual blocks to process deep feature maps. However, these residual blocks, composed of multiple large-kernel convolution layers, incur high computational costs and contain redundant information, which makes it difficult to improve detection performance for small objects. To address this, we designed an improved feature pyramid network called L Feature Pyramid Network (L-FPN), which optimizes the allocation of computational resources for small object detection by reconstructing the original FPN structure.
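
For readers unfamiliar with the feature pyramid structure that L-FPN reconstructs, the sketch below shows a minimal, generic top-down FPN in PyTorch; the module name, channel widths, and layer choices are illustrative assumptions and do not reproduce the paper's L-FPN.

```python
# Generic FPN top-down pathway (illustrative baseline, not the paper's L-FPN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleFPN(nn.Module):
    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # 1x1 lateral convs project each backbone stage to a common width.
        self.lateral = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels
        )
        # 3x3 output convs smooth the merged maps.
        self.output = nn.ModuleList(
            nn.Conv2d(out_channels, out_channels, kernel_size=3, padding=1)
            for _ in in_channels
        )

    def forward(self, feats):
        # feats: backbone maps ordered from high resolution to low resolution.
        laterals = [l(f) for l, f in zip(self.lateral, feats)]
        # Top-down pass: upsample the coarser map and add it to the finer lateral.
        for i in range(len(laterals) - 1, 0, -1):
            laterals[i - 1] = laterals[i - 1] + F.interpolate(
                laterals[i], size=laterals[i - 1].shape[-2:], mode="nearest"
            )
        return [conv(x) for conv, x in zip(self.output, laterals)]

# Example: three backbone stages (strides 8/16/32) for a 256x256 input.
feats = [torch.randn(1, 256, 32, 32),
         torch.randn(1, 512, 16, 16),
         torch.randn(1, 1024, 8, 8)]
pyramid = SimpleFPN()(feats)
```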

To detect changes in our visual environments, the visual system compares pre- and post-change representations maintained in active working memory. Previous research has suggested that change detection is primarily informed by high-level semantics in naturalistic scenes. Here, across two experiments, we used meaning maps, a data-driven method for measuring visual semantic information in naturalistic scenes, to investigate whether semantic features predict visual change detection in a flicker paradigm.

To improve the unmanned aerial vehicle (UAV) detection and recognition rate of radar-based detection, this paper proposes using the radar range-Doppler map, which characterizes the UAV's echo information, as the input to an improved YOLOv8 network (YOLOv8n-RFL) that detects and identifies the UAV target. In this method, the UAV echo signal is first acquired by radar and, starting from the received echo model, converted into range-Doppler maps using the principle of range-Doppler plane generation; the improved YOLOv8 network is then trained on these maps to detect the UAV target. In the detection algorithm, the range-Doppler map is fed to the YOLOv8n backbone, where the C2f-RVB and C2f-RVBE modules extract the UAV target from the complex background and yield feature maps containing multi-scale UAV information. Shallow features from the backbone and deep features from the neck are integrated by the feature semantic fusion module (FSFM) to generate high-quality fused UAV feature maps with rich detail and deep semantic information, and a lightweight shared detection head (LWSD) then recognizes the UAV from the fused feature maps.
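
As background for the range-Doppler input described above, here is a simplified sketch of turning a slow-time by fast-time echo matrix into a range-Doppler image with FFTs (an FMCW-style chain); the function, parameters, and example target are assumptions for illustration and do not reproduce the paper's processing.

```python
# Simplified range-Doppler map formation (illustrative, FMCW-style assumptions).
import numpy as np

def range_doppler_map(echo, window=True):
    """echo: complex array of shape (num_pulses, num_range_samples),
    i.e., slow time x fast time. Returns a dB-scaled magnitude map."""
    if window:
        echo = echo * np.hanning(echo.shape[0])[:, None]  # suppress Doppler sidelobes
    rd = np.fft.fft(echo, axis=1)                          # fast-time FFT -> range bins
    rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)   # slow-time FFT -> Doppler bins
    return 20 * np.log10(np.abs(rd) + 1e-12)

# Hypothetical example: 64 pulses x 512 range samples of noise plus one target.
echo = (np.random.randn(64, 512) + 1j * np.random.randn(64, 512)) * 0.1
echo[:, 200] += np.exp(2j * np.pi * 0.2 * np.arange(64))   # moving target at range bin 200
rd_map = range_doppler_map(echo)  # image-like input for a detector such as YOLOv8
```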
