Over the past few decades, researchers have argued that playing action video games can substantially improve cognitive abilities and enhance learning. However, consensus has not been reached regarding the mechanisms through which action game experience facilitates superior performance on untrained perceptually and cognitively demanding transfer tasks. We argue that analysis of behaviors engaged in during transfer task performance may provide key insights into answering this question. In the current investigation, we examined potential action game effects in the context of a complex psychomotor task, the Space Fortress (SF) game, that allows for the detailed examination of player behaviors beyond aggregate score reports. Performance (game score) was compared between action video game players (VGPs) and non-gamers (nVGPs) in two different control interface conditions (keyboard or joystick), followed by analyses of behaviors associated with superior performance. Against expectations, VGPs displayed superior performance only in the keyboard condition, suggesting that the action gamer advantage may not generalize to less-familiar control interfaces. Performance advantages were specifically associated with more efficient ship control behaviors by VGPs. Findings highlight how process-tracing approaches may provide insight into the nature of, and mechanisms producing, action gamers' advantages on learning untrained tasks.
DOI: http://dx.doi.org/10.1016/j.actpsy.2022.103718
J Vis Exp
August 2025
West China Second Hospital, Sichuan University; Key Laboratory of Birth Defects and Related Diseases of Women and Children (Sichuan University), Ministry of Education.
Current network pharmacology methods for TCM formulae predict potential active ingredients and mechanisms of action for specific diseases by constructing a formula-ingredient-target-pathway-disease network. Syndrome differentiation and treatment, a core principle of TCM, embodies its holistic and dynamic approach, emphasizing individual variability and disease progression. By integrating syndrome characteristics into this network, researchers can develop multi-target drugs tailored to each disease stage, enabling precise and personalized treatment.
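The layered network described above (formula → ingredient → target → pathway → disease) can be sketched as a simple directed graph. This is a minimal illustration only: every node name below is a hypothetical placeholder, not data from any real formula, and the traversal is generic reachability, not a specific TCM analysis method.

```python
# Hypothetical formula-ingredient-target-pathway-disease edges.
# Node names are illustrative placeholders, not real entities.
edges = [
    ("FormulaA", "IngredientX"),   # formula contains ingredient
    ("FormulaA", "IngredientY"),
    ("IngredientX", "TargetT1"),   # ingredient binds protein target
    ("IngredientY", "TargetT2"),
    ("TargetT1", "PathwayP1"),     # target participates in pathway
    ("TargetT2", "PathwayP1"),
    ("PathwayP1", "DiseaseD"),     # pathway is implicated in disease
]

def downstream(node, edges):
    """Return all nodes reachable from `node` along directed edges."""
    reached, frontier = set(), [node]
    while frontier:
        cur = frontier.pop()
        for src, dst in edges:
            if src == cur and dst not in reached:
                reached.add(dst)
                frontier.append(dst)
    return reached

# Linking a formula through its ingredients and targets to a disease:
print("DiseaseD" in downstream("FormulaA", edges))  # True
```

Syndrome characteristics could be integrated by adding syndrome nodes between pathway and disease layers, so that reachability queries become stage- and syndrome-specific.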
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit
June 2025
The advancement of Multimodal Large Language Models (MLLMs) has enabled significant progress in multi-modal understanding, expanding their capacity to analyze video content. However, existing evaluation benchmarks for MLLMs primarily focus on abstract video comprehension and lack a detailed assessment of their ability to understand video compositions: the nuanced interpretation of how visual elements combine and interact within highly compiled video contexts. We introduce VidComposition, a new benchmark specifically designed to evaluate the video composition understanding capabilities of MLLMs using carefully curated compiled videos and cinematic-level annotations.
J Adv Nurs
September 2025
College of Nursing, Brigham Young University, Provo, Utah, USA.
Aims: To explore the lived experiences of intensive care nurses caring for patients with limited English proficiency.
Design: A hermeneutic, interpretive phenomenological design was used.
Methods: Semi-structured interviews were conducted with intensive care nurses recruited through purposive sampling.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit
June 2025
Multimodal large language models (MLLMs) have recently shown significant advancements in video understanding, excelling in content reasoning and instruction-following tasks. However, hallucination, where models generate inaccurate or misleading content, remains underexplored in the video domain. Building on the observation that MLLM visual encoders often fail to distinguish visually different yet semantically similar video pairs, we introduce VIDHALLUC, the largest benchmark designed to examine hallucinations in MLLMs for video understanding.