Background: History-taking is crucial in medical training. However, current methods often lack consistent feedback and standardized evaluation, and access to standardized patient (SP) resources is limited. Artificial intelligence (AI)-powered simulated patients offer a promising solution; however, challenges such as human-AI consistency, evaluation stability, and transparency remain underexplored in multicase clinical scenarios.
Objective: This study aimed to develop and validate the AI-Powered Medical History-Taking Training and Evaluation System (AMTES), based on DeepSeek-V2.5 (DeepSeek), to assess its stability, human-AI consistency, and transparency in clinical scenarios with varying symptoms and difficulty levels.
Methods: We developed AMTES, a system that uses multiple strategies to ensure dialog quality and reliable automated assessment. A prospective study with 31 medical students evaluated AMTES's performance across 3 cases of varying complexity: a simple case (cough), a moderate case (frequent urination), and a complex case (abdominal pain). To validate our design, we conducted systematic baseline comparisons to measure the incremental improvement from each level of our design approach, and we tested the framework's generalizability by implementing it with an alternative large language model (LLM), Qwen-Max (Qwen AI; version 20250409), under a zero-modification condition.
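As context for the zero-modification LLM swap described above, the following is a minimal, hypothetical sketch of what a model-agnostic evaluation wrapper could look like; the class and method names are illustrative assumptions, not AMTES's actual implementation.

```python
# Hypothetical sketch of a model-agnostic evaluation wrapper.
# "ChatBackend", "HistoryTakingEvaluator", and the rubric prompt are
# illustrative assumptions, not AMTES's actual implementation.
from dataclasses import dataclass
from typing import Protocol


class ChatBackend(Protocol):
    """Any LLM backend (e.g., DeepSeek-V2.5 or Qwen-Max) exposing one method."""

    def complete(self, system_prompt: str, user_prompt: str) -> str: ...


@dataclass
class HistoryTakingEvaluator:
    """Scores a transcript with a fixed rubric prompt, independent of the backend."""

    backend: ChatBackend
    rubric_prompt: str  # checklist items plus instructions to cite transcript evidence

    def score(self, transcript: str) -> str:
        # Swapping `backend` (the zero-modification condition) leaves the rubric untouched.
        return self.backend.complete(self.rubric_prompt, transcript)
```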
Results: A total of 31 students practiced with AMTES, generating 8606 questions across 93 history-taking sessions. AMTES achieved high dialog accuracy: 98.6% (SD 1.5%) for cough, 99.0% (SD 1.1%) for frequent urination, and 97.9% (SD 2.2%) for abdominal pain, with contextual appropriateness exceeding 99%. The system's automated assessments demonstrated exceptional stability and high human-AI consistency, supported by transparent, evidence-based rationales. Specifically, coefficients of variation (CVs) were low for both total scores (0.87%-1.12%) and item-level scoring (0.55%-0.73%). Total score consistency was robust, with intraclass correlation coefficients (ICCs) exceeding 0.923 across all scenarios, indicating strong agreement. Item-level consistency remained above 95%, even for complex cases such as abdominal pain (95.75% consistency). In systematic baseline comparisons, the fully processed system improved ICCs from 0.414/0.500 to 0.923/0.972 (moderate and complex cases), with all CVs ≤1.2% across the 3 cases. A zero-modification implementation of the evaluation framework with an alternative LLM (Qwen-Max) achieved near-identical performance, with item-level consistency rates above 94.5% and ICCs exceeding 0.89. Overall, 87% of students found AMTES helpful, and 83% expressed a desire to use it again in the future.
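As a rough illustration of the stability and consistency metrics reported above, the sketch below computes a coefficient of variation across repeated automated scoring runs and an ICC between human and AI total scores; the numbers, column names, and use of the pingouin package are assumptions for demonstration, not the study's actual analysis code.

```python
# Illustrative computation of CV (scoring stability) and ICC (human-AI consistency).
# All values below are invented; pingouin is one common choice for ICC estimation.
import numpy as np
import pandas as pd
import pingouin as pg

# Five repeated automated scoring runs of the same history-taking session.
repeated_scores = np.array([92.0, 91.5, 92.5, 91.0, 92.0])
cv_percent = repeated_scores.std(ddof=1) / repeated_scores.mean() * 100
print(f"CV = {cv_percent:.2f}%")  # lower CV = more stable repeated AI scoring

# Human vs. AI total scores for the same set of sessions (hypothetical values).
df = pd.DataFrame({
    "session": list(range(6)) * 2,
    "rater": ["human"] * 6 + ["ai"] * 6,
    "score": [88, 92, 75, 81, 95, 70, 87, 93, 74, 82, 94, 71],
})
icc = pg.intraclass_corr(data=df, targets="session", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # e.g., ICC2 for two-way random effects, absolute agreement
```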
Conclusions: Our data showed that AMTES demonstrates significant educational value through its LLM-based virtual SPs, which successfully provided authentic clinical dialogs with high response accuracy and delivered consistent, transparent educational feedback. Combined with strong user approval, these findings highlight AMTES's potential as a valuable, adaptable, and generalizable tool for medical history-taking training across various educational contexts.
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12396829
DOI: http://dx.doi.org/10.2196/73419
JMIR Med Educ
August 2025
Medical Simulation Center, Shantou University Medical College, No. 22 Xinling Road, Shantou, 515041, China, 86 754-88900459.
J Neurol
August 2025
Department of Neurology, Rambam Health Care Campus, Haifa, Israel.
Background And Objectives: Accurate interpretation of electrodiagnostic (EDX) studies is essential for the diagnosis and management of neuromuscular disorders. Artificial intelligence (AI)-based tools may improve the consistency and quality of EDX reporting and reduce workload. The aim of this study is to evaluate the performance of an AI-assisted, multi-agent framework (INSPIRE) in comparison with standard physician interpretation in a randomized controlled trial (RCT).
Laryngoscope
August 2025
Otolaryngology Unit, Santi Paolo E Carlo Hospital, Department of Health Sciences, Università Degli Studi Di Milano, Milan, Italy.
Objectives: Clear, complete operative documentation is essential for surgical safety, continuity of care, and medico-legal standards. Large language models such as ChatGPT offer promise for automating clinical documentation; however, their performance in operative note generation, particularly in surgical subspecialties, remains underexplored. This study aimed to compare the quality, accuracy, and efficiency of operative notes authored by a surgical resident, attending surgeon, GPT alone, and an attending surgeon using GPT as a writing aid.
BMC Med Educ
August 2025
Department of Otolaryngology, Head and Neck Surgery, Lokman Hekim University, Ankara, Turkey.
Background: The quality and reliability of health-related content on YouTube remain a growing concern. This study aimed to evaluate tonsillectomy-related YouTube videos using a multi-method framework that combines human expert review, large language model (ChatGPT-4) analysis, and transcript readability assessment.
Methods: A total of 76 English-language YouTube videos were assessed.
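The readability component mentioned in this entry is not specified in the snippet; as a rough illustration, one common choice is the Flesch Reading Ease score, sketched below with the textstat package (an assumption, not necessarily the metric used in the study).

```python
# Illustrative transcript readability scoring; the study's actual metric is not
# specified in the snippet, so Flesch Reading Ease via textstat is an assumption.
import textstat

transcript = (
    "A tonsillectomy is an operation to remove the tonsils. "
    "Most children go home the same day and recover within two weeks."
)

score = textstat.flesch_reading_ease(transcript)  # higher score = easier to read
print(f"Flesch Reading Ease: {score:.1f}")
```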
J Evid Based Soc Work (2019)
August 2025
Department of Social Work, Faculty of Health Sciences, Recep Tayyip Erdoğan University, Rize, Turkey.
Purpose: This study investigates the inter-rater reliability between human experts (a forensic psychologist and a social worker) and a large language model (LLM) in the assessment of child sexual abuse statements. The research aims to explore the potential, limitations, and consistency of this class of AI as an evaluation tool within the framework of Criteria-Based Content Analysis (CBCA), a widely used method for assessing statement credibility.
Materials And Methods: Sixty-five anonymized transcripts of forensic interviews with child sexual abuse victims (n=65) were independently evaluated by three raters: a forensic psychologist, a social worker, and a large language model (ChatGPT, GPT-4o Plus).
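The snippet does not state which agreement statistic was used for the three raters; as a minimal sketch under that assumption, pairwise Cohen's kappa on binary CBCA criterion codings is one conventional way such inter-rater reliability might be quantified. The codings below are invented for illustration.

```python
# Hypothetical pairwise agreement between three raters on binary CBCA criteria.
# Codings are invented; the study's actual reliability statistic is not shown here.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

ratings = {
    "psychologist": [1, 0, 1, 1, 0, 1, 1, 0],
    "social_worker": [1, 0, 1, 0, 0, 1, 1, 0],
    "llm":           [1, 1, 1, 1, 0, 1, 0, 0],
}

# Compare every pair of raters (human-human and human-LLM).
for (name_a, a), (name_b, b) in combinations(ratings.items(), 2):
    kappa = cohen_kappa_score(a, b)
    print(f"{name_a} vs {name_b}: kappa = {kappa:.2f}")
```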