This study represents one of the initial efforts to analyse a coach-athlete conversational dataset using freely available GPT tools and a pre-determined, context-specific, prompt-based analysis framework (i.e., R-PIASS). One dialogue dataset was analysed with two different freely available AI-based GPT tools: ChatGPT v4 and DeepSeek v3. The results illustrated that both models could extract quantitative and qualitative conversational information from the source material using simple R-PIASS prompt specifiers. The implications for how coaches can use this technology to support their own learning, practice design, and performance analysis centred on the efficiencies both platforms provided in cost, usability, accessibility, and convenience. Despite these strengths, the process also carried risks and pitfalls, such as the limited strength and robustness of the resulting statistical outcomes and the tension between keeping the input data within its context and ensuring that the contextual detail did not breach privacy. Further investigations that engage GPT platforms for coach-athlete dialogue analysis are therefore required to ascertain the true relevance and potential of this type of technology for enhancing coach learning and athlete development.
| Download full-text PDF | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12339518 | PMC |
| http://dx.doi.org/10.3389/fspor.2025.1627685 | DOI Listing |
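The R-PIASS prompt specifiers themselves are not reproduced in this excerpt, and the study used the freely available web interfaces rather than the APIs. As a minimal sketch only, assuming OpenAI-compatible chat APIs for both platforms and an illustrative (not the published) analysis prompt, the workflow described above could look roughly like this:

```python
# Minimal sketch, not the study's actual workflow: send one coach-athlete
# transcript to two OpenAI-compatible chat endpoints with a framework-style
# analysis prompt. The prompt text below is illustrative; it is not the
# published R-PIASS specifier set.
import os
from openai import OpenAI

ANALYSIS_PROMPT = (
    "You are analysing a coach-athlete dialogue. Report: "
    "(1) counts of coach questions versus coach instructions, "
    "(2) the proportion of athlete-initiated turns, and "
    "(3) a brief qualitative summary of the interaction style."
)

def analyse_dialogue(client: OpenAI, model: str, transcript: str) -> str:
    """Return the model's free-text analysis of a single transcript."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": ANALYSIS_PROMPT},
            {"role": "user", "content": transcript},
        ],
        temperature=0,  # favour repeatable output across runs
    )
    return response.choices[0].message.content

# ChatGPT and DeepSeek both expose OpenAI-compatible chat APIs.
chatgpt = OpenAI()  # reads OPENAI_API_KEY from the environment
deepseek = OpenAI(api_key=os.environ["DEEPSEEK_API_KEY"],
                  base_url="https://api.deepseek.com")

with open("dialogue.txt", encoding="utf-8") as f:  # hypothetical transcript file
    transcript = f.read()

print(analyse_dialogue(chatgpt, "gpt-4o", transcript))
print(analyse_dialogue(deepseek, "deepseek-chat", transcript))
```

In line with the privacy tension the abstract raises, any identifying details would need to be stripped from the transcript before it is sent to either platform.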
J Physician Assist Educ
September 2025
Andrew P. Chastain, DMS, PA-C, is an assistant professor at Butler University, Indianapolis, Indiana.
Introduction: Artificial intelligence tools show promise in supplementing traditional physician assistant (PA) education, particularly in developing clinical reasoning skills. However, limited research exists on custom Generative Pretrained Transformer (GPT) applications in PA education. This study evaluated student experiences and perceptions of a custom GPT-based clinical reasoning tool.
Acta Neurochir (Wien)
September 2025
Department of Neurosurgery, Istinye University, Istanbul, Turkey.
Background: Recent studies suggest that large language models (LLMs) such as ChatGPT are useful tools for medical students or residents when preparing for examinations. These studies, especially those conducted with multiple-choice questions, emphasize that the level of knowledge and response consistency of the LLMs are generally acceptable; however, further optimization is needed in areas such as case discussion, interpretation, and language proficiency. Therefore, this study aimed to evaluate the performance of six distinct LLMs for Turkish and English neurosurgery multiple-choice questions and assess their accuracy and consistency in a specialized medical context.
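The abstract does not state how accuracy and consistency were computed; the sketch below is a simple illustration under the assumption that each model answered the same multiple-choice items more than once, with accuracy scored against the key and consistency defined as agreement across repeated runs.

```python
# Illustrative sketch only, not the study's scoring code: accuracy is the
# share of items matching the answer key; consistency is the share of items
# answered with the same letter on every repeated run of the same model.
def accuracy(answers: list[str], key: list[str]) -> float:
    """Fraction of items where the model's letter matches the key."""
    return sum(a == k for a, k in zip(answers, key)) / len(key)

def consistency(runs: list[list[str]]) -> float:
    """Fraction of items answered identically across all runs."""
    n_items = len(runs[0])
    return sum(len({run[i] for run in runs}) == 1 for i in range(n_items)) / n_items

key = ["A", "C", "B", "D"]                           # hypothetical answer key
runs = [["A", "C", "D", "D"], ["A", "C", "B", "D"]]  # two runs of the same 4 items
print(f"accuracy (run 1): {accuracy(runs[0], key):.2f}")
print(f"consistency across runs: {consistency(runs):.2f}")
```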
J Glaucoma
September 2025
Harvard Medical School, Boston, MA.
Purpose: Large language models (LLMs) can assist patients who seek medical knowledge online to guide their own glaucoma care. Understanding the differences in LLM performance on glaucoma-related questions can inform patients about the best resources to obtain relevant information.
Methods: This cross-sectional study evaluated the accuracy, comprehensiveness, quality, and readability of LLM-generated responses to glaucoma inquiries.
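Of the measures listed, readability has a standard computational definition. As a minimal sketch, assuming the textstat Python package and Flesch-based metrics (the abstract does not name the measures the authors used), scoring one LLM-generated answer might look like this:

```python
# Hypothetical readability scoring of a single LLM-generated answer.
# textstat and the Flesch metrics are illustrative choices, not necessarily
# those used in the study.
import textstat

answer = (
    "Glaucoma is usually managed with daily eye drops that lower the "
    "pressure inside the eye; regular follow-up visits are important."
)

print("Flesch Reading Ease:", textstat.flesch_reading_ease(answer))
print("Flesch-Kincaid grade level:", textstat.flesch_kincaid_grade(answer))
```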
J Craniofac Surg
September 2025
University of Miami Miller School of Medicine, Miami, FL.
The objective was to compare the accuracy of two large language models, GPT-4o and o3-Mini, against medical-student performance on otolaryngology-focused, USMLE-style multiple-choice questions. With permission from AMBOSS, we extracted 146 Step 2 CK questions tagged "Otolaryngology" and stratified them by AMBOSS difficulty (levels 1-5). Each item was presented verbatim to GPT-4o and o3-Mini through their official APIs, and outputs were scored as correct or incorrect.
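A minimal sketch of that kind of pipeline, assuming the OpenAI chat completions API and a made-up placeholder item (the AMBOSS questions and the study's actual prompt wording are not reproduced here), could look like this:

```python
# Minimal sketch, not the study's actual pipeline: present one placeholder
# multiple-choice item to each model and score the returned letter.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_mcq(model: str, stem: str, options: dict[str, str]) -> str:
    """Return the single answer letter the model chooses."""
    prompt = (
        stem + "\n"
        + "\n".join(f"{letter}. {text}" for letter, text in options.items())
        + "\nAnswer with a single letter only."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content.strip()[:1].upper()

item = {  # placeholder item, not an AMBOSS question
    "stem": "Which of the following structures is located in the middle ear?",
    "options": {"A": "Stapes", "B": "Lens", "C": "Patella"},
    "key": "A",
}
for model in ("gpt-4o", "o3-mini"):
    letter = ask_mcq(model, item["stem"], item["options"])
    print(model, letter, "correct" if letter == item["key"] else "incorrect")
```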
Clin Ophthalmol
August 2025
University of Virginia School of Medicine, Charlottesville, VA, USA.
Purpose: Diabetic retinopathy (DR) is a leading cause of vision loss in working-age adults. Despite the importance of early DR detection, only 60% of patients with diabetes receive recommended annual screenings due to limited eye care provider capacity. FDA-approved AI systems were developed to meet the growing demand for DR screening; however, high costs and specialized equipment limit accessibility.