Background: The potential for generative artificial intelligence (GenAI) to assist with clinical tasks is the subject of ongoing debate within biomedical informatics and related fields.
Objective: This study aimed to explore general practitioners' (GPs') opinions about the impact of GenAI on primary care.
Methods: In January 2025, we conducted a web-based survey of 1005 UK GPs' experiences and opinions of GenAI in clinical practice. This study involved a qualitative inductive descriptive analysis of written responses ("comments") to an open-ended question in the survey. After the analysis, the interpretation of themes was also informed by the technology acceptance model.
Results: Of the 1005 respondents, 611 GPs (61%) provided written comments in response to the free-text question, totaling 7990 words. Comments were classified into 3 major themes and 8 subthemes relating to GenAI in clinical practice. The major themes were (1) unfamiliarity, (2) ambivalence and anxiety, and (3) role in clinical tasks. "Unfamiliarity" encompassed a lack of experience and knowledge and the need for training on GenAI. "Ambivalence and anxiety" included mixed expectations among GPs in relation to these tools, beliefs about diminished human connection, and skepticism about AI accountability. Finally, commenting on the role of GenAI in clinical tasks, GPs believed it would help with documentation; however, respondents questioned AI's clinical judgment and raised concerns about operational uncertainty surrounding these tools. Female GPs were more likely to leave comments than male GPs: 53% (324/611) of commenters were female, compared with 41.1% (162/394) of noncommenters, and a chi-square test confirmed this difference (χ²₂=14.6, P=.001). In addition, doctors who left comments were significantly more likely to have used GenAI in clinical practice than those who did not: 71.7% (438/611) of commenters had not used GenAI, whereas 80.7% (318/394) of noncommenters reported no use. A chi-square test confirmed this difference (χ²₁=10.0, P=.002).
Conclusions: This study provides timely insights into UK GPs' perspectives on the role, impact, and limitations of GenAI in primary care. The study has limitations: the qualitative data analyzed originate from a self-selected subset of respondents who chose to provide free-text comments, and these participants were more likely to have used GenAI tools in clinical practice. Nevertheless, the substantial number of comments offers valuable insight into the diverse views GPs hold about GenAI. Although the majority of respondents reported limited experience and training with these tools, many perceived potential benefits of GenAI and ambient AI for documentation. Notably, 2 years after the widespread introduction of GenAI, GPs' persistent lack of understanding and training remains a critical concern. More extensive qualitative work would provide a more in-depth understanding of GPs' views.
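The GenAI-use comparison in the Results can be checked from the reported counts. The snippet below is a minimal sketch (not from the paper), assuming the 2×2 table implied by the abstract: 173 of 611 commenters and 76 of 394 noncommenters had used GenAI (the complements of 438 and 318). scipy's chi2_contingency applies the Yates continuity correction to 2×2 tables by default, which yields a statistic close to the reported χ²₁=10.0.

```python
from scipy.stats import chi2_contingency

# 2x2 table reconstructed from the abstract's reported counts
# (an assumption about how the original analysis was tabulated):
#                 used GenAI   not used
# commenters          173         438    (611 total)
# noncommenters        76         318    (394 total)
table = [[173, 438],
         [76, 318]]

chi2, p, dof, expected = chi2_contingency(table)  # Yates correction by default
print(f"chi2 = {chi2:.1f}, df = {dof}, P = {p:.3f}")
# Expected output is close to the reported chi-square of 10.0 with df=1, P=.002.
```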
Full text: PMC http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12327960 | DOI http://dx.doi.org/10.2196/74428
JAMIA Open
October 2025
Division of Pulmonary and Critical Care, Brigham and Women's Hospital, Boston, MA, United States.
Objectives: Unstructured data, such as procedure notes, contain valuable medical information that is frequently underutilized due to the labor-intensive nature of data extraction. This study aims to develop a generative artificial intelligence (GenAI) pipeline using an open-source Large Language Model (LLM) with built-in guardrails and a retry mechanism to extract data from unstructured right heart catheterization (RHC) notes while minimizing errors, including hallucinations.
Materials And Methods: A total of 220 RHC notes were randomly selected for pipeline development and 200 for validation from the Pulmonary Vascular Disease Registry.
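The abstract describes a guardrail-plus-retry design but gives no implementation details. Below is a minimal sketch of how such an extraction pipeline could be structured; the field names, prompt wording, and the call_llm placeholder are assumptions for illustration, not the authors' code.

```python
import json
from typing import Optional

# Hypothetical field set for RHC note extraction; the registry's actual
# fields are not specified in the abstract.
REQUIRED_FIELDS = {"mean_pa_pressure_mmHg", "pcwp_mmHg", "cardiac_output_L_min"}

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a locally hosted open-source LLM.
    Replace with whatever inference client the pipeline actually uses."""
    raise NotImplementedError

def validate(payload: dict) -> bool:
    """Guardrail: accept only JSON objects containing every required field
    with a numeric value or an explicit null."""
    if not REQUIRED_FIELDS.issubset(payload):
        return False
    return all(isinstance(payload[k], (int, float)) or payload[k] is None
               for k in REQUIRED_FIELDS)

def extract_rhc_fields(note_text: str, max_retries: int = 3) -> Optional[dict]:
    """Prompt the model, validate the output, and retry on malformed or
    incomplete responses; give up rather than guess after max_retries."""
    prompt = (
        "Extract the following fields from this right heart catheterization "
        f"note as JSON with keys {sorted(REQUIRED_FIELDS)}. "
        "Use null if a value is not stated.\n\n" + note_text
    )
    for _ in range(max_retries):
        raw = call_llm(prompt)
        try:
            payload = json.loads(raw)
        except json.JSONDecodeError:
            continue  # non-JSON output: retry
        if validate(payload):
            return payload
    return None  # flag the note for manual review instead of returning unvalidated output
```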
J Chem Inf Model
September 2025
Psivant Therapeutics, 451 D Street, Boston, Massachusetts 02210, United States.
Generative modeling with artificial intelligence (GenAI) offers an emerging approach to discovering novel, efficacious, and safe drugs by enabling the systematic exploration of chemical space and the design of molecules that are synthesizable while also having desirable drug properties. However, despite rapid progress in other industries, GenAI has yet to demonstrate clear and consistent value in prospective drug discovery applications. In this Perspective, we argue that the ultimate goal of generative chemistry is not just to generate "new" or "interesting" molecules, but to generate "beautiful" molecules: those that are therapeutically aligned with the program objectives and bring value beyond traditional approaches.
Diagnostics (Basel)
August 2025
Department of Otolaryngology-Head and Neck Surgery, Chang Gung Memorial Hospital, Keelung 20401, Taiwan.
Generative AI (GenAI) models like ChatGPT have gained significant attention in recent years for their potential applications in healthcare. This study evaluates the concordance of responses generated by ChatGPT (versions 3.5 and 4).
Eur J Investig Health Psychol Educ
August 2025
Unidad de Remisión de Diabetes Mellitus (URDM), Facultad de Estudios Superiores-Iztacala, Universidad Nacional Autónoma de México, Tlalnepantla 54090, Mexico.
DIALOGUE (DIagnostic AI Learning through Objective Guided User Experience) is a generative artificial intelligence (GenAI)-based training program designed to enhance diagnostic communication skills in medical students. In this single-arm pre-post study, we evaluated whether DIALOGUE could improve students' ability to disclose a type 2 diabetes mellitus (T2DM) diagnosis with clarity, structure, and empathy. Thirty clinical-phase students completed two pre-test virtual encounters with an AI-simulated patient (ChatGPT, GPT-4o), scored by blinded raters using an eight-domain rubric.
Omega (Westport)
August 2025
Department of Psychology, Kinneret Academic College, Tzemach, Israel.
This article explores the integration of generative artificial intelligence (GenAI) into the field of thanatology, with particular attention to its ethical, societal, and clinical implications. In recent years, GenAI technologies have increasingly been applied in mental-health settings, including the use of chatbots and so-called "deathbots" - digital simulations of deceased individuals that some mourners engage with to sustain a sense of connection and emotional comfort. While such technologies may offer therapeutic potential, they also raise significant ethical and practical challenges.