We evaluate the performance of multiple text classification methods used to automate the screening of article abstracts in terms of their relevance to a topic of interest. The aim is to develop a system that can be first trained on a set of manually screened article abstracts before using it to identify additional articles on the same topic. Here the focus is on articles related to the topic "artificial intelligence in nursing". Eight text classification methods are tested, as well as two simple ensemble systems. The results indicate that it is feasible to use text classification technology to support the manual screening process of article abstracts when conducting a literature review. The best results are achieved by an ensemble system, which achieves an F1-score of 0.41, with a sensitivity of 0.54 and a specificity of 0.96. Future work directions are discussed.
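The metrics reported above (F1-score, sensitivity, specificity) can all be derived from a binary confusion matrix. The sketch below shows those standard definitions; the counts in the usage note are illustrative and are not taken from the study.

```python
def classification_metrics(tp: int, fp: int, tn: int, fn: int):
    """Compute sensitivity (recall), specificity, and F1-score
    from binary confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1


# Hypothetical counts for illustration only:
# 5 relevant abstracts found, 5 missed, 5 false alarms, 95 correctly excluded.
sens, spec, f1 = classification_metrics(tp=5, fp=5, tn=95, fn=5)
```

With these example counts the function returns a sensitivity of 0.5, a specificity of 0.95, and an F1-score of 0.5; in an imbalanced screening task like this one, a high specificity can coexist with a much lower F1-score, as the abstract's figures illustrate.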
DOI: http://dx.doi.org/10.3233/SHTI220155
J Am Coll Emerg Physicians Open
October 2025
Department of Emergency Medicine, University of Michigan, Ann Arbor, Michigan, USA.
Objectives: We assessed time to provider (TTP) for patients with a non-English language preference (NELP) compared to patients with an English language preference (ELP) in the emergency department (ED).
Methods: We conducted a retrospective cohort study of adults presenting between 2019 and 2023 to a large urban ED. We used a 2-step classification that first identified NELP from patients' reported language at registration, followed by identification in the narrative text of the triage note.
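The 2-step classification described here can be sketched as a structured-field check followed by a free-text fallback. The keyword pattern and function below are hypothetical illustrations; the study's actual classifier and terms are not described in this abstract.

```python
import re

# Hypothetical keyword pattern for the triage-note step.
NELP_NOTE_PATTERN = re.compile(
    r"\b(interpreter|translator|non[- ]english|language barrier)\b",
    re.IGNORECASE,
)


def classify_nelp(registration_language: str, triage_note: str) -> bool:
    """Two-step NELP check: structured registration field first,
    then the narrative text of the triage note."""
    # Step 1: patient-reported language at registration.
    if registration_language.strip().lower() not in ("english", "en"):
        return True
    # Step 2: scan the free-text triage note for language-related cues.
    return bool(NELP_NOTE_PATTERN.search(triage_note))


classify_nelp("Spanish", "")                              # NELP via step 1
classify_nelp("English", "Spanish interpreter requested") # NELP via step 2
classify_nelp("English", "Chest pain, onset 2 hours")     # ELP
```

The two-step design matters because registration fields are often defaulted to English, so the note text recovers cases the structured data misses.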
Proc Mach Learn Res
November 2024
Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, as demonstrated by the remarkable success of large models in computer vision (CV) and natural language processing (NLP). However, progress in the graph domain remains limited due to fundamental challenges represented by feature heterogeneity and structural heterogeneity. Recent efforts have been made to address feature heterogeneity via Large Language Models (LLMs) on text-attributed graphs (TAGs) by generating fixed-length text representations as node features.
Health Inf Sci Syst
December 2025
Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000 China.
Leveraging natural language processing to identify anxiety states from social media has been widely studied. However, existing research lacks deep user-level semantic modeling and effective anxiety feature extraction. Additionally, the absence of clinical domain knowledge in current models limits their interpretability and medical relevance.
Pest Manag Sci
September 2025
AgResearch Ltd, Tuhiraki, Lincoln, New Zealand.
Background: Conventional weed risk assessments (WRAs) are time-consuming and often constrained by species-specific data gaps. We present a validated, algorithmic alternative that integrates climatic suitability, weed-related publication frequency (P), and global occurrence data, using publicly available databases and artificial intelligence (AI)-assisted text screening with a large language model (LLM).
Results: The model was tested against independent weed hazard classifications for New Zealand and California.
J Obstet Gynecol Neonatal Nurs
September 2025
Objective: To examine the association between patient disability status and use of stigmatizing language in clinical notes from the hospital admission for birth.
Design: Cross-sectional study of electronic health record data.
Setting: Two urban hospitals in the northeastern United States.