Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

We evaluate the performance of multiple text classification methods used to automate the screening of article abstracts in terms of their relevance to a topic of interest. The aim is to develop a system that can first be trained on a set of manually screened article abstracts and then used to identify additional articles on the same topic. Here the focus is on articles related to the topic "artificial intelligence in nursing". Eight text classification methods are tested, as well as two simple ensemble systems. The results indicate that it is feasible to use text classification technology to support the manual screening process of article abstracts when conducting a literature review. The best results come from an ensemble system, which reaches an F1-score of 0.41, with a sensitivity of 0.54 and a specificity of 0.96. Future work directions are discussed.
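As a sketch of how the reported screening metrics relate to confusion-matrix counts, the helper below computes sensitivity, specificity, and F1 from raw counts. The counts in the usage example are hypothetical, chosen only to roughly reproduce the values reported in the abstract:

```python
def screening_metrics(tp, fp, fn, tn):
    """Compute sensitivity, specificity, and F1 for a binary abstract screen.

    tp/fp/fn/tn are confusion-matrix counts: relevant abstracts correctly
    kept, irrelevant kept, relevant missed, and irrelevant correctly excluded.
    """
    sensitivity = tp / (tp + fn)          # recall on relevant abstracts
    specificity = tn / (tn + fp)          # how well irrelevant ones are excluded
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1


# Hypothetical counts approximating the reported results
# (sensitivity 0.54, specificity 0.96, F1 0.41):
sens, spec, f1 = screening_metrics(tp=54, fp=109, fn=46, tn=2891)
```

Note that F1 combines sensitivity with precision, so unlike sensitivity and specificity it depends on how rare relevant abstracts are in the screened set; in a typical literature screen the positive class is small, which is why an F1 of 0.41 can coexist with a specificity of 0.96.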


Source: http://dx.doi.org/10.3233/SHTI220155

Publication Analysis

Top Keywords

text classification: 12
article abstracts: 12
classification methods: 8
articles topic: 8
automated screening: 4
screening literature: 4
literature artificial: 4
artificial intelligence: 4
intelligence nursing: 4
nursing evaluate: 4

Similar Publications

Objectives: We assessed time to provider (TTP) for patients with a non-English language preference (NELP) compared to patients with an English language preference (ELP) in the emergency department (ED).

Methods: We conducted a retrospective cohort study of adults presenting between 2019 and 2023 to a large urban ED. We used a 2-step classification that first identified NELP from patients' reported language at registration, followed by identification in the narrative text of the triage note.
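The 2-step classification described in the Methods could be sketched as follows. The language codes, field names, and triage-note phrases below are assumptions for illustration, not the study's actual criteria:

```python
import re

# Hypothetical non-English language codes from the registration record.
NON_ENGLISH_CODES = {"es", "zh", "vi", "ar", "ru"}

# Hypothetical phrases suggesting NELP in the free-text triage note.
NELP_PATTERNS = re.compile(
    r"\b(interpreter|non-english speaking|spanish[- ]speaking)\b",
    re.IGNORECASE,
)


def is_nelp(registration_language, triage_note):
    """Two-step screen: check the structured registration language first,
    then fall back to pattern matching on the triage-note narrative."""
    if registration_language and registration_language.lower() in NON_ENGLISH_CODES:
        return True
    return bool(NELP_PATTERNS.search(triage_note))
```

The design mirrors the paper's ordering: the cheap structured field is checked first, and the noisier free-text step only recovers patients the registration data missed.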


Pretraining plays a pivotal role in acquiring generalized knowledge from large-scale data, achieving remarkable successes as evidenced by large models in CV and NLP. However, progress in the graph domain remains limited due to fundamental challenges represented by feature heterogeneity and structural heterogeneity. Recent efforts have been made to address feature heterogeneity via Large Language Models (LLMs) on text-attributed graphs (TAGs) by generating fixed-length text representations as node features.


Integrating clinical anxiety scales with pre-trained language models for anxiety recognition on social media.

Health Inf Sci Syst

December 2025

Gansu Provincial Key Laboratory of Wearable Computing, School of Information Science and Engineering, Lanzhou University, Lanzhou, 730000 China.

Leveraging natural language processing to identify anxiety states from social media has been widely studied. However, existing research lacks deep user-level semantic modeling and effective anxiety feature extraction. Additionally, the absence of clinical domain knowledge in current models limits their interpretability and medical relevance.


Background: Conventional weed risk assessments (WRAs) are time-consuming and often constrained by species-specific data gaps. We present a validated, algorithmic alternative that integrates climatic suitability, weed-related publication frequency (P), and global occurrence data, using publicly available databases and artificial intelligence (AI)-assisted text screening with a large language model (LLM).

Results: The model was tested against independent weed hazard classifications for New Zealand and California.
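The abstract does not specify how the three signals are combined, so the sketch below is purely hypothetical: it normalizes publication frequency and occurrence counts to [0, 1] and takes a geometric mean with climatic suitability, so that a species scoring zero on any signal scores zero overall. The cap values are invented for illustration:

```python
def weed_risk_score(climate_suit, pub_freq, occurrences,
                    max_pub=1000, max_occ=10000):
    """Hypothetical combination of the three signals named in the abstract.

    climate_suit is assumed to already lie in [0, 1]; publication frequency
    and occurrence counts are capped and rescaled before a geometric mean.
    """
    p = min(pub_freq / max_pub, 1.0)
    o = min(occurrences / max_occ, 1.0)
    return (climate_suit * p * o) ** (1.0 / 3.0)
```

A geometric mean is one plausible choice here because it penalizes species with strong evidence on only one axis; the actual model's aggregation may differ entirely.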


Objective: To examine the association between patient disability status and use of stigmatizing language in clinical notes from the hospital admission for birth.

Design: Cross-sectional study of electronic health record data.

Setting: Two urban hospitals in the northeastern United States.
