Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Objectives: To differentiate invasive lepidic predominant adenocarcinoma (iLPA) from adenocarcinoma in situ (AIS)/minimally invasive adenocarcinoma (MIA) of the lung using visual semantic and computer-aided detection (CAD)-based texture features in patients initially diagnosed with AIS or MIA on CT-guided biopsy.

Materials And Methods: From 2011 to 2017, all patients with CT-guided biopsy results of AIS or MIA who subsequently underwent resection were identified. The CT scan obtained before biopsy was used to assess visual semantic and CAD texture features, totaling 23 semantic and 95 CAD-based quantitative texture variables. The least absolute shrinkage and selection operator (LASSO) method or forward selection was used to select the single most predictive feature and the best combination of semantic and texture features for detecting invasive lung adenocarcinoma.
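For readers unfamiliar with the selection step, the sketch below shows a minimal LASSO-style (L1-penalized) selection for a binary invasive-versus-AIS/MIA outcome. The feature table, file name, and column names are hypothetical; this is an illustration of the general technique, not the authors' analysis code.

```python
# Minimal sketch of LASSO-style feature selection for a binary outcome
# (invasive LPA vs. AIS/MIA). The CSV file and column names are hypothetical;
# the study's actual 23 semantic + 95 CAD texture variables are not shown here.
import pandas as pd
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("nodule_features.csv")      # hypothetical feature table
X = df.drop(columns=["invasive"])            # semantic + texture variables
y = df["invasive"]                           # 1 = invasive LPA, 0 = AIS/MIA

# L1-penalized logistic regression with cross-validated penalty strength;
# coefficients shrunk exactly to zero drop their features from the model.
model = make_pipeline(
    StandardScaler(),
    LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5, scoring="roc_auc"),
)
model.fit(X, y)

coef = model.named_steps["logisticregressioncv"].coef_.ravel()
selected = X.columns[coef != 0]
print("Selected features:", list(selected))
```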

Results: Among the 33 core needle-biopsied patients with AIS/MIA pathology, 24 (72.7%) had invasive LPA and 9 (27.3%) had AIS/MIA on resection. On CT, the visual semantic features included 21 (63.6%) part-solid, 5 (15.2%) pure ground-glass, and 7 (21.2%) solid nodules. LASSO selected seven variables for the model, but none was statistically significant. When the correlation between independent variables was assessed, the backward selection technique identified "volume" as statistically significant, and LASSO selected "tumor_Perc95", "nodule surround", "small cyst-like spaces", and "volume".

Conclusions: Lung biopsy results showing noninvasive LPA underestimate invasiveness. Although statistically non-significant, some semantic features showed potential for predicting invasiveness: septal stretching was absent in all noninvasive cases, and solid consistency was present in a substantial proportion of invasive cases.


Source
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11503399 (PMC)
http://dx.doi.org/10.3390/medsci12040057 (DOI)

Publication Analysis

Top Keywords

visual semantic (16)
texture features (12)
predicting invasiveness (8)
ais mia (8)
semantic features (8)
lasso selected (8)
assessing correlation (8)
correlation independent (8)
semantic (7)
features (6)

Similar Publications

Prior research on global-local processing has focused on hierarchical objects in the visual modality, whereas the real world involves multisensory interactions. The present study investigated whether the simultaneous presentation of auditory stimuli influences the recognition of visually hierarchical objects. We added four types of auditory stimuli to the traditional visual hierarchical-letters paradigm: no sound (visual-only), a pure tone, a spoken letter that was congruent with the required response (response-congruent), or a spoken letter that was incongruent with it (response-incongruent).

View Article and Find Full Text PDF

Introduction: Accurate identification of cherry maturity and precise detection of harvestable cherry contours are essential for the development of cherry-picking robots. However, occlusion, lighting variation, and blurriness in natural orchard environments present significant challenges for real-time semantic segmentation.

Methods: To address these issues, we propose a machine vision approach based on the PIDNet real-time semantic segmentation framework.
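The paper's PIDNet pipeline is not reproduced here; as a rough illustration of the general real-time semantic-segmentation inference workflow it describes, the sketch below runs a stock torchvision model (a stand-in, not PIDNet) on a single hypothetical orchard frame.

```python
# Rough illustration of semantic-segmentation inference. DeepLabV3 from
# torchvision is used as a stand-in; the paper's own model is PIDNet,
# whose weights and API are not reproduced here.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("orchard_frame.jpg")        # hypothetical input frame
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"]             # (1, num_classes, H, W)
mask = logits.argmax(dim=1).squeeze(0)       # per-pixel class labels
print(mask.shape, mask.unique())
```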

View Article and Find Full Text PDF

Generalized visual grounding tasks, including Generalized Referring Expression Comprehension (GREC) and Segmentation (GRES), extend the classical visual grounding paradigm by accommodating multi-target and non-target scenarios. Specifically, GREC focuses on accurately identifying all referential objects at the coarse bounding-box level, while GRES aims to achieve fine-grained pixel-level perception. However, existing approaches typically treat these tasks independently, overlooking the benefits of jointly training GREC and GRES to ensure consistent multi-granularity predictions and streamline the overall process.

View Article and Find Full Text PDF

Brain Tumor Segmentation (BTS) is crucial for accurate diagnosis and treatment planning, but existing CNN and Transformer-based methods often struggle with feature fusion and limited training data. While recent large-scale vision models like Segment Anything Model (SAM) and CLIP offer potential, SAM is trained on natural images, lacking medical domain knowledge, and its decoder struggles with accurate tumor segmentation. To address these challenges, we propose the Medical SAM-Clip Grafting Network (MSCG), which introduces a novel SC-grafting module.

View Article and Find Full Text PDF

Algorithms of emotion: A hybrid NLP analysis of neurodivergent Reddit communities.

Acta Psychol (Amst)

September 2025

Management Department, Faculty of Economics, Administrative, and Social Sciences, Alanya University, 07400, Alanya, Antalya, Turkiye. Electronic address:

Online communities such as Reddit offer neurodivergent individuals a unique space to express emotions, seek psychosocial support, and negotiate identity outside conventional social constraints. Understanding how these communities articulate and structure emotional discourse is essential for inclusive technology design. This study employed a hybrid natural language processing (NLP) framework that integrates lexicon-based sentiment analysis (VADER) with transformer-based topic modeling (BERTopic).
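As a rough sketch of how such a hybrid pipeline can be wired together, the example below combines VADER sentiment scores with BERTopic topics. The post corpus, file name, and column names are hypothetical placeholders; this is not the study's data or code.

```python
# Minimal sketch of a hybrid NLP pipeline: lexicon-based sentiment (VADER)
# plus transformer-based topic modeling (BERTopic). The CSV file and its
# "text" column are hypothetical; this is not the study's actual code.
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from bertopic import BERTopic

df = pd.read_csv("reddit_posts.csv")         # hypothetical export of posts
posts = df["text"].astype(str).tolist()

# Post-level sentiment via the VADER lexicon (compound score in [-1, 1]).
analyzer = SentimentIntensityAnalyzer()
df["sentiment"] = [analyzer.polarity_scores(p)["compound"] for p in posts]

# Unsupervised topics from transformer embeddings.
topic_model = BERTopic(language="english")
df["topic"], _ = topic_model.fit_transform(posts)

# Average sentiment per discovered topic, loosely linking emotion to theme.
print(df.groupby("topic")["sentiment"].mean().sort_values())
print(topic_model.get_topic_info().head())
```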

View Article and Find Full Text PDF