Unified cross-modality integration and analysis of T cell receptors and T cell transcriptomes by low-resource-aware representation learning.

Cell Genom

Key Laboratory of Spine and Spinal Cord Injury Repair and Regeneration (Tongji University), Ministry of Education, Tongji Hospital, School of Medicine, Frontier Science Center for Stem Cell Research, Bioinformatics Department, School of Life Sciences and Technology, Tongji University, Shanghai 20009

Published: May 2024



Article Abstract

Single-cell RNA sequencing (scRNA-seq) and T cell receptor sequencing (TCR-seq) are pivotal for investigating T cell heterogeneity. Integrating these modalities is expected to uncover immunological insights that would go unnoticed with a single modality, but it faces computational challenges due to the low-resource characteristics of the multimodal data. Herein, we present UniTCR, a novel low-resource-aware multimodal representation learning framework designed for unified cross-modality integration, enabling comprehensive T cell analysis. By combining a dual-modality contrastive learning module with a single-modality preservation module to effectively embed each modality into a common latent space, UniTCR demonstrates versatility in connecting TCR sequences with T cell transcriptomes across various tasks, including single-modality analysis, modality gap analysis, epitope-TCR binding prediction, and TCR profile cross-modality generation, in a low-resource-aware way. Extensive evaluations conducted on multiple paired scRNA-seq/TCR-seq datasets demonstrated UniTCR's superior performance and its ability to explore the complexity of the immune system.
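The dual-modality contrastive learning module described above aligns paired TCR and transcriptome embeddings in a shared latent space. As a rough illustration of that idea (not the authors' implementation; the function names, temperature value, and symmetric InfoNCE formulation are assumptions), a contrastive objective over paired embeddings might look like:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere so dot products are cosines."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def info_nce_loss(z_tcr, z_rna, temperature=0.1):
    """Symmetric InfoNCE loss: the i-th TCR embedding and the i-th
    transcriptome embedding (same cell) are treated as a positive pair;
    all other pairings in the batch act as negatives."""
    z_tcr = l2_normalize(z_tcr)
    z_rna = l2_normalize(z_rna)
    logits = z_tcr @ z_rna.T / temperature   # pairwise cosine similarities
    n = logits.shape[0]

    def xent_diagonal(lg):
        # cross-entropy where the correct "class" for row i is column i
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[np.arange(n), np.arange(n)].mean()

    # average the TCR->RNA and RNA->TCR directions
    return 0.5 * (xent_diagonal(logits) + xent_diagonal(logits.T))

# toy usage: correctly paired embeddings score a lower loss than shuffled ones
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
print(info_nce_loss(z, z) < info_nce_loss(z, z[::-1]))
```

In the paper's framing, this alignment term would be trained jointly with the single-modality preservation module (e.g. a per-modality reconstruction loss) so that the shared space does not discard modality-specific information.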


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11099349
DOI: http://dx.doi.org/10.1016/j.xgen.2024.100553

Publication Analysis

Top Keywords: unified cross-modality (8); cross-modality integration (8); t cell transcriptomes (8); representation learning (8); t cell (6); analysis (4); integration analysis (4); analysis t cell (4); t cell receptors (4); receptors t cell (4)

Similar Publications

Genomic language models (gLMs) face a fundamental efficiency challenge: either maintain separate specialized models for each biological modality (DNA and RNA) or develop large multi-modal architectures. Both approaches impose significant computational burdens: modality-specific models require redundant infrastructure despite inherent biological connections, while multi-modal architectures demand massive parameter counts and extensive cross-modality pretraining. To address this limitation, we introduce CodonMoE (Adaptive Mixture of Codon Reformative Experts), a lightweight adapter that transforms DNA language models into effective RNA analyzers without RNA-specific pretraining.


The performance of a well-trained segmentation model is often trapped by domain shift caused by acquisition variance. Existing efforts are devoted to expanding the diversity of single-source samples, as well as learning domain-invariant representations. Essentially, they are still modeling the statistical dependence between sample-label pairs to achieve a superficial portrayal of reality.


FPM-RNet: Fused Photoacoustic and operating Microscopic imaging with cross-modality Representation and Registration Network.

Med Image Anal

October 2025

Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China; Shanghai Key Laboratory of Flexible Medical Robotics, Tongren Hospital, Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, 200240, China. Electronic address:

Robot-assisted microsurgery is a promising technique for a number of clinical specialties including neurosurgery. One of the prerequisites of such procedures is accurate vision guidance, delineating not only the exposed surface details but also embedded microvasculature. Conventional microscopic cameras used for vascular imaging are susceptible to specular reflections and changes in ambient light with low tissue resolution and contrast.


Unsupervised learning visible-infrared person re-identification (USL-VI-ReID) offers a more flexible and cost-effective alternative compared to supervised methods. This field has gained increasing attention due to its promising potential. Existing methods simply cluster modality-specific samples and employ strong association techniques to achieve instance-to-cluster or cluster-to-cluster cross-modality associations.


Background: Medical image translation has become an essential tool in modern radiotherapy, providing complementary information for target delineation and dose calculation. However, current approaches are constrained by their modality-specific nature, requiring separate model training for each pair of imaging modalities. This limitation hinders the efficient deployment of comprehensive multimodal solutions in clinical practice.
