Validation studies are often used to obtain more reliable information in settings with error-prone data. Validated data on a subsample of subjects can be used together with error-prone data on all subjects to improve estimation. In practice, more than one round of data validation may be required, and direct application of standard approaches for combining validation data into analyses may lead to inefficient estimators since the information available from intermediate validation steps is only partially considered or even completely ignored. In this paper, we present two novel extensions of multiple imputation and generalized raking estimators that make full use of all available data. We show through simulations that incorporating information from intermediate steps can lead to substantial gains in efficiency. This work is motivated by and illustrated in a study of contraceptive effectiveness among 83 671 women living with HIV, whose data were originally extracted from electronic medical records, of whom 4732 had their charts reviewed, and a subsequent 1210 also had a telephone interview to validate key study variables.
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10842111 | PMC
http://dx.doi.org/10.1002/sim.9967 | DOI Listing
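The abstract above describes combining a small validated subsample with error-prone data on the full cohort. As a minimal illustration of the general idea only (not the authors' multiple-imputation or generalized-raking extensions), the sketch below uses a simple difference/calibration estimator: the full-sample mean of the error-prone variable is corrected by the measurement-error bias estimated on the validated subsample. All variable names and the simulated error model are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_val = 10_000, 500

x = rng.normal(2.0, 1.0, size=n)           # true value, known only on the validated subsample
x_err = x + rng.normal(0.3, 0.5, size=n)   # error-prone version, observed for everyone
val = rng.choice(n, size=n_val, replace=False)  # indices of the validation subsample

naive = x_err.mean()                       # biased by the additive measurement error
# Difference (calibration-style) estimator: start from the full-sample
# proxy mean and subtract the bias estimated on the validated subsample.
corrected = x_err.mean() - (x_err[val] - x[val]).mean()
```

With these settings the naive mean sits near 2.3 while the corrected estimate recovers the true mean of 2.0; the paper's point is that when validation happens in several rounds, estimators should use the intermediate rounds too, not only the final one.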
Proc Natl Acad Sci U S A
September 2025
Cancer Research Center of Marseille, Team DNA Damage and Genome Instability, CNRS, Inserm, Institut Paoli-Calmettes, Aix Marseille Université, Marseille 13009, France.
Upon encountering an unrepaired DNA lesion, replication is halted and can restart downstream of the lesion, leaving a single-stranded DNA (ssDNA) gap. To complete replication, this ssDNA gap is filled in by one of two lesion tolerance pathways: error-prone Translesion Synthesis (TLS) or error-free Homology Directed Gap Repair (HDGR). In the present work, we demonstrate a role for the RecBC complex distinct from its canonical function in homologous recombination at DNA double-strand breaks.
Int J Surg
September 2025
Department of Human Structure and Repair, Ghent University Faculty of Medicine, Belgium.
Background: Staging laparoscopy (SL) is an essential procedure for peritoneal metastasis (PM) detection. Although surgeons are expected to differentiate between benign and malignant lesions intraoperatively, this task remains difficult and error-prone. The aim of this study was to develop a novel multimodal machine learning (MML) model to differentiate PM from benign lesions by integrating morphologic characteristics with intraoperative SL images.
Comput Biol Med
September 2025
Department of Neurosurgery, Asan Medical Center, University of Ulsan College of Medicine, Seoul, South Korea.
Intracranial aneurysms (IAs) are common vascular pathologies with a risk of fatal rupture. Human assessment of rupture risk is error-prone, and treatment decisions for unruptured IAs often rely on expert opinion and institutional policy. Therefore, we aimed to develop a computer-assisted aneurysm rupture prediction framework to help guide the decision-making process and create future decision criteria.
Sci Adv
September 2025
Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame, IN 46556, USA.
Image-based modeling is essential for understanding cardiovascular hemodynamics and advancing the diagnosis and treatment of cardiovascular diseases. Constructing patient-specific vascular models remains labor-intensive, error-prone, and time-consuming, limiting their clinical applications. This study introduces a deep-learning framework that automates the creation of simulation-ready vascular models from medical images.
J Proteome Res
September 2025
Institut de Pharmacologie et de Biologie Structurale (IPBS), CNRS, Université de Toulouse (UT), Toulouse 31077, France.
Mass spectrometry (MS)-based proteomics data analysis comprises many stages, from quality control, data cleaning, and normalization to statistical and functional analysis, along with multiple visualization steps. All of these need to be reported alongside published results to make them fully understandable and reusable by the community. Although this seems straightforward, exhaustively reporting all aspects of an analysis workflow can be tedious and error-prone.