Resampling-based tests, which typically rely on permutation or bootstrap procedures, are widely used for statistical hypothesis testing when the asymptotic distribution of the test statistic is unavailable or unreliable. Assessing significance requires repeated calculation of the test statistic on a large number of simulated data sets, which can become very computationally intensive. Here, we propose an efficient p-value evaluation procedure that adapts the stochastic approximation Markov chain Monte Carlo algorithm. The new procedure can easily be used to estimate the p-value of any resampling-based test. We show through numerical simulations that the proposed procedure can be 100 to 500,000 times as efficient (in terms of computing time) as the standard resampling-based procedure when evaluating a test statistic with a small p-value (e.g. less than 10⁻⁶). With the computational burden reduced in this way, the versatile resampling-based test would become computationally feasible for a much wider range of applications. We demonstrate the new method by applying it to a large-scale genetic association study of prostate cancer.
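The abstract contrasts the proposed procedure with the standard resampling approach, in which the test statistic is recomputed on many simulated data sets and the p-value is the fraction of resampled statistics at least as extreme as the observed one. The sketch below is a minimal illustration of that baseline, not code from the paper; the two-sample mean-difference statistic and all names are assumed for the example. It also shows why small p-values are expensive: with B resamples the estimate cannot resolve p-values much below 1/B, so targets like 10⁻⁶ require millions of recomputations, which is the cost the proposed procedure is designed to avoid.

```python
import numpy as np

def permutation_p_value(x, y, n_perm=100_000, rng=None):
    """Brute-force permutation p-value for a two-sample mean-difference statistic.

    This is the standard resampling-based procedure referred to in the abstract:
    the statistic is recomputed on many permuted data sets, and the p-value is
    the fraction of permuted statistics at least as extreme as the observed one.
    """
    rng = np.random.default_rng(rng)
    pooled = np.concatenate([x, y])
    n_x = len(x)
    observed = abs(np.mean(x) - np.mean(y))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # permute group labels by shuffling the pooled sample
        stat = abs(pooled[:n_x].mean() - pooled[n_x:].mean())
        hits += stat >= observed
    # add-one correction keeps the estimate strictly positive
    return (hits + 1) / (n_perm + 1)
```

With n_perm resamples, the smallest resolvable p-value is roughly 1 / n_perm, so estimating a p-value below 10⁻⁶ this way needs on the order of 10⁷ or more permutations per test.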
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3114653 | PMC
http://dx.doi.org/10.1093/biostatistics/kxq078 | DOI Listing
BMC Med Res Methodol
August 2024
Department of Biostatistics & Data Science, University of Texas Medical Branch at Galveston (UTMB), Galveston, TX, USA.
Background: Accurate prediction of subject recruitment, which is critical to the success of a study, remains an ongoing challenge. Previous prediction models often rely on parametric assumptions that are not always met or that may be difficult to implement. We aim to develop a novel method that is less sensitive to model assumptions and relatively easy to implement.
Nat Hum Behav
October 2024
Department of Biomedical Engineering, Yale University, New Haven, CT, USA.
Brain-phenotype predictive models seek to identify reproducible and generalizable brain-phenotype associations. External validation, or the evaluation of a model in external datasets, is the gold standard in evaluating the generalizability of models in neuroimaging. Unlike typical studies, external validation involves two sample sizes: the training and the external sample sizes.
Biom J
July 2024
Department of Biostatistics, Institute of Cell Biology and Biophysics, Leibniz University Hannover, Hannover, Germany.
In biomedical research, the simultaneous inference of multiple binary endpoints may be of interest. In such cases, an appropriate multiplicity adjustment is required that controls the family-wise error rate, which represents the probability of making at least one incorrect test decision. In this paper, we investigate two approaches that perform single-step p-value adjustments that also take into account the possible correlation between endpoints.
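The snippet does not say which two adjustment approaches the article investigates. Purely as an illustration of a single-step adjustment that accounts for correlation between endpoints, here is a hedged sketch of a resampling-based max-T procedure in the spirit of Westfall and Young; the function name, input layout, and use of absolute statistics are assumptions, not details from the article.

```python
import numpy as np

def single_step_max_t(t_obs, t_null):
    """Single-step max-T adjustment from resampled test statistics.

    t_obs  : array of shape (m,)   -- observed statistics, one per endpoint
    t_null : array of shape (B, m) -- statistics recomputed on B resampled
             (e.g. permuted) data sets, preserving the correlation between endpoints

    Each observed statistic is referred to the resampling distribution of the
    maximum absolute statistic across endpoints, which controls the
    family-wise error rate while exploiting the dependence structure.
    """
    t_obs = np.abs(np.asarray(t_obs))
    max_null = np.abs(np.asarray(t_null)).max(axis=1)  # maximum statistic per resample, shape (B,)
    # adjusted p-value for endpoint j: fraction of resamples whose maximum
    # statistic is at least as large as the observed statistic for endpoint j
    return (max_null[:, None] >= t_obs[None, :]).mean(axis=0)
```

Because the null statistics are recomputed jointly on each resampled data set, the correlation between endpoints is carried into the reference distribution, typically making the adjustment less conservative than Bonferroni when endpoints are positively correlated.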
J Med Internet Res
May 2024
Department of Child Healthcare, Children's Hospital Affiliated to Zhengzhou University, Zhengzhou, China.
Background: Attention-deficit/hyperactivity disorder (ADHD) is one of the most common neurodevelopmental disorders among children. Pharmacotherapy has been the primary treatment for ADHD, supplemented by behavioral interventions. Digital and exercise interventions are promising nonpharmacologic approaches for enhancing the physical and psychological health of children with ADHD.
Trials
May 2024
Department of Biostatistics and Informatics, Colorado School of Public Health, University of Colorado Anschutz Medical Campus, Aurora, CO, USA.
Background: Clinical trials often involve some form of interim monitoring to determine futility before planned trial completion. While many options for interim monitoring exist (e.g. …).