The lottery ticket hypothesis (LTH) has drawn increased attention to pruning neural networks at initialization. We study this problem in the linear setting. We show that finding a sparse mask at initialization is equivalent to the sketching problem introduced for efficient matrix multiplication. This equivalence gives us tools to analyze the LTH problem and gain insights into it. Specifically, using the mask found at initialization, we bound the approximation error of the pruned linear model at the end of training. We theoretically justify previous empirical evidence that the search for sparse networks may be data-independent. Using the sketching perspective, we suggest a generic improvement to existing algorithms for pruning at initialization, which we show to be beneficial in the data-independent case.
DOI: http://dx.doi.org/10.1109/TPAMI.2025.3598343
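To make the stated equivalence concrete, the toy Python sketch below (my own illustration of the general sampling-based sketching idea, not code or notation from the paper; all names and sizes are made up) approximates the end-to-end map of a two-layer linear model by masking hidden units, which is exactly a sampled approximate matrix multiplication of the two weight matrices.

```python
# Toy illustration (not from the paper): pruning hidden units of a two-layer
# linear model W2 @ W1 is the same operation as sketching the matrix product
# W2 @ W1 by sampling columns of W2 together with the matching rows of W1.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, d_out, k = 32, 256, 16, 64      # keep k of the 256 hidden units

W1 = rng.normal(size=(d_hidden, d_in))          # first layer at initialization
W2 = rng.normal(size=(d_out, d_hidden))         # second layer at initialization

# Sampling probabilities proportional to the column norms of W2 times the
# row norms of W1 -- the classic norm-based choice for sampled matrix products.
p = np.linalg.norm(W2, axis=0) * np.linalg.norm(W1, axis=1)
p = p / p.sum()
kept = rng.choice(d_hidden, size=k, replace=False, p=p)

# Sparse mask over hidden units; kept units are rescaled by 1 / (k * p_j)
# so the masked product stays close to the full one.
mask = np.zeros(d_hidden)
mask[kept] = 1.0 / (k * p[kept])

exact = W2 @ W1                                  # unpruned end-to-end linear map
pruned = (W2 * mask) @ W1                        # masked (pruned) linear map

rel_err = np.linalg.norm(exact - pruned) / np.linalg.norm(exact)
print(f"relative error of the pruned linear map: {rel_err:.3f}")
```

In this view, choosing a good mask amounts to choosing a good sketch, which is the connection the abstract exploits to bound the approximation error of the pruned linear model at the end of training.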
IEEE Trans Pattern Anal Mach Intell
September 2025
Radiance fields represented by 3D Gaussians excel at synthesizing novel views, offering both high training efficiency and fast rendering. However, with sparse input views, the lack of multi-view consistency constraints results in poorly initialized Gaussians and unreliable heuristics for optimization, leading to suboptimal performance. Existing methods often incorporate depth priors from dense estimation networks but overlook the inherent multi-view consistency in input images.
Neural Netw
October 2025
Systems Engineering and Computer Science (PESC), Federal University of Rio de Janeiro (UFRJ), Av. Athos da Silveira Ramos 149, Bloco H-319, Cidade Universitária, 21945-970, Rio de Janeiro, RJ, Brazil.
Leveraging sparse networks to connect successive layers in deep neural networks has recently been shown to provide benefits to large-scale state-of-the-art models. However, network connectivity also plays a significant role in the learning performance of shallow networks, such as the classic Restricted Boltzmann Machine (RBM). Efficiently finding sparse connectivity patterns that improve the learning performance of shallow networks is a fundamental problem.
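As a concrete picture of what a sparse connectivity pattern means for an RBM, here is a minimal Python sketch (my own illustration with made-up sizes, not the article's method): a fixed binary mask over the visible-hidden weight matrix, so only the masked-in edges participate in the conditional p(h | v).

```python
# Toy illustration (not from the article): a fixed sparse connectivity
# pattern for a Bernoulli-Bernoulli RBM. Only the edges selected by the
# binary mask exist, so the visible-hidden bipartite graph stays sparse.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, density = 784, 128, 0.1     # ~10% of possible edges kept

mask = (rng.random((n_visible, n_hidden)) < density).astype(float)
W = rng.normal(scale=0.01, size=(n_visible, n_hidden)) * mask   # masked weights
b_h = np.zeros(n_hidden)

def hidden_probs(v, W, b_h):
    """p(h_j = 1 | v) for a Bernoulli RBM; the mask is already baked into W."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b_h)))

v = (rng.random(n_visible) < 0.5).astype(float)  # a random binary visible vector
print(hidden_probs(v, W, b_h)[:5])
```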
IEEE Trans Image Process
January 2025
Feature matching is a fundamental task widely used in computer vision applications. This paper introduces a novel and effective method named Grid-guided Sparse Laplacian Consensus, rooted in the concept of smoothness constraints. To address challenging scenes such as severe deformation and independent motions, we devise grid-based adaptive matching guidance to construct multiple transformations based on motion coherence.
Neural Netw
April 2025
Institute of Deep Perception Technology, JITRI, 214000, Wuxi, China; XJTLU-JITRI Academy of Technology, Xi'an Jiaotong-Liverpool University, 215123, Suzhou, China; Thrust of Artificial Intelligence and Thrust of Intelligent Transportation, The Hong Kong University of Science and Technology (Guangzhou), China.
Over the past decade, neural network models have steadily grown in both width and depth, leading to increasing interest in neural network pruning. Unstructured pruning provides fine-grained sparsity and achieves better inference acceleration under specific hardware support. Unstructured Pruning at Initialization (PaI) streamlines the iterative pruning pipeline, but sparse weights increase the risk of underfitting during training.
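For context, here is a minimal Python sketch of the generic unstructured PaI recipe the snippet refers to (an illustration only; the scoring rule, sizes, and names are assumptions, not the article's algorithm): score every weight before training, keep the top fraction, and fix the resulting binary mask.

```python
# Toy illustration (not from the article): the generic unstructured
# pruning-at-initialization recipe. A per-weight score is computed before any
# training, the lowest-scoring weights are masked out once, and only the
# surviving weights would then be trained. Plain magnitude is used as the
# score here purely for simplicity; PaI methods differ mainly in this score.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(512, 256))    # a freshly initialized layer
sparsity = 0.9                                # remove 90% of the weights

score = np.abs(W)                             # placeholder saliency score
threshold = np.quantile(score, sparsity)      # cut below the 90th percentile
mask = (score > threshold).astype(W.dtype)    # fixed binary mask

W_pruned = W * mask                           # only these weights would be trained
print(f"fraction of weights kept: {mask.mean():.3f}")
```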