Unlocking the power of L1 regularization: A novel approach to taming overfitting in CNN for image classification.

Category Ranking: 98%
Total Visits: 921
Avg Visit Duration: 2 minutes
Citations: 20

Article Abstract

Convolutional Neural Networks (CNNs) stand as indispensable tools in deep learning, capable of autonomously extracting crucial features from diverse data types. However, the intricacies of CNN architectures can present challenges such as overfitting and underfitting, necessitating thoughtful strategies to optimize their performance. In this work, these issues are addressed by introducing L1 regularization into the basic CNN architecture when applied to image classification. The proposed model has been applied to three different datasets, where incorporating L1 regularization with different coefficient values has distinct effects on the working mechanism of the CNN architecture, improving its performance. In MNIST digit classification, L1 regularization (coefficient: 0.01) simplifies feature representation and prevents overfitting, leading to enhanced accuracy. In the Mango Tree Leaves dataset, dual L1 regularization (coefficient: 0.001 for convolutional layers and 0.01 for dense layers) improves model interpretability and generalization, facilitating effective leaf classification. Additionally, for hand-drawn sketches such as those in the Quick, Draw! dataset, L1 regularization (coefficient: 0.001) refines feature representation, resulting in improved recognition accuracy and generalization across diverse sketch categories. These findings underscore the significance of regularization techniques like L1 regularization in fine-tuning CNNs, optimizing their performance, and ensuring their adaptability to new data while maintaining high accuracy. Such strategies play a pivotal role in advancing the utility of CNNs across various domains, further solidifying their position as a cornerstone of deep learning.
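
As a concrete illustration of the scheme the abstract describes, the sketch below applies L1 penalties in a small TensorFlow/Keras CNN using the coefficients reported for the Mango Tree Leaves setup (0.001 on convolutional kernels, 0.01 on the dense layer). It is not the authors' code: the input shape, filter counts, and class count are illustrative assumptions.

import tensorflow as tf
from tensorflow.keras import layers, regularizers

# L1 coefficients from the abstract's Mango Tree Leaves configuration.
CONV_L1 = 0.001
DENSE_L1 = 0.01

model = tf.keras.Sequential([
    # An L1 penalty on convolution kernels drives many weights toward zero,
    # yielding the sparser, simpler feature detectors the abstract describes.
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l1(CONV_L1),
                  input_shape=(224, 224, 3)),  # input size is an assumption
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu",
                  kernel_regularizer=regularizers.l1(CONV_L1)),
    layers.MaxPooling2D(),
    layers.Flatten(),
    # A stronger L1 penalty on the classifier head curbs overfitting there.
    layers.Dense(128, activation="relu",
                 kernel_regularizer=regularizers.l1(DENSE_L1)),
    layers.Dense(8, activation="softmax"),  # 8 leaf classes: an assumption
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

Keras adds each layer's L1 term (the coefficient times the sum of absolute kernel weights) to the training loss automatically, so no custom loss is needed; tuning the convolutional and dense coefficients independently is what the abstract calls dual L1 regularization.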

Download full-text PDF

Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12413007
PLOS: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0327985

Publication Analysis

Top Keywords

regularization coefficient: 16
regularization: 8
image classification: 8
deep learning: 8
feature representation: 8
coefficient 0.001: 8
unlocking power: 4
power regularization: 4
regularization novel: 4
novel approach: 4

Similar Publications