We introduce PiCCL (Primary Component Contrastive Learning), a self-supervised contrastive learning framework that uses a multiplex Siamese network consisting of many identical branches rather than two in order to maximize learning efficiency. PiCCL is simple and lightweight: it does not rely on asymmetric networks, intricate pretext tasks, hard-to-compute loss functions, or multimodal data, all of which are common in multi-view contrastive learning frameworks and can hinder performance, simplicity, generalizability, and explainability. PiCCL obtains multiple positive samples by applying the same image-augmentation paradigm to the same image numerous times, and the network loss is computed with a custom-designed loss function, PiCLoss (Primary Component Loss), which takes advantage of PiCCL's unique structure while keeping it computationally lightweight. To demonstrate its strength, we benchmarked PiCCL against various state-of-the-art self-supervised algorithms on multiple datasets, including CIFAR-10, CIFAR-100, and STL-10. PiCCL achieved top performance in most of our tests, with top-1 accuracies of 94%, 72%, and 97% on the three datasets, respectively. Where PiCCL excels most, however, is in small-batch learning scenarios: when tested on STL-10 with a batch size of 8, PiCCL still achieved 93% accuracy, outperforming the competition by about 3 percentage points.
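The multi-positive idea described above can be illustrated with a minimal sketch, assuming a PyTorch setup: one augmentation pipeline is applied N times to the same image to produce N positive views, one per branch. The loss shown here is a generic N-view InfoNCE-style stand-in, not the paper's PiCLoss (whose exact form is not given in this abstract); the view count, crop size, and augmentation choices are likewise illustrative assumptions.

```python
# Sketch of the multi-view positive-sample setup described in the abstract.
# NOTE: the loss below is a generic multi-view contrastive loss used as a
# stand-in; it is NOT the paper's PiCLoss, which is not specified here.
import torch
import torch.nn.functional as F
from torchvision import transforms

N_VIEWS = 4  # hypothetical number of identical branches / augmented views

# One augmentation paradigm, applied N_VIEWS times to the same image
augment = transforms.Compose([
    transforms.RandomResizedCrop(96),            # STL-10-sized crops (assumption)
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

def multi_view(img):
    """Return N_VIEWS independently augmented copies of a single PIL image."""
    return [augment(img) for _ in range(N_VIEWS)]

def stand_in_multiview_loss(embeddings, temperature=0.5):
    """Generic N-view InfoNCE-style loss (placeholder, not PiCLoss).

    embeddings: list of N tensors, each of shape (batch, dim), one per view.
    Every pair of views of the same image is treated as a positive pair.
    """
    z = [F.normalize(e, dim=1) for e in embeddings]
    loss, pairs = 0.0, 0
    for i in range(len(z)):
        for j in range(len(z)):
            if i == j:
                continue
            logits = z[i] @ z[j].t() / temperature      # (batch, batch) similarities
            labels = torch.arange(z[i].size(0))         # positives lie on the diagonal
            loss = loss + F.cross_entropy(logits, labels)
            pairs += 1
    return loss / pairs
```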
Download full-text PDF | Source
---|---
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12377561 | PMC
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0329273 | PLOS