2-D neighborhood preserving projection (2DNPP) uses 2-D images as feature input instead of the 1-D vectors used by neighborhood preserving projection (NPP), and it requires less computation time than NPP. However, both NPP and 2DNPP use the L2-norm as a metric, which is sensitive to noise in the data. In this paper, we propose a novel NPP method called low-rank 2DNPP (LR-2DNPP). This method divides the input data into a component part that encodes low-rank features and an error part that ensures the noise is sparse. A nearest-neighbor graph is then learned from the clean data with the same procedure as 2DNPP. To ensure that the features learned by LR-2DNPP are optimal for classification, we combine structurally incoherent learning and low-rank learning with NPP to form a unified model called discriminative LR-2DNPP (DLR-2DNPP). By encoding the structural incoherence of the learned clean data, DLR-2DNPP can enhance the discriminative ability of the extracted features. Theoretical analyses of the convergence and computational complexity of LR-2DNPP and DLR-2DNPP are presented in detail. Experiments on seven public image databases demonstrate the effectiveness of the proposed methods for robust image representation.
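The abstract does not spell out the low-rank/sparse split or the graph step, so the sketch below is only a rough illustration: it pairs a standard robust-PCA-style decomposition (nuclear norm plus L1 penalty, solved by ADMM) with a plain k-nearest-neighbor graph on the recovered clean data. The solver choice, the parameters `lam`, `mu`, and `k`, and the use of vectorized row samples rather than the paper's 2-D image matrices are all assumptions, not the authors' algorithm.

```python
import numpy as np

def rpca_admm(X, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Split X into a low-rank part L and a sparse error part S via
    ADMM on min ||L||_* + lam*||S||_1 s.t. X = L + S.
    This is a generic robust-PCA sketch, NOT the paper's solver."""
    m, n = X.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))        # common default weight
    if mu is None:
        mu = 0.25 * m * n / (np.abs(X).sum() + 1e-12)  # heuristic step
    norm_X = np.linalg.norm(X, "fro")
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)                      # dual variable
    for _ in range(max_iter):
        # Singular-value thresholding -> low-rank component L
        U, sig, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Elementwise soft thresholding -> sparse error S
        R = X - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the constraint X = L + S
        Y += mu * (X - L - S)
        if np.linalg.norm(X - L - S, "fro") < tol * norm_X:
            break
    return L, S

def knn_graph(Z, k=5):
    """Symmetric k-nearest-neighbor adjacency on the rows of Z,
    standing in for the graph learned from the clean data.
    Dense O(n^2) distances; fine for a small demo."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)              # exclude self-loops
    idx = np.argsort(d2, axis=1)[:, :k]
    n = Z.shape[0]
    W = np.zeros((n, n))
    W[np.repeat(np.arange(n), k), idx.ravel()] = 1.0
    return np.maximum(W, W.T)                 # symmetrize

# Toy usage: 50 noisy samples of a rank-2 signal (rows = vectorized
# images, an assumption; the paper keeps images as 2-D matrices).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 100))
X += (rng.random(X.shape) < 0.05) * 5.0 * rng.standard_normal(X.shape)
L, S = rpca_admm(X)
W = knn_graph(L, k=5)
```

Building the graph on `L` rather than on `X` reflects the abstract's point: the neighborhood structure is estimated from the denoised component, so sparse corruptions land in `S` instead of distorting the neighbor relations.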
| Download full-text PDF | Source |
|---|---|
| http://dx.doi.org/10.1109/TCYB.2018.2815559 | DOI Listing |