Traditional classification problems assume that features and labels are fixed. However, this assumption is easily violated in open environments. For example, the exponential growth of web pages leads to an expanding feature space as keywords accumulate. At the same time, rapid content refresh makes it difficult to obtain accurate labels for web pages, often resulting in rough annotations that contain potentially correct labels, i.e., a partial (candidate) label set. In such cases, the coupling between the incremental feature space and the partial label set introduces more complex real-world challenges, which deserve attention but have not been fully explored. In this paper, we address this issue by introducing a novel incremental learning approach with Simultaneous Incremental Feature and Partial Label (SIFPL). SIFPL models data evolution in dynamic and open environments as a two-stage process, consisting of a previous stage and an adapting stage, to deal with the associated challenges. Specifically, to ensure the reusability of the model during adaptation, we impose a classifier consistency constraint to enhance the stability of the current model. This constraint leverages historical information from the previous stage to improve the generalization ability of the current model, providing a reliable foundation for further refining the model with new features. For label disambiguation, we filter out incorrect candidate labels according to the principle of minimizing classifier loss, ensuring that the new features and labels effectively support the model's adaptation to the incremental feature space and thereby further refine its performance. We also provide a theoretical analysis of the model's generalization bound, which validates the effectiveness of model inheritance. Experiments on benchmark and real-world datasets confirm that the proposed method achieves better accuracy than the baseline methods in most cases.
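The disambiguation principle described above — keeping, for each sample, the candidate label on which the current classifier incurs the smallest loss — can be sketched as follows. This is an illustrative toy example under our own assumptions (a generic score-based classifier and a boolean candidate mask), not the paper's SIFPL algorithm; the function name `disambiguate` is hypothetical.

```python
# Illustrative sketch of loss-minimizing partial-label disambiguation
# (an assumption-based toy, not the SIFPL implementation).
import numpy as np

def disambiguate(scores, candidate_mask):
    """Pick, for each sample, the candidate label with the highest
    classifier score (equivalently, the smallest classification loss).

    scores:         (n_samples, n_classes) real-valued classifier outputs
    candidate_mask: (n_samples, n_classes) boolean; True where the label
                    belongs to the sample's candidate (partial) label set
    Returns an (n_samples,) array of disambiguated label indices.
    """
    # Labels outside the candidate set are masked out entirely.
    masked = np.where(candidate_mask, scores, -np.inf)
    return masked.argmax(axis=1)

scores = np.array([[0.9, 0.2, 0.4],
                   [0.1, 0.8, 0.3]])
mask = np.array([[True, True, False],   # candidate set {0, 1}
                 [False, True, True]])  # candidate set {1, 2}
print(disambiguate(scores, mask))  # [0 1]
```

Note that even when a non-candidate label scores highest overall, the argmax is restricted to the candidate set, which is what turns the rough annotation into a single training label.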
DOI: http://dx.doi.org/10.1109/TPAMI.2025.3600033