Recent research has demonstrated the effectiveness of contrastive learning for training Transformer-based sequence encoders in sequential recommendation tasks. When items are represented as vectors and the relations between items are measured by dot-product self-attention, feature representations in sequential recommendation can be enhanced. However, in real-world scenarios user behavior sequences are unpredictable, and the limitations of dot-product-based approaches hinder the complete capture of collaborative transferability. Moreover, the Bayesian personalized ranking (BPR) loss function, commonly used in recommendation systems, lacks constraints on positive and negative sampled items, potentially leading to suboptimal optimization outcomes. To tackle these issues, this article proposes a novel method based on stochastic self-attention. The proposed model introduces uncertainty by representing each item as an elliptical Gaussian distribution controlled by a mean and a covariance vector, capturing the unpredictability of items; embedding a stochastic Gaussian distribution into each item thus brings additional uncertainty into the model. At the same time, the model incorporates a Wasserstein self-attention module that computes the positional relationships between items within a sequence, effectively injecting this uncertainty into the training process. Because the Wasserstein distance satisfies the triangle inequality, this attention mechanism not only handles uncertainty but also promotes collaborative transfer learning. Multi-pair contrastive learning relies on high-quality positive samples, and the proposed model combines the cloze task mask and dropout mask mechanisms to generate them, demonstrating superior performance and adaptability compared to traditional single-pair contrastive learning methods.
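To make the Wasserstein self-attention idea concrete, the following sketch scores attention from the negative 2-Wasserstein distance between diagonal-Gaussian item embeddings. This is an illustrative reconstruction, not the paper's exact parameterization: the function name, tensor shapes, and the choice of softmax over negative squared distances are assumptions.

```python
import torch

def wasserstein_attention_scores(mean_q, cov_q, mean_k, cov_k):
    """Sketch of Wasserstein self-attention over Gaussian item embeddings.

    mean_*: (batch, seq, dim) mean vectors of the Gaussian embeddings
    cov_*:  (batch, seq, dim) diagonal covariances (non-negative)

    For diagonal Gaussians, W2^2(N1, N2) = ||mu1 - mu2||^2
                                          + ||sqrt(c1) - sqrt(c2)||^2.
    """
    # Pairwise squared L2 distance between mean vectors: (batch, seq, seq)
    mean_dist = torch.cdist(mean_q, mean_k, p=2) ** 2
    # Pairwise squared L2 distance between covariance square roots
    cov_dist = torch.cdist(cov_q.sqrt(), cov_k.sqrt(), p=2) ** 2
    w2_sq = mean_dist + cov_dist
    # Smaller Wasserstein distance -> larger attention weight
    return torch.softmax(-w2_sq, dim=-1)
```

Unlike the dot product, the 2-Wasserstein distance is a proper metric (hence satisfies the triangle inequality), which is the property the abstract relies on for collaborative transferability.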
Additionally, a dynamic loss reweighting strategy is introduced to balance the cloze task loss and the contrastive loss effectively. Experimental results show that the proposed model outperforms state-of-the-art models, especially on cold-start items. The hit ratio (HR) and normalized discounted cumulative gain (NDCG) improved by an average of 1.3% and 10.27% on the Beauty dataset, 8.24% and 5.89% on the Toys dataset, 68.62% and 8.22% on the ML-1M dataset, and 93.57% and 44.87% on the ML-100M dataset, respectively. Our code is available at DOI: 10.5281/zenodo.13634624.
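The abstract does not specify the dynamic reweighting scheme; one common realization, shown here purely as a hedged stand-in, is homoscedastic-uncertainty weighting with learnable log-variance scalars (the function and parameter names are hypothetical):

```python
import torch

def reweighted_loss(cloze_loss, cl_loss, log_var_cloze, log_var_cl):
    """Dynamic two-task loss reweighting (uncertainty-weighting style):
    total = exp(-s1) * L1 + s1 + exp(-s2) * L2 + s2,
    where s1, s2 are learnable log-variances updated by the optimizer.
    As a task's uncertainty grows, its loss term is automatically
    down-weighted, balancing cloze and contrastive objectives."""
    return (torch.exp(-log_var_cloze) * cloze_loss + log_var_cloze
            + torch.exp(-log_var_cl) * cl_loss + log_var_cl)
```

In practice `log_var_cloze` and `log_var_cl` would be `torch.nn.Parameter`s registered on the model so the balance adapts during training.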
Download full-text PDF:

| Link | Source |
|---|---|
| http://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190389 | PMC |
| http://dx.doi.org/10.7717/peerj-cs.2749 | DOI Listing |