Recommender systems are widely used across applications. Knowledge graphs are increasingly used to improve recommendation performance by extracting valuable information from user-item interactions. However, current methods fail to effectively exploit the fine-grained information within the knowledge graph. In addition, some graph-neural-network-based recommendation methods overlook the importance of entities to users when performing aggregation. To alleviate these issues, we introduce a knowledge-graph-based graph neural network (PIFSA-GNN) for recommendation with two key components. The first, user preference interaction fusion, incorporates auxiliary user information into the recommendation process, strengthening the influence of users on the model. The second is an attention mechanism, user preference swap attention, which improves entity weight calculation so that neighboring entities are aggregated effectively. Our method was extensively tested on three real-world datasets. On the movie dataset, it outperforms the best baseline by 1.3% in AUC and 2.8% in F1; Hit@1 increases by 0.7%, Hit@5 by 0.6%, and Hit@10 by 1.0%. On the restaurant dataset, AUC improves by 2.6% and F1 by 7.2%; Hit@1 increases by 1.3%, Hit@5 by 3.7%, and Hit@10 by 2.9%. On the music dataset, AUC improves by 0.9% and F1 by 0.4%; Hit@1 increases by 3.3%, Hit@5 by 1.2%, and Hit@10 by 0.2%. These results show that our method consistently outperforms the baselines.
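The abstract does not give the exact formulation of user preference swap attention, but the general idea it describes — weighting a node's neighboring entities by their relevance to the *user* before aggregating them — can be sketched as follows. This is a minimal illustration under assumed design choices (a user-conditioned query formed by an element-wise product, softmax-normalized dot-product scores), not the paper's actual mechanism; all function and variable names are hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def aggregate_neighbors(user_vec, entity_vec, neighbor_vecs):
    """Attention-weighted aggregation of neighboring entity embeddings.

    Each neighbor's weight depends on both the user and the central
    entity, so entities more relevant to this user contribute more
    to the aggregated representation (hypothetical formulation).
    """
    # User-conditioned query: element-wise interaction of user and entity.
    query = user_vec * entity_vec
    # One relevance score per neighbor, normalized into weights.
    scores = neighbor_vecs @ query        # shape: (num_neighbors,)
    weights = softmax(scores)
    # Weighted sum of neighbor embeddings.
    return weights @ neighbor_vecs

# Toy usage with random embeddings of dimension 8 and 4 neighbors.
rng = np.random.default_rng(0)
d, k = 8, 4
user = rng.normal(size=d)
entity = rng.normal(size=d)
neighbors = rng.normal(size=(k, d))
agg = aggregate_neighbors(user, entity, neighbors)
print(agg.shape)  # (8,)
```

Because the attention weights are conditioned on the user embedding, two different users aggregating the same entity's neighborhood obtain different representations, which is the behavior the abstract attributes to its user-aware aggregation.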
DOI: http://dx.doi.org/10.1016/j.neunet.2024.107116