With the increasing popularity of Graph Neural Networks (GNNs) for predictive tasks on graph-structured data, research on their explainability is becoming more critical and achieving significant progress. Although many methods have been proposed to explain the predictions of GNNs, their focus is mainly on "how to generate explanations." However, other important research questions, such as "whether the GNN explanations are inaccurate," "what if the explanations are inaccurate," and "how to adjust the model to generate more accurate explanations," have received little attention.
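One way to make the question of "whether an explanation is inaccurate" concrete is a fidelity-style check: compare the model's prediction on the full graph with its prediction when only the explainer's highest-ranked edges are kept. The sketch below is an illustrative assumption, not the method of the cited work; `model`, `x`, `edge_index`, and `edge_mask` are hypothetical placeholders for a trained GNN, node features, graph connectivity, and an explainer's edge-importance scores.

```python
# Minimal sketch (assumptions, not the cited paper's method) of a
# fidelity-style check on a GNN explanation: if dropping all edges except
# the explanation changes the prediction a lot, the explanation is suspect.
import torch

def explanation_fidelity(model, x, edge_index, edge_mask, target_class, keep_ratio=0.2):
    """Drop in predicted probability for `target_class` when the graph is
    reduced to the top-`keep_ratio` explanation edges.
    Assumes `model(x, edge_index)` returns class logits of shape [num_classes]."""
    model.eval()
    with torch.no_grad():
        full_prob = torch.softmax(model(x, edge_index), dim=-1)[target_class]

        # Keep only the edges the explainer ranks highest.
        k = max(1, int(keep_ratio * edge_mask.numel()))
        top_edges = edge_mask.topk(k).indices
        sub_edge_index = edge_index[:, top_edges]

        sub_prob = torch.softmax(model(x, sub_edge_index), dim=-1)[target_class]

    # Small drop -> the explanation subgraph preserves the prediction;
    # large drop -> the explanation is likely incomplete or inaccurate.
    return (full_prob - sub_prob).item()
```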
IEEE Trans Neural Netw Learn Syst, April 2024
Inferring resting-state functional connectivity (FC) from anatomical brain wiring, known as structural connectivity (SC), is of enormous significance in neuroscience for understanding biological neuronal networks and treating mental diseases. Both SC and FC are networks whose nodes are brain regions; in SC, the edges are the physical nerve-fiber connections between regions, while in FC, the edges are the regions' coactivation relations. Despite the importance of SC and FC, until very recently the rapidly growing body of research on this topic has generally focused either on linear models or on computational models that rely heavily on heuristics and simple assumptions about the mapping between FC and SC.
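For concreteness, the kind of simple linear SC-to-FC mapping that such prior work relies on can be sketched as a per-edge least-squares fit across subjects. The array shapes and function names below are illustrative assumptions, not taken from the cited paper; both connectomes are represented as symmetric region-by-region matrices.

```python
# Minimal sketch (assumed setup, not the cited paper's model) of a per-edge
# linear fit FC ~ a * SC + b across subjects, using vectorized upper triangles.
import numpy as np

def upper_tri(mat):
    """Vectorize the upper triangle (excluding the diagonal) of a square matrix."""
    i, j = np.triu_indices(mat.shape[0], k=1)
    return mat[i, j]

def fit_linear_sc_to_fc(sc_mats, fc_mats):
    """sc_mats, fc_mats: arrays of shape (n_subjects, n_regions, n_regions).
    Returns per-edge slope and intercept of a least-squares fit FC ~ a*SC + b."""
    X = np.stack([upper_tri(m) for m in sc_mats])   # (n_subjects, n_edges)
    Y = np.stack([upper_tri(m) for m in fc_mats])   # (n_subjects, n_edges)
    # One univariate linear regression per edge, fitted across subjects.
    sx, sy = X.mean(0), Y.mean(0)
    slope = ((X - sx) * (Y - sy)).sum(0) / np.maximum(((X - sx) ** 2).sum(0), 1e-12)
    intercept = sy - slope * sx
    return slope, intercept
```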