Toward Interpretable and Stable Graph Neural Networks

Description

Graphs are ubiquitous data structures in numerous domains, such as social science (social networks), natural science (physical systems and protein-protein interaction networks), and knowledge graphs. As generalizations of traditional deep neural networks to graph-structured data, Graph Neural Networks (GNNs) have demonstrated their power in graph representation learning and have permeated numerous areas of science and technology. However, GNNs have also inherited a key drawback of traditional deep neural networks, i.e., a lack of interpretability. Moreover, GNNs are vulnerable to adversarial attacks. These drawbacks have raised serious concerns about adopting GNNs in many critical applications. Thus, this project aims to tackle the major drawbacks of GNNs and greatly broaden their usability in critical applications. To achieve this research goal, the project systematically investigates advanced principles for new mechanisms that interpret GNNs, understand their vulnerabilities, and develop robust GNNs. The proposed research extends state-of-the-art GNNs to a new frontier, investigates original problems that call for innovative solutions, and paves the way for a new research endeavor to effectively tame graph mining.
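
For readers unfamiliar with how GNNs generalize deep networks to graph-structured data, the sketch below illustrates a single GCN-style message-passing layer: each node aggregates its neighbors' features through a normalized adjacency matrix and then applies a learned linear transform. This is only an illustrative example (the function name, toy graph, and random weights are assumptions for this sketch), not the models developed in this project.

    # Minimal sketch of one message-passing (GCN-style) layer, for illustration only;
    # the function name, toy graph, and weights below are hypothetical.
    import numpy as np

    def gcn_layer(adj, features, weight):
        """One graph convolution step: add self-loops, symmetrically normalize
        the adjacency matrix, aggregate neighbor features, then apply a linear
        transform followed by a ReLU nonlinearity."""
        n = adj.shape[0]
        adj_hat = adj + np.eye(n)                       # add self-loops
        deg_inv_sqrt = 1.0 / np.sqrt(adj_hat.sum(axis=1))
        norm_adj = adj_hat * deg_inv_sqrt[:, None] * deg_inv_sqrt[None, :]
        return np.maximum(norm_adj @ features @ weight, 0.0)

    # Toy example: a 4-node path graph with 3-dimensional node features.
    adj = np.array([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    features = np.random.rand(4, 3)
    weight = np.random.rand(3, 2)
    embeddings = gcn_layer(adj, features, weight)
    print(embeddings.shape)  # (4, 2): one 2-dimensional embedding per node

Stacking several such layers lets information propagate over multi-hop neighborhoods, which is also why perturbing a few edges or node features can mislead the learned representations, motivating the interpretability and robustness questions studied here.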

Publications

  • Conferences
  • Resources
  • Code
  • Project Members

Acknowledgments

This project is supported by the Army Research Office under grant #W911NF-21-1-0198. Any opinions, findings, and conclusions or recommendations expressed here are those of the author(s) and do not necessarily reflect the views of the Army Research Office.

Created by Suhang Wang, who can be reached at szw494 at psu.edu.
Webmaster: Huaisheng Zhu, Email: hvz5312 at psu.edu.


Last Updated: August 31, 2022