Graph kernel approaches have traditionally been the most popular strategy for graph classification tasks. A graph kernel can be thought of as a function that measures the similarity of two graphs. Graph kernels allow kernelized learning algorithms, such as support vector machines, to work directly on graphs without first converting them to fixed-length, real-valued feature vectors through feature extraction.
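To make the idea concrete, here is a minimal sketch of graph-kernel classification using a toy degree-histogram kernel and a support vector machine with a precomputed kernel matrix. The kernel choice and the toy graphs are illustrative assumptions, not the kernels discussed in the paper.

```python
# Minimal sketch: a toy degree-histogram graph kernel (illustrative only)
# plus an SVM trained on the precomputed kernel matrix.
import numpy as np
from sklearn.svm import SVC

def degree_histogram(adj, max_degree=10):
    """Histogram of node degrees for a graph given as an adjacency matrix."""
    degrees = adj.sum(axis=1).astype(int)
    return np.bincount(degrees, minlength=max_degree + 1)[: max_degree + 1]

def kernel_matrix(graphs):
    """Pairwise graph similarity = dot product of degree histograms."""
    feats = np.array([degree_histogram(a) for a in graphs])
    return feats @ feats.T

# Toy dataset: triangles vs. path graphs with binary labels.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
graphs = [triangle, path, triangle, path]
labels = [1, 0, 1, 0]

K = kernel_matrix(graphs)
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))  # kernelized SVM operating directly on graphs
```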
In recent years, the use of Graph Neural Networks (GNNs) built on message-passing neural networks (MPNNs) has exploded, and they have become increasingly popular for graph classification. However, their expressive power is limited: standard MPNNs are bounded by the 1-WL (Weisfeiler-Lehman) graph isomorphism test.
Researchers from Yale University and IBM propose Kernel Graph Neural Networks (KerGNNs), a framework that integrates graph kernels into the GNN message-passing procedure. KerGNNs achieve results comparable to state-of-the-art approaches while substantially improving model interpretability compared to conventional GNNs.
GNN neighborhood aggregation is implemented using neighborhood subgraph topology together with kernel methods, and the resulting expressivity is not limited by the 1-dimensional Weisfeiler-Lehman (1-WL) test. The work also offers a new perspective on generalizing convolution to the graph domain: both 2-D convolution and graph neighborhood aggregation can be understood through kernel methods.
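A small sketch of that analogy, under the assumption of a plain linear kernel: a 2-D convolution evaluates an inner product between each local image patch and a filter, just as KerGNN-style aggregation evaluates a graph kernel between each node's local subgraph and a graph filter. The array sizes here are arbitrary.

```python
# 2-D convolution viewed as a kernel evaluation between patches and a filter.
import numpy as np

def conv2d_as_kernel(image, filt):
    """Each output pixel = <patch, filter>, i.e. a linear kernel evaluation."""
    h, w = filt.shape
    out = np.zeros((image.shape[0] - h + 1, image.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + h, j:j + w]
            out[i, j] = np.sum(patch * filt)  # inner product = linear kernel
    return out

image = np.arange(16.0).reshape(4, 4)
filt = np.ones((2, 2))
print(conv2d_as_kernel(image, filt))  # 3x3 response map
```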
Besides visualizing output graphs, KerGNNs can also reveal the local structure of input graphs by displaying the topology of the trained graph filters, which significantly improves model interpretability and transparency compared to ordinary MPNNs.
KerGNNs’ core idea is to combine graph kernels with the GNN message-passing process in order to exploit the strengths of both methods. Kernel functions can implicitly operate in a high-dimensional feature space while computing kernel values only on the low-dimensional inputs. MPNNs, on the other hand, are neural models that use message passing between nodes to capture dependencies within graphs. They work in two stages: neighborhood aggregation and graph-level readout.
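A minimal sketch of those two MPNN stages in plain PyTorch, assuming a simple sum aggregator and sum readout (layer sizes and the toy graph are illustrative, not the paper's architecture):

```python
# Stage 1: neighborhood aggregation; Stage 2: graph-level readout.
import torch
import torch.nn as nn

class SimpleMPNNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, adj, x):
        # Each node aggregates messages from itself and its neighbors.
        agg = (adj + torch.eye(adj.size(0))) @ x
        return torch.relu(self.linear(agg))

def graph_readout(node_embeddings):
    # Pool node embeddings into a single graph-level representation.
    return node_embeddings.sum(dim=0)

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])  # toy star graph
x = torch.randn(3, 4)                                           # node features
layer = SimpleMPNNLayer(4, 8)
graph_vec = graph_readout(layer(adj, x))  # usable for graph classification
print(graph_vec.shape)  # torch.Size([8])
```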
To combine these two approaches, the researchers propose a subgraph-based node aggregation mechanism. To improve flexibility, they make the graph kernels’ feature-generation process trainable, following the conventional GNN training framework.
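The following is a hedged sketch of that idea: each node’s 1-hop ego-subgraph is compared against a small set of trainable graph filters using a toy differentiable random-walk-style kernel, and the resulting similarity scores become the node’s new features. This illustrates the concept only; the kernel choice, filter sizes, and update rule here are assumptions, not the authors’ implementation.

```python
# Subgraph-based aggregation with trainable graph filters (conceptual sketch).
import torch
import torch.nn as nn

def rw_kernel(adj_a, feat_a, adj_b, feat_b):
    """Toy kernel: node-feature similarity plus one-hop walk similarity."""
    sim = feat_a @ feat_b.T                      # pairwise node-feature similarity
    return sim.sum() + (adj_a @ sim @ adj_b.T).sum()

class KernelAggregationLayer(nn.Module):
    def __init__(self, in_dim, num_filters, filter_size):
        super().__init__()
        # Trainable graph filters: a small adjacency and feature matrix each.
        self.filter_adj = nn.Parameter(torch.rand(num_filters, filter_size, filter_size))
        self.filter_feat = nn.Parameter(torch.randn(num_filters, filter_size, in_dim))

    def forward(self, adj, x):
        new_feats = []
        for v in range(adj.size(0)):
            # 1-hop ego-subgraph around node v (node v plus its neighbors).
            nbrs = torch.cat([torch.tensor([v]), torch.nonzero(adj[v]).flatten()])
            sub_adj, sub_x = adj[nbrs][:, nbrs], x[nbrs]
            # Node v's new feature = kernel value against each trainable filter.
            new_feats.append(torch.stack([
                rw_kernel(sub_adj, sub_x, self.filter_adj[k], self.filter_feat[k])
                for k in range(self.filter_adj.size(0))
            ]))
        return torch.stack(new_feats)

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
x = torch.randn(3, 4)
layer = KernelAggregationLayer(in_dim=4, num_filters=8, filter_size=3)
print(layer(adj, x).shape)  # (3 nodes, 8 filter responses) -> next layer's input
```

The filter responses can then be passed through further layers and a graph-level readout, as in the MPNN sketch above.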
KerGNNs outperform conventional GNNs that are subject to the 1-WL limitation and achieve performance similar to higher-order GNNs while reducing running time on graph and node classification tasks. Unlike standard GNN variants, the graph filters in KerGNNs provide additional information that can be used to help explain model predictions.
Ultimately, KerGNNs achieve results competitive with state-of-the-art approaches while improving interpretability and transparency, which they accomplish by visualizing the trained graph filters and output graphs. In a nutshell, integrating graph kernels gives GNNs the potential to enhance their representation capabilities.
Paper: https://arxiv.org/pdf/2201.00491.pdf