The message passing neural network framework
Before exploring heterogeneous graphs, let’s recap what we have learned about homogeneous GNNs. In the previous chapters, we saw different functions for aggregating and combining features from neighboring nodes. As seen in Chapter 5, the simplest GNN layer sums the features of neighboring nodes (including the target node itself) after transforming them with a weight matrix. The resulting sum then replaces the previous embedding of the target node.
The node-level operator can be written as follows:

$$h_i = \sum_{j \in \tilde{\mathcal{N}}_i} W x_j$$

Here, $\tilde{\mathcal{N}}_i$ is the set of neighboring nodes of node $i$ (including $i$ itself), $x_j$ is the embedding of node $j$, and $W$ is a weight matrix.
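To make this concrete, here is a minimal sketch of the operator in plain PyTorch. The graph, feature sizes, and variable names are made up for illustration; adding self-loops to the adjacency matrix accounts for the “including itself” part of the sum:

```python
import torch

# Toy example: 4 nodes, 8 input features, 16 output features (made-up sizes)
num_nodes, dim_in, dim_out = 4, 8, 16
X = torch.rand(num_nodes, dim_in)                 # node embeddings x_j
A = torch.tensor([[0., 1., 0., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 0.],
                  [1., 0., 0., 0.]])              # adjacency matrix
A_tilde = A + torch.eye(num_nodes)                # self-loops: each node belongs to its own neighborhood

W = torch.nn.Linear(dim_in, dim_out, bias=False)  # weight matrix W

# h_i = sum over j in the neighborhood of i of W x_j,
# computed for all nodes at once as a matrix product
H = A_tilde @ W(X)
print(H.shape)  # torch.Size([4, 16])
```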
GCN and GAT layers added fixed and dynamic weights to node features but kept the same idea. Even GraphSAGE’s LSTM operator or GIN’s sum aggregator did not change the main concept of a GNN layer. If we look at all these variants, we can generalize GNN layers...
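All of these layers follow the same message passing pattern. As a sketch of that pattern, the vanilla operator above could be written with PyTorch Geometric’s MessagePassing base class (the class name VanillaGNNLayer is chosen here purely for illustration):

```python
import torch
from torch_geometric.nn import MessagePassing
from torch_geometric.utils import add_self_loops


class VanillaGNNLayer(MessagePassing):
    """Sum of linearly transformed neighbor features, as in the formula above."""

    def __init__(self, dim_in, dim_out):
        super().__init__(aggr='add')              # sum aggregation
        self.linear = torch.nn.Linear(dim_in, dim_out, bias=False)

    def forward(self, x, edge_index):
        # Add self-loops so that each node is part of its own neighborhood
        edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
        x = self.linear(x)                        # apply the weight matrix W
        return self.propagate(edge_index, x=x)    # aggregate neighbor messages

    def message(self, x_j):
        # The message from neighbor j is simply its transformed embedding W x_j
        return x_j
```

Swapping the aggregation scheme (for example, 'mean' or 'max' instead of 'add') or the message function is, in essence, what distinguishes the variants mentioned above.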