GraphConv
class dgl.nn.mxnet.conv.GraphConv(in_feats, out_feats, norm='both', weight=True, bias=True, activation=None, allow_zero_in_degree=False)[source]

Bases: mxnet.gluon.block.Block
Graph convolutional layer from Semi-Supervised Classification with Graph Convolutional Networks
Mathematically it is defined as follows:
\[h_i^{(l+1)} = \sigma\left(b^{(l)} + \sum_{j\in\mathcal{N}(i)}\frac{1}{c_{ij}}h_j^{(l)}W^{(l)}\right)\]

where \(\mathcal{N}(i)\) is the set of neighbors of node \(i\), \(c_{ij}\) is the product of the square roots of the node degrees (i.e., \(c_{ij} = \sqrt{|\mathcal{N}(i)|}\sqrt{|\mathcal{N}(j)|}\)), and \(\sigma\) is an activation function.
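To make the normalization concrete, below is a minimal NumPy sketch of one propagation step under the formula above. The 3-node graph, features, and weights are made up for illustration; on this symmetric toy graph, in- and out-degrees coincide.

>>> import numpy as np
>>> # Hypothetical 3-node cycle 0->1->2->0 with self-loops;
>>> # adj[i, j] = 1 iff there is an edge j -> i.
>>> adj = np.array([[1., 0., 1.],
...                 [1., 1., 0.],
...                 [0., 1., 1.]])
>>> deg = adj.sum(axis=1)             # |N(i)|, all equal to 2 here
>>> inv_sqrt = 1.0 / np.sqrt(deg)
>>> # Divide each message from j to i by c_ij = sqrt(|N(i)|) * sqrt(|N(j)|).
>>> norm_adj = inv_sqrt[:, None] * adj * inv_sqrt[None, :]
>>> h = np.ones((3, 4))               # node features h^{(l)}
>>> W = np.full((4, 2), 0.5)          # layer weight W^{(l)}
>>> h_next = norm_adj @ h @ W         # h^{(l+1)} before bias and activation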
- Parameters
  - in_feats (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).
  - out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).
  - norm (str, optional) – How to apply the normalizer. Can be one of the following values (see the sketch after this list):
    - right, to divide the aggregated messages by each node's in-degree, which is equivalent to averaging the received messages.
    - none, where no normalization is applied.
    - both (default), where the messages are scaled with \(1/c_{ij}\) as above, equivalent to symmetric normalization.
    - left, to divide the messages sent out from each node by its out-degree, equivalent to random walk normalization.
  - weight (bool, optional) – If True, apply a linear layer; otherwise, aggregate the messages without a weight matrix.
  - bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
  - activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
  - allow_zero_in_degree (bool, optional) – If there are 0-in-degree nodes in the graph, the output for those nodes will be invalid since no message will be passed to them. This is harmful for some applications, causing silent performance regression. This module will raise a DGLError if it detects 0-in-degree nodes in the input graph. Setting this to True suppresses the check and lets the users handle it by themselves. Default: False.
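The following minimal sketch, on an arbitrary toy graph, shows how a norm value is selected at construction time; only the output shapes are shown, since the values depend on the random initialization:

>>> import dgl
>>> import mxnet as mx
>>> from dgl.nn import GraphConv
>>> g = dgl.add_self_loop(dgl.graph(([0, 1, 2], [1, 2, 0])))
>>> feat = mx.nd.ones((3, 4))
>>> for norm in ('both', 'right', 'left', 'none'):
...     conv = GraphConv(4, 2, norm=norm)
...     conv.initialize(ctx=mx.cpu(0))
...     print(norm, conv(g, feat).shape)
both (3, 2)
right (3, 2)
left (3, 2)
none (3, 2)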
- weight – The learnable weight tensor.
  - Type: mxnet.NDArray
- bias – The learnable bias tensor.
  - Type: mxnet.NDArray
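Both attributes are registered as gluon parameters, as is usual for a gluon Block, so after initialize() their current values can be read as mxnet.NDArray via .data(). A minimal sketch (shapes follow the constructor arguments):

>>> conv = GraphConv(10, 2)
>>> conv.initialize(ctx=mx.cpu(0))
>>> conv.weight.data().shape     # (in_feats, out_feats)
(10, 2)
>>> conv.bias.data().shape       # (out_feats,)
(2,)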
Note
Zero-in-degree nodes will lead to invalid output values. This is because no message will be passed to those nodes and the aggregation function will be applied on empty input. A common practice to avoid this is to add a self-loop for each node in the graph if it is homogeneous, which can be achieved by:

>>> g = ... # a DGLGraph
>>> g = dgl.add_self_loop(g)
Calling add_self_loop will not work for some graphs, for example, heterogeneous graphs, since the edge type can not be decided for self-loop edges. Set allow_zero_in_degree to True in those cases to unblock the code and handle the zero-in-degree nodes manually. A common practice is to filter out the nodes with zero in-degree when using the output after the convolution, as in the sketch below.
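A minimal sketch of that filtering pattern, with a hypothetical graph in which node 0 has no incoming edges:

>>> import dgl
>>> import mxnet as mx
>>> import numpy as np
>>> from dgl.nn import GraphConv
>>> g = dgl.graph(([0, 1], [1, 2]))   # node 0 has in-degree 0
>>> conv = GraphConv(4, 2, allow_zero_in_degree=True)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, mx.nd.ones((3, 4)))
>>> keep = mx.nd.array(np.nonzero(g.in_degrees().asnumpy() > 0)[0])
>>> res_valid = mx.nd.take(res, keep)  # keep only rows with in-degree > 0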
Examples
>>> import dgl
>>> import mxnet as mx
>>> from mxnet import gluon
>>> import numpy as np
>>> from dgl.nn import GraphConv
>>> # Case 1: Homogeneous graph
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = dgl.add_self_loop(g)
>>> feat = mx.nd.ones((6, 10))
>>> conv = GraphConv(10, 2, norm='both', weight=True, bias=True)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> print(res)
[[1.0209361  0.22472616]
 [1.1240715  0.24742813]
 [1.0209361  0.22472616]
 [1.2924911  0.28450024]
 [1.3568745  0.29867214]
 [0.7948386  0.17495811]]
<NDArray 6x2 @cpu(0)>
>>> # allow_zero_in_degree example
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> conv = GraphConv(10, 2, norm='both', weight=True, bias=True, allow_zero_in_degree=True)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, feat)
>>> print(res)
[[1.0209361  0.22472616]
 [1.1240715  0.24742813]
 [1.0209361  0.22472616]
 [1.2924911  0.28450024]
 [1.3568745  0.29867214]
 [0.         0.        ]]
<NDArray 6x2 @cpu(0)>
>>> # Case 2: Unidirectional bipartite graph
>>> u = [0, 1, 0, 0, 1]
>>> v = [0, 1, 2, 3, 2]
>>> g = dgl.heterograph({('_N', '_E', '_N'):(u, v)})
>>> u_fea = mx.nd.random.randn(2, 5)
>>> v_fea = mx.nd.random.randn(4, 5)
>>> conv = GraphConv(5, 2, norm='both', weight=True, bias=True)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, (u_fea, v_fea))
>>> res
[[ 0.26967263  0.308129  ]
 [ 0.05143356 -0.11355402]
 [ 0.22705637  0.1375853 ]
 [ 0.26967263  0.308129  ]]
<NDArray 4x2 @cpu(0)>
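The activation argument accepts any callable applied to the updated node features; a minimal sketch using mx.nd.relu (toy graph and features assumed):

>>> # activation example
>>> g = dgl.add_self_loop(dgl.graph(([0, 1], [1, 0])))
>>> conv = GraphConv(10, 2, activation=mx.nd.relu)
>>> conv.initialize(ctx=mx.cpu(0))
>>> res = conv(g, mx.nd.ones((2, 10)))   # all entries are >= 0 after ReLU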
forward(graph, feat, weight=None)[source]

Compute graph convolution.
- Parameters
  - graph (DGLGraph) – The graph.
  - feat (mxnet.NDArray or pair of mxnet.NDArray) – If a single tensor is given, it represents the input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes. If a pair of tensors is given, the pair must contain two tensors of shape \((N_{in}, D_{in_{src}})\) and \((N_{out}, D_{in_{dst}})\). Note that in the special case of graph convolutional networks, if a pair of tensors is given, the latter element will not participate in computation.
  - weight (mxnet.NDArray, optional) – Optional external weight tensor (a usage sketch follows the shape note below).
- Returns
  - The output feature.
- Return type
  - mxnet.NDArray
- Raises
  - DGLError – If there are 0-in-degree nodes in the input graph, a DGLError will be raised since no message will be passed to those nodes, causing invalid output. The error can be ignored by setting the allow_zero_in_degree parameter to True.
Note
Input shape: \((N, *, \text{in_feats})\), where \(*\) means any number of additional dimensions and \(N\) is the number of nodes.
Output shape: \((N, *, \text{out_feats})\), where all but the last dimension are the same shape as the input.
Weight shape: \((\text{in_feats}, \text{out_feats})\).
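When the module is constructed with weight=False, an external weight of shape \((\text{in_feats}, \text{out_feats})\) can be passed to forward instead, e.g. to share one weight matrix across several layers. A minimal sketch (toy graph and shapes assumed):

>>> conv = GraphConv(5, 2, weight=False)   # no internal weight matrix
>>> conv.initialize(ctx=mx.cpu(0))         # still needed for the bias
>>> ext_w = mx.nd.random.randn(5, 2)       # shared external weight
>>> g = dgl.add_self_loop(dgl.graph(([0, 1], [1, 0])))
>>> res = conv(g, mx.nd.ones((2, 5)), weight=ext_w)
>>> res.shape
(2, 2)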