EdgeConv

class dgl.nn.pytorch.conv.EdgeConv(in_feat, out_feat, batch_norm=False, allow_zero_in_degree=False)[source]

Bases: Module

EdgeConv layer from Dynamic Graph CNN for Learning on Point Clouds

It can be described as follows:

\[h_i^{(l+1)} = \max_{j \in \mathcal{N}(i)} ( \Theta \cdot (h_j^{(l)} - h_i^{(l)}) + \Phi \cdot h_i^{(l)})\]

where \(\mathcal{N}(i)\) is the set of neighbors of \(i\), and \(\Theta\) and \(\Phi\) are linear layers.
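The update rule maps directly onto message passing primitives. Below is a minimal sketch (illustrative only, not the module's actual source), where theta and phi stand for the \(\Theta\) and \(\Phi\) layers above; since \(\Phi \cdot h_i^{(l)}\) does not depend on \(j\), it can be added after the maximum:

>>> import torch.nn as nn
>>> import dgl.function as fn
>>> theta, phi = nn.Linear(10, 2), nn.Linear(10, 2)
>>> def edgeconv_sketch(g, h):
...     with g.local_scope():
...         g.ndata['x'] = h
...         # theta(h_j - h_i) on each edge (j = source, i = destination)
...         g.apply_edges(fn.u_sub_v('x', 'x', 'diff'))
...         g.edata['m'] = theta(g.edata['diff'])
...         # max-aggregate the messages per destination, then add phi(h_i)
...         g.update_all(fn.copy_e('m', 'msg'), fn.max('msg', 'agg'))
...         return g.ndata['agg'] + phi(h)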

Note

The original formulation includes a ReLU inside the maximum operator. Because ReLU is monotone non-decreasing, this is equivalent to applying the maximum operator first and then the ReLU.
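A quick numerical check of that equivalence:

>>> import torch as th
>>> x = th.randn(5, 3)
>>> th.allclose(th.relu(x).max(0).values, th.relu(x.max(0).values))
True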

Parameters:
  • in_feat (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).

  • out_feat (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • batch_norm (bool) – Whether to include batch normalization on messages. Default: False.

  • allow_zero_in_degree (bool, optional) – If there are 0-in-degree nodes in the graph, the output for those nodes will be invalid since no message will be passed to them. This is harmful for some applications, causing silent performance regression. This module will raise a DGLError if it detects 0-in-degree nodes in the input graph. Setting it to True suppresses the check, leaving users to handle the issue themselves. Default: False.

Note

Zero-in-degree nodes will lead to invalid output values because no messages will be passed to those nodes and the aggregation function will be applied to empty input. A common practice to avoid this is to add a self-loop for each node in the graph if it is homogeneous, which can be achieved by:

>>> g = ... # a DGLGraph
>>> g = dgl.add_self_loop(g)

Calling add_self_loop will not work for some graphs, for example, heterogeneous graphs, since the edge type cannot be decided for self-loop edges. Set allow_zero_in_degree to True for those cases to unblock the code and handle zero-in-degree nodes manually. A common practice is to filter out the zero-in-degree nodes when using the output of the convolution.
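A sketch of that manual handling (assuming g and feat are defined as in the examples below): run the layer with the check suppressed, then keep only the rows of destination nodes that actually received messages:

>>> conv = EdgeConv(10, 2, allow_zero_in_degree=True)
>>> res = conv(g, feat)
>>> res = res[g.in_degrees() > 0]  # drop invalid zero-in-degree rows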

Examples

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import EdgeConv
>>> # Case 1: Homogeneous graph
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = dgl.add_self_loop(g)
>>> feat = th.ones(6, 10)
>>> conv = EdgeConv(10, 2)
>>> res = conv(g, feat)
>>> res
tensor([[-0.2347,  0.5849],
        [-0.2347,  0.5849],
        [-0.2347,  0.5849],
        [-0.2347,  0.5849],
        [-0.2347,  0.5849],
        [-0.2347,  0.5849]], grad_fn=<CopyReduceBackward>)
>>> # Case 2: Unidirectional bipartite graph
>>> u = [0, 1, 0, 0, 1]
>>> v = [0, 1, 2, 3, 2]
>>> g = dgl.heterograph({('_N', '_E', '_N'):(u, v)})
>>> u_fea = th.rand(2, 5)
>>> v_fea = th.rand(4, 5)
>>> conv = EdgeConv(5, 2)
>>> res = conv(g, (u_fea, v_fea))
>>> res
tensor([[ 1.6375,  0.2085],
        [-1.1925, -1.2852],
        [ 0.2101,  1.3466],
        [ 0.2342, -0.9868]], grad_fn=<CopyReduceBackward>)
forward(g, feat)[source]

Forward computation.

Parameters:
  • g (DGLGraph) – The graph.

  • feat (Tensor or pair of tensors) – \((N, D)\) where \(N\) is the number of nodes and \(D\) is the number of feature dimensions. If a pair of tensors is given, the graph must be a uni-bipartite graph with only one edge type, and the two tensors must have the same dimensionality on all except the first axis.

Returns:
  New node features.

Return type:
  torch.Tensor

Raises:
  DGLError – If there are 0-in-degree nodes in the input graph, a DGLError is raised, since no message will be passed to those nodes and the output would be invalid. The check can be skipped by setting allow_zero_in_degree to True.
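In practice a call can be guarded against this error; a sketch, reusing g, feat, and conv from the examples above (add_self_loop is only valid for homogeneous graphs):

>>> try:
...     res = conv(g, feat)
... except dgl.DGLError:
...     g = dgl.add_self_loop(g)
...     res = conv(g, feat)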