RelGraphConv
class dgl.nn.mxnet.conv.RelGraphConv(in_feat, out_feat, num_rels, regularizer='basis', num_bases=None, bias=True, activation=None, self_loop=True, low_mem=False, dropout=0.0, layer_norm=False)[source]
Bases: mxnet.gluon.block.Block
Relational graph convolution layer from Modeling Relational Data with Graph Convolutional Networks.
It can be described as follows:
\[h_i^{(l+1)} = \sigma(\sum_{r\in\mathcal{R}} \sum_{j\in\mathcal{N}^r(i)}\frac{1}{c_{i,r}}W_r^{(l)}h_j^{(l)}+W_0^{(l)}h_i^{(l)})\]where \(\mathcal{N}^r(i)\) is the neighbor set of node \(i\) w.r.t. relation \(r\). \(c_{i,r}\) is the normalizer equal to \(|\mathcal{N}^r(i)|\). \(\sigma\) is an activation function. \(W_0\) is the self-loop weight.
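For intuition, the propagation rule can be sketched directly in NumPy. This is an illustrative, assumed implementation (the edge list, weight names, and the choice of ReLU for \(\sigma\) are all placeholders), not DGL's vectorized message passing:

import numpy as np

num_nodes, num_rels, d = 4, 2, 3
h = np.random.randn(num_nodes, d)        # h^{(l)}, row-vector convention
W = np.random.randn(num_rels, d, d)      # one W_r^{(l)} per relation
W0 = np.random.randn(d, d)               # self-loop weight W_0^{(l)}
edges = [(0, 1, 0), (2, 1, 0), (3, 1, 1), (1, 0, 1)]  # (src, dst, rel)

h_new = h @ W0                           # self-loop term W_0 h_i
for i in range(num_nodes):
    for r in range(num_rels):
        nbrs = [u for (u, v, rel) in edges if v == i and rel == r]
        if nbrs:                         # c_{i,r} = |N^r(i)|
            h_new[i] += sum(h[u] @ W[r] for u in nbrs) / len(nbrs)
h_next = np.maximum(h_new, 0)            # sigma = ReLU (assumed)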
The basis regularization decomposes \(W_r\) by:
\[W_r^{(l)} = \sum_{b=1}^B a_{rb}^{(l)}V_b^{(l)}\]where \(B\) is the number of bases and the basis matrices \(V_b^{(l)}\) are linearly combined with relation-specific coefficients \(a_{rb}^{(l)}\).
The block-diagonal-decomposition regularization decomposes \(W_r\) into \(B\) block-diagonal matrices. We also refer to \(B\) as the number of bases.
The block regularization decomposes \(W_r\) by:
\[W_r^{(l)} = \oplus_{b=1}^B Q_{rb}^{(l)}\]where \(B\) is the number of bases and \(Q_{rb}^{(l)}\) are block bases with shape \(\mathbb{R}^{(d^{(l+1)}/B) \times (d^{(l)}/B)}\).
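Both regularizers can likewise be sketched in NumPy. This is a minimal, assumed illustration of how the relation weights are assembled (shapes and names are placeholders; the layer's real parameter layout is internal to DGL):

import numpy as np

num_rels, B, d_in, d_out = 3, 2, 4, 4

# Basis decomposition: W_r = sum_b a_{rb} V_b
V = np.random.randn(B, d_in, d_out)       # shared bases V_b
a = np.random.randn(num_rels, B)          # coefficients a_{rb}
W_basis = np.einsum('rb,bio->rio', a, V)  # (num_rels, d_in, d_out)

# Block-diagonal decomposition: W_r = direct sum of blocks Q_{r1}..Q_{rB}
Q = np.random.randn(num_rels, B, d_in // B, d_out // B)
W_bdd = np.zeros((num_rels, d_in, d_out))
for r in range(num_rels):
    for b in range(B):
        rows = slice(b * (d_in // B), (b + 1) * (d_in // B))
        cols = slice(b * (d_out // B), (b + 1) * (d_out // B))
        W_bdd[r, rows, cols] = Q[r, b]    # off-block entries stay zero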
- Parameters
in_feat (int) – Input feature size; i.e., the number of dimensions of \(h_j^{(l)}\).
out_feat (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).
num_rels (int) – Number of relations.
regularizer (str) – Which weight regularizer to use, "basis" or "bdd". "basis" is short for basis-decomposition; "bdd" is short for block-diagonal-decomposition.
num_bases (int, optional) – Number of bases. If None, use the number of relations. Default: None.
bias (bool, optional) – True if bias is added. Default: True.
activation (callable, optional) – Activation function. Default: None.
self_loop (bool, optional) – True to include the self-loop message. Default: True.
low_mem (bool, optional) – True to use the low-memory implementation of the relation message passing function. This option trades speed for memory consumption and will slow down the forward/backward passes. Turn it on when you encounter an OOM problem during training or evaluation. Default: False.
dropout (float, optional) – Dropout rate. Default: 0.0.
layer_norm (bool, optional) – True to add layer normalization. Default: False.
Examples
>>> import dgl
>>> import numpy as np
>>> import mxnet as mx
>>> from mxnet import gluon
>>> from dgl.nn import RelGraphConv
>>>
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = mx.nd.ones((6, 10))
>>> conv = RelGraphConv(10, 2, 3, regularizer='basis', num_bases=2)
>>> conv.initialize(ctx=mx.cpu(0))
>>> etype = mx.nd.array(np.array([0,1,2,0,1,2]).astype(np.int64))
>>> res = conv(g, feat, etype)
>>> res
[[ 0.561324    0.33745846]
 [ 0.61585337  0.09992217]
 [ 0.561324    0.33745846]
 [-0.01557937  0.01227859]
 [ 0.61585337  0.09992217]
 [ 0.056508   -0.00307822]]
<NDArray 6x2 @cpu(0)>
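The "bdd" regularizer follows the same pattern. A hedged sketch reusing g, feat, and etype from above, assuming in_feat and out_feat must both be divisible by num_bases (10/2 and 2/2 here):

>>> conv_bdd = RelGraphConv(10, 2, 3, regularizer='bdd', num_bases=2)
>>> conv_bdd.initialize(ctx=mx.cpu(0))
>>> res = conv_bdd(g, feat, etype)
>>> res.shape
(6, 2)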
forward(g, feat, etypes, norm=None)[source]
Forward computation.
- Parameters
g (DGLGraph) – The graph.
feat (mx.ndarray.NDArray) – Input node features. Could be either
- a \((|V|, D)\) dense tensor, or
- a \((|V|,)\) int64 vector representing the categorical values of each node, in which case the input feature is treated as a one-hot encoding.
etypes (mx.ndarray.NDArray) – Edge type tensor. Shape: \((|E|,)\).
norm (mx.ndarray.NDArray) – Optional edge normalizer tensor. Shape: \((|E|, 1)\).
- Returns
New node features.
- Return type
mx.ndarray.NDArray
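A usage sketch for the categorical input variant and the norm argument, reusing the objects from the Examples above (the normalizer values here are arbitrary placeholders, not the ones DGL would compute):

>>> # categorical input: one int64 node ID per node, treated as one-hot
>>> ids = mx.nd.array(np.arange(6).astype(np.int64))
>>> norm = mx.nd.full((6, 1), 0.5)  # one value per edge, shape (|E|, 1)
>>> res = conv(g, ids, etype, norm)
>>> res.shape
(6, 2)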