ChebConv

class dgl.nn.tensorflow.conv.ChebConv(*args, **kwargs)

Bases: tensorflow.python.keras.engine.base_layer.Layer
Chebyshev Spectral Graph Convolution layer from "Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering" (Defferrard et al., NeurIPS 2016).
\[
\begin{aligned}
h_i^{l+1} &= \sum_{k=0}^{K-1} W^{k, l} z_i^{k, l}\\
Z^{0, l} &= H^{l}\\
Z^{1, l} &= \tilde{L} \cdot H^{l}\\
Z^{k, l} &= 2 \cdot \tilde{L} \cdot Z^{k-1, l} - Z^{k-2, l}\\
\tilde{L} &= 2\left(I - \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}\right)/\lambda_{max} - I
\end{aligned}
\]

where \(\tilde{A} = A + I\), \(\tilde{D}\) is the diagonal degree matrix of \(\tilde{A}\), and \(W^{k, l}\) is a learnable weight.
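To make the recurrence concrete, here is a minimal NumPy sketch of the propagation rule above, written against a dense symmetric adjacency matrix. It illustrates the math only; the helper name cheb_propagate and its signature are hypothetical, not part of DGL's API.

    import numpy as np

    def cheb_propagate(A, H, W_list):
        """Hypothetical sketch of the Chebyshev propagation rule.

        A: (N, N) dense symmetric adjacency matrix.
        H: (N, F_in) node features.
        W_list: K weight matrices, each of shape (F_in, F_out).
        """
        N = A.shape[0]
        I = np.eye(N)
        A_tilde = A + I                                 # \tilde{A} = A + I
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_tilde.sum(axis=1)))
        L = I - D_inv_sqrt @ A_tilde @ D_inv_sqrt       # normalized Laplacian
        lam_max = np.linalg.eigvalsh(L).max()           # \lambda_{max}
        L_tilde = 2.0 * L / lam_max - I                 # rescaled Laplacian

        Z_prev = H                                      # Z^{0} = H
        out = Z_prev @ W_list[0]
        if len(W_list) > 1:
            Z = L_tilde @ H                             # Z^{1} = \tilde{L} H
            out = out + Z @ W_list[1]
        for k in range(2, len(W_list)):
            Z_prev, Z = Z, 2.0 * L_tilde @ Z - Z_prev   # Chebyshev recurrence
            out = out + Z @ W_list[k]
        return out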
- Parameters
  - in_feats (int) – Dimension of input features; i.e., the number of dimensions of \(h_i^{(l)}\).
  - out_feats (int) – Dimension of output features \(h_i^{(l+1)}\).
  - k (int) – Chebyshev filter size \(K\).
  - activation (function, optional) – Activation function. Default: ReLU.
  - bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
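As a quick illustration of these constructor arguments, a sketch follows; the keyword names are assumed to match the parameter list above.

>>> import tensorflow as tf
>>> from dgl.nn import ChebConv
>>> # keyword names assumed to mirror the documented parameters
>>> conv = ChebConv(in_feats=10, out_feats=2, k=3,
...                 activation=tf.nn.relu, bias=False)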
Example
>>> import dgl
>>> import numpy as np
>>> import tensorflow as tf
>>> from dgl.nn import ChebConv
>>>
>>> with tf.device("CPU:0"):
...     g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
...     feat = tf.ones((6, 10))
...     conv = ChebConv(10, 2, 2)
...     res = conv(g, feat)
...     res
<tf.Tensor: shape=(6, 2), dtype=float32, numpy=
array([[ 0.6163, -0.1809],
       [ 0.6163, -0.1809],
       [ 0.6163, -0.1809],
       [ 0.9698, -1.5053],
       [ 0.3664,  0.7556],
       [-0.2370,  3.0164]], dtype=float32)>
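Building on the example, below is a minimal sketch of stacking two ChebConv layers inside a tf.keras.Model. The class name ChebNet and the hidden size are illustrative assumptions; only the conv(graph, feat) call pattern shown above is used.

    import dgl
    import tensorflow as tf
    from dgl.nn import ChebConv

    class ChebNet(tf.keras.Model):
        """Hypothetical two-layer Chebyshev network."""

        def __init__(self, in_feats, hidden_feats, out_feats, k):
            super().__init__()
            self.conv1 = ChebConv(in_feats, hidden_feats, k)
            self.conv2 = ChebConv(hidden_feats, out_feats, k)

        def call(self, g, feat):
            h = self.conv1(g, feat)   # documented call pattern: conv(graph, feat)
            return self.conv2(g, h)

    with tf.device("CPU:0"):
        g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3]))
        feat = tf.ones((6, 10))
        net = ChebNet(10, 16, 2, k=2)
        out = net(g, feat)            # shape (6, 2)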