SIGNDiffusion

class dgl.transforms.SIGNDiffusion(k, in_feat_name='feat', out_feat_name='out_feat', eweight_name=None, diffuse_op='raw', alpha=0.2)[source]

Bases: dgl.transforms.module.BaseTransform

The diffusion operator from SIGN: Scalable Inception Graph Neural Networks

It performs node feature diffusion with \(TX, \cdots, T^{k}X\), where \(T\) is a diffusion matrix and \(X\) is the matrix of input node features.

Specifically, this module provides four options for \(T\):

  • raw: the raw adjacency matrix \(A\)

  • rw: the random-walk (row-normalized) adjacency matrix \(D^{-1}A\), where \(D\) is the degree matrix

  • gcn: the symmetrically normalized adjacency matrix used by GCN, \(D^{-1/2}AD^{-1/2}\)

  • ppr: the approximate personalized PageRank used by APPNP, computed iteratively as

\[\begin{aligned}
H^{0} &= X\\
H^{l+1} &= (1-\alpha)\left(D^{-1/2}AD^{-1/2} H^{l}\right) + \alpha X
\end{aligned}\]

This module only works for homogeneous graphs.
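
To make these operators concrete, the following is a minimal dense-tensor sketch in plain PyTorch of how \(T\) and the diffused features could be formed. It is an illustration of the math above, not the module's implementation; A, X, and alpha are toy values.

>>> import torch
>>> A = torch.tensor([[0., 1., 1.],
...                   [1., 0., 0.],
...                   [1., 0., 0.]])           # toy adjacency matrix ('raw' uses A directly)
>>> X = torch.randn(3, 4)                      # toy node features
>>> deg = A.sum(dim=1)                         # node degrees
>>> T_rw = torch.diag(1.0 / deg) @ A           # 'rw': D^{-1} A
>>> D_inv_sqrt = torch.diag(deg.pow(-0.5))     # D^{-1/2}
>>> T_gcn = D_inv_sqrt @ A @ D_inv_sqrt        # 'gcn': D^{-1/2} A D^{-1/2}
>>> out_1 = T_gcn @ X                          # T X
>>> out_2 = T_gcn @ out_1                      # T^2 X
>>> alpha, H = 0.2, X                          # 'ppr': the recursion above, run k times
>>> for _ in range(2):
...     H = (1 - alpha) * (T_gcn @ H) + alpha * X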

Parameters
  • k (int) – The maximum number of times to diffuse the node features.

  • in_feat_name (str, optional) – g.ndata[in_feat_name] should store the input node features. Default: ‘feat’

  • out_feat_name (str, optional) – For each \(i = 1, \cdots, k\), g.ndata[f'{out_feat_name}_{i}'] will store the result of diffusing the input node features \(i\) times. Default: ‘out_feat’

  • eweight_name (str, optional) – Name to retrieve edge weights from g.edata. Default: None, treating the graph as unweighted.

  • diffuse_op (str, optional) – The diffusion operator to use, which can be ‘raw’, ‘rw’, ‘gcn’, or ‘ppr’. Default: ‘raw’

  • alpha (float, optional) – Restart probability used when diffuse_op is 'ppr'; it commonly lies in \([0.05, 0.2]\). Default: 0.2
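
For instance, given the signature above, a transform that applies three steps of the personalized PageRank diffusion could be constructed as below; the parameter values are illustrative only, not recommendations.

>>> from dgl import SIGNDiffusion
>>> ppr_transform = SIGNDiffusion(k=3, diffuse_op='ppr', alpha=0.1)  # illustrative values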

Example

>>> import dgl
>>> import torch
>>> from dgl import SIGNDiffusion
>>> transform = SIGNDiffusion(k=2, eweight_name='w')
>>> num_nodes = 5
>>> num_edges = 20
>>> g = dgl.rand_graph(num_nodes, num_edges)
>>> g.ndata['feat'] = torch.randn(num_nodes, 10)
>>> g.edata['w'] = torch.randn(num_edges)
>>> transform(g)
Graph(num_nodes=5, num_edges=20,
      ndata_schemes={'feat': Scheme(shape=(10,), dtype=torch.float32),
                     'out_feat_1': Scheme(shape=(10,), dtype=torch.float32),
                     'out_feat_2': Scheme(shape=(10,), dtype=torch.float32)}
      edata_schemes={'w': Scheme(shape=(), dtype=torch.float32)})
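
The \(k\) diffused features are stored under out_feat_1, ..., out_feat_k (here \(k = 2\)). As a sketch of one assumed downstream use, they can be gathered and concatenated with the raw features before feeding a SIGN-style model; note the SIGN paper itself first passes each operator's output through its own linear layer.

>>> # assumed downstream usage, not part of the transform itself
>>> feats = [g.ndata['feat']] + [g.ndata[f'out_feat_{i}'] for i in range(1, 3)]
>>> sign_input = torch.cat(feats, dim=1)    # raw + 2 diffused feature blocks
>>> sign_input.shape
torch.Size([5, 30])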