DenseSAGEConv

class dgl.nn.pytorch.conv.DenseSAGEConv(in_feats, out_feats, feat_drop=0.0, bias=True, norm=None, activation=None)[source]

Bases: torch.nn.modules.module.Module

GraphSAGE layer from Inductive Representation Learning on Large Graphs

We recommend using this module when applying GraphSAGE to dense graphs.

Note that DenseSAGEConv only supports the gcn aggregator.
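With the gcn aggregator, the layer computes (a summary sketch based on the GraphSAGE paper's description of the gcn aggregator; see forward() for the exact behavior):

\(h_i^{(l+1)} = \sigma\left(W^{(l)} \cdot \frac{h_i^{(l)} + \sum_{j \in \mathcal{N}(i)} h_j^{(l)}}{|\mathcal{N}(i)| + 1}\right)\)

where \(\mathcal{N}(i)\) is the set of neighbors of node \(i\) and \(\sigma\) is the optional activation.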

Parameters
  • in_feats (int) – Input feature size; i.e., the number of dimensions of \(h_i^{(l)}\).

  • out_feats (int) – Output feature size; i.e., the number of dimensions of \(h_i^{(l+1)}\).

  • feat_drop (float, optional) – Dropout rate on features. Default: 0.

  • bias (bool) – If True, adds a learnable bias to the output. Default: True.

  • norm (callable activation function/layer or None, optional) – If not None, applies normalization to the updated node features. Default: None.

  • activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.

Example

>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import DenseSAGEConv
>>>
>>> feat = th.ones(6, 10)
>>> adj = th.tensor([[0., 0., 1., 0., 0., 0.],
...                  [1., 0., 0., 0., 0., 0.],
...                  [0., 1., 0., 0., 0., 0.],
...                  [0., 0., 1., 0., 0., 1.],
...                  [0., 0., 0., 1., 0., 0.],
...                  [0., 0., 0., 0., 0., 0.]])
>>> conv = DenseSAGEConv(10, 2)
>>> res = conv(adj, feat)
>>> res
tensor([[1.0401, 2.1008],
        [1.0401, 2.1008],
        [1.0401, 2.1008],
        [1.0401, 2.1008],
        [1.0401, 2.1008],
        [1.0401, 2.1008]], grad_fn=<AddmmBackward>)
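
A variant of the example above with the optional arguments set. The dropout rate, activation, and normalization callable here are illustrative choices rather than part of the original example; adj and feat are reused from above.

>>> import torch.nn.functional as F
>>>
>>> conv = DenseSAGEConv(10, 2, feat_drop=0.5, activation=F.relu,
...                      norm=lambda x: F.normalize(x, p=2, dim=-1))
>>> res = conv(adj, feat)
>>> res.shape
torch.Size([6, 2])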

See also

SAGEConv

forward(adj, feat)[source]

Compute (Dense) Graph SAGE layer.

Parameters
  • adj (torch.Tensor) – The adjacency matrix of the graph to apply SAGE convolution to. When applied to a unidirectional bipartite graph, adj should be of shape \((N_{out}, N_{in})\); when applied to a homogeneous graph, adj should be of shape \((N, N)\). In both cases, a row represents a destination node and a column represents a source node.

  • feat (torch.Tensor or a pair of torch.Tensor) – If a torch.Tensor is given, it is the input feature of shape \((N, D_{in})\), where \(D_{in}\) is the size of the input feature and \(N\) is the number of nodes. If a pair of torch.Tensor is given, the pair must contain two tensors of shape \((N_{in}, D_{in})\) and \((N_{out}, D_{in})\).

Returns

The output feature of shape \((N, D_{out})\), where \(D_{out}\) is the size of the output feature.

Return type

torch.Tensor
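
For the unidirectional bipartite case, a minimal sketch (the sizes and adjacency values below are illustrative; th and DenseSAGEConv are imported as in the example above):

>>> # 4 source nodes and 3 destination nodes: adj has shape (N_out, N_in) = (3, 4)
>>> adj = th.tensor([[1., 0., 1., 0.],
...                  [0., 1., 0., 0.],
...                  [0., 0., 1., 1.]])
>>> feat_src = th.ones(4, 10)   # shape (N_in, D_in)
>>> feat_dst = th.ones(3, 10)   # shape (N_out, D_in)
>>> conv = DenseSAGEConv(10, 2)
>>> res = conv(adj, (feat_src, feat_dst))
>>> res.shape
torch.Size([3, 2])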

reset_parameters()[source]

Reinitialize learnable parameters.

Notes

The linear weights \(W^{(l)}\) are initialized using Glorot uniform initialization.
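
A short usage sketch (the layer and its sizes are illustrative; reset_parameters re-initializes an already constructed layer in place):

>>> conv = DenseSAGEConv(10, 2)
>>> conv.reset_parameters()   # re-draws the linear weights with Glorot uniform initialization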