PNAConv
- class dgl.nn.pytorch.conv.PNAConv(in_size, out_size, aggregators, scalers, delta, dropout=0.0, num_towers=1, edge_feat_size=0, residual=True)[source]
Bases: Module
Principal Neighbourhood Aggregation Layer from Principal Neighbourhood Aggregation for Graph Nets
A PNA layer is composed of multiple PNA towers. Each tower takes as input a split of the input features, and computes the message passing as below.
\[h_i^{l+1} = U(h_i^l, \oplus_{(i,j)\in E}M(h_i^l, e_{i,j}, h_j^l))\]
where \(h_i\) and \(e_{i,j}\) are node features and edge features, respectively. \(M\) and \(U\) are MLPs that take the concatenation of their inputs to compute output features. \(\oplus\) represents the combination of various aggregators and scalers: aggregators aggregate messages from neighbours, scalers scale the aggregated messages in different ways, and \(\oplus\) concatenates the output features of each aggregator-scaler combination.
The outputs of the towers are concatenated and fed into a linear mixing layer to produce the final output.
- Parameters:
in_size (int) – Input feature size; i.e. the size of \(h_i^l\).
out_size (int) – Output feature size; i.e. the size of \(h_i^{l+1}\).
aggregators (list of str) – List of aggregation function names (each aggregator specifies a way to aggregate messages from neighbours), selected from:
- mean: the mean of neighbour messages
- max: the maximum of neighbour messages
- min: the minimum of neighbour messages
- std: the standard deviation of neighbour messages
- var: the variance of neighbour messages
- sum: the sum of neighbour messages
- moment3, moment4, moment5: the normalized moment aggregation \((E[(X-E[X])^n])^{1/n}\) with \(n = 3, 4, 5\)
scalers (list of str) – List of scaler function names, selected from:
- identity: no scaling
- amplification: multiply the aggregated message by \(\log(d+1)/\delta\), where \(d\) is the degree of the node
- attenuation: multiply the aggregated message by \(\delta/\log(d+1)\)
delta (float) – The degree-related normalization factor used by the scalers, computed over the training set: \(\delta = E[\log(d+1)]\), where \(d\) is the degree of each node in the training set.
dropout (float, optional) – The dropout ratio. Default: 0.0.
num_towers (int, optional) – The number of towers used. Default: 1. Note that in_size and out_size must both be divisible by num_towers.
edge_feat_size (int, optional) – The edge feature size. Default: 0.
residual (bool, optional) – Whether to add a residual connection to the output. Default: True. If in_size and out_size are not equal, this flag is forcibly set to False.
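As an illustration of how delta interacts with the degree scalers, here is a plain-Python sketch; the degree list and message value are made up for illustration (with DGL, degrees could be collected via g.in_degrees()):

```python
import math

# Hypothetical in-degrees of all nodes across the training graphs.
train_degrees = [1, 2, 2, 1, 0, 1]

# delta = E[log(d + 1)] over the training nodes.
delta = sum(math.log(d + 1) for d in train_degrees) / len(train_degrees)

# The scalers then rescale an aggregated message based on the degree d
# of the node being updated (agg is a scalar message for illustration).
d, agg = 3, 1.0
amplified = agg * math.log(d + 1) / delta   # boosts high-degree nodes
attenuated = agg * delta / math.log(d + 1)  # damps high-degree nodes
```

Note that amplification and attenuation are reciprocal scalings, so applying both recovers the original message.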
Example
>>> import dgl
>>> import torch as th
>>> from dgl.nn import PNAConv
>>>
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = th.ones(6, 10)
>>> conv = PNAConv(10, 10, ['mean', 'max', 'sum'], ['identity', 'amplification'], 2.5)
>>> ret = conv(g, feat)
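For intuition, the moment3/moment4/moment5 aggregators can be sketched in plain Python for a single node's scalar neighbour messages. normalized_moment is a hypothetical helper, not DGL's exact implementation (which operates on tensors), but it follows the normalized-moment formula above:

```python
import math

def normalized_moment(msgs, n):
    # Normalized n-th central moment: (E[(X - E[X])^n])^(1/n).
    # A sign-preserving root keeps the result real when the moment is negative.
    mean = sum(msgs) / len(msgs)
    moment = sum((x - mean) ** n for x in msgs) / len(msgs)
    return math.copysign(abs(moment) ** (1.0 / n), moment)

# With n = 2 this recovers the standard deviation of the messages.
std = normalized_moment([0.0, 3.0], 2)  # 1.5
```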
- forward(graph, node_feat, edge_feat=None)[source]
Description
Compute PNA layer.
- Parameters:
graph (DGLGraph) – The graph.
node_feat (torch.Tensor) – The input node feature of shape \((N, h_n)\), where \(N\) is the number of nodes and \(h_n\) must be the same as in_size.
edge_feat (torch.Tensor, optional) – The edge feature of shape \((M, h_e)\), where \(M\) is the number of edges and \(h_e\) must be the same as edge_feat_size.
- Returns:
The output node feature of shape \((N, h_n')\), where \(h_n'\) is the same as out_size.
- Return type:
torch.Tensor