dgl.readout_nodes

dgl.readout_nodes(graph, feat, weight=None, *, op='sum', ntype=None)

Generate a graph-level representation by aggregating node features feat.

The function is commonly used as a readout function on a batch of graphs to generate graph-level representations. Thus, the result tensor shape depends on the batch size of the input graph. Given a graph of batch size \(B\) and a feature size of \(D\), the result shape will be \((B, D)\), with each row being the aggregated node features of one graph.
- Parameters
graph (DGLGraph) – Input graph.
feat (str) – Node feature name.
weight (str, optional) – Node weight name. None means aggregating without weights. Otherwise, multiply each node feature by the node feature weight before aggregation. The weight feature shape must be compatible with an element-wise multiplication with the feature tensor.
op (str, optional) – Readout operator. Can be 'sum', 'max', 'min', or 'mean'.
ntype (str, optional) – Node type. Can be omitted if there is only one node type in the graph.
- Returns
Result tensor.
- Return type
Tensor
Examples
>>> import dgl
>>> import torch as th
Create two DGLGraph objects and initialize their node features.

>>> g1 = dgl.graph(([0, 1], [1, 0]))              # Graph 1
>>> g1.ndata['h'] = th.tensor([1., 2.])
>>> g2 = dgl.graph(([0, 1], [1, 2]))              # Graph 2
>>> g2.ndata['h'] = th.tensor([1., 2., 3.])
Sum over one graph:
>>> dgl.readout_nodes(g1, 'h')
tensor([3.])  # 1 + 2
Sum over a batched graph:
>>> bg = dgl.batch([g1, g2])
>>> dgl.readout_nodes(bg, 'h')
tensor([3., 6.])  # [1 + 2, 1 + 2 + 3]
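To make the \((B, D)\) result shape concrete, here is an additional sketch (not part of the original example set) that attaches a hypothetical 2-dimensional node feature 'x' to the same batched graph; the values are simply the per-graph sums, and the exact print formatting may vary with the PyTorch version:

>>> bg.ndata['x'] = th.tensor([[1., 2.], [3., 4.], [1., 1.], [2., 2.], [3., 3.]])
>>> dgl.readout_nodes(bg, 'x')  # result has shape (B, D) = (2, 2)
tensor([[4., 6.],
        [6., 6.]])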
Weighted sum:
>>> bg.ndata['w'] = th.tensor([.1, .2, .1, .5, .2])
>>> dgl.readout_nodes(bg, 'h', 'w')
tensor([.5, 1.7])
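A mean readout works the same way through the op argument documented above. This is an illustrative sketch, not from the original page; the float formatting depends on the PyTorch version:

>>> dgl.readout_nodes(bg, 'h', op='mean')
tensor([1.5000, 2.0000])  # [(1 + 2) / 2, (1 + 2 + 3) / 3]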
Readout by max:
>>> dgl.readout_nodes(bg, 'h', op='max')
tensor([2., 3.])
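For a graph with more than one node type, the ntype argument selects which nodes to aggregate. The following is a minimal sketch with a hypothetical heterogeneous graph (node types 'user' and 'game'), not taken from the original page:

>>> hg = dgl.heterograph({
...     ('user', 'follows', 'user'): (th.tensor([0, 1]), th.tensor([1, 2])),
...     ('user', 'plays', 'game'): (th.tensor([0, 2]), th.tensor([0, 1]))})
>>> hg.nodes['user'].data['h'] = th.tensor([1., 2., 3.])
>>> dgl.readout_nodes(hg, 'h', ntype='user')
tensor([6.])  # 1 + 2 + 3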