dgl.adj_product_graph

dgl.adj_product_graph(A, B, weight_name, etype='_E')

Create a weighted graph whose adjacency matrix is the product of the adjacency matrices of the given two graphs.
Namely, given two weighted graphs A and B, whose adjacency matrices have rows representing source nodes and columns representing destination nodes, this function returns a new graph whose weighted adjacency matrix is \(\mathrm{adj}(A) \times \mathrm{adj}(B)\).

The two graphs must be simple graphs and must have only one edge type. Moreover, the number of nodes of the destination node type of A must be the same as the number of nodes of the source node type of B.

The source node type of the returned graph will be the same as the source node type of graph A. The destination node type of the returned graph will be the same as the destination node type of graph B. If the two node types are the same, the returned graph will be homogeneous; otherwise, it will be a bipartite graph.

Unlike scipy, if an edge in the result graph has zero weight, it will not be removed from the graph.
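To make the matrix-product semantics concrete, here is a small illustrative sketch (not part of the API) that multiplies the dense weighted adjacency matrices of two toy graphs with plain PyTorch; dgl.adj_product_graph computes the same product while keeping the result as a sparse graph and, as noted above, retaining any zero-weight edges.

>>> import torch
>>> # Dense weighted adjacency of a graph with 2 source and 3 destination nodes.
>>> adj_A = torch.tensor([[0.5, 0.0, 1.0],
...                       [0.0, 2.0, 0.0]])
>>> # Dense weighted adjacency of a graph with 3 source and 2 destination nodes.
>>> adj_B = torch.tensor([[1.0, 0.0],
...                       [0.0, 3.0],
...                       [2.0, 0.0]])
>>> adj_A @ adj_B  # weighted adjacency of the product graph
tensor([[2.5000, 0.0000],
        [0.0000, 6.0000]])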
Notes

This function works on both CPU and GPU. For GPU, the number of nodes and edges must be less than the maximum of int32 (i.e. 2 ** 31 - 1) due to a restriction of cuSPARSE.

The edge weights returned by this function are differentiable w.r.t. the input edge weights.

If the graph format is restricted, both graphs must have CSR available.
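For example, if you restrict a graph's allowed sparse formats with DGLGraph.formats(), keep 'csr' in the allowed set so this function can use it (a minimal sketch):

>>> import dgl
>>> g = dgl.graph(([0, 1], [1, 2]))
>>> # Keep CSR among the allowed formats so adj_product_graph can use it.
>>> g = g.formats(['csr', 'coo'])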
Parameters

- A (DGLGraph) – The graph as the left operand.
- B (DGLGraph) – The graph as the right operand.
- weight_name (str) – The edge feature name on both A and B where the edge weights are stored. The weights must be scalar.
- etype (str, optional) – The edge type of the returned graph.

Returns

The new graph. The edge weight of the returned graph will have the same feature name as weight_name.

Return type

DGLGraph
Examples
The following shows weighted adjacency matrix multiplication between two bipartite graphs. You can also perform this between two homogeneous graphs, or one homogeneous graph and one bipartite graph, as long as the numbers of nodes of the same type match.
>>> import dgl
>>> import torch
>>> A = dgl.heterograph({
...     ('A', 'AB', 'B'): ([2, 2, 0, 2, 0, 1], [2, 1, 0, 0, 2, 2])},
...     num_nodes_dict={'A': 3, 'B': 4})
>>> B = dgl.heterograph({
...     ('B', 'BA', 'A'): ([0, 3, 2, 1, 3, 3], [1, 2, 0, 2, 1, 0])},
...     num_nodes_dict={'A': 3, 'B': 4})
If your graph is a multigraph, you will need to call dgl.to_simple() to convert it into a simple graph first.

>>> A = dgl.to_simple(A)
>>> B = dgl.to_simple(B)
Initialize learnable edge weights.
>>> A.edata['w'] = torch.randn(6).requires_grad_()
>>> B.edata['w'] = torch.randn(6).requires_grad_()
Take the product.
>>> C = dgl.adj_product_graph(A, B, 'w')
>>> C.edges()
(tensor([0, 0, 1, 2, 2, 2]), tensor([0, 1, 0, 0, 2, 1]))
>>> C.edata['w']
tensor([0.6906, 0.2002, 0.0591, 0.3672, 0.1066, 0.1328],
       grad_fn=<CSRMMBackward>)
Note that this function is differentiable:
>>> C.edata['w'].sum().backward()
>>> A.edata['w'].grad
tensor([0.7153, 0.2775, 0.7141, 0.7141, 0.7153, 0.7153])

>>> B.edata['w'].grad
tensor([0.4664, 0.0000, 1.5614, 0.3840, 0.0000, 0.0000])
If the source node type of the left operand is the same as the destination node type of the right operand, this function returns a homogeneous graph:
>>> C.ntypes
['A']
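The same applies to two homogeneous graphs over the same node set; a minimal sketch (node IDs and weights chosen arbitrarily for illustration):

>>> A = dgl.graph(([0, 1, 2], [1, 2, 0]), num_nodes=3)
>>> B = dgl.graph(([1, 2, 0], [2, 0, 1]), num_nodes=3)
>>> A.edata['w'] = torch.randn(3).requires_grad_()
>>> B.edata['w'] = torch.randn(3).requires_grad_()
>>> C = dgl.adj_product_graph(A, B, 'w')
>>> C.ntypes
['_N']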
Otherwise, it returns a bipartite graph instead:
>>> A = dgl.heterograph({
...     ('A', 'AB', 'B'): ([2, 2, 0, 2, 0, 1], [2, 1, 0, 0, 2, 2])},
...     num_nodes_dict={'A': 3, 'B': 4})
>>> B = dgl.heterograph({
...     ('B', 'BC', 'C'): ([0, 3, 2, 1, 3, 3], [1, 2, 0, 2, 1, 0])},
...     num_nodes_dict={'C': 3, 'B': 4})
>>> A.edata['w'] = torch.randn(6).requires_grad_()
>>> B.edata['w'] = torch.randn(6).requires_grad_()
>>> C = dgl.adj_product_graph(A, B, 'w')
>>> C.ntypes
['A', 'C']
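As a sanity check (a sketch layered on the example above, not part of the API), you can densify the weighted adjacency matrices of A, B, and C from their edges and edge weights and confirm that C's matrix equals the product:

>>> def dense_adj(g, w, shape):
...     # Scatter scalar edge weights into a dense (src, dst) matrix.
...     m = torch.zeros(shape)
...     u, v = g.edges()
...     m[u, v] = w.detach()
...     return m
>>> adj_A = dense_adj(A, A.edata['w'], (3, 4))
>>> adj_B = dense_adj(B, B.edata['w'], (4, 3))
>>> adj_C = dense_adj(C, C.edata['w'], (3, 3))
>>> torch.allclose(adj_A @ adj_B, adj_C)
True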