dgl.DGLGraph.pin_memory_

DGLGraph.pin_memory_()

Pin the graph structure and node/edge data to page-locked memory for zero-copy GPU access.

This is an in-place method. The graph structure must be on CPU to be pinned. If the graph structure is already pinned, the method returns it directly.
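
For instance, a graph that lives on the GPU must be copied back to the CPU before it can be pinned. A minimal sketch (assuming a CUDA-enabled build; to() is the standard device-transfer method):

>>> import dgl
>>> import torch
>>> g = dgl.graph((torch.tensor([0]), torch.tensor([1]))).to('cuda:0')
>>> # g.pin_memory_() would raise an error here: the structure is on GPU.
>>> g = g.to('cpu')   # move the structure back to CPU
>>> g.pin_memory_()   # pinning now succeeds; pinning an already-pinned graph is a no-op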

Materializing new sparse formats for a pinned graph is not allowed. To avoid implicit format materialization during training, create all the needed formats before pinning. Cloning and materializing into a new, unpinned graph is fine, however. See the examples below.

Returns

The pinned graph.

Return type

DGLGraph

Examples

The following examples use the PyTorch backend.

>>> import dgl
>>> import torch
>>> g = dgl.graph((torch.tensor([1, 0]), torch.tensor([1, 2])))
>>> g.pin_memory_()
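
The pinned status can be checked with is_pinned():

>>> g.is_pinned()
True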

Materialization of new sparse formats is not allowed for pinned graphs.

>>> g.create_formats_()  # This would raise an error! You should do this before pinning.

Cloning and materializing new formats is allowed. The returned graph is not pinned.

>>> g1 = g.formats(['csc'])
>>> assert not g1.is_pinned()
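
The original graph stays pinned; only the clone is unpinned:

>>> assert g.is_pinned()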

The pinned graph can be accessed from both CPU and GPU. The device on which an operation executes is determined by the device of the query tensor. For example, the eid argument of find_edges() is a query: when eid is on CPU, find_edges() executes on CPU and returns CPU tensors.

>>> g.unpin_memory_()
>>> g.create_formats_()
>>> g.pin_memory_()
>>> eid = torch.tensor([1])
>>> g.find_edges(eid)
(tensor([0]), tensor([2]))

After moving eid to the GPU, find_edges() executes on GPU and returns GPU tensors.

>>> eid = eid.to('cuda:0')
>>> g.find_edges(eid)
(tensor([0], device='cuda:0'), tensor([2], device='cuda:0'))

If no query tensor is provided, methods execute on CPU by default.

>>> g.in_degrees()
tensor([0, 1, 1])
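
When zero-copy access is no longer needed, release the page-locked memory with unpin_memory_() (also an in-place method, shown earlier):

>>> g.unpin_memory_()
>>> assert not g.is_pinned()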