TWIRLSUnfoldingAndAttention
class dgl.nn.pytorch.conv.TWIRLSUnfoldingAndAttention(d, alp, lam, prop_step, attn_aft=-1, tau=0.2, T=-1, p=1, use_eta=False, init_att=False, attn_dropout=0, precond=True)[source]

Bases: torch.nn.modules.module.Module
Combine propagation and attention together.
- Parameters
    - d (int) – Size of graph feature.
    - alp (float) – Step size; \(\alpha\) in the paper.
    - lam (int) – Coefficient of the graph smoothing term; \(\lambda\) in the paper.
    - prop_step (int) – Number of propagation steps.
    - attn_aft (int) – Where to put the attention layer, i.e. the number of propagation steps before attention. If set to -1, no attention layer is used. See the configuration sketch after the example below.
    - tau (float) – The lower thresholding parameter; corresponds to \(\tau\) in the paper.
    - T (float) – The upper thresholding parameter; corresponds to \(T\) in the paper.
    - p (float) – Corresponds to \(\rho\) in the paper.
    - use_eta (bool) – If True, learn a weight vector for each dimension when doing attention.
    - init_att (bool) – If True, add an extra attention layer before propagation.
    - attn_dropout (float) – The dropout rate of attention values. Default: 0.0.
    - precond (bool) – If True, use the pre-conditioned & reparameterized version of propagation (eq. 28); otherwise use the normalized Laplacian version (eq. 30).
Example
>>> import dgl
>>> from dgl.nn import TWIRLSUnfoldingAndAttention
>>> import torch as th

>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).add_self_loop()
>>> feat = th.ones(6, 5)
>>> prop = TWIRLSUnfoldingAndAttention(10, 1, 1, prop_step=3)
>>> res = prop(g, feat)
>>> res
tensor([[2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [2.5000, 2.5000, 2.5000, 2.5000, 2.5000],
        [3.7656, 3.7656, 3.7656, 3.7656, 3.7656],
        [2.5217, 2.5217, 2.5217, 2.5217, 2.5217],
        [4.0000, 4.0000, 4.0000, 4.0000, 4.0000]])
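The example above uses the defaults, i.e. plain propagation with no attention. Below is a minimal sketch of an attention-enabled configuration; the specific values (attn_aft=1, T=3, attn_dropout=0.1) and the choice d=5 to match the feature size are illustrative assumptions, not values taken from the paper or the library docs.

>>> # Illustrative sketch: insert one attention layer after the first of three
>>> # propagation steps, with lower/upper thresholds tau and T.
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).add_self_loop()
>>> feat = th.ones(6, 5)
>>> prop_attn = TWIRLSUnfoldingAndAttention(
...     5, 1, 1, prop_step=3, attn_aft=1, tau=0.2, T=3, attn_dropout=0.1)
>>> res = prop_attn(g, feat)  # node features; expected to keep the input shape (6, 5)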