Graph Attention Networks (https://arxiv.org/abs/1710.10903)
Updated Apr 12, 2020 - Python
The official documentation for subclasses of torch.autograd.Function contains the notice: "Each function object is meant to be used only once (in the forward pass)."
In the SpGraphAttentionLayer model, the SpecialSpmmFunction object (self.special_spmm) is used twice: once for e_rowsum and once for h_prime.
Is this a correct usage of a torch.autograd.Function subclass?
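For context, the "used only once" notice applies to a single function *object* (one node in the autograd graph), not to the class itself. The recommended pattern is to define static forward/backward methods and call the class's .apply each time, which creates a fresh function object per call, so invoking it twice in one forward pass is safe. A minimal sketch with a toy function (DoubleFn is hypothetical, not from pyGAT):

```python
import torch

class DoubleFn(torch.autograd.Function):
    """Toy custom Function computing y = 2 * x (illustrative only)."""

    @staticmethod
    def forward(ctx, x):
        # No saved tensors needed for this simple example.
        return 2 * x

    @staticmethod
    def backward(ctx, grad_out):
        # d(2x)/dx = 2, so scale the incoming gradient.
        return 2 * grad_out

x = torch.ones(3, requires_grad=True)
# Each .apply call builds a fresh function node in the autograd graph,
# so using the same Function class twice in one forward pass is fine.
y = DoubleFn.apply(x)
z = DoubleFn.apply(y)   # z = 4 * x
z.sum().backward()
print(x.grad)           # tensor([4., 4., 4.])
```

If self.special_spmm is a wrapper module whose forward calls SpecialSpmmFunction.apply, each of the two calls produces its own function object, which matches the documented contract.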