For an input symmetric matrix W with a zero diagonal, I have the following implementation in PyTorch. I was wondering whether it can be improved in terms of efficiency.

P.S. Would the current implementation break backpropagation?
import torch
W = torch.tensor([[0,1,0,0,0,0,0,0,0],
[1,0,1,0,0,1,0,0,0],
[0,1,0,3,0,0,0,0,0],
[0,0,3,0,1,0,0,0,0],
[0,0,0,1,0,1,1,0,0],
[0,1,0,0,1,0,0,0,0],
[0,0,0,0,1,0,0,1,0],
[0,0,0,0,0,0,1,0,1],
[0,0,0,0,0,0,0,1,0]])
n = len(W)
C = torch.empty(n, n)
I = torch.eye(n)
for i in range(n):
    for j in range(n):
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0
        tmp = torch.inverse(n * I - B)
        C[i, j] = tmp[i, j]
1 Answer
A few hints for improving efficiency:

- When `W[i, j] == 0`, `B` equals `W`, so `tmp` is the same matrix every time. All of those `C` values can therefore be read from a single inverse computed once outside the loop.
- `torch.nonzero` / `torch.Tensor.nonzero` can be used to obtain the indices of all non-zero values in a tensor.
- Since `W` is symmetric, `C` is also symmetric, so only half of its values need to be computed.
- All repeated computation can be moved out of the loop to improve efficiency.
Improved code:

n = len(W)
nIW = n * torch.eye(n) - W
nIB = nIW.clone()
C = nIB.inverse()                # covers every (i, j) with W[i, j] == 0
for i, j in W.nonzero():
    if i < j:                    # C is symmetric: handle each edge once
        nIB[i, j] = nIB[j, i] = 0
        C[j, i] = C[i, j] = nIB.inverse()[i, j]
        nIB[i, j] = nIB[j, i] = nIW[i, j]   # restore the edge for the next iteration
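As a quick sanity check (my addition, not part of the original answer), the result can be compared against the original double loop:

C_ref = torch.empty(n, n)
I = torch.eye(n)
for i in range(n):
    for j in range(n):
        B = W.clone()
        B[i, j] = 0
        B[j, i] = 0
        C_ref[i, j] = torch.inverse(n * I - B)[i, j]
print(torch.allclose(C, C_ref))  # expect: True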
Further performance improvement may be achievable.
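One concrete avenue (a sketch of my own, not code from the material the original answer linked to): zeroing the pair (i, j)/(j, i) changes nI - W by a rank-2 update, so the Woodbury identity reduces each per-edge n-by-n inversion to a 2-by-2 solve against a single precomputed inverse:

Ainv = torch.inverse(n * torch.eye(n) - W.float())
C = Ainv.clone()                 # entries with W[i, j] == 0 need no update
for idx in W.nonzero():
    i, j = idx[0].item(), idx[1].item()
    if i < j:
        w = float(W[i, j])
        # nI - B = (nI - W) + w * (e_i e_j^T + e_j e_i^T), a rank-2 update
        M = torch.eye(2) + w * Ainv[[j, i]][:, [i, j]]   # 2x2 capacitance matrix
        left = Ainv[i, [i, j]]                           # row i of Ainv[:, [i, j]]
        right = w * Ainv[[j, i], j]                      # column j of w * Ainv[[j, i], :]
        C[i, j] = C[j, i] = Ainv[i, j] - left @ torch.inverse(M) @ right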
As for backpropagation, I am not sure how it would work on elements of matrix inverses.
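For what it's worth (my addition, not part of the original answer): PyTorch does register a backward for `torch.inverse`, and in-place edits on a `clone()` are autograd-safe, so gradients should flow as long as `W` is a floating-point tensor with `requires_grad=True`. One way to verify is `torch.autograd.gradcheck`; the helper `entry` below is hypothetical and mirrors one iteration of the question's loop:

import torch

Wd = torch.rand(5, 5, dtype=torch.double)   # gradcheck needs double precision
Wd = (Wd + Wd.T) / 2                        # make it symmetric
Wd.fill_diagonal_(0)                        # zero diagonal, as in the question
Wd.requires_grad_(True)

def entry(W, i=0, j=1):
    # one C[i, j] value, computed exactly as in the question's loop body
    n = W.shape[0]
    B = W.clone()                           # in-place edits on a clone keep autograd intact
    B[i, j] = 0
    B[j, i] = 0
    return torch.inverse(n * torch.eye(n, dtype=W.dtype) - B)[i, j]

print(torch.autograd.gradcheck(entry, (Wd,)))  # expect: True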