
I'm reading through Learning Deep Generative Models of Graphs, a paper that, as I understand it, proposes some sort of variational autoencoder for generating graphs.

At a very high level, what the network does is insert new nodes and connect them to existing nodes, which seems straightforward enough to understand.

However, my first doubt is how to make these decisions differentiable, so that we can apply backpropagation.

It seems to me the key is in understanding equations (5)–(10) in section 4.1, which I write down here for reference:

$$ \begin{array}{l} h_V^{(T)} = \mathrm{prop}^{(T)}(h_V, G) \\ h_G = R(h_V^{(T)}, G) \\ f_{addnode}(G) = \mathrm{softmax}(f_{an}(h_G)) \\ f_{addedge}(G,v) = \sigma(f_{ae}(h_G, h_v^{(T)})) \\ s_u = f_s(h_u^{(T)}, h_v^{(T)}) \\ f_{nodes}(G,v) = \mathrm{softmax}(s) \end{array} $$

I'll defer to the paper for descriptions of the individual functions, but you can clearly see that a softmax is involved, which leads me to think that what this network generates is a set of parameters describing a probability distribution. The final graph would then be obtained by maximizing that probability.

This explanation makes sense to me, since it would make backpropagation possible, but I'd like confirmation.

Am I right? Is there something more involved that I'm missing?

asked Sep 9, 2019 at 12:54

1 Answer


Yes, the model is trained by maximizing log likelihood. However, it's an autoregressive model, not a variational autoencoder.

However, my first doubt is how to make these decisions differentiable

Well, it's the same thing as an RNN, really -- you only have to make discrete "decisions" when sampling from the model; training can be done without them.
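To make this concrete, here is a minimal NumPy sketch of one "add node" decision trained by teacher forcing. The shapes and the linear stand-in for the paper's $f_{an}$ network are my own assumptions for illustration; the point is only that the forward pass computes a probability for the known ground-truth decision, and never samples a discrete choice:

```python
import numpy as np

# Hypothetical sketch: teacher forcing for one "add node" step.
# W is a stand-in for the f_an network's weights; h_G is the graph
# representation from eq. (6). All sizes are made up for the example.
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))         # 2 outcomes: {add node, stop}
h_G = rng.normal(size=16)            # graph representation h_G

logits = W @ h_G                     # f_an(h_G)
probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax, as in eq. (7)

ground_truth = 0                     # known decision from the training graph
loss = -np.log(probs[ground_truth])  # NLL of the ground-truth decision

# loss is a smooth function of W and h_G, so gradients flow through it;
# no discrete choice was ever made in the forward pass.
```

At sampling time you would instead draw an outcome from `probs`, which is the only place a non-differentiable discrete decision appears.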

answered Sep 10, 2019 at 0:02
  • Hi, thanks for your answer. Can you elaborate more on why "training can be done without"? Commented Sep 10, 2019 at 10:48
  • Well, in training you have the ground-truth graph already, so instead of making discrete decisions you only ever have to calculate the probability of having made the already-known ground-truth decisions. Commented Sep 10, 2019 at 17:48
  • I'm not sure I follow: if you do backpropagation, you first run a forward pass, which in this context would, in my mind, be to perform these "decisions"; then you backpropagate. Commented Sep 10, 2019 at 18:21
  • Hi, I've read the paper in more detail. I still think I'm missing bits. Can you roughly explain what you mean by an auto-regressive model? Commented Oct 3, 2019 at 10:42
  • Yes, I mean that the probability of a graph is decomposed into a sequence of steps. Commented Oct 4, 2019 at 3:58
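The decomposition mentioned in the last comment can be sketched in a few lines. The per-step probabilities below are made-up numbers, not outputs of the actual model; the point is that the graph's probability is a product over ordered construction steps, so its log likelihood is a sum of per-step terms:

```python
import math

# Hypothetical per-step probabilities for building a tiny 2-node graph:
step_probs = [0.9,   # p(add node 1)
              0.8,   # p(add node 2 | graph so far)
              0.7,   # p(connect node 2 to node 1 | graph so far)
              0.6]   # p(stop)

# Autoregressive decomposition: log p(G) = sum_t log p(step_t | steps_<t)
log_p_graph = sum(math.log(p) for p in step_probs)
p_graph = math.exp(log_p_graph)   # equals 0.9 * 0.8 * 0.7 * 0.6
```

Training maximizes exactly this sum of log probabilities over the known construction sequence of each training graph.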
