Correct way to save a trained model? #2620
I've looked around a decent bit and it's possible that I've somehow missed this, but what is the correct way to save a model that is performing well at the end of a training loop, and how would I go about reusing it later? I now understand how to save replay buffer contents, but to my understanding that doesn't save any information about the weights of the models themselves, right?
I thought that maybe using torch.save might be the route to go, but what do I use it on? For example, I have a policy/actor which is a QValueActor with a DuelingCnnDQNet value net inside of it. I also have a policy_explore which is a Seq(policy, EGreedyModule). Is saving the QValueActor enough to get up and running again? Do I also need to save the policy_explore? What about the value net?
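For reference, here is the kind of `state_dict` round-trip I had in mind with plain `torch.save` (using a toy `nn.Module` as a stand-in for my actual `QValueActor`; the architecture here is just a placeholder, not my real network):

```python
import torch
from torch import nn

# Stand-in for the actual actor: any nn.Module checkpoints the same way
# via its state_dict, which holds all learned parameters and buffers.
policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

# ... training loop would go here ...

# Save only the learned weights, not the module object itself.
torch.save(policy.state_dict(), "policy.pt")

# Later: rebuild the same architecture and load the weights back in.
restored = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))
restored.load_state_dict(torch.load("policy.pt"))

# With identical weights, both modules give identical outputs.
x = torch.randn(1, 4)
assert torch.equal(policy(x), restored(x))
```

My uncertainty is just about which module this should be applied to in my setup (the `QValueActor`, the exploration wrapper, the value net, or some combination).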