Correct way to save a trained model? #2620

Unanswered
carschandler asked this question in Q&A

I've looked around a decent bit and it's possible that I've somehow missed this, but what is the correct way to save a model which is performing well at the end of a training loop, and how would I go about re-using it later? I now understand how to save replay buffer contents, but to my understanding, this doesn't save any information about the weights of the models themselves, right?

I thought that torch.save might be the way to go, but what do I call it on? For example, I have a policy/actor, which is a QValueActor with a DuelingCnnDQNet value net inside of it. I also have a policy_explore, which is a Seq(policy, EGreedyModule). Is saving the QValueActor enough to get up and running again? Do I also need to save the policy_explore? What about the value net?
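For concreteness, here is roughly the approach I had in mind (not something I know to be correct). It assumes `policy` and `policy_explore` are the objects built in my training script as described above, and "dqn_policy.pt" is just a placeholder path:

```python
import torch

# `policy` is the QValueActor (wrapping the DuelingCnnDQNet value net) from
# the training script; `policy_explore` is Seq(policy, EGreedyModule(...)).
# My guess: the EGreedyModule holds no learned weights (only its epsilon
# schedule), so saving the actor's state_dict would cover everything.
torch.save(policy.state_dict(), "dqn_policy.pt")  # placeholder filename

# Later, in a fresh process: rebuild the exact same module structure
# (value net -> QValueActor), then restore the saved weights into it.
policy.load_state_dict(torch.load("dqn_policy.pt"))

# The exploration wrapper would presumably be rebuilt around the restored
# policy the same way it was built for training, rather than saved itself.
```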
