
Optimizing Through Train and Validate Sets #564

Cloblak started this conversation in Ideas

Has there been any thought given to building the optimization in a manner similar to sklearn, where you can designate separate train and validation sets? I think a lot of the issues with training indicator strategies come from the overfitting that occurs during optimization. I am working on an approach myself, and if you are interested, I will see whether I can incorporate it into your existing coding framework.
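
For illustration, here is a minimal sketch of what such a split could look like, assuming backtesting.py's documented `Backtest`/`Strategy` API and its bundled `SMA`/`GOOG` test helpers. The `SmaCross` strategy, the 70/30 split, and the parameter ranges are illustrative choices, not an existing feature:

```python
from backtesting import Backtest, Strategy
from backtesting.lib import crossover
from backtesting.test import SMA, GOOG


class SmaCross(Strategy):
    n1 = 10  # fast SMA period (tunable)
    n2 = 30  # slow SMA period (tunable)

    def init(self):
        self.sma1 = self.I(SMA, self.data.Close, self.n1)
        self.sma2 = self.I(SMA, self.data.Close, self.n2)

    def next(self):
        if crossover(self.sma1, self.sma2):
            self.buy()
        elif crossover(self.sma2, self.sma1):
            self.position.close()


# Chronological split: optimize only on the earlier slice,
# then evaluate the chosen parameters on unseen later data.
split = int(len(GOOG) * 0.7)
train, validate = GOOG.iloc[:split], GOOG.iloc[split:]

bt_train = Backtest(train, SmaCross, cash=10_000, commission=.002)
stats_train = bt_train.optimize(n1=range(5, 30, 5), n2=range(10, 70, 5),
                                maximize='Return [%]',
                                constraint=lambda p: p.n1 < p.n2)

# Re-run the in-sample winner out-of-sample.
best = stats_train._strategy
bt_val = Backtest(validate, SmaCross, cash=10_000, commission=.002)
stats_val = bt_val.run(n1=best.n1, n2=best.n2)
print(stats_train['Return [%]'], stats_val['Return [%]'])
```

A large gap between the two printed returns would be exactly the overfitting symptom described above.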

EDIT: The more I think about this idea, the less feasible it seems in the usual indicator-testing sense. Optimization as it stands is essentially already doing this, because there are no weights to minimize or parameters to tune beyond the indicator parameters themselves.

My thought on this, as it pertains to defeating overfitted strategies (and I would be curious what other people think), is to compare training and test set results and look for the parameters that bring the ratio of the two as close to 1 as possible:

Optimum Strategy = (Return [%] from Training) / (Return [%] from Test), choosing the parameters that make this value closest to 1.

Then you can test the chosen parameters against a validation set. If you still get favorable returns, you can be reasonably confident your strategy is not overfitted.
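
A minimal sketch of that ratio criterion, reusing `SmaCross` from the snippet above. The 60/20/20 split, the profitability guard, and the parameter grid are all illustrative assumptions, not an existing backtesting.py feature:

```python
from itertools import product

from backtesting import Backtest
from backtesting.test import GOOG

n = len(GOOG)
train = GOOG.iloc[:int(n * 0.6)]
test = GOOG.iloc[int(n * 0.6):int(n * 0.8)]
validate = GOOG.iloc[int(n * 0.8):]


def ret(data, **params):
    """Return [%] of one backtest run with the given strategy parameters."""
    return Backtest(data, SmaCross, cash=10_000,
                    commission=.002).run(**params)['Return [%]']


best_params, best_score = None, float('inf')
for n1, n2 in product(range(5, 30, 5), range(10, 70, 5)):
    if n1 >= n2:
        continue
    r_train, r_test = ret(train, n1=n1, n2=n2), ret(test, n1=n1, n2=n2)
    if r_train <= 0 or r_test <= 0:
        continue  # a ratio near 1 is meaningless if either period loses money
    score = abs(r_train / r_test - 1)  # 0 means identical train/test returns
    if score < best_score:
        best_params, best_score = (n1, n2), score

assert best_params is not None, 'no profitable parameter pair found'

# Final sanity check on data never touched during selection.
n1, n2 = best_params
print('Validation Return [%]:', ret(validate, n1=n1, n2=n2))
```

One design note: selecting on the ratio alone would also favor strategies that are consistently mediocre, which is why the sketch filters out unprofitable parameter pairs before scoring.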


Replies: 0 comments

