
How to pass a context for ML training as part of the backtesting? #1227

bravegag started this conversation in General

If I have a model that is trained on multiple indicators, how can I best pass that data to the strategy and keep it in sync with the backtesting expanding window, while also making sure that I am not introducing any lookahead bias? The library is conceived to take as input only the OHLC data of a single instrument, but in ML we look at many indicators. It would be a good design to separate the traded instrument from the indicators (which may also include the traded instrument, but not necessarily).

What I would do for now is use the user-defined Strategy's constructor to pass all the data as context, and then time-filter it accordingly while running the strategy over the expanding window.
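A minimal sketch of that time-filtering idea in pure pandas. The names `context_df` and `visible_context` are hypothetical; in backtesting.py the current bar's timestamp would come from something like `self.data.index[-1]` inside `Strategy.next()`:

```python
import pandas as pd

# Hypothetical example: an external DataFrame of extra indicators,
# indexed by timestamp, kept alongside the OHLC data fed to Backtest.
idx = pd.date_range("2020-01-01", periods=5, freq="D")
context_df = pd.DataFrame({"rsi": [30, 40, 50, 60, 70]}, index=idx)


def visible_context(context, now):
    """Return only the rows at or before `now`, so the strategy
    never sees future indicator values (no lookahead bias)."""
    return context.loc[:now]


# Called with the timestamp of the third bar, only the first
# three rows of the context are visible.
vis = visible_context(context_df, idx[2])
```

Because label-based `.loc` slicing on a DatetimeIndex is inclusive of the endpoint, the current bar itself is visible but nothing after it.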


Replies: 1 comment 2 replies


> how can I best pass that to the strategy and keep it in sync with the backtesting expanding window?

Does the approach in #39 work for you?

2 replies

@kernc thanks a lot for your answer. Yes, I saw that, but it is unfortunately still not ideal ... I would like to be able to pass additional context to the Strategy, for example: pre-trained ML models, the backtesting start date (which may not coincide with the OHLC input data's time index at iloc 0), etc. Otherwise, instead of filling the state of the Strategy in OOP fashion, Strategy implementations end up consuming global variables as the alternative, which compromises design quality and maintainability. The type or complexity of data needed by a Strategy cannot always be cast as additional columns in the dataframe passed in (for example, pre-trained models, or the path location to load them from). I understand wanting to keep the Strategy as simple as possible, but having a provision for more sophisticated use-cases would be invaluable.

It would be great to include an additional kwargs argument to the Backtest that is piped through to the Strategy's constructor, or even just to allow passing a context dictionary so that the Strategy can consume that context data internally as it sees fit.

For example, here:
https://kernc.github.io/backtesting.py/doc/examples/Trading%20with%20Machine%20Learning.html

Instead of passing or reading this parameter from a "magic number" global variable, I would like to pass, from the configuration of my CLI or WebApp, a date at which to start the backtest, the retraining frequency for my ML models, etc.:

N_TRAIN = 400
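For illustration, a hedged sketch of what replacing that module-level magic number with CLI configuration could look like. The `--n-train` and `--start-date` flags and the `parse_config` helper are hypothetical names, not part of backtesting.py:

```python
import argparse


def parse_config(argv=None):
    """Read backtest parameters from CLI flags instead of globals."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--n-train", type=int, default=400,
                        help="number of bars used to train the ML model")
    parser.add_argument("--start-date", default=None,
                        help="date at which to start the backtest")
    return parser.parse_args(argv)


# e.g. invoked as: my_backtest --n-train 600
cfg = parse_config(["--n-train", "600"])
```

The resulting `cfg.n_train` / `cfg.start_date` values could then be handed to the Strategy, which is exactly where a kwargs or context-dictionary mechanism would be needed.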

Ah, I understand. You'd like to additionally pass kwargs. This makes sense. Indeed, global scope is currently the showcased approach.

Note, since this is Python, you can always do the equivalent of:

from pathlib import Path

from backtesting import Backtest
from backtesting.test import GOOG

from .complex_strategy import OurMostComplexStrategy


def some_locals(**kwargs):
    class StrategyKwargsWrapper(OurMostComplexStrategy):
        def init(self):
            # Load the model from the path captured in the enclosing scope
            self.model = Path(kwargs['model_path']).read_bytes()
            # OurMostComplexStrategy can use `self.model` ...
            super().init()

    local_backtest = Backtest(GOOG, StrategyKwargsWrapper, ...)
    ...


some_locals(model_path=...)

But passing kwargs is a valid feature. Please freely open an issue for it.
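To make the closure mechanics explicit: the wrapper subclass captures `kwargs` from the enclosing function, so values passed once are visible to every instance the backtester later constructs. A standalone sketch of the same pattern, with hypothetical `Base` / `make_strategy` names and no backtesting.py dependency:

```python
class Base:
    """Stand-in for a complex strategy base class."""
    def init(self):
        self.ready = True


def make_strategy(**kwargs):
    # The subclass closes over `kwargs`, so instances created later
    # (e.g. by a Backtest) still see the values passed here.
    class Wrapper(Base):
        def init(self):
            self.model_path = kwargs["model_path"]
            super().init()
    return Wrapper


Strat = make_strategy(model_path="/tmp/model.pkl")
s = Strat()
s.init()
# s.model_path is now "/tmp/model.pkl" and s.ready is True
```

Since backtesting.py receives the Strategy *class* rather than an instance, this factory-function approach is one way to smuggle per-run configuration in without globals, pending a first-class kwargs feature.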
