Threading/ProcessPooling Backtest.Optimize #620

KitHaywood started this conversation in General

Hi There,

I am trying to run an optimised backtest on my user-defined strategy. The space of potential combinations for these two indicators is large, so the computation is very costly in time.

Is there a way to thread or process-pool Backtest.optimize()? My current thinking is to chunk the indicator combinations, optimise each chunk, and then run Backtest.optimize() again over the set of outcomes from the chunks. Do you think this is the best way forward?
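
Roughly, the kind of thing I have in mind is sketched below (a toy example only; run_chunk() is just a stand-in for whatever would actually score or optimise one chunk of indicator settings in my real code):

from concurrent.futures import ProcessPoolExecutor
from itertools import islice, product


def chunked(iterable, size):
    """Yield successive lists of `size` items."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk


def run_chunk(chunk):
    """Stand-in: in practice this would run the backtest (or bt.optimize())
    over just the (n1, n2) pairs in `chunk` and return the best one."""
    return max(chunk, key=lambda p: p[1] - p[0])  # dummy score, for illustration only


if __name__ == '__main__':
    combos = list(product(range(5, 50, 5), range(10, 100, 10)))  # my indicator grid
    with ProcessPoolExecutor() as pool:
        best_per_chunk = list(pool.map(run_chunk, chunked(combos, 50)))
    # ...then run a final optimisation/comparison over best_per_chunk
    print(best_per_chunk)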

Any advice on how to thread or process-pool this would be very helpful.

Thanks,

Kit


Replies: 2 comments 1 reply


"chunk the indicator combinations up"

This is how Backtest.optimize(method='grid') already works internally. Unfortunately, it does not work so well on Windows. Ideally, someone would have a look at implementing proper multiprocessing on Windows with start method 'spawn':

.. TODO::
    Improve multiprocessing/parallel execution on Windows with start method 'spawn'.

# Save necessary objects into "global" state; pass into concurrent executor
# (and thus pickle) nothing but two numbers; receive nothing but numbers.
# With start method "fork", children processes will inherit parent address space
# in a copy-on-write manner, achieving better performance/RAM benefit.
backtest_uuid = np.random.random()
param_batches = list(_batch(param_combos))
Backtest._mp_backtests[backtest_uuid] = (self, param_batches, maximize)  # type: ignore
try:
    # If multiprocessing start method is 'fork' (i.e. on POSIX), use
    # a pool of processes to compute results in parallel.
    # Otherwise (i.e. on Windows), sequential computation will be "faster".
    if mp.get_start_method(allow_none=False) == 'fork':
        with ProcessPoolExecutor() as executor:
            futures = [executor.submit(Backtest._mp_task, backtest_uuid, i)
                       for i in range(len(param_batches))]
            for future in _tqdm(as_completed(futures), total=len(futures),
                                desc='Backtest.optimize'):
                batch_index, values = future.result()
                for value, params in zip(values, param_batches[batch_index]):
                    heatmap[tuple(params.values())] = value
    else:
        if os.name == 'posix':
            warnings.warn("For multiprocessing support in `Backtest.optimize()` "
                          "set multiprocessing start method to 'fork'.")
        for batch_index in _tqdm(range(len(param_batches))):
            _, values = Backtest._mp_task(backtest_uuid, batch_index)
            for value, params in zip(values, param_batches[batch_index]):
                heatmap[tuple(params.values())] = value
finally:
    del Backtest._mp_backtests[backtest_uuid]
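
So on Linux (or anywhere the 'fork' start method is in effect), an ordinary grid optimization is already spread across all CPU cores, no manual chunking needed. For instance, along the lines of the library's own SMA-cross example (parameter ranges here are arbitrary):

from backtesting import Backtest, Strategy
from backtesting.lib import crossover
from backtesting.test import GOOG, SMA  # sample data and SMA helper bundled with backtesting.py


class SmaCross(Strategy):
    # The two indicator parameters swept by the grid optimization.
    n1 = 10
    n2 = 20

    def init(self):
        self.sma1 = self.I(SMA, self.data.Close, self.n1)
        self.sma2 = self.I(SMA, self.data.Close, self.n2)

    def next(self):
        if crossover(self.sma1, self.sma2):
            self.buy()
        elif crossover(self.sma2, self.sma1):
            self.sell()


if __name__ == '__main__':
    bt = Backtest(GOOG, SmaCross, cash=10_000, commission=.002)
    stats, heatmap = bt.optimize(
        n1=range(5, 50, 5),
        n2=range(10, 100, 10),
        maximize='Equity Final [$]',
        method='grid',
        return_heatmap=True,
        constraint=lambda p: p.n1 < p.n2,
    )
    print(heatmap.sort_values(ascending=False).head())

With 'fork', the child processes inherit the already-prepared Backtest object copy-on-write, so nothing large needs to be pickled and passed to the workers.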
1 reply

I'll give that a go. I'm running it on Linux, so it should work. Thanks!


Note for Mac users:
The default start method on macOS is 'spawn', so multiprocessing will not work on Mac out of the box.
To enable multiprocessing on macOS, set the start method to 'fork' manually before running the backtest:

import multiprocessing
multiprocessing.set_start_method("fork")
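
For example, placed at the script's entry point, before Backtest.optimize() (or any pool) is invoked; a minimal sketch:

import multiprocessing

if __name__ == '__main__':
    # Must be called at most once, before any pools/executors are created;
    # 'spawn' has been the macOS default since Python 3.8.
    multiprocessing.set_start_method('fork')
    print(multiprocessing.get_start_method())  # -> 'fork'
    # ...build the Backtest and call bt.optimize() from here.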