
Loading big pandas #1121

00houssam00 started this conversation in General

Hello,

Thank you for this amazing library.

I would like to ask what the best way is to handle large data. Is there a way to process the data in chunks and load one chunk at a time into Backtesting (e.g. feeding backtest.run() with prices one chunk at a time)?


Replies: 1 comment


You can use the chunksize parameter of pd.read_csv to divide your data into blocks (chunks); the chunk size is up to you:

import pandas as pd

for chunk in pd.read_csv(filepath, chunksize=10000):
    # Process each chunk
    backtest.run(chunk)
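
Assuming this refers to the Backtesting.py library, note that Backtest.run() takes no data argument; the data is passed to the Backtest constructor instead. A minimal sketch of chunked processing under that assumption follows, where the file name "prices.csv", the chunk size, and MyStrategy are placeholders, and each chunk is backtested independently, so open positions do not carry over between chunks:

import pandas as pd
from backtesting import Backtest, Strategy

# Placeholder strategy used only for illustration.
class MyStrategy(Strategy):
    def init(self):
        pass

    def next(self):
        pass

results = []
# "prices.csv" and chunksize=10_000 are assumptions; each chunk must be an
# OHLC DataFrame (Open, High, Low, Close[, Volume]) with a datetime index.
for chunk in pd.read_csv("prices.csv", index_col=0, parse_dates=True,
                         chunksize=10_000):
    # Each chunk is backtested independently, so positions and equity
    # do not carry over from one chunk to the next.
    bt = Backtest(chunk, MyStrategy, cash=10_000, commission=.002)
    results.append(bt.run())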