Loading big pandas #1121
00houssam00 started this conversation in General
Hello,
Thank you for this amazing library.
I would like to ask: what's the best way to handle large datasets? Is there a way to process the data in chunks and load one chunk at a time into Backtesting (e.g. feeding backtest.run() with prices one chunk at a time)?
Replies: 1 comment
You can use the chunksize parameter of pandas.read_csv to read your data in blocks (chunks) of a size you choose, and then run a backtest on each block. Note that in Backtesting.py the price data goes to the Backtest constructor rather than to run(); filepath and MyStrategy below stand in for your own CSV path and Strategy subclass:

import pandas as pd
from backtesting import Backtest

for chunk in pd.read_csv(filepath, chunksize=10_000):
    # Run a separate backtest on each chunk of rows
    stats = Backtest(chunk, MyStrategy).run()
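If it helps, here is a more complete, self-contained sketch of the same idea. The file name prices.csv, the chunk size of 50,000 rows, and the SMA-crossover strategy are illustrative placeholders, not part of the library; the CSV is assumed to have a datetime index and Open/High/Low/Close columns, which is what Backtest expects.

import pandas as pd
from backtesting import Backtest, Strategy
from backtesting.lib import crossover
from backtesting.test import SMA


class SmaCross(Strategy):
    # Placeholder strategy: simple moving-average crossover.
    def init(self):
        self.fast = self.I(SMA, self.data.Close, 10)
        self.slow = self.I(SMA, self.data.Close, 30)

    def next(self):
        if crossover(self.fast, self.slow):
            self.buy()
        elif crossover(self.slow, self.fast):
            self.position.close()


results = []
# prices.csv is assumed to have a datetime index and
# Open/High/Low/Close columns, as Backtest expects.
for chunk in pd.read_csv('prices.csv', index_col=0, parse_dates=True,
                         chunksize=50_000):
    bt = Backtest(chunk, SmaCross, cash=10_000)
    results.append(bt.run())

# bt.run() returns a pandas Series of summary statistics,
# so the per-chunk results can be collected into one DataFrame.
summary = pd.DataFrame(results)
print(summary[['Return [%]', 'Max. Drawdown [%]']])

One caveat: each chunk is backtested independently, so open positions and equity do not carry over across chunk boundaries. Chunking like this bounds peak memory usage, but it does not reproduce a single continuous backtest over the full dataset.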