I am using multiprocessing.imap_unordered to perform a computation on a list of values:
def process_parallel(fnc, some_list):
    pool = multiprocessing.Pool()
    for result in pool.imap_unordered(fnc, some_list):
        for x in result:
            yield x
    pool.terminate()
Each call to fnc returns a HUGE object as a result, by design. I can store N instances of such an object in RAM, where N ~ cpu_count, but not many more (certainly not hundreds).
Now, using this function takes up too much memory. The memory is entirely spent in the main process, not in the workers.
How does imap_unordered store the finished results? I mean the results that have already been returned by workers but not yet passed on to the user. I thought it was smart and only computed them "lazily" as needed, but apparently not.
It looks like since I cannot consume the results of process_parallel fast enough, the pool keeps queueing these huge objects from fnc somewhere, internally, and then blows up. Is there a way to avoid this? Limit its internal queue somehow?
I'm using Python 2.7. Cheers.
2 Answers
As you can see by looking into the corresponding source file (python2.7/multiprocessing/pool.py), the IMapUnorderedIterator uses a collections.deque instance for storing the results. When a new item comes in from a worker, it is appended to the deque, and the iteration pops items off as you consume them.
As you suggested, if more huge objects arrive while the main thread is still processing one, they will all be kept in memory too.
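For reference, the machinery in pool.py looks roughly like this. This is a simplified paraphrase of the Python 2.7 source, with timeouts, error handling, and end-of-stream bookkeeping omitted, so don't treat it as the exact code:

import collections
import threading

class IMapUnorderedIterator(object):
    def __init__(self):
        self._cond = threading.Condition(threading.Lock())
        self._items = collections.deque()   # finished results pile up here

    def _set(self, i, obj):
        # called by the result-receiver thread for every finished task
        self._cond.acquire()
        try:
            self._items.append(obj)         # nothing bounds this deque
            self._cond.notify()
        finally:
            self._cond.release()

    def next(self):
        # called by the consumer's for-loop
        self._cond.acquire()
        try:
            while not self._items:
                self._cond.wait()           # block until a result arrives
            return self._items.popleft()
        finally:
            self._cond.release()

The receiver thread appends as fast as results arrive, so if the consumer lags behind, the deque grows without bound -- which is exactly the blow-up you are seeing.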
What you might try is something like this:
it = pool.imap_unordered(fnc, some_list)
for result in it:
    it._cond.acquire()    # hold the iterator's internal lock while working
    for x in result:
        yield x
    it._cond.release()    # let the receiver thread enqueue the next result
This should cause the task-result-receiver thread to block while you process an item, if it is trying to put the next object into the deque. Thus there should not be more than two of the huge objects in memory at once. Whether that works for your case, I don't know ;)
6 Comments
Isn't it simply a generator, and as such won't it lack _cond.acquire() and release() methods? If you need to write them yourself, what kind of object does ._cond need to be?
imap_unordered returns an IMapUnorderedIterator, which has these methods, as a look at the corresponding source code shows. Since the result-receiver thread will (upon receiving a result) require the lock to put the result into the deque, this will block that thread and stop it from consuming more memory.
The simplest solution I can think of would be to wrap your fnc function in a closure that uses a semaphore to limit how many jobs can execute at one time (I assume the main process/thread would be incrementing the semaphore); see the sketch after these comments. The semaphore value could be calculated from job size and available memory.
yield is in the main process, not inside fnc (i.e., the function done by the workers).
Is fnc itself doing lazy evaluation?
fnc takes a single item from some_list, and computes and returns a huge object from it.
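To make that semaphore comment concrete, here is a minimal sketch of the idea. The names _init and _throttled_fnc are hypothetical, fnc is a trivial stand-in for the real worker, and fnc must live at module level so it can be pickled (plain closures can't be, in Python 2.7). Each worker acquires a permit before computing, and the consumer releases one permit per result it has finished with, so at most n huge objects are alive at any moment:

import multiprocessing

def fnc(item):
    # stand-in for the real worker: builds and returns a huge object
    return [item] * 10

_sem = None

def _init(sem):
    # runs once in each worker process; keeps the inherited semaphore
    global _sem
    _sem = sem

def _throttled_fnc(item):
    _sem.acquire()            # block until the consumer has freed a slot
    return fnc(item)

def process_parallel(some_list):
    n = multiprocessing.cpu_count()
    sem = multiprocessing.Semaphore(n)   # at most n huge results alive at once
    pool = multiprocessing.Pool(n, _init, (sem,))
    try:
        for result in pool.imap_unordered(_throttled_fnc, some_list):
            for x in result:
                yield x
            sem.release()                # this huge object is consumed
    finally:
        pool.terminate()

if __name__ == '__main__':
    for x in process_parallel(range(100)):
        pass

Unlike the _cond trick above, this relies only on public multiprocessing APIs, at the cost of a slightly more involved setup.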