Python's multiprocessing and memory

I am using multiprocessing.imap_unordered to perform a computation on a list of values:

import multiprocessing

def process_parallel(fnc, some_list):
    pool = multiprocessing.Pool()
    # Fan the work out over the pool; yield each element of every
    # result as it arrives, in whatever order the workers finish.
    for result in pool.imap_unordered(fnc, some_list):
        for x in result:
            yield x
    pool.terminate()
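
For context, the consuming side looks roughly like this (handle is a hypothetical stand-in for my real per-item processing, which is slow compared to how fast the workers produce results):

for x in process_parallel(fnc, some_list):
    handle(x)  # stand-in; slow relative to the workers' output rate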

Each call to fnc returns a HUGE object as a result, by design. I can store N instances of such an object in RAM, where N ~ cpu_count, but not many more (certainly not hundreds).
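
To give a sense of scale, a hypothetical stand-in for fnc with a similar memory profile (the sizes are invented purely for illustration):

def fnc(value):
    # Invented stand-in: each call deliberately builds a result in the
    # hundreds-of-MB range, so a handful fit in RAM but hundreds do not.
    return [float(i) for i in xrange(10 * 1000 * 1000)]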

Now, using this function takes up too much memory. The memory is entirely spent in the main process, not in the workers.

How does imap_unordered store the finished results? By that I mean the results that were already returned by the workers but not yet passed on to the user. I thought it was smart and only computed them "lazily", as needed, but apparently not.

It looks like, since I cannot consume the results of process_parallel fast enough, the pool keeps queueing these huge objects from fnc somewhere internally, and then blows up. Is there a way to avoid this? Limit its internal queue somehow?
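
One workaround I can imagine (an untested sketch; process_parallel_bounded and max_in_flight are names I made up) is to gate the input iterable with a semaphore that the consuming loop releases, so that at most a fixed number of dispatched-but-unconsumed results can exist at any time:

import multiprocessing
import threading

def process_parallel_bounded(fnc, some_list, max_in_flight=None):
    # Untested sketch: stop dispatching new tasks once max_in_flight
    # results are in flight; free one slot per result consumed.
    if max_in_flight is None:
        max_in_flight = multiprocessing.cpu_count()
    semaphore = threading.Semaphore(max_in_flight)

    def gated(iterable):
        for item in iterable:
            semaphore.acquire()  # wait for a free slot before dispatching
            yield item

    pool = multiprocessing.Pool()
    try:
        for result in pool.imap_unordered(fnc, gated(some_list)):
            semaphore.release()  # a result left the internal queue; free a slot
            for x in result:
                yield x
    finally:
        pool.terminate()

Would something along these lines work, or does Pool offer a built-in way to bound its result queue?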

I'm using Python 2.7. Cheers.
