Message138741
Author: neologix
Recipients: greg.ath, neologix
Date: 2011-06-20 17:18:43
Message-id: <1308590324.27.0.125474218119.issue12352@psf.upfronthosting.co.za>

Content:
Thanks for reporting this.
There's indeed a bug which can lead to this deadlock.
The relevant code is in Lib/multiprocessing/heap.py:
- the BufferWrapper class allocates its blocks from a single Heap() shared among all instances, protected by a mutex (a non-reentrant threading.Lock)
- when a BufferWrapper is allocated, a multiprocessing.util.Finalize callback is registered to free the corresponding block back to the Heap
- if another BufferWrapper is garbage collected while the mutex is held (in your case, while a new BufferWrapper is being allocated), its finalizer tries to free its block from the Heap
- free() tries to acquire the mutex, which the same thread already holds
- deadlock (a minimal sketch of this pattern follows the list)
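Here is a minimal, self-contained sketch of the pattern, not the multiprocessing code itself; the names Block, malloc() and pending are illustrative only. Run as-is it hangs, which is the point:
"""
import threading

lock = threading.Lock()   # plays the role of Heap._lock (non-reentrant)

class Block(object):
    # stands in for a heap block whose finalizer returns it to the heap
    def __del__(self):
        with lock:        # free() re-acquires the same mutex
            pass

pending = Block()

def malloc():
    global pending
    with lock:            # allocation holds the mutex...
        pending = None    # ...dropping the last reference runs
                          # Block.__del__ on this very thread, which
                          # blocks forever trying to acquire `lock`

malloc()                  # hangs with Lock; completes with RLock
print("only reached if `lock` is an RLock")
"""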
The obvious solution is to use a recursive lock instead: an RLock can be re-acquired by the thread that already holds it, so the finalizer no longer blocks.
Could you try your application after changing:
"""
class Heap(object):
_alignment = 8
def __init__(self, size=mmap.PAGESIZE):
self._lastpid = os.getpid()
self._lock = threading.Lock()
"""
to
"""
class Heap(object):
_alignment = 8
def __init__(self, size=mmap.PAGESIZE):
self._lastpid = os.getpid()
-> self._lock = threading.RLock()
"""
One could probably reproduce this by allocating and freeing many multiprocessing.Value objects, preferably with a lower GC threshold; a best-effort sketch follows.
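The sketch below follows that suggestion. Tying each Value into a reference cycle is an addition of mine, so that only the cyclic collector (rather than plain reference counting) reclaims it and the finalizer fires inside a GC pass; whether the hang actually triggers depends on timing.
"""
import gc
import multiprocessing

gc.set_threshold(10)     # run the cyclic collector far more often

for i in range(100000):
    cycle = [multiprocessing.Value('i', i)]  # block from the shared Heap
    cycle.append(cycle)  # reference cycle: only the GC can reclaim it
    del cycle            # the Value's finalizer now runs inside a GC
                         # pass; if that pass interrupts Heap.malloc()
                         # while _lock is held, the thread deadlocks
"""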