Author: bacchusrx
Date: 2004-12-29 02:09:35
Some part of socket.py leaks memory on Mac OS X 10.3 (both with the Python 2.3 that ships with the OS and with Python 2.4).
I encountered the problem in John Goerzen's offlineimap. Transfers of messages over a certain size would cause the program to bail with malloc errors, e.g.
*** malloc: vm_allocate(size=5459968) failed (error code=3)
*** malloc[13730]: error: Can't allocate region
Inspecting the process as it runs shows that Python's total memory size grows wildly during such transfers.
The bug manifests in _fileobject.read() in socket.py. You can replicate the problem easily using the attached example with "nc -l -p 9330 < /dev/zero" running on some remote host.
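The attached example isn't reproduced here, but a minimal sketch along the same lines should show the behaviour (the host name below is a placeholder, and the code uses the Python 2.3/2.4 syntax the report is about):

    import socket

    HOST = "remote.example.org"   # placeholder for the host running nc
    PORT = 9330

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((HOST, PORT))
    f = s.makefile("rb")
    # Ask for a large read in one go; _fileobject.read() then loops over
    # recv() with a large recv_size, and on Mac OS X 10.3 the process's
    # memory use balloons until malloc fails.
    data = f.read(100 * 1024 * 1024)
    print len(data)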
The way _fileobject.read() is written, socket.recv is called with the larger of the minimum rbuf size or whatever's left to be read. Whatever is received is then appended to a buffer which is joined and returned at the end of the function.
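For reference, here is the sized-read loop, paraphrased and simplified from Python 2.3/2.4's Lib/socket.py (not a verbatim quote; sock, size and rbufsize stand in for self._sock, the requested size and self._rbufsize):

    def read_loop(sock, size, rbufsize=8192):
        # Simplified paraphrase of the sized-read branch of
        # _fileobject.read(); the real code also keeps any overshoot
        # around for the next read.
        buffers = []
        buf_len = 0
        while buf_len < size:
            left = size - buf_len
            # recv is asked for the larger of the minimum rbuf size and
            # whatever is left to read -- potentially many megabytes.
            recv_size = max(rbufsize, left)
            data = sock.recv(recv_size)
            if not data:
                break
            buffers.append(data)
            buf_len += len(data)
        # everything received is joined and returned at the end
        return "".join(buffers)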
It looks like each time through the loop, space for recv_size is allocated but not freed, so if the loop runs for enough iterations, Python exhausts the memory available to it.
You can sidestep the condition if recv_size is small (like _fileobject.default_bufsize small).
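For example (reusing f from the sketch above), reading in small pieces keeps recv_size pinned at the minimum rbuf size rather than the full remaining byte count; 8192 here matches _fileobject.default_bufsize:

    chunks = []
    remaining = 100 * 1024 * 1024   # illustrative transfer size
    while remaining > 0:
        # each read() asks for at most 8192 bytes, so recv_size never
        # exceeds the default buffer size
        chunk = f.read(min(8192, remaining))
        if not chunk:
            break
        chunks.append(chunk)
        remaining -= len(chunk)
    data = "".join(chunks)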
I can't replicate this problem with Python 2.3 on FreeBSD 4.9 or FreeBSD 5.2, nor on Mac OS X 10.3 if the logic from _fileobject.read() is re-written in Perl (for example).
History
Date                 User   Action  Args
2007-08-23 14:28:47  admin  link    issue1092502 messages
2007-08-23 14:28:47  admin  create
