I've been looking into the memory leaks exercised by the memleak_gui.py unit test. I've searched through the mailing list for information, but I'm new to the party here, so forgive me if I'm not fully current.

Eric Firing wrote:
"I think we have a similar problem with all interactive backends (the only one I didn't test is Qt4Agg) which also makes me suspect we are violating some gui rule, rather than that gtk, qt3, wx, and tk all have leaks."

Unfortunately, from what I've seen, there isn't a single rule being broken, but instead I've been running into lots of different "surprises" in the various toolkits. But I am starting to suspect that my old versions of Gtk (or PyGtk) have some bonafide leaks.

I just finished submitting patches (to the tracker) for a number of memory leaks in the Tk, Gtk, and Wx backends (other backends will hopefully follow). I did all my testing on RHEL4 with Python 2.5 and recent SVN matplotlib (rev. 3244), so it's quite possible that memory leaks still remain on other platforms.

Tk:
See the patch:
http://sourceforge.net/tracker/index.php?func=detail&aid=1745400&group_id=80706&atid=560722

Even after this patch, Tkinter still leaks 28 bytes every time it is initialized (every time a toplevel window is created). There may be a way to avoid the subsequent initializations, but it wasn't immediately obvious to me, and given the small size of this leak, I've passed on it for now.

Gtk:
See the patch:
http://sourceforge.net/tracker/index.php?func=detail&aid=1745406&group_id=80706&atid=560722

The patch fixes a number of Python-level leaks. Unfortunately, the Gdk rendering backend still leaks gtk.gdk.Window objects, and I have so far been unable to determine a fix. GtkAgg, however, does not have this leak. Under pygtk-2.4.0, the toolbars leak gdk.Pixbuf's and file selector dialogs. Since these issues appear to be bugs in gtk and/or pygtk itself, I will probably first try a more recent version of them. (The gtk installed on RHEL4 is ancient (2.4), and I haven't yet tried building my own. If anyone has a more recent version of pygtk handy, I would appreciate a report from memleak_gui.py after applying my patch.)

Wx:
See the patch:
http://sourceforge.net/tracker/index.php?func=detail&aid=1745408&group_id=80706&atid=560722

This one was fairly simple, though surprising. Top-level windows do not fully destroy themselves as long as there are pending events. Manually flushing the event queue causes the windows to go away. See the wxPython docs:
http://wxwidgets.org/manuals/stable/wx_wxwindow.html#wxwindowdestroy

There were no further leaks in Wx/WxAgg that I was able to detect (in my environment).

As an aside, I thought I'd share the techniques I used to find these leaks (hopefully not to be pedantic, but these things were hard to come by online...), and it might be useful to some.

For C/C++ memory leaks, I really like valgrind (though it is Linux-only), though be sure to follow the directions to get it to play well with Python. I recommend rebuilding Python with "--without-pymalloc" to make memory reporting in general much more sensical (though slower).
See: http://svn.python.org/view/python/trunk/Misc/README.valgrind

For an example, you can see the rsvg memory leak here:

==15979== 1,280 bytes in 20 blocks are definitely lost in loss record 13,506 of 13,885
==15979==    at 0x4004405: malloc (vg_replace_malloc.c:149)
==15979==    by 0x314941: (within /usr/lib/libart_lgpl_2.so.2.3.16)
==15979==    by 0x315E0C: (within /usr/lib/libart_lgpl_2.so.2.3.16)
==15979==    by 0x31624A: art_svp_intersector (in /usr/lib/libart_lgpl_2.so.2.3.16)
==15979==    by 0x316660: art_svp_intersect (in /usr/lib/libart_lgpl_2.so.2.3.16)
==15979==    by 0x6BFA86C: (within /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x6BFD801: rsvg_render_path (in /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x6BFD9E7: rsvg_handle_path (in /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x6BFFDB5: rsvg_start_path (in /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x6C07F59: (within /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x9C24BB: xmlParseStartTag (in /usr/lib/libxml2.so.2.6.16)
==15979==    by 0xA4DADC: xmlParseChunk (in /usr/lib/libxml2.so.2.6.16)
==15979==    by 0x6C08ED5: rsvg_handle_write_impl (in /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x6C09539: rsvg_handle_write (in /usr/lib/librsvg-2.so.2.8.1)
==15979==    by 0x5D51B51: (within /usr/lib/gtk-2.0/2.4.0/loaders/svg_loader.so)
==15979==    by 0x32871F: (within /usr/lib/libgdk_pixbuf-2.0.so.0.400.13)
==15979==    by 0x32885E: gdk_pixbuf_new_from_file (in /usr/lib/libgdk_pixbuf-2.0.so.0.400.13)
==15979==    by 0x5CEE34: (within /usr/lib/libgtk-x11-2.0.so.0.400.13)
==15979==    by 0x5CF095: gtk_window_set_default_icon_from_file (in /usr/lib/libgtk-x11-2.0.so.0.400.13)
==15979==    by 0x59FCD1B: _wrap_gtk_window_set_default_icon_from_file (gtk.c:38156)
==15979==    by 0x80C29B6: PyEval_EvalFrameEx (ceval.c:3564)

For finding cycles that result in uncollectable garbage, I wrote a cycle-finding utility (see attached file). (I plan to submit this as a Python Cookbook entry.) Given a list of objects of interest, it will print out all the reference cycles involving those objects (though it can't traverse through extension objects that don't expose reference information to the garbage collector).

Objects that leak because legitimate Python references to them are still around are some of the trickiest to find. (This was an answer to a job interview question I was once asked: "How can you leak memory in Java/some other garbage collected language?") I find it useful to compare snapshots of all the objects in the interpreter before and after a suspected leak:

existing_objects = [id(x) for x in gc.get_objects()]
# ... do something that leaks ...
# ... delete everything you can ...
remaining_objects = [x for x in gc.get_objects() if id(x) not in existing_objects]

One can then scour through remaining_objects for anything that is suspected of causing the problem, and do cycle detection on those objects, if necessary, to find ways to forcibly remove them.

Cheers,
Mike
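To make the snapshot comparison concrete, here is a minimal, self-contained sketch; the helper names and the fake leak are illustrative only, and this is not the attached cycle-finding utility:

# Minimal sketch of the before/after snapshot technique described above.
# The names (snapshot, new_objects, _cache) are illustrative, not part of
# matplotlib or of the attached utility.
import gc

_cache = []   # simulates a "legitimate" reference that keeps objects alive

def do_something_that_leaks():
    _cache.append({'payload': 'x' * 100})

def snapshot():
    """Record the ids of every object currently tracked by the collector."""
    gc.collect()
    return set(id(obj) for obj in gc.get_objects())

def new_objects(before):
    """Return objects that did not exist when the snapshot was taken."""
    gc.collect()
    return [obj for obj in gc.get_objects() if id(obj) not in before]

if __name__ == '__main__':
    before = snapshot()
    do_something_that_leaks()
    # ... delete everything you can; the _cache reference remains ...
    for obj in new_objects(before):
        # Expect a little bookkeeping noise from the snapshot containers themselves.
        print('%s: %r' % (type(obj).__name__, obj))

Running it prints the surviving dict (plus the small amount of bookkeeping noise), which is usually enough to point at a culprit.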
Mike, All this sounds like great progress--thanks! I particularly appreciate the descriptions of what problems you found and how you found them. John et al.: is there a maintainer for each of these backends? I think it is very important that Mike's patches be checked out and applied ASAP if they are OK; or if there is a problem, then that info needs to get back to Mike. This should be very high priority. I can do a quick check and commit if necessary, but it would make more sense for someone more familiar with backends and gui code to do it. Or at least for others to do some testing on different platforms if I make the commits. Eric Michael Droettboom wrote: > I've been looking into the memory leaks exercised by the memleak_gui.py > unit test. I've searched through the mailing list for information, but > I'm new to the party here, so forgive me if I'm not fully current. > > Eric Firing wrote: > "I think we have a similar problem with all interactive backends (the > only one I didn't test is Qt4Agg) which also makes me suspect we are > violating some gui rule, rather than that gtk, qt3, wx, and tk all have > leaks." > > Unfortunately, from what I've seen, there isn't a single rule being > broken, but instead I've been running into lots of different "surprises" > in the various toolkits. But I am starting to suspect that my old > versions of Gtk (or PyGtk) have some bonafide leaks. > > I just finished submitting patches (to the tracker) for a number of > memory leaks in the Tk, Gtk, and Wx backends (other backends will > hopefully follow). I did all my testing on RHEL4 with Python 2.5 and > recent SVN matplotlib (rev. 3244), so it's quite possible that memory > leaks still remain on other platforms. > > Tk: > See the patch: > http://sourceforge.net/tracker/index.php?func=detail&aid=1745400&group_id=80706&atid=560722 > > > Even after this patch, Tkinter still leaks 28 bytes every time it is > initialized (every time a toplevel window is created). There may be a > way to avoid the subsequent initializations, but it wasn't immediately > obvious to me, and given the small size of this leak, I've passed on it > for now. > > Gtk: > See the patch: > http://sourceforge.net/tracker/index.php?func=detail&aid=1745406&group_id=80706&atid=560722 > > > The patch fixes a number of Python-level leaks. > Unfortunately, the Gdk rendering backend still leaks gtk.gdk.Window > objects, and I have so far been unable to determine a fix. GtkAgg, > however, does not have this leak. > Under pygtk-2.4.0, the toolbars leak gdk.Pixbuf's and file selector > dialogs. > Since these issues appear to be bugs in gtk and/or pygtk itself, I will > probably first try a more recent version of them. (The gtk installed on > RHEL4 is ancient (2.4), and I haven't yet tried building my own. If > anyone has a more recent version of pygtk handy, I would appreciate a > report from memleak_gui.py after applying my patch.) > > Wx: > See the patch: > http://sourceforge.net/tracker/index.php?func=detail&aid=1745408&group_id=80706&atid=560722 > > > This one was fairly simple, though surprising. Top-level windows do not > fully destroy themselves as long as there are pending events. Manually > flushing the event queue causes the windows to go away. See the > wxPython docs: > http://wxwidgets.org/manuals/stable/wx_wxwindow.html#wxwindowdestroy > > There were no further leaks in Wx/WxAgg that I was able to detect (in my > environment). 
> > > As an aside, I thought I'd share the techniques I used to find these > leaks (hopefully not to be pedantic, but these things were hard to come > by online...), and it might be useful to some. > > For C/C++ memory leaks, I really like valgrind (though it is > Linux-only), though be sure to follow the directions to get it to play > well with Python. I recommend rebuilding Python with > "--without-pymalloc" to make memory reporting in general much more > sensical (though slower). See: > http://svn.python.org/view/python/trunk/Misc/README.valgrind > For an example, you can see the rsvg memory leak here: > > ==15979== 1,280 bytes in 20 blocks are definitely lost in loss record > 13,506 of 13,885 > ==15979== at 0x4004405: malloc (vg_replace_malloc.c:149) > ==15979== by 0x314941: (within /usr/lib/libart_lgpl_2.so.2.3.16) > ==15979== by 0x315E0C: (within /usr/lib/libart_lgpl_2.so.2.3.16) > ==15979== by 0x31624A: art_svp_intersector (in > /usr/lib/libart_lgpl_2.so.2.3.16) > ==15979== by 0x316660: art_svp_intersect (in > /usr/lib/libart_lgpl_2.so.2.3.16) > ==15979== by 0x6BFA86C: (within /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x6BFD801: rsvg_render_path (in > /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x6BFD9E7: rsvg_handle_path (in > /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x6BFFDB5: rsvg_start_path (in /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x6C07F59: (within /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x9C24BB: xmlParseStartTag (in /usr/lib/libxml2.so.2.6.16) > ==15979== by 0xA4DADC: xmlParseChunk (in /usr/lib/libxml2.so.2.6.16) > ==15979== by 0x6C08ED5: rsvg_handle_write_impl (in > /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x6C09539: rsvg_handle_write (in > /usr/lib/librsvg-2.so.2.8.1) > ==15979== by 0x5D51B51: (within > /usr/lib/gtk-2.0/2.4.0/loaders/svg_loader.so) > ==15979== by 0x32871F: (within /usr/lib/libgdk_pixbuf-2.0.so.0.400.13) > ==15979== by 0x32885E: gdk_pixbuf_new_from_file (in > /usr/lib/libgdk_pixbuf-2.0.so.0.400.13) > ==15979== by 0x5CEE34: (within /usr/lib/libgtk-x11-2.0.so.0.400.13) > ==15979== by 0x5CF095: gtk_window_set_default_icon_from_file (in > /usr/lib/libgtk-x11-2.0.so.0.400.13) > ==15979== by 0x59FCD1B: _wrap_gtk_window_set_default_icon_from_file > (gtk.c:38156) > ==15979== by 0x80C29B6: PyEval_EvalFrameEx (ceval.c:3564) > > For finding cycles that result in uncollectable garbage, I wrote a > cycle-finding utility (see attached file). (I plan to submit this as a > Python Cookbook entry). Given a list of objects of interest, it will > print out all the reference cycles involving those objects (though it > can't traverse through extension objects that don't expose reference > information to the garbage collector). > > Objects that leak because legitimate Python references to them are still > around are some of the trickiest to find. (This was an answer to a job > interview question I was once asked: "How can you leak memory in > Java/some other garbage collected language?") I find it useful to > compare snapshots of all the objects in the interpreter before and after > a suspected leak: > > existing_objects = [id(x) for x in gc.get_objects()] > # ... do something that leaks ... > # ... delete everything you can ... > remaining_objects = [x for x in gc.get_objects() if id(x) not in > existing_objects] > > One can then scour through remaining_objects for anything that suspected > of causing the problem, and do cycle detection on those objects, if > necessary to find ways to forcibly remove them. 
>
> Cheers,
> Mike
On 6/30/07, Eric Firing <ef...@ha...> wrote:
> Mike,
>
> All this sounds like great progress--thanks!  I particularly appreciate
> the descriptions of what problems you found and how you found them.
>
> John et al.: is there a maintainer for each of these backends?  I think

gtk: Steve Chaplin or me
wx: Ken McIvor
qt: Darren?
tk: Charlie?

After we get these patches in, we can just give Michael commit privileges :-) I can probably look at this Monday, but if you want to commit and test some of these before then, please do so.

JDH
John Hunter wrote:
> On 6/30/07, Eric Firing <ef...@ha...> wrote:
>> Mike,
>>
>> All this sounds like great progress--thanks!  I particularly appreciate
>> the descriptions of what problems you found and how you found them.
>>
>> John et al.: is there a maintainer for each of these backends?  I think
>
> gtk: Steve Chaplin or me
> wx: Ken McIvor
> qt: Darren?
> tk: Charlie?
>
> After we get these patches in, we can just give Michael commit
> privileges :-) I can probably look at this Monday, but if you want to
> commit and test some of these before then, please do so.

Done.  It looks like there is still plenty of memory leakage, but there are improvements, and the huge list of uncollectable garbage with tkAgg is gone.

I also made memleak_gui.py more flexible with arguments.  For example, here are tests with three backends, a generous number of loops, and suppression of intermediate output:

python ../unit/memleak_gui.py -d wx -s 500 -e 1000 -q
uncollectable list: []
Backend WX, toolbar toolbar2
Averaging over loops 500 to 1000
Memory went from 29316k to 31211k
Average memory consumed per loop: 3.7900k bytes

python ../unit/memleak_gui.py -d tkagg -s 500 -e 1000 -q
uncollectable list: []
Backend TkAgg, toolbar toolbar2
Averaging over loops 500 to 1000
Memory went from 29202k to 31271k
Average memory consumed per loop: 4.1380k bytes

python ../unit/memleak_gui.py -d gtkagg -s 500 -e 1000 -q
uncollectable list: []
Backend GTKAgg, toolbar toolbar2
Averaging over loops 500 to 1000
Memory went from 29324k to 31131k
Average memory consumed per loop: 3.6140k bytes

So, this test is still showing problems, with similar memory consumption in these three backends.

Eric

> JDH
Eric Firing wrote:
> So, this test is still showing problems, with similar memory
> consumption in these three backends.

Not necessarily. By default, Python allocates large pools from the operating system and then manages those pools itself (through its pymalloc allocator). Prior to Python 2.5, those pools were never freed. With Python 2.5, empty pools, when they occur, are freed back to the OS. Due to fragmentation issues, even if there is enough free space in those pools for new objects, new pools may need to be created anyway, since Python objects can't be moved once they are created. So seeing modest increases in memory usage during a long-running Python application is typical, and not something that can be avoided without micro-optimizing for pool performance (something that may be very difficult). If memory usage is truly increasing in an unbounded way then, yes, there may be problems; otherwise it should eventually stabilize (though in a test such as memleak_gui that may take many iterations). It's more interesting to see the curve of memory usage over time than the average over a number of iterations.

For further reading, see:
http://evanjones.ca/python-memory.html
README.valgrind in the Python source
http://mail.python.org/pipermail/python-dev/2006-March/061991.html

Because of this, using the total memory allocated by the Python process to track memory leaks is a pretty blunt tool. More important metrics are the total number of GC objects (gc.get_objects()), GC garbage (gc.garbage), and using a tool like Valgrind or Purify to find mismatched malloc/frees. Another useful tool (though one I haven't yet resorted to with matplotlib testing) is to build Python with COUNT_ALLOCS, which then gives access to the total number of mallocs and frees in the Python interpreter at runtime.

IMO, the only reasonable way to use the total memory usage of Python to debug memory leaks is if you build Python without pool allocation (--without-pymalloc). That was how I was debugging memory leaks last week (in conjunction with valgrind and the gc module), and with that configuration, I was only seeing memory leakage with Pygtk 2.4, and a very small amount with Tk. Are your numbers from a default build? If so, I'll rebuild my Python and check my numbers against yours. If they match, I suspect there's little we can do.

Cheers,
Mike
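The per-iteration bookkeeping Mike describes can be sketched as follows; the loop body is only a stand-in for the real figure create/destroy work that memleak_gui.py performs:

# Sketch only: watch collector-tracked object counts and uncollectable
# garbage per iteration instead of relying on the process size alone.
import gc

def one_iteration():
    # Stand-in for "create a figure, draw it, close it".
    scratch = [dict(i=i) for i in range(100)]
    del scratch

for i in range(100):
    one_iteration()
    gc.collect()
    if i % 10 == 0:
        print('%4d iterations: %6d tracked objects, %d uncollectable'
              % (i, len(gc.get_objects()), len(gc.garbage)))

A growing process size with a flat object count suggests fragmentation or a C-level leak; a growing object count points at Python references or cycles being kept alive.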
Eric Firing wrote: > I also made memleak_gui.py more flexible with arguments. For example, > here are tests with three backends, a generous number of loops, and > suppression of intermediate output: Those changes are really helpful. I just added code to display the total number of objects in the Python interpreter (len(gc.get_objects()) with each iteration as well, as that can be useful. (It doesn't rule out memory leaks, but if it is increasing, that is definitely a problem.) I also added a commandline option to print out any cycles involving uncollectable objects, and added the necessary function to do so to cbook.py. Cheers, Mike
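The cycle-printing helper itself is not reproduced in this thread; as a rough sketch of the idea, one can walk gc.get_referents() from each suspect object and report any chain that returns to its start (illustrative only, not the actual cbook.py code):

import gc

def print_cycles(objects, max_depth=8):
    """Print reference chains that lead from each given object back to itself.

    Illustrative sketch; extension types that do not expose their references
    to the garbage collector cannot be traversed.
    """
    def walk(node, start_id, path, seen, depth):
        if depth > max_depth:
            return
        for referent in gc.get_referents(node):
            if id(referent) == start_id:
                names = [type(x).__name__ for x in path]
                print(' -> '.join(names) + ' -> (back to start)')
            elif id(referent) not in seen:
                seen.add(id(referent))
                walk(referent, start_id, path + [referent], seen, depth + 1)

    for obj in objects:
        walk(obj, id(obj), [obj], set([id(obj)]), 0)

# Example: a simple two-object cycle.
class Node(object):
    pass

a, b = Node(), Node()
a.other, b.other = b, a
print_cycles([a])   # prints something like: Node -> dict -> Node -> dict -> (back to start)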
Michael Droettboom wrote:
> Eric Firing wrote:
>> So, this test is still showing problems, with similar memory
>> consumption in these three backends.
> Not necessarily.  By default, Python allocates large pools from the
> operating system and then manages those pools itself (through its
> pymalloc allocator).  Prior to Python 2.5, those pools were never freed.
> With Python 2.5, empty pools, when they occur, are freed back to the
> OS.  Due to fragmentation issues, even if there is enough free space in
> those pools for new objects, new pools may need to be created anyway,
> since Python objects can't be moved once they are created.  So seeing
> modest increases in memory usage during a long-running Python
> application is typical, and not something that can be avoided without
> micro-optimizing for pool performance (something that may be very
> difficult).  If memory usage is truly increasing in an unbounded way
> then, yes, there may be problems; otherwise it should eventually
> stabilize (though in a test such as memleak_gui that may take many
> iterations).  It's more interesting to see the curve of memory usage
> over time than the average over a number of iterations.

I agree.  I just ran 2000 iterations with GtkAgg, plotted every 10th point, and the increase is linear (apart from a little bumpiness) over the entire range (not just the last 1000 iterations reported below):

Backend GTKAgg, toolbar toolbar2
Averaging over loops 1000 to 2000
Memory went from 31248k to 35040k
Average memory consumed per loop: 3.7920k bytes

Maybe this is just the behavior of pymalloc in 2.5?

> For further reading, see:
> http://evanjones.ca/python-memory.html
> README.valgrind in the Python source
> http://mail.python.org/pipermail/python-dev/2006-March/061991.html
>
> Because of this, using the total memory allocated by the Python process
> to track memory leaks is a pretty blunt tool.  More important metrics
> are the total number of GC objects (gc.get_objects()), GC garbage
> (gc.garbage), and using a tool like Valgrind or Purify to find
> mismatched malloc/frees.  Another useful tool (though one I haven't yet
> resorted to with matplotlib testing) is to build Python with
> COUNT_ALLOCS, which then gives access to the total number of mallocs
> and frees in the Python interpreter at runtime.
>
> IMO, the only reasonable way to use the total memory usage of Python to
> debug memory leaks is if you build Python without pool allocation
> (--without-pymalloc).  That was how I was debugging memory leaks last
> week (in conjunction with valgrind and the gc module), and with that
> configuration, I was only seeing memory leakage with Pygtk 2.4, and a
> very small amount with Tk.  Are your numbers from a default build?  If
> so, I'll rebuild my Python and check my numbers against yours.  If they
> match, I suspect there's little we can do.

I used stock Python 2.5 from ubuntu Feisty.  I should compile a version as you suggest, but I haven't done it yet.

Eric

> Cheers,
> Mike
Forgot to attach the patches. Oops, Mike Michael Droettboom wrote: > More results: > > I've built and tested a more recent pygtk+ stack. (glib-2.12, > gtk+-2.10.9, librsvg-2.16.1, libxml2-2.6.29, pygobject-2.13.1, > pygtk-2.10.4...). The good news is that the C-level leaks I was seeing > in pygtk 2.2 and 2.4 are resolved. In particular, using an SVG icon and > Gdk rendering no longer seems problematic. I would suggest that anyone > using old versions of pygtk should upgrade, rather than spending time on > workarounds for matplotlib -- do you all agree? And my Gtk patch should > probably be reverted to use an SVG icon for the window again (or to only > do it on versions of pygtk > 2.xx). I don't know what percentage of > users are still using pygtk-2.4 and earlier... > > There is, however, a new patch (attached) to fix a leak of > FileChooserDialog objects that I didn't see in earlier pygtk versions. > I have to admit that I'm a bit puzzled by the solution -- it seems that > the FileChooserDialog object refuses to destruct whenever any custom > Python attributes have been added to the object. It doesn't really need > them in this case so it's an easy fix, but I'm not sure why that was > broken -- other classes do this and don't have problems (e.g. > NavigationToolbar2GTK). Maybe a pygtk expert out there knows what this > is about. It would be great if this resolved the linear memory growth > that Eric is seeing with the Gtk backend. > > GtkCairo seems to be free of leaks. > > QtAgg (qt-3.3) was leaking because of a cyclical reference in the > signals between the toolbar and its buttons. (Patch attached). > > Qt4 is forthcoming (I'm still trying to compile something that runs the > demos cleanly). > > I tried the FltkAgg backend, but it doesn't seem to close the window at > all when the figure is closed -- instead I get dozens of windows open at > once. Is that a known bug or correct behavior? > > Cheers, > Mike > > Eric Firing wrote: > >> Michael Droettboom wrote: >> >>> Eric Firing wrote: >>> >>>> So, this test is still showing problems, with similar memory >>>> consumption in these three backends. >>>> >>> Not necessarily. By default, Python allocates large pools from the >>> operating system and then manages those pools itself (though its >>> PyMalloc call). Prior to Python 2.5, those pools were never freed. >>> With Python 2.5, empty pools, when they occur, are freed back to the >>> OS. Due to fragmentation issues, even if there is enough free space >>> in those pools for new objects, new pools may need to be created >>> anyway, since Python objects can't be moved once they are created. >>> So seeing modest increases in memory usage during a long-running >>> Python application is typical, and not something that can be avoided >>> wiinaccurate at finding memory leaksthout micro-optimizing for pool >>> performance (something that may be very difficult). If memory usage >>> is truly increasing in an unbounded way, then, yes, there may be >>> problems, but it should eventually stabilize (though in a test such >>> as memleak_gui that may take many iterations). It's more interesting >>> to see the curve of memory usage over time than the average over a >>> number of iterations. >>> >> I agree. 
I just ran 2000 iterations with GtkAgg, plotted every 10th >> point, and the increase is linear (apart from a little bumpiness) over >> the entire range (not just the last 1000 iterations reported below): >> >> Backend GTKAgg, toolbar toolbar2 >> Averaging over loops 1000 to 2000 >> Memory went from 31248k to 35040k >> Average memory consumed per loop: 3.7920k bytes >> >> Maybe this is just the behavior of pymalloc in 2.5? >> >> >> >>> For further reading, see: >>> http://evanjones.ca/python-memory.html >>> README.valgrind in the Python source >>> http://mail.python.org/pipermail/python-dev/2006-March/061991.html >>> >>> Because of this, using the total memory allocated by the Python >>> process to track memory leaks is pretty blunt tool. More important >>> metrics are the total number of GC objects (gc.get_objects()), GC >>> garbage (gc.garbage), and using a tool like Valgrind or Purify to >>> find mismatched malloc/frees. Another useful tool (but I didn't >>> resort to yet with matplotlib testing) is to build Python with >>> COUNT_ALLOCS, which then gives access to the total number of mallocs >>> and frees in the Python interpreter at runtime. >>> >>> IMO, the only reasonable way to use the total memory usage of Python >>> to debug memory leaks is if you build Python without pool allocation >>> (--without-pymalloc). That was how I was debugging memory leaks last >>> week (in conjunction with valgrind, and the gc module), and with that >>> configuration, I was only seeing memory leakage with Pygtk 2.4, and a >>> very small amount with Tk. Are your numbers from a default build? >>> If so, I'll rebuild my Python and check my numbers against yours. If >>> they match, I suspect there's little we can do. >>> >> I used stock Python 2.5 from ubuntu Feisty. I should compile a >> version as you suggest, but I haven't done it yet. >> >> Eric >> >> >>> Cheers, >>> Mike >>> >>> >> > > > ------------------------------------------------------------------------- > This SF.net email is sponsored by DB2 Express > Download DB2 Express C - the FREE version of DB2 express and take > control of your XML. No limits. Just data. Click to get it now. > http://sourceforge.net/powerbar/db2/ > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel > >
On 7/2/07, Michael Droettboom <md...@st...> wrote: > Forgot to attach the patches. Michael -- if you send me your sf ID I'll add you to the committers list and you can check these in directly. Vis-a-vis the gtk question, I agree that we should encourage people to upgrade who are suffering from the leak rather than work around it. I would like to summarize the status of known leaks for the FAQ so perhaps you could summarize across the backends what kind of leaks remain in the --without-pymalloc with the known problems fixed (eg the gtk upgrade). If you could simply send me an update for the memory leak FAQ (don't worry about the formatting, I can take care of that) that would be great. Or if you are feeling doubly adventurous, you can simply update the FAQ in the htdocs/faq.html.template svn document and commit it along with your other changes. Thanks for all the very useful and detailed work! JDH
John Hunter wrote: > On 7/2/07, Michael Droettboom <md...@st...> wrote: >> Forgot to attach the patches. > > Michael -- if you send me your sf ID I'll add you to the committers > list and you can check these in directly. mdboom > Vis-a-vis the gtk question, I agree that we should encourage people to > upgrade who are suffering from the leak rather than work around it. I > would like to summarize the status of known leaks for the FAQ so > perhaps you could summarize across the backends what kind of leaks > remain in the --without-pymalloc with the known problems fixed (eg the > gtk upgrade). If you could simply send me an update for the memory > leak FAQ (don't worry about the formatting, I can take care of that) > that would be great. Or if you are feeling doubly adventurous, you > can simply update the FAQ in the htdocs/faq.html.template svn document > and commit it along with your other changes. Will do. Cheers, Mike
Michael Droettboom wrote: > Eric Firing wrote: >> I also made memleak_gui.py more flexible with arguments. For example, >> here are tests with three backends, a generous number of loops, and >> suppression of intermediate output: > > Those changes are really helpful. I just added code to display the > total number of objects in the Python interpreter (len(gc.get_objects()) > with each iteration as well, as that can be useful. (It doesn't rule > out memory leaks, but if it is increasing, that is definitely a problem.) > > I also added a commandline option to print out any cycles involving > uncollectable objects, and added the necessary function to do so to > cbook.py. > > Cheers, > Mike Mike, Good, thank you. I just committed a change to the output formatting of memleak_gui so that if you redirect it to a file, that file can be loaded with pylab.load() in case you want to plot the columns. (At least this is true if you don't use the -c option.) Yesterday, before your commits, I compared memleak_gui with stock Python 2.4 versus stock 2.5 (both from ubuntu feisty) and found very little difference in the OS memory numbers. Eric
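For plotting those columns, a small sketch along the lines Eric suggests (the filename is hypothetical, and the explicit parsing loop can be replaced by pylab.load() as he notes):

# Plot redirected memleak_gui output: column 0 is the iteration,
# column 1 the OS memory in kB, column 2 the number of Python objects.
import pylab

iters, mem_kb, nobjs = [], [], []
for line in open('memleak_gtkagg.asc'):     # example name for a redirected run
    if line.startswith('#') or not line.strip():
        continue                            # skip the '#' header/footer lines
    cols = line.split()
    iters.append(int(cols[0]))
    mem_kb.append(int(cols[1]))
    nobjs.append(int(cols[2]))

pylab.subplot(211)
pylab.plot(iters, mem_kb)
pylab.ylabel('OS memory (kB)')
pylab.subplot(212)
pylab.plot(iters, nobjs)
pylab.ylabel('Python objects')
pylab.xlabel('iteration')
pylab.show()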
Michael Droettboom wrote: > Eric Firing wrote: >> >> I just committed a change to the output formatting of memleak_gui so >> that if you redirect it to a file, that file can be loaded with >> pylab.load() in case you want to plot the columns. (At least this is >> true if you don't use the -c option.) >> > Great. Sorry for stomping on that ;) >> Yesterday, before your commits, I compared memleak_gui with stock >> Python 2.4 versus stock 2.5 (both from ubuntu feisty) and found very >> little difference in the OS memory numbers. > Are they still increasing linearly? I'm still seeing some mystery leaks > with Gtk, Qt4 and (much smaller) on Tk. Qt and Wx seem fine here. Attached are runs with gtk, wx, qtagg, and tkagg. Quite a variety of results: tkagg is best, with only slow memory growth and a constant number of python objects; qtagg grows by 2.2k per loop, with no increase in python object count; wx (which is built on gtk) consumes 3.5k per loop, with an increasing object count; gtk consumes 1.8k per loop with an increasing object count. All runs are on stock ubuntu feisty python 2.5. Eric > Unfortunately Qt4 crashes valgrind, so it's not of much use. > I'm curious whether your results match that. I'm not terribly surprised > that 2.4 isn't different from 2.5, since the case in which entire memory > pools are freed in 2.5 is probably hard to trigger. > > Cheers, > Mike
On Tuesday 03 July 2007 04:33:46 pm Eric Firing wrote:
> Michael Droettboom wrote:
> > Eric Firing wrote:
> >> I just committed a change to the output formatting of memleak_gui so
> >> that if you redirect it to a file, that file can be loaded with
> >> pylab.load() in case you want to plot the columns.  (At least this is
> >> true if you don't use the -c option.)
> >
> > Great.  Sorry for stomping on that ;)
> >
> >> Yesterday, before your commits, I compared memleak_gui with stock
> >> Python 2.4 versus stock 2.5 (both from ubuntu feisty) and found very
> >> little difference in the OS memory numbers.
> >
> > Are they still increasing linearly?  I'm still seeing some mystery leaks
> > with Gtk, Qt4 and (much smaller) on Tk.  Qt and Wx seem fine here.
>
> Attached are runs with gtk, wx, qtagg, and tkagg.  Quite a variety of
> results: tkagg is best, with only slow memory growth and a constant
> number of python objects; qtagg grows by 2.2k per loop, with no increase
> in python object count; wx (which is built on gtk) consumes 3.5k per
> loop, with an increasing object count; gtk consumes 1.8k per loop with
> an increasing object count.
>
> All runs are on stock ubuntu feisty python 2.5.
>
> Eric
>
> > Unfortunately Qt4 crashes valgrind, so it's not of much use.
> > I'm curious whether your results match that.  I'm not terribly surprised
> > that 2.4 isn't different from 2.5, since the case in which entire memory
> > pools are freed in 2.5 is probably hard to trigger.

I am swamped at work, and have not been able to follow this thread closely. But I just updated from svn and ran memleak_gui.py with qt4:

# columns are: iteration, OS memory (k), number of python objects
#
0 37364 53792
10 37441 53792
20 37441 53792
30 37525 53792
40 37483 53792
50 37511 53792
60 37539 53792
70 37568 53792
80 37596 53792
90 37624 53792
100 37653 53792
# columns above are: iteration, OS memory (k), number of python objects
#
# uncollectable list: []
#
# Backend Qt4Agg, toolbar toolbar2
# Averaging over loops 30 to 100
# Memory went from 37525k to 37653k
# Average memory consumed per loop: 1.8286k bytes

Darren
Michael Droettboom wrote:
> Eric Firing wrote:
>> Attached are runs with gtk, wx, qtagg, and tkagg.  Quite a variety of
>> results: tkagg is best, with only slow memory growth and a constant
>> number of python objects; qtagg grows by 2.2k per loop, with no
>> increase in python object count; wx (which is built on gtk) consumes
>> 3.5k per loop, with an increasing object count; gtk consumes 1.8k per
>> loop with an increasing object count.
>>
>> All runs are on stock ubuntu feisty python 2.5.
> Thanks for these results.  Unfortunately, I'm seeing different results
> here. [dagnabbit!]  None of them have an increasing object count for
> me, which leads me to suspect there's some version difference between
> your environment and mine that isn't being accounted for.
>
> Gtk[Agg|Cairo] -- 1.3k per loop.
> Wx[Agg] -- 0.010k per loop
> QtAgg -- 2.3k per loop (which is in the same ballpark as your result)
> Qt4Agg -- 1.4k per loop (which seems to be in the same ballpark as
> Darren Dale's result)
> TkAgg -- 0.29k per loop
>
> I don't know if the size of memory per loop is directly comparable
> between your environment and mine, but certainly the shape of the curve,
> and whether the number of Python objects is growing is very relevant.
>
> I made some more commits to SVN on 07/03/07 necessary for recent
> versions of gtk+ and qt.  Did you (by any chance) not get those
> patches?  It would also be interesting to know which versions of the
> toolkits you have, as they are probably different from mine.  Is it safe
> to assume that they are all the stock Ubuntu feisty packages?  In any
> case, I have updated memleak_gui.py to display the relevant toolkit
> versions.  I've also attached a script to display the toolkit versions.
> Its output on my machine is:
>
> # pygtk version: (2, 10, 4), gtk version: (2, 10, 9)
> # PyQt4 version: 4.2, Qt version 40300
> # pyqt version: 3.17.2, qt version: 30303
> # wxPython version: 2.8.4.0
> # Tkinter version: $Revision: 50704 ,ドル Tk version: 8.4, Tcl version: 8.4

Here is mine--not very different from yours:

# pygtk version: (2, 10, 4), gtk version: (2, 10, 11)
# PyQt4 version: 4.1, Qt version 40202
# pyqt version: 3.17, qt version: 30307
# wxPython version: 2.8.1.1
# Tkinter version: $Revision: 50704 ,ドル Tk version: 8.4, Tcl version: 8.4

Everything is stock ubuntu feisty.  wx is built on gtk.  I'm pretty sure I did the tests after your 07/03/07 commits.

I just updated from svn and tried to rerun the wx test, but ran into an error:

efiring@manini:~/programs/py/mpl/tests$ python ../matplotlib_units/unit/memleak_gui.py -dwx -s1000 -e2000 > ~/temp/memleak_wx_0705.asc
Traceback (most recent call last):
  File "../matplotlib_units/unit/memleak_gui.py", line 58, in <module>
    pylab.close(fig)
  File "/usr/local/lib/python2.5/site-packages/matplotlib/pylab.py", line 742, in close
    _pylab_helpers.Gcf.destroy(manager.num)
  File "/usr/local/lib/python2.5/site-packages/matplotlib/_pylab_helpers.py", line 28, in destroy
    figManager.destroy()
  File "/usr/local/lib/python2.5/site-packages/matplotlib/backends/backend_wx.py", line 1403, in destroy
    self.frame.Destroy()
  File "/usr/local/lib/python2.5/site-packages/matplotlib/backends/backend_wx.py", line 1362, in Destroy
    wxapp.Yield()
NameError: global name 'wxapp' is not defined

Eric
Michael Droettboom wrote: > Interesting. I don't get that, but I do get some random segfaults (I > got lucky the first time I tested). > > I'm awfully surprised that wx.GetApp() would return an iterator, as you > are getting, so maybe it's corruption of some sort? > > Reverting to revision 3441 on backend_wx.py does resolve this issue for > me, so it is related to removing the wxapp global variable. While I [...] Works for me now, and the result is attached. Object count is still climbing. Eric
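The wx behaviour underlying both the original Wx patch and the wxapp traceback above is deferred destruction; the pattern, sketched here independently of the actual backend_wx.py code, looks like this:

# Sketch of the pattern discussed above, not the actual backend_wx.py code:
# wx defers top-level window destruction until pending events are processed,
# so the window (and its memory) lingers unless the event queue is flushed.
import wx

def really_close(frame):
    frame.Destroy()          # only schedules destruction
    app = wx.GetApp()
    if app is not None:
        app.Yield()          # flush pending events so destruction completes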
Interesting... When you get a chance, would you mind running the attached script? This is how I was finding object leaks before. It takes a single commandline argument that is the number of iterations. Can you send me the outputs from 1 and 2 iterations? That way we should be able to see what type of object is being leaked, which is a good first step. If that doesn't make it immediately obvious, I'll try this on my Ubuntu box at home and see if I can reproduce what you're seeing. Cheers, Mike Eric Firing wrote: > Michael Droettboom wrote: >> Interesting. I don't get that, but I do get some random segfaults (I >> got lucky the first time I tested). >> >> I'm awfully surprised that wx.GetApp() would return an iterator, as >> you are getting, so maybe it's corruption of some sort? >> >> Reverting to revision 3441 on backend_wx.py does resolve this issue >> for me, so it is related to removing the wxapp global variable. While I > > [...] > > Works for me now, and the result is attached. Object count is still > climbing. > > Eric > ------------------------------------------------------------------------ > > # columns are: iteration, OS memory (k), number of python objects > # > 0 18849 75791 > 10 18849 75831 > 20 18849 75871 > 30 18849 75911 > 40 18930 75951 > 50 18930 75991 > 60 19038 76031 > 70 19038 76071 > 80 19038 76111 > 90 19038 76151 > 100 19124 76191 > 110 19124 76231 > 120 19235 76271 > 130 19235 76311 > 140 19316 76351 > 150 19316 76391 > 160 19417 76431 > 170 19417 76471 > 180 19417 76511 > 190 19513 76551 > 200 19513 76591 > 210 19513 76631 > 220 19612 76671 > 230 19612 76711 > 240 19612 76751 > 250 19710 76791 > 260 19710 76831 > 270 19710 76871 > 280 19800 76911 > 290 19800 76951 > 300 19800 76991 > 310 19893 77031 > 320 19893 77071 > 330 19893 77111 > 340 19978 77151 > 350 19978 77191 > 360 20088 77231 > 370 20088 77271 > 380 20088 77311 > 390 20192 77351 > 400 20192 77391 > 410 20192 77431 > 420 20192 77471 > 430 20274 77511 > 440 20274 77551 > 450 20374 77591 > 460 20374 77631 > 470 20374 77671 > 480 20484 77711 > 490 20484 77751 > 500 20484 77791 > 510 20588 77831 > 520 20588 77871 > 530 20588 77911 > 540 20588 77951 > 550 20683 77991 > 560 20683 78031 > 570 20683 78071 > 580 20763 78111 > 590 20796 78151 > 600 20861 78191 > 610 20961 78231 > 620 20961 78271 > 630 20961 78311 > 640 21060 78351 > 650 21060 78391 > 660 21060 78431 > 670 21156 78471 > 680 21156 78511 > 690 21156 78551 > 700 21254 78591 > 710 21254 78631 > 720 21254 78671 > 730 21353 78711 > 740 21353 78751 > 750 21353 78791 > 760 21442 78831 > 770 21442 78871 > 780 21442 78911 > 790 21528 78951 > 800 21528 78991 > 810 21638 79031 > 820 21638 79071 > 830 21638 79111 > 840 21638 79151 > 850 21733 79191 > 860 21733 79231 > 870 21733 79271 > 880 21820 79311 > 890 21820 79351 > 900 21931 79391 > 910 21931 79431 > 920 21931 79471 > 930 21931 79511 > 940 22015 79551 > 950 22015 79591 > 960 22126 79631 > 970 22126 79671 > 980 22126 79711 > 990 22126 79751 > 1000 22207 79791 > 1010 22207 79831 > 1020 22298 79871 > 1030 22298 79911 > 1040 22335 79951 > 1050 22401 79991 > 1060 22503 80031 > 1070 22503 80071 > 1080 22503 80111 > 1090 22601 80151 > 1100 22601 80191 > 1110 22601 80231 > 1120 22699 80271 > 1130 22699 80311 > 1140 22699 80351 > 1150 22798 80391 > 1160 22798 80431 > 1170 22798 80471 > 1180 22897 80511 > 1190 22897 80551 > 1200 22897 80591 > 1210 22995 80631 > 1220 22995 80671 > 1230 22995 80711 > 1240 23086 80751 > 1250 23086 80791 > 1260 23086 80831 > 1270 23180 80871 > 1280 23180 80911 > 1290 23283 80951 > 
1300 23283 80991 > 1310 23283 81031 > 1320 23283 81071 > 1330 23366 81111 > 1340 23366 81151 > 1350 23475 81191 > 1360 23475 81231 > 1370 23475 81271 > 1380 23475 81311 > 1390 23552 81351 > 1400 23552 81391 > 1410 23643 81431 > 1420 23643 81471 > 1430 23595 81511 > 1440 23700 81551 > 1450 23808 81591 > 1460 23808 81631 > 1470 23808 81671 > 1480 23787 81711 > 1490 23852 81751 > 1500 23951 81791 > 1510 24048 81831 > 1520 24048 81871 > 1530 24048 81911 > 1540 24146 81951 > 1550 24146 81991 > 1560 24146 82031 > 1570 24247 82071 > 1580 24247 82111 > 1590 24247 82151 > 1600 24344 82191 > 1610 24344 82231 > 1620 24344 82271 > 1630 24442 82311 > 1640 24442 82351 > 1650 24442 82391 > 1660 24542 82431 > 1670 24542 82471 > 1680 24542 82511 > 1690 24639 82551 > 1700 24639 82591 > 1710 24639 82631 > 1720 24738 82671 > 1730 24738 82711 > 1740 24738 82751 > 1750 24837 82791 > 1760 24837 82831 > 1770 24837 82871 > 1780 24935 82911 > 1790 24935 82951 > 1800 24935 82991 > 1810 25025 83031 > 1820 25025 83071 > 1830 25025 83111 > 1840 25118 83151 > 1850 25118 83191 > 1860 25104 83231 > 1870 25170 83271 > 1880 25281 83311 > 1890 25281 83351 > 1900 25281 83391 > 1910 25385 83431 > 1920 25385 83471 > 1930 25385 83511 > 1940 25362 83551 > 1950 25400 83591 > 1960 25469 83631 > 1970 25596 83671 > 1980 25596 83711 > 1990 25693 83751 > 2000 25693 83791 > # columns above are: iteration, OS memory (k), number of python objects > # > # uncollectable list: [] > # > # Backend WX, toolbar toolbar2 > # wxPython version: 2.8.1.1 > # Averaging over loops 1000 to 2000 > # Memory went from 22207k to 25693k > # Average memory consumed per loop: 3.4860k bytes > >
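Mike's attached comparison script is not preserved in the archive. The approach it describes (snapshot object ids, run N figure open/close iterations, then report what survives, grouped by type) can be sketched roughly as follows; the figure handling and output format are placeholders, not the real script:

# Rough sketch of the object-type diff described above; not the attached script.
import gc
import sys

def report_survivors(iterations):
    import pylab                        # backend selected as usual (e.g. via matplotlibrc)
    before = set(id(obj) for obj in gc.get_objects())
    for i in range(iterations):
        fig = pylab.figure()
        pylab.close(fig)
    gc.collect()
    survivors = [obj for obj in gc.get_objects() if id(obj) not in before]
    counts = {}
    for obj in survivors:
        key = type(obj).__name__
        counts[key] = counts.get(key, 0) + 1
    for key in sorted(counts):
        print('*** %4d %s' % (counts[key], key))
    print('uncollectable list: %s' % gc.garbage)

if __name__ == '__main__':
    report_survivors(int(sys.argv[1]))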
Michael Droettboom wrote: > Interesting... > > When you get a chance, would you mind running the attached script? This > is how I was finding object leaks before. It takes a single commandline > argument that is the number of iterations. Can you send me the outputs > from 1 and 2 iterations? That way we should be able to see what type of > object is being leaked, which is a good first step. efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 1 75891 76010 *** <class 'wx._core.PySimpleApp'> *** <class 'wx._core._wxPyDeadObject'> uncollectable list: [] efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 2 GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer Dell424 could not be loaded. GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer pslj4m could not be loaded. 75891 76014 *** <class 'wx._core.PySimpleApp'> *** <class 'wx._core._wxPyDeadObject'> uncollectable list: [] Eric
Yep. Nothing obvious. I'll have to have a look on Ubuntu and see if that makes a difference. Cheers, Mike Eric Firing wrote: > Michael Droettboom wrote: >> Interesting... >> >> When you get a chance, would you mind running the attached script? >> This is how I was finding object leaks before. It takes a single >> commandline argument that is the number of iterations. Can you send >> me the outputs from 1 and 2 iterations? That way we should be able >> to see what type of object is being leaked, which is a good first step. > > efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 1 > 75891 76010 > *** <class 'wx._core.PySimpleApp'> > *** <class 'wx._core._wxPyDeadObject'> > > uncollectable list: [] > > efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 2 > GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer > Dell424 could not be loaded. > GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer pslj4m > could not be loaded. > 75891 76014 > *** <class 'wx._core.PySimpleApp'> > *** <class 'wx._core._wxPyDeadObject'> > > uncollectable list: [] > > > Eric >
I had no trouble reproducing this on my Ubuntu Feisty box.

It turns out that wxPython leaks a dictionary for every object whose class subclasses a Wx class. There is a fix for this that made it into wxPython-2.8.3.0:

http://cvs.wxwidgets.org/viewcvs.cgi/wxWidgets/wxPython/src/helpers.cpp.diff?r1=1.145&r2=1.145.4.1

I have verified this on my source-built wxPython-2.8.4.0. If I remove this line, I can reproduce the reference leak.

** I would recommend that anyone using a wxPython-2.8.x prior to wxPython-2.8.4 should upgrade. There are binary packages available for a number of distributions on wxpython.org. **

As an aside, I filed a bug for this on Ubuntu launchpad. I don't know if this qualifies for the kind of fix they would normally make as a maintenance release. Promisingly, my bug was confirmed within about five minutes of filing it.

https://bugs.launchpad.net/ubuntu/+source/wxwidgets2.8/+bug/124381

Cheers,
Mike

Eric Firing wrote:
> Michael Droettboom wrote:
>> Interesting...
>>
>> When you get a chance, would you mind running the attached script?
>> This is how I was finding object leaks before.  It takes a single
>> commandline argument that is the number of iterations.  Can you send
>> me the outputs from 1 and 2 iterations?  That way we should be able
>> to see what type of object is being leaked, which is a good first step.
>
> efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 1
> 75891 76010
> *** <class 'wx._core.PySimpleApp'>
> *** <class 'wx._core._wxPyDeadObject'>
>
> uncollectable list: []
>
> efiring@manini:~/programs/py/mpl/tests$ python memleak_gui_wx.py 2
> GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer
> Dell424 could not be loaded.
> GnomePrintCupsPlugin-Message: The ppd file for the CUPS printer pslj4m
> could not be loaded.
> 75891 76014
> *** <class 'wx._core.PySimpleApp'>
> *** <class 'wx._core._wxPyDeadObject'>
>
> uncollectable list: []
>
> Eric
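Since the practical advice is simply to upgrade, a script that wants to warn affected users could check the runtime version along these lines (the threshold follows the fix noted above; the message wording is illustrative):

# Illustrative only: warn when the running wxPython predates the fix
# referenced above.  wx.VERSION is a tuple like (2, 8, 4, 0, '').
import warnings
import wx

if (2, 8, 0) <= wx.VERSION[:3] < (2, 8, 4):
    warnings.warn("wxPython %s leaks an instance dictionary for every wx "
                  "subclass instance; upgrading to 2.8.4 or later is "
                  "recommended." % wx.__version__)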