matplotlib-devel — matplotlib developers

Showing 7 results of 7

From: John H. <jd...@gm...> - 2008-10-08 19:55:09
On Wed, Oct 8, 2008 at 1:44 PM, David Huard <dav...@gm...> wrote:
> /usr/local/lib64/python2.5/site-packages/matplotlib/legend.pyc in
> __init__(self, parent, handles, labels, loc, numpoints, prop, pad,
> borderpad, markerscale, labelsep, handlelen, handletextsep, axespad, shadow)
> 125 setattr(self,name,value)
> 126 if pad:
> --> 127 warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
> 128 # 2008-10-04
> 129 if self.numpoints <= 0:
>
> AttributeError: 'module' object has no attribute 'DeprecationWarning'
I just replaced this with
 warnings.warn("Use 'borderpad' instead of 'pad'.", DeprecationWarning)
which is what we have been doing in other parts of the code, so please
give svn 6173 a try.
Thanks,
JDH
From: David H. <dav...@gm...> - 2008-10-08 18:54:13
I just updated matplotlib from svn and here is the traceback I get after
calling legend with the pad argument:
/usr/local/lib64/python2.5/site-packages/matplotlib/pyplot.pyc in
legend(*args, **kwargs)
 2390 def legend(*args, **kwargs):
 2391
-> 2392 ret = gca().legend(*args, **kwargs)
 2393 draw_if_interactive()
 2394 return ret
/usr/local/lib64/python2.5/site-packages/matplotlib/axes.pyc in
legend(self, *args, **kwargs)
 3662
 3663 handles = cbook.flatten(handles)
-> 3664 self.legend_ = mlegend.Legend(self, handles, labels, **kwargs)
 3665 return self.legend_
 3666
/usr/local/lib64/python2.5/site-packages/matplotlib/legend.pyc in
__init__(self, parent, handles, labels, loc, numpoints, prop, pad,
borderpad, markerscale, labelsep, handlelen, handletextsep, axespad, shadow)
 125 setattr(self,name,value)
 126 if pad:
--> 127 warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
 128 # 2008-10-04
 129 if self.numpoints <= 0:
AttributeError: 'module' object has no attribute 'DeprecationWarning'
This is with python2.5.
Here is a patch:
Index: lib/matplotlib/legend.py
===================================================================
--- lib/matplotlib/legend.py (revision 6171)
+++ lib/matplotlib/legend.py (working copy)
@@ -124,7 +124,7 @@
         value=rcParams["legend."+name]
         setattr(self,name,value)
         if pad:
-            warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
+            DeprecationWarning("Use 'borderpad' instead of 'pad'.")
         # 2008-10-04
         if self.numpoints <= 0:
             raise ValueError("numpoints must be >= 0; it was %d"% numpoints)
Regards,
David
From: Michael D. <md...@st...> - 2008-10-08 17:18:28
John Hunter wrote:
> On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <md...@st...> wrote:
>
> 
>> I figured this out. When this happens, a RuntimeError("Agg rendering
>> complexity exceeded") is thrown.
>> 
>
> Do you think it is a good idea to put a little helper note in the
> exception along the lines of
>
> throw "Agg rendering complexity exceeded; you may want to increase
> the cell_block_size in agg_rasterizer_cells_aa.h"
>
> in case someone gets this exception two years from now and none of us
> can remember this brilliant fix :-)
> 
We can suggest that, or suggest that the size of the data is too large 
(which is easier for most users to fix, I would suspect). What about:
"Agg rendering complexity exceeded. Consider downsampling or decimating 
your data."
along with a comment (not thrown), saying
/* If this is thrown too often, increase cell_block_limit. */
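A caller-side sketch of that suggestion, written for modern Python (`draw_with_hint` is a hypothetical wrapper, not matplotlib API; it assumes the exception text contains the phrase Agg raises):

```python
def draw_with_hint(draw):
    """Run a zero-argument draw callable, re-raising the Agg complexity
    error with the friendlier hint proposed above."""
    try:
        return draw()
    except RuntimeError as err:
        if "Agg rendering complexity exceeded" in str(err):
            raise RuntimeError(
                str(err) + " Consider downsampling or decimating your data."
            ) from err
        raise  # unrelated RuntimeErrors pass through untouched
```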
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: John H. <jd...@gm...> - 2008-10-08 17:02:19
On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <md...@st...> wrote:
> I figured this out. When this happens, a RuntimeError("Agg rendering
> complexity exceeded") is thrown.
Do you think it is a good idea to put a little helper note in the
exception along the lines of
 throw "Agg rendering complexity exceeded; you may want to increase
the cell_block_size in agg_rasterizer_cells_aa.h"
in case someone gets this exception two years from now and none of us
can remember this brilliant fix :-)
From: Michael D. <md...@st...> - 2008-10-08 16:42:15
Michael Droettboom wrote:
> Eric Firing wrote:
> 
>> Michael Droettboom wrote:
>> 
>>> Eric Firing wrote:
>>> 
>>>> Mike, John,
>>>>
>>>> Because path simplification does not work with anything but a 
>>>> continuous line, it is turned off if there are any nans in the 
>>>> path. The result is that if one does this:
>>>>
>>>> import numpy as np
>>>> xx = np.arange(200000)
>>>> yy = np.random.rand(200000)
>>>> #plot(xx, yy)
>>>> yy[1000] = np.nan
>>>> plot(xx, yy)
>>>>
>>>> the plot fails with an incomplete rendering and general 
>>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>>> exceeded.
>>>> 
>>> The limit in question is "cell_block_limit" in
>>> agg_rasterizer_cells_aa.h. The relationship between the number of
>>> vertices and the number of rasterization cells, I suspect, depends on
>>> the nature of the values.
>>> However, if we want to increase the limit, each "cell_block" is 4096 
>>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>>> much memory should be devoted to rasterization, when the data set is 
>>> large like this? I think we could safely quadruple this number for a 
>>> lot of modern machines, and this maximum won't affect people plotting 
>>> smaller data sets, since the memory is dynamically allocated anyway. 
>>> It works for me, but I have 4GB RAM here at work.
>>> 
>> It sounds like we have little to lose by increasing the limit as you 
>> suggest here. In addition, it would be nice if hitting that limit 
>> triggered an informative exception instead of a puzzling and quiet 
>> failure, but maybe that would be hard to arrange. I have no idea how 
>> to approach it.
>> 
> Agreed. But also, I'm not sure how to do that. I can see where the 
> limit is tested and no more memory is allocated, but not where it shuts 
> down drawing after that. If we can find that point, we should be able 
> to throw an exception back to Python somehow.
I figured this out. When this happens, a RuntimeError("Agg rendering 
complexity exceeded") is thrown.
Cheers,
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
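(For reference, the 67,108,864-byte ceiling quoted in this thread is just the product of the three Agg constants; the quick check below is plain arithmetic, not matplotlib code, and the quadrupled figure is the proposed new limit.)

```python
# Constants from agg_rasterizer_cells_aa.h as described in the thread
cell_block_size = 4096    # cells per block
bytes_per_cell = 16
cell_block_limit = 1024   # maximum number of blocks

max_bytes = cell_block_size * bytes_per_cell * cell_block_limit
quadrupled = 4 * max_bytes  # the suggested new ceiling (256 MiB)
print(max_bytes)  # 67108864, i.e. 64 MiB
```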
From: Michael D. <md...@st...> - 2008-10-08 12:39:09
Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the 
>>> path. The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in
>> agg_rasterizer_cells_aa.h. The relationship between the number of
>> vertices and the number of rasterization cells, I suspect, depends on
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this? I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway. 
>> It works for me, but I have 4GB RAM here at work.
>
> It sounds like we have little to lose by increasing the limit as you 
> suggest here. In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange. I have no idea how 
> to approach it.
Agreed. But also, I'm not sure how to do that. I can see where the 
limit is tested and no more memory is allocated, but not where it shuts 
down drawing after that. If we can find that point, we should be able 
to throw an exception back to Python somehow.
>
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
>
> Thank you very much for finding that!
>
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem? A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on 
>>> and how to fix it, but I did not get very far. I could try again, 
>>> but as you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way. I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
>
> The attached patch seems to work, based on cursory testing. I can 
> make an array of 1M points, salt it with nans, and plot it, complete 
> with gaps, and all in a reasonably snappy fashion, thanks to your 
> units fix.
Very nice! It looks like a nice approach --- though I see from your 
second message that things aren't quite perfect yet. I, too, feel it's 
close, though.
One possible minor improvement might be to change the "should_simplify" 
expression to be true if codes is not None and contains only LINETO and 
MOVETOs (but not curves, obviously). I don't imagine a lot of people 
are building up their own paths with MOVETOs in them, but your 
improvement would at least make simplifying those possible.
Mike
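That proposed test can be sketched as follows; the code constants here are stand-ins for matplotlib's Path code values, not the real ones, and `should_simplify` is only an illustration of the condition described above:

```python
import numpy as np

MOVETO, LINETO, CURVE3 = 1, 2, 3  # hypothetical stand-ins for Path codes

def should_simplify(codes):
    """True when the path has no code array at all, or when every code is
    a MOVETO or LINETO (i.e. no curve segments), per the suggestion above."""
    if codes is None:
        return True
    return bool(np.isin(codes, (MOVETO, LINETO)).all())
```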
>
> Eric
>
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed. Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments. The simplest form 
>>> of path simplification fix might be to reset the calculation 
>>> whenever a moveto is encountered, but this would yield no 
>>> simplification in this case. I assume Agg would still choke. Is 
>>> there a need for some sort of automatic chunking of the rendering 
>>> operation in addition to path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series
>> data, where x is uniform, there is an opportunity to do something
>> completely different that should be far faster. Audio editors (such
>> as Audacity) draw each column of pixels based on the min/max and/or
>> mean and/or RMS of the values within that column. This makes the
>> rendering extremely fast and simple. See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code. It could convert the time 
>> series data to an image and plot that, or to a filled polygon whose 
>> vertices are downsampled from the original data. The latter may be 
>> nicer for Ps/Pdf output.
>>
>> Cheers,
>> Mike
>>
>
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
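(The Audacity-style idea discussed above, one min/max pair per pixel column, can be sketched like this. `minmax_decimate` is a hypothetical helper, and for simplicity it assumes the series length is an exact multiple of the column count.)

```python
import numpy as np

def minmax_decimate(y, n_columns):
    """Collapse a long 1-D series to per-column (min, max) envelopes so
    each pixel column is drawn from just two values. Uses the nan-aware
    reductions so gaps in the data survive the decimation."""
    cols = np.asarray(y, dtype=float).reshape(n_columns, -1)
    return np.nanmin(cols, axis=1), np.nanmax(cols, axis=1)

# 200,000 samples reduced to 1,000 columns of 200 samples each
lo, hi = minmax_decimate(np.sin(np.linspace(0.0, 60.0, 200000)), 1000)
```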
From: Eric F. <ef...@ha...> - 2008-10-08 08:29:40
The patch in that last message of mine was clearly not quite right. I
have gone through several iterations, each seeming tantalizingly close,
but I still don't have it right yet. I need to leave it alone for a
while, but I do think it is important to get this working correctly
ASAP--certainly it is for my own work, at least.
What happens with a nan should be somewhat similar to what happens with 
clipping, so perhaps one could take advantage of part of the clipping 
logic, but I have not looked at this approach closely.
Eric
Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the path. 
>>> The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in
>> agg_rasterizer_cells_aa.h. The relationship between the number of
>> vertices and the number of rasterization cells, I suspect, depends on
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this? I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway. 
>> It works for me, but I have 4GB RAM here at work.
> 
> It sounds like we have little to lose by increasing the limit as you 
> suggest here. In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange. I have no idea how to 
> approach it.
> 
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
> 
> Thank you very much for finding that!
> 
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem? A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on and 
>>> how to fix it, but I did not get very far. I could try again, but as 
>>> you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way. I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
> 
> The attached patch seems to work, based on cursory testing. I can make 
> an array of 1M points, salt it with nans, and plot it, complete with 
> gaps, and all in a reasonably snappy fashion, thanks to your units fix.
> 
> I will hold off on committing it until I hear from you or John; or if 
> either of you want to polish and commit it (or an alternative), that's 
> even better.
> 
> Eric
> 
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed. Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments. The simplest form 
>>> of path simplification fix might be to reset the calculation whenever 
>>> a moveto is encountered, but this would yield no simplification in 
>>> this case. I assume Agg would still choke. Is there a need for some 
>>> sort of automatic chunking of the rendering operation in addition to 
>>> path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series data,
>> where x is uniform, there is an opportunity to do something completely
>> different that should be far faster. Audio editors (such as
>> Audacity) draw each column of pixels based on the min/max and/or mean
>> and/or RMS of the values within that column. This makes the rendering
>> extremely fast and simple. See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code. It could convert the time 
>> series data to an image and plot that, or to a filled polygon whose 
>> vertices are downsampled from the original data. The latter may be 
>> nicer for Ps/Pdf output.
>>
>> Cheers,
>> Mike
>>
> 
