matplotlib-devel — matplotlib developers

From: David H. <dav...@gm...> - 2008年10月08日 18:54:13
I just updated matplotlib from svn and here is the traceback I get after
calling legend with the pad argument:
/usr/local/lib64/python2.5/site-packages/matplotlib/pyplot.pyc in
legend(*args, **kwargs)
 2390 def legend(*args, **kwargs):
 2391
-> 2392 ret = gca().legend(*args, **kwargs)
 2393 draw_if_interactive()
 2394 return ret
/usr/local/lib64/python2.5/site-packages/matplotlib/axes.pyc in legend(self,
*args, **kwargs)
 3662
 3663 handles = cbook.flatten(handles)
-> 3664 self.legend_ = mlegend.Legend(self, handles, labels, **kwargs)
 3665 return self.legend_
 3666
/usr/local/lib64/python2.5/site-packages/matplotlib/legend.pyc in
__init__(self, parent, handles, labels, loc, numpoints, prop, pad,
borderpad, markerscale, labelsep, handlelen, handletextsep, axespad, shadow)
 125 setattr(self,name,value)
 126 if pad:
--> 127 warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
 128 # 2008年10月04日
 129 if self.numpoints <= 0:
AttributeError: 'module' object has no attribute 'DeprecationWarning'
This is with python2.5.
Here is a patch:
Index: lib/matplotlib/legend.py
===================================================================
--- lib/matplotlib/legend.py (revision 6171)
+++ lib/matplotlib/legend.py (working copy)
@@ -124,7 +124,7 @@
 value=rcParams["legend."+name]
 setattr(self,name,value)
 if pad:
- warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
+ DeprecationWarning("Use 'borderpad' instead of 'pad'.")
 # 2008年10月04日
 if self.numpoints <= 0:
 raise ValueError("numpoints must be >= 0; it was %d"% numpoints)
Regards,
David
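For reference, the patched line above constructs a DeprecationWarning but never emits it; the usual idiom is to hand the message to warnings.warn. A minimal sketch of that idiom (the helper name is made up for illustration, this is not the fix that was committed):

import warnings

def warn_legend_pad(pad):
    # Passing the category to warnings.warn is what actually emits the
    # warning; merely instantiating DeprecationWarning(...) is a no-op.
    if pad is not None:
        warnings.warn("Use 'borderpad' instead of 'pad'.",
                      DeprecationWarning, stacklevel=2)

warn_legend_pad(0.2)   # would emit: DeprecationWarning: Use 'borderpad' instead of 'pad'.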
From: Michael D. <md...@st...> - 2008年10月08日 17:18:28
John Hunter wrote:
> On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <md...@st...> wrote:
>
> 
>> I figured this out. When this happens, a RuntimeError("Agg rendering
>> complexity exceeded") is thrown.
>> 
>
> Do you think it is a good idea to put a little helper note in the
> exception along the lines of
>
> throw "Agg rendering complexity exceeded; you may want to increase
> the cell_block_size in agg_rasterizer_cells_aa.h"
>
> in case someone gets this exception two years from now and none of us
> can remember this brilliant fix :-)
> 
We can suggest that, or suggest that the size of the data is too large 
(which is easier for most users to fix, I would suspect). What about:
"Agg rendering complexity exceeded. Consider downsampling or decimating 
your data."
along with a comment (not thrown), saying
/* If this is thrown too often, increase cell_block_limit. */
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: John H. <jd...@gm...> - 2008年10月08日 17:02:19
On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <md...@st...> wrote:
> I figured this out. When this happens, a RuntimeError("Agg rendering
> complexity exceeded") is thrown.
Do you think it is a good idea to put a little helper note in the
exception along the lines of
 throw "Agg rendering complexity exceeded; you may want to increase
the cell_block_size in agg_rasterizer_cells_aa.h"
in case someone gets this exception two years from now and none of us
can remember this brilliant fix :-)
From: Michael D. <md...@st...> - 2008年10月08日 16:42:15
Michael Droettboom wrote:
> Eric Firing wrote:
> 
>> Michael Droettboom wrote:
>> 
>>> Eric Firing wrote:
>>> 
>>>> Mike, John,
>>>>
>>>> Because path simplification does not work with anything but a 
>>>> continuous line, it is turned off if there are any nans in the 
>>>> path. The result is that if one does this:
>>>>
>>>> import numpy as np
>>>> xx = np.arange(200000)
>>>> yy = np.random.rand(200000)
>>>> #plot(xx, yy)
>>>> yy[1000] = np.nan
>>>> plot(xx, yy)
>>>>
>>>> the plot fails with an incomplete rendering and general 
>>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>>> exceeded.
>>>> 
>>> The limit in question is "cell_block_limit" in 
>>> agg_rasterizer_cells_aa.h. The relationship between the number of
>>> vertices and the number of rasterization cells, I suspect, depends on
>>> the nature of the values.
>>> However, if we want to increase the limit, each "cell_block" is 4096 
>>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>>> much memory should be devoted to rasterization, when the data set is 
>>> large like this? I think we could safely quadruple this number for a 
>>> lot of modern machines, and this maximum won't affect people plotting 
>>> smaller data sets, since the memory is dynamically allocated anyway. 
>>> It works for me, but I have 4GB RAM here at work.
>>> 
>> It sounds like we have little to lose by increasing the limit as you 
>> suggest here. In addition, it would be nice if hitting that limit 
>> triggered an informative exception instead of a puzzling and quiet 
>> failure, but maybe that would be hard to arrange. I have no idea how 
>> to approach it.
>> 
> Agreed. But also, I'm not sure how to do that. I can see where the 
> limit is tested and no more memory is allocated, but not where it shuts 
> down drawing after that. If we can find that point, we should be able 
> to throw an exception back to Python somehow.
I figured this out. When this happens, a RuntimeError("Agg rendering 
complexity exceeded") is thrown.
Cheers,
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
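A user-side sketch of what that exception enables (only the error message text comes from this thread; the catch-and-decimate strategy is just an illustration, not anything built into matplotlib):

import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

xx = np.arange(200000)
yy = np.random.rand(200000)
yy[1000] = np.nan                 # disables path simplification, as discussed above

fig = plt.figure()
ax = fig.add_subplot(111)
try:
    ax.plot(xx, yy)
    fig.canvas.draw()             # rasterization (and the Agg limit) is hit here
except RuntimeError as err:
    if "Agg rendering complexity exceeded" not in str(err):
        raise
    ax.cla()
    ax.plot(xx[::10], yy[::10])   # crude decimation, then retry
    fig.canvas.draw()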
From: Michael D. <md...@st...> - 2008年10月08日 12:39:09
Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the 
>>> path. The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in 
>> agg_rasterizer_cells_aa.h. The relationship between the number of
>> vertices and the number of rasterization cells, I suspect, depends on
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this? I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway. 
>> It works for me, but I have 4GB RAM here at work.
>
> It sounds like we have little to lose by increasing the limit as you 
> suggest here. In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange. I have no idea how 
> to approach it.
Agreed. But also, I'm not sure how to do that. I can see where the 
limit is tested and no more memory is allocated, but not where it shuts 
down drawing after that. If we can find that point, we should be able 
to throw an exception back to Python somehow.
>
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
>
> Thank you very much for finding that!
>
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem? A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on 
>>> and how to fix it, but I did not get very far. I could try again, 
>>> but as you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way. I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
>
> The attached patch seems to work, based on cursory testing. I can 
> make an array of 1M points, salt it with nans, and plot it, complete 
> with gaps, and all in a reasonably snappy fashion, thanks to your 
> units fix.
Very nice! It looks like a nice approach --- though I see from your 
second message that things aren't quite perfect yet. I, too, feel it's 
close, though.
One possible minor improvement might be to change the "should_simplify" 
expression to be true if codes is not None and contains only LINETO and 
MOVETOs (but not curves, obviously). I don't imagine a lot of people 
are building up their own paths with MOVETOs in them, but your 
improvement would at least make simplifying those possible.
Mike
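A rough sketch of that idea (the helper name and the exact rule are guesses for illustration, not the code that went into matplotlib):

import numpy as np
from matplotlib.path import Path

def could_simplify(codes):
    # A codes array of None means an implicit all-LINETO path, which the
    # simplifier already handles; otherwise allow only MOVETO and LINETO.
    if codes is None:
        return True
    return bool(np.all((codes == Path.MOVETO) | (codes == Path.LINETO)))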
>
> Eric
>
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed. Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments. The simplest form 
>>> of path simplification fix might be to reset the calculation 
>>> whenever a moveto is encountered, but this would yield no 
>>> simplification in this case. I assume Agg would still choke. Is 
>>> there a need for some sort of automatic chunking of the rendering 
>>> operation in addition to path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series 
>> data, where x is uniform, there is an opportunity to do something 
>> completely different that should be far faster. Audio editors (such 
>> as Audacity), draw each column of pixels based on the min/max and/or 
>> mean and/or RMS of the values within that column. This makes the 
>> rendering extremely fast and simple. See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code. It could convert the time 
>> series data to an image and plot that, or to a filled polygon whose 
>> vertices are downsampled from the original data. The latter may be 
>> nicer for Ps/Pdf output.
>>
>> Cheers,
>> Mike
>>
>
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: Eric F. <ef...@ha...> - 2008年10月08日 08:29:40
The patch in that last message of mine was clearly not quite right. I 
have gone through several iterations, and have seemed tantalizingly 
close, but I still don't have it right yet. I need to leave it alone 
for a while, but I do think it is important to get this working 
correctly ASAP--certainly it is for my own work, at least.
What happens with a nan should be somewhat similar to what happens with 
clipping, so perhaps one could take advantage of part of the clipping 
logic, but I have not looked at this approach closely.
Eric
Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the path. 
>>> The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in 
>> agg_rasterizer_cells_aa.h. The relationship between the number of
>> vertices and the number of rasterization cells, I suspect, depends on
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes. So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this? I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway. 
>> It works for me, but I have 4GB RAM here at work.
> 
> It sounds like we have little to lose by increasing the limit as you 
> suggest here. In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange. I have no idea how to 
> approach it.
> 
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
> 
> Thank you very much for finding that!
> 
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem? A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on and 
>>> how to fix it, but I did not get very far. I could try again, but as 
>>> you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way. I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
> 
> The attached patch seems to work, based on cursory testing. I can make 
> an array of 1M points, salt it with nans, and plot it, complete with 
> gaps, and all in a reasonably snappy fashion, thanks to your units fix.
> 
> I will hold off on committing it until I hear from you or John; or if 
> either of you want to polish and commit it (or an alternative), that's 
> even better.
> 
> Eric
> 
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed. Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments. The simplest form 
>>> of path simplification fix might be to reset the calculation whenever 
>>> a moveto is encountered, but this would yield no simplification in 
>>> this case. I assume Agg would still choke. Is there a need for some 
>>> sort of automatic chunking of the rendering operation in addition to 
>>> path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series data, 
>> where x is uniform, there is an opportunity to do something completely 
>> different that should be far faster. Audio editors (such as 
>> Audacity), draw each column of pixels based on the min/max and/or mean 
>> and/or RMS of the values within that column. This makes the rendering 
>> extremely fast and simple. See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code. It could convert the time 
>> series data to an image and plot that, or to a filled polygon whose 
>> vertices are downsampled from the original data. The latter may be 
>> nicer for Ps/Pdf output.
>>
>> Cheers,
>> Mike
>>
> 
> 
From: Eric F. <ef...@ha...> - 2008年10月07日 23:08:36
Attachments: simplify.diff
Michael Droettboom wrote:
> Eric Firing wrote:
>> Mike, John,
>>
>> Because path simplification does not work with anything but a 
>> continuous line, it is turned off if there are any nans in the path. 
>> The result is that if one does this:
>>
>> import numpy as np
>> xx = np.arange(200000)
>> yy = np.random.rand(200000)
>> #plot(xx, yy)
>> yy[1000] = np.nan
>> plot(xx, yy)
>>
>> the plot fails with an incomplete rendering and general 
>> unresponsiveness; apparently some mysterious agg limit is quietly 
>> exceeded.
> The limit in question is "cell_block_limit" in 
> agg_rasterizer_cells_aa.h. The relationship between the number of vertices 
> and the number of rasterization cells, I suspect, depends on the nature of 
> the values.
> However, if we want to increase the limit, each "cell_block" is 4096 
> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
> blocks, for a total of 67,108,864 bytes. So, the question is, how much 
> memory should be devoted to rasterization, when the data set is large 
> like this? I think we could safely quadruple this number for a lot of 
> modern machines, and this maximum won't affect people plotting smaller 
> data sets, since the memory is dynamically allocated anyway. It works 
> for me, but I have 4GB RAM here at work.
It sounds like we have little to lose by increasing the limit as you 
suggest here. In addition, it would be nice if hitting that limit 
triggered an informative exception instead of a puzzling and quiet 
failure, but maybe that would be hard to arrange. I have no idea how to 
approach it.
>> With or without the nan, this test case also shows the bizarre 
>> slowness of add_line that I asked about in a message yesterday, and 
>> that has me completely baffled.
> lsprofcalltree is my friend!
Thank you very much for finding that!
>>
>> Both of these are major problems for real-world use.
>>
>> Do you have any thoughts on timing and strategy for solving this 
>> problem? A few weeks ago, when the problem with nans and path 
>> simplification turned up, I tried to figure out what was going on and 
>> how to fix it, but I did not get very far. I could try again, but as 
>> you know I don't get along well with C++.
> That simplification code is pretty hairy, particularly because it tries 
> to avoid a copy by doing everything in an iterator/generator way. I 
> think even just supporting MOVETOs there would be tricky, but probably 
> the easiest first thing.
The attached patch seems to work, based on cursory testing. I can make 
an array of 1M points, salt it with nans, and plot it, complete with 
gaps, and all in a reasonably snappy fashion, thanks to your units fix.
I will hold off on committing it until I hear from you or John; or if 
either of you want to polish and commit it (or an alternative), that's 
even better.
Eric
>>
>> I am also wondering whether more than straightforward path 
>> simplification with nan/moveto might be needed. Suppose there is a 
>> nightmarish time series with every third point being bad, so it is 
>> essentially a sequence of 2-point line segments. The simplest form of 
>> path simplification fix might be to reset the calculation whenever a 
>> moveto is encountered, but this would yield no simplification in this 
>> case. I assume Agg would still choke. Is there a need for some sort 
>> of automatic chunking of the rendering operation in addition to path 
>> simplification?
>>
> Chunking is probably something worth looking into (for lines, at least), 
> as it might also reduce memory usage vs. the "increase the 
> cell_block_limit" scenario.
> 
> I also think for the special case of high-resolution time series data, 
> where x is uniform, there is an opportunity to do something completely 
> different that should be far faster. Audio editors (such as Audacity), 
> draw each column of pixels based on the min/max and/or mean and/or RMS 
> of the values within that column. This makes the rendering extremely 
> fast and simple. See:
> 
> http://audacity.sourceforge.net/about/images/audacity-macosx.png
> 
> Of course, that would mean writing a bunch of new code, but it shouldn't 
> be incredibly tricky new code. It could convert the time series data to 
> an image and plot that, or to a filled polygon whose vertices are 
> downsampled from the original data. The latter may be nicer for Ps/Pdf 
> output.
> 
> Cheers,
> Mike
> 
From: Michael D. <md...@st...> - 2008年10月07日 16:54:18
Sorry. I didn't read carefully enough. That's right -- the "if 
converter: break" was replaced with "return converter".
You're right. This is fine.
Mike
John Hunter wrote:
> On Tue, Oct 7, 2008 at 11:26 AM, Michael Droettboom <md...@st...> wrote:
> 
>> This isn't quite what I was suggesting (and seems to be equivalent to
>> the code as before). In the common case where there are no units in the
>> data, this will still traverse the entire list.
>>
>> I think replacing the whole loop with:
>>
>> converter = self.get_converter(iter(x).next())
>>
>> would be even better. (Since lists of data should not be heterogeneous
>> anyway...)
>> 
>
> Hmm, I don't see how it would traverse the entire list
>
> for thisx in x:
> converter = self.get_converter( thisx )
> return converter
>
> since it will return after the first element in the loop. I have no
> problem with the iter approach, but am not seeing what the problem is
> with this usage.
>
> JDH
> 
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: John H. <jd...@gm...> - 2008年10月07日
On Tue, Oct 7, 2008 at 11:26 AM, Michael Droettboom <md...@st...> wrote:
> This isn't quite what I was suggesting (and seems to be equivalent to
> the code as before). In the common case where there are no units in the
> data, this will still traverse the entire list.
>
> I think replacing the whole loop with:
>
> converter = self.get_converter(iter(x).next())
>
> would be even better. (Since lists of data should not be heterogeneous
> anyway...)
Hmm, I don't see how it would traverse the entire list
 for thisx in x:
 converter = self.get_converter( thisx )
 return converter
since it will return after the first element in the loop. I have no
problem with the iter approach, but am not seeing what the problem is
with this usage.
JDH
From: Michael D. <md...@st...> - 2008年10月07日 16:33:39
Eric Firing wrote:
> Mike, John,
>
> Because path simplification does not work with anything but a 
> continuous line, it is turned off if there are any nans in the path. 
> The result is that if one does this:
>
> import numpy as np
> xx = np.arange(200000)
> yy = np.random.rand(200000)
> #plot(xx, yy)
> yy[1000] = np.nan
> plot(xx, yy)
>
> the plot fails with an incomplete rendering and general 
> unresponsiveness; apparently some mysterious agg limit is quietly 
> exceeded.
The limit in question is "cell_block_limit" in 
agg_rasterizer_cells_aa.h. The relationship between the number of vertices 
and the number of rasterization cells, I suspect, depends on the nature of 
the values.
However, if we want to increase the limit, each "cell_block" is 4096 
cells, each with 16 bytes, and currently it maxes out at 1024 cell 
blocks, for a total of 67,108,864 bytes. So, the question is, how much 
memory should be devoted to rasterization, when the data set is large 
like this? I think we could safely quadruple this number for a lot of 
modern machines, and this maximum won't affect people plotting smaller 
data sets, since the memory is dynamically allocated anyway. It works 
for me, but I have 4GB RAM here at work.
> With or without the nan, this test case also shows the bizarre 
> slowness of add_line that I asked about in a message yesterday, and 
> that has me completely baffled.
lsprofcalltree is my friend!
>
> Both of these are major problems for real-world use.
>
> Do you have any thoughts on timing and strategy for solving this 
> problem? A few weeks ago, when the problem with nans and path 
> simplification turned up, I tried to figure out what was going on and 
> how to fix it, but I did not get very far. I could try again, but as 
> you know I don't get along well with C++.
That simplification code is pretty hairy, particularly because it tries 
to avoid a copy by doing everything in an iterator/generator way. I 
think even just supporting MOVETOs there would be tricky, but probably 
the easiest first thing.
>
> I am also wondering whether more than straightforward path 
> simplification with nan/moveto might be needed. Suppose there is a 
> nightmarish time series with every third point being bad, so it is 
> essentially a sequence of 2-point line segments. The simplest form of 
> path simplification fix might be to reset the calculation whenever a 
> moveto is encountered, but this would yield no simplification in this 
> case. I assume Agg would still choke. Is there a need for some sort 
> of automatic chunking of the rendering operation in addition to path 
> simplification?
>
Chunking is probably something worth looking into (for lines, at least), 
as it might also reduce memory usage vs. the "increase the 
cell_block_limit" scenario.
I also think for the special case of high-resolution time series data, 
where x is uniform, there is an opportunity to do something completely 
different that should be far faster. Audio editors (such as Audacity), 
draw each column of pixels based on the min/max and/or mean and/or RMS 
of the values within that column. This makes the rendering extremely 
fast and simple. See:
http://audacity.sourceforge.net/about/images/audacity-macosx.png
Of course, that would mean writing a bunch of new code, but it shouldn't 
be incredibly tricky new code. It could convert the time series data to 
an image and plot that, or to a filled polygon whose vertices are 
downsampled from the original data. The latter may be nicer for Ps/Pdf 
output.
Cheers,
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
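For what it's worth, a bare-bones sketch of that per-column min/max reduction (the function and its interface are invented for illustration; nothing like it existed in matplotlib at the time):

import numpy as np

def minmax_decimate(y, n_columns):
    # Collapse a long 1-D series into one (min, max) pair per pixel column.
    n = (len(y) // n_columns) * n_columns    # drop the ragged tail, if any
    blocks = y[:n].reshape(n_columns, -1)    # one row per output column
    return blocks.min(axis=1), blocks.max(axis=1)

# Filling between the two envelopes keeps the vertex count proportional
# to the pixel width of the axes rather than to the length of the data:
#   lo, hi = minmax_decimate(yy, 800)
#   ax.fill_between(np.arange(800), lo, hi)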
From: Michael D. <md...@st...> - 2008年10月07日 16:27:03
This isn't quite what I was suggesting (and seems to be equivalent to 
the code as before). In the common case where there are no units in the 
data, this will still traverse the entire list.
I think replacing the whole loop with:
 converter = self.get_converter(iter(x).next())
would be even better. (Since lists of data should not be heterogeneous 
anyway...)
Mike
jd...@us... wrote:
> Revision: 6166
> http://matplotlib.svn.sourceforge.net/matplotlib/?rev=6166&view=rev
> Author: jdh2358
> Date: 2008年10月07日 15:13:53 +0000 (2008年10月07日)
>
> Log Message:
> -----------
> added michaels unit detection optimization for arrays
>
> Modified Paths:
> --------------
> trunk/matplotlib/lib/matplotlib/units.py
>
> Modified: trunk/matplotlib/lib/matplotlib/units.py
> ===================================================================
> --- trunk/matplotlib/lib/matplotlib/units.py	2008年10月07日 15:13:13 UTC (rev 6165)
> +++ trunk/matplotlib/lib/matplotlib/units.py	2008年10月07日 15:13:53 UTC (rev 6166)
> @@ -135,7 +135,7 @@
> 
> for thisx in x:
> converter = self.get_converter( thisx )
> - if converter: break
> + return converter
> 
> #DISABLED self._cached[idx] = converter
> return converter
>
>
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: John H. <jd...@gm...> - 2008年10月07日 15:41:10
On Tue, Oct 7, 2008 at 9:18 AM, Michael Droettboom <md...@st...> wrote:
> According to lsprofcalltree, the slowness appears to be entirely in the
> units code by a wide margin -- which is unfortunately code I understand very
> little about. The difference in timing before and after adding the line to
> the axes appears to be because the unit conversion is not invalidated until
> the line has been added to an axes.
>
> In units.get_converter(), it iterates through every *value* in the data to
> see if any of them require unit conversion, and returns the first one it
> finds. It seems like if we're passing in a numpy array of numbers (i.e. not
> array of objects), then we're pretty much guaranteed from the get-go not to
> find a single value that requires unit conversion so we might as well not
> look. Am I making the wrong assumption?
>
> However, for lists, it also seems that, since the code returns the first
> converter it finds, maybe it could just look at the first element of the
> sequence, rather than the entire sequence. If the first is not in the same
> unit as everything else, then the result will be broken anyway.
I made this change -- return the converter from the first element --
and added Michael's non-object numpy array optimization too. The
units code needs some attention, I just haven't been able to get to
it...
This helps performance considerably -- on backend driver:
Before:
 Backend agg took 1.32 minutes to complete
 Backend ps took 1.37 minutes to complete
 Backend pdf took 1.78 minutes to complete
 Backend template took 0.83 minutes to complete
 Backend svg took 1.53 minutes to complete
After:
 Backend agg took 1.08 minutes to complete
 Backend ps took 1.15 minutes to complete
 Backend pdf took 1.57 minutes to complete
 Backend template took 0.61 minutes to complete
 Backend svg took 1.31 minutes to complete
Obviously, the results for tests focused on lines with lots of data
would be more dramatic.
Thanks for these suggestions.
JDH
From: Michael D. <md...@st...> - 2008年10月07日 14:21:06
Attachments: units.py.patch
According to lsprofcalltree, the slowness appears to be entirely in the 
units code by a wide margin -- which is unfortunately code I understand 
very little about. The difference in timing before and after adding the 
line to the axes appears to be because the unit conversion is not 
invalidated until the line has been added to an axes.
In units.get_converter(), it iterates through every *value* in the data 
to see if any of them require unit conversion, and returns the first one 
it finds. It seems like if we're passing in a numpy array of numbers 
(i.e. not array of objects), then we're pretty much guaranteed from the 
get-go not to find a single value that requires unit conversion so we 
might as well not look. Am I making the wrong assumption?
However, for lists, it also seems that, since the code returns the first 
converter it finds, maybe it could just look at the first element of the 
sequence, rather than the entire sequence. If the first is not in the 
same unit as everything else, then the result will be broken anyway. 
For example, if I hack evans_test.py to contain a single int amongst the 
list of "Foo" objects in the data, I get an exception anyway, even as 
the code stands now.
I have attached a patch against units.py to speed up the first case 
(passing Numpy arrays). I think I need more feedback from the units 
experts on whether my suggestion for lists (to only look at the first 
element) is reasonable.
Feel free to commit the patch if it seems reasonable to those who know 
more about units than I do.
Mike
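A condensed sketch of the two shortcuts being discussed (the function name and the registry lookup are stand-ins, not the actual units.py code):

import numpy as np

def find_converter(registry, x):
    # A numeric (non-object) numpy array can never hold unit-bearing
    # objects, so skip the per-element scan entirely.
    if isinstance(x, np.ndarray) and x.dtype != np.object_:
        return None
    # Otherwise let the first element decide; if later elements carried
    # different units the plot would be wrong anyway.
    for first in x:
        return registry.get(type(first))
    return None   # empty sequence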
Eric Firing wrote:
> I am getting very inconsistent timings when looking into plotting a line 
> with a very large number of points. Axes.add_line() is very slow, and 
> the time is taken by Axes._update_line_limits(). But when I simply run 
> the latter, on a Line2D of the same dimensions, it can be fast.
>
> import matplotlib
> matplotlib.use('template')
> import numpy as np
> import matplotlib.lines as mlines
> import matplotlib.pyplot as plt
> ax = plt.gca()
> LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
> from time import time
> t = time(); ax.add_line(LL); time()-t
> ###16.621543884277344
> LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
> t = time(); ax.add_line(LL); time()-t
> ###16.579419136047363
> ## We added two identical lines, each took 16 seconds.
>
> LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
> t = time(); ax._update_line_limits(LL); time()-t
> ###0.1733548641204834
> ## But when we made another identical line, updating the limits was
> ## fast.
>
> # Below are similar experiments:
> LL = mlines.Line2D(np.arange(1.5e6), 2*np.sin(np.arange(1.5e6)))
> t = time(); ax._update_line_limits(LL); time()-t
> ###0.18362092971801758
>
> ## with a fresh axes:
> plt.clf()
> ax = plt.gca()
> LL = mlines.Line2D(np.arange(1.5e6), 2*np.sin(np.arange(1.5e6)))
> t = time(); ax._update_line_limits(LL); time()-t
> ###0.22244811058044434
>
> t = time(); ax.add_line(LL); time()-t
> ###16.724560976028442
>
> What is going on? I used print statements inside add_line() to verify 
> that all the time is in _update_line_limits(), which runs one or two 
> orders of magnitude slower when run inside of add_line than when run 
> outside--even if I run the preceding parts of add_line first.
>
> Eric
>
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: Keaton M. <km...@gm...> - 2008年10月07日 07:58:53
Hey all,
I hope this is the right list for this sort of thing, but here goes.
My installation of matplotlib (via macports) bombed out with this
error:
Traceback (most recent call last):
 File "setup.py", line 125, in <module>
 if check_for_tk() or (options['build_tkagg'] is True):
 File "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_python_py25-matplotlib/work/matplotlib-0.98.3/setupext.py",
line 841, in check_for_tk
 explanation = add_tk_flags(module)
 File "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_python_py25-matplotlib/work/matplotlib-0.98.3/setupext.py",
line 1055, in add_tk_flags
 module.libraries.extend(['tk' + tk_ver, 'tcl' + tk_ver])
UnboundLocalError: local variable 'tk_ver' referenced before assignment
I fixed it by adding
 tcl_lib_dir = ""
 tk_lib_dir = ""
 tk_ver = ""
at line 1033 in setupext.py. That way, if we do get an exception in
the ensuing try block, the variables are still defined. This seemed
to clear things up nicely. Hope that's clear... feel free to ask for
any further debugging info. Thanks!
Keaton Mowery
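The pattern behind that fix, reduced to a sketch (query_tcltk is an invented stand-in for whatever the try block in setupext.py actually probes):

# Give every name a harmless default before the try block, so the
# except path (and any code after it) never sees an unbound local.
tcl_lib_dir = ""
tk_lib_dir = ""
tk_ver = ""
try:
    tcl_lib_dir, tk_lib_dir, tk_ver = query_tcltk()   # invented stand-in
except Exception:
    pass   # fall back to the empty defaults assigned above
libraries = ['tk' + tk_ver, 'tcl' + tk_ver]           # safe either way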
From: Jae-Joon L. <lee...@gm...> - 2008年10月07日 06:27:18
Hi Eric,
As far as I know, get_window_extent is meant to return the extent of
the object in display coordinates. And, as you may have noticed, this
is often used to calculate the relative positions of objects.
I quickly went through your patch, and my guess is that your implementation
of get_window_extent is correct in this regard, but I haven't
considered it seriously, so I may be wrong.
On the other hand, I guess the original problem you had is not related
to the get_window_extent() method.
The Legend class has a _update_positions() method which is called
before the legend is drawn, and you have to update the positions of your
handles within this method.
In the simple patch below I tried to implement some basic update code
for the polycollections (I also slightly adjusted the y-offsets; this
is just my personal preference). See if this patch works for you.
Regards,
-JJ
Index: lib/matplotlib/legend.py
===================================================================
--- lib/matplotlib/legend.py	(revision 6163)
+++ lib/matplotlib/legend.py	(working copy)
@@ -532,6 +540,12 @@
 elif isinstance(handle, Rectangle):
 handle.set_y(y+1/4*h)
 handle.set_height(h/2)
+ elif isinstance(handle, RegularPolyCollection):
+ offsets = handle.get_offsets()
+ xvals = [x for (x, _) in offsets]
+ yy = y + h
+ yvals=[yy-4./8*h,yy-3./8*h,yy-4./8*h]
+ handle.set_offsets(zip(xvals, yvals))
 # Set the data for the legend patch
 bbox = self._get_handle_text_bbox(renderer)
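To make the display-coordinates point above concrete, a small probe one can run (get_renderer() is an Agg-backend convenience; other backends may need to obtain the renderer differently):

import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
txt = ax.text(0.5, 0.5, "sample")
fig.canvas.draw()                         # extents are only meaningful after a draw
bbox = txt.get_window_extent(fig.canvas.get_renderer())
print(bbox)                               # a Bbox in display (pixel) coordinates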
On Tue, Oct 7, 2008 at 12:15 AM, Erik Tollerud <eri...@gm...> wrote:
> Does anyone have anything new here? I'm perfectly willing to
> experiment, but I'm really at a loss as to what
> get_window_extent(self,render) is supposed to do (clearly get some
> window extent, but exactly what window and what coordinates the extent
> is in is what is confusing me).
>
> On Tue, Sep 23, 2008 at 11:41 AM, John Hunter <jd...@gm...> wrote:
>> On Tue, Sep 23, 2008 at 12:20 AM, Erik Tollerud <eri...@gm...> wrote:
>>> Attached is a diff against revision 6115 that contains a patch to
>>> improve the behavior of the legend function when showing legends for
>>
>> Erik,
>>
>> I haven't had a chance to get to this yet. Could you please also post
>> it on the sf patch tracker so it doesn't get dropped, and ping us with
>> a reminder in a few days if nothing has happened....
>>
>> JDH
>>
>
From: Erik T. <eri...@gm...> - 2008年10月07日 04:16:31
Does anyone have anything new here? I'm perfectly willing to
experiment, but I'm really at a loss as to what
get_window_extent(self,render) is supposed to do (clearly get some
window extent, but exactly what window and what coordinates the extent
is in is what is confusing me).
On Tue, Sep 23, 2008 at 11:41 AM, John Hunter <jd...@gm...> wrote:
> On Tue, Sep 23, 2008 at 12:20 AM, Erik Tollerud <eri...@gm...> wrote:
>> Attached is a diff against revision 6115 that contains a patch to
>> improve the behavior of the legend function when showing legends for
>
> Erik,
>
> I haven't had a chance to get to this yet. Could you please also post
> it on the sf patch tracker so it doesn't get dropped, and ping us with
> a reminder in a few days if nothing has happened....
>
> JDH
>
From: Eric F. <ef...@ha...> - 2008年10月07日 01:23:39
Mike, John,
Because path simplification does not work with anything but a continuous 
line, it is turned off if there are any nans in the path. The result is 
that if one does this:
import numpy as np
xx = np.arange(200000)
yy = np.random.rand(200000)
#plot(xx, yy)
yy[1000] = np.nan
plot(xx, yy)
the plot fails with an incomplete rendering and general 
unresponsiveness; apparently some mysterious agg limit is quietly 
exceeded. With or without the nan, this test case also shows the 
bizarre slowness of add_line that I asked about in a message yesterday, 
and that has me completely baffled.
Both of these are major problems for real-world use.
Do you have any thoughts on timing and strategy for solving this 
problem? A few weeks ago, when the problem with nans and path 
simplification turned up, I tried to figure out what was going on and 
how to fix it, but I did not get very far. I could try again, but as 
you know I don't get along well with C++.
I am also wondering whether more than straightforward path 
simplification with nan/moveto might be needed. Suppose there is a 
nightmarish time series with every third point being bad, so it is 
essentially a sequence of 2-point line segments. The simplest form of 
path simplification fix might be to reset the calculation whenever a 
moveto is encountered, but this would yield no simplification in this 
case. I assume Agg would still choke. Is there a need for some sort of 
automatic chunking of the rendering operation in addition to path 
simplification?
Thanks.
Eric
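One crude way to approximate that chunking from user code, just to illustrate the idea (a workaround sketch, not a proposal for the renderer itself):

import numpy as np
import matplotlib.pyplot as plt

def plot_chunked(ax, x, y, chunk=100000, **kwargs):
    # Draw a long series as several shorter Line2D pieces; overlap each
    # slice by one point so the segments join up visually.
    for start in range(0, len(x), chunk):
        sl = slice(start, start + chunk + 1)
        ax.plot(x[sl], y[sl], **kwargs)

xx = np.arange(200000)
yy = np.random.rand(200000)
yy[1000] = np.nan
plot_chunked(plt.gca(), xx, yy, color='b')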
From: Charlie M. <cw...@gm...> - 2008年10月06日 17:20:31
Hey Randy,
 All the mpl binaries are built against tcl/tk 8.4. I believe mpl is
not compatible with tcl/tk 8.5 as of the last release. Someone else might
know if this has changed in svn?
- Charlie
On Sun, Oct 5, 2008 at 11:04 PM, Randy Heiland <he...@in...> wrote:
> Short/naive question: do the mpl eggs have a dependency on Tk 8.4?
>
> Longer question: I'm trying to support a plugin (NLOPredict) to a
> popular molecular vis pkg (UCSF Chimera) and, no surprise, the plugin
> uses mpl. Chimera bundles its own Python, plus all dependencies.
> The latest version switched to Python 2.5 and Tcl/Tk 8.5. It also
> bundles numpy 1.0.4. So I tried to install a mpl-maintenance egg
> (Windows first) that used Python 2.5 and pre-numpy 1.1 (I tried mpl
> 0.91.4 and 91.2). However, when I bring up the Chimera IDLE, I get:
>
> >>> from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
> traceback...
> File "d:\chimera-1.2540\bin\lib\site-package\matplotlib-0.91.2-py2.5-
> win32.egg\matplotlib\backends\backend_tkagg.py", line 8, in <module>
> import tkagg # Paint image to Tk photo blitter extension
> File "d:\chimera-1.2540\bin\lib\site-package\matplotlib-0.91.2-
> py2.5-win32.egg\matplotlib\backends\tkagg.py", line 1, in <module>
> import _tkagg
> ImportError: DLL load failed: The specified file could not be found.
>
>
> Ideas?
> thanks, Randy
>
From: Gregor T. <gre...@gm...> - 2008年10月06日 15:25:31
Dear developers,
in matplotlib 0.98.3 I discovered that in scatter, individual alpha 
settings (given as a list of rgba values) are ignored. Here is an example 
that shows this behaviour: all points show the same alpha value as given 
by the alpha keyword argument. (Omitting it is equivalent to setting alpha=1.)
from pylab import *
x = [1,2,3]
y = [1,2,3]
c = [[1,0,0, 0.0],
 [1,0,0, 0.5],
 [1,0,0, 1.0]]
gca()
cla()
scatter(x,y, c=c, s = 200, alpha = 0.5)
draw()
show()
I had a look at the sources. In axes.py/scatter I simply removed the line
collection.set_alpha(alpha)
The recent svn version also contains this line.
With this change it worked as expected, also e.g. for the case of a 
single color for all points,
scatter(x,y, c = 'r', alpha = 0.5)
Gregor
From: Randy H. <he...@in...> - 2008年10月06日 03:05:40
Short/naive question: do the mpl eggs have a dependency on Tk 8.4?
Longer question: I'm trying to support a plugin (NLOPredict) to a 
popular molecular vis pkg (UCSF Chimera) and, no surprise, the plugin 
uses mpl. Chimera bundles its own Python, plus all dependencies. 
The latest version switched to Python 2.5 and Tcl/Tk 8.5. It also 
bundles numpy 1.0.4. So I tried to install a mpl-maintenance egg 
(Windows first) that used Python 2.5 and pre-numpy 1.1 (I tried mpl 
0.91.4 and 91.2). However, when I bring up the Chimera IDLE, I get:
 >>> from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
traceback...
File "d:\chimera-1.2540\bin\lib\site-package\matplotlib-0.91.2-py2.5- 
win32.egg\matplotlib\backends\backend_tkagg.py", line 8, in <module>
 import tkagg # Paint image to Tk photo blitter extension
 File "d:\chimera-1.2540\bin\lib\site-package\matplotlib-0.91.2- 
py2.5-win32.egg\matplotlib\backends\tkagg.py", line 1, in <module>
 import _tkagg
ImportError: DLL load failed: The specified file could not be found.
Ideas?
thanks, Randy
From: Eric F. <ef...@ha...> - 2008年10月06日 00:50:58
I am getting very inconsistent timings when looking into plotting a line 
with a very large number of points. Axes.add_line() is very slow, and 
the time is taken by Axes._update_line_limits(). But when I simply run 
the latter, on a Line2D of the same dimensions, it can be fast.
import matplotlib
matplotlib.use('template')
import numpy as np
import matplotlib.lines as mlines
import matplotlib.pyplot as plt
ax = plt.gca()
LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
from time import time
t = time(); ax.add_line(LL); time()-t
###16.621543884277344
LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
t = time(); ax.add_line(LL); time()-t
###16.579419136047363
## We added two identical lines, each took 16 seconds.
LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
t = time(); ax._update_line_limits(LL); time()-t
###0.1733548641204834
## But when we made another identical line, updating the limits was
## fast.
# Below are similar experiments:
LL = mlines.Line2D(np.arange(1.5e6), 2*np.sin(np.arange(1.5e6)))
t = time(); ax._update_line_limits(LL); time()-t
###0.18362092971801758
## with a fresh axes:
plt.clf()
ax = plt.gca()
LL = mlines.Line2D(np.arange(1.5e6), 2*np.sin(np.arange(1.5e6)))
t = time(); ax._update_line_limits(LL); time()-t
###0.22244811058044434
t = time(); ax.add_line(LL); time()-t
###16.724560976028442
What is going on? I used print statements inside add_line() to verify 
that all the time is in _update_line_limits(), which runs one or two 
orders of magnitude slower when run inside of add_line than when run 
outside--even if I run the preceding parts of add_line first.
Eric
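For anyone wanting to reproduce the profiling mentioned elsewhere in this thread, a plain cProfile run (rather than the lsprofcalltree/kcachegrind route) looks roughly like this when saved and run as a script:

import cProfile
import pstats
import numpy as np
import matplotlib
matplotlib.use('template')
import matplotlib.lines as mlines
import matplotlib.pyplot as plt

ax = plt.gca()
LL = mlines.Line2D(np.arange(1.5e6), np.sin(np.arange(1.5e6)))
cProfile.run('ax.add_line(LL)', 'add_line.prof')   # ax and LL must be module globals
pstats.Stats('add_line.prof').sort_stats('cumulative').print_stats(15)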
From: Jae-Joon L. <lee...@gm...> - 2008年10月04日 23:57:21
John,
As you may know, you're reverting the change Michael made some time
ago. Michael said it is not a bug, but rather intended.
http://sourceforge.net/mailarchive/message.php?msg_id=6e8d907b0809031201p4bb0701eo23b3d294797a8766%40mail.gmail.com
So, I would appreciate it if you reiterate this with Michael before I
change my scripts back again.
Regards,
-JJ
On Thu, Sep 25, 2008 at 11:53 AM, John Hunter <jd...@gm...> wrote:
> On Thu, Sep 25, 2008 at 9:31 AM, Darren Dale <dsd...@gm...> wrote:
>> I noticed this morning that my Times and Palatino system fonts are not being
>> found anymore. I removed my fontManager.cache and ran my script with
>> verbose=debug, and it looks like creatFontDict found them, but then findfont
>> cant:
>
> I recently fixed another bug related to font finding when an explicit
> file name was passed -- I wonder if I broke a normal use case. It's a
> simple change shown in the diff below. Could you manually revert on
> your end and see if it makes a difference. If so, I'll have to find
> another solution to the problem I was fixing.
>
>
> johnh@flag:mpl> svn diff lib/matplotlib/font_manager.py -r6097:6098
> Index: lib/matplotlib/font_manager.py
> ===================================================================
> --- lib/matplotlib/font_manager.py (revision 6097)
> +++ lib/matplotlib/font_manager.py (revision 6098)
> @@ -955,7 +955,7 @@
> fname = prop.get_file()
> if fname is not None:
> verbose.report('findfont returning %s'%fname, 'debug')
> - return fname[0]
> + return fname
>
> if fontext == 'afm':
> fontdict = self.afmdict
>
From: Eric F. <ef...@ha...> - 2008年10月04日 07:21:06
Tony S Yu wrote:
> Hi Eric,
> 
> Sorry for the late reply.
> 
> On Sep 27, 2008, at 8:56 PM, Eric Firing wrote:
> 
>> Actually, I think the most logical thing would be to let the default 
>> None give the old behavior, and require precision=0 to get the new 
>> behavior. What do you think? Is it OK if I make this change? It 
>> is more consistent with the old behavior.
> 
> I'm ambivalent about this change. On one hand, I think it makes a lot 
> more sense to have None give the old behavior and precision=0 to 
> ignore zero values in the sparse array (then precision would be 
> consistent for finite values and for zero).
> 
> On the other hand, I think ignoring zero values should be the default 
> behavior for sparse arrays (although, I definitely agree there should 
> be the option to plot all assigned values).
> 
> Would it be possible to make the change you suggest and also change 
> the default precision value to 0? (see diff below) This change would 
> also allow you to remove a lot of the special handling for 
> precision=None, since precision=0 gives the same result (I didn't go 
> this far in the diff below).
Good point. I made that change, but then made precision='present' be 
the value for sparse arrays to show all filled cells. precision=None is 
deprecated, but converted to 0.
Eric
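A quick usage sketch of the two settings described above for a sparse matrix (assuming scipy.sparse is available; precision='present' is the spelling Eric mentions committing):

import numpy as np
import scipy.sparse as sp
import matplotlib.pyplot as plt

# A 5x5 COO matrix with one explicitly stored zero among its entries.
vals = np.array([0.0, 3.5, 1e-12])
rows = np.array([0, 1, 3])
cols = np.array([0, 2, 4])
m = sp.coo_matrix((vals, (rows, cols)), shape=(5, 5))

fig = plt.figure()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
ax1.spy(m, precision=0)           # marks only cells whose value is nonzero
ax2.spy(m, precision='present')   # marks every stored cell, even the zero
plt.show()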
From: Mátyás J. <mj...@gm...> - 2008年10月02日 19:58:46
Hi,
thank you very much! The memory leak disappeared after I updated
pygobject from 2.12 to 2.13.2.
From: Michael D. <md...@st...> - 2008年10月02日 13:53:12
matplotlib SVN trunk appears to be running backend_driver.py fine under 
Python-2.6 with Numpy SVN and TkAgg backend.
I had to make a handful of changes to remove deprecation warnings, none 
of which changes our "still supporting Python 2.4" policy. One change 
was required to pytz, which I've communicated upstream to its author.
In case anyone is wondering, there don't seem to be any measurable 
performance increases... :(
Cheers,
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
