matplotlib-devel — matplotlib developers

From: John H. <jd...@gm...> - 2008-11-25 19:45:55
On Tue, Nov 25, 2008 at 1:31 PM, Michael Droettboom <md...@st...> wrote:
> The trunk has effectively the same fix already in that additional code you
> point out. Its purpose is to make sure the zoom happens only once for each
> grouping. It could probably be done better, but it does work.
OK, thanks for the explanation. I've purged the invalid merge from the trunk.
JDH
From: Michael D. <md...@st...> - 2008-11-25 19:31:43
The change doesn't apply to the trunk. The shared axes logic is 
completely different now. Whereas before there was a unidirectional 
link from one axes to another, and a concept of "master" and "slave" 
axes, the new version avoids that complication by using the "Grouper" class.
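The "Grouper" class mentioned here can be sketched as a small disjoint-set structure. The following is a simplified stand-in written only to illustrate the idea, not matplotlib's actual cbook.Grouper (which, among other things, also uses weak references):

```python
class Grouper:
    """Minimal disjoint-set grouper: join() merges groups,
    joined() asks whether two objects share a group."""

    def __init__(self):
        self._mapping = {}  # object -> shared list of group members

    def join(self, a, *args):
        # Merge the groups of `a` and each of `args` into one group.
        group = self._mapping.setdefault(a, [a])
        for b in args:
            other = self._mapping.get(b, [b])
            if other is not group:
                group.extend(other)
                for member in other:
                    self._mapping[member] = group

    def joined(self, a, b):
        # True when a and b belong to the same group.
        return self._mapping.get(a) is self._mapping.get(b, object())

g = Grouper()
g.join("ax1", "ax2")
g.join("ax2", "ax3")
# g.joined("ax1", "ax3") is now True: sharing is symmetric, with no
# "master"/"slave" distinction between the axes.
```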
The bug fix was required on the branch because the zoom rectangle was 
getting "doubly applied", once for each axes, which (in effect) caused 
the zoom to go too far. The fix was simply to ignore one of the 
axes, in this case the "master" axes.
The trunk has effectively the same fix already in that additional code 
you point out. Its purpose is to make sure the zoom happens only once 
for each grouping. It could probably be done better, but it does work.
So the software engineering lesson here is that I should have remembered 
to merge my own change on the branch -- I would have known right away it 
didn't apply. (Actually, that's probably why I didn't merge it, but of 
course, you still have to let SVN know somehow that you don't want to 
merge the change...)
Mike
John Hunter wrote:
> On Tue, Nov 25, 2008 at 12:28 PM, John Hunter <jd...@gm...> wrote:
> 
>> On Tue, Nov 25, 2008 at 12:16 PM, John Hunter <jd...@gm...> wrote:
>> 
>>> pan/zoom appears to be broken in the sharex axis demo. If you do a
>>> zoom to rect on ax2 or ax3 in
>>> examples/pylab_examples/shared_axis_demo.py the event seems to be
>>> swallowed, though a zoom in ax1 is respected.
>>> 
>> The problem appears to be in the backend_bases
>> NavigationToolbar2.release_zoom method. I have updated svn r6447 with
>>
>> # JDH: I don't know why this is here but I expect to be
>> # able to zoom on any axis that is shared. This was
>> # breaking zoom-to-rect on shared_axis_demo if the zoom
>> # happened in ax2 or ax3 so I am replacing the continue
>> # with a pass until this is sorted out
>> if a._sharex or a._sharey:
>>     # continue
>>     pass
>>
>> If anyone knows why the continue was/should be there, speak up!
>> 
>
> OK, I think I see where this came in. I did an svnmerge the other day
> from the branch, and merged in Michael's change:
>
> r6365 | mdboom | 2008-11-05 09:15:28 -0600 (2008-11-05) | 1 line
>
> Fix bug in zoom rectangle with twin axes
>
> Michael, perhaps you can comment on this bugfix on the branch, and
> whether this change or something like it should be in the trunk? I
> see the trunk has some additional logic that the branch does not have:
>
> # detect twinx,y axes and avoid double zooming
> twinx, twiny = False, False
> if last_a:
>     for la in last_a:
>         if a.get_shared_x_axes().joined(a, la): twinx = True
>         if a.get_shared_y_axes().joined(a, la): twiny = True
>
>
> JDH
>
> -------------------------------------------------------------------------
> This SF.Net email is sponsored by the Moblin Your Move Developer's challenge
> Build the coolest Linux based applications with Moblin SDK & win great prizes
> Grand prize is a trip for two to an Open Source event anywhere in the world
> http://moblin-contest.org/redirect.php?banner_id=100&url=/
> _______________________________________________
> Matplotlib-devel mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel
> 
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: Jae-Joon L. <lee...@gm...> - 2008-11-25 19:03:23
I'm so sorry Erik. I missed your last email.
I just submitted your patch with a slight modification.
As far as I know, matplotlib still supports Python 2.4, and
conditional expressions were only introduced in 2.5.
Regards,
-JJ
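For context, this is the construct being avoided. The concrete variable and values below are only illustrative, not taken from Erik's patch:

```python
x = 3

# Python 2.5+ conditional expression (off-limits while 2.4 is supported):
value = "big" if x > 2 else "small"

# A 2.4-compatible rewrite using a plain if/else:
if x > 2:
    value_24 = "big"
else:
    value_24 = "small"

assert value == value_24 == "big"
```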
On Tue, Nov 11, 2008 at 9:22 PM, Erik Tollerud <eri...@gm...> wrote:
> Patch against today's svn is attached. Sorry for the long delay...
>
> Right now, "scatterpoints" is just set to 3 in Legend.__init__, but
> presumably that should be an rcParam...
>
> On Fri, Oct 31, 2008 at 2:42 AM, Jae-Joon Lee <lee...@gm...> wrote:
>> Sorry Erik.
>> Can you make a new patch against the current SVN?
>> Some of the patch was applied (but without scatterpoints option) in the SVN.
>> Thanks,
>>
>> -JJ
>>
>>
>>
>>
>> On Thu, Oct 30, 2008 at 1:58 PM, Erik Tollerud <eri...@gm...> wrote:
>>> No more thoughts on this? Or was some version of the patch committed?
>>>
>>> On Mon, Oct 20, 2008 at 12:16 PM, Erik Tollerud <eri...@gm...> wrote:
>>>> Actually, looking more closely, there is one thing that's still
>>>> bothering me: as it is now, it's impossible to have, say, 2 points
>>>> for plotted values, and 3 points for scatter plots on the same legend
>>>> (you have to give a numpoints=# command that's shared by everything in
>>>> the legend, if I'm understanding it). It'd be nice to have a
>>>> property, say, "scatterpoints" (and presumably then an associated
>>>> rcParam "legend.scatterpoints" ) that sets the number of points to use
>>>> for scatter plots. That way, I can make plots just like in the
>>>> original form, but it can also be the same number for both if so
>>>> desired. I've attached a patch based on the last one that does this,
>>>> although it probably needs to be changed to allow for an rcParam
>>>> 'legend.scatterplot' (I don't really know the procedure for adding a
>>>> new rcParam).
>>>>
>>>> On Mon, Oct 20, 2008 at 3:22 AM, Erik Tollerud <eri...@gm...> wrote:
>>>>> The current patch looks good to me... it satisfies all the use cases I
>>>>> had in mind, and I can't think of much else that would be wanted.
>>>>> Thanks!
>>>>>
>>>>> I also very much like the idea of the "sizebar," although that's
>>>>> probably a substantially larger job to implement. I may look into it
>>>>> though, time permitting...
>>>>>
>>>>> On Sat, Oct 18, 2008 at 7:04 PM, Jae-Joon Lee <lee...@gm...> wrote:
>>>>>>> To help clarify the original purpose of "update_from": I wrote this
>>>>>>> method when writing the original legend implementation so the legend
>>>>>>> proxy objects could easily copy their style attributes from the
>>>>>>> underlying objects they were a proxy for (so not every property is
>>>>>>> copied, eg the xdata for line objects is not copied). So the
>>>>>>> operating question should be: what properties do I need to copy to
>>>>>>> make the legend representation of the object. While you are in
>>>>>>> there, perhaps you could clarify this in the docstrings of the
>>>>>>> update_from method.
>>>>>>
>>>>>> Thanks for clarifying this, John.
>>>>>>
>>>>>> Manuel,
>>>>>> The patch looks good to me. We may submit the patch (I hope Erik is
>>>>>> okay with the current patch) and it would be great if you handle the
>>>>>> submission.
>>>>>>
>>>>>> -JJ
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Fri, Oct 17, 2008 at 9:45 PM, Manuel Metz <mm...@as...> wrote:
>>>>>>> Jae-Joon Lee wrote:
>>>>>>>> Thanks Manuel.
>>>>>>>>
>>>>>>>> Yes, we need rotation value and etc, but my point is, do we need to
>>>>>>>> update it within the update_from() method? Although my preference is
>>>>>>>> not to do it, it may not matter much as far as we state what this
>>>>>>>> method does clearly in the doc.
>>>>>>>
>>>>>>> Okay, it's probably better to create the object correctly (numsides ...)
>>>>>>> instead of copying the properties (see also JDHs mail !)
>>>>>>>
>>>>>>>> And, in your patch, I don't think updating the numsides value has any
>>>>>>>> effect as it does not recreate the paths.
>>>>>>>>
>>>>>>>> I'm attaching the revised patch. In this patch, update_from() only
>>>>>>>> update gc-related properties. And numsides, size, and rotations are
>>>>>>>> given during the object creation time.
>>>>>>>
>>>>>>> Yes, this looks better. But creating handle_sizes is a little too
>>>>>>> much effort -- this is done internally. It is enough to pass a sizes
>>>>>>> list, which may be shorter or longer than numpoints (see revised patch).
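The repeat-or-truncate rule for a sizes list shorter or longer than numpoints can be sketched as follows (a hypothetical helper for illustration, not the actual collections code):

```python
from itertools import cycle, islice

def resolve_sizes(sizes, numpoints):
    # A short sizes list is cycled to reach numpoints entries;
    # a long one is truncated to numpoints entries.
    return list(islice(cycle(sizes), numpoints))

resolve_sizes([20, 40], 5)         # -> [20, 40, 20, 40, 20] (repeated)
resolve_sizes([5, 10, 15, 20], 2)  # -> [5, 10] (truncated)
```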
>>>>>>>
>>>>>>> I also changed the way the yoffsets are updated in _update_positions().
>>>>>>>
>>>>>>> One additional thing I have in mind (for a later time) is a "sizesbar"
>>>>>>> similar to a colorbar where you can read off values corresponding to
>>>>>>> marker sizes...
>>>>>>>
>>>>>>> Cheers,
>>>>>>> Manuel
>>>>>>>
>>>>>>>> Erik,
>>>>>>>> I see your points. My main concern is that the yoffsets makes the
>>>>>>>> results a bit funny when numpoints is 2. The attached patch has a
>>>>>>>> varying sizes of [0.5*(max+min), max, min]. The yoffsets are only
>>>>>>>> introduced when numpoints > 2 and you can also provide it as an
>>>>>>>> optional argument.
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> -JJ
>>>>>>>>
>>>>>>>>
>>>>>>>> On Thu, Oct 16, 2008 at 8:43 PM, Manuel Metz <mm...@as...> wrote:
>>>>>>>>> Manuel Metz wrote:
>>>>>>>>>> Jae-Joon Lee wrote:
>>>>>>>>>>> Hi Manuel,
>>>>>>>>>>>
>>>>>>>>>>> I think it is a good idea to introduce the update_from method in Collections.
>>>>>>>>>>> But, I'm not sure if it is a good idea to also update sizes, paths and
>>>>>>>>>>> rotation (in RegularPolyCollection). My impression is that update_from
>>>>>>>>>>> method is to update gc related attributes. For comparison,
>>>>>>>>>>> Patch.update_from() does not update the path.
>>>>>>>>>> That's exactly the point why I wasn't fully happy with the patch. The
>>>>>>>>>> path is generated by the _path_generator, so instead of copying the path
>>>>>>>>>> it seems to be better to create an instance of the corresponding class
>>>>>>>>>> (e.g. the StarPolygonCollection class, as suggested before).
>>>>>>>>>>
>>>>>>>>>> One should update the rotation attribute (!!); it's only one number. A
>>>>>>>>>> '+' marker, for example, has rotation = 0, whereas a 'x' marker has
>>>>>>>>>> rotation=pi/4. That's the only difference between those two !
>>>>>>>>>>
>>>>>>>>>>> Also, is it okay to update properties without checking its length?. It
>>>>>>>>>>> does not seem to cause any problems though.
>>>>>>>>>> It's in principle not a problem to copy the sizes attribute without
>>>>>>>>>> checking the length. If it's shorter than the number of items the sizes
>>>>>>>>>> are repeated; if it's longer it gets truncated.
>>>>>>>>>>
>>>>>>>>>> mm
>>>>>>>>>>
>>>>>>>>>>> I guess it would be better to use xdata_markers than xdata in the
>>>>>>>>>>> get_handle() method. The difference is when numpoints==1. Using xdata
>>>>>>>>>>> gives two marker points.
>>>>>>>>>>>
>>>>>>>>>>> I was actually about to commit my patch. I'll try to incorporate your
>>>>>>>>>>> changes and post my version of the patch later today.
>>>>>>>>>>>
>>>>>>>>>>> Regards,
>>>>>>>>>>>
>>>>>>>>>>> -JJ
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Oct 15, 2008 at 4:07 PM, Manuel Metz <mm...@as...> wrote:
>>>>>>>>>>>> hmm
>>>>>>>>>>>>
>>>>>>>>>>>> -------- Original Message --------
>>>>>>>>>>>> Jae-Joon Lee wrote:
>>>>>>>>>>>>>> - the parameter numpoints should be used (it's ignored right now)
>>>>>>>>>>>>>>
>>>>>>>>>>>>> Thanks Manuel. I guess we can simply reuse xdata_marker for this purpose.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>> - Some private variables are accessed and a new RegularPolycollection is
>>>>>>>>>>>>>> created (does this work eg. with a StarPolygonCollection? I haven't
>>>>>>>>>>>>>> checked, but I don't think so !). Instead of creating a new
>>>>>>>>>>>>>> RegularPolyCollection it might be more useful to make a copy of the
>>>>>>>>>>>>>> existing object... I was thinking about a update_from() method for the
>>>>>>>>>>>>>> Collection class(es) similar to update_from() for lines.
>>>>>>>>>>>>>>
>>>>>>>>>>>>> By changing "RegularPolyCollection" to "type(handles)", it works for
>>>>>>>>>>>>> StarPolygonCollection.
>>>>>>>>>>>>>
>>>>>>>>>>>>> In Erik's current implementation, the markers in the legend have
>>>>>>>>>>>>> varying colors, sizes, and y offsets.
>>>>>>>>>>>>> The color variation seems fine. But do we need to vary the sizes and
>>>>>>>>>>>>> y-offsets? My inclination is to use a fixed size (median?) and a fixed
>>>>>>>>>>>>> y offset. What do Erik and others think?
>>>>>>>>>>>>>
>>>>>>>>>>>>> Regards,
>>>>>>>>>>>>>
>>>>>>>>>>>>> -JJ
>>>>>>>>>>>> Attached is my current version of the patch. I've moved all of the
>>>>>>>>>>>> properties-copying stuff to collections, which makes the changes to
>>>>>>>>>>>> legend.py clearer (but I'm not fully happy with the patch and
>>>>>>>>>>>> haven't committed anything yet)
>>>>>>>>>>>>
>>>>>>>>>>>> mm
>>>>>>>>>>>>
>>>>>>>>> Hi Jae-Joon,
>>>>>>>>> so here is my revised version of the patch. What do you think ?
>>>>>>>>>
>>>>>>>>> Manuel
>>>>>>>>>
>>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Erik Tollerud
>>>> Graduate Student
>>>> Center For Cosmology
>>>> Department of Physics and Astronomy
>>>> 2142 Frederick Reines Hall
>>>> University of California, Irvine
>>>> Office Phone: (949)824-2587
>>>> Cell: (651)307-9409
>>>> eto...@uc...
>>>>
>>>
>>>
>>>
>>
>
>
>
From: John H. <jd...@gm...> - 2008-11-25 18:33:23
On Tue, Nov 25, 2008 at 12:28 PM, John Hunter <jd...@gm...> wrote:
> On Tue, Nov 25, 2008 at 12:16 PM, John Hunter <jd...@gm...> wrote:
>> pan/zoom appears to be broken in the sharex axis demo. If you do a
>> zoom to rect on ax2 or ax3 in
>> examples/pylab_examples/shared_axis_demo.py the event seems to be
>> swallowed, though a zoom in ax1 is respected.
>
> The problem appears to be in the backend_bases
> NavigationToolbar2.release_zoom method. I have updated svn r6447 with
>
> # JDH: I don't know why this is here but I expect to be
> # able to zoom on any axis that is shared. This was
> # breaking zoom-to-rect on shared_axis_demo if the zoom
> # happened in ax2 or ax3 so I am replacing the continue
> # with a pass until this is sorted out
> if a._sharex or a._sharey:
>     # continue
>     pass
>
> If anyone knows why the continue was/should be there, speak up!
OK, I think I see where this came in. I did an svnmerge the other day
from the branch, and merged in Michael's change:
 r6365 | mdboom | 2008-11-05 09:15:28 -0600 (2008-11-05) | 1 line
 Fix bug in zoom rectangle with twin axes
Michael, perhaps you can comment on this bugfix on the branch, and
whether this change or something like it should be in the trunk? I
see the trunk has some additional logic that the branch does not have:
 # detect twinx,y axes and avoid double zooming
 twinx, twiny = False, False
 if last_a:
     for la in last_a:
         if a.get_shared_x_axes().joined(a, la): twinx = True
         if a.get_shared_y_axes().joined(a, la): twiny = True
JDH
From: John H. <jd...@gm...> - 2008-11-25 18:28:43
On Tue, Nov 25, 2008 at 12:16 PM, John Hunter <jd...@gm...> wrote:
> pan/zoom appears to be broken in the sharex axis demo. If you do a
> zoom to rect on ax2 or ax3 in
> examples/pylab_examples/shared_axis_demo.py the event seems to be
> swallowed, though a zoom in ax1 is respected.
The problem appears to be in the backend_bases
NavigationToolbar2.release_zoom method. I have updated svn r6447 with
 # JDH: I don't know why this is here but I expect to be
 # able to zoom on any axis that is shared. This was
 # breaking zoom-to-rect on shared_axis_demo if the zoom
 # happened in ax2 or ax3 so I am replacing the continue
 # with a pass until this is sorted out
 if a._sharex or a._sharey:
     # continue
     pass
If anyone knows why the continue was/should be there, speak up!
JDH
From: John H. <jd...@gm...> - 2008-11-25 18:16:42
pan/zoom appears to be broken in the sharex axis demo. If you do a
zoom to rect on ax2 or ax3 in
examples/pylab_examples/shared_axis_demo.py the event seems to be
swallowed, though a zoom in ax1 is respected.
I know Eric was recently working on autoscale support for sharex axes
 r6315 | efiring | 2008-10-23 19:08:58 -0500 (2008-10-23) | 2 lines
 Support autoscaling with shared axes
And perhaps the current bug is related to the problem Michael
wrote about earlier in the thread ("shared axes" bug in matplotlib 0.98):
 Back when the 0.98 transformations were being written, John and I
 had a long discussion about whether data limits should be Bbox-like
 or pair-of-intervals-like, and we ultimately decided to leave things
 as-is to avoid creating too much newness at once. IMHO, however,
 the real problem is that the shared axes mechanism doesn't know
 whether the limits are changing because of autoscaling (in which
 case the limits should be unioned together), or panning/zooming, in
 which case the limits need to be replaced. The second problem is
 probably necessary to fix whether we use Bboxes or not.
JDH
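The union-vs-replace distinction Michael draws can be sketched on plain intervals. This is a hypothetical illustration of the two behaviors, not matplotlib code:

```python
def autoscale_union(limits):
    # Autoscaling across shared axes: the data limits should be
    # unioned together into one covering interval.
    lows, highs = zip(*limits)
    return (min(lows), max(highs))

def pan_zoom_replace(old, new):
    # Panning/zooming: the new view limits simply replace the old ones.
    return new

assert autoscale_union([(0, 5), (2, 9)]) == (0, 9)
assert pan_zoom_replace((0, 9), (3, 4)) == (3, 4)
```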
Hey John and the rest of the MPL gang:
I've made the changes you suggested, but the problem is looking to be
deeper than it seemed. I'm also moving this conversation to
matplotlib-devel, since that's probably the more appropriate place for
it.
This updated patch allows for the creation of colormaps with various
alphas, but there is likely more work to be done so that mpl can
consistently make use of it (because it seems like all built-in cmaps
are RGB, not RGBA).
In trying to come up with an example that exercises the new
capabilities, I found out that methods like scatter and contourf modify
the colormap you give them and reset all of the alpha values to 1.
I think this is because inside collections, we pass self._alpha, which
is the Artist._alpha, and 1.0 by default, when making calls such
as:
 _colors.colorConverter.to_rgba_array(c, self._alpha)
...thus resetting all of the alpha values.
I was able to get around this by allowing collections to take on an
alpha value of None, and then passing alpha=None to scatter and
contourf, for example. There are probably other places where such a
change should be done, unless someone has a better idea for how to do
this. I updated examples/pylab/plot_scatter.py to show off the new
capability.
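The alpha-clobbering described above boils down to a scalar alpha overriding every color's own alpha. A stand-alone sketch of that behavior (not the real colorConverter.to_rgba_array, just the pattern at issue):

```python
def to_rgba_array(colors, alpha=None):
    # alpha=None leaves each color's own alpha intact; a scalar
    # alpha overrides all of them -- this is the clobbering above.
    if alpha is None:
        return [tuple(c) for c in colors]
    return [(r, g, b, alpha) for r, g, b, _ in colors]

colors = [(1.0, 0.0, 0.0, 0.2), (0.0, 1.0, 0.0, 0.8)]
to_rgba_array(colors)        # per-color alphas 0.2 and 0.8 survive
to_rgba_array(colors, 1.0)   # both alphas overridden to 1.0
```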
Another thing that I was unable to get around is that if you now make a
plot using the same colormap but omit the alpha=None parameter, or set
it to something other than None, it will reset the alpha values on the
previous plot:
 figure(2)
 c = scatter(theta, r, c=colors, s=area,cmap=myColormap,alpha=None)
will do the right thing, but calling scatter without alpha=None
 figure(3)
 d = scatter(theta, r, c=colors, s=area,cmap=myColormap)
or
 d = scatter(theta, r, c=colors, s=area,cmap=myColormap, alpha=.5)
will reset all of the alpha values in myColormap to 1 or .5.
You can do c.cmap._init() to reset its original alpha values, and if you
force a redraw on figure(2) (by panning or zooming on it, for example),
it will look right again. However, if you go and fiddle with figure(3)
(pan/zoom), and come back to figure(2), panning or zooming will
cause all of the alpha values to be reset again.
I'm not sure if it would be worth it to make a copy of the colormap to
prevent this from happening. Anyone have thoughts on this?
(the full example of this is commented with FIXME: in polar_scatter.py)
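The copy idea above can be sketched in isolation: give each plot its own copy of the colormap so one plot's mutation can't clobber another's. This uses a tiny stand-in class with mutable state, not the real Colormap, and is only a sketch of the general approach:

```python
import copy

class Colormap:
    # Stand-in with mutable state, mimicking the shared-LUT problem:
    def __init__(self):
        self.alphas = [0.1, 0.5, 0.9]

shared = Colormap()
private = copy.deepcopy(shared)   # each plot gets its own copy

private.alphas[:] = [1.0, 1.0, 1.0]  # one plot resets its alphas...
shared.alphas                        # ...but the original is untouched
```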
best,
 Paul Ivanov
John Hunter, on 2008-11-23 07:36, wrote:
> On Sun, Nov 23, 2008 at 2:01 AM, Paul Ivanov <piv...@gm...> wrote:
>> I took a stab at it, how does this look?
>>
>> I also took the liberty of adding alpha to LinearSegmentedColormap and
>> updated its docstring changing two somewhat ambiguous uses of the word
>> 'entry' with 'key' and 'value'.
> 
> Hey Paul,
> 
> Thanks for taking this on. I haven't tested this but I read the patch
> and have some inline comments below. Some additional comments:
> 
> * the patch should include a section in the CHANGELOG and
> API_CHANGES letting people know what is different.
> 
> * you should run examples/tests/backend_driver.py and make sure all
> the examples still run, checking the output of some of the mappable
> types (images, scatter, pcolor...)
> 
> * it would be nice to have an example in the examples dir which
> exercises the new capabilities.
> 
> See also, in case you haven't,
> http://matplotlib.sourceforge.net/devel/coding_guide.html, which
> covers some of this in more detail.
> 
> Thanks again! Comments below:
> 
> Index: lib/matplotlib/colors.py
> ===================================================================
> --- lib/matplotlib/colors.py	(revision 6431)
> +++ lib/matplotlib/colors.py	(working copy)
> @@ -452,7 +452,7 @@
> self._isinit = False
> 
> 
> - def __call__(self, X, alpha=1.0, bytes=False):
> + def __call__(self, X, alpha=None, bytes=False):
> """
> *X* is either a scalar or an array (of any dimension).
> If scalar, a tuple of rgba values is returned, otherwise
> @@ -466,9 +466,10 @@
> """
> You need to document what alpha can be here: what does None mean, can
> it be an array, scalar, etc...
> 
> if not self._isinit: self._init()
> - alpha = min(alpha, 1.0) # alpha must be between 0 and 1
> - alpha = max(alpha, 0.0)
> - self._lut[:-3, -1] = alpha
> + if alpha:
> 
> I prefer to explicitly use "if alpha is None", since there are other
> things that would test False (0, [], '') that you probably don't mean.
> 
> + alpha = min(alpha, 1.0) # alpha must be between 0 and 1
> + alpha = max(alpha, 0.0)
> 
> You should be able to use np.clip(alpha, 0, 1) here, but we should
> consider instead raising for illegal alpha values since this will be
> more helpful to the user. I realize some of this is inherited code
> from before your changes, but we can improve it while making this
> patch.
> 
> + self._lut[:-3, -1] = alpha
> mask_bad = None
> if not cbook.iterable(X):
> vtype = 'scalar'
> @@ -558,9 +559,10 @@
> def __init__(self, name, segmentdata, N=256):
> """Create color map from linear mapping segments
> 
> - segmentdata argument is a dictionary with a red, green and blue
> - entries. Each entry should be a list of *x*, *y0*, *y1* tuples,
> - forming rows in a table.
> + segmentdata argument is a dictionary with red, green and blue
> + keys. An optional alpha key is also supported. Each value
> + should be a list of *x*, *y0*, *y1* tuples, forming rows in a
> + table.
> 
> Example: suppose you want red to increase from 0 to 1 over
> the bottom half, green to do the same over the middle half,
> @@ -606,6 +608,8 @@
> self._lut[:-3, 0] = makeMappingArray(self.N,
> self._segmentdata['red'])
> self._lut[:-3, 1] = makeMappingArray(self.N,
> self._segmentdata['green'])
> self._lut[:-3, 2] = makeMappingArray(self.N,
> self._segmentdata['blue'])
> + if self._segmentdata.has_key('alpha'):
> + self._lut[:-3, 3] = makeMappingArray(self.N,
> self._segmentdata['blue'])
> 
> Is this what you meant? I think you would use 'alpha' rather than
> 'blue' here, no?
> 
> self._isinit = True
> self._set_extremes()
> 
> @@ -664,11 +668,10 @@
> 
> 
> def _init(self):
> - rgb = np.array([colorConverter.to_rgb(c)
> + rgba = np.array([colorConverter.to_rgba(c)
> for c in self.colors], np.float)
> self._lut = np.zeros((self.N + 3, 4), np.float)
> - self._lut[:-3, :-1] = rgb
> - self._lut[:-3, -1] = 1
> + self._lut[:-3] = rgba
> self._isinit = True
> self._set_extremes()
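Putting the review together, a segmentdata dictionary with the new optional 'alpha' key might look like the following. The values are illustrative only; the red/green/blue rows follow the "increase over the bottom/middle/top half" example from the docstring quoted above:

```python
# Each entry is a list of (x, y0, y1) rows; x must run from 0 to 1.
segmentdata = {
    'red':   [(0.0, 0.0, 0.0), (0.5, 1.0, 1.0), (1.0, 1.0, 1.0)],
    'green': [(0.0, 0.0, 0.0), (0.25, 0.0, 0.0),
              (0.75, 1.0, 1.0), (1.0, 1.0, 1.0)],
    'blue':  [(0.0, 0.0, 0.0), (0.5, 0.0, 0.0), (1.0, 1.0, 1.0)],
    # The new optional alpha key: fade to 30% opacity in the middle.
    'alpha': [(0.0, 1.0, 1.0), (0.5, 0.3, 0.3), (1.0, 1.0, 1.0)],
}
```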
From: Stan W. <sta...@nr...> - 2008-11-24 15:10:38
> -----Original Message-----
> From: John Hunter [mailto:jd...@gm...] 
> Sent: Friday, November 21, 2008 09:23
> 
> My main comment is to not try and reuse subplot for this.
...
> You want your grids to be irregular, so make a new subclass 
> of Axes that acts the way you want.
Understood. I appreciate the feedback.
From: Sandro T. <mo...@de...> - 2008-11-23 10:31:00
Attachments: README.debian
Hello John & Devs :)
first of all, let me apologize for the enormous delay in this reply; it
seems I've lost it somewhere :(
On Sat, Nov 1, 2008 at 21:14, John Hunter <jd...@gm...> wrote:
> On Sat, Nov 1, 2008 at 5:37 AM, Sandro Tosi <mo...@de...> wrote:
>> Hello guys!
>> Following up the discussion Benjamin and I had about a couple of
>> bugs in Ubuntu[1] and Debian[2], and what Mike wrote on [1], we'd like
>> to explore the possibility for you to develop a "backend=Auto" mode,
>> that can discover automatically at runtime the backend to use from the
>> ones available (in case of multiple backends, let's use a priority
>> list [gtk, qt, tk, ...]).
>
> This should be fairly easy to implement -- we already do something
> like this at build time when we choose the default backend to put into
> the template. FYI, since you are a packager, I want to clarify what
> happens at build time. We have a file called setup.cfg.template which
> is a template for people who want to override the default behavior.
> You can copy this to setup.cfg and choose what default backend you
> want, and the setup program will create an rc configured accordingly.
> But if someone has not done this, the setup script will look (at build
> time) for gtk, wx, and tk, and set the backend in order of increasing
> preference: Agg, TkAgg, WXAgg, GTKAgg. The file matplotlibrc.template
> is then used to generate the default matplotlibrc, with this backend
> selected. This matplotlibrc is installed to matplotlib/mpl-data/ and
> is used as the default config if the user does not override it.
>
> As a debian/ubuntu packager, you will probably want to use setup.cfg
> and set your backend manually. You may want to use TkAgg since in
> some ways this is the simplest backend to use in the most contexts,
> primarily because it does not require special threading calls to work
> interactively in the python shell -- see
> http://matplotlib.sf.net/installing.html
yeah, we are doing (since a couple of revisions) exactly this: we
choose Tk as the default backend in setup.cfg so that even
/etc/matplotlibrc has that param set.
> But an "Auto" feature would be useful in other contexts too. One area
> is when matplotlib is embedded in a GUI IDE matlab-like application.
> There are several of these that are being worked on in different user
> interfaces, primarily using the new embeddable ipython, and the
> concern there is that the user may be using one application which
> embeds a python shell, and when users import pylab in that shell, they
> ought not to have to think: "now I am using python in a wx app, so I
> need to set wxagg" but in other scenarios, "now I am using pylab in a
> plain python shell so use tkagg"
>
> The auto search algorithm should go something like the following:
>
> * if tkinter, wx, gtk or qt has already been imported in sys.modules,
> use that backend - Gael has already an implementation in the pyplot
> module using the rc param 'backend_fallback'
>
> * if backend = 'auto': try in order of preference: tkagg (most
> likely to work in most contexts), gtkagg, wxagg, qtagg. This order
> could easily be configurable
>
> * if none of the UIs are available in 'auto' mode, issue a warning
> and set 'agg'
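The auto-search algorithm John outlines above can be sketched as follows. This is hypothetical code, not matplotlib's actual implementation; the set of already-imported modules and the import check are injected as parameters so the example stays self-contained:

```python
import warnings

# Preference order and GUI-module-to-backend mapping are illustrative.
_PREFERENCE = ["tkagg", "gtkagg", "wxagg", "qtagg"]
_GUI_MODULES = {"tkinter": "tkagg", "gtk": "gtkagg",
                "wx": "wxagg", "PyQt4": "qtagg"}

def pick_backend(imported_modules, can_import):
    # 1. A GUI toolkit that is already imported wins outright.
    for module, backend in _GUI_MODULES.items():
        if module in imported_modules:
            return backend
    # 2. Otherwise walk the (configurable) preference order.
    for backend in _PREFERENCE:
        if can_import(backend):
            return backend
    # 3. No UI available: warn and fall back to the non-interactive Agg.
    warnings.warn("no GUI toolkit found; falling back to 'agg'")
    return "agg"
```

In real use, step 1 would check sys.modules and step 2 would attempt actual imports; passing them in keeps the sketch testable.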
>
> If I were packaging for ubuntu, I would select tkagg as the default,
> so you don't have to wade into the GNOME vs KDE arena and because it
> works out of the box in the most contexts and is a pretty small
> dependency. You could modify the matplotlib runtime so that when the
> .matplotlib directory is created (the first time mpl is run) it issues
> a message like
>
> Creating config directory ~/.matplotlib. The default user interface
> toolkit for matplotlib is TkInter via the "TkAgg backend" (see
> http://matplotlib.sourceforge.net/faq/installing_faq.html#id1). To
> use other backends, you will need to install additional ubuntu
> dependencies.
> For GTKAgg, install python-gtk, for WXAgg, install python-wxgtk, etc..."
We did a similar thing, but at the packaging level: we added a
"notification" message at package installation that clarifies TkAgg is
the default backend, and points to a simple doc file we created for
changing it to a different backend, at either machine or user level (the
file is attached, in case you would like to provide some
corrections/enhancements).
>> Personally, I think we can even attack the problem with a different
>> solution: continue to ship all the mpl files in the "main" package
>> (python-matplotlib in Debian & Ubuntu) and then create some "dummy"
>> packages that simply depend on the Python GUI bindings (let's call
>> them python-matplotlib-<ui>), each of them providing a virtual
>> package, let's call it python-matplotlib-backend. If python-matplotlib
>> depends on python-matplotlib-gtk OR python-matplotlib-backend, any
>> backend installed can satisfy that dependency (with the default being
>> gtk).
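The quoted dummy-package scheme could look roughly like this in a debian/control fragment; the package names and the `python-numpy` placeholder are illustrative, not the actual Debian packaging:

```
Package: python-matplotlib
Depends: python-numpy, python-matplotlib-gtk | python-matplotlib-backend

Package: python-matplotlib-gtk
Depends: python-gtk2
Provides: python-matplotlib-backend

Package: python-matplotlib-tk
Depends: python-tk
Provides: python-matplotlib-backend
```

Any installed `python-matplotlib-<ui>` then satisfies the alternative dependency via the virtual package, with gtk pulled in by default when none is present.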
>
> That should work fine, but I advise installing all of mpl and using
> these dummies only for dependencies.
>
>
>> Both of them have cons: the first poses packaging problems for us,
>> and neither solves the problem of not choosing a
>> default (or of requiring another package (the chosen backend) to be
>> specified when installing python-matplotlib); moreover, what should
>> other packages depending on python-matplotlib do after this change
>> (they expect mpl to Just Work)?
>
> Well, if the package that depends on mpl requires for example wx and
> the wx backend, then it is enough for them to get a full mpl install
> and handle the wx dependency in their package. They would need to
> make sure that the right mpl backend is selected when they import mpl,
> but they can do that with the use directive.
>
>> Another solution (that would save most of the current work done),
>> almost the same as used today, is: keep doing the same thing as
>> now, but do not install any Python GUI bindings; instead, pop up a
>> window at python-matplotlib install time to ask the user which binding
>> to use (then create an ad-hoc /etc/matplotlibrc file with that
>> "backend" set) and then ask to install the correct Python binding for
>> the chosen backend. A lighter version is: keep choosing gtk as the
>> default backend, and clearly document (even at install time) how to
>> change backend.
>
> This is in line with what I was suggesting, though I was suggesting it
> at mpl first run time. Either way could work.
>
> I do see that this is a problem: a colleague of mine with a new ubuntu
> 8.10 box installed python-matplotlib as one of his first packages, and
> it brought in 280 MB of packages, including several UI toolkits and a
> full tex distribution. I think the packagers are being
> over-inclusive. For optional dependencies like usetex, which most people
> do not use, and optional backends, it would be better to have a clear
> set of instructions for users who want these optional features to
> simply apt-get install the optional dependencies themselves.
Yeah, we are trying to address this problem, which I will try to explain
here in order to get your suggestions:
- texlive (LaTeX distribution) is brought in because of dvipng; I
think we can safely move this package to a place where it's not
automatically installed, since it's not a strict dependency
- other GUI libraries (like libgtk2.0.0) are brought in because the
package contains .so files that link to those libraries, and this
situation cannot be avoided (leaving the package as it is)
- with a recent change, we avoid installing the Python gtk2
bindings if python-tk is already installed, so that whole set of
packages will not be installed by default
These changes should greatly reduce the number of packages to be
installed (in particular the texlive stuff).
Thanks for your attention,
-- 
Sandro Tosi (aka morph, Morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
From: Jozef V. <ve...@gj...> - 2008-11-22 19:18:03
Sorry, a mistake sneaked in; the correct patch is:
--- axes.py.old 2008-11-22 18:15:17.000000000 +0100
+++ axes.py 2008-11-22 18:24:09.000000000 +0100
@@ -1480,6 +1480,11 @@
         """
         # if image data only just use the datalim
         if not self._autoscaleon: return
+
+        if iterable(self._autoscaleon):
+            scalex = scalex and self._autoscaleon[0]
+            scaley = scaley and self._autoscaleon[1]
+
         if scalex:
             xshared = self._shared_x_axes.get_siblings(self)
             dl = [ax.dataLim for ax in xshared]
From: Jozef V. <ve...@gj...> - 2008-11-22 18:53:15
Hello matplotlib developers,
I found it useful to autoscale just one axis.
Here is a quick, noninvasive patch to illustrate the idea.
A real solution should change the whole autoscale_on semantics
(get_, set_, docstrings). As I am not very familiar with
your API standards, I leave that to you.
Jozef Vesely 
ve...@gj...
--- axes.py.old 2008-11-22 18:15:17.000000000 +0100
+++ axes.py 2008-11-22 18:24:09.000000000 +0100
@@ -1480,6 +1480,11 @@
         """
         # if image data only just use the datalim
         if not self._autoscaleon: return
+
+        if iterable(self._autoscale_on):
+            scalex = scalex and self._autoscale_on[0]
+            scaley = scaley and self._autoscale_on[1]
+
         if scalex:
             xshared = self._shared_x_axes.get_siblings(self)
             dl = [ax.dataLim for ax in xshared]
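The per-axis masking the patch introduces can be sketched in isolation; the helper name and standalone form here are mine, not mpl API (the patch does this inline against self._autoscaleon):

```python
def mask_autoscale(autoscale_on, scalex=True, scaley=True):
    """Mirror the patch's logic: a (bool, bool) pair restricts
    autoscaling per axis; a plain bool keeps the old all-or-nothing
    behaviour."""
    try:
        on_x, on_y = autoscale_on           # iterable: per-axis flags
    except TypeError:
        if not autoscale_on:                # old 'if not self._autoscaleon: return'
            return False, False
        return scalex, scaley
    return scalex and on_x, scaley and on_y

# e.g. mask_autoscale((True, False)) autoscales only the x axis
```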
From: John H. <jd...@gm...> - 2008-11-21 14:24:15
On Fri, Nov 21, 2008 at 8:17 AM, Benoit Zuber <bz...@mr...> wrote:
> Sorry, I did not realise it (I suppose, I did not quite know what backend
> means, now I checked it on wikipedia ;-) . Running the script with
> --verbose-helpful told me that the backend was GTKAgg version 2.10.1
Here is some additional documentation of backends from the mpl perspective:
 http://matplotlib.sourceforge.net/faq/installing_faq.html#id1
JDH
From: John H. <jd...@gm...> - 2008-11-21 14:23:07
On Fri, Nov 21, 2008 at 8:08 AM, Stan West <sta...@nr...> wrote:
> While I check out the mplsizer toolkit, I'm still interested in any feedback
> on my ideas for subplot layout features. Does anyone have any critiques,
> concerns, preferences, suggestions, etc., to voice? Thanks.
My main comment is not to try to reuse subplot for this. Subplot is
a very thin wrapper around Axes which handles layout on a regular grid.
You want your grids to be irregular, so make a new subclass of Axes
that acts the way you want. This will be easier than trying to tack
extra complexity on top of subplot.
We can then expose it to the toplevel with
 ax = fig.add_your_new_axes(whatever)
and to pyplot.
From: Benoit Z. <bz...@mr...> - 2008-11-21 14:17:54
>
> Well, you are still using some backend, probably a GUI one, even if no
> figure pops up. You can run your script with --verbose-helpful to see
> what is happening.
> 
Sorry, I did not realise it (I suppose, I did not quite know what 
backend means, now I checked it on wikipedia ;-) . Running the script 
with --verbose-helpful told me that the backend was GTKAgg version 2.10.1
Best regards,
Ben
From: Stan W. <sta...@nr...> - 2008-11-21 14:08:54
While I check out the mplsizer toolkit, I'm still interested in any feedback
on my ideas for subplot layout features. Does anyone have any critiques,
concerns, preferences, suggestions, etc., to voice? Thanks.
Stan
From: John H. <jd...@gm...> - 2008-11-21 13:55:34
On Fri, Nov 21, 2008 at 7:53 AM, Benoit Zuber <bz...@mr...> wrote:
>
>> If you comment out agg you are using a gui backend presumably (which
>> one) and most of these are known to have some leaks, some of which are
>> beyond our control.
>
> This leak happened without any gui backend when I ran the script from the
> csh prompt like that:
>> python script.py
Well, you are still using some backend, probably a GUI one, even if no
figure pops up. You can run your script with --verbose-helpful to see
what is happening.
>> After
>> reading your post, I am not clear if you still have a problem or not.
>> From the data you posted, it appears that agg is not leaking in your
>> example.
>>
>
> It is not a problem anymore, using the 'Agg' solved the problem.
Great
JDH
From: Benoit Z. <bz...@mr...> - 2008-11-21 13:53:58
> If you comment out agg you are using a gui backend presumably (which
> one) and most of these are known to have some leaks, some of which are
> beyond our control. 
This leak happened without any gui backend when I ran the script from 
the csh prompt like that:
 > python script.py
>
> When you say you are working interactively, do you mean from the
> python or ipython shell? 
Yes, from ipython shell.
> After
> reading your post, I am not clear if you still have a problem or not.
> From the data you posted, it appears that agg is not leaking in your
> example.
> 
 It is not a problem anymore, using the 'Agg' solved the problem.
Thanks.
Ben
From: John H. <jd...@gm...> - 2008-11-21 13:24:50
On Fri, Nov 21, 2008 at 7:04 AM, Benoit Zuber <bz...@mr...> wrote:
>
>> > I posted this on matplotlib-users list, but got no reply. I guess that
>> > bugs should rather be reported here...
>>
>> Could you post a *complete* script that demonstrates the leak, eg one
>> that calls the function and does any other cleanup? Does it help to
>> use gc.collect between function calls?
>
> Thanks for your reply. Here is the complete script (I was running the
> previous one interactively).
> In fact, I realised that the memory leak is not total... I mean that the
> RAM gets loaded during the first two iterations, which correspond to a
> load of 1.9Gb (I have 4Gb RAM in total). Then the RAM usage remains
> absolutely stable.
>
> I then tried to run this script interactively in ipython. Once the
> script ends, the RAM is not released (1.9Gb are still used).
> Nevertheless, when I call fa() once again, the memory load remains the
> same. So this leak does not lead to a crash, which is fine.
>
> Finally if I comment "matplotlib.use('Agg')", then the load is
> increasing during each iteration, saturating the RAM, and starting
> filling up the swap. In this case the output of the script is :
If you comment out agg you are using a gui backend presumably (which
one?) and most of these are known to have some leaks, some of which are
beyond our control. Michael has recently made some changes to
significantly reduce a gtk leak.
When you say you are working interactively, do you mean from the
python or ipython shell? ipython holds a reference to the names in the
main module namespace, which could be preventing a gc cleanup. After
reading your post, I am not clear if you still have a problem or not.
From the data you posted, it appears that agg is not leaking in your
example.
JDH
From: Benoit Z. <bz...@mr...> - 2008-11-21 13:05:00
> > I posted this on matplotlib-users list, but got no reply. I guess that
> > bugs should rather be reported here...
> 
> Could you post a *complete* script that demonstrates the leak, eg one
> that calls the function and does any other cleanup? Does it help to
> use gc.collect between function calls?
Thanks for your reply. Here is the complete script (I was running the
previous one interactively).
In fact, I realised that the memory leak is not total... I mean that the
RAM gets loaded during the first two iterations, which correspond to a
load of 1.9Gb (I have 4Gb RAM in total). Then the RAM usage remains
absolutely stable. 
I then tried to run this script interactively in ipython. Once the
script ends, the RAM is not released (1.9Gb are still used).
Nevertheless, when I call fa() once again, the memory load remains the
same. So this leak does not lead to a crash, which is fine.
Finally if I comment "matplotlib.use('Agg')", then the load is
increasing during each iteration, saturating the RAM, and starting
filling up the swap. In this case the output of the script is :
9
9
9
9
9
Cheers,
Ben
import numpy as np
import matplotlib
matplotlib.use('Agg')
from matplotlib import pylab
import gc

def fa():
    a = np.arange(1024**2)
    a = a.reshape(1024, 1024)
    for i in range(5):
        filename = "memleak%d" % (i)
        pylab.pcolor(a)
        pylab.savefig(filename)
        pylab.close()
        print gc.collect()

fa()
This outputs:
0
0
0
0
0
From: John H. <jd...@gm...> - 2008-11-21 11:44:17
On Fri, Nov 21, 2008 at 4:24 AM, Benoit Zuber <bz...@mr...> wrote:
> Hi,
> I posted this on matplotlib-users list, but got no reply. I guess that
> bugs should rather be reported here...
Could you post a *complete* script that demonstrates the leak, eg one
that calls the function and does any other cleanup? Does it help to
use gc.collect between function calls?
JDH
From: Benoit Z. <bz...@mr...> - 2008-11-21 10:24:11
Hi,
I posted this on matplotlib-users list, but got no reply. I guess that 
bugs should rather be reported here...
I have noticed a memory leak when using pylab.pcolor. Here is the code,
fa() and fb() do the same thing. The difference is the size of the array
which is passed to pcolor. With a large array pcolor leaks but not with a
small one.
Cheers,
Ben
import numpy as np
import matplotlib
matplotlib.use('Agg')
from matplotlib import pylab

def fa():
    """ This function leaks.
    """
    a = np.arange(1024**2)
    a = a.reshape(1024, 1024)
    for i in range(1):
        pylab.pcolor(a)
        pylab.close()

def fb():
    """This function does not leak.
    """
    b = np.arange(1024)
    b = b.reshape(32, 32)
    for i in range(1024):
        pylab.pcolor(b)
        pylab.close()
From: Eric F. <ef...@ha...> - 2008-11-20 02:01:24
Mike (or other transforms aficionados),
The thread "[Matplotlib-users] Bug saving semilogy plots with a axvline" 
started by js...@fc... pointed to a bug that appears to be deep in 
the transforms code. My head is spinning. The problem seems to be 
related to the propagation of the _invalid attribute in transforms, in 
the case of a mixed data/axes transform such as axhline uses. The 
following one-line change in TransformNode, second line from the bottom, 
works:
    def invalidate(self):
        """
        Invalidate this :class:`TransformNode` and all of its
        ancestors. Should be called any time the transform changes.
        """
        # If we are an affine transform being changed, we can set the
        # flag to INVALID_AFFINE_ONLY
        value = (self.is_affine) and self.INVALID_AFFINE or self.INVALID

        # Shortcut: If self is already invalid, that means its parents
        # are as well, so we don't need to do anything.
        if self._invalid == value:
            return

        if not len(self._parents):
            self._invalid = value
            return

        # Invalidate all ancestors of self using pseudo-recursion.
        parent = None
        stack = [self]

        while len(stack):
            root = stack.pop()
            # Stop at subtrees that have already been invalidated
            if root._invalid != value or root.pass_through:
                root._invalid = self.INVALID  # value <===========
                stack.extend(root._parents.keys())
Now, I know this is the wrong solution, because it defeats all the 
cleverness with the _invalid values; but perhaps it will save you a few 
minutes in finding the right solution.
To reproduce the problem, do this in ipython -pylab:
axhline(5)
yscale('log')
ylim(0.5,30)
Eric
From: Anne A. <per...@gm...> - 2008-11-18 14:45:51
2008-10-16 Rob Hetland <he...@ta...>:
>
>
> On Oct 13, 2008, at 4:59 PM, Anne Archibald wrote:
>
>> but I have written a simple line
>> integral convolution operator I'd be happy to contribute.
>
>
> Anne-
>
> I would be interested in seeing the code, regardless of where it finds a
> home.
>
> Would you mind sharing, or would you rather wait?
Well, just now I'm absolutely swamped with other work, but I thought
it would be good to make the code I have available:
http://www.scipy.org/Cookbook/LineIntegralConvolution
Good luck with it!
Anne
From: Val S. <vsc...@CC...> - 2008-11-17 18:30:44
Fair enough.
Thanks!
On Nov 17, 2008, at 1:06 PM, Eric Firing wrote:
> vschmidt wrote:
>> I'm hoping you can help me confirm/deny a bug in pylab's date2num() 
>> function. My assumption (this may be wrong) is that this function 
>> is meant to be
>> compatible with the MATLAB function datenum(). However, in ipython 
>> I can
>> execute:
>> ---------
>> import datetime
>> import pylab as p
>> dts = datetime.datetime.now()
>> serialts = p.date2num(dts)
>> print dts
>> 2008-11-16 12:03:20.914480
>> print serialts
>> 733362.502325
>> ------------
>> If I then copy this serialts value into MATLAB I get:
>> ----------
>> datestr(733362.502325)
>> 16-Nov-2007 12:03:20
>> ----------
>> Note that the year is off by one.
>
> Evidently date2num was designed to be similar, but not identical, to 
> Matlab's datenum. (The difference might have been inadvertent.) 
> Matlab's documentation says,
>
> A serial date number represents the whole and fractional number of 
> days from a specific date and time, where datenum('Jan-1-0000 
> 00:00:00') returns the number 1. (The year 0000 is merely a 
> reference point and is not intended to be interpreted as a real year 
> in time.)
>
> And mpl's says,
>
>
> return value is a floating point number (or sequence of floats)
> which gives number of days (fraction part represents hours,
> minutes, seconds) since 0001-01-01 00:00:00 UTC
>
> So they simply have a different origin. I find calendars endlessly 
> confusing, and I make no attempt to delve into them; but I dimly 
> recall that there is a year 1, but there is no year 0, so perhaps 
> that is an advantage of the mpl version--not that it should matter 
> in practice.
>
> I think the conclusion is that this sort of date number should be 
> considered suitable for internal use only, and not used as an 
> interchange format; for going from one software system to another, 
> one must use a genuine standard supported by both.
>
> Eric
>
>
------------------------------------------------------
Val Schmidt
CCOM/JHC
University of New Hampshire
Chase Ocean Engineering Lab
24 Colovos Road
Durham, NH 03824
e: vschmidt [AT] ccom.unh.edu
m: 614.286.3726
From: Eric F. <ef...@ha...> - 2008-11-17 18:06:30
vschmidt wrote:
> I'm hoping you can help me confirm/deny a bug in pylab's date2num() function. 
> 
> My assumption (this may be wrong) is that this function is meant to be
> compatible with the MATLAB function datenum(). However, in ipython I can
> execute:
> 
> ---------
> import datetime
> import pylab as p
> 
> dts = datetime.datetime.now()
> serialts = p.date2num(dts)
> 
> print dts
> 2008-11-16 12:03:20.914480
> 
> print serialts
> 733362.502325
> ------------
> 
> If I then copy this serialts value into MATLAB I get:
> 
> ----------
> datestr(733362.502325)
> 16-Nov-2007 12:03:20
> ----------
> 
> Note that the year is off by one.
Evidently date2num was designed to be similar, but not identical, to 
Matlab's datenum. (The difference might have been inadvertent.) Matlab's 
documentation says,
A serial date number represents the whole and fractional number of days 
from a specific date and time, where datenum('Jan-1-0000 00:00:00') 
returns the number 1. (The year 0000 is merely a reference point and is 
not intended to be interpreted as a real year in time.)
And mpl's says,
 return value is a floating point number (or sequence of floats)
 which gives number of days (fraction part represents hours,
 minutes, seconds) since 0001-01-01 00:00:00 UTC
So they simply have a different origin. I find calendars endlessly 
confusing, and I make no attempt to delve into them; but I dimly recall 
that there is a year 1, but there is no year 0, so perhaps that is an 
advantage of the mpl version--not that it should matter in practice.
I think the conclusion is that this sort of date number should be 
considered suitable for internal use only, and not used as an 
interchange format; for going from one software system to another, one 
must use a genuine standard supported by both.
Eric
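For reference, the two epochs differ by a fixed 366 days (MATLAB's notional year 0 is a leap year in the proleptic calendar). A sketch, assuming matplotlib's classic date2num equals datetime.toordinal() plus the fractional day, so that 0001-01-01 00:00 maps to 1.0 as the quoted docstring implies; `mpl_num` and `to_matlab_datenum` are illustrative helpers, not library functions:

```python
import datetime

def mpl_num(dt):
    # days since 0001-01-01 00:00 UTC, matching toordinal() plus the
    # fractional part of the day
    day_start = datetime.datetime(dt.year, dt.month, dt.day)
    frac = (dt - day_start).total_seconds() / 86400.0
    return dt.toordinal() + frac

def to_matlab_datenum(mpl_value):
    # MATLAB's datenum('Jan-1-0000') == 1, putting its epoch 366 days
    # earlier than matplotlib's classic one
    return mpl_value + 366

d = datetime.datetime(2008, 11, 16, 12, 3, 20)
# mpl_num(d) reproduces the ~733362.5023 seen in the thread; adding 366
# gives the value MATLAB's datestr would render back in November 2008.
```

This also explains the off-by-one-year symptom: feeding an unshifted matplotlib number to datestr lands 366 days early.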
> 
> 
