I'm hoping you can help me confirm/deny a bug in pylab's date2num() function. My assumption (this may be wrong) is that this function is meant to be compatible with the MATLAB function datenum(). However, in ipython I can execute: --------- import datetime import pylab as p dts = datetime.datetime.now() serialts = p.date2num(dts) print dts 2008-11-16 12:03:20.914480 print serialts 733362.502325 ------------ If I then copy this serialts value into MATLAB I get: ---------- datestr(733362.502325) 16-Nov-2007 12:03:20 ---------- Note that the year is off by one. -- View this message in context: http://www.nabble.com/Bug-in-pylab.date2num-tp20527541p20527541.html Sent from the matplotlib - devel mailing list archive at Nabble.com.
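For anyone wanting to reproduce this, here is a minimal sketch of the suspected cause, assuming matplotlib's date2num of this era counts days from 0001-01-01 (value 1.0 at that date) while MATLAB's datenum puts 1.0 at 0000-01-01 and treats year 0 as a leap year; on that assumption the two serials differ by a constant 366 days, which is exactly a one-(leap)-year shift. The conversion below is a hedged guess, not an official API:

import datetime
from matplotlib.dates import date2num

dt = datetime.datetime(2008, 11, 16, 12, 3, 20)
mpl_serial = date2num(dt)

# Assumed relationship between the two systems: MATLAB's serial for
# the same instant is 366 days larger, so feeding an mpl serial to
# MATLAB's datestr() lands one (leap) year early, as observed above.
matlab_serial = mpl_serial + 366
print mpl_serial, matlab_serial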
On Sat, Nov 15, 2008 at 6:50 AM, Jozef Vesely <ve...@gj...> wrote: > legends outside axes display only line without markers. > Somebody forgot to unset clipping. It appears Eric already fixed this in svn: r6127 | efiring | 2008-09-27 19:44:08 -0500 (27 Sep 2008) | 4 lines Enhancement to Axes.spy, and bugfixes: figlegend was not plotting markers; plot could not handle empty input arrays. So please give svn a test if you have a chance. Thanks for the patch. JDH
Hello, legends outside axes display only line without markers. Somebody forgot to unset clipping. Bye Jozef Vesely --- matplotlib-0.98.3.orig/lib/matplotlib/legend.py 2008-08-03 20:15:03.000000000 +0200 +++ matplotlib-0.98.3/lib/matplotlib/legend.py 2008-11-15 13:41:09.000000000 +0100 @@ -270,6 +270,8 @@ legline_marker.update_from(handle) legline_marker.set_linestyle('None') self._set_artist_props(legline_marker) + legline_marker.set_clip_box(None) + legline_marker.set_clip_path(None) # we don't want to add this to the return list because # the texts and handles are assumed to be in one-to-one # correpondence.
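For anyone on a released version, a hedged user-level workaround (not the committed fix) is to switch clipping off on the legend's proxy lines after the legend is created; depending on the matplotlib version, the separate marker proxies may need the same treatment:

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([0, 1], [0, 1], 'o-', label='data')
leg = ax.legend(loc=(1.02, 0.5))   # legend placed outside the axes
for line in leg.get_lines():       # proxy Line2D artists in the legend
    line.set_clip_on(False)        # stop clipping at the axes boundary
plt.show()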
On Nov 14, 2008, at 12:36 PM, Paul Kienzle wrote: >> >> Any reason not to implement this simply as an additional kwarg to >> Wedge, rather than a new class -- since Ring with a r2 == 0 is >> equivalent to Wedge anyway? Just thinking of having less code to >> maintain...but maybe that's more confusing for end users. > > Okay, I will replace Wedge on svn. Done. The keyword I used is *width* for the width of the ring. Using width=None draws the whole wedge. - Paul
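For reference, a short sketch of the API as just described, taking *width* as the radial width of the annulus measured inward from the outer radius (this reading is an assumption; width=None keeps the old solid-wedge behavior):

import matplotlib.pyplot as plt
from matplotlib.patches import Wedge

fig = plt.figure()
ax = fig.add_subplot(111, aspect='equal')
ring = Wedge((0.5, 0.5), 0.4, 30, 300, width=0.15)   # partial ring
wedge = Wedge((0.5, 0.5), 0.15, 30, 300)             # width=None: solid wedge
ax.add_patch(ring)
ax.add_patch(wedge)
ax.set_xlim(0, 1)   # set limits explicitly; add_patch may not grow them
ax.set_ylim(0, 1)
plt.show()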
Paul Kienzle wrote: > > On Nov 14, 2008, at 9:59 AM, Michael Droettboom wrote: > >> Paul Kienzle wrote: >>> Hi, >>> >>> We found we needed to draw a partial ring, but didn't see one in >>> patches.py. >>> >>> Attached is a generalization of Wedge to accept an inner and an >>> outer radius. >>> >>> Should I add this to patches? >> Looks like a useful feature to me. Could be used for doing these >> sort of "hierarchical pie chart" graphs: >> >> http://linux.byexamples.com/wp-content/uploads/2007/04/baobab.png > > That's pretty, though I have no idea what it is supposed to represent. It's the size of directories in a directory structure. Each ring is a different depth of nesting. Useful wherever you have hierarchical percentages... 50% of 40% of 30%... Of course, it suffers from the same usability problems as pie charts. I'm not seriously suggesting we pursue this... Your Ring class just reminded me of it. Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
On Nov 14, 2008, at 9:59 AM, Michael Droettboom wrote: > Paul Kienzle wrote: >> Hi, >> >> We found we needed to draw a partial ring, but didn't see one in >> patches.py. >> >> Attached is a generalization of Wedge to accept an inner and an >> outer radius. >> >> Should I add this to patches? > Looks like a useful feature to me. Could be used for doing these > sort of "hierarchical pie chart" graphs: > > http://linux.byexamples.com/wp-content/uploads/2007/04/baobab.png That's pretty, though I have no idea what it is supposed to represent. > > Any reason not to implement this simply as an additional kwarg to > Wedge, rather than a new class -- since Ring with a r2 == 0 is > equivalent to Wedge anyway? Just thinking of having less code to > maintain...but maybe that's more confusing for end users. Okay, I will replace Wedge on svn. - Paul
Paul Kienzle wrote: > Hi, > > We found we needed to draw a partial ring, but didn't see one in > patches.py. > > Attached is a generalization of Wedge to accept an inner and an outer > radius. > > Should I add this to patches? Looks like a useful feature to me. Could be used for doing these sort of "hierarchical pie chart" graphs: http://linux.byexamples.com/wp-content/uploads/2007/04/baobab.png Any reason not to implement this simply as an additional kwarg to Wedge, rather than a new class -- since Ring with a r2 == 0 is equivalent to Wedge anyway? Just thinking of having less code to maintain...but maybe that's more confusing for end users. > > Note that rather than saving the unit ring and constructing a transform as > in Wedge: > > def get_patch_transform(self): > x = self.convert_xunits(self.center[0]) > y = self.convert_yunits(self.center[1]) > rx = self.convert_xunits(self.r2) > ry = self.convert_yunits(self.r2) > self._patch_transform = transforms.Affine2D() \ > .scale(rx, ry).translate(x, y) > return self._patch_transform > > I just transform the coordinates directly: > > v *= r2 > v += numpy.array(center) > self._path = Path(v,c) > self._patch_transform = transforms.IdentityTransform() > > Any reason to prefer one over the other? No strong reason. The reason I did it that way was so that if the patch were scaled or translated in an animation, the transformation of the wedge vertices would happen on-the-fly within backend_agg. It's not based on any benchmarking, but probably more on old habits from game programming (where it's common practice to keep shapes and transforms separated and let the hardware apply them). I doubt it matters here. Mike > > - Paul > > ------------------------------------------------------------------------ > > One problem is that I had to explicitly set the axes limits because add_patch > wasn't updating the limits to include the bounds of the new patch. > ------------------------------------------------------------------------ -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
Eric Firing wrote: > Jozef Vesely wrote: > >> Hello matplotlib developers, >> >> I have implemented "svg.image_noscale" feature for ps and pdf backends. I think >> that resampling/scaling should be avoided, when vector format can scale image >> itself. >> Unfortunately, the quality of interpolation is often subpar compared to what matplotlib (via Agg) provides. Worse, the quality will be different when using Acrobat Reader vs. xpdf, for instance. I don't think zooming in on individual pixels of data in Acroread is something we really are trying to support anyway -- for that you should use an interactive matplotlib GUI. The purpose of pdf, imho, is really for printing. In that case, you're likely to get better results and smaller file sizes by knowing the maximum resolution of your output device and letting matplotlib resample it -- and resample it with a method that is appropriate for the data, not the one in the printer or Acrobat that is (probably) optimized for photographs of the real world or whatever the driver is currently set to. > It seems to me best if there is an option to scale or not; depending on > the situation, one might want to generate a file with images downscaled. > Right. All the above notwithstanding, I don't have a problem with this being a user option, I just can't imagine using it myself. > >> One advantage is that original image can be recovered from final file. Moreover >> as it is vector format it should be dpi independent and always provide maximum >> possible quality - that's original data. >> The original image can theoretically be recovered from the final file. But not the original data, which may be floating point etc. If you anticipate users of your plot to need the original data, just distribute the original data alongside the plot. >> As for svg backend I have commented some transformation changes which >> I don't understand and which result in misaligned image and axes. >> Without it the misalignment is still there but smaller. >> Thanks for that. I'm not sure why that code is there. I see it looks much better without it. >> I have also removed MixedModeRenderer from svg as it conflicts with "svg.image_noscale" >> and does not seem to be used. >> I think it would be better to turn off mixed mode rendering only when svg.image_noscale is True. >> > > I think having the option of using the MixedModeRenderer is important in > the long run for the vector backends; without it, one can end up with > completely unwieldy and potentially unrenderable files. I'm not sure > what its status is at present; I think Mike got it working to a > considerable extent, but didn't quite finish, and therefore left it > temporarily disabled. > It's fully functional in all the backends where it makes sense. The part that is unfinished is the user interface -- how to turn the functionality on and off. We couldn't find both a general and easy way to do it. But it would be nice to have another go at it. Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
> You may want to have a look at the mplsizer MPL toolkit I > wrote a long time ago and have failed to properly advertise > or maintain. Thanks; I'll take a look at it.
Sandro Tosi wrote: > Hello guys! > A Debian Developer just reported a bug[1] on debian matplotlib during > his preparation to introduce GCC 4.4: matplotlib will fail to build > with GCC 4.4 due to a missing include. > > Attached is a patch to fix this problem, forged from an updated trunk; > hope you can include it. > > [1] http://bugs.debian.org/505618 Hi Sandro, fixed in r6402. mm > Cheers,
Jozef Vesely wrote: > Hello matplotlib developers, > > I have implemented "svg.image_noscale" feature for ps and pdf backends. I think > that resampling/scaling should be avoided, when vector format can scale image > itself. It seems to me best if there is an option to scale or not; depending on the situation, one might want to generate a file with images downscaled. > > One advantage is that original image can be recovered from final file. Moreover > as it is vector format it should be dpi independent and always provide maximum > possible quality - that's original data. > > As for svg backend I have commented some transformation changes which > I don't understand and which result in misaligned image and axes. > Without it the misalignment is still there but smaller. > I have also removed MixedModeRenderer from svg as it conflicts with "svg.image_noscale" > and does not seem to be used. > I think having the option of using the MixedModeRenderer is important in the long run for the vector backends; without it, one can end up with completely unwieldy and potentially unrenderable files. I'm not sure what its status is at present; I think Mike got it working to a considerable extent, but didn't quite finish, and therefore left it temporarily disabled. Eric
Hello guys! A Debian Developer just reported a bug[1] on debian matplotlib during his preparation to introduce GCC 4.4: matplotlib will fail to build with GCC 4.4 due to a missing include. Attached is a patch to fix this problem, forged from an updated trunk; hope you can include it. [1] http://bugs.debian.org/505618 Cheers, -- Sandro Tosi (aka morph, Morpheus, matrixhasu) My website: http://matrixhasu.altervista.org/ Me at Debian: http://wiki.debian.org/SandroTosi
Hi, We found we needed to draw a partial ring, but didn't see one in patches.py. Attached is a generalization of Wedge to accept an inner and an outer radius. Should I add this to patches? Note that rather than saving the unit ring and constructing a transform as in Wedge: def get_patch_transform(self): x = self.convert_xunits(self.center[0]) y = self.convert_yunits(self.center[1]) rx = self.convert_xunits(self.r2) ry = self.convert_yunits(self.r2) self._patch_transform = transforms.Affine2D() \ .scale(rx, ry).translate(x, y) return self._patch_transform I just transform the coordinates directly: v *= r2 v += numpy.array(center) self._path = Path(v,c) self._patch_transform = transforms.IdentityTransform() Any reason to prefer one over the other? - Paul
Hi Stan, You may want to have a look at the mplsizer MPL toolkit I wrote a long time ago and have failed to properly advertise or maintain. But, it does much of what you propose by emulating wx sizers for matplotlib. Anyhow, this is available by svn checkout from https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/trunk/toolkits/mplsizer I'd be happy if you wanted to take over the project and push it forward, but I also understand if you have other implementation ideas. (The wx API is not to everyone's taste, for one thing.) -Andrew Stan West wrote: > Hello, all. I'd like to add to matplotlib facilities for (a) > conveniently specifying the relative sizes of subplots, and (b) creating > subplots that span cells of the subplot grid. For example, to obtain a > column of three subplots with the last one 50% taller than the other > two, the user would provide [1, 1, 1.5] as a row size parameter, and > matplotlib would calculate the appropriate axes positions. An example of > spanning using the current machinery is the mri_with_eeg > <http://matplotlib.sourceforge.net/examples/pylab_examples/mri_with_eeg.html> > example. Are these features of interest? > > The calculation code was easy to write, but I'd like input on how it can > best be implemented and accessed by users. One issue is where the > relative sizes should be stored. Presently, while figures hold > information about the margins around and between their subplots, as far > as I can tell they are unaware of any grid structure among their > subplots (although they do track as keys the arguments to add_subplot). > Grid dimensions are instead stored with each axes along with the > position of the axes within the grid. (The fact that the grid dimensions > need not agree among all of the axes in a figure allows for layouts like > mri_with_eeg. However, pushing that too far can yield unattractive > results: The axes specified by "subplot(2, 2, 1); subplot(2, 4, 5); > subplot(2, 4, 6)" are misaligned unless wspace = 0, a consequence of how > the wspace parameter is interpreted. A new spanning mechanism should > yield aligned axes.) > > A second issue is that the numbers of rows and columns are > overdetermined, being specified explicitly by the arguments to subplot > and implicitly by the lengths of the lists containing the custom sizes. > > Considering those issues, I thought of a few possible approaches for > specifying custom row and column sizes: > > A. Store the row and column sizes in the instance of > figure.SubplotParams held in the figure's subplotpars attribute. User > code would look something like the following, using an example kwarg > row_sizes: > > fig = figure(subplotpars=mpl.figure.SubplotParams(row_sizes=[1, 1, > 1.5])) > fig.add_subplot(3, 1, 1) # etc. > fig.subplots_adjust(row_sizes=[1, 1, 3]) # resize the subplots > > There would be corresponding pyplot functionality. This approach allows > the user to manage the sizes at the figure level (which I consider > convenient), but only one set of sizes is accommodated. One way to > handle the overspecified grid dimensions is a conflict-resolution > scheme. For example, in axes.SubplotBase.update_params, if the number of > rows (originating in the subplot call) matches the length of the > figure's row specifier, then honor the custom row sizes, and likewise > for columns. Otherwise, either (a) raise an error, or (b) fall back to > using equal rows (or columns) but send a message to verbose.report at > some debug level. 
If no custom sizes are specified, then everything > operates as now and no errors nor verbose.report messages will be > generated. Because the numbers of rows and columns are stored with the > axes, each axes would handle the conflict resolution independently of > the other axes. Another scheme (which I like better) would introduce > alternative subplot syntax such as add_subplot(row, col), where row and > col are numbered Python-style from 0. That form would cause the subplot > to inherit the numbers of rows and columns and the custom sizes in > fig.subplotpars, whereas the current forms would use only equally-sized > subplots. The new form also relieves the user of constructing the > Matlab-style single subplot number. > > B. Instead of the above, associate the size information with the axes. > User code might resemble > > fig = figure() > fig.add_subplot(2, 2, 1, col_sizes=[3, 1]) # +----+ ++ > fig.add_subplot(2, 2, 2, col_sizes=[3, 1]) # +----+ ++ > fig.add_subplot(2, 2, 3, col_sizes=[1, 3]) # ++ +----+ > fig.add_subplot(2, 2, 4, col_sizes=[1, 3]) # ++ +----+ > > This continues the current ability to create different grids (different > numbers of rows and columns, only now with different row and column > sizes) within the same figure, but they're managed at the axes level. > Resizing in this approach would need to be performed with each affected > axes, or a figure-level method could walk the child axes (possibly > checking for matching numbers of rows and columns). This approach still > overspecifies the numbers of rows and columns, and conflicts would need > to be resolved somehow. With the above syntax, errors could be raised by > subplot if its arguments disagree. Or, subplot could truncate or recycle > the list of sizes to match the grid dimensions. > > C. Create a new layout mechanism parallel to or containing the subplot > mechanism. I can imagine such a mechanism handling nested grids or > creating colorbar axes that adjust with their parent axes within a > complex layout. Such a mechanism would be beyond what I can contribute > now working alone, but I mention it essentially to ask whether anyone > sees a need to build a new system instead of augmenting the current one. > > While approach A appeals to me intuitively (in that it handles the sizes > at the figure level), it differs from the current structure of > matplotlib in which each axes independently maintains the dimensions of > its subplot grid and its position within it. I have some concern that > going in the direction of figure-level storage would involve structural > tension unless a new system is implemented (approach C). So, unless > there is a call for something dramatically different, I'm leaning toward > approach B with some figure-level methods to facilitate convenient > changes across multiple child axes. What do you folks think? > > As for the approach for creating spanning subplots, I like extending the > syntax of subplot along the lines of: > > subplot(num_rows, num_cols, row, col) > # non-spanning; row and col are numbered Python-style from 0 > subplot(num_rows, num_cols, (row_start, row_stop), (col_start, > col_stop)) > # spans row_start through row_stop - 1 (Python-style) > # and likewise for columns > > How does that look? > > This email ran long, but I didn't want to propose significant changes to > the matplotlib interface without first sharing some of the issues I've > encountered and getting feedback about how to proceed. Thanks. 
Hello, all. I'd like to add to matplotlib facilities for (a) conveniently specifying the relative sizes of subplots, and (b) creating subplots that span cells of the subplot grid. For example, to obtain a column of three subplots with the last one 50% taller than the other two, the user would provide [1, 1, 1.5] as a row size parameter, and matplotlib would calculate the appropriate axes positions. An example of spanning using the current machinery is the mri_with_eeg <http://matplotlib.sourceforge.net/examples/pylab_examples/mri_with_eeg.html> example. Are these features of interest? The calculation code was easy to write, but I'd like input on how it can best be implemented and accessed by users. One issue is where the relative sizes should be stored. Presently, while figures hold information about the margins around and between their subplots, as far as I can tell they are unaware of any grid structure among their subplots (although they do track as keys the arguments to add_subplot). Grid dimensions are instead stored with each axes along with the position of the axes within the grid. (The fact that the grid dimensions need not agree among all of the axes in a figure allows for layouts like mri_with_eeg. However, pushing that too far can yield unattractive results: The axes specified by "subplot(2, 2, 1); subplot(2, 4, 5); subplot(2, 4, 6)" are misaligned unless wspace = 0, a consequence of how the wspace parameter is interpreted. A new spanning mechanism should yield aligned axes.) A second issue is that the numbers of rows and columns are overdetermined, being specified explicitly by the arguments to subplot and implicitly by the lengths of the lists containing the custom sizes. Considering those issues, I thought of a few possible approaches for specifying custom row and column sizes: A. Store the row and column sizes in the instance of figure.SubplotParams held in the figure's subplotpars attribute. User code would look something like the following, using an example kwarg row_sizes: fig = figure(subplotpars=mpl.figure.SubplotParams(row_sizes=[1, 1, 1.5])) fig.add_subplot(3, 1, 1) # etc. fig.subplots_adjust(row_sizes=[1, 1, 3]) # resize the subplots There would be corresponding pyplot functionality. This approach allows the user to manage the sizes at the figure level (which I consider convenient), but only one set of sizes is accommodated. One way to handle the overspecified grid dimensions is a conflict-resolution scheme. For example, in axes.SubplotBase.update_params, if the number of rows (originating in the subplot call) matches the length of the figure's row specifier, then honor the custom row sizes, and likewise for columns. Otherwise, either (a) raise an error, or (b) fall back to using equal rows (or columns) but send a message to verbose.report at some debug level. If no custom sizes are specified, then everything operates as now and no errors nor verbose.report messages will be generated. Because the numbers of rows and columns are stored with the axes, each axes would handle the conflict resolution independently of the other axes. Another scheme (which I like better) would introduce alternative subplot syntax such as add_subplot(row, col), where row and col are numbered Python-style from 0. That form would cause the subplot to inherit the numbers of rows and columns and the custom sizes in fig.subplotpars, whereas the current forms would use only equally-sized subplots. The new form also relieves the user of constructing the Matlab-style single subplot number. B. 
Instead of the above, associate the size information with the axes. User code might resemble fig = figure() fig.add_subplot(2, 2, 1, col_sizes=[3, 1]) # +----+ ++ fig.add_subplot(2, 2, 2, col_sizes=[3, 1]) # +----+ ++ fig.add_subplot(2, 2, 3, col_sizes=[1, 3]) # ++ +----+ fig.add_subplot(2, 2, 4, col_sizes=[1, 3]) # ++ +----+ This continues the current ability to create different grids (different numbers of rows and columns, only now with different row and column sizes) within the same figure, but they're managed at the axes level. Resizing in this approach would need to be performed with each affected axes, or a figure-level method could walk the child axes (possibly checking for matching numbers of rows and columns). This approach still overspecifies the numbers of rows and columns, and conflicts would need to be resolved somehow. With the above syntax, errors could be raised by subplot if its arguments disagree. Or, subplot could truncate or recycle the list of sizes to match the grid dimensions. C. Create a new layout mechanism parallel to or containing the subplot mechanism. I can imagine such a mechanism handling nested grids or creating colorbar axes that adjust with their parent axes within a complex layout. Such a mechanism would be beyond what I can contribute now working alone, but I mention it essentially to ask whether anyone sees a need to build a new system instead of augmenting the current one. While approach A appeals to me intuitively (in that it handles the sizes at the figure level), it differs from the current structure of matplotlib in which each axes independently maintains the dimensions of its subplot grid and its position within it. I have some concern that going in the direction of figure-level storage would involve structural tension unless a new system is implemented (approach C). So, unless there is a call for something dramatically different, I'm leaning toward approach B with some figure-level methods to facilitate convenient changes across multiple child axes. What do you folks think? As for the approach for creating spanning subplots, I like extending the syntax of subplot along the lines of: subplot(num_rows, num_cols, row, col) # non-spanning; row and col are numbered Python-style from 0 subplot(num_rows, num_cols, (row_start, row_stop), (col_start, col_stop)) # spans row_start through row_stop - 1 (Python-style) # and likewise for columns How does that look? This email ran long, but I didn't want to propose significant changes to the matplotlib interface without first sharing some of the issues I've encountered and getting feedback about how to proceed. Thanks.
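To make the size arithmetic in the proposal concrete, here is a minimal sketch of turning relative row sizes into figure-fraction positions. The function name and the flat margin model (bottom, top, and a fixed inter-row gap, all in figure fractions) are hypothetical simplifications, not the actual SubplotParams machinery, whose hspace is relative to the average axes height:

def row_bounds(row_sizes, bottom=0.1, top=0.9, gap=0.05):
    """Return (y0, height) figure fractions for each row, top to bottom."""
    n = len(row_sizes)
    unit = (top - bottom - gap * (n - 1)) / float(sum(row_sizes))
    bounds = []
    y = top
    for size in row_sizes:
        h = size * unit
        y -= h
        bounds.append((y, h))
        y -= gap
    return bounds

# A column of three subplots with the last one 50% taller:
for y0, h in row_bounds([1, 1, 1.5]):
    print y0, h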
>> if (x > 0.0 && x < *xm) *xm = x; >> if (y > 0.0 && y < *ym) *ym = y; >> } >> } >> >> >> In the last 2 lines, why are xm and ym being clipped at 0, when x0 and >> y0 are not? > xm and ym are the minimum positive values, used for log scales. > Definitely worth a comment. I will add one. >> >> 2) It looks like update_path_extents throws away orientation by always >> returning x0 and y0 as the minima. Bbox.update_from_path is therefore >> doing the same. This doesn't hurt in present usage, since orientation >> is not needed for dataLim, but it seems a bit surprising, and worth a >> comment at least. Am I again missing something obvious? >> > I think I'm missing something. (Choices in my code are always obvious > to *me*, and any mistakes are invisible... ;) Are you suggesting that > if a line has x decreasing then x0 > x1? What if it boomerangs? I > think it's simplest to keep data limits in a consistent order. View > limits are another issue, of course. I think I will add a comment. The only "problem" at present is that update_from_path, taken as a perfectly general Bbox method, is doing a bit more than is obvious from its name or even from the code, and this places at least a theoretical restriction on its potential use. For example, if someone looked at the method and thought he or she could use it to directly update a viewLim, the result would not be as expected. Eric > > Mike >
Eric Firing wrote: > Michael Droettboom wrote: >> >> Eric Firing wrote: >>> Mike, >>> >>> A bug was recently pointed out: axhline, axvline, axhspan, axvspan >>> mess up the ax.dataLim. I committed a quick fix for axhline and >>> axvline, but I don't think that what I did is a good solution, so >>> before doing anything for axhspan and axvspan I want to arrive at a >>> better strategy. >>> >>> What is needed is a clean way to specify that only the x or the y >>> part of ax.dataLim be updated when a line or patch (or potentially >>> anything else) is added. This is specifically for the case, as in >>> *line, where one axis is in data coordinates and the other is in >>> normalized coordinates--we don't want the latter to have any effect >>> on the dataLim. >>> >>> This could be done in python in any of a variety of ways, but I >>> suspect that to be most consistent with the way the transforms code >>> is now written, relying on update_path_extents from _path.cpp, it >>> might make sense to append two boolean arguments to that cpp >>> function, "update_x" and "update_y", and use kwargs in >>> Bbox.update_from_path and siblings to set these, with default values >>> of True. >> It seems we could do this without touching C at all. Just change >> update_from_path so it only updates certain coordinates in the >> bounding box based on the kwargs you propose. Sure, the C side will >> be keeping track of y bounds even when it doesn't have to, but I >> doubt that matters much compared to checking a flag in the inner >> loop. It will compute the bezier curves for both x and y anyway >> (without digging into Agg). It's hard to really estimate the >> performance impact, so I'm not necessarily pushing for either option, but >> it may save having to update the C. > > Mike, > > I was somehow thinking that update_path_extents was changing things in > place--completely wrong. So yes, it is trivial to make the change at > the python level, and that is definitely the place to do it. I will > try to take care of it this evening. > > In poking around, however, I came up with a couple of questions. > Neither is a blocker for what I need to do, but each might deserve a > comment in the code, if nothing else. > > 1) in _path.cpp: > > void get_path_extents(PathIterator& path, const agg::trans_affine& trans, > double* x0, double* y0, double* x1, double* y1, > double* xm, double* ym) > { > typedef agg::conv_transform<PathIterator> transformed_path_t; > typedef agg::conv_curve<transformed_path_t> curve_t; > double x, y; > unsigned code; > > transformed_path_t tpath(path, trans); > curve_t curved_path(tpath); > > curved_path.rewind(0); > > while ((code = curved_path.vertex(&x, &y)) != agg::path_cmd_stop) > { > if ((code & agg::path_cmd_end_poly) == agg::path_cmd_end_poly) > continue; > /* if (MPL_notisfinite64(x) || MPL_notisfinite64(y)) > continue; > We should not need the above, because the path iterator > should already be filtering out invalid values. > */ > if (x < *x0) *x0 = x; > if (y < *y0) *y0 = y; > if (x > *x1) *x1 = x; > if (y > *y1) *y1 = y; > if (x > 0.0 && x < *xm) *xm = x; > if (y > 0.0 && y < *ym) *ym = y; > } > } > > > In the last 2 lines, why are xm and ym being clipped at 0, when x0 and > y0 are not? xm and ym are the minimum positive values, used for log scales. Definitely worth a comment. > > 2) It looks like update_path_extents throws away orientation by always > returning x0 and y0 as the minima. Bbox.update_from_path is therefore > doing the same. 
This doesn't hurt in present usage, since orientation > is not needed for dataLim, but it seems a bit surprising, and worth a > comment at least. Am I again missing something obvious? > I think I'm missing something. (Choices in my code are always obvious to *me*, and any mistakes are invisible... ;) Are you suggesting that if a line has x decreasing then x0 > x1? What if it boomerangs? I think it's simplest to keep data limits in a consistent order. View limits are another issue, of course. Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
Michael Droettboom wrote: > > Eric Firing wrote: >> Mike, >> >> A bug was recently pointed out: axhline, axvline, axhspan, axvspan >> mess up the ax.dataLim. I committed a quick fix for axhline and >> axvline, but I don't think that what I did is a good solution, so >> before doing anything for axhspan and axvspan I want to arrive at a >> better strategy. >> >> What is needed is a clean way to specify that only the x or the y part >> of ax.dataLim be updated when a line or patch (or potentially anything >> else) is added. This is specifically for the case, as in *line, where >> one axis is in data coordinates and the other is in normalized >> coordinates--we don't want the latter to have any effect on the dataLim. >> >> This could be done in python in any of a variety of ways, but I >> suspect that to be most consistent with the way the transforms code is >> now written, relying on update_path_extents from _path.cpp, it might >> make sense to append two boolean arguments to that cpp function, >> "update_x" and "update_y", and use kwargs in Bbox.update_from_path and >> siblings to set these, with default values of True. > It seems we could do this without touching C at all. Just change > update_from_path so it only updates certain coordinates in the bounding > box based on the kwargs you propose. Sure, the C side will be keeping > track of y bounds even when it doesn't have to, but I doubt that matters > much compared to checking a flag in the inner loop. It will compute the > bezier curves for both x and y anyway (without digging into Agg). It's > hard to really estimate the performance impact, so I'm not necessarily > pushing for either option, but it may save having to update the C. Mike, I was somehow thinking that update_path_extents was changing things in place--completely wrong. So yes, it is trivial to make the change at the python level, and that is definitely the place to do it. I will try to take care of it this evening. In poking around, however, I came up with a couple of questions. Neither is a blocker for what I need to do, but each might deserve a comment in the code, if nothing else. 1) in _path.cpp: void get_path_extents(PathIterator& path, const agg::trans_affine& trans, double* x0, double* y0, double* x1, double* y1, double* xm, double* ym) { typedef agg::conv_transform<PathIterator> transformed_path_t; typedef agg::conv_curve<transformed_path_t> curve_t; double x, y; unsigned code; transformed_path_t tpath(path, trans); curve_t curved_path(tpath); curved_path.rewind(0); while ((code = curved_path.vertex(&x, &y)) != agg::path_cmd_stop) { if ((code & agg::path_cmd_end_poly) == agg::path_cmd_end_poly) continue; /* if (MPL_notisfinite64(x) || MPL_notisfinite64(y)) continue; We should not need the above, because the path iterator should already be filtering out invalid values. */ if (x < *x0) *x0 = x; if (y < *y0) *y0 = y; if (x > *x1) *x1 = x; if (y > *y1) *y1 = y; if (x > 0.0 && x < *xm) *xm = x; if (y > 0.0 && y < *ym) *ym = y; } } In the last 2 lines, why are xm and ym being clipped at 0, when x0 and y0 are not? 2) It looks like update_path_extents throws away orientation by always returning x0 and y0 as the minima. Bbox.update_from_path is therefore doing the same. This doesn't hurt in present usage, since orientation is not needed for dataLim, but it seems a bit surprising, and worth a comment at least. Am I again missing something obvious? Eric
Eric Firing wrote: > Mike, > > A bug was recently pointed out: axhline, axvline, axhspan, axvspan > mess up the ax.dataLim. I committed a quick fix for axhline and > axvline, but I don't think that what I did is a good solution, so > before doing anything for axhspan and axvspan I want to arrive at a > better strategy. > > What is needed is a clean way to specify that only the x or the y part > of ax.dataLim be updated when a line or patch (or potentially anything > else) is added. This is specifically for the case, as in *line, where > one axis is in data coordinates and the other is in normalized > coordinates--we don't want the latter to have any effect on the dataLim. > > This could be done in python in any of a variety of ways, but I > suspect that to be most consistent with the way the transforms code is > now written, relying on update_path_extents from _path.cpp, it might > make sense to append two boolean arguments to that cpp function, > "update_x" and "update_y", and use kwargs in Bbox.update_from_path and > siblings to set these, with default values of True. It seems we could do this without touching C at all. Just change update_from_path so it only updates certain coordinates in the bounding box based on the kwargs you propose. Sure, the C side will be keeping track of y bounds even when it doesn't have to, but I doubt that matters much compared to checking a flag in the inner loop. It will compute the bezier curves for both x and y anyway (without digging into Agg). It's hard to really estimate the performance impact, so I'm not necessarily pushing for either option, but it may save having to update the C. > What do you think? If you agree to the _path.cpp change strategy, do > you prefer to do that yourself, or would you rather that I try it first? I probably won't have a chance to look at this today, so go ahead if you like. I'll shoot you a note later in the week if I have time... Cheers, Mike > > Any suggestions and/or contributions are welcome. > > Thanks. > > Eric -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
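For concreteness, a sketch of the Python-level behavior being agreed on here, written with the keyword names that were floated (treat the exact spelling as provisional). Only the requested coordinate of the Bbox grows:

import numpy as np
from matplotlib.path import Path
from matplotlib.transforms import Bbox

bbox = Bbox.unit()                              # [[0, 0], [1, 1]]
path = Path(np.array([[2.0, 5.0], [3.0, 7.0]]))

# Grow only the x limits, as an axvline-like artist would want;
# its y data is in axes coordinates and must not touch dataLim.
bbox.update_from_path(path, ignore=False, updatex=True, updatey=False)
print bbox   # x grows to [0, 3]; y stays [0, 1]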
Hello matplotlib developers, I have implemented "svg.image_noscale" feature for ps and pdf backends. I think that resampling/scaling should be avoided, when vector format can scale image itself. One advantage is that original image can be recovered from final file. Moreover as it is vector format it should be dpi independent and always provide maximum possible quality - that's original data. As for svg backend I have commented some transformation changes which I don't understand and which result in misaligned image and axes. Without it the misalignment is still there but smaller. I have also removed MixedModeRenderer from svg as it conflicts with "svg.image_noscale" and does not seem to be used. As last change, I restored commented self.figure.canvas.draw() in FigureCanvasBase.print_figure as I got complaints about "I/O operation on closed file" when I tried to "draw_artist" in my animation using blit technique. (It used a cached renderer that no longer existed; self.figure.canvas.draw() updates this cache.) Furthermore, please consider set_animated(False) on all artists before saving hardcopy and restore animated state after (animated artists should be saved, although they can not animate in hardcopy). Thanks for reviewing my suggestions Jozef Vesely ve...@gj... --- /usr/lib/python2.5/site-packages/matplotlib/backends/backend_svg.py 2008-08-03 20:15:03.000000000 +0200 +++ backend_svg.py 2008-11-12 10:08:54.000000000 +0100 @@ -254,9 +254,9 @@ transstr = '' if rcParams['svg.image_noscale']: trans = list(im.get_matrix()) - if im.get_interpolation() != 0: - trans[4] += trans[0] - trans[5] += trans[3] + #if im.get_interpolation() != 0: + # trans[4] += trans[0] + # trans[5] += trans[3] trans[5] = -trans[5] transstr = 'transform="matrix(%s %s %s %s %s %s)" '%tuple(trans) assert trans[1] == 0 @@ -579,9 +579,9 @@ self.figure.set_dpi(72.0) width, height = self.figure.get_size_inches() w, h = width*72, height*72 - - renderer = MixedModeRenderer( - width, height, 72.0, RendererSVG(w, h, svgwriter, filename)) + + + renderer = RendererSVG(w, h, svgwriter, filename) self.figure.draw(renderer) renderer.finalize() if fh_to_close is not None: --- backend_ps.py 2008-08-03 20:15:03.000000000 +0200 +++ backend_ps.py.new 2008-11-12 10:04:33.000000000 +0100 @@ -400,7 +400,13 @@ bbox is a matplotlib.transforms.BBox instance for clipping, or None """ - + if rcParams['svg.image_noscale']: + trans = list(im.get_matrix()) + numrows,numcols = im.get_size() + im.reset_matrix() + im.set_interpolation(0) + im.resize(numcols, numrows) + im.flipud_out() if im.is_grayscale: @@ -413,8 +419,12 @@ xscale, yscale = ( w/self.image_magnification, h/self.image_magnification) - - figh = self.height*72 + + if rcParams['svg.image_noscale']: + xscale *= trans[0] + yscale *= trans[3] + + #figh = self.height*72 #print 'values', origin, flipud, figh, h, y clip = [] --- backend_pdf.py 2008-08-03 20:15:03.000000000 +0200 +++ backend_pdf.py.new 2008-11-12 10:06:30.000000000 +0100 @@ -1226,9 +1226,17 @@ gc.set_clip_rectangle(bbox) self.check_gc(gc) + if rcParams['svg.image_noscale']: + t = list(im.get_matrix()) + numrows,numcols = im.get_size() + im.reset_matrix() + im.set_interpolation(0) + im.resize(numcols, numrows) + h, w = im.get_size_out() imob = self.file.imageObject(im) self.file.output(Op.gsave, w, 0, 0, h, x, y, Op.concat_matrix, + t[0],0,0,t[3],0,0, Op.concat_matrix, imob, Op.use_xobject, Op.grestore) def draw_path(self, gc, path, transform, rgbFace=None): --- /usr/lib/python2.5/site-packages/matplotlib/backend_bases.py 2008-08-03 20:15:03.000000000 +0200 +++ backend_bases.py 2008-11-12 10:22:38.000000000 +0100 @@ -1313,7 +1313,7 @@ self.figure.set_facecolor(origfacecolor) self.figure.set_edgecolor(origedgecolor) self.figure.set_canvas(self) - #self.figure.canvas.draw() ## seems superfluous + self.figure.canvas.draw() ## not superfluous return result def get_default_filetype(self):
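As a usage note, the rcParam the patch builds on is set like any other; under the patch, a hedged sketch of enabling unresampled images across the vector backends is simply:

import matplotlib
matplotlib.rcParams['svg.image_noscale'] = True   # embed images without resampling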
Patch against today's svn is attached. Sorry for the long delay... Right now, "scatterpoints" is just set to 3 in Legend.__init__, but presumably that should be an rcParam... On Fri, Oct 31, 2008 at 2:42 AM, Jae-Joon Lee <lee...@gm...> wrote: > Sorry Erik. > Can you make a new patch against the current SVN? > Some of the patch was applied (but without scatterpoints option) in the SVN. > Thanks, > > -JJ > > > > > On Thu, Oct 30, 2008 at 1:58 PM, Erik Tollerud <eri...@gm...> wrote: >> No more thoughts on this? Or was some version of the patch committed? >> >> On Mon, Oct 20, 2008 at 12:16 PM, Erik Tollerud <eri...@gm...> wrote: >>> Actually, looking more closely, there is one thing that's still >>> bothering me: as it is now, it's impossible to have, say, 2 points >>> for plotted values, and 3 points for scatter plots on the same legend >>> (you have to give a numpoints=# command that's shared by everything in >>> the legend, if I'm understanding it). It'd be nice to have a >>> property, say, "scatterpoints" (and presumably then an associated >>> rcParam "legend.scatterpoints" ) that sets the number of points to use >>> for scatter plots. That way, I can make plots just like in the >>> original form, but it can also be the same number for both if so >>> desired. I've attached a patch based on the last one that does this, >>> although it probably needs to be changed to allow for an rcParam >>> 'legend.scatterpoints' (I don't really know the procedure for adding a >>> new rcParam). >>> >>> On Mon, Oct 20, 2008 at 3:22 AM, Erik Tollerud <eri...@gm...> wrote: >>>> The current patch looks good to me... it satisfies all the use cases I >>>> had in mind, and I can't think of much else that would be wanted. >>>> Thanks! >>>> >>>> I also very much like the idea of the "sizebar," although that's >>>> probably a substantially larger job to implement. I may look into it >>>> though, time permitting... >>>> >>>> On Sat, Oct 18, 2008 at 7:04 PM, Jae-Joon Lee <lee...@gm...> wrote: >>>>>> To help clarify the original purpose of "update_from": I wrote this >>>>>> method when writing the original legend implementation so the legend >>>>>> proxy objects could easily copy their style attributes from the >>>>>> underlying objects they were a proxy for (so not every property is >>>>>> copied, eg the xdata for line objects is not copied). So the >>>>>> operating question should be: what properties do I need to copy to >>>>>> make the legend representation of the object. While you are in >>>>>> there, perhaps you could clarify this in the docstrings of the >>>>>> update_from method. >>>>> >>>>> Thanks for clarifying this, John. >>>>> >>>>> Manuel, >>>>> The patch looks good to me. We may submit the patch (I hope Erik is >>>>> okay with the current patch) and it would be great if you handle the >>>>> submission. >>>>> >>>>> -JJ >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Fri, Oct 17, 2008 at 9:45 PM, Manuel Metz <mm...@as...> wrote: >>>>>> Jae-Joon Lee wrote: >>>>>>> Thanks Manuel. >>>>>>> >>>>>>> Yes, we need rotation value and etc, but my point is, do we need to >>>>>>> update it within the update_from() method? Although my preference is >>>>>>> not to do it, it may not matter much as far as we state what this >>>>>>> method does clearly in the doc. >>>>>> >>>>>> Okay, it's probably better to create the object correctly (numsides ...) >>>>>> instead of copying the properties (see also JDHs mail !)
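For illustration, the intended usage once the kwarg (and an eventual rcParam) is in place might look like the following, with numpoints still governing line entries while scatterpoints governs scatter entries independently:

import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, 2, 3], [1, 2, 3], 'r-', label='line')
ax.scatter([1, 2, 3], [3, 2, 1], label='scatter')
ax.legend(numpoints=2, scatterpoints=3)   # 2 marker points for lines, 3 for scatter
plt.show()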
>>>>>> >>>>>>> And, in your patch, I don't think updating the numsides value has any >>>>>>> effect as it does not recreate the paths. >>>>>>> >>>>>>> I'm attaching the revised patch. In this patch, update_from() only >>>>>>> update gc-related properties. And numsides, size, and rotations are >>>>>>> given during the object creation time. >>>>>> >>>>>> Yes, this looks better. But creating handle_sizes is a little bit too >>>>>> much effort. This is done internally. It will do passing a sizes list, >>>>>> that may or may not be shorter/longer than numpoints (see revised patch). >>>>>> >>>>>> I also changed the way the yoffsets are updated in _update_positions(). >>>>>> >>>>>> One additional thing I have in mind (for a later time) is a "sizesbar" >>>>>> similar to a colorbar where you can read off values corresponding to >>>>>> marker sizes... >>>>>> >>>>>> Cheers, >>>>>> Manuel >>>>>> >>>>>>> Erik, >>>>>>> I see your points. My main concern is that the yoffsets makes the >>>>>>> results a bit funny when numpoints is 2. The attached patch has a >>>>>>> varying sizes of [0.5*(max+min), max, min]. The yoffsets are only >>>>>>> introduced when numpints > 2 and you can also provide it as an >>>>>>> optional argument. >>>>>>> >>>>>>> Regards, >>>>>>> >>>>>>> -JJ >>>>>>> >>>>>>> >>>>>>> On Thu, Oct 16, 2008 at 8:43 PM, Manuel Metz <mm...@as...> wrote: >>>>>>>> Manuel Metz wrote: >>>>>>>>> Jae-Joon Lee wrote: >>>>>>>>>> Hi Manuel, >>>>>>>>>> >>>>>>>>>> I think it is a good to introduce the update_from method in Collections. >>>>>>>>>> But, I'm not sure if it is a good idea to also update sizes, paths and >>>>>>>>>> rotation (in RegularPolyCoolection). My impression is that update_from >>>>>>>>>> method is to update gc related attributes. For comparison, >>>>>>>>>> Patch.update_from() does not update the path. >>>>>>>>> That's exactly the point why I wasn't fully happy with the patch. The >>>>>>>>> path is generated by the _path_generator, so instead of copying the path >>>>>>>>> it seems to be better to create an instance of the corresponding class >>>>>>>>> (e.g. the StarPolygonCollection class, as suggested before). >>>>>>>>> >>>>>>>>> One should update the rotation attribute (!!); it's only one number. A >>>>>>>>> '+' marker, for example, has rotation = 0, whereas a 'x' marker has >>>>>>>>> rotation=pi/4. That's the only difference between those two ! >>>>>>>>> >>>>>>>>>> Also, is it okay to update properties without checking its length?. It >>>>>>>>>> does not seem to cause any problems though. >>>>>>>>> It's in principal not a problem to copy the sizes attribute without >>>>>>>>> checking the length. If it's shorter the the number of items the sizes >>>>>>>>> are repeated; if it's longer it gets truncated. >>>>>>>>> >>>>>>>>> mm >>>>>>>>> >>>>>>>>>> I guess It would better to use xdata_markers than xdata in the >>>>>>>>>> get_handle() method. The difference is when numpoints==1. Using xdata >>>>>>>>>> gives two marker points. >>>>>>>>>> >>>>>>>>>> I was actually about to to commit my patch. I'll try to account your >>>>>>>>>> changes and post my version of patch later today. >>>>>>>>>> >>>>>>>>>> Regards, >>>>>>>>>> >>>>>>>>>> -JJ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Oct 15, 2008 at 4:07 PM, Manuel Metz <mm...@as...> wrote: >>>>>>>>>>> hmm >>>>>>>>>>> >>>>>>>>>>> -------- Original Message -------- >>>>>>>>>>> Jae-Joon Lee wrote: >>>>>>>>>>>>> - the parameter numpoints should be used (it's ignored right now) >>>>>>>>>>>>> >>>>>>>>>>>> Thanks Manuel. 
I guess we can simply reuse xdata_marker for this purpose. >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>>> - Some private variables are accessed and a new RegularPolycollection is >>>>>>>>>>>>> created (does this work eg. with a StarPolygonCollection? I haven't >>>>>>>>>>>>> checked, but I don't think so !). Instead of creating a new >>>>>>>>>>>>> RegularPolyCollection it might be more useful to make a copy of the >>>>>>>>>>>>> existing object... I was thinking about a update_from() method for the >>>>>>>>>>>>> Collection class(es) similar to update_from() for lines. >>>>>>>>>>>>> >>>>>>>>>>>> By changing "RegularPolyCoolection" to "type(handles)", it works for >>>>>>>>>>>> StarPolygonCollection. >>>>>>>>>>>> >>>>>>>>>>>> In Erik's current implementation, the markers in the legend have >>>>>>>>>>>> varying colors, sizes, and y offsets. >>>>>>>>>>>> The color variation seems fine. But do we need to vary the sizes and >>>>>>>>>>>> y-offsets? My inclination is to use a fixed size (median?) and a fixed >>>>>>>>>>>> y offset. How does Erik and others think? >>>>>>>>>>>> >>>>>>>>>>>> Regards, >>>>>>>>>>>> >>>>>>>>>>>> -JJ >>>>>>>>>>> Attached is my current version of the patch. I've moved all of the >>>>>>>>>>> properties-copying stuff to collections, which makes the changes >>>>>>>>>>> legend.py more clearer (but I'm not fully happy with the patch and >>>>>>>>>>> haven't commit anything yet) >>>>>>>>>>> >>>>>>>>>>> mm >>>>>>>>>>> >>>>>>>> Hi Jae-Joon, >>>>>>>> so here is my revised version of the patch. What do you think ? >>>>>>>> >>>>>>>> Manuel -- Erik Tollerud Graduate Student Center For Cosmology Department of Physics and Astronomy 2142 Frederick Reines Hall University of California, Irvine Office Phone: (949)824-2587 Cell: (651)307-9409 eto...@uc...
Mike, A bug was recently pointed out: axhline, axvline, axhspan, axvspan mess up the ax.dataLim. I committed a quick fix for axhline and axvline, but I don't think that what I did is a good solution, so before doing anything for axhspan and axvspan I want to arrive at a better strategy. What is needed is a clean way to specify that only the x or the y part of ax.dataLim be updated when a line or patch (or potentially anything else) is added. This is specifically for the case, as in *line, where one axis is in data coordinates and the other is in normalized coordinates--we don't want the latter to have any effect on the dataLim. This could be done in python in any of a variety of ways, but I suspect that to be most consistent with the way the transforms code is now written, relying on update_path_extents from _path.cpp, it might make sense to append two boolean arguments to that cpp function, "update_x" and "update_y", and use kwargs in Bbox.update_from_path and siblings to set these, with default values of True. What do you think? If you agree to the _path.cpp change strategy, do you prefer to do that yourself, or would you rather that I try it first? Any suggestions and/or contributions are welcome. Thanks. Eric
Ryan May wrote: > John Hunter wrote: > >> On Tue, Nov 11, 2008 at 1:38 PM, Ryan May <rm...@gm...> wrote: >> >> >>> 1) Have psd(x) call csd(x,x) >>> 2) Have csd() check if y is x, and if so, avoid doing the extra work. >>> >>> Would this be an acceptable solution to reduce code duplication? >>> >> Sure, that should work fine. >> > > Ok, I noticed that specgram() duplicated much of the same code, so I > factored it all out and made a _spectral_helper() function, which pretty > much implements a cross-spectrogram. csd() and specgram() use this, and > then psd still calls csd(). Now all of the spectral analysis stuff is > using the same computational code base. > > >>> On a separate note, once I get done with these tweaks, are there any >>> objections to submitting something based on this to scipy? >>> >> No objections here -- if it were put into numpy though, we could >> depend on it and avoid the duplication. I would campaign for numpy >> first, eg np.fft.psd, etc. >> > > I agree it'd be better for us if it went to numpy, but I've gotten the > sense that they're not really receptive to adding things like this now. > I'll give it a try, but I sense that scipy.signal would end up being a > more likely home. That wouldn't help us with duplication, but would > help the community at large. It's always bugged me that I can't just > grab a psd() function from my general computing packages. (In my > opinion, anything in mlab that doesn't involve plotting should really > exist in a more general package.) There's an interesting case to be made here for modules shared between packages at the version-control-system and bug tracking level (e.g. in a DVCS type system) but installed in separate namespaces and shipped independently when it was time for source and binary distributions of the package to be made. There'd be duplication at the install and distribution level, but at least not at the source level. I'd guess the linux packagers would also find a way to reduce duplication at those other levels, too, for their systems. It seems to me that this would reduce a lot of the developer angst about having multiple sources for the same things, while still making things easy on the users. However, I don't know any VCS that would facilitate such a thing... -Andrew
John Hunter wrote: > On Tue, Nov 11, 2008 at 1:38 PM, Ryan May <rm...@gm...> wrote: > >> 1) Have psd(x) call csd(x,x) >> 2) Have csd() check if y is x, and if so, avoid doing the extra work. >> >> Would this be an acceptable solution to reduce code duplication? > > Sure, that should work fine. Ok, I noticed that specgram() duplicated much of the same code, so I factored it all out and made a _spectral_helper() function, which pretty much implements a cross-spectrogram. csd() and specgram() use this, and then psd still calls csd(). Now all of the spectral analysis stuff is using the same computational code base. >> On a separate note, once I get done with these tweaks, are there any >> objections to submitting something based on this to scipy? > > No objections here -- if it were put into numpy though, we could > depend on it and avoid the duplication. I would campaign for numpy > first, eg np.fft.psd, etc. I agree it'd be better for us if it went to numpy, but I've gotten the sense that they're not really receptive to adding things like this now. I'll give it a try, but I sense that scipy.signal would end up being a more likely home. That wouldn't help us with duplication, but would help the community at large. It's always bugged me that I can't just grab a psd() function from my general computing packages. (In my opinion, anything in mlab that doesn't involve plotting should really exist in a more general package.) Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma
On Tue, Nov 11, 2008 at 1:38 PM, Ryan May <rm...@gm...> wrote: > 1) Have psd(x) call csd(x,x) > 2) Have csd() check if y is x, and if so, avoid doing the extra work. > > Would this be an acceptable solution to reduce code duplication? Sure, that should work fine. > On a separate note, once I get done with these tweaks, are there any > objections to submitting something based on this to scipy? No objections here -- if it were put into numpy though, we could depend on it and avoid the duplication. I would campaign for numpy first, eg np.fft.psd, etc. JDH
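To make the proposed de-duplication concrete, here is a stripped-down sketch of the pattern: psd as the y == x special case of csd, with an object-identity check to skip the second FFT. The real mlab code adds windowing, detrending, overlapping segment averaging, and scaling, none of which is shown:

import numpy as np

def csd(x, y, NFFT=256, Fs=2):
    fx = np.fft.rfft(np.asarray(x), n=NFFT)
    fy = fx if y is x else np.fft.rfft(np.asarray(y), n=NFFT)  # 'y is x': reuse fx
    Pxy = np.conjugate(fx) * fy / NFFT
    freqs = np.fft.rfftfreq(NFFT, d=1.0 / Fs)
    return Pxy, freqs

def psd(x, NFFT=256, Fs=2):
    Pxx, freqs = csd(x, x, NFFT=NFFT, Fs=Fs)
    return Pxx.real, freqs   # an auto-spectrum is real up to roundoff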