John Hunter wrote:
> It is useful to store the final pixel buffer (eg in a PNG) as RGBA
> because some people like to have some parts of their figure
> transparent to composite the figure with other images.

Fair enough, and that's probably a really cool feature when you need it!

Ken wrote:
> I was talking about the image-from-a-buffer business not helping us
> with WX 2.4/2.6 due to the RGBA to RGB conversion.

But it looks like RendererAgg has a tostring_rgb() method, so we should be able to change:

    image.SetData(agg.tostring_rgb())

to:

    image.SetDataBuffer(agg.tostring_rgb())

if we make sure to keep the string around. I haven't looked at your C++ code, but does it do something faster than RendererAgg.tostring_rgb()?

Another thing that would be nice (for all Agg back-ends, I imagine) is if we could replace this:

    # agg => rgb -> image => bitmap => clipped bitmap => image
    return wx.ImageFromBitmap(_clipped_image_as_bitmap(image, bbox))

with a RendererAgg._clipped_tostring_rgb(bbox), so that we don't copy a bunch of RGB data we don't need.

Even if we don't do that, I think _clipped_image_as_bitmap() could use wx.Image.GetSubImage() rather than creating a bitmap of the whole thing and blitting. Untested code:

    def _clipped_image_as_bitmap(image, bbox):
        """
        Convert the region of a wx.Image described by bbox to a wx.Bitmap.
        """
        l, b, width, height = bbox.get_bounds()
        return wx.BitmapFromImage(image.GetSubImage(wx.Rect(l, b, width, height)))

> RendererAgg appears to already have a buffer_rgba() method.

So we're all set for wxPython 2.7 -- very nice! I hope it doesn't make a copy.

Is there a numpy_array_rgba method? That could be nice, and would work as a buffer, too. Maybe when we are ready to dump Numeric and numarray.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception

Chr...@no...
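A minimal sketch of the no-extra-copy path Chris is proposing, assuming the classic-wxPython wx.EmptyImage/SetDataBuffer API and the RendererAgg.tostring_rgb() method named above; the helper name and the keep-alive attribute are illustrative, not code from the wx backends, and whether SetDataBuffer accepts a read-only string buffer is exactly the kind of detail that would need testing:

    import wx

    def _agg_to_wx_image(agg, width, height):
        """Sketch: wrap Agg's 24-bit RGB string in a wx.Image without an extra copy."""
        rgb = agg.tostring_rgb()           # RGB pixels as a Python string
        image = wx.EmptyImage(width, height)
        image.SetDataBuffer(rgb)           # image references the buffer in place
        image._rgb_keepalive = rgb         # keep the string alive, as Chris notes
        return image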
>>>>> "Ken" == Ken McIvor <mc...@ii...> writes:

Ken> On 08/31/06 13:43, Christopher Barker wrote:
>> Ken McIvor wrote:
>>
>> a wxBitmap is the same format as the native rendering
>> system. While most systems use 24b RGB (or 32b RGBA), people
>> can still run displays at 16bpp or whatever, so it's still
>> needed.

Ken> I can understand why it's still necessary, although it's nice
Ken> to sometimes pretend that everyone's running 24-bit color
Ken> displays.

I hope I didn't sound too judgemental!

>>> I don't think we're going to be able to get performance
>>> similar to that of the accelerator using straight Python code

>> But whether it's Python or C++, you still need to do the
>> Image->Bitmap conversion -- so if we can get rid of the data
>> copying from Agg buffer to wxImage in Python, we don't need
>> C++.

Ken> I think we got some wires crossed at some point in the
Ken> conversation, although it could be that I'm wearing the
Ken> Stupid Hat today. I was talking about the
Ken> image-from-a-buffer business not helping us with WX 2.4/2.6
Ken> due to the RGBA to RGB conversion.

>> And it has. For wxPython 2.7 (and now in CVS) there are methods
>> for dumping 32 bit RGBA data directly into a wxBitmap with no
>> copying, if the data source is a Python Buffer object. I think
>> I posted a note about this here yesterday.

Ken> Yes, you did mention it. I agree completely with this
Ken> analysis of the situation. When I replied I wasn't thinking
Ken> in terms of wxPython 2.7.

>> To really get it to work, the 24bit RGB Agg buffer needs to be
>> a Python Buffer object -- is it now? I'm sorry I don't have the
>> time to mess with this now -- maybe some day.

Ken> I guess Guido lets John borrow his time machine, because
Ken> RendererAgg appears to already have a buffer_rgba() method.

Guido has been very generous with us :-)

>> You can alpha composite into a non-alpha background. You just
>> lose the alpha there, so that the background couldn't be
>> alpha-composited onto anything else -- but does it ever need to
>> be?

Ken> I thought that the buffer's accumulated alpha played a role
Ken> in compositing new pixels onto it, but I apparently
Ken> misunderstood.

It does: here is agg's rgba pixel blending routine

    static AGG_INLINE void blend_pix(value_type* p,
                                     unsigned cr, unsigned cg, unsigned cb,
                                     unsigned alpha,
                                     unsigned cover=0)
    {
        calc_type r = p[Order::R];
        calc_type g = p[Order::G];
        calc_type b = p[Order::B];
        calc_type a = p[Order::A];
        p[Order::R] = (value_type)(((cr - r) * alpha + (r << base_shift)) >> base_shift);
        p[Order::G] = (value_type)(((cg - g) * alpha + (g << base_shift)) >> base_shift);
        p[Order::B] = (value_type)(((cb - b) * alpha + (b << base_shift)) >> base_shift);
        p[Order::A] = (value_type)((alpha + a) - ((alpha * a + base_mask) >> base_shift));
    }

Ken> Images. Anyway, if the buffer's alpha channel isn't used,
Ken> then the whole situation does seem a bit odd. Could the
Ken> information be retained for PNGs or something?

It is useful to store the final pixel buffer (eg in a PNG) as RGBA because some people like to have some parts of their figure transparent to composite the figure with other images.

JDH
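A restatement of that blend in Python, with everything normalized to floats in [0, 1]: the colour channels are linearly interpolated toward the source by alpha, and the destination alpha accumulates as the usual "over" coverage. This is illustration only, not code from Agg or matplotlib:

    import numpy as np

    def blend_pix(dst, src_rgb, alpha):
        """Composite one source colour onto an RGBA destination pixel.

        dst is [r, g, b, a], src_rgb is [cr, cg, cb], alpha in [0, 1];
        mirrors the integer arithmetic of Agg's blend_pix above.
        """
        dst = np.asarray(dst, dtype=float).copy()
        src = np.asarray(src_rgb, dtype=float)
        dst[:3] += (src - dst[:3]) * alpha          # lerp colour toward the source
        dst[3] = alpha + dst[3] - alpha * dst[3]    # accumulated coverage
        return dst

    # e.g. blending 60%-opaque red onto an opaque blue pixel:
    # blend_pix([0.0, 0.0, 1.0, 1.0], [1.0, 0.0, 0.0], 0.6) -> [0.6, 0.0, 0.4, 1.0]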
On 08/31/06 13:43, Christopher Barker wrote:
> Ken McIvor wrote:
>
> a wxBitmap is the same format as the native rendering system. While most
> systems use 24b RGB (or 32b RGBA), people can still run displays at
> 16bpp or whatever, so it's still needed.

I can understand why it's still necessary, although it's nice to sometimes pretend that everyone's running 24-bit color displays. I hope I didn't sound too judgemental!

>> I don't think we're going to be able to get performance similar to that
>> of the accelerator using straight Python code
>
> But whether it's Python or C++, you still need to do the Image->Bitmap
> conversion -- so if we can get rid of the data copying from Agg buffer
> to wxImage in Python, we don't need C++.

I think we got some wires crossed at some point in the conversation, although it could be that I'm wearing the Stupid Hat today. I was talking about the image-from-a-buffer business not helping us with WX 2.4/2.6 due to the RGBA to RGB conversion.

> And it has. For wxPython 2.7 (and now in CVS) there are methods for
> dumping 32 bit RGBA data directly into a wxBitmap with no copying, if
> the data source is a Python Buffer object. I think I posted a note about
> this here yesterday.

Yes, you did mention it. I agree completely with this analysis of the situation. When I replied I wasn't thinking in terms of wxPython 2.7.

> To really get it to work, the 24bit RGB Agg buffer needs to be a Python
> Buffer object -- is it now? I'm sorry I don't have the time to mess with
> this now -- maybe some day.

I guess Guido lets John borrow his time machine, because RendererAgg appears to already have a buffer_rgba() method.

> You can alpha composite into a non-alpha background. You just lose the
> alpha there, so that the background couldn't be alpha-composited onto
> anything else -- but does it ever need to be?

I thought that the buffer's accumulated alpha played a role in compositing new pixels onto it, but I apparently misunderstood. It must be time to read "Compositing Digital Images". Anyway, if the buffer's alpha channel isn't used, then the whole situation does seem a bit odd. Could the information be retained for PNGs or something?

> However, there is something to be said for just using alpha everywhere,
> and as we'll soon be able to dump RGBA data straight into a wx.Bitmap,
> this should work great.

Yes, it will be a great improvement over the current situation.

Ken
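The RGBA-to-RGB conversion Ken refers to amounts to dropping every fourth byte before handing the pixels to a wx.Image. A minimal sketch of that step, using numpy for brevity even though the backend of the day used Numeric/numarray; the function name is illustrative, not from backend_wxagg.py:

    import numpy as np
    import wx

    def rgba_buffer_to_wx_image(buf, width, height):
        """Strip the alpha channel from an RGBA buffer and build a wx.Image.

        This is the copy that the wx 2.4/2.6 code path cannot avoid; buf is
        assumed to hold width*height*4 bytes of RGBA pixels.
        """
        rgba = np.frombuffer(buf, dtype=np.uint8).reshape(height, width, 4)
        rgb = rgba[:, :, :3].copy()     # the slice is a view; copy() makes it contiguous
        return wx.ImageFromData(width, height, rgb.tostring())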
Ken McIvor wrote:
> I think they added preliminary support for alpha channels in 2.5, in the
> form of wx.Image.HasAlpha().

Right, but it uses the old 24 bit RGB buffer and a separate Alpha buffer; it's kind of tacked on, rather than native.

> My beef is that you have to convert
> everything to a wx.Bitmap before you can do anything useful with it.

And that DCs don't support alpha, even though the underlying device often does. This has been discussed a lot, but no one has done much about it yet -- wxTNG will have something better, but who knows how far out that is?

> As near as I can tell, the primary slowdown at this point is the way
> wxWidgets distinguishes RGB image data (wx.Image) from displayed image
> data (wx.Bitmap). Right now you cannot draw a wx.Image without first
> converting it into a wx.Bitmap, nor can you use a MemoryDC to blit or
> otherwise munge a wx.Image directly. My impression is that this made
> sense when wxWindows was getting started (Win16 and Motif), but is more
> of an artificial distinction at this point.

A wxBitmap is the same format as the native rendering system. While most systems use 24b RGB (or 32b RGBA), people can still run displays at 16bpp or whatever, so it's still needed. Also, I wouldn't be surprised if some less common systems use ARGB or something else weird.

> I don't think we're going to be able to get performance similar to that
> of the accelerator using straight Python code

But whether it's Python or C++, you still need to do the Image->Bitmap conversion -- so if we can get rid of the data copying from Agg buffer to wxImage in Python, we don't need C++.

> unless something changes
> in the wxWidgets' Image/Bitmap/MemoryDC department.

And it has. For wxPython 2.7 (and now in CVS) there are methods for dumping 32 bit RGBA data directly into a wxBitmap with no copying, if the data source is a Python Buffer object. I think I posted a note about this here yesterday.

> I'd love to be proven wrong! If you're interested in the gory details,
> you should check out the pure-Python implementation of the image
> conversion functions, at the end of `backend_wxagg.py'.

I did, and I suggested some improvements a couple messages back. To really get it to work, the 24bit RGB Agg buffer needs to be a Python Buffer object -- is it now? I'm sorry I don't have the time to mess with this now -- maybe some day.

>> I do have one question -- does the agg back-end really need to use an
>> alpha channel for its buffer? Isn't it the whole image anyway? What
>> is it going to get blended with?
>
> I don't know enough about Agg to venture an educated guess. My
> un-educated guess is that there's an RGBA buffer to support alpha in the
> drawing operations... how can Agg alpha-composite new pixels into the
> buffer when you draw something, unless you know the alpha values of the
> existing pixels?

You can alpha composite into a non-alpha background. You just lose the alpha there, so that the background couldn't be alpha-composited onto anything else -- but does it ever need to be?

However, there is something to be said for just using alpha everywhere, and as we'll soon be able to dump RGBA data straight into a wx.Bitmap, this should work great.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception

Chr...@no...
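In code, the no-copy-in-Python path described here would look roughly like the following, assuming the wx.BitmapFromBufferRGBA factory that Robin Dunn describes later in this thread and RendererAgg's buffer_rgba() method (whose exact signature varied across matplotlib versions), so treat this strictly as a sketch rather than working backend code:

    import wx

    def agg_to_wx_bitmap(agg, width, height):
        """Hand Agg's 32-bit RGBA buffer straight to a wx.Bitmap (wxPython >= 2.7)."""
        return wx.BitmapFromBufferRGBA(width, height, agg.buffer_rgba(0, 0))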
On 08/28/06 12:19, Christopher Barker wrote:
>
> wx is really due for an update to support alpha properly. But I guess
> you'll always have problems with different data formats.

I think they added preliminary support for alpha channels in 2.5, in the form of wx.Image.HasAlpha(). My beef is that you have to convert everything to a wx.Bitmap before you can do anything useful with it.

> Anyway, this thread started because people were having binary
> compatibility issues. Even if this doesn't speed up the accelerator, it
> may be possible to get the same performance without using
> wx-version-specific compiled code -- i.e. pure python.

As near as I can tell, the primary slowdown at this point is the way wxWidgets distinguishes RGB image data (wx.Image) from displayed image data (wx.Bitmap). Right now you cannot draw a wx.Image without first converting it into a wx.Bitmap, nor can you use a MemoryDC to blit or otherwise munge a wx.Image directly. My impression is that this made sense when wxWindows was getting started (Win16 and Motif), but is more of an artificial distinction at this point.

I don't think we're going to be able to get performance similar to that of the accelerator using straight Python code unless something changes in the wxWidgets' Image/Bitmap/MemoryDC department. That being said, I'd love to be proven wrong! If you're interested in the gory details, you should check out the pure-Python implementation of the image conversion functions, at the end of `backend_wxagg.py'.

> I do have one question -- does the agg back-end really need to use an
> alpha channel for its buffer? Isn't it the whole image anyway? What is
> it going to get blended with?

I don't know enough about Agg to venture an educated guess. My un-educated guess is that there's an RGBA buffer to support alpha in the drawing operations... how can Agg alpha-composite new pixels into the buffer when you draw something, unless you know the alpha values of the existing pixels?

Ken
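The round trip Ken describes -- pixel data must become a wx.Image, then a wx.Bitmap, before a device context will draw it -- looks roughly like this in classic wxPython (sketch only; the helper name is illustrative):

    import wx

    def draw_rgb(dc, rgb_bytes, width, height, x=0, y=0):
        """RGB byte string -> wx.Image -> wx.Bitmap -> blit onto a DC."""
        image = wx.ImageFromData(width, height, rgb_bytes)  # copies the bytes
        bitmap = wx.BitmapFromImage(image)                  # converts to native pixel format
        dc.DrawBitmap(bitmap, x, y)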
I don't know how (or if) this can be improved, but I will gladly consider patches. On Thursday 31 August 2006 03:09, Michael Fitzgerald wrote: > Hi all, > > I have a question about the PS backend (building on the thread > "imshow with PS backend" from ~ a month ago). Evidently this backend > is fixed at 72 dpi. This isn't a problem with vector information. > However, it would seem that one would want to use a higher resolution > when plotting figures that use imshow() for raster data, since this > command has several choices for interpolation. As I understand, the > AxesImage is sampled at this low-resolution when being written to PS/ > EPS. Subsequent interpolation is done when printing, or viewing with > ghostview. For the (originally?) raster data, gv seems to use a > nearest-neighbor scheme, making the image blocky. It would be nice > to use matplotlib's interpolation instead. Is there a fundamental > reason this needs to be fixed at 72 dpi? As some publishers ask for > EPS files of e.g. 300 dpi, I would think it's theoretically possible > to export at different resolutions. My understanding is that the > _preview_ image in the file is supposed to be 72 dpi. > > One possible workaround is to scale up the size of the figure (in > inches), but then fonts, line thickness, marker sizes, etc. must also > be scaled, making it less-than-satisfactory. > > Thank you in advance for any enlightenment, and please forgive my > ignorance -- I must admit I don't know that much about PS, nor about > the specific scheme used in matplotlib for getting the image data > into the postscript file, so I may be critically mistaken in the > above assessment.
Hi all, I have a question about the PS backend (building on the thread "imshow with PS backend" from ~ a month ago). Evidently this backend is fixed at 72 dpi. This isn't a problem with vector information. However, it would seem that one would want to use a higher resolution when plotting figures that use imshow() for raster data, since this command has several choices for interpolation. As I understand, the AxesImage is sampled at this low-resolution when being written to PS/ EPS. Subsequent interpolation is done when printing, or viewing with ghostview. For the (originally?) raster data, gv seems to use a nearest-neighbor scheme, making the image blocky. It would be nice to use matplotlib's interpolation instead. Is there a fundamental reason this needs to be fixed at 72 dpi? As some publishers ask for EPS files of e.g. 300 dpi, I would think it's theoretically possible to export at different resolutions. My understanding is that the _preview_ image in the file is supposed to be 72 dpi. One possible workaround is to scale up the size of the figure (in inches), but then fonts, line thickness, marker sizes, etc. must also be scaled, making it less-than-satisfactory. Thank you in advance for any enlightenment, and please forgive my ignorance -- I must admit I don't know that much about PS, nor about the specific scheme used in matplotlib for getting the image data into the postscript file, so I may be critically mistaken in the above assessment. Best, Mike (P.S. please cc me, as I'm not subscribed)
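The scale-everything-up workaround Michael mentions can be scripted, though as he says it is less than satisfactory. A rough pylab sketch, assuming you want an effective 300 dpi out of the fixed 72 dpi PS output; the particular rc keys and scaling here are illustrative, not a recommended recipe:

    import pylab

    k = 300.0 / 72.0                              # target dpi / PS backend dpi
    pylab.rcParams['font.size'] *= k              # keep text the same apparent size
    pylab.rcParams['lines.linewidth'] *= k        # and likewise line widths
    fig = pylab.figure(figsize=(6 * k, 4 * k))    # a "6x4 inch" figure, scaled up
    pylab.imshow(pylab.rand(32, 32), interpolation='bilinear')
    pylab.savefig('demo.eps')                     # scale the EPS back down when placing it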
Rob: I am building Matplotlib on my Mac with Python 2.5 Release Candidate 1. We use this for our astronomical data-reduction software. I saw your post on matplotlib-devel and I think I can help a bit. Here are some patches that get it to compile:
On Tuesday 29 August 2006 15:49, Darren Dale wrote: > On Tuesday 29 August 2006 13:01, Will Lee wrote: > > I need to apply the attached patch in order to get the setup.py script to > > run. I'm using python 2.4.3 with matplotlib-0.87.4. If I do not apply > > this patch, I got the following. It seems like there's a change in tk's > > getvar implementation. The tk.getvar('tcl_library") returns an > > _tkinter.Tcl_Obj instead of a string. > > What OS? It returns a string on linux with python-2.4.3 and tk-8.4.13. I applied your patch, tested the build process on my own machine (still ok) and commited it to svn. Thanks for the report. Darren
>>>>> "Charlie" == Charlie Moad <cw...@gm...> writes:

Charlie> Sounds good. Two open windows issues that aren't
Charlie> showstoppers are: 1) Inclusion of msvcp71.dll? 2)
Charlie> Building against wxpython unicode or ansii? (until we
Charlie> move to pure python blitting)

Since it is a point release, I don't think we should change the way we build against wx, since that is liable to confuse and piss off folks who've just made the switch to unicode wx for 87.4. I don't have enough insight into the msvcp71.dll issues to comment.

JDH
On 8/29/06, John Hunter <jdh...@ac...> wrote: > >>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: > >> Travis, would you care to comment? > > Charlie> He made a comment on the numpy list. We can shoot for a > Charlie> mpl release by the end of the week. Are there any > Charlie> lingering issues? > > Apparently numpy 1.05b is due out over the weekend, so we can > coordinate with that release. As soon as Travis puts it up, Charlie > you can put out mpl 0.87.5 after a quick test of backend_driver which > is currently passing with svn numpy. Sounds good. Two open windows issues that aren't showstoppers are: 1) Inclusion of msvcp71.dll? 2) Building against wxpython unicode or ansii? (until we move to pure python blitting)
On Tuesday 29 August 2006 13:01, Will Lee wrote: > I need to apply the attached patch in order to get the setup.py script to > run. I'm using python 2.4.3 with matplotlib-0.87.4. If I do not apply > this patch, I got the following. It seems like there's a change in tk's > getvar implementation. The tk.getvar('tcl_library") returns an > _tkinter.Tcl_Obj instead of a string. What OS? It returns a string on linux with python-2.4.3 and tk-8.4.13.
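The failure Will describes is that tk.getvar() can hand back a _tkinter.Tcl_Obj rather than a plain string on some Tcl/Tk builds; a minimal sketch of the kind of guard such a patch would add (variable names illustrative, not lifted from setupext.py):

    import Tkinter

    tk = Tkinter.Tk()
    tcl_lib = tk.getvar('tcl_library')
    if not isinstance(tcl_lib, str):
        tcl_lib = str(tcl_lib)   # a Tcl_Obj's str() is its string value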
>>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: >> Travis, would you care to comment? Charlie> He made a comment on the numpy list. We can shoot for a Charlie> mpl release by the end of the week. Are there any Charlie> lingering issues? Apparently numpy 1.05b is due out over the weekend, so we can coordinate with that release. As soon as Travis puts it up, Charlie you can put out mpl 0.87.5 after a quick test of backend_driver which is currently passing with svn numpy. JDH
On Tuesday 29 August 2006 15:21, Charlie Moad wrote: > We can shoot for a mpl release by the end of the week. Are there any > lingering issues? There is this tk unicode nonsense in setupext.py. I was hoping to get these build warnings taken care of on Linux (see my recent post), but its not important.
On 8/29/06, Darren Dale <dd...@co...> wrote: > On Tuesday 29 August 2006 14:07, Charlie Moad wrote: > > On 8/29/06, Darren Dale <dd...@co...> wrote: > > > On Monday 14 August 2006 17:48, John Hunter wrote: > > > > >>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: > > > > > > > > Charlie> Numpy 1.0b2 was released last night and Travis hopes this > > > > Charlie> will remain binary compatible with numpy 1.0. Are there > > > > Charlie> any objections to a minor release bump? I could do this > > > > Charlie> an soon as tomorrow. > > > > > > > > Let's shoot for Tuesday evening, in advance of scipy. I'm going to > > > > make one more attempt before then to get the damned widget lock > > > > working right.... > > > > > > I hate to raise this issue again, but what is the status of the next > > > release? > > > > Seeing numpy b3 and then b4 has made me hesitate. I posted a snapshot > > to the user list a while back just in case anyone wanted it. Have > > these minor releases been breaking the c-api? > > I think b4 included the improved support for migrating from numarray. I'm not > sure it would have effected an mpl release. > > Travis, would you care to comment? He made a comment on the numpy list. We can shoot for a mpl release by the end of the week. Are there any lingering issues? - Charlie
On Tuesday 29 August 2006 14:07, Charlie Moad wrote: > On 8/29/06, Darren Dale <dd...@co...> wrote: > > On Monday 14 August 2006 17:48, John Hunter wrote: > > > >>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: > > > > > > Charlie> Numpy 1.0b2 was released last night and Travis hopes this > > > Charlie> will remain binary compatible with numpy 1.0. Are there > > > Charlie> any objections to a minor release bump? I could do this > > > Charlie> an soon as tomorrow. > > > > > > Let's shoot for Tuesday evening, in advance of scipy. I'm going to > > > make one more attempt before then to get the damned widget lock > > > working right.... > > > > I hate to raise this issue again, but what is the status of the next > > release? > > Seeing numpy b3 and then b4 has made me hesitate. I posted a snapshot > to the user list a while back just in case anyone wanted it. Have > these minor releases been breaking the c-api? I think b4 included the improved support for migrating from numarray. I'm not sure it would have effected an mpl release. Travis, would you care to comment?
On 8/29/06, Darren Dale <dd...@co...> wrote: > On Monday 14 August 2006 17:48, John Hunter wrote: > > >>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: > > > > Charlie> Numpy 1.0b2 was released last night and Travis hopes this > > Charlie> will remain binary compatible with numpy 1.0. Are there > > Charlie> any objections to a minor release bump? I could do this > > Charlie> an soon as tomorrow. > > > > Let's shoot for Tuesday evening, in advance of scipy. I'm going to > > make one more attempt before then to get the damned widget lock > > working right.... > > I hate to raise this issue again, but what is the status of the next release? Seeing numpy b3 and then b4 has made me hesitate. I posted a snapshot to the user list a while back just in case anyone wanted it. Have these minor releases been breaking the c-api? - Charlie
On Monday 14 August 2006 17:48, John Hunter wrote: > >>>>> "Charlie" == Charlie Moad <cw...@gm...> writes: > > Charlie> Numpy 1.0b2 was released last night and Travis hopes this > Charlie> will remain binary compatible with numpy 1.0. Are there > Charlie> any objections to a minor release bump? I could do this > Charlie> an soon as tomorrow. > > Let's shoot for Tuesday evening, in advance of scipy. I'm going to > make one more attempt before then to get the damned widget lock > working right.... I hate to raise this issue again, but what is the status of the next release?
On Sunday 27 August 2006 22:09, Eric Firing wrote:
> Darren Dale wrote:
> > A while back, I put some effort into rendering an offset ticklabel, which
> > allowed the user to do something like
> >
> > plot(linspace(100000100, 100000200, 100))
> >
> > and the plot would look like a plot from 0 to 100, with a "+100000100"
> > rendered in a new label near the far end of the axis. This doesn't work
> > quite as well as it used to, because the axes autoscaling is setting the
> > plot range to something like the average plus and minus 6%. I have tried
> > tracing the source of this change, but I can't find it. It might be
> > buried in the _transforms extension code, and I've never been able to
> > wrap my head around mpl's transforms.
> >
> > Does anyone know why autoscaling is defaulting to this +-6% range? Does
> > it have to be this way? I'm trying to improve the scalar formatter
> > (supporting engineering notation, cleaning up the code).
>
> Yes. It is not a +-6% range in general, rather it is an adjustment that
> is made if the range is very small. The relevant method in Locator is:
>
>     def nonsingular(self, vmin, vmax, expander=0.001, tiny=1e-6):
>         if vmax < vmin:
>             vmin, vmax = vmax, vmin
>         if vmax - vmin <= max(abs(vmin), abs(vmax)) * tiny:
>             if vmin==0.0:
>                 vmin -= 1
>                 vmax += 1
>             else:
>                 vmin -= expander*abs(vmin)
>                 vmax += expander*abs(vmax)
>         return vmin, vmax
>
> I know I did it this way for a reason, but I don't remember exactly what
> it was--whether it was because of problems with zooming when the zoom
> range gets too small (this was definitely a big problem), or because of
> problems with the rest of the locator code, or because it seemed to me
> to be roughly the desired behavior in most cases. Maybe it was all of
> the above. Certainly, something like this is needed--I think you will
> find that things go bad rapidly if vmin gets too close to vmax. I put
> in the "expander" and "tiny" kwargs in case of future need, but only
> expander is non-default (e.g., 0.05) in other parts of ticker.py, and
> neither kwarg is presently exposed to the user. That could be changed.

I don't understand. I spent a lot of time making the scalarformatter work with precisely this scenario (zooming in on extremely small ranges), and it was working very well. I don't know of any circumstance where there was a problem; maybe you could be more specific about the big problems you encountered.

Darren
Ken McIvor wrote:
> The problem I foresee is that the Agg renderer's RGBA data has to
> be converted to RGB before a wxImage can be created by convert_agg2image().

As if by magic, this from Robin Dunn today:

> You may want to take a look at my CVS commits for the last couple weeks. I've now got some raw bitmap access code in place. Both 2.6 and 2.7 will have wx.BitmapFromBuffer and wx.BitmapFromBufferRGBA factory functions which can copy from a buffer object directly into the bitmap's pixel buffer, and 2.7 will also have wx.NativePixelData and wx.AlphaPixelData which allow direct access to the pixel buffer from Python. (The latter needed a bug fix that I'm not sure (yet) can be backported to 2.6...) For example, I can now do this (in a PyShell):
>
> >>> import wx
> >>> import numarray
> >>> f = wx.Frame(None)
> >>> p = wx.Panel(f)
> >>> dc = wx.ClientDC(p)
> >>> f.Show()
> >>> dim=100
> >>> R=0; G=1; B=2; A=3
> >>> arr = numarray.array(shape=(dim, dim, 4), typecode='u1')
> >>> for row in xrange(dim):
> ...     for col in xrange(dim):
> ...         arr[row,col,R] = 0
> ...         arr[row,col,G] = 0
> ...         arr[row,col,B] = 255
> ...         arr[row,col,A] = int(col * 255.0 / dim)
> ...
> >>> bmp = wx.BitmapFromBufferRGBA(dim, dim, arr)
> >>> dc.DrawBitmap(bmp, 20, 20, True)

This is looking pretty promising.

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception

Chr...@no...
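The nested per-pixel loops in that PyShell demo are just for illustration; the same blue image with a left-to-right alpha gradient can be built with array operations (numpy used here purely for brevity -- the wx calls are the same ones Robin shows):

    import numpy as np

    dim = 100
    R, G, B, A = 0, 1, 2, 3
    arr = np.zeros((dim, dim, 4), dtype=np.uint8)
    arr[:, :, B] = 255                                             # solid blue
    arr[:, :, A] = (np.arange(dim) * 255.0 / dim).astype(np.uint8) # alpha ramps with column
    # then, inside a running wx app with a client DC available:
    # bmp = wx.BitmapFromBufferRGBA(dim, dim, arr)
    # dc.DrawBitmap(bmp, 20, 20, True)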
Darren Dale wrote:
> On Sunday 27 August 2006 22:09, Eric Firing wrote:
>> Darren Dale wrote:
>>> A while back, I put some effort into rendering an offset ticklabel, which
>>> allowed the user to do something like
>>>
>>> plot(linspace(100000100, 100000200, 100))
>>>
>>> and the plot would look like a plot from 0 to 100, with a "+100000100"
>>> rendered in a new label near the far end of the axis. This doesn't work
>>> quite as well as it used to, because the axes autoscaling is setting the
>>> plot range to something like the average plus and minus 6%. I have tried
>>> tracing the source of this change, but I can't find it. It might be
>>> buried in the _transforms extension code, and I've never been able to
>>> wrap my head around mpl's transforms.
>>>
>>> Does anyone know why autoscaling is defaulting to this +-6% range? Does
>>> it have to be this way? I'm trying to improve the scalar formatter
>>> (supporting engineering notation, cleaning up the code).
>>
>> Yes. It is not a +-6% range in general, rather it is an adjustment that
>> is made if the range is very small. The relevant method in Locator is:
>>
>>     def nonsingular(self, vmin, vmax, expander=0.001, tiny=1e-6):
>>         if vmax < vmin:
>>             vmin, vmax = vmax, vmin
>>         if vmax - vmin <= max(abs(vmin), abs(vmax)) * tiny:
>>             if vmin==0.0:
>>                 vmin -= 1
>>                 vmax += 1
>>             else:
>>                 vmin -= expander*abs(vmin)
>>                 vmax += expander*abs(vmax)
>>         return vmin, vmax
>>
>> I know I did it this way for a reason, but I don't remember exactly what
>> it was--whether it was because of problems with zooming when the zoom
>> range gets too small (this was definitely a big problem), or because of
>> problems with the rest of the locator code, or because it seemed to me
>> to be roughly the desired behavior in most cases. Maybe it was all of
>> the above. Certainly, something like this is needed--I think you will
>> find that things go bad rapidly if vmin gets too close to vmax. I put
>> in the "expander" and "tiny" kwargs in case of future need, but only
>> expander is non-default (e.g., 0.05) in other parts of ticker.py, and
>> neither kwarg is presently exposed to the user. That could be changed.
>
> I don't understand. I spent a lot of time making the scalarformatter work
> with precisely this scenario (zooming in on extremely small ranges), and it
> was working very well. I don't know of any circumstance where there was a
> problem; maybe you could be more specific about the big problems you
> encountered.

Darren,

I'm sorry, but I probably can't be much more specific. I don't remember the details of the whole lengthy process involved in getting MaxNLocator and aspect ratio handling working with pan and zoom, but the present version of nonsingular was part of it. It looks like the change you don't like was revision 2149 on March 16, when the "tiny" kwarg was added. Now, I think that the point of adding it was that checking for vmin == vmax turned out to be not good enough; given floating point math, having vmin too close to vmax could still cause trouble, maybe not in your formatter, but elsewhere. At one point "elsewhere" included the transforms module, but I am not sure whether the bug I fixed in revision 2149 involved an error from the transforms module.

For experimental purposes, you can get the old behavior by setting tiny=0.0.

Eric
Ken McIvor wrote:
> I'll put it on the list of things to look into.

Great. I'm glad someone is working on this.

> The problem I foresee is that the Agg renderer's RGBA data has to
> be converted to RGB before a wxImage can be created by convert_agg2image().

Darn. I figured as much. wx is really due for an update to support alpha properly. But I guess you'll always have problems with different data formats.

> I'm not sure this approach will help speed
> up the wxAgg accelerator, but

Anyway, this thread started because people were having binary compatibility issues. Even if this doesn't speed up the accelerator, it may be possible to get the same performance without using wx-version-specific compiled code -- i.e. pure python.

I do have one question -- does the agg back-end really need to use an alpha channel for its buffer? Isn't it the whole image anyway? What is it going to get blended with?

-Chris

--
Christopher Barker, Ph.D.
Oceanographer

NOAA/OR&R/HAZMAT           (206) 526-6959   voice
7600 Sand Point Way NE     (206) 526-6329   fax
Seattle, WA  98115         (206) 526-6317   main reception

Chr...@no...
On 8/27/06, Eric Firing <ef...@ha...> wrote: > Bill Baxter wrote: > > > > > I don't know anything about it what happened to the code, but I will > > say that +- 6% autoscaling is better than tight bounds for many kinds > > of plots. Like a scatter plot. It doesn't look good if some of your > > points are right on the axes, with their marker cut in half by the > > border. It's always bugged me with Matlab that there was no easy way > > to get slightly enlarged bounds on plots, so I'm glad to hear mpl has > > added something like that. I'm not sure it should be the default, or > > only option though. Some plots are better with tight bounds. > > Presently it kicks in only in the unusual case of a very small range, > but it has also occurred to me that it would be nice to be able to tell > the autoscaling to add a margin in any case. I just haven't gotten > around to doing it. +1 for that. I've just recently been fixing my limits by hand in this way precisely to avoid the half-cut markers problem that Bill describes. Cheers, f
Bill Baxter wrote: > > I don't know anything about it what happened to the code, but I will > say that +- 6% autoscaling is better than tight bounds for many kinds > of plots. Like a scatter plot. It doesn't look good if some of your > points are right on the axes, with their marker cut in half by the > border. It's always bugged me with Matlab that there was no easy way > to get slightly enlarged bounds on plots, so I'm glad to hear mpl has > added something like that. I'm not sure it should be the default, or > only option though. Some plots are better with tight bounds. Presently it kicks in only in the unusual case of a very small range, but it has also occurred to me that it would be nice to be able to tell the autoscaling to add a margin in any case. I just haven't gotten around to doing it. Eric
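The "add a margin in any case" idea Eric mentions can be approximated from user code today by padding the autoscaled limits after plotting. A small sketch of that, not existing matplotlib API:

    def pad_axis_limits(ax, frac=0.05):
        """Expand the current x and y limits of an Axes outward by frac of their span."""
        for get_lim, set_lim in ((ax.get_xlim, ax.set_xlim),
                                 (ax.get_ylim, ax.set_ylim)):
            lo, hi = get_lim()
            pad = (hi - lo) * frac
            set_lim((lo - pad, hi + pad))

    # e.g.:  from pylab import gca;  pad_axis_limits(gca(), 0.06)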
Darren Dale wrote:
> A while back, I put some effort into rendering an offset ticklabel, which
> allowed the user to do something like
>
> plot(linspace(100000100, 100000200, 100))
>
> and the plot would look like a plot from 0 to 100, with a "+100000100"
> rendered in a new label near the far end of the axis. This doesn't work quite
> as well as it used to, because the axes autoscaling is setting the plot range
> to something like the average plus and minus 6%. I have tried tracing the
> source of this change, but I can't find it. It might be buried in the
> _transforms extension code, and I've never been able to wrap my head around
> mpl's transforms.
>
> Does anyone know why autoscaling is defaulting to this +-6% range? Does it
> have to be this way? I'm trying to improve the scalar formatter (supporting
> engineering notation, cleaning up the code).

Yes. It is not a +-6% range in general, rather it is an adjustment that is made if the range is very small. The relevant method in Locator is:

    def nonsingular(self, vmin, vmax, expander=0.001, tiny=1e-6):
        if vmax < vmin:
            vmin, vmax = vmax, vmin
        if vmax - vmin <= max(abs(vmin), abs(vmax)) * tiny:
            if vmin==0.0:
                vmin -= 1
                vmax += 1
            else:
                vmin -= expander*abs(vmin)
                vmax += expander*abs(vmax)
        return vmin, vmax

I know I did it this way for a reason, but I don't remember exactly what it was--whether it was because of problems with zooming when the zoom range gets too small (this was definitely a big problem), or because of problems with the rest of the locator code, or because it seemed to me to be roughly the desired behavior in most cases. Maybe it was all of the above. Certainly, something like this is needed--I think you will find that things go bad rapidly if vmin gets too close to vmax. I put in the "expander" and "tiny" kwargs in case of future need, but only expander is non-default (e.g., 0.05) in other parts of ticker.py, and neither kwarg is presently exposed to the user. That could be changed.

Eric
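To see why Darren's example trips this guard: for plot(linspace(100000100, 100000200, 100)) the span is 100 while the magnitude is about 1e8, so the span is within tiny (1e-6) of the magnitude and the expansion fires. A standalone restatement of the method above, run on those numbers with the expander=0.05 used elsewhere in ticker.py (illustration only):

    def nonsingular(vmin, vmax, expander=0.001, tiny=1e-6):
        if vmax < vmin:
            vmin, vmax = vmax, vmin
        if vmax - vmin <= max(abs(vmin), abs(vmax)) * tiny:
            if vmin == 0.0:
                vmin -= 1
                vmax += 1
            else:
                vmin -= expander * abs(vmin)
                vmax += expander * abs(vmax)
        return vmin, vmax

    print nonsingular(100000100.0, 100000200.0, expander=0.05)
    # -> (95000095.0, 105000210.0): roughly the +-5-6% window Darren sees
    print nonsingular(100000100.0, 100000200.0, expander=0.05, tiny=0.0)
    # -> (100000100.0, 100000200.0): tiny=0.0 restores the old, tight behavior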