Hi,

I have a crash using the current matplotlib binary build for Windows on Python 2.6. If I have active figures open when exiting "ipython -pylab" using Ctrl-D, I get:

    Fatal Python error: PyEval_RestoreThread: NULL tstate

and a crash dialog (see attached bmp). This does not happen if I close the windows before exiting.

Using: Windows XP, Python 2.6, matplotlib-0.98.5.3 binary installer, numpy-1.3.0, scipy-0.7.1rc3

/Jörgen
Ray Speth wrote:
> I believe I have found a simple change that improves the rendering speed
> of quiver plots, which can be quite slow for large vector fields. Based
> on some profiling, the problem appears to stem from the use of numpy's
> MaskedArrays in PolyCollection.set_verts. If I add the following line to
> the top of the PolyCollection.set_verts function in collections.py:
>
>     verts = np.asarray(verts)
>
> I find that quiver plots are drawn about 3 times as quickly, going from
> 2.6 seconds for a 125x125 field to 0.65 seconds. [snip]

Ray,

I was not aware of this particular slowdown, but yes, masked arrays are frustratingly slow in many ways. There is no inherent reason they can't be nearly as fast as ndarrays for almost everything, but it will take quite a bit of work at the C level to get there. In the meantime, there are often other ways of getting around critical slowdowns.

Thanks for the quiver tip; I will look into it. Offhand, I suspect a modified version of your suggestion will be needed, but we should be able to get the speedup you found one way or another. I think that your suggested change would effectively disable all masked array support in PolyCollection, and I don't want to do that.

Eric
I believe I have found a simple change that improves the rendering speed of quiver plots, which can be quite slow for large vector fields. Based on some profiling, the problem appears to stem from the use of numpy's MaskedArrays in PolyCollection.set_verts. If I add the following line to the top of the PolyCollection.set_verts function in collections.py:

    verts = np.asarray(verts)

I find that quiver plots are drawn about 3 times as quickly, going from 2.6 seconds for a 125x125 field to 0.65 seconds. This does not seem to break the use of MaskedArrays as inputs, and masked regions are still hidden in the final plot. I do not know if this has any adverse effects in other classes that inherit from PolyCollection.

Using:
python 2.6.2 on Windows XP
numpy 1.3.0
matplotlib 0.98.5.3, Qt4Agg backend

I do not know why iterating over MaskedArrays is so slow, but perhaps this information can be used to speed up some other functions as well.

Ray Speth
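For readers who want to reproduce the effect Ray describes, here is a minimal timing sketch comparing iteration over a MaskedArray with iteration over the same data after np.asarray. The array shape and the iterate() helper are illustrative stand-ins for what set_verts does internally, not the original profiling setup:

    import time
    import numpy as np

    # A stack of small polygons (quiver arrows) as a masked array, roughly
    # the shape PolyCollection.set_verts receives for a 125x125 field.
    n = 125 * 125
    verts = np.ma.masked_invalid(np.random.rand(n, 4, 2))

    def iterate(a):
        # set_verts ultimately walks the vertex sequences one by one
        total = 0
        for poly in a:
            total += len(poly)
        return total

    t0 = time.time()
    iterate(verts)              # masked array: each element access is wrapped
    t1 = time.time()
    iterate(np.asarray(verts))  # plain ndarray view: much cheaper iteration
    t2 = time.time()
    print("masked: %.3fs  asarray: %.3fs" % (t1 - t0, t2 - t1))

On a typical machine the masked loop comes out several times slower, the same kind of gap as the 2.6 s vs 0.65 s numbers above.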
Dear sirs/madams,

There is a bug on:

    >>> from pylab import *
    >>> fig = figure(figsize=(8,8))
    >>> ax = fig.add_axes([0.1,0.1,0.7,0.7])
    >>> l,b,w,h = ax.get_position()
    Traceback (most recent call last):
      File "<interactive input>", line 1, in <module>
    TypeError: 'Bbox' object is not iterable
    >>> ax.get_position()
    Bbox(array([[ 0.1,  0.1],
                [ 0.8,  0.8]]))

I suggest this:

    [l,b],[w,h] = ax.get_position().get_points()

The same bug appears in basemap-0.9.5\examples\plotmap.py:

    File "...\basemap-0.9.5\examples\plotmap.py", line 34, in <module>
      l,b,w,h = ax.get_position()
    TypeError: 'Bbox' object is not iterable

best regards
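A short sketch of the unpacking change for anyone hitting the same TypeError. One caution on the suggested names: Bbox.get_points() returns the two corner points [[x0, y0], [x1, y1]], so the second pair is (right, top) rather than (width, height); width and height have to be computed from the corners. The axes position is illustrative:

    from pylab import figure

    fig = figure(figsize=(8, 8))
    ax = fig.add_axes([0.1, 0.1, 0.7, 0.7])

    # Old API: l, b, w, h = ax.get_position()   (returned a 4-sequence)
    # New API: Bbox is not iterable; unpack its corner points instead.
    (l, b), (r, t) = ax.get_position().get_points()
    w, h = r - l, t - b
    print(l, b, w, h)    # 0.1 0.1 0.7 0.7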
There is a problem in pylab on Windows:

    Traceback (most recent call last):
      File "....\basemap-0.9.5\examples\plot_tissot.py", line 59, in <module>
        m.drawmapboundary()
      File "C:\Python25\Lib\site-packages\matplotlib\toolkits\basemap\basemap.py", line 1227, in drawmapboundary
        ax.add_collection(bound)
      File "C:\Python25\Lib\site-packages\matplotlib\axes.py", line 1320, in add_collection
        if collection._paths and len(collection._paths):
    AttributeError: 'Polygon' object has no attribute '_paths'
Hi,

I submitted a new patch on the tracker. When I was playing with the contour routine in mplot3d and using the extend3d keyword, it would error out. I've attached a sample program that reproduces the error, as well as the dataset I was using. Please feel free to contact me if there are any questions regarding this.

Ryan Wagner
Support/Consulting Engineer
Visual Numerics Inc.
rw...@vn...
Greetings,

The conference committee is extending the deadline for abstract submission for the SciPy 2009 conference by one week. On Friday, July 3rd, at midnight Pacific, we will turn off abstract submission on the conference site. Up to then, you can modify already-submitted abstracts or submit new abstracts.

Submitting Papers
-----------------

The program features tutorials, contributed papers, lightning talks, and birds-of-a-feather sessions. We are soliciting talks and accompanying papers (either formal academic or magazine-style articles) that discuss topics which center around scientific computing using Python. These include applications, teaching, future development directions, and research. A collection of peer-reviewed articles will be published as part of the proceedings.

Proposals for talks are submitted as extended abstracts. There are two categories of talks:

Paper presentations
  These talks are 35 minutes in duration (including questions). A one-page abstract of no less than 500 words (excluding figures and references) should give an outline of the final paper. Proceedings papers are due two weeks after the conference, and may be in a formal academic style or in a more relaxed magazine-style format.

Rapid presentations
  These talks are 10 minutes in duration. An abstract of between 300 and 700 words should describe the topic and motivate its relevance to scientific computing.

In addition, there will be an open session for lightning talks, during which any attendee willing to do so is invited to give a couple-of-minutes-long presentation.

If you wish to present a talk at the conference, please create an account on the website (http://conference.scipy.org). You may then submit an abstract by logging in, clicking on your profile and following the "Submit an abstract" link.

Submission Guidelines

* Submissions should be uploaded via the online form.
* Submissions whose main purpose is to promote a commercial product or service will be refused.
* All accepted proposals must be presented at the SciPy conference by at least one author.
* Authors of an accepted proposal can provide a final paper for publication in the conference proceedings. Final papers are limited to 7 pages, including diagrams, figures, references, and appendices. The papers will be reviewed to help ensure the high quality of the proceedings.

For further information, please visit the conference homepage: http://conference.scipy.org.

The SciPy 2009 executive committee
----------------------------------

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa (Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)
What's in there now should work with numpy arrays, but obviously not Python lists. I think the correct solution is actually to coerce width and height to arrays rather than lists. But I'm hoping someone more familiar with the bar code can comment. It should be a simple change to "make_iterable".

Cheers,
Mike

Brad Chivari wrote:
> Anyone? Am I way off the mark here?
>
> Thanks,
> Brad
>
> [snip]

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
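As a sketch of the coercion Mike suggests: converting width and height to ndarrays makes the boolean-mask filtering work for plain Python lists as well. The helper name and the guard for all-zero input are illustrative, not the actual axes.py patch:

    import numpy as np

    def nonzero_min(values):
        # Coerce to an array so boolean indexing works on lists too
        arr = np.asarray(values, dtype=float)
        nonzero = arr[arr != 0]   # filter out the 0 width/height rects
        return np.amin(nonzero) if nonzero.size else 0.0

    print(nonzero_min([0, 3, 1, 0, 2]))            # -> 1.0 (plain list)
    print(nonzero_min(np.array([0.5, 0.0, 2.0])))  # -> 0.5 (ndarray)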
Anyone? Am I way off the mark here?

Thanks,
Brad

On 2009年6月18日 14:41:03 -0300, Brad Chivari <bra...@so...> wrote:
> [snip]
Tobias Wood wrote:
> Michael,
> Thanks for the comments, much appreciated. I've attached an updated
> patch including your suggestions and some more whitespace for
> readability. There was no reason other than simplicity for not
> returning an 8-bit numpy array. I actually meant to ask about the
> ternary blocks - I think I picked up the style from the original code
> and had continued in the same vein for compactness.

Thanks!

> While I was testing this I came across another issue - which variety
> of FLOAT should the code return? My understanding was that Python
> floats are C doubles. However, the code was previously returning a
> PyArray_FLOAT, which seems to be a FLOAT32 rather than a FLOAT64.
> Hence I removed any trace of doubles from my code and have left it all
> at float precision.

Yes, that's all correct. I suspect read_png creates arrays of floats rather than doubles just for the sake of memory savings -- and the fact that doubles would be overkill for even 16-bit integral data.

Thanks for the new patch. I'll wait a bit to see if there are any comments on the functionality itself or the API before committing it.

Cheers,
Mike

> Thanks again,
> Toby
>
> On 2009年6月22日 15:58:58 +0100, Michael Droettboom <md...@st...> wrote:
> [snip]

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Michael,

Thanks for the comments, much appreciated. I've attached an updated patch including your suggestions and some more whitespace for readability. There was no reason other than simplicity for not returning an 8-bit numpy array. I actually meant to ask about the ternary blocks - I think I picked up the style from the original code and had continued in the same vein for compactness.

While I was testing this I came across another issue - which variety of FLOAT should the code return? My understanding was that Python floats are C doubles. However, the code was previously returning a PyArray_FLOAT, which seems to be a FLOAT32 rather than a FLOAT64. Hence I removed any trace of doubles from my code and have left it all at float precision.

Thanks again,
Toby

On 2009年6月22日 15:58:58 +0100, Michael Droettboom <md...@st...> wrote:
> I don't ever work with data-in-PNGs, so I won't comment on the use cases
> or API here -- I'll leave that to others.
>
> However, for the patch, I think the reinterpret_cast<unsigned short *>
> would be safer as reinterpret_cast<png_uint_16>, since unsigned ints are
> not guaranteed to be 16 bits, and png.h provides a nice convenient
> typedef for us. Also, why does the code not create an 8-bit numpy array
> for "raw" images that are only 8 bits?
>
> Also a style note: I find assignments inside of ternary operators
> (... ? ... : ...) confusing. I'd rather see that as a proper "if" block.
>
> Cheers,
> Mike
>
> [snip]
I don't ever work with data-in-PNGs, so I won't comment on the use cases or API here -- I'll leave that to others.

However, for the patch, I think the reinterpret_cast<unsigned short *> would be safer as reinterpret_cast<png_uint_16>, since unsigned ints are not guaranteed to be 16 bits, and png.h provides a nice convenient typedef for us. Also, why does the code not create an 8-bit numpy array for "raw" images that are only 8 bits?

Also a style note: I find assignments inside of ternary operators (... ? ... : ...) confusing. I'd rather see that as a proper "if" block.

Cheers,
Mike

Tobias Wood wrote:
> Dear list,
> [snip]

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Dear list,

Back in April I submitted a patch that allowed imread() to correctly read PNGs that have odd bit-depths, i.e. not 8 or 16 (I actually submitted that to the Users list as I was unsure of protocol). There were a couple of things I left unfinished that I've finally got round to looking at again.

The main remaining issue for me is that PNG specifies that all bit depths should be scaled to have the same maximum brightness, so that a value of 8191 in a 13-bit image is displayed the same as 65535 in a 16-bit image. Unfortunately, the LabView drivers for the 12-bit CCD in our lab do not follow this convention. A higher bit-depth from this setup means the image was brighter in an absolute sense, and no scaling takes place. So this is not an error in Matplotlib as such, but more about having a decent way to handle iffy PNGs. It is worth noting that Matlab does not handle these PNGs well either (we have to query the image file using iminfo and then correct it), and PIL ignores anything above 8 bits as far as I can tell.

A simple method, in my mind, and originally suggested by Andrew Straw, is to add a keyword argument to imread() that indicates whether a user wants floats scaled between 0 and 1, or the raw byte values which they can then scale as required. This then gets passed to read_png(), which does the scaling if necessary and if not returns an array of UINT16s. I wrote a patch that does this, changing both image.py and _png.cpp. I'm very much open to other suggestions, as I didn't particularly want to fiddle with a core function like imread() and I'm fairly new to Python. In particular I have not changed anything to do with PIL, although it would not be much work to update pil_to_array() to follow the same behaviour as read_png(). I have tested this with pngsuite.py*, and if desired I can submit an extended version of this that tests the extended bit-depth images from the PNG suite.

Thanks in advance,
Toby Wood

* My patch also includes a minor change to pngsuite.py, which was throwing a deprecation warning about using get_frame() instead of patch
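A sketch of the scaling choice under discussion: return either floats normalized by the image's declared bit depth (the PNG convention) or the raw integer counts for files, like the LabView ones, that don't follow it. The function name, the keyword, and where the bit depth comes from are assumptions for illustration, not the actual patch to _png.cpp:

    import numpy as np

    def scale_png_data(raw, bit_depth, scaled=True):
        """raw: integer array as decoded from the PNG chunks."""
        if not scaled:
            # Hand back the raw counts so the caller can rescale as
            # appropriate for a non-conforming writer.
            return raw.astype(np.uint16)
        # PNG convention: full scale is 2**bit_depth - 1, so 8191 in a
        # 13-bit image maps to the same brightness as 65535 in 16-bit.
        return raw.astype(np.float32) / float(2 ** bit_depth - 1)

    pixels = np.array([0, 4096, 8191])
    print(scale_png_data(pixels, 13))         # [0., 0.50..., 1.] floats
    print(scale_png_data(pixels, 13, False))  # raw counts, uint16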
Please excuse me for incorrect information in my announcement:

On Fri, Jun 19, 2009 at 04:01:58PM +0200, Gael Varoquaux wrote:
> We are very happy to announce that this year registration to the
> conference will be only 150,ドル sprints 100,ドル and students get half price!

This should read that the tutorials are 100,ドル not the sprints. The sprints are actually free, of course. We will be very pleased to see as many people as possible willing to participate in the sprint, making the SciPy ecosystem thrive.

Thanks to Travis Oliphant for pointing out the typo.

Gaël Varoquaux
We are finally opening the registration for the SciPy 2009 conference. It took us time, but the reason is that we made careful budget estimations to bring the registration cost down. We are very happy to announce that this year registration to the conference will be only 150,ドル sprints 100,ドル and students get half price!

We made this effort because we hope it will open up the conference to more people, especially students, who often have to finance this trip on a small budget. As a consequence, however, catering at noon is not included.

This does not mean that we are getting a reduced conference. Quite the contrary: this year we have two keynote speakers. And what speakers: Peter Norvig and Jon Guyer! Peter Norvig is the director of research at Google, and Jon Guyer is a research scientist at NIST, in the Thermodynamics and Kinetics Group, where he leads FiPy, a finite volume PDE solver written in Python.

The SciPy 2009 Conference
=========================

SciPy 2009, the 8th Python in Science conference (http://conference.scipy.org), will be held August 18-23, 2009, at Caltech in Pasadena, CA, USA. Each year SciPy attracts leading figures in research and scientific software development with Python from a wide range of scientific and engineering disciplines. The focus of the conference is both on scientific libraries and tools developed with Python and on scientific or engineering achievements using Python.

Call for Papers
===============

We welcome contributions from industry as well as the academic world. Indeed, industrial research and development as well as academic research face the challenge of mastering IT tools for exploration, modeling and analysis. We look forward to hearing about your recent breakthroughs using Python! Please read the full call for papers (http://conference.scipy.org/call_for_papers).

Important Dates
===============

* Friday, June 26: Abstracts due
* Saturday, July 4: Announce accepted talks, post schedule
* Friday, July 10: Early registration ends
* Tuesday-Wednesday, August 18-19: Tutorials
* Thursday-Friday, August 20-21: Conference
* Saturday-Sunday, August 22-23: Sprints
* Friday, September 4: Papers for proceedings due

The SciPy 2009 executive committee
----------------------------------

* Jarrod Millman, UC Berkeley, USA (Conference Chair)
* Gaël Varoquaux, INRIA Saclay, France (Program Co-Chair)
* Stéfan van der Walt, University of Stellenbosch, South Africa (Program Co-Chair)
* Fernando Pérez, UC Berkeley, USA (Tutorial Chair)
SUBJECT:
Filtering out 0 bar height/width not working

FILE:
matplotlib/trunk/matplotlib/lib/matplotlib/axes.py

PROBLEM:

    xmin = np.amin(width[width!=0])   # filter out the 0 width rects
    ymin = np.amin(height[height!=0]) # filter out the 0 height rects

These aren't using proper python list comprehension and don't work as expected (for me anyway).

SOLUTION:
Shouldn't they be something like:

    xmin = np.amin([w for w in width if w != 0])
    ymin = np.amin([h for h in height if h != 0])

Once I changed them they seem to work properly.

Thanks,
Brad
Ludwig Schwardt wrote:
> Does the new path simplification code use a similar approach to snd?
> I've always wanted something like that in matplotlib... :-)

Not knowing the details of what snd is doing, I would say "probably". The general idea is to remove points on the fly that do not change the appearance of the plot at the given resolution. Spending the time to do this up front speeds up the path stroking immensely, as it has fewer vertices and therefore fewer self-intersections to compute. I suspect what matplotlib is doing is a little more general, and therefore not quite as efficient as snd, because it can't assume a 1-dimensional time series.

To give credit where it is due, the path simplification was originally written by Allan Haldane and has been in matplotlib for some time. The recent work has been to fix some bugs when dealing with some degenerate cases, to improve its performance, greatly improve the clipping algorithm, and allow the tolerance to be user-configurable.

Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Hi,

On this subject, one program that has pretty impressive interactive visualisation is the venerable snd (http://ccrma.stanford.edu/software/snd/). It displays hours of audio in a flash and allows you to pan and zoom the signal without a hitch. It only plots an envelope of the audio signal at first, and shows more and more detail as you zoom in. Jimmy's comment that there's no need to visualize 3 million points if you can only display 200,000 is even more true for time signals, where you can typically only display 1000 to 2000 samples (i.e. the number of horizontal pixels).

Does the new path simplification code use a similar approach to snd? I've always wanted something like that in matplotlib... :-)

Regards,
Ludwig
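A sketch of the envelope idea Ludwig describes: reduce a long time series to a per-pixel-column min/max pair before plotting, so only about two points per horizontal pixel are ever drawn. This illustrates the general technique, not snd's or matplotlib's actual implementation:

    import numpy as np

    def minmax_envelope(y, n_columns):
        """Reduce len(y) samples to 2*n_columns points: per-column min/max."""
        n = (len(y) // n_columns) * n_columns    # drop the ragged tail
        cols = y[:n].reshape(n_columns, -1)      # one row per screen column
        lo, hi = cols.min(axis=1), cols.max(axis=1)
        # Interleave min and max so a single line sweeps the full envelope
        out = np.empty(2 * n_columns)
        out[0::2], out[1::2] = lo, hi
        return out

    signal = np.random.randn(3000000)            # "hours of audio"
    env = minmax_envelope(signal, 1500)          # ~2 points per pixel column
    print(env.shape)                             # (3000,)

Zooming in then just means recomputing the envelope over the visible slice, which is how the "more detail as you zoom" behaviour falls out naturally.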
The demo-animation.py worked beautifully out of the box at 150 fps... I upped the array size a bit to 1200x1200... still around 40 fps... very interesting...

jimmy

2009年6月17日 Jimmy Paillet <jim...@gm...>:
> [snip]
I think the setter method is available in Python 2.6 only. I modified the sources and put them in the same place. It should be OK now.

Nicolas

On Wed, 2009年06月17日 at 10:10 -0500, Gökhan SEVER wrote:
> [snip]
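The incompatibility Nicolas mentions, in a nutshell: the @prop.setter decorator form only exists from Python 2.6 on, while the classic property(fget, fset) form also works on 2.5. The class below is an illustrative stand-in for glumpy's Image.cmap, not its actual code:

    class Image(object):
        def __init__(self, cmap=None):
            self._cmap = cmap

        # Python 2.6+ only:
        #     @property
        #     def cmap(self): return self._cmap
        #     @cmap.setter
        #     def cmap(self, value): self._cmap = value

        # Python 2.5-compatible spelling of the same property:
        def _get_cmap(self):
            return self._cmap

        def _set_cmap(self, value):
            self._cmap = value

        cmap = property(_get_cmap, _set_cmap,
                        doc='Colormap used to represent the array.')

    img = Image()
    img.cmap = 'IceAndFire'   # routed through _set_cmap
    print(img.cmap)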
On Wed, Jun 17, 2009 at 9:25 AM, Nicolas Rougier <Nic...@lo...> wrote:
> Hello,
>
> To give you some hints on performance using OpenGL, you can have a look
> at glumpy: http://www.loria.fr/~rougier/tmp/glumpy.tgz
> (it requires pyglet for the OpenGL backend).
>
> [snip]
>
> Nicolas

Nicolas,

How do you run the demo scripts in glumpy? I get errors both with an IPython "run" and with "python script_name.py":

    In [1]: run demo-simple.py
    ---------------------------------------------------------------------------
    AttributeError                       Traceback (most recent call last)

    /home/gsever/glumpy/demo-simple.py in <module>()
         20 #
         21 # ---------------------------------------------------------------
    ---> 22 import glumpy
         23 import numpy as np
         24 import pyglet, pyglet.gl as gl

    /home/gsever/glumpy/glumpy/__init__.py in <module>()
         23 import colormap
         24 from color import Color
    ---> 25 from image import Image
         26 from trackball import Trackball
         27 from app import app, proxy

    /home/gsever/glumpy/glumpy/image.py in <module>()
         25
         26
    ---> 27 class Image(object):
         28     ''' '''
         29     def __init__(self, Z, format=None, cmap=colormap.IceAndFire, vmin=None,

    /home/gsever/glumpy/glumpy/image.py in Image()
        119         return self._cmap
        120
    --> 121     @cmap.setter
        122     def cmap(self, cmap):
        123         ''' Colormap to be used to represent the array. '''

    AttributeError: 'property' object has no attribute 'setter'
    WARNING: Failure executing file: <demo-simple.py>

    [gsever@ccn glumpy]$ python demo-cube.py
    Traceback (most recent call last):
      File "demo-cube.py", line 22, in <module>
        import glumpy
      File "/home/gsever/glumpy/glumpy/__init__.py", line 25, in <module>
        from image import Image
      File "/home/gsever/glumpy/glumpy/image.py", line 27, in <module>
        class Image(object):
      File "/home/gsever/glumpy/glumpy/image.py", line 121, in Image
        @cmap.setter
    AttributeError: 'property' object has no attribute 'setter'

I have Python 2.5.2...
2009年6月17日 Michael Droettboom <md...@st...>:

>> I'm using matplotlib for various tasks beautifully... but on some
>> occasions, I have to visualize large datasets (in the range of 10M data
>> points) (using imshow or regular plots)... the system starts to choke a
>> bit at that point...
>
> The first thing I would check is whether your system becomes starved for
> memory at this point and virtual memory swapping kicks in.

The python process is sitting at around 300 MB of memory consumption... there should be plenty of memory left... but I will look more closely at what's happening... I would assume the memory bandwidth to not be very high, given the cheapness of the computer I'm using :D

> A common technique for faster plotting of image data is to downsample it
> before passing it to matplotlib. Same with line plots -- they can be
> decimated. There is newer/faster path simplification code in SVN trunk
> that may help with complex line plots (when the path.simplify rcParam is
> True). I would suggest starting with that as a baseline to see how much
> performance it already gives over the released version.

Yes, that totally makes sense... no need to visualize 3 million points if you can only display 200,000... I'm already doing that to some extent, but it's taking time on its own... but at least I have solutions to reduce this time if needed... I'll try the SVN version... see if I can extract some improvements...

> [snip]
>
> Perhaps. But again, the computation isn't the bottleneck -- it's usually
> a memory bandwidth starvation issue in my experience. Using a GPU may
> only make matters worse. Note that I consider that approach distinct
> from just using OpenGL to colormap and render the image as a texture.
> That approach may bear some fruit -- but only for image plots. Vector
> graphics acceleration with GPUs is still difficult to do in high quality
> across platforms and chipsets and beat software for speed.

So if I hear you correctly, the Matplotlib/Agg combination is not terribly much slower than a C plotting lib using Agg to render would be... and we are talking more about hardware limitations, right?

Thanks Nicolas, I'll take a closer look at glumpy... I can probably gather some info by making a comparison of an imshow to the equivalent in OpenGL...

> Hope this helps,

It did! Thanks,
jimmy
vehemental wrote:
> Hello,
>
> I'm using matplotlib for various tasks beautifully... but on some
> occasions, I have to visualize large datasets (in the range of 10M data
> points) (using imshow or regular plots)... the system starts to choke a
> bit at that point...

The first thing I would check is whether your system becomes starved for memory at this point and virtual memory swapping kicks in.

A common technique for faster plotting of image data is to downsample it before passing it to matplotlib. Same with line plots -- they can be decimated. There is newer/faster path simplification code in SVN trunk that may help with complex line plots (when the path.simplify rcParam is True). I would suggest starting with that as a baseline to see how much performance it already gives over the released version.

> To active developers, what's the general feeling: does matplotlib have
> room to spare in its rendering performance?

I've spent a lot of time optimizing the Agg backend (which is already one of the fastest software-only approaches out there), and I'm out of obvious ideas. But a fresh set of eyes may find new things. An advantage of Agg that shouldn't be overlooked is that it works identically everywhere.

> or is it pretty tied down to the speed of Agg right now?
> Is there something to gain from using the multiprocessing module now
> included by default in 2.6?

Probably not. If the work of rendering were to be divided among cores, that would probably be done at the C++ level anyway to see any gains. As it is, the problem with plotting many points generally tends to be limited by memory bandwidth anyway, not processor speed.

> or even go as far as using something like pyGPU for fast vectorized
> computations...?

Perhaps. But again, the computation isn't the bottleneck -- it's usually a memory bandwidth starvation issue in my experience. Using a GPU may only make matters worse. Note that I consider that approach distinct from just using OpenGL to colormap and render the image as a texture. That approach may bear some fruit -- but only for image plots. Vector graphics acceleration with GPUs is still difficult to do in high quality across platforms and chipsets and beat software for speed.

> I've seen previous discussions around about OpenGL being a backend at
> some point in the future...
> would it really stand up compared to the current backends? Are there
> any clues about that right now?

Hope this helps,

Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
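A sketch of the downsampling Mike recommends before handing data to matplotlib: block-average an image for imshow, and stride a long line plot down to roughly the screen's horizontal resolution. The factors are arbitrary examples:

    import numpy as np

    def downsample_image(img, factor):
        """Block-average a 2-D array by an integer factor before imshow()."""
        h = (img.shape[0] // factor) * factor
        w = (img.shape[1] // factor) * factor
        blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=3).mean(axis=1)

    big = np.random.rand(4000, 4000)    # ~16M points: too many to draw
    small = downsample_image(big, 8)    # 500x500 is plenty for the screen
    print(small.shape)                  # (500, 500)

    # For line plots, simple decimation before plot():
    y = np.random.randn(10000000)
    y_dec = y[::5000]                   # ~2000 points, about one per pixel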
Hello,

To give you some hints on performance using OpenGL, you can have a look at glumpy: http://www.loria.fr/~rougier/tmp/glumpy.tgz (it requires pyglet for the OpenGL backend).

It is not yet finished, but it is usable. The current version can visualize a static numpy float32 array up to 8000x8000 and a dynamic numpy float32 array around 500x500, depending on GPU hardware (dynamic means that you update the image at around 30 frames per second).

The idea behind glumpy is to directly translate a numpy array into a texture and to use shaders to do the colormap transformation and filtering (nearest, bilinear or bicubic).

Nicolas

On Wed, 2009年06月17日 at 07:02 -0700, vehemental wrote:
> [snip]
Hello,

I'm using matplotlib for various tasks beautifully... but on some occasions, I have to visualize large datasets (in the range of 10M data points) (using imshow or regular plots)... the system starts to choke a bit at that point...

I would like to be consistent somehow and not use different tools for basically similar tasks... so I'd like some pointers regarding rendering performance... as I would be interested in getting involved in development if there is something to be done...

To active developers, what's the general feeling: does matplotlib have room to spare in its rendering performance? Or is it pretty tied down to the speed of Agg right now? Is there something to gain from using the multiprocessing module now included by default in 2.6? Or even going as far as using something like pyGPU for fast vectorized computations...?

I've seen previous discussions around about OpenGL being a backend at some point in the future... would it really stand up compared to the current backends? Are there any clues about that right now?

Thanks for any input! :D
bye