>>>>> "Perry" == Perry Greenfield <pe...@st...> writes: Perry> I like the sounds of this approach even more. But I wonder Perry> if it can be made somewhat more generic. This approach (if Perry> I read it correctly seems to need a backend function for Perry> each shape: perhaps only for circle?). What I was thinking Perry> was if there was a way to pass it the vectors or path for a Perry> symbol (for very often, many points will share the same Perry> shape, if not all the same x,y scale). Of course (slaps self on head). matplotlib 0.1 was designed around gtk drawing which doesn't support paths. Although I've been mumbling about adding paths for sometime (what with paint, ps, and agg), I'm still thinking inside the box. A collection of paths is the natural solution Perry> I suppose circle and other curved items could be handled Perry> with A bezier type call. Agg special cases this one with a dedicated ellipse function. ellipse(ex, y, width, height, numsteps) It's still a path, but you have a dedicated function to build that path up speedily. One potential drawback: how do you bring along the other backends that don't have path support? In the RectangleCollection approach, we can always fall back on draw_rectangle. In the path collection, it's more difficult. backend_gtk (pygtk) - no support for paths AFAIK backend_wx (wxpython) - no support for paths AFAIK; Jeremy? backend_ps - full path support backend_agg - ditto backend_gd - partial, I think; gotta check backend_paint (libart) - full, perhaps with bugs JDH
Excuse me, what does AGG stand for?

And I'm curious, have you looked into cairo as a possible backend?  It's
a vector drawing library that's trying to be OS independent.  I think
there's a python interface.  So far it outputs to bitmap images, X11,
postscript, and an OpenGL port is underway.

  http://freedesktop.org/Cairo/Home

Interested to hear your thoughts.  Maybe it's a matter of a job looking
for a volunteer.

Cheers,
Matthew.
John Hunter writes:
> >>>>> "Perry" == Perry Greenfield <pe...@st...> writes:
>
>     Perry> What I was alluding to was that a backend primitive could
>     Perry> be added that allowed plotting a symbol (patch?) or point
>     Perry> for an array of points.  The base implementation would just
>     Perry> do a python loop over the single point case so there is no
>     Perry> requirement for a backend to overload this call.  But it
>     Perry> could do so if it wanted to loop over all points in C.  How
>     Perry> flexible to make this is open to discussion (e.g., allowing
>     Perry> x and y scaling factors, as arrays, for the symbol to be
>     Perry> plotted, and other attributes that may vary with point such
>     Perry> as color)
>
> To make this work in the current design, you'll need more than a new
> backend method.
> [much good explanation of why...]

OK, I understand.

> My first response to this problem was to use a naive container class,
> eg Circles, and an appropriate backend method, eg, draw_circles.  In
> this case, scatter would instantiate a Circles instance with a list of
> circles.  When Circles was called to render, it would need to create a
> sequence of location data and a sequence of gcs [...]

I'd agree that this doesn't seem worth the trouble.

> Much better is to implement a GraphicsContextCollection, where the
> relevant properties can be either individual elements or
> len(collection) sequences.  If a property is an element, it's
> homogeneous across the collection.  If it's len(collection), iterate
> over it.  The CircleCollection, instead of storing individual Circle
> instances as I wrote about above, stores just the location and size
> data in arrays and a single GraphicsContextCollection.
>
>     def scatter(x, y, s, c):
>         collection = CircleCollection(x, y, s)
>
>         gc = GraphicsContextCollection()
>         gc.set_linewidth(1.0)    # a single line width
>         gc.set_foreground(c)     # a len(x) array of facecolors
>         gc.set_edgecolor('k')    # a single edgecolor
>
>         collection.set_gc(gc)
>
>         axes.add_collection(collection)
>         return collection
>
> And this will be blazingly fast compared to the solution above, since,
> for example, you transform the x, y, and s coordinates as numeric
> arrays rather than individually.  And there is almost no function call
> overhead.  And as you say, if the backend doesn't implement a
> draw_circles method, the CircleCollection can just fall back on
> calling the existing methods in a loop.
>
> Thoughts?

I like the sounds of this approach even more.  But I wonder if it can be
made somewhat more generic.  This approach (if I read it correctly)
seems to need a backend function for each shape: perhaps only for
circle?  What I was thinking was if there was a way to pass it the
vectors or path for a symbol (for very often, many points will share the
same shape, if not all the same x,y scale).  Here the circle is a bit of
a special case compared to crosses, error bars, triangles and other
symbols that are usually made up of a few straight lines.  In these
cases you could pass the backend the context collection along with the
shape (and perhaps some scaling info if that isn't part of the context).
That way only one backend routine is needed.  I suppose circle and other
curved items could be handled with a bezier-type call.  But perhaps I
still misunderstand.

Thanks for your very detailed response.

Perry
>>>>> "Perry" == Perry Greenfield <pe...@st...> writes: Perry> What I was alluding to was that if a backend primitive was Perry> added that allowed plotting a symbol (patch?) or point for Perry> an array of points. The base implementation would just do Perry> a python loop over the single point case so there is no Perry> requirement for a backend to overload this call. But it Perry> could do so if it wanted to loop over all points in C. How Perry> flexible to make this is open to discussion (e.g., allowing Perry> x, and y scaling factors, as arrays, for the symbol to be Perry> plotted, and other attributes that may vary with point such Perry> as color) To make this work in the current design, you'll need more than a new backend method. Plot commands like scatter instantiate Artists (Circle) and add them to the Axes as a generic patch instances. On a call to draw, the Axes instance iterates over all of it's patch instances and forwards the call on to the artists it contains. These, in turn instantiate gc instances which contain information like linewidth, facecolor, edgecolor, alpha , etc... The patch instance also transforms its data into display units and calls the relevant backend method. Eg, a Circle instance would call renderer.draw_arc(gc, x, y, width, ...) This makes it relatively easy to write a backend since you only have to worry about 1 coordinate system (display) and don't need to know anything about the Artist objects (Circle, Line, Rectangle, Text, ...) The point is that no existing entity knows that a collection of patches are all circles, and noone is keeping track of whether they share a property or not. This buys you total flexibility to set individual properties, but you pay for it in performance, since you have to set every property for every object and call render methods for each one, and so on. My first response to this problem was to use a naive container class, eg Circles, and an appropriate backend method, eg, draw_circles. In this case, scatter would instantiate a Circles instance with a list of circles. When Circles was called to render, it would need to create a sequence of location data and a sequence of gcs locs = [ (x0, y0, w0, h0), (x1, y1, w1, h1), ...] gcs = [ circ0.get_gc(), circ1.get_gc(), ...] and then call renderer.draw_ellipses( locs, gcs). This would provide some savings, but probably not dramatic ones. The backends would need to know how to read the GCs. In backend_agg extension code, I've implemented the code (in CVS) to read the python GraphicsContextBase information using the python API. _gc_get_linecap _gc_get_joinstyle _gc_get_color # returns rgb This is kind of backward, implementing an object in python and then accessing it at the extension level code using the Python API, but it does keep as much of the frontend in python as possible, which is desirable. The point is that for your approach to work and to not break encapsulation, the backends have to know about the GC. The discussion above was focused on preserving all the individual properties of the actors (eg every circle can have it's own linewidth, color, alpha, dash style). But this is rare. Usually, we just want to vary one or two properties across a large collection, eg, color in pcolor and size and color in scatter. Much better is to implement a GraphicsContextCollection, where the relevant properties can be either individual elements or len(collection) sequences. If a property is an element, it's homogeneous across the collection. If it's len(collection), iterate over it. 
The CircleCollection, instead of storing individual Circle instances as
I wrote about above, stores just the location and size data in arrays
and a single GraphicsContextCollection.

    def scatter(x, y, s, c):
        collection = CircleCollection(x, y, s)

        gc = GraphicsContextCollection()
        gc.set_linewidth(1.0)    # a single line width
        gc.set_foreground(c)     # a len(x) array of facecolors
        gc.set_edgecolor('k')    # a single edgecolor

        collection.set_gc(gc)

        axes.add_collection(collection)
        return collection

And this will be blazingly fast compared to the solution above, since,
for example, you transform the x, y, and s coordinates as numeric arrays
rather than individually.  And there is almost no function call
overhead.  And as you say, if the backend doesn't implement a
draw_circles method, the CircleCollection can just fall back on calling
the existing methods in a loop.

Thoughts?

JDH
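As a rough illustration of the "individual element or len(collection)
sequence" rule proposed above, here is a minimal sketch in plain Python.
The class name matches the proposal, but the generic set/get interface
is a simplification invented for this example, not the real
GraphicsContextBase API.

    class GraphicsContextCollection:
        """Graphics properties that are either shared or vary per element."""
        def __init__(self, n):
            self._n = n        # number of elements in the owning collection
            self._props = {}

        def set(self, name, value):
            self._props[name] = value

        def get(self, name, i):
            """Property `name` for element i; scalars are broadcast."""
            value = self._props[name]
            if hasattr(value, '__len__') and not isinstance(value, str) \
                    and len(value) == self._n:
                return value[i]   # per-element sequence
            return value          # homogeneous across the collection

    gc = GraphicsContextCollection(n=3)
    gc.set('linewidth', 1.0)              # one linewidth for all elements
    gc.set('facecolor', ['r', 'g', 'b'])  # one facecolor per element
    gc.get('facecolor', 1)                # -> 'g'
    gc.get('linewidth', 1)                # -> 1.0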
>>>>> "Peter" == Peter Groszkowski <pgr...@ge...> writes: Peter> Will mostly be plotting time Vs value(time) but in certain Peter> cases will need plots of other data, and therefore have to Peter> look at the worst case scenario. Not exactly sure what you Peter> mean by "continuous" since all are descrete data Peter> points. The data may not be smooth (could have misbehaving Peter> sensors giving garbage) and jump all over the place. Bad terminology: for x I meant sorted (monotonic) and for y the ideal cases is smooth and not varying too rapidly. Try the lod feature and see if it works for you. Perhaps it would be better to extend the LOD functionality, so that you control the extent of subsampling. Eg, suppose you have 100,000 x data points but only 1000 pixels of display. Then for every data 100 points you could set the decimation factor, perhaps as a percentage. More generally, we could implement a LOD base class users could supply their own derived instances to subsample the data how they see fit, eg, min and max over the 100 points, and so on. By reshaping the points into a 1000x100 matrix, this could be done in Numeric efficiently. >> econdly, the standard gdmodule will iterate over the x, y >> values in a python loop in gd.py. This is slow for lines with >> lots of points. I have a patched gdmodule that I can send you >> (provide platform info) that moves this step to the extension >> module. Potentially a very big win. Peter> Yes, that would be great! System info: Here is the link http://nitace.bsd.uchicago.edu:8080/files/share/gdmodule-0.52b.tar.gz You must also upgrade gd to 2.0.22 (alas 2.0.21 is obsolete!) since I needed the latest version to get this sucker ported to win32. >> Another possibility: change backends. The GTK backend is >> significantly faster than GD. If you want to work off line >> (ie, draw to image only and not display to screen ) and are on >> a linux box, you can do this with GTK and Xvfb. I'll give you >> instructions if interested. In the next release of matplotlib, >> there will be a libart paint backend (cross platform) that may >> be faster than GD. I'm working on an Agg backend that should >> be considerably faster than all the other backends since it >> does everything in extension code -- we'll see Peter> Yes I am only planning to work offline. Want to be able to Peter> pipe the output images to stdout. I am looking for the Peter> fastest solution possible. I don't know how to write a GTK pixbuf to stdout. I inquired on the pygtk mailing list, so perhaps we'll learn something soon. To use GTK in Xvfb, make sure you have Xvfb (X virtual frame buffer) installed (/usr/X11R6/bin/Xvfb). There is probably an RPM, but I don't remember. You then need to start it with something like XVFB_HOME=/usr/X11R6 $XVFB_HOME/bin/Xvfb :1 -co $XVFB_HOME/lib/X11/rgb -fp $XVFB_HOME/lib/X11/fonts/misc/,$XVFB_HOME/lib/X11/fonts/Speedo/,$XVFB_HOME/lib/X11/fonts/Type1/,$XVFB_HOME/lib/X11/fonts/75dpi/,$XVFB_HOME/lib/X11/fonts/100dpi/ & And connect your display to it > setenv DISPLAY :1 Now you can use gtk as follows from matplotlib.matlab import * from matplotlib.backends.backend_gtk import show_xvfb def f(t): s1 = cos(2*pi*t) e1 = exp(-t) return multiply(s1,e1) t1 = arange(0.0, 5.0, 0.1) t2 = arange(0.0, 5.0, 0.02) t3 = arange(0.0, 2.0, 0.01) subplot(211) plot(t1, f(t1), 'bo', t2, f(t2), 'k') title('A tale of 2 subplots') ylabel('Damped oscillation') subplot(212) plot(t3, cos(2*pi*t3), 'r--') xlabel('time (s)') ylabel('Undamped') savefig('subplot_demo') show_xvfb() # not show!
Perry:

Currently using connected line plots, but do not want to limit myself in
any way when it comes to presenting data.  I am certain that at one
point I will use every plot available in the matplotlib arsenal.  On a
3.2GHz P4 with 2GB RAM I get ~90 seconds for a 100,000-point data set,
~50 seconds for 50,000 and ~9 seconds for 10,000 (sort of linear).  This
is way too long for my purposes.  I was hoping more for ~5 seconds for
100,000 points.

John:

    > I routinely plot data sets this large.  500,000 data points is a
    > typical 10 seconds of EEG, which is the application that led me
    > to write matplotlib.

That sounds good!

    > If your xdata are sorted, ie like time, the following
    >
    >   l = plot(blah, blah)
    >   set(l, 'lod', True)
    >
    > could be a big win.
    >
    > Whether this is appropriate or not depends on the data set of
    > course, whether it is continuous, and so on.  Can you describe
    > your dataset in more detail, because I would like to add whatever
    > optimizations are appropriate -- if others can pipe in here too
    > that would help.

Will mostly be plotting time vs value(time) but in certain cases will
need plots of other data, and therefore have to look at the worst case
scenario.  Not exactly sure what you mean by "continuous" since all are
discrete data points.  The data may not be smooth (could have
misbehaving sensors giving garbage) and jump all over the place.

    > Secondly, the standard gdmodule will iterate over the x, y values
    > in a python loop in gd.py.  This is slow for lines with lots of
    > points.  I have a patched gdmodule that I can send you (provide
    > platform info) that moves this step to the extension module.
    > Potentially a very big win.

Yes, that would be great!  System info:

  OS: RedHat 9 (kernel 2.4.20)

  gcc version from running 'gcc -v':
    Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/specs
    Configured with: ../configure --prefix=/usr --mandir=/usr/share/man
      --infodir=/usr/share/info --enable-shared --enable-threads=posix
      --disable-checking --with-system-zlib --enable-__cxa_atexit
      --host=i386-redhat-linux
    Thread model: posix
    gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

  Python: Python 2.2.2 (#1, Feb 24 2003, 19:13:11)
  matplotlib: matplotlib-0.50e
  gdpython: 0.51 (with modified _gdmodule.c)
  gd: gd-2.0.21

    > Another possibility: change backends.  The GTK backend is
    > significantly faster than GD.  If you want to work offline (ie,
    > draw to image only and not display to screen) and are on a linux
    > box, you can do this with GTK and Xvfb.  I'll give you
    > instructions if interested.  In the next release of matplotlib,
    > there will be a libart paint backend (cross platform) that may be
    > faster than GD.  I'm working on an Agg backend that should be
    > considerably faster than all the other backends since it does
    > everything in extension code -- we'll see

Yes I am only planning to work offline.  Want to be able to pipe the
output images to stdout.  I am looking for the fastest solution
possible.

Thanks again.

Peter
John Hunter writes:

    > could be a big win.  LOD is "Level of Detail" and if true
    > subsamples the data according to the pixel width of the output,
    > as you described.  Whether this is appropriate or not depends on
    > the data set of course, whether it is continuous, and so on.  Can
    > you describe your dataset in more detail, because I would like to
    > add whatever optimizations are appropriate -- if others can pipe
    > in here too that would help.

What I was alluding to was that a backend primitive could be added that
allowed plotting a symbol (patch?) or point for an array of points.  The
base implementation would just do a python loop over the single point
case, so there is no requirement for a backend to overload this call.
But it could do so if it wanted to loop over all points in C.  How
flexible to make this is open to discussion (e.g., allowing x and y
scaling factors, as arrays, for the symbol to be plotted, and other
attributes that may vary with point such as color).

Perry
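A sketch of the overloading pattern described above: the renderer base
class supplies a plain python loop, and a backend that can do better
simply overrides the method.  The names (RendererBase, draw_markers,
draw_arc) and their signatures are assumptions for illustration, not the
backend API as it actually existed.

    class RendererBase:
        def draw_arc(self, gc, x, y, width, height):
            raise NotImplementedError   # each backend supplies its own primitive

        def draw_markers(self, gc, xs, ys, size):
            # base implementation: a python loop over the single-point case,
            # so no backend is required to overload this call
            for x, y in zip(xs, ys):
                self.draw_arc(gc, x, y, size, size)

    class RendererFast(RendererBase):
        def draw_markers(self, gc, xs, ys, size):
            # a backend that wants to can overload it and push the loop over
            # all points down into extension (C) code instead
            pass  # e.g. _extension_module.draw_markers(...)  (hypothetical)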
>>>>> "Peter" == Peter Groszkowski <pgr...@ge...> writes: Peter> Hello: We will be dealing with large (> 100,000 but in some Peter> instances as big as 500,000 points) data sets. They are to Peter> be plotted, and I would like to use matplotlib. Are you working with plot/loglog/etc (line data) or pcolor/hist/scatter/bar (patch data)? I routinely plot data sets this large. 500,000 data points is a typical 10 seconds of EEG, which is the application that led me to write matplotlib. EEG is fairly special: the x axis time is monotonically increasing and the y axis is smooth. This lets me take advantage of level of detail subsampling. If your xdata are sorted, ie like time, the following l = plot(blah, blah) set(l, 'lod', True) could be a big win. LOD is "Level of Detail" and if true subsamples the data according to the pixel width of the output, as you described. Whether this is appropriate or not depends on the data set of course, whether it is continuous, and so on. Can you describe your dataset in more detail, because I would like to add whatever optimizations are appropriate -- if others can pipe in here too that would help. Secondly, the standard gdmodule will iterate over the x, y values in a python loop in gd.py. This is slow for lines with lots of points. I have a patched gdmodule that I can send you (provide platform info) that moves this step to the extension module. Potentially a very big win. Another possibility: change backends. The GTK backend is significantly faster than GD. If you want to work off line (ie, draw to image only and not display to screen ) and are on a linux box, you can do this with GTK and Xvfb. I'll give you instructions if interested. In the next release of matplotlib, there will be a libart paint backend (cross platform) that may be faster than GD. I'm working on an Agg backend that should be considerably faster than all the other backends since it does everything in extension code -- we'll see :-). JDH
How are you plotting the data?  As a scatter plot (e.g., symbols or
points) or as a connected line plot?  The former can be quite a bit
slower, and we have some thoughts about speeding that up (which we
haven't broached with JDH yet).  How long is it taking and how much
faster do you need it?

Perry Greenfield

> -----Original Message-----
> From: mat...@li...
> [mailto:mat...@li...] On Behalf Of Peter Groszkowski
> Sent: Wednesday, February 11, 2004 2:14 PM
> To: mat...@li...
> Subject: [Matplotlib-users] large data sets and performance
>
> Hello:
>
> We will be dealing with large (> 100,000 but in some instances as big
> as 500,000 points) data sets.  They are to be plotted, and I would
> like to use matplotlib.  I did a few preliminary tests, and it seems
> like plotting that many pairs is a little too much for the system to
> handle.  [...]
Hello:

We will be dealing with large (> 100,000 but in some instances as big as
500,000 points) data sets.  They are to be plotted, and I would like to
use matplotlib.  I did a few preliminary tests, and it seems like
plotting that many pairs is a little too much for the system to handle.

Currently we are using (as a backend to some other software) gnuplot for
doing this plotting.  It seems to be "lightning-fast", but I suspect
(may be wrong!) that it reduces the data before the plotting takes
place, and only selects every nth point.  I have to go through the code
that calls it to be certain.  I would imagine that it is not necessary
to plot every point of 100,000 to produce a page-size plot, but I'm not
sure if simply grabbing every nth point and reducing the data like that
is the best way to go about this.

So my question is to anyone else out there who is also dealing with
these large (and very large) data sets: what do you do?  Any library
routines that you use before plotting to massage the data?  Are there
any ways (ie, flags to set) to optimize this in matplotlib?  Any other
software you use?  I should note that I use the GD backend and pipe the
output to stdout for a cgi script to pick up.

Thanks.

Peter Groszkowski                  Gemini Observatory
Tel: +1 808 974-2509               670 N. A'ohoku Place
Fax: +1 808 935-9235               Hilo, Hawai'i 96720, USA
>>>>> "Jean-Baptiste" =3D=3D Jean-Baptiste Cazier <Jean-Baptiste.cazier@d= ecode.is> writes: Jean-Baptiste> S=E6ll ! Thanks for the info and update. I upgaded Jean-Baptiste> my library and my program. The dramatic change of Jean-Baptiste> API was a bit painful, but the new syntax is more Jean-Baptiste> clear. Do you have any idea when you will hav Jean-Baptiste> reached a stable version in general and in term of Jean-Baptiste> version ? It has been a bit painful, my apologies; I still have one application to port myself. But it was necessary. matplotlib is undergoing active development. The basic idea is to write a backend for a powerful image renderer, http://antigrain.com or libart , and use that backend to draw to the GUIs. Rather than each GUI implementing it's own drawing, I'm moving to one high quality image renderer that will draw to the GUI. Why? Four major benefits * Easier maintenance: when I want to add a feature, I add it to the image backend and all the GUIs automatically benefit. * Enhanced drawing capabilities - the GUIs don't support a lot of the more sophisticated drawing capabilities, eg, paths from PS and SVG, or alpha blending, or gouraud shading. The agg backend supports all of these, and therefore by extension, so will the GUIs. * Font standardization across backends. With a common image renderer that supports freetype, all the GUIs can have freetype support with common font names. =20 * Ease of plain old image integration with figures - 2D image plots, like pcolor, will become very fast and very pretty. Any solution along these lines will be performance competitive with native GUI and work on all major platforms (win32, osx, linux, unix) or we won't do it. In order to do this right, I needed the Figure class (and all it's children) to be totally backend independent. So a WX figure can render to a postscript renderer, or an agg renderer, or a gd renderer, or a libart renderer. In earlier releases of matplotlib, the AxisText and Figure classes were tied up in the backend. I have made it a policy not to predict a stable API because I don't want to go back on my word. That said, I think the existing design is clean and I am happy with it. I would not have said that a month ago. I don't anticipate any major changes to the Figure API or the FigureCanvas API. Changes I do expect to see in the near future are * the addition of GTK/AGG, WX/AGG and Tkinter/AGG backends. These will be optional so for those of you who want to stay with the classic GUI backends you can. But I would encourage you to move over at that time since you'll get better drawing, more sophisticated plots, etc... I will probably add a matplotlibrc file in which you can select your default backend. This will be an almost trivial API port, if you choose to. This will probably be around matplotlib-0.6 (1-2 months). * Addition of path drawing methods in the renderer API. This will allow for the more sophisticated that state dependent paths support. If you don't work directly with the renderer, it won't affect you. It's mainly for backend implementers. As for API stability in general, I have 2 additional thoughts. * for the matlab interface, if it's matlab compatible it's 99% likely to be stable. We've made minor departures from this when there was clearly a better way to do it, but it's a good rule of thumb. * As with many projects, I think the 1.0 release should guarantee some API stability. I predict you'll see a pretty stable API until then. JDH
>>>>> "LUK" == LUK ShunTim <shu...@po...> writes: >> OK, now we at least know where the problem is. I don't get >> such an error message on my system (rhl9, pygtk-2.0.0). What >> platform are you on, and what versions of GTK and pygtk are you >> running? JDH LUK> W2K, Enthought python 2.3, pygtk 2.0, gtk 2.0 Tracked this one down. Apparently the latest version of GTK for windows does different font aliasing. This is controlled by the file c:\gtk\etc\pango\pango.aliases Try adding the line: times = "times new roman,angsana new,mingliu,simsun,gulimche,ms gothic,latha,mangal,code2000" If you get other font messages, adding more font aliases to this file may help. I'm making some changes in the gtk backend - mapping times -> serif, so in the next release of matplotlib this should be fixed automagically. JDH
>>>>> "derek" == derek bandler <d_b...@ho...> writes: derek> A small problem - I downloaded the tar/zipped file to my derek> pc, but winzip errors on onloading it. I downloaded derek> something called power zip and have the same problem. derek> There's an error msg that the archive may be corrupt. Oh, I assumed you were on a UNIX platform because of the Solaris reference. Here's a link to a windows installer http://nitace.bsd.uchicago.edu:8080/files/share/matplotlib-0.50j.win32.exe derek> As far as the patch, would that require re-building the derek> software? Yep, and this is far from trivial on win32. Hopefully, the pygtk maintainers will include the patch in the next release, but there has been grumbling on the pygtk list about patches collecting dust. In any case, I cache most of the relevant stuff now, so repeated draws should not expose the memory leak as before. Give it a try. JDH
Hi

You might know that octave is a free software clone of matlab.  I used
it heavily for a previous project.  One thing I liked about its plot
command is an extension to the fmt string which allowed you to specify
the legend title for each line between semicolons.  Like this:

  plot(x, sin(x), 'r;sine;')
  plot(x, cos(x), 'g;cosine;')
  plot(x, tan(x), 'b;tangent;')

It seems like a handy way to be able to do things.  I couldn't see this
mentioned in the matplotlib docs, and when I tried it I got a dialog
"Unrecognized character in format string".  May I suggest it as a useful
addition to matplotlib?  It seems like legend() might need adjustment to
allow for it too.

BTW, using the GTK backend, the "Unrecognized character in format
string" dialog locked up my python session after I clicked OK.  It
appeared straight after the plot command, not when I ran show().  After
I clicked OK, the window stayed visible but unresponsive to the close
button etc.

m.
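One way the octave-style syntax could be emulated on top of the existing
commands, without touching the format-string parser, is a thin wrapper
that strips the ';label;' part out before calling plot() and then feeds
the collected labels to legend().  This is only a sketch of the idea:
myplot and mylegend are invented names, and it assumes the
"from matplotlib.matlab import *" environment used in the examples
elsewhere in this thread.

  _labels = []

  def myplot(x, y, fmt):
      """Accept 'r;sine;' style strings: plot with 'r', remember 'sine'."""
      if ';' in fmt:
          style, label = fmt.split(';')[:2]
      else:
          style, label = fmt, None
      lines = plot(x, y, style)
      if label is not None:
          _labels.append(label)
      return lines

  def mylegend():
      legend(_labels)

  myplot(x, sin(x), 'r;sine;')
  myplot(x, cos(x), 'g;cosine;')
  mylegend()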