Hi Eric,

I think I understand the two approaches you mention; I have roughly the same mental distinction, although with a different spin, probably because I come from finite elements/data visualisation rather than image manipulation. I see basically two types of 2D data that can be visualized as a colormapped image:

a) Point data, where you provide a collection of values with associated coordinates. The algorithm is then fully responsible for computing the "best" value to associate with the other coordinates (not provided by the user, but nonetheless needed to build a full image, because there was a change of resolution or some data are simply missing). One can distinguish sub-categories, depending on whether the points are given on
- a regular rectangular grid, *
- a regular hexagonal/triangular grid,
- an irregular rectangular grid, *
- another pattern (e.g. a mapped rectangular grid), or
- arbitrary locations, without any pattern.
For now only the *-marked sub-categories are supported in matplotlib, AFAIK.

b) Cell data, where you provide a collection of cells (polygons), with vertex coordinates and color-mapped data. Again we can distinguish sub-categories:
- cell-wise data, no interpolation needed, *
- or vertex-wise data, where interpolation is needed:
  - averaging over the cell,
  - proximity-based interpolation,
  - (bi)linear interpolation,
  - higher-order interpolation.
It can also be subdivided using a cell-geometry criterion:
- cells part of a regular rectangular grid, *
- cells part of an irregular rectangular grid, *
- cells part of a mapped rectangular grid, *
- arbitrary cells paving the plane (vertex coordinates + connectivity table),
- fully arbitrary cells, not necessarily contiguously connected.
Again, AFAIK only the *-marked categories are currently supported in matplotlib.

In addition to those categories, one more thing has to be considered whenever any kind of interpolation/extrapolation is performed: does the interpolation happen on the data, with the interpolated data then mapped to color (Phong mapping/interpolation/shading in Matlab speak), or is the interpolation performed on colors, after the initial user-provided data have been mapped (Gouraud mapping/interpolation/shading in Matlab speak)?

Some equivalences exist between cell-based and point-based images; in particular my patch can be seen either as irregular rectangular grid patches with bilinear interpolation, or as point data with bilinear interpolation. It is really more of the second, because with cell data one would, for example, expect to be able to set line styles for cell boundaries, while for point data that would make no sense. I do not see the fact that for point data the interpolation is chosen by the algorithm as a bad thing: if one uses point data it is precisely because there is no cell information as such, so having the algorithm choose the boundaries automatically is an advantage.

For example, your NonUniformImage is a point-data image generator which works for non-uniform rectangular grids. It associates with each pixel the data of the closest available point, using not the Cartesian distance but a "Manhattan distance" given by abs(x-x0) + abs(y-y0). My patch adds bilinear interpolation, interpolating in x and y between the four closest points. Other types of interpolation are possible, but more complex to implement.
The context in which we want to use NonUniformImage is not image processing but data visualisation: we have measurement data that depend on two variables (RPM and frequency), and a classic way to present such data in the field (acoustic radiation) is what is known as a "waterfall diagram". However, the samples are not necessarily taken on a regular grid: at best they lie on an irregular grid (especially along the RPM axis), and at worst they follow a non-obvious pattern. There is no cell data here and boundaries are not really relevant; the only thing that matters is to produce an attractive and visually easy-to-parse image of 2D data that was sampled (or simulated) on a set of points. In this context both interpolations are interesting for our users: nearest, because it presents only measured data, extended so that they cover the whole figure without any actual interpolation, and bilinear, because it shows smoother variations, making it easier to spot patterns and interesting RPM/frequency regions.

A small example, as you requested (a is within the data, b is outside, c is completely outside):

  c
      11    12    13
                a       b
      21    22    23
      31    32    33

Nearest interpolation (already present; the patch fixes a small error):
  color at a = color at 12, 13, 22 or 23, whichever is closest to a
  color at b = color at 13 or 23, whichever is closest to b
  color at c = color at 13

Bilinear interpolation (new), with S = (x23-x12)(y12-y23):
  color at a = color(12)*(x23-xa)(ya-y23)/S + color(13)*(xa-x22)(ya-y22)/S
             + color(22)*(x13-xa)(y13-ya)/S + color(23)*(xa-x12)(y12-ya)/S
  color at b = color(13)*(yb-y23)/(y13-y23) + color(23)*(y13-yb)/(y13-y23)
  color at c = color at 13

As I mentioned in my first message, completing the full array of image generation (point data, cell data, various interpolation schemes and geometric distributions of points or cell geometries, interpolation before or after the color mapping) is a tremendous amount of work. Not even Matlab has it complete, although it is far more complete than matplotlib for now and can be considered a worthy goal... But I think this linear interpolation after color mapping, on irregular rectangular point data, is already a useful addition ;-)

Regards,

Greg.

Quoting Eric Firing <ef...@ha...>: > Grégory Lielens wrote: >> Thanks a lot for reviewing my patch! >> I have corrected most of the problems (I think ;-) ) >> I indeed introduced memory leak, I think it is fixed now, I have also >> reorganized the code to avoid duplication of the cleanup code. I used an >> helper function instead of the goto, because this cleanup is needed for >> normal exit too, so the helper function can come in handy for that. >> std::vector would have been nice, but I tried to respect the original >> coding style, maybe this could be changed but I guess the original >> author should have something to say about it.... >> Only thing I did not change is the allocation of acols/arows even when >> they are not used. It is not difficult to do, but the code will be a >> little more complex and I think that the extra memory used is not >> significant: The memory for those mapping structure is doubled (1 float >> and 1 int vector of size N, instead of a single int vector of size N), >> but those mapping structures are an order of magnitude smaller than the >> image buffer of size N*N of 4 char anyway... >> >> If one agree that this is to be saved at the (slight) expense of code >> conciseness/simplicity, I can add this optimisation...
>> >> Thanks for pointing the error at the left/bottom pixel lines, it is >> corrected now :-) >> >> And I set interpolation to NEAREST by default, so that the behavior of >> NonUniformImage remains the same as before if the user do not specify >> otherwise (I forgot to do it in the first patch)... >> >> >> I include the new patch, > > Gregory, > > I am sorry to be late in joining in, but I have a basic question about > what you are trying to do. It arises from the fact that there are two > styles of image-like plots: > > 1) In the original Image and NonuniformImage, the colors and locations > of "pixel" (or patch or subregion or whatever) centers are specified. > > 2) In pcolor, quadmesh, and pcolorimage, the patch *boundaries* are > specified. pcolorimage is my modification of the original > NonuniformImage code to work with boundaries. I left the original in > place, since presumably it meets some people's needs, even though it > does not meet mine. > > When the grid is uniform, it doesn't really make any difference; but > when the grid is nonuniform, then I don't think style #1 makes much > sense for pcolor-style plotting, in which there is no interpolation; > there is no unique way to specify the boundaries, so it is up to > the plotting routine, and that is bad. > > For an image, it is clear how the interpolation should work: the color > values are given at the centers of the "pixels" (meaning on the original > image grid, not on the screen or other output device grid), which are > uniformly spaced, and interpolated between. > > Exactly what is your patch supposed to do for the nonuniform case? Would > you describe and demonstrate it, please, for a simple case with a 2x2 > array of colors and 3x3 arrays for boundary x and y? > > Thanks. > > Eric
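The nearest-neighbour rule and the bilinear weights described in the message above can be written down in a few lines of NumPy. This is only an illustrative sketch (the function names and signatures are invented here; the real implementation is the C++ code in matplotlib's image module). On an axis-aligned grid, minimising the Manhattan distance abs(x-x0) + abs(y-y0) amounts to picking the nearest column and the nearest row independently, and the bilinear weights use the fact that x22 = x12, x13 = x23, y13 = y12 and y22 = y23.

import numpy as np

def nearest_resample(x, y, data, xs, ys):
    # For each output pixel centre (xs[j], ys[i]) pick the closest input
    # column and row independently; on an axis-aligned grid this is the
    # same as minimising the Manhattan distance |x-x0| + |y-y0|.
    jx = np.abs(xs[:, None] - x[None, :]).argmin(axis=1)
    iy = np.abs(ys[:, None] - y[None, :]).argmin(axis=1)
    return data[np.ix_(iy, jx)]          # shape (len(ys), len(xs))

def bilinear_color(xa, ya, x12, x23, y12, y23, c12, c13, c22, c23):
    # Weights of the four surrounding (already colour-mapped) values,
    # exactly as in the formulas above; the four weights sum to 1.
    S = (x23 - x12) * (y12 - y23)
    w12 = (x23 - xa) * (ya - y23) / S
    w13 = (xa - x12) * (ya - y23) / S
    w22 = (x23 - xa) * (y12 - ya) / S
    w23 = (xa - x12) * (y12 - ya) / S
    return w12 * c12 + w13 * c13 + w22 * c22 + w23 * c23

Clamping xa and ya to the grid edges before calling bilinear_color reproduces the constant extrapolation used for points b and c in the example.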
On Sat, Aug 30, 2008 at 9:49 AM, Ray Salem <ray...@gm...> wrote:
> Hey John. Your suggestion has definitely made a large improvement, but while
> it is still running, if you select any of the windows, python or the figure,
> the figure will still enter a state of non-response. I have tried something
> very similar in matlab and have observed no problems. Do you have any other
> advice or suggestions?

Please respond on list so others can benefit and contribute to the conversation. You need to "reply to all" to reply to the list.

In order to do this well, you need to use your user interface's idle handler. The example I used was just to illustrate the line handling, but for real work you should run your update function in either an idle handler or a timeout handler. Look in the examples/animation directory, http://matplotlib.sourceforge.net/examples/animation, for different examples for each user interface (see also the animations cookbook tutorial at http://www.scipy.org/Cookbook/Matplotlib/Animations). The way to do this differs for each toolkit, and we have examples for many of them in the examples directory I pointed you to above.

Does anyone have an example of how to do proper idle handling with tk?

JDH
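One possible answer for the Tk case, sketched below: instead of sleeping inside a Python loop, let the Tk event loop drive the updates via the canvas widget's after() method, so the window keeps handling events (focus, resize, redraw) between samples. This is only an illustrative sketch assuming the TkAgg backend (get_tk_widget() is TkAgg-specific); it adapts the single-line example from the earlier message rather than reproducing anything from the examples/animation directory.

import numpy as np
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False)
xdata, ydata = [0.0], [0.0]
line, = ax.plot(xdata, ydata, 'o')
dt = 0.1

def update():
    # Append one new sample, redraw, and re-arm the timer; control goes
    # back to the Tk mainloop between calls, so the GUI stays responsive.
    x = xdata[-1] + dt
    xdata.append(x)
    ydata.append(np.sin(2 * np.pi * x))
    line.set_data(xdata, ydata)
    ax.set_xlim(x - 1, x)
    ax.set_ylim(-1.1, 1.1)
    fig.canvas.draw()
    fig.canvas.get_tk_widget().after(100, update)   # next update in 100 ms

fig.canvas.get_tk_widget().after(100, update)
plt.show()   # enters the Tk mainloop; updates are driven by after()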
On Thursday 28 August 2008 09:17:26 John Hunter wrote: > On Thu, Aug 28, 2008 at 6:19 AM, Sandro Tosi <mat...@gm...> wrote: > > Enthought suite is used a lot in scientific area, so mpl and enth > > almost share their users, so have it enabled would be a plus, but we > > are mainly interested in generate "no harm", so we'd like to ask you > > to confirm that enabling that support won't brake anything (but indeed > > provide a valuable asset for users). > > Well, one thing to make sure of is that you do not install the version > of traits that ships with matplotlib, since it will break other > enthought packages. My understanding is that one either uses the > default rc config or the traits enabled one. Since the latter is > experimental and we are not sure if we will eventually adopt it, I > would not recommend enabling it in the debian distro. Is this your > view Darren? Sorry, I haven't been able to keep up with the list recently. Yes, this is my view too, I see no reason to include it for distribution.
Christopher Barker wrote: > Michael Droettboom wrote: > >> It's funny you should mention this. >> >> One of the things I realized after SciPy this year is that there is a >> lot of interfacing of Numpy with C++ (as opposed to C) that could really >> benefit from a standard C++ wrapper around Numpy. >> > > Absolutely. Though now that you mention it, isn't there one in > boost::python? or at least one that worked with Numeric. > > Yes, but AFAICT it forces the use of boost::python, at least to convert the object. It would be nicer (in some cases) to have something that could be wrapper-system-agnostic. I spent a little time working on this today, and I was actually pleasantly surprised by how far I got. I have something that exposes Numpy arrays as boost::multi_arrays on the C++ side, and also allows for creating new Numpy-backed arrays from C++. The memory is still managed with Python reference counting, and the references are automatically freed from the constructor. It requires boost and Numpy, of course, but not boost::python. A standalone C++ class for this would perhaps have been nicer, but that's not an afternoon's 200-line hack like this is. I wouldn't really consider this a "wrapper" around Numpy, since it doesn't actually use Numpy for any of the indexing or computation. It simply converts the Numpy description (pointer + strides, etc.) to a boost::multi_array, without copying the underlying data. > >> projects all had home-grown, semi-complete solutions. >> A standard wrapper would be good for all of the reasons you mention >> (basically resource management), but also to have a cleaner syntax for >> accessing the elements that would abstract away striding, ordering etc. >> > > I think I'm thinking of something slightly different. What I'd like is > set of array classes that can interface well with numpy, and also be > quite usable all by themselves in C++ code that has nothing to do with > python. > This should fit that bill. Just templatize all your algorithms to accept anything with the "boost::multi_array" interface. When building extensions for Numpy, pass in a wrapped numpy array -- when not, use a pure boost::multi_array. Of course, the pure boost::multi_array doesn't have reference-counted memory management. Use of boost::shared_ptr may help, but it's not "built-in" -- views of boost::multi_arrays don't automatically keep their parent data alive. It's the usual C++ "you're on your own" approach. > I don't really envision a C++ ndarray -- I generally think of C++ as > static enough that I'm happy to define that I need, for example, a NX2 > array of doubles. I don't see using a generic array like ndarray that > could hold any type, shape, etc. But maybe others do -- it would be > pretty cool. > My approach (being based on boost::multi_array) has a fixed type and number of dimensions at compile time. If you try to pass in a Numpy array that can't be converted to that, an exception is thrown. Personally, I feel that's good enough for most domain-specific algorithms. Theoretically, one could write algorithms that are templatized on the data type and then dispatch at run time to different instantiations of that template -- but my code doesn't do anything like that yet. > I guess to some extent the question is whether you are writing > extensions for python, or a C++ that you want to use from C++, and also > wrap for python. > I'm just doing this out of curiosity rather than to meet a need. We don't have much C++ code at the Institute. 
I could see this being very handy in matplotlib, however, but the dependency on Boost is probably a showstopper... :( > But maybe both need can be met by the same C++ classes (templates?) > I think so. See above. > >> It may actually be possible to build it around boost::multiarray >> > > Do you know off hand if multiarray can be constructed from an existing > pointer to a data block? > Yep. multi_array_ref seems custom-built for that purpose. > >> Can't imagine I'll find time for it anytime soon, >> I went ahead and created a Google Code project for this here: http://code.google.com/p/numpy-boost/ Just because it's up there doesn't mean it's anything more than an experimental hack. It needs documentation and more complete unit tests... etc... Mike
Grégory Lielens wrote: > Thanks a lot for reviewing my patch! > I have corrected most of the problems (I think ;-) ) > I indeed introduced memory leak, I think it is fixed now, I have also > reorganized the code to avoid duplication of the cleanup code. I used an > helper function instead of the goto, because this cleanup is needed for > normal exit too, so the helper function can come in handy for that. > std::vector would have been nice, but I tried to respect the original > coding style, maybe this could be changed but I guess the original > author should have something to say about it.... > Only thing I did not change is the allocation of acols/arows even when > they are not used. It is not difficult to do, but the code will be a > little more complex and I think that the extra memory used is not > significant: The memory for those mapping structure is doubled (1 float > and 1 int vector of size N, instead of a single int vector of size N), > but those mapping structures are an order of magnitude smaller than the > image buffer of size N*N of 4 char anyway... > > If one agree that this is to be saved at the (slight) expense of code > conciseness/simplicity, I can add this optimisation... > > Thanks for pointing the error at the left/bottom pixel lines, it is > corrected now :-) > > And I set interpolation to NEAREST by default, so that the behavior of > NonUniformImage remains the same as before if the user do not specify > otherwise (I forgot to do it in the first patch)... > > > I include the new patch, Gregory, I am sorry to be late in joining in, but I have a basic question about what you are trying to do. It arises from the fact that there are two styles of image-like plots: 1) In the original Image and NonuniformImage, the colors and locations of "pixel" (or patch or subregion or whatever) centers are specified. 2) In pcolor, quadmesh, and pcolorimage, the patch *boundaries* are specified. pcolorimage is my modification of the original NonuniformImage code to work with boundaries. I left the original in place, since presumably it meets some people's needs, even though it does not meet mine. When the grid is uniform, it doesn't really make any difference; but when the grid is nonuniform, then I don't think style #1 makes much sense for pcolor-style plotting, in which there is no interpolation; there is no unique way to specify the boundaries, so it is up to the plotting routine, and that is bad. For an image, it is clear how the interpolation should work: the color values are given at the centers of the "pixels" (meaning on the original image grid, not on the screen or other output device grid), which are uniformly spaced, and interpolated between. Exactly what is your patch supposed to do for the nonuniform case? Would you describe and demonstrate it, please, for a simple case with a 2x2 array of colors and 3x3 arrays for boundary x and y? Thanks. Eric > > Best regards, > > Greg. > > On Fri, 2008年08月15日 at 15:45 -0400, Michael Droettboom wrote: >> Thanks for all the work you put into this patch. >> >> As you say, it would be nice to have a generic framework for this so >> that new types of interpolation could be easily added and to be able to >> support arbitrary (non-axis-aligned) quadmeshes as well. But that's >> even more work -- if we keep waiting for everything we want, we'll never >> get it... ;) I agree that Agg probably won't be much help with that. 
>> >> There are a couple of comments with the patch as it stands --> >> >> There seems to be a gap extrapolating over the left and bottom edge (see >> attached screenshot from pcolor_nonuniform.py). >> >> Memory management looks problematic, some of which I think you inherited >> from earlier code. For example, arows and acols are never freed. >> Personally, I think these temporary buffers should be std::vector's so >> they'll be free'd automatically when scope is left. It might also be >> nice to move all of the Py_XDECREF's that happen when exceptions are >> thrown to either a master try/catch block or an "exit" goto label at the >> bottom. The amount of duplication and care required to ensure >> everything will be freed by all of the different exit paths is a little >> cumbersome. >> >> Also, acols and arows are only used in BILINEAR interpolation, but they >> are allocated always. >> >> Once these issues are addressed, it would be great to have someone who >> *uses* the nonuniform pcolor functionality (Eric Firing?) have a look at >> this patch for any regressions etc.. Assuming none, I'll be happy to >> commit it (but I won't be around for a week or so). >> >> Cheers, >> Mike >> >> Grégory Lielens wrote: >>> Hi all, >>> >>> here is a patch which implement bilinear interpolation on irregular grid >>> ( i.e. it allows NonUniformImage >>> to accept both 'nearest' and 'bilinear' interpoaltion, instead of only >>> 'nearest'.) >>> >>> It is not perfect, given the current architecture of the image module I >>> think there is not simple way >>> to specify other interpolations (except for 'nearest', all >>> interpolations are implemented at the AGG level, not in >>> matplotlib itself). >>> For the same reason, it is not possible to interpolate before colormap >>> lookup instead of after (and this >>> is usually where the biggest effect is, when dealing with coarse grid). >>> However, I think it is already quite useful and the best one ca do >>> without a -very- extensive rewrite >>> of the matrix map modules.... >>> >>> BTW, I think it also corrects a small bug in the 'nearest' >>> interpolation: the last intervals was ignored >>> in the CVS version, now it is taken into account. >>> >>> BOTH nearest and bilinear interpolation do the same for values oustside >>> the data grid: they just use boundary values (constant extrapolation). >>> I avoided linear extrapolation for 'bilinear', as extrapolation is >>> usually dangerous or meaningless (we can go outside the colormap very >>> fast)... >>> >>> I have included a small example showing how both interpolation works.... >>> >>> Any remarks, could this be added before the next release? ;-) >>> >>> >>> Greg. >>> >>> >>> >>> >>> >>> On Mon, 2008年08月11日 at 16:50 +0200, Grégory Lielens wrote: >>> >>>> On Fri, 2008年08月08日 at 16:05 +0200, Grégory Lielens wrote: >>>> >>>>> Hello everybody, >>>>> >>>>> I have sent this message to the user group, but thinking of it, it may be more >>>>> relevant to the development mailing list...so here it is again. >>>>> >>>>> >>>>> >>>>> We are looking for the best way to plot a waterfall diagram in >>>>> Matplotlib. The 2 functions which could be used >>>>> to do that are (as far as I have found) imshow and pcolormesh. 
Here is a >>>>> small script that use both to compare the output: >>>>> >>>>> ----------------- >>>>> >>>>> from pylab import * >>>>> >>>>> >>>>> delta = 0.2 >>>>> x = arange(-3.0, 3.0, delta) >>>>> y = arange(-2.0, 2.0, delta) >>>>> X, Y = meshgrid(x, y) >>>>> Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) >>>>> Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1) >>>>> # difference of Gaussians >>>>> Z = 10.0 * (Z2 - Z1) >>>>> figure(1) >>>>> im = imshow(Z,extent=(-3,3,-2,2)) >>>>> CS = contour(X, -Y, Z, 6, >>>>> colors='k', # negative contours will be dashed by default >>>>> ) >>>>> clabel(CS, fontsize=9, inline=1) >>>>> title('Using imshow') >>>>> figure(2) >>>>> im = pcolormesh(X,-Y,Z) >>>>> CS = contour(X, -Y, Z, 6, >>>>> colors='k', # negative contours will be dashed by default >>>>> ) >>>>> clabel(CS, fontsize=9, inline=1) >>>>> title('Using pcolormesh') >>>>> show() >>>>> >>>>> --------------------- >>>>> >>>>> >>>>> The problem is that we need some of the flexibility of pcolormesh (which >>>>> is able to map the matrix of value on any deformed mesh), while >>>>> we would like to use the interpolations available in imshow (which >>>>> explain why the imshow version is much "smoother" than the pcolormesh >>>>> one). >>>>> >>>>> In fact, what would be needed is not the full flexibility of pcolormesh >>>>> (which can map the grid to any kind of shape), we "only" have to deal >>>>> with rectangular grids where x- and y- graduations are irregularly spaced. >>>>> >>>>> Is there a drawing function in Matplotlib which would be able to work >>>>> with such a rectangular non-uniform grid? >>>>> And if not (and a quick look at the example and the code make me think >>>>> that indeed the capability is currently not present), >>>>> what about an extension of imshow which would work as this: >>>>> >>>>> im = imshow(Z,x_gridpos=x, y_gridpos=y) #specify the >>>>> position of the grid's nodes, instead of giving the extend and assuming >>>>> uniform spacing. >>>>> >>>>> Longer term, would a pcolormesh accepting interpolation be possible? The >>>>> current behavior, averaging the color of the grids node to get a uniform >>>>> cell color, >>>>> is quite rough except for a large number of cells...And even then, it >>>>> soon shows when you zoom in... >>>>> >>>>> The best would be to allow the same interpolations as in imshow (or a >>>>> subset of it), and also allows to use interpolation before colormap >>>>> lookup (or after), >>>>> like in Matlab. Indeed, Matlab allows to finely tune interpolation by >>>>> specifying Gouraud (interpolation after color >>>>> lookup)/Phong(interpolation before color lookup, i.e. for each pixel). >>>>> Phong is usually much better but also more CPU intensive. Phong is >>>>> especially when using discrete colormap, producing banded colors >>>>> equivalent to countour lines, while Gouraud does not work in those >>>>> cases. >>>>> >>>>> Of course, the performance will be impacted by some of those >>>>> interpolation options, which would degrade performance in animations for >>>>> example.... but I think that having the different options available >>>>> would be very useful, it allows to have the highest map quality, or have >>>>> a "quick and dirty" map depending on situation (grid spacing, type of >>>>> map, animation or not, ...). >>>>> >>>>> Best regards, >>>>> >>>>> Greg. >>>>> >>>> I have found a method which implement the proposed extension to imshow: >>>> NonUniformImage... >>>> >>>> However, this image instance support only nearest neighbor >>>> interpolation. 
>> Trying to set the interpolation (using the >>>> set_interpolation method) >>>> to something akin imshow throw a "NotImplementedError: Only nearest >>>> neighbor supported" exception.... >>>> >>>> So basically I am still stuck, it seems that currently there is no way >>>> in matplotlib to plot interpolated >>>> colormap on irregular rectangular grid, and even less on arbitrarily >>>> mapped grid... >>>> >>>> Is there any plans to add support for more interpolation in >>>> NonUniformImage in the future? Or maybe there is another >>>> drawing function that I did not find yet, with this ability? >>>> >>>> Best regards, >>>> >>>> Greg.
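Putting the thread together, using the patched NonUniformImage would look roughly like the sketch below: hand it the irregular x/y positions and the data array, and pick the interpolation. The sample data are invented, the 'bilinear' value depends on the patch under discussion, and adding the image to the axes is written two ways because the helper method differs between matplotlib versions; treat this as an illustration rather than a tested example.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.image import NonUniformImage

# Irregularly spaced sample positions (think RPM and frequency)
x = np.array([0.0, 0.5, 0.7, 1.5, 3.0])
y = np.array([0.0, 1.0, 1.2, 2.0])
Z = np.sin(x[None, :]) * np.cos(y[:, None])     # shape (len(y), len(x))

fig, ax = plt.subplots()
im = NonUniformImage(ax, extent=(x.min(), x.max(), y.min(), y.max()),
                     interpolation='bilinear')  # only 'nearest' without the patch
im.set_data(x, y, Z)
ax.add_image(im)          # on older versions: ax.images.append(im)
ax.set_xlim(x.min(), x.max())
ax.set_ylim(y.min(), y.max())
plt.show()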
Michael Droettboom wrote: > It's funny you should mention this. > > One of the things I realized after SciPy this year is that there is a > lot of interfacing of Numpy with C++ (as opposed to C) that could really > benefit from a standard C++ wrapper around Numpy. Absolutely. Though now that you mention it, isn't there one in boost::python? or at least one that worked with Numeric. > projects all had home-grown, semi-complete solutions. > A standard wrapper would be good for all of the reasons you mention > (basically resource management), but also to have a cleaner syntax for > accessing the elements that would abstract away striding, ordering etc. I think I'm thinking of something slightly different. What I'd like is set of array classes that can interface well with numpy, and also be quite usable all by themselves in C++ code that has nothing to do with python. I don't really envision a C++ ndarray -- I generally think of C++ as static enough that I'm happy to define that I need, for example, a NX2 array of doubles. I don't see using a generic array like ndarray that could hold any type, shape, etc. But maybe others do -- it would be pretty cool. I guess to some extent the question is whether you are writing extensions for python, or a C++ that you want to use from C++, and also wrap for python. But maybe both need can be met by the same C++ classes (templates?) > It may actually be possible to build it around boost::multiarray Do you know off hand if multiarray can be constructed from an existing pointer to a data block? > Can't imagine I'll find time for it anytime soon, too bad, but if you do -- I'd love to hear about it! -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no...
On Fri, Aug 29, 2008 at 12:01 PM, Ray Salem <ray...@gm...> wrote:
> This is my first post, I really enjoy using python/numpy/scipy/matplotlib
> - great replacement for matlab. I am having a problem, with plot 1 point at
> a time. i am attempt to emulate an issue in the lab, where we get samples
> every second and plot them. The figure becomes non-responsive and eventually
> stops plotting, does anyone have an idea.

Yep, this is a bad idea. The line object in matplotlib is a fairly heavy object which has a lot of stuff under the hood. Each plot command creates a new line object. What you want to do is create just one (or a few) line objects, each of which handles many points.

Below is the first pass naive implementation to show you how to do this -- you will probably want to improve on this by using a better data structure to store the growing points, eg a buffer that drops old points off the end, or a dynamically resizing numpy array which grows intelligently when it gets full. The main point is that we create a single line object and add the new data to it....

import time
import matplotlib
matplotlib.use('TkAgg')
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.collections as collections

plt.ion()  # interactive plotting

fig = plt.figure()
ax = fig.add_subplot(111, autoscale_on=False)
xdata, ydata = [0.], [0.]
line, = ax.plot(xdata, ydata, 'o')
dt = 0.1

for i in range(100):
    x = xdata[-1] + dt
    y = np.sin(2*np.pi*x)
    xdata.append(x)
    ydata.append(y)
    line.set_data(xdata, ydata)
    ax.set_xlim(x-1, x)
    ax.set_ylim(-1.1, 1.1)
    fig.canvas.draw()
    time.sleep(0.1)

plt.show()
Hello Everyone This is my first post, I really enjoy using python/numpy/scipy/matplotlib - great replacement for matlab. I am having a problem, with plot 1 point at a time. i am attempt to emulate an issue in the lab, where we get samples every second and plot them. The figure becomes non-responsive and eventually stops plotting, does anyone have an idea. Cheers Ray *Code* *------------------* from numpy import * from scipy import * from pylab import * import time; ion() figure(1) subplot(412);plot([925],[0.296000],'o') subplot(413);plot([925],[0.247000],'o') subplot(414);plot([925],[0.145000],'o') subplot(411);plot([926],[0.516000],'o') time.sleep(1) subplot(412);plot([926],[0.770000],'o') subplot(413);plot([926],[0.820000],'o') subplot(414);plot([926],[0.405000],'o') subplot(411);plot([927],[0.704000],'o') time.sleep(1) subplot(412);plot([927],[0.721000],'o') subplot(413);plot([927],[0.996000],'o') subplot(414);plot([927],[0.219000],'o') subplot(411);plot([928],[0.414000],'o') time.sleep(1) subplot(412);plot([928],[0.281000],'o') subplot(413);plot([928],[0.066000],'o') subplot(414);plot([928],[0.398000],'o') subplot(411);plot([929],[0.718000],'o') subplot(412);plot([929],[0.247000],'o') subplot(413);plot([929],[0.887000],'o') subplot(414);plot([929],[0.562000],'o') subplot(411);plot([930],[0.386000],'o') time.sleep(1) subplot(412);plot([930],[0.173000],'o') subplot(413);plot([930],[0.678000],'o') subplot(414);plot([930],[0.635000],'o') subplot(411);plot([931],[0.746000],'o') time.sleep(1) subplot(412);plot([931],[0.332000],'o') subplot(413);plot([931],[0.468000],'o') subplot(414);plot([931],[0.373000],'o') subplot(411);plot([932],[0.568000],'o') time.sleep(1) subplot(412);plot([932],[0.929000],'o') subplot(413);plot([932],[0.096000],'o') subplot(414);plot([932],[0.132000],'o') subplot(411);plot([933],[0.840000],'o') time.sleep(1) subplot(412);plot([933],[0.946000],'o') subplot(413);plot([933],[0.607000],'o') subplot(414);plot([933],[0.094000],'o') subplot(411);plot([934],[0.690000],'o') time.sleep(1) subplot(412);plot([934],[0.330000],'o') subplot(413);plot([934],[0.955000],'o') subplot(414);plot([934],[0.101000],'o') subplot(411);plot([935],[0.931000],'o') time.sleep(1) subplot(412);plot([935],[0.515000],'o') subplot(413);plot([935],[0.444000],'o') subplot(414);plot([935],[0.425000],'o') subplot(411);plot([936],[0.960000],'o') time.sleep(1) subplot(412);plot([936],[0.676000],'o') subplot(413);plot([936],[0.295000],'o') subplot(414);plot([936],[0.861000],'o') subplot(411);plot([937],[0.564000],'o') time.sleep(1) subplot(412);plot([937],[0.948000],'o') subplot(413);plot([937],[0.159000],'o') subplot(414);plot([937],[0.512000],'o') subplot(411);plot([938],[0.884000],'o') time.sleep(1) subplot(412);plot([938],[0.542000],'o') subplot(413);plot([938],[0.409000],'o') subplot(414);plot([938],[0.079000],'o') subplot(411);plot([939],[0.194000],'o') time.sleep(1) subplot(412);plot([939],[0.910000],'o') subplot(413);plot([939],[0.901000],'o') subplot(414);plot([939],[0.153000],'o') subplot(411);plot([940],[0.183000],'o') time.sleep(1) subplot(412);plot([940],[0.578000],'o') subplot(413);plot([940],[0.241000],'o') subplot(414);plot([940],[0.484000],'o') subplot(411);plot([941],[0.261000],'o') time.sleep(1) subplot(412);plot([941],[0.436000],'o') subplot(413);plot([941],[0.120000],'o') subplot(414);plot([941],[0.798000],'o') subplot(411);plot([942],[0.339000],'o') time.sleep(1) subplot(412);plot([942],[0.927000],'o') subplot(413);plot([942],[0.964000],'o') subplot(414);plot([942],[0.526000],'o') 
subplot(411);plot([943],[0.289000],'o') time.sleep(1) subplot(412);plot([943],[0.167000],'o') subplot(413);plot([943],[0.695000],'o') subplot(414);plot([943],[0.788000],'o') subplot(411);plot([944],[0.863000],'o') time.sleep(1) subplot(412);plot([944],[0.054000],'o') subplot(413);plot([944],[0.568000],'o') subplot(414);plot([944],[0.054000],'o') subplot(411);plot([945],[0.858000],'o') time.sleep(1) subplot(412);plot([945],[0.461000],'o') subplot(413);plot([945],[0.916000],'o') subplot(414);plot([945],[0.549000],'o') subplot(411);plot([946],[0.575000],'o') time.sleep(1) subplot(412);plot([946],[0.056000],'o') subplot(413);plot([946],[0.331000],'o') subplot(414);plot([946],[0.864000],'o') subplot(411);plot([947],[0.650000],'o') time.sleep(1) subplot(412);plot([947],[0.205000],'o') subplot(413);plot([947],[0.036000],'o') subplot(414);plot([947],[0.951000],'o') subplot(411);plot([948],[0.289000],'o') time.sleep(1) subplot(412);plot([948],[0.446000],'o') subplot(413);plot([948],[0.132000],'o') subplot(414);plot([948],[0.562000],'o') subplot(411);plot([949],[0.084000],'o') time.sleep(1) subplot(412);plot([949],[0.429000],'o') subplot(413);plot([949],[0.451000],'o') subplot(414);plot([949],[0.994000],'o') subplot(411);plot([950],[0.520000],'o') time.sleep(1) subplot(412);plot([950],[0.280000],'o') subplot(413);plot([950],[0.575000],'o') subplot(414);plot([950],[0.152000],'o') subplot(411);plot([951],[0.253000],'o') time.sleep(1) subplot(412);plot([951],[0.845000],'o') subplot(413);plot([951],[0.973000],'o') subplot(414);plot([951],[0.062000],'o') subplot(411);plot([952],[0.669000],'o') time.sleep(1) subplot(412);plot([952],[0.583000],'o') subplot(413);plot([952],[0.037000],'o') subplot(414);plot([952],[0.179000],'o') subplot(411);plot([953],[0.886000],'o') time.sleep(1) subplot(412);plot([953],[0.577000],'o') subplot(413);plot([953],[0.670000],'o') subplot(414);plot([953],[0.635000],'o') subplot(411);plot([954],[0.609000],'o') time.sleep(1) subplot(412);plot([954],[0.058000],'o') subplot(413);plot([954],[0.978000],'o') subplot(414);plot([954],[0.341000],'o') subplot(411);plot([955],[0.643000],'o') time.sleep(1) subplot(412);plot([955],[0.130000],'o') subplot(413);plot([955],[0.563000],'o') subplot(414);plot([955],[0.127000],'o') subplot(411);plot([956],[0.068000],'o') time.sleep(1) subplot(412);plot([956],[0.405000],'o') subplot(413);plot([956],[0.574000],'o') subplot(414);plot([956],[0.135000],'o') subplot(411);plot([957],[0.967000],'o') time.sleep(1) subplot(412);plot([957],[0.382000],'o') subplot(413);plot([957],[0.191000],'o') subplot(414);plot([957],[0.331000],'o') subplot(411);plot([958],[0.409000],'o') time.sleep(1) subplot(412);plot([958],[0.053000],'o') subplot(413);plot([958],[0.597000],'o') subplot(414);plot([958],[0.516000],'o') subplot(411);plot([959],[0.102000],'o') time.sleep(1) subplot(412);plot([959],[0.972000],'o') subplot(413);plot([959],[0.199000],'o') subplot(414);plot([959],[0.678000],'o') subplot(411);plot([960],[0.200000],'o') time.sleep(1) subplot(412);plot([960],[0.070000],'o') subplot(413);plot([960],[0.935000],'o') subplot(414);plot([960],[0.803000],'o') subplot(411);plot([961],[0.441000],'o') time.sleep(1) subplot(412);plot([961],[0.881000],'o') subplot(413);plot([961],[0.094000],'o') subplot(414);plot([961],[0.640000],'o') subplot(411);plot([962],[0.955000],'o') time.sleep(1) subplot(412);plot([962],[0.997000],'o') subplot(413);plot([962],[0.664000],'o') subplot(414);plot([962],[0.536000],'o') subplot(411);plot([963],[0.669000],'o') time.sleep(1) 
subplot(412);plot([963],[0.389000],'o') subplot(413);plot([963],[0.633000],'o') subplot(414);plot([963],[0.488000],'o') subplot(411);plot([964],[0.825000],'o') time.sleep(1) subplot(412);plot([964],[0.502000],'o') subplot(413);plot([964],[0.689000],'o') subplot(414);plot([964],[0.804000],'o') subplot(411);plot([965],[0.068000],'o') time.sleep(1) subplot(412);plot([965],[0.542000],'o') subplot(413);plot([965],[0.860000],'o') subplot(414);plot([965],[0.672000],'o') subplot(411);plot([966],[0.070000],'o') time.sleep(1) subplot(412);plot([966],[0.247000],'o') subplot(413);plot([966],[0.613000],'o') subplot(414);plot([966],[0.288000],'o') subplot(411);plot([967],[0.315000],'o') time.sleep(1) subplot(412);plot([967],[0.228000],'o') subplot(413);plot([967],[0.452000],'o') subplot(414);plot([967],[0.283000],'o') subplot(411);plot([968],[0.241000],'o') time.sleep(1) subplot(412);plot([968],[0.267000],'o') subplot(413);plot([968],[0.269000],'o') subplot(414);plot([968],[0.558000],'o') subplot(411);plot([969],[0.282000],'o') time.sleep(1) subplot(412);plot([969],[0.748000],'o') subplot(413);plot([969],[0.323000],'o') subplot(414);plot([969],[0.190000],'o') subplot(411);plot([970],[0.236000],'o') time.sleep(1) subplot(412);plot([970],[0.193000],'o') subplot(413);plot([970],[0.244000],'o') subplot(414);plot([970],[0.464000],'o') subplot(411);plot([971],[0.728000],'o') time.sleep(1) subplot(412);plot([971],[0.174000],'o') subplot(413);plot([971],[0.791000],'o') subplot(414);plot([971],[0.988000],'o') subplot(411);plot([972],[0.984000],'o') time.sleep(1) subplot(412);plot([972],[0.890000],'o') subplot(413);plot([972],[0.140000],'o') subplot(414);plot([972],[0.463000],'o') subplot(411);plot([973],[0.284000],'o') time.sleep(1) subplot(412);plot([973],[0.110000],'o') subplot(413);plot([973],[0.139000],'o') subplot(414);plot([973],[0.400000],'o') subplot(411);plot([974],[0.919000],'o') time.sleep(1) subplot(412);plot([974],[0.038000],'o') subplot(413);plot([974],[0.519000],'o') subplot(414);plot([974],[0.206000],'o') subplot(411);plot([975],[0.645000],'o') time.sleep(1) subplot(412);plot([975],[0.854000],'o') subplot(413);plot([975],[0.276000],'o') subplot(414);plot([975],[0.975000],'o') subplot(411);plot([976],[0.624000],'o') time.sleep(1) subplot(412);plot([976],[0.031000],'o') subplot(413);plot([976],[0.359000],'o') subplot(414);plot([976],[0.908000],'o') subplot(411);plot([977],[0.675000],'o') time.sleep(1) subplot(412);plot([977],[0.810000],'o') subplot(413);plot([977],[0.349000],'o') subplot(414);plot([977],[0.355000],'o') subplot(411);plot([978],[0.430000],'o') time.sleep(1) subplot(412);plot([978],[0.445000],'o') subplot(413);plot([978],[0.423000],'o') subplot(414);plot([978],[0.813000],'o') subplot(411);plot([979],[0.787000],'o') time.sleep(1) subplot(412);plot([979],[0.467000],'o') subplot(413);plot([979],[0.276000],'o') subplot(414);plot([979],[0.402000],'o') subplot(411);plot([980],[0.089000],'o') time.sleep(1) subplot(412);plot([980],[0.664000],'o') subplot(413);plot([980],[0.671000],'o') subplot(414);plot([980],[0.913000],'o') subplot(411);plot([981],[0.247000],'o') time.sleep(1) subplot(412);plot([981],[0.382000],'o') subplot(413);plot([981],[0.939000],'o') subplot(414);plot([981],[0.277000],'o') subplot(411);plot([982],[0.259000],'o') time.sleep(1) subplot(412);plot([982],[0.777000],'o') subplot(413);plot([982],[0.524000],'o') subplot(414);plot([982],[0.068000],'o') subplot(411);plot([983],[0.580000],'o') time.sleep(1) subplot(412);plot([983],[0.346000],'o') 
subplot(413);plot([983],[0.607000],'o') subplot(414);plot([983],[0.833000],'o') subplot(411);plot([984],[0.300000],'o') time.sleep(1) subplot(412);plot([984],[0.626000],'o') subplot(413);plot([984],[0.837000],'o') subplot(414);plot([984],[0.685000],'o') subplot(411);plot([985],[0.386000],'o') time.sleep(1) subplot(412);plot([985],[0.957000],'o') subplot(413);plot([985],[0.701000],'o') subplot(414);plot([985],[0.657000],'o') subplot(411);plot([986],[0.631000],'o') time.sleep(1) subplot(412);plot([986],[0.121000],'o') subplot(413);plot([986],[0.912000],'o') subplot(414);plot([986],[0.567000],'o') subplot(411);plot([987],[0.088000],'o') time.sleep(1) subplot(412);plot([987],[0.279000],'o') subplot(413);plot([987],[0.024000],'o') subplot(414);plot([987],[0.114000],'o') subplot(411);plot([988],[0.035000],'o') time.sleep(1) subplot(412);plot([988],[0.025000],'o') subplot(413);plot([988],[0.267000],'o') subplot(414);plot([988],[0.251000],'o') subplot(411);plot([989],[0.861000],'o') time.sleep(1) subplot(412);plot([989],[0.909000],'o') subplot(413);plot([989],[0.564000],'o') subplot(414);plot([989],[0.582000],'o') subplot(411);plot([990],[0.211000],'o') time.sleep(1) subplot(412);plot([990],[0.921000],'o') subplot(413);plot([990],[0.915000],'o') subplot(414);plot([990],[0.044000],'o') subplot(411);plot([991],[0.808000],'o') time.sleep(1) subplot(412);plot([991],[0.455000],'o') subplot(413);plot([991],[0.846000],'o') subplot(414);plot([991],[0.385000],'o') subplot(411);plot([992],[0.647000],'o') time.sleep(1) subplot(412);plot([992],[0.723000],'o') subplot(413);plot([992],[0.112000],'o') subplot(414);plot([992],[0.017000],'o') subplot(411);plot([993],[0.689000],'o') time.sleep(1) subplot(412);plot([993],[0.635000],'o') subplot(413);plot([993],[0.225000],'o') subplot(414);plot([993],[0.599000],'o') subplot(411);plot([994],[0.056000],'o') time.sleep(1) subplot(412);plot([994],[0.534000],'o') subplot(413);plot([994],[0.271000],'o') subplot(414);plot([994],[0.774000],'o') subplot(411);plot([995],[0.837000],'o') time.sleep(1) subplot(412);plot([995],[0.408000],'o') subplot(413);plot([995],[0.363000],'o') subplot(414);plot([995],[0.796000],'o') subplot(411);plot([996],[0.839000],'o') time.sleep(1) subplot(412);plot([996],[0.581000],'o') subplot(413);plot([996],[0.063000],'o') subplot(414);plot([996],[0.404000],'o') subplot(411);plot([997],[0.230000],'o') time.sleep(1) subplot(412);plot([997],[0.731000],'o') subplot(413);plot([997],[0.478000],'o') subplot(414);plot([997],[0.274000],'o') subplot(411);plot([998],[0.319000],'o') time.sleep(1) subplot(412);plot([998],[0.870000],'o') subplot(413);plot([998],[0.220000],'o') subplot(414);plot([998],[0.364000],'o') subplot(411);plot([999],[0.592000],'o') time.sleep(1) subplot(412);plot([999],[0.494000],'o') subplot(413);plot([999],[0.320000],'o') subplot(414);plot([999],[0.143000],'o')
Thanks, I quickly went through the code of the pngmath.py, and it seems that the depth(descent) of the dvi file is reported by "dvipng" (but the preview package must be used in the tex file for this to work correctly). Therefore, with this method, we need to run dvipng even if we use ps of pdf backend. Although this seems fine to me, I'll see if I can extract the depth of the text without running the dvipng. Regards, -JJ On Fri, Aug 29, 2008 at 7:59 AM, Michael Droettboom <md...@st...> wrote: > Sphinx contains one way to do this in its new "pngmath" extension. It uses > the LaTeX package "preview" which does all of this magic internally. And I > believe it's a little more general. If I recall, the approach you're taking > won't work with some LaTeX constructs such as: > > \begin{align} > x & = 2 > y & = 2 > \end{align} > > Plus, Sphinx is BSD-licensed, so it should be fine to copy-and-paste > whatever code is necessary. > > Of course, latex-preview is required to be installed, but I think it's a > pretty common package. > > See here: > > http://svn.python.org/projects/doctools/trunk/sphinx/ext/pngmath.py > > Cheers, > Mike > > Jae-Joon Lee wrote: >> >> On Thu, Aug 28, 2008 at 4:18 PM, John Hunter <jd...@gm...> wrote: >> >>> >>> On Thu, Aug 28, 2008 at 2:57 PM, Jae-Joon Lee <lee...@gm...> >>> wrote: >>> >>> >>>> >>>> First of all, I borrowed this idea from the PyX which is in GPL. >>>> Although there is little of copying, other than the basic idea, I'm >>>> not 100% sure if this could be BSD-compatible. >>>> >>> >>> I think it is fine to borrow the idea; what we need to do is a clean >>> room implementation with no copying. You can best answer that, so if >>> you tell us your patch is cleanly implemented, we can accept it. >>> >>> JDH >>> >>> >> >> Thanks for the response. >> >> Well, the only part I borrowed from PyX is TeX related commands they >> use (there is not much of implementation as far as TeX-related code is >> concerned). From their code, I learned the meaning and usage of the >> following TeX commands >> >> \newbox >> \setbox >> \immediate\write16 >> >> And I used the same TeX commands in my code. >> But I personally think this is not a (code) copy. >> >> Other than this, the code is clean. >> Regards, >> >> -JJ >> >> ------------------------------------------------------------------------- >> This SF.Net email is sponsored by the Moblin Your Move Developer's >> challenge >> Build the coolest Linux based applications with Moblin SDK & win great >> prizes >> Grand prize is a trip for two to an Open Source event anywhere in the >> world >> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >> _______________________________________________ >> Matplotlib-devel mailing list >> Mat...@li... >> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel >> > >
Sphinx contains one way to do this in its new "pngmath" extension. It uses the LaTeX package "preview" which does all of this magic internally. And I believe it's a little more general. If I recall, the approach you're taking won't work with some LaTeX constructs such as: \begin{align} x & = 2 y & = 2 \end{align} Plus, Sphinx is BSD-licensed, so it should be fine to copy-and-paste whatever code is necessary. Of course, latex-preview is required to be installed, but I think it's a pretty common package. See here: http://svn.python.org/projects/doctools/trunk/sphinx/ext/pngmath.py Cheers, Mike Jae-Joon Lee wrote: > On Thu, Aug 28, 2008 at 4:18 PM, John Hunter <jd...@gm...> wrote: > >> On Thu, Aug 28, 2008 at 2:57 PM, Jae-Joon Lee <lee...@gm...> wrote: >> >> >>> First of all, I borrowed this idea from the PyX which is in GPL. >>> Although there is little of copying, other than the basic idea, I'm >>> not 100% sure if this could be BSD-compatible. >>> >> I think it is fine to borrow the idea; what we need to do is a clean >> room implementation with no copying. You can best answer that, so if >> you tell us your patch is cleanly implemented, we can accept it. >> >> JDH >> >> > > Thanks for the response. > > Well, the only part I borrowed from PyX is TeX related commands they > use (there is not much of implementation as far as TeX-related code is > concerned). From their code, I learned the meaning and usage of the > following TeX commands > > \newbox > \setbox > \immediate\write16 > > And I used the same TeX commands in my code. > But I personally think this is not a (code) copy. > > Other than this, the code is clean. > Regards, > > -JJ > > ------------------------------------------------------------------------- > This SF.Net email is sponsored by the Moblin Your Move Developer's challenge > Build the coolest Linux based applications with Moblin SDK & win great prizes > Grand prize is a trip for two to an Open Source event anywhere in the world > http://moblin-contest.org/redirect.php?banner_id=100&url=/ > _______________________________________________ > Matplotlib-devel mailing list > Mat...@li... > https://lists.sourceforge.net/lists/listinfo/matplotlib-devel >
It's funny you should mention this. One of the things I realized after SciPy this year is that there is a lot of interfacing of Numpy with C++ (as opposed to C) that could really benefit from a standard C++ wrapper around Numpy. A bunch of different projects all had home-grown, semi-complete solutions. A standard wrapper would be good for all of the reasons you mention (basically resource management), but also to have a cleaner syntax for accessing the elements that would abstract away striding, ordering etc. It may actually be possible to build it around boost::multiarray (the closest thing the upcoming C++-0x standard has to a n-dimensional array). Can't imagine I'll find time for it anytime soon, but it would be fun to work on... Cheers, Mike Christopher Barker wrote: > Michael Droettboom wrote: > >> I think we want to use std::vector where possible for new code, and >> convert as we go through old code. Manual malloc/free is just too error >> prone. (We're forced to do some by interfacing with Python, but it >> should be kept to a minimum). >> > > My understanding is that this is exactly the issue -- with std::vector > and friends you can't both get the data pointer and create a vector with > a data pointer, which would be what you'd want to do to interface > efficiently with numpy arrays. > > I've been looking for a way to address this problem, but my C++ is > pretty week. maybe smart pointers and/or other boost classes could help. > > > I do have an idea, though. The goal is a set of classes for working with > data that are convenient to use with pure C++, but also play with numpy > arrays. > > I imagine a simple vector/array set of classes build on top of a simple > object: a reference counted data block. This data block object would > simply hold a pointer to a block of data, maybe know how large it is, > maybe know what type the data is, and hold a reference count. when that > count drops to zero it deletes itself. > > The vector/array classes built on top of it would use the data block to > store their data, and increment/decrement the reference count as need > be. This would allow them to share data, have one array that is a subset > of another, using the same data block, etc. > > It should be easy to build numpy arrays from a system like this. > > Ideally (but I'm not sure how), the data blocks reference counting > system could be integrated with Python's, so that when a numpy array was > constructed from one, its reference count would be managed by python as > well as your C++ code. > > But maybe that's all just too much work... > > > -Chris > > > >
Manuel Metz wrote: > Hi all, > I ran into a problem where I wanted to plot a step-plot with dashed > lines instead of solid lines which can be important for print media. > This isn't possible with the current matplotlib version, so I added > support for this. The patch is attached, but I didn't commit it yet > since I wanted to ask for feedback first. It basically adds support to do > > pylab.plot(x, y, 'steps--') > > to create a step plot with dashed lines. Hm, okay - I just noticed that the patch doesn't work with legend. This needs to be fixed ... mm
Hi all, I ran into a problem where I wanted to plot a step-plot with dashed lines instead of solid lines which can be important for print media. This isn't possible with the current matplotlib version, so I added support for this. The patch is attached, but I didn't commit it yet since I wanted to ask for feedback first. It basically adds support to do pylab.plot(x, y, 'steps--') to create a step plot with dashed lines. mm
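For reference, the same kind of picture can already be obtained by spelling out the two pieces as keyword arguments; the patch only adds the ability to combine them in a single format string. The data below are made up for illustration.

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(10)
y = np.random.rand(10)

# Explicit keywords: a stepped line drawn with dashes
plt.plot(x, y, linestyle='--', drawstyle='steps-pre')

# With the proposed patch, the same plot could be written as:
# plt.plot(x, y, 'steps--')

plt.show()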
On Thu, Aug 28, 2008 at 4:18 PM, John Hunter <jd...@gm...> wrote: > On Thu, Aug 28, 2008 at 2:57 PM, Jae-Joon Lee <lee...@gm...> wrote: > >> First of all, I borrowed this idea from the PyX which is in GPL. >> Although there is little of copying, other than the basic idea, I'm >> not 100% sure if this could be BSD-compatible. > > I think it is fine to borrow the idea; what we need to do is a clean > room implementation with no copying. You can best answer that, so if > you tell us your patch is cleanly implemented, we can accept it. > > JDH > Thanks for the response. Well, the only part I borrowed from PyX is TeX related commands they use (there is not much of implementation as far as TeX-related code is concerned). From their code, I learned the meaning and usage of the following TeX commands \newbox \setbox \immediate\write16 And I used the same TeX commands in my code. But I personally think this is not a (code) copy. Other than this, the code is clean. Regards, -JJ
On Thu, Aug 28, 2008 at 2:57 PM, Jae-Joon Lee <lee...@gm...> wrote: > First of all, I borrowed this idea from the PyX which is in GPL. > Although there is little of copying, other than the basic idea, I'm > not 100% sure if this could be BSD-compatible. I think it is fine to borrow the idea; what we need to do is a clean room implementation with no copying. You can best answer that, so if you tell us your patch is cleanly implemented, we can accept it. JDH
Hello, AFAIK, the current backends (I only tested agg, pdf, and ps) do not properly respect the text baseline when text is rendered using TeX. The get_text_width_height_descent() method in the Agg and PS backends simply returns 0 for the descent value. While the PDF backend uses the dviread module to figure out correct descent values, there are cases where this does not work well (e.g. $\frac{1}{2}\pi$). As an example, the attached figure shows the result for the Agg backend. In all cases, the texts are placed at (0,0) with baseline alignment. The leftmost one is when usetex=False, which has a correct baseline. The middle one is when usetex=True; it is bottom-aligned, which is not intended. The rightmost one is also when usetex=True, but after the patch I describe below. First of all, I borrowed this idea from PyX, which is under the GPL. Although there is little copying other than the basic idea, I'm not 100% sure if this could be BSD-compatible. Anyhow, the idea is that you can have LaTeX print out the width, height, and descent (=depth) of a given text by enclosing the text in a box. For example,

\newbox\MatplotlibBox%
\setbox\MatplotlibBox=\hbox{$\frac{1}{2}\pi$}%
\copy\MatplotlibBox
\immediate\write16{MatplotlibBox:\the\wd\MatplotlibBox,\the\ht\MatplotlibBox,\the\dp\MatplotlibBox}%

I define a new box (called MatplotlibBox) which encloses $\frac{1}{2}\pi$, and then print out the width, height, and depth of the box. Attached is a patch for texmanager.py which utilizes the above method to figure out the dimensions of the text. The template string used to generate the ".tex" file is slightly modified. After latex is run, the dimensional information of the text is extracted and saved in a ".baseline" file, and a get_text_width_height_descent() method is added to the TexManager class, which reads in the ".baseline" file and returns its contents. (You need to empty out the tex.cache directory for this to work correctly.) A backend can simply call get_text_width_height_descent() of the texmanager (a simple patch for the Agg backend is attached). I also tested this with the PS and PDF backends and they worked fine. So if the license issue is okay, I wonder if this patch can be reviewed and applied (after any necessary modifications) to improve the baseline handling in matplotlib. Regards, -JJ
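For illustration, here is a rough sketch of how the \write16 output could be parsed on the Python side; the marker string matches the example above, but the helper name and the exact file handling are hypothetical and may differ from the actual patch.

import re

def parse_box_dimensions(line):
    """Parse a line like 'MatplotlibBox:12.3pt,4.5pt,1.2pt' (as written by
    \\immediate\\write16 above) and return (width, height, descent) in TeX
    points.  Hypothetical helper -- a sketch of the idea, not the patch."""
    m = re.match(r"MatplotlibBox:([-\d.]+)pt,([-\d.]+)pt,([-\d.]+)pt", line)
    if m is None:
        raise ValueError("no box dimensions found in: %r" % line)
    width, height, descent = (float(g) for g in m.groups())
    return width, height, descent

# The third value (the box depth) is the descent below the baseline.
print(parse_box_dimensions("MatplotlibBox:10.55408pt,9.27776pt,3.01385pt"))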
Michael Droettboom wrote: > I think we want to use std::vector where possible for new code, and > convert as we go through old code. Manual malloc/free is just too error > prone. (We're forced to do some by interfacing with Python, but it > should be kept to a minimum). My understanding is that this is exactly the issue -- with std::vector and friends you can't both get at the data pointer and create a vector around an existing data pointer, which is what you'd want to do to interface efficiently with numpy arrays. I've been looking for a way to address this problem, but my C++ is pretty weak. Maybe smart pointers and/or other boost classes could help. I do have an idea, though. The goal is a set of classes for working with data that are convenient to use from pure C++, but also play well with numpy arrays. I imagine a simple vector/array set of classes built on top of a simple object: a reference-counted data block. This data block object would simply hold a pointer to a block of data, maybe know how large it is, maybe know what type the data is, and hold a reference count. When the count drops to zero, it deletes itself. The vector/array classes built on top of it would use the data block to store their data, and increment/decrement the reference count as need be. This would allow them to share data, have one array that is a subset of another using the same data block, etc. It should be easy to build numpy arrays from a system like this. Ideally (but I'm not sure how), the data block's reference-counting system could be integrated with Python's, so that when a numpy array was constructed from one, its reference count would be managed by Python as well as by your C++ code. But maybe that's all just too much work... -Chris -- Christopher Barker, Ph.D. Oceanographer Emergency Response Division NOAA/NOS/OR&R (206) 526-6959 voice 7600 Sand Point Way NE (206) 526-6329 fax Seattle, WA 98115 (206) 526-6317 main reception Chr...@no...
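It may be worth noting that numpy itself already implements something very close to the shared, reference-counted data block described above: slices are views onto the same buffer, and the owning array is kept alive through the view's .base attribute. A small demonstration in plain numpy:

import numpy as np

a = np.arange(10)
b = a[2:5]                 # a view: no copy, same underlying data block

b[0] = 99                  # writing through the view is visible in the original
print(a[2])                # -> 99

print(b.base is a)         # -> True: the view keeps a reference to its owner
print(b.flags.owndata)     # -> False: b does not own the buffer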
Michael Droettboom wrote: > On RHEL4, I get the following message: > > This OpenGL does not support framebuffer objects > > I understand that I'm probably just suffering from a relatively old > Mesa/OpenGL stack here. And obviously, just because some older systems > won't support this is not a reason to not include it as an optional > backend. > > Can you possibly send a screen shot to the list for those of us who > can't see it? Is the quadmesh interpolated? That would be a huge > improvement over anything we can do currently. > > As for developing a full OpenGL backend, note that the process of > writing a backend has been greatly simplified as of 0.98.x (there are only > about four rendering methods to implement now, with three more optional > ones to re-implement for performance reasons). You can start by > looking at backend_bases.py. I'm happy to help with any questions you > have as you go along, but as I don't have a working pyglet/OpenGL, I > can't be of much help for testing. > I'll let you know if I need any help. 0.98 definitely seems simpler, and I think Pyglet will handle a lot of the context stuff to make things simpler as well. Your error on RHEL4 indicates more that it doesn't support our particular approach, which uses offscreen rendering to render to an image which is then blitted in. It's an expedient approach to get a drop-in quadmesh replacement. A full pyglet backend would not suffer from this limitation. I don't have this hooked up fully within matplotlib yet, so right now it just generates a test image with a 2000x2000 mesh of quadrilaterals which is displayed using imshow(). (Simple example attached.) I have not looked at interpolation or even turned on antialiasing (should just be one or two function calls). I can tell you that most of the time is spent generating the list of vertices in a form suitable for OpenGL QUAD_LISTS. The actual rendering and handing off to the graphics card is *really* fast. I have a couple of other things on my plate ATM, but hopefully next week some time I will get back to this, as I really, really, really want/need to pursue the opengl stuff further. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma
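Since most of the time is reportedly spent building the flat vertex list, the following purely illustrative numpy sketch shows one way the corner coordinates of a grid could be assembled into the interleaved per-quad vertex array a GL_QUADS-style draw call expects; the real proof-of-concept code may be organized quite differently.

import numpy as np

def quad_vertices(x, y):
    # x, y: 2D corner coordinate arrays of shape (M, N).
    # Returns a flat float32 array of 4 (x, y) vertices per cell, ordered
    # counter-clockwise, as a GL_QUADS-style vertex list expects.
    ll = np.dstack((x[:-1, :-1], y[:-1, :-1]))   # lower-left corners
    lr = np.dstack((x[:-1, 1:],  y[:-1, 1:]))    # lower-right corners
    ur = np.dstack((x[1:, 1:],   y[1:, 1:]))     # upper-right corners
    ul = np.dstack((x[1:, :-1],  y[1:, :-1]))    # upper-left corners
    quads = np.concatenate((ll, lr, ur, ul), axis=-1)   # shape (M-1, N-1, 8)
    return quads.astype(np.float32).ravel()

# A modest grid here; the proof of concept reportedly uses a 2000x2000 mesh.
x, y = np.meshgrid(np.linspace(0, 1, 201), np.linspace(0, 1, 201))
verts = quad_vertices(x, y)   # 200*200 quads, 8 floats per quad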
On RHEL4, I get the following message: This OpenGL does not support framebuffer objects I understand that I'm probably just suffering from a relatively old Mesa/OpenGL stack here. And obviously, just because some older systems won't support this is not a reason to not include it as an optional backend. Can you possibly send a screen shot to the list for those of us who can't see it? Is the quadmesh interpolated? That would be a huge improvement over anything we can do currently. As for developing a full OpenGL backend, note that the process of writing a backend has been greatly simplified as of 0.98.x (there are only about four rendering methods to implement now, with three more optional ones to re-implement for performance reasons). You can start by looking at backend_bases.py. I'm happy to help with any questions you have as you go along, but as I don't have a working pyglet/OpenGL, I can't be of much help for testing. Cheers, Mike Ryan May wrote: > Paul Kienzle wrote: > >> Hi, >> >> There was a recent discussion about opengl and matplotlib in the >> context of matplotlib rendering speeds. >> >> At the scipy sprints we put together a proof of concept renderer >> for quad meshes using the opengl frame buffer object, which we >> then render as a matplotlib image. Once the data is massaged >> to the correct form we were seeing rendering speeds of about >> 10 million quads per second. See below. >> >> Using this technique we can get the advantage of opengl without >> having to write a full backend. Please let me know if you have >> time to contribute to writing a more complete backend. >> > > I know that I (as the other contributor :) ) plan on making this into a > full backend provided: > > 1) The quadmesh code can be shown to yield the gains when integrated into > matplotlib more fully > > 2) I find the time (which is a matter of *when*, not if) > > I'm certainly finding thus far that pyglet makes it a lot easier to do a > full backend than some of the other python->opengl methods I'd explored > in the past. > > Ryan > -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
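For anyone curious what "only about four rendering methods" means in practice, a very rough skeleton might look like the sketch below. The method names follow backend_bases.RendererBase, but the bodies are placeholders and the exact signatures vary between matplotlib versions, so treat this as an outline rather than a working backend.

from matplotlib.backend_bases import RendererBase

class RendererOpenGLSketch(RendererBase):
    # Skeleton only: the required methods are stubbed out.

    def __init__(self, width, height, dpi):
        RendererBase.__init__(self)
        self.width, self.height, self.dpi = width, height, dpi

    def draw_path(self, gc, path, transform, rgbFace=None):
        # Convert the Path's vertices/codes into GL primitives here.
        pass

    def draw_image(self, x, y, im, bbox, clippath=None, clippath_trans=None):
        # Upload the image buffer as a texture and draw a textured quad.
        pass

    def draw_text(self, gc, x, y, s, prop, angle, ismath=False):
        # Rasterize the text (e.g. via FreeType/mathtext) and blit it.
        pass

    def get_text_width_height_descent(self, s, prop, ismath):
        # Must return real metrics for layout to work; placeholders here.
        return 1.0, 1.0, 0.0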
Thanks! It works as I expected. -JJ On Thu, Aug 28, 2008 at 8:43 AM, Michael Droettboom <md...@st...> wrote: > This should now be fixed in SVN r6052. > > Cheers, > Mike > > Jae-Joon Lee wrote: >> >> Hi, >> >> The clip_on and clip_box arguments (and maybe clip_path also) in the >> plot() command seem to have no effect. For example, >> >> In [29]: p, = plot([1,2,3], clip_on=False) >> >> In [30]: p.get_clip_on() >> Out[30]: True >> >> >> It seems that the line object is created with the given arguments but >> gets overwritten later when it is added to the axes (Axes.add_line), >> >> def add_line(self, line): >> ''' >> Add a :class:`~matplotlib.lines.Line2D` to the list of plot >> lines >> ''' >> self._set_artist_props(line) >> line.set_clip_path(self.patch) >> >> "line.set_clip_path(self.patch)" updates clip_path, clip_box, and clip_on. >> >> So, isn't it better to do something explicit when these arguments >> are ignored? Maybe print out some warnings or even raise an >> exception, or do not call set_clip_path when a clip_* argument is given, >> etc.? >> >> A slightly related behavior I found unexpected is that the set_clip_path() >> and set_clip_rect() methods also update clip_on. >> >> Regards, >> >> -JJ
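Until the underlying issue is resolved, a simple workaround is to set the clipping state on the returned line after it has been added to the axes, so that nothing overwrites it afterwards:

from pylab import plot

p, = plot([1, 2, 3], clip_on=False)
print(p.get_clip_on())   # True -- the keyword was overridden by Axes.add_line()

p.set_clip_on(False)     # set it again after the line is in the axes
print(p.get_clip_on())   # False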
I'm going to again defer the real question of the functionality and interface etc. to Eric and/or John etc., though it seems good in general. I just have a couple of follow on comments related to implementation details below. Grégory Lielens wrote: > std::vector would have been nice, but I tried to respect the original > coding style, maybe this could be changed but I guess the original > author should have something to say about it.... > I think we want to use std::vector where possible for new code, and convert as we go through old code. Manual malloc/free is just too error prone. (We're forced to do some by interfacing with Python, but it should be kept to a minimum). Maybe this should be added to the coding guidelines? > Only thing I did not change is the allocation of acols/arows even when > they are not used. It is not difficult to do, but the code will be a > little more complex and I think that the extra memory used is not > significant: The memory for those mapping structure is doubled (1 float > and 1 int vector of size N, instead of a single int vector of size N), > but those mapping structures are an order of magnitude smaller than the > image buffer of size N*N of 4 char anyway... > > If one agree that this is to be saved at the (slight) expense of code > conciseness/simplicity, I can add this optimisation... > I don't understand the complexity. Isn't it as simple as moving: acols = reinterpret_cast<float*>(PyMem_Malloc(sizeof(float)*cols)); (and the following NULL check) inside of the "if (interpolation == Image::BILINEAR)" block? Heap allocations can be expensive, so it would be great to minimize them. > Thanks for pointing the error at the left/bottom pixel lines, it is > corrected now :-) > Great! > And I set interpolation to NEAREST by default, so that the behavior of > NonUniformImage remains the same as before if the user do not specify > otherwise (I forgot to do it in the first patch)... > Sounds reasonable -- though perhaps we want to be consistent instead with the standard (uniform) images, which default to the "image.interpolation" setting in matplotlibrc. Of course, "image.interpolation" has a bunch of options that nonuniform doesn't currently support... What do others think? Cheers, Mike > > I include the new patch, > > Best regards, > > Greg. > > On Fri, 2008年08月15日 at 15:45 -0400, Michael Droettboom wrote: > >> Thanks for all the work you put into this patch. >> >> As you say, it would be nice to have a generic framework for this so >> that new types of interpolation could be easily added and to be able to >> support arbitrary (non-axis-aligned) quadmeshes as well. But that's >> even more work -- if we keep waiting for everything we want, we'll never >> get it... ;) I agree that Agg probably won't be much help with that. >> >> There are a couple of comments with the patch as it stands --> >> >> There seems to be a gap extrapolating over the left and bottom edge (see >> attached screenshot from pcolor_nonuniform.py). >> >> Memory management looks problematic, some of which I think you inherited >> from earlier code. For example, arows and acols are never freed. >> Personally, I think these temporary buffers should be std::vector's so >> they'll be free'd automatically when scope is left. It might also be >> nice to move all of the Py_XDECREF's that happen when exceptions are >> thrown to either a master try/catch block or an "exit" goto label at the >> bottom. 
The amount of duplication and care required to ensure >> everything will be freed by all of the different exit paths is a little >> cumbersome. >> >> Also, acols and arows are only used in BILINEAR interpolation, but they >> are allocated always. >> >> Once these issues are addressed, it would be great to have someone who >> *uses* the nonuniform pcolor functionality (Eric Firing?) have a look at >> this patch for any regressions etc.. Assuming none, I'll be happy to >> commit it (but I won't be around for a week or so). >> >> Cheers, >> Mike >> >> Grégory Lielens wrote: >> >>> Hi all, >>> >>> here is a patch which implement bilinear interpolation on irregular grid >>> ( i.e. it allows NonUniformImage >>> to accept both 'nearest' and 'bilinear' interpoaltion, instead of only >>> 'nearest'.) >>> >>> It is not perfect, given the current architecture of the image module I >>> think there is not simple way >>> to specify other interpolations (except for 'nearest', all >>> interpolations are implemented at the AGG level, not in >>> matplotlib itself). >>> For the same reason, it is not possible to interpolate before colormap >>> lookup instead of after (and this >>> is usually where the biggest effect is, when dealing with coarse grid). >>> However, I think it is already quite useful and the best one ca do >>> without a -very- extensive rewrite >>> of the matrix map modules.... >>> >>> BTW, I think it also corrects a small bug in the 'nearest' >>> interpolation: the last intervals was ignored >>> in the CVS version, now it is taken into account. >>> >>> BOTH nearest and bilinear interpolation do the same for values oustside >>> the data grid: they just use boundary values (constant extrapolation). >>> I avoided linear extrapolation for 'bilinear', as extrapolation is >>> usually dangerous or meaningless (we can go outside the colormap very >>> fast)... >>> >>> I have included a small example showing how both interpolation works.... >>> >>> Any remarks, could this be added before the next release? ;-) >>> >>> >>> Greg. >>> >>> >>> >>> >>> >>> On Mon, 2008年08月11日 at 16:50 +0200, Grégory Lielens wrote: >>> >>> >>>> On Fri, 2008年08月08日 at 16:05 +0200, Grégory Lielens wrote: >>>> >>>> >>>>> Hello everybody, >>>>> >>>>> I have sent this message to the user group, but thinking of it, it may be more >>>>> relevant to the development mailing list...so here it is again. >>>>> >>>>> >>>>> >>>>> We are looking for the best way to plot a waterfall diagram in >>>>> Matplotlib. The 2 functions which could be used >>>>> to do that are (as far as I have found) imshow and pcolormesh. 
Here is a >>>>> small script that use both to compare the output: >>>>> >>>>> ----------------- >>>>> >>>>> from pylab import * >>>>> >>>>> >>>>> delta = 0.2 >>>>> x = arange(-3.0, 3.0, delta) >>>>> y = arange(-2.0, 2.0, delta) >>>>> X, Y = meshgrid(x, y) >>>>> Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) >>>>> Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1) >>>>> # difference of Gaussians >>>>> Z = 10.0 * (Z2 - Z1) >>>>> figure(1) >>>>> im = imshow(Z,extent=(-3,3,-2,2)) >>>>> CS = contour(X, -Y, Z, 6, >>>>> colors='k', # negative contours will be dashed by default >>>>> ) >>>>> clabel(CS, fontsize=9, inline=1) >>>>> title('Using imshow') >>>>> figure(2) >>>>> im = pcolormesh(X,-Y,Z) >>>>> CS = contour(X, -Y, Z, 6, >>>>> colors='k', # negative contours will be dashed by default >>>>> ) >>>>> clabel(CS, fontsize=9, inline=1) >>>>> title('Using pcolormesh') >>>>> show() >>>>> >>>>> --------------------- >>>>> >>>>> >>>>> The problem is that we need some of the flexibility of pcolormesh (which >>>>> is able to map the matrix of value on any deformed mesh), while >>>>> we would like to use the interpolations available in imshow (which >>>>> explain why the imshow version is much "smoother" than the pcolormesh >>>>> one). >>>>> >>>>> In fact, what would be needed is not the full flexibility of pcolormesh >>>>> (which can map the grid to any kind of shape), we "only" have to deal >>>>> with rectangular grids where x- and y- graduations are irregularly spaced. >>>>> >>>>> Is there a drawing function in Matplotlib which would be able to work >>>>> with such a rectangular non-uniform grid? >>>>> And if not (and a quick look at the example and the code make me think >>>>> that indeed the capability is currently not present), >>>>> what about an extension of imshow which would work as this: >>>>> >>>>> im = imshow(Z,x_gridpos=x, y_gridpos=y) #specify the >>>>> position of the grid's nodes, instead of giving the extend and assuming >>>>> uniform spacing. >>>>> >>>>> Longer term, would a pcolormesh accepting interpolation be possible? The >>>>> current behavior, averaging the color of the grids node to get a uniform >>>>> cell color, >>>>> is quite rough except for a large number of cells...And even then, it >>>>> soon shows when you zoom in... >>>>> >>>>> The best would be to allow the same interpolations as in imshow (or a >>>>> subset of it), and also allows to use interpolation before colormap >>>>> lookup (or after), >>>>> like in Matlab. Indeed, Matlab allows to finely tune interpolation by >>>>> specifying Gouraud (interpolation after color >>>>> lookup)/Phong(interpolation before color lookup, i.e. for each pixel). >>>>> Phong is usually much better but also more CPU intensive. Phong is >>>>> especially when using discrete colormap, producing banded colors >>>>> equivalent to countour lines, while Gouraud does not work in those >>>>> cases. >>>>> >>>>> Of course, the performance will be impacted by some of those >>>>> interpolation options, which would degrade performance in animations for >>>>> example.... but I think that having the different options available >>>>> would be very useful, it allows to have the highest map quality, or have >>>>> a "quick and dirty" map depending on situation (grid spacing, type of >>>>> map, animation or not, ...). >>>>> >>>>> Best regards, >>>>> >>>>> Greg. >>>>> >>>>> >>>> I have found a method which implement the proposed extension to imshow: >>>> NonUniformImage... >>>> >>>> However, this image instance support only nearest neighbor >>>> interpolation. 
Trying to set the interpolation (using the >>>> set_interpolation method) >>>> to something akin imshow throw a "NotImplementedError: Only nearest >>>> neighbor supported" exception.... >>>> >>>> So basically I am still stuck, it seems that currently there is no way >>>> in matplotlib to plot interpolated >>>> colormap on irregular rectangular grid, and even less on arbitrarily >>>> mapped grid... >>>> >>>> Is there any plans to add support for more interpolation in >>>> NonUniformImage in the future? Or maybe there is another >>>> drawing function that I did not find yet, with this ability? >>>> >>>> >>>> Best regards, >>>> >>>> Greg. >>>> >>>> ------------------------------------------------------------------------ >>>> >>>> ------------------------------------------------------------------------- >>>> This SF.Net email is sponsored by the Moblin Your Move Developer's challenge >>>> Build the coolest Linux based applications with Moblin SDK & win great prizes >>>> Grand prize is a trip for two to an Open Source event anywhere in the world >>>> http://moblin-contest.org/redirect.php?banner_id=100&url=/ >>>> ------------------------------------------------------------------------ >>>> >>>> _______________________________________________ >>>> Matplotlib-devel mailing list >>>> Mat...@li... >>>> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
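For anyone wanting to try the patch under review above, usage on the Python side presumably stays the same as for the existing NonUniformImage, with 'bilinear' simply accepted as an additional interpolation value; on an unpatched tree the set_interpolation('bilinear') call raises NotImplementedError. The grid positions and data values below are made up for illustration.

import numpy as np
from pylab import figure, show
from matplotlib.image import NonUniformImage

# irregularly spaced grid positions
x = np.array([0.0, 0.5, 0.7, 0.8, 1.5, 3.0])
y = np.array([0.0, 0.3, 1.0, 2.0])
z = np.sin(x[np.newaxis, :]) * np.cos(y[:, np.newaxis])   # shape (len(y), len(x))

fig = figure()
ax = fig.add_subplot(111)
im = NonUniformImage(ax, extent=(x[0], x[-1], y[0], y[-1]))
im.set_interpolation('bilinear')   # only works with the patch applied
im.set_data(x, y, z)
ax.images.append(im)               # idiom used by image_nonuniform.py at the time
ax.set_xlim(x[0], x[-1])
ax.set_ylim(y[0], y[-1])
show()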
Sorry about that, and thanks for your investigation. It's easy enough to change the default. I think (and it was a while ago, so I may be misremembering) I probably made that change myself in error. (There used to be two completely separate code paths for doing step plots, which I merged into one.) Any objections? Is it possible that the change was made deliberately (to conform with matlab or gnuplot etc.)? Cheers, Mike Neil Crighton wrote: > Now I see there are more options in 0.98 - 'steps-pre', 'steps-post', > 'steps-mid'. The default should be steps-post for backwards > compatibility. In 0.98.3 the default is steps-pre. And sorry for the > testy tone of the previous email :) > > Neil > > 2008年8月28日 Neil Crighton <nei...@gm...>: > >> linestyle='steps' has changed behaviour between 0.91.2 and 0.98.3. The >> 'step' between two points used to move horizontally and then >> vertically from the left point to the neighbouring right point; now it moves >> vertically then horizontally. >> >> Was this change intentional? I hope not, because I've just spent the >> past hour working out it was the reason for my plotting routines not >> working properly :-/ >> >> Neil -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
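To make the behavioural difference concrete: 'steps-pre' draws the vertical segment first (the value changes at the left point), while 'steps-post' holds each value until the next point. A quick way to compare them is shown below; depending on the matplotlib version this is spelled as a linestyle or, in newer releases, via the drawstyle keyword used here.

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(5)
y = np.array([0, 1, 0, 2, 1])

plt.plot(x, y, drawstyle='steps-pre', label='steps-pre (vertical first)')
plt.plot(x, y + 3, drawstyle='steps-post', label='steps-post (horizontal first)')
plt.legend()
plt.show()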
On Thu, Aug 28, 2008 at 08:17:26AM -0500, John Hunter wrote: > On Thu, Aug 28, 2008 at 6:19 AM, Sandro Tosi <mat...@gm...> wrote: > > The Enthought suite is used a lot in the scientific area, so mpl and Enthought > > almost share their users, so having it enabled would be a plus, but we > > are mainly interested in doing "no harm", so we'd like to ask you > > to confirm that enabling that support won't break anything (but would indeed > > provide a valuable asset for users). > Well, one thing to make sure of is that you do not install the version > of traits that ships with matplotlib, since it will break other > enthought packages. Debian and Ubuntu are doing the right thing here currently. This is very nice, as it provides a clean upgrade path for traits under Debian derivatives. Gaël
Hi John, Hi Eric, In case this has slipped under the radar, I repost my patch for bilinear interp in NonUniformImage (including the discussion around it). I think Eric is the most concerned by this, but you were both on holidays when I raised the issue in matplotlib newsgroup. Best regards, Greg. ------------- Thanks a lot for reviewing my patch! I have corrected most of the problems (I think ;-) ) I indeed introduced memory leak, I think it is fixed now, I have also reorganized the code to avoid duplication of the cleanup code. I used an helper function instead of the goto, because this cleanup is needed for normal exit too, so the helper function can come in handy for that. std::vector would have been nice, but I tried to respect the original coding style, maybe this could be changed but I guess the original author should have something to say about it.... Only thing I did not change is the allocation of acols/arows even when they are not used. It is not difficult to do, but the code will be a little more complex and I think that the extra memory used is not significant: The memory for those mapping structure is doubled (1 float and 1 int vector of size N, instead of a single int vector of size N), but those mapping structures are an order of magnitude smaller than the image buffer of size N*N of 4 char anyway... If one agree that this is to be saved at the (slight) expense of code conciseness/simplicity, I can add this optimisation... Thanks for pointing the error at the left/bottom pixel lines, it is corrected now :-) And I set interpolation to NEAREST by default, so that the behavior of NonUniformImage remains the same as before if the user do not specify otherwise (I forgot to do it in the first patch)... I include the new patch, Best regards, Greg. On Fri, 2008年08月15日 at 15:45 -0400, Michael Droettboom wrote: > Thanks for all the work you put into this patch. > > As you say, it would be nice to have a generic framework for this so > that new types of interpolation could be easily added and to be able to > support arbitrary (non-axis-aligned) quadmeshes as well. But that's > even more work -- if we keep waiting for everything we want, we'll never > get it... ;) I agree that Agg probably won't be much help with that. > > There are a couple of comments with the patch as it stands --> > > There seems to be a gap extrapolating over the left and bottom edge (see > attached screenshot from pcolor_nonuniform.py). > > Memory management looks problematic, some of which I think you inherited > from earlier code. For example, arows and acols are never freed. > Personally, I think these temporary buffers should be std::vector's so > they'll be free'd automatically when scope is left. It might also be > nice to move all of the Py_XDECREF's that happen when exceptions are > thrown to either a master try/catch block or an "exit" goto label at the > bottom. The amount of duplication and care required to ensure > everything will be freed by all of the different exit paths is a little > cumbersome. > > Also, acols and arows are only used in BILINEAR interpolation, but they > are allocated always. > > Once these issues are addressed, it would be great to have someone who > *uses* the nonuniform pcolor functionality (Eric Firing?) have a look at > this patch for any regressions etc.. Assuming none, I'll be happy to > commit it (but I won't be around for a week or so). 
> > Cheers, > Mike > > Grégory Lielens wrote: > > Hi all, > > > > here is a patch which implement bilinear interpolation on irregular grid > > ( i.e. it allows NonUniformImage > > to accept both 'nearest' and 'bilinear' interpoaltion, instead of only > > 'nearest'.) > > > > It is not perfect, given the current architecture of the image module I > > think there is not simple way > > to specify other interpolations (except for 'nearest', all > > interpolations are implemented at the AGG level, not in > > matplotlib itself). > > For the same reason, it is not possible to interpolate before colormap > > lookup instead of after (and this > > is usually where the biggest effect is, when dealing with coarse grid). > > However, I think it is already quite useful and the best one ca do > > without a -very- extensive rewrite > > of the matrix map modules.... > > > > BTW, I think it also corrects a small bug in the 'nearest' > > interpolation: the last intervals was ignored > > in the CVS version, now it is taken into account. > > > > BOTH nearest and bilinear interpolation do the same for values oustside > > the data grid: they just use boundary values (constant extrapolation). > > I avoided linear extrapolation for 'bilinear', as extrapolation is > > usually dangerous or meaningless (we can go outside the colormap very > > fast)... > > > > I have included a small example showing how both interpolation works.... > > > > Any remarks, could this be added before the next release? ;-) > > > > > > Greg. > > > > > > > > > > > > On Mon, 2008年08月11日 at 16:50 +0200, Grégory Lielens wrote: > > > >> On Fri, 2008年08月08日 at 16:05 +0200, Grégory Lielens wrote: > >> > >>> Hello everybody, > >>> > >>> I have sent this message to the user group, but thinking of it, it may be more > >>> relevant to the development mailing list...so here it is again. > >>> > >>> > >>> > >>> We are looking for the best way to plot a waterfall diagram in > >>> Matplotlib. The 2 functions which could be used > >>> to do that are (as far as I have found) imshow and pcolormesh. Here is a > >>> small script that use both to compare the output: > >>> > >>> ----------------- > >>> > >>> from pylab import * > >>> > >>> > >>> delta = 0.2 > >>> x = arange(-3.0, 3.0, delta) > >>> y = arange(-2.0, 2.0, delta) > >>> X, Y = meshgrid(x, y) > >>> Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0) > >>> Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1) > >>> # difference of Gaussians > >>> Z = 10.0 * (Z2 - Z1) > >>> figure(1) > >>> im = imshow(Z,extent=(-3,3,-2,2)) > >>> CS = contour(X, -Y, Z, 6, > >>> colors='k', # negative contours will be dashed by default > >>> ) > >>> clabel(CS, fontsize=9, inline=1) > >>> title('Using imshow') > >>> figure(2) > >>> im = pcolormesh(X,-Y,Z) > >>> CS = contour(X, -Y, Z, 6, > >>> colors='k', # negative contours will be dashed by default > >>> ) > >>> clabel(CS, fontsize=9, inline=1) > >>> title('Using pcolormesh') > >>> show() > >>> > >>> --------------------- > >>> > >>> > >>> The problem is that we need some of the flexibility of pcolormesh (which > >>> is able to map the matrix of value on any deformed mesh), while > >>> we would like to use the interpolations available in imshow (which > >>> explain why the imshow version is much "smoother" than the pcolormesh > >>> one). > >>> > >>> In fact, what would be needed is not the full flexibility of pcolormesh > >>> (which can map the grid to any kind of shape), we "only" have to deal > >>> with rectangular grids where x- and y- graduations are irregularly spaced. 
> >>> > >>> Is there a drawing function in Matplotlib which would be able to work > >>> with such a rectangular non-uniform grid? > >>> And if not (and a quick look at the example and the code make me think > >>> that indeed the capability is currently not present), > >>> what about an extension of imshow which would work as this: > >>> > >>> im = imshow(Z,x_gridpos=x, y_gridpos=y) #specify the > >>> position of the grid's nodes, instead of giving the extend and assuming > >>> uniform spacing. > >>> > >>> Longer term, would a pcolormesh accepting interpolation be possible? The > >>> current behavior, averaging the color of the grids node to get a uniform > >>> cell color, > >>> is quite rough except for a large number of cells...And even then, it > >>> soon shows when you zoom in... > >>> > >>> The best would be to allow the same interpolations as in imshow (or a > >>> subset of it), and also allows to use interpolation before colormap > >>> lookup (or after), > >>> like in Matlab. Indeed, Matlab allows to finely tune interpolation by > >>> specifying Gouraud (interpolation after color > >>> lookup)/Phong(interpolation before color lookup, i.e. for each pixel). > >>> Phong is usually much better but also more CPU intensive. Phong is > >>> especially when using discrete colormap, producing banded colors > >>> equivalent to countour lines, while Gouraud does not work in those > >>> cases. > >>> > >>> Of course, the performance will be impacted by some of those > >>> interpolation options, which would degrade performance in animations for > >>> example.... but I think that having the different options available > >>> would be very useful, it allows to have the highest map quality, or have > >>> a "quick and dirty" map depending on situation (grid spacing, type of > >>> map, animation or not, ...). > >>> > >>> Best regards, > >>> > >>> Greg. > >>> > >> I have found a method which implement the proposed extension to imshow: > >> NonUniformImage... > >> > >> However, this image instance support only nearest neighbor > >> interpolation. Trying to set the interpolation (using the > >> set_interpolation method) > >> to something akin imshow throw a "NotImplementedError: Only nearest > >> neighbor supported" exception.... > >> > >> So basically I am still stuck, it seems that currently there is no way > >> in matplotlib to plot interpolated > >> colormap on irregular rectangular grid, and even less on arbitrarily > >> mapped grid... > >> > >> Is there any plans to add support for more interpolation in > >> NonUniformImage in the future? Or maybe there is another > >> drawing function that I did not find yet, with this ability? > >> > >> > >> Best regards, > >> > >> Greg.
Hi, I just committed a patch that allows for multiple histograms where each dataset may have a different length (see the example histogram_demo_extended.py). I don't think it breaks any existing code, but I would feel much better if someone could check that ... Manuel
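A minimal example of what the commit enables -- passing datasets of unequal length to hist() in one call, in the spirit of histogram_demo_extended.py; exact keyword support may differ by version.

import numpy as np
import pylab

# three datasets with different lengths
d0 = np.random.randn(1000)
d1 = np.random.randn(700) + 1.0
d2 = np.random.randn(400) - 1.0

pylab.hist([d0, d1, d2], bins=20)
pylab.show()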