David Cournapeau wrote:
> Hi,
>
> I don't know if that's of any interest for matplotlib developers,
> but I added scripts to build matplotlib with numscons:
>
> http://github.com/cournape/matplotlib/tree/scons_build

OK, I managed to clone your repo -- I cloned mine, then added yours as a
remote, fetched it, and pushed the results to a new branch on my github
repo:

http://github.com/astraw/matplotlib/tree/dev/cournapeau-scons-build

But having done that, now I'm having trouble building. Calling with
"python setupscons.py install", I get:

Traceback (most recent call last):
  File "setupscons.py", line 232, in <module>
    setup_package()
  File "setupscons.py", line 228, in setup_package
    configuration=configuration)
  File "/usr/lib/python2.6/dist-packages/numpy/distutils/core.py", line 150, in setup
    config = configuration()
  File "setupscons.py", line 197, in configuration
    config.add_sconscript('SConstruct', package_path='lib/matplotlib')
TypeError: add_sconscript() got an unexpected keyword argument 'package_path'

What version of numpy do I need for this? I might have to build a new
chroot, since I want the Ubuntu Hardy chroot I'm currently using to
stick with Hardy's default numpy installation.

-Andrew
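A quick way to narrow this down (a sketch; it assumes the installed
numpy's Configuration still ships an add_sconscript method at all) is to
inspect its signature for the keyword that setupscons.py passes:

    import inspect
    import numpy
    from numpy.distutils.misc_util import Configuration

    # Print the numpy version and whether add_sconscript() accepts the
    # 'package_path' keyword used by setupscons.py.
    args = inspect.getargspec(Configuration.add_sconscript)[0]
    print numpy.__version__, 'package_path' in args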
Hi, I'm trying to get something like texture mapping to work. (I don't
need any fancy transforms between texel location and image location,
though. I'm happy to specify just a 2D grid of pixel colors that get
mapped onto a rectangle positioned in 3D space.) Given that, I made a
demo based on pcolor which I think should do the job with a little work.
I based this mainly on examples/mplot3d/pathpatch3d_demo.py, which is
pretty close to this. Here's my demo:

http://github.com/astraw/matplotlib/blob/dev/straw-poormans-texture/examples/mplot3d/poor_mans_texture_map.py

Right now, though, running this results in the following traceback:

$ python poor_mans_texture_map.py
Traceback (most recent call last):
  File "poor_mans_texture_map.py", line 21, in <module>
    art3d.patch_collection_2d_to_3d(s, zdir="x")
  File "/usr/local/lib/python2.6/dist-packages/mpl_toolkits/mplot3d/art3d.py", line 301, in patch_collection_2d_to_3d
    col.set_3d_properties(zs, zdir)
  File "/usr/local/lib/python2.6/dist-packages/mpl_toolkits/mplot3d/art3d.py", line 279, in set_3d_properties
    xs, ys = zip(*self.get_offsets())
ValueError: need more than 0 values to unpack

Reinier -- can you comment on how the patch offsets are supposed to be
interpreted as x,y values in this context? From the docstring at
lib/matplotlib/collections.py, I read "*offsets* and *transOffset* are
used to translate the patch after rendering (default no offsets)". It's
not immediately clear to me how the offsets should then result in X,Y
values for 3D plotting. Or is something else entirely going on here?

I'm happy to do some leg-work here if you point me in the right
direction.

-Andrew
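One workaround to try (an untested sketch: `s`, `x`, and `y` are assumed
to be the collection and grid arrays from the demo script, and whether
offsets are even the right channel for patch positions is exactly the
open question above):

    import numpy as np

    # Give the collection explicit per-patch offsets so that
    # set_3d_properties() has (x, y) pairs to unpack, instead of the
    # empty default.
    offsets = np.column_stack([np.ravel(x), np.ravel(y)])
    s.set_offsets(offsets)
    art3d.patch_collection_2d_to_3d(s, zdir="x")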
I'd like to see a single function that combines ListedColormap and
BoundaryNorm. This function could compare the lengths of the color list
and threshold list to determine what type of colormap is desired. (If
the lengths are the same, then the calling program wants a continuous
colormap; if there is one more color than boundaries, the calling
program wants a discrete colormap.) If this function had optional
arguments to specify the `under` and `over` colors, that would be even
better.

Phillip

Phillip M. Feldman wrote:
> Eric and Reinier-
>
> It seems to me that continuous (piecewise-linear) colormaps could work
> in much the same fashion. One would specify n boundary colors and n
> thresholds (for continuous colormaps, I believe that the number of
> thresholds and colors must be the same), and for any value between two
> thresholds, the colors associated with the bounding thresholds would
> be automatically interpolated. What do you think?
>
> Phillip
>
> Eric Firing wrote:
>> What does allow you to specify the transitions exactly (to within the
>> limits of double precision) is this:
>>
>> cmap = ListedColormap(['r','g','b'])
>> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N)
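A minimal sketch of what such a helper could look like (make_cmap is
hypothetical, not an existing matplotlib API; the continuous branch
assumes Reinier's more flexible from_list that accepts (value, color)
pairs, and the discrete-case convention, where the outermost colors
cover the regions beyond the outermost thresholds, is one reading of
"one more color than boundaries"):

    import matplotlib.colors as mcolors

    def make_cmap(colors, thresholds, under=None, over=None):
        # Hypothetical helper: return a (cmap, norm) pair.
        if len(colors) == len(thresholds):
            # Continuous: n colors at n thresholds, linearly
            # interpolated in between.
            lo, hi = float(thresholds[0]), float(thresholds[-1])
            positions = [(t - lo) / (hi - lo) for t in thresholds]
            cmap = mcolors.LinearSegmentedColormap.from_list(
                'custom', zip(positions, colors))
            norm = mcolors.Normalize(vmin=lo, vmax=hi)
        elif len(colors) == len(thresholds) + 1:
            # Discrete: interior colors fill the bins between
            # thresholds; the first and last colors serve as the
            # default under/over colors.
            cmap = mcolors.ListedColormap(colors[1:-1])
            norm = mcolors.BoundaryNorm(thresholds, cmap.N)
            under = under if under is not None else colors[0]
            over = over if over is not None else colors[-1]
        else:
            raise ValueError('need len(colors) == len(thresholds) or '
                             'len(thresholds) + 1')
        if under is not None:
            cmap.set_under(under)
        if over is not None:
            cmap.set_over(over)
        return cmap, norm

Under this convention, Eric's example quoted above would be spelled
make_cmap(['r', 'g', 'b'], [1.5 + 1.0/3, 1.5 + 2.0/3]).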
Eric and Reinier-

It seems to me that continuous (piecewise-linear) colormaps could work
in much the same fashion. One would specify n boundary colors and n
thresholds (for continuous colormaps, I believe that the number of
thresholds and colors must be the same), and for any value between two
thresholds, the colors associated with the bounding thresholds would be
automatically interpolated. What do you think?

Phillip

Eric Firing wrote:
> What does allow you to specify the transitions exactly (to within the
> limits of double precision) is this:
>
> cmap = ListedColormap(['r','g','b'])
> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N)
Eric Firing wrote:
> Phillip M. Feldman wrote:
>> Hello Eric-
>>
>> I'd like to understand the reason why you object to
>> piecewise-constant colormaps. I have found these to be useful for
>> some types of plots.
>
> It is a crude and indirect way of achieving a result that can be
> achieved precisely and directly using ListedColormap and BoundaryNorm,
> or possibly a subclass. The problem is that discretization is being
> done at the wrong stage.
>
> Suppose you have data in the range 1.5 to 2.5, and you want the low
> third of that range to be red, the middle green, and the upper blue.
> If you use LinearSegmentedColormap and one of your functions to make a
> discrete map, then you will have divided your interval of length 1
> into 256 segments, which does not allow you to specify exactly 1.5 +
> 1.0/3 as a transition point. You have only 8 bits of precision available.

OK. Good explanation.

> What does allow you to specify the transitions exactly (to within the
> limits of double precision) is this:
>
> cmap = ListedColormap(['r','g','b'])
> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N)
>
> Simple, readable, flexible: choose any boundaries you like, specify
> the colors any way you like, including pulling them out of an existing
> colormap. Efficient: the cmap lookup table has only as many entries
> as it needs, and the index into that table is calculated directly in a
> single step.
>
> Now if you need autoscaling, with the boundaries calculated from vmin
> and vmax, then this can be done by subclassing BoundaryNorm. In both
> cases, after using this cmap and norm for a mappable, passing that
> mappable to colorbar will yield a reasonable result, because colorbar
> has special code to handle the BoundaryNorm.
>
>> Also, the functionality to create piecewise-constant colormaps is
>> already provided by LinearSegmentedColormap, so "the cat is already
>> out of the bag", so to speak. (I created my functions mainly because I
>
> LinearSegmentedColormap is very general, so yes, it can be used for this.
>
>> found the LinearSegmentedColormap interface painful to use. Since then,
>
> That painfulness is exactly the reason why John Hunter added the
> from_list method 8 months ago (I forgot it had been there that long),
> and Reinier recently made it more flexible.

When I look at the online documentation for from_list, here's what I
see: "Make a linear segmented colormap with /name/ from a sequence of
/colors/ which evenly transitions from colors[0] at val=1 to colors[-1]
at val=1. N is the number of rgb quantization levels." There must be a
mistake here, because val=1 at both ends. Also, is there web
documentation for Reinier's new version?

Thanks!

Phillip
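A runnable rendering of that approach (my sketch; note that I pass the
full boundary list, including the endpoints 1.5 and 2.5, since
BoundaryNorm maps values in [boundaries[i], boundaries[i+1]) to color i,
so three colors take four boundaries):

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.colors import ListedColormap, BoundaryNorm

    # Data in [1.5, 2.5]; thirds of the range map to red, green, blue.
    cmap = ListedColormap(['r', 'g', 'b'])
    norm = BoundaryNorm([1.5, 1.5 + 1.0/3, 1.5 + 2.0/3, 2.5], cmap.N)

    data = np.random.uniform(1.5, 2.5, size=(10, 10))
    im = plt.imshow(data, cmap=cmap, norm=norm, interpolation='nearest')
    plt.colorbar(im)  # colorbar has special handling for BoundaryNorm
    plt.show()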
Andrew Straw wrote:
>>> As far as the test data -- I agree this is an issue. One point in favor
>>> of the status quo is that it's really nice to have the test data
>>> included with the source code so there are no configuration hassles. I'm
>>> not sure how well the buildbot infrastructure would cope with anything
>>> else. For example, to my knowledge, there is no Buildbot precedent to
>>> automatically pull from two branches to execute a single test run. But
>>> in general I think this does bear thinking about.
>>
>> An easy improvement may be having an extra kwarg on the image_comparison
>> decorator to select a subset of backends. For example, many of the ones
>> in test_simplification.py only apply to the Agg backend.
>
> Done in r7863. To make use of it, do something like the following patch
> (and don't forget to delete the baseline .pdf files from the repository):
>
> -@image_comparison(baseline_images=['simplify_curve'])
> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

Great!

>> While I'm sharing my wish list out loud, I think it would also be highly
>> cool to get the native Mac OS backend in the buildbot tests, as that's
>> one I can't test easily myself.
>
> That would require the Mac OS X buildslave to start working again too,
> as I assume the backend actually requires the OS. And that would require
> building on Snow Leopard to work, as I understand it.

Oh yeah. Forgot that detail. Well -- something to think about when the
other pieces fall into place.

Mike
Michael Droettboom wrote:
>> "inkscape input.svg --export-png=output.png" works very well as an svg
>> renderer.
>
> I'd also like to run SVG through xmllint against the SVG schema as
> another sanity check. I may get to this if I can find the time.

That'd be great. I just installed inkscape and xmllint on the non-bare
buildslave machine.

>> As far as the test data -- I agree this is an issue. One point in favor
>> of the status quo is that it's really nice to have the test data
>> included with the source code so there are no configuration hassles. I'm
>> not sure how well the buildbot infrastructure would cope with anything
>> else. For example, to my knowledge, there is no Buildbot precedent to
>> automatically pull from two branches to execute a single test run. But
>> in general I think this does bear thinking about.
>
> An easy improvement may be having an extra kwarg on the image_comparison
> decorator to select a subset of backends. For example, many of the ones
> in test_simplification.py only apply to the Agg backend.

Done in r7863. To make use of it, do something like the following patch
(and don't forget to delete the baseline .pdf files from the repository):

-@image_comparison(baseline_images=['simplify_curve'])
+@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

>> While I'm sharing my wish list out loud, I think it would also be highly
>> cool to get the native Mac OS backend in the buildbot tests, as that's
>> one I can't test easily myself.

That would require the Mac OS X buildslave to start working again too,
as I assume the backend actually requires the OS. And that would require
building on Snow Leopard to work, as I understand it.

-Andrew
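For anyone following along, a complete test using the new kwarg might
look like this (a sketch: only the decorator line comes from the r7863
patch above; the test body is made up):

    from matplotlib.testing.decorators import image_comparison
    import matplotlib.pyplot as plt

    # Restrict the comparison to png, i.e. the Agg backend.
    @image_comparison(baseline_images=['simplify_curve'], extensions=['png'])
    def test_simplify_curve():
        fig = plt.figure()
        ax = fig.add_subplot(111)
        ax.plot([0, 1, 2, 3], [0, 1, 0, 1])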
Andrew Straw wrote:
> Jouni K. Seppänen wrote:
>> Jouni K. Seppänen <jk...@ik...> writes:
>>
>> When we add new formats (comparing postscript files could easily be done
>> using the same ghostscript command as used for pdf files, and some svg
>> renderer could also be added)
>
> "inkscape input.svg --export-png=output.png" works very well as an svg
> renderer.

I'd also like to run SVG through xmllint against the SVG schema as
another sanity check. I may get to this if I can find the time.

>> and new tests, we'll have to think about
>> if we want to run all tests on all backends, since the amount of data in
>> the repository will start growing pretty fast.
>
> As far as the test data -- I agree this is an issue. One point in favor
> of the status quo is that it's really nice to have the test data
> included with the source code so there are no configuration hassles. I'm
> not sure how well the buildbot infrastructure would cope with anything
> else. For example, to my knowledge, there is no Buildbot precedent to
> automatically pull from two branches to execute a single test run. But
> in general I think this does bear thinking about.

An easy improvement may be having an extra kwarg on the image_comparison
decorator to select a subset of backends. For example, many of the ones
in test_simplification.py only apply to the Agg backend.

While I'm sharing my wish list out loud, I think it would also be highly
cool to get the native Mac OS backend in the buildbot tests, as that's
one I can't test easily myself.

Cheers,
Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Jouni K. Seppänen wrote:
> Jouni K. Seppänen <jk...@ik...> writes:
>
>> Oh, right. My fault: when I implemented the pdf comparison, I made it
>> run the test for only those formats for which a baseline image exists,
>
> I committed a change to make it run both png and pdf tests all the time.

Thanks for the fix, Jouni. (My svn commit was rejected because you did
exactly the same thing as me.)

> When we add new formats (comparing postscript files could easily be done
> using the same ghostscript command as used for pdf files, and some svg
> renderer could also be added)

"inkscape input.svg --export-png=output.png" works very well as an svg
renderer.

> and new tests, we'll have to think about
> if we want to run all tests on all backends, since the amount of data in
> the repository will start growing pretty fast.

As far as the test data -- I agree this is an issue. One point in favor
of the status quo is that it's really nice to have the test data
included with the source code so there are no configuration hassles. I'm
not sure how well the buildbot infrastructure would cope with anything
else. For example, to my knowledge, there is no Buildbot precedent to
automatically pull from two branches to execute a single test run. But
in general I think this does bear thinking about.

-Andrew
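Wrapped for use from the test machinery, that could look something like
this (the function itself is hypothetical; only the inkscape invocation
comes from the message above):

    import subprocess

    def svg_to_png(svg_path, png_path):
        # Rasterize an SVG with inkscape so it can be compared
        # pixel-wise, like the existing png tests.
        subprocess.check_call(
            ['inkscape', svg_path, '--export-png=%s' % png_path])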
Jouni K. Seppänen <jk...@ik...> writes:

> Oh, right. My fault: when I implemented the pdf comparison, I made it
> run the test for only those formats for which a baseline image exists,

I committed a change to make it run both png and pdf tests all the time.
When we add new formats (comparing postscript files could easily be done
using the same ghostscript command as used for pdf files, and some svg
renderer could also be added) and new tests, we'll have to think about
if we want to run all tests on all backends, since the amount of data in
the repository will start growing pretty fast.

--
Jouni K. Seppänen
http://www.iki.fi/jks
Michael Droettboom <md...@st...> writes:

> Unfortunately, it doesn't seem to be running the new test at all. If I
> put "assert False" at the top of the test, even that doesn't fail.
> If I remove the "image_comparison" decorator, however, the test will
> fail. Maybe this is because the baseline image doesn't exist yet?

Oh, right. My fault: when I implemented the pdf comparison, I made it
run the test for only those formats for which a baseline image exists,
to avoid causing spurious failures when a test is not even meant to be
run for all backends. I guess this should be done in some other way, and
perhaps the usual case is to test all backends for which the output can
be compared.

--
Jouni K. Seppänen
http://www.iki.fi/jks
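For reference, the per-format filtering described above amounts to
something like this (a guess at the logic, not the actual decorator
internals; the baseline file layout is assumed):

    import os

    def formats_to_test(baseline_dir, baseline_image,
                        formats=('png', 'pdf')):
        # Only run the comparison for formats whose baseline image
        # exists, e.g. baseline_dir/simplify_curve.png.
        return [fmt for fmt in formats
                if os.path.exists(os.path.join(
                    baseline_dir, '%s.%s' % (baseline_image, fmt)))]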
I went to create a new image comparison test related to the hatching bug
reported this morning. I added my test to the bottom of
test_simplification.py, and ran all the tests as follows:

python -c "import matplotlib; matplotlib.test()"

Unfortunately, it doesn't seem to be running the new test at all. If I
put "assert False" at the top of the test, even that doesn't fail. If I
remove the "image_comparison" decorator, however, the test will fail.
Maybe this is because the baseline image doesn't exist yet? In the past,
I've just run the tests, got a mismatch (because no baseline existed),
and copied the current image to the baseline image and checked that in.

Am I using the wrong workflow, or is this a bug?

Cheers,
Mike

--
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA