Hi, Anybody know what the status of AutoDateLocator/AutoDateFormatter in matplotlib.dates is? They work and seem reasonably well documented. However, they do not show up in our online docs: http://matplotlib.sourceforge.net/api/dates_api.html They show up in the inheritance graph, but are not mentioned elsewhere on the page and in fact have no link from the image. They're also not present in the __all__ in the dates module. If this is just an oversight, what do I need to do to make the classes show up in the docs? Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma
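For anyone unfamiliar with why a missing `__all__` entry hides a name: `from module import *` (and some doc tooling) consults `__all__` when it is defined. A minimal sketch of that behavior, using a throwaway module name (`fake_dates` and the classes in it are hypothetical stand-ins, not matplotlib's real module):

```python
import sys
import types

# Build a stand-in module to illustrate __all__ behavior; the names here
# only mimic the situation Ryan describes in matplotlib.dates.
mod = types.ModuleType("fake_dates")
mod.DateLocator = type("DateLocator", (), {})
mod.AutoDateLocator = type("AutoDateLocator", (), {})
mod.__all__ = ["DateLocator"]  # AutoDateLocator deliberately left out
sys.modules["fake_dates"] = mod

ns = {}
exec("from fake_dates import *", ns)

# Only names listed in __all__ survive a star-import.
print("DateLocator" in ns)      # True
print("AutoDateLocator" in ns)  # False
```

So adding the two classes to `__all__` in the dates module would be the first step toward making them visible.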
Andrew Straw wrote: > Michael Droettboom wrote: > >> A couple more comments about the test framework -- which has already >> paid for itself ten times over. In Numpy (and a number of local >> Python projects), I can 'cd' to the tests directory and do something >> like: >> >> nosetests test_simplification.py:test_hatch_simplify >> >> and run one particular test, or a single file of tests. It's a huge >> time saver when trying to fix a bug. However, with matplotlib I get: >> >> > nosetests test_simplification.py >> E >> ====================================================================== >> ERROR: Failure: ImportError (cannot import name cbook) >> <snip> >> I suspect this is something peculiar to how matplotlib gets imported. >> > > Yes, it would be very nice, I absolutely agree. I'm not sure what's > going on, either, but I agree that it would be nice to fix. See below > for an idea. > > >> Also, I have a quad-core machine, so I put this in my .noserc, which >> will run tests in parallel: >> >> [nosetests] >> processes=4 >> >> Unfortunately, due to however matplotlib is delegating to nose, this >> doesn't seem to get picked up. >> >> I don't know if I'll get a chance to look at these things right away, >> but thought I'd share in case the solutions are obvious to anyone >> else (which I know isn't good form, but hey... ;) >> > My guess is that this may actually be related to the first issue. On > this second issue, though, I have a specific idea -- in order for MPL > to pick up the nose plugin, I had to do the song and dance in test() of > matplotlib/__init__.py in which I create a nose.config.Config > instance. I suspect this is why your processes argument isn't getting > through -- we're completely bypassing any local nose config and > creating ours programmatically. I'm definitely not in favor of the big > song and dance, so if you can rip it out and still get the plugin to > load, that would be super. 
> > Once that is figured out, presumably the direct call to "nosetests > test_simplification.py:test_hatch_simplify" will also load the nose > plugins and thus not exhibit weird behavior when a known failure is > encountered. > > I almost certainly won't get a chance to look at these right away, so > if anyone wants to go spelunking in the nose/mpl interaction, feel free. I have a partial solution to these problems. You can now do (from any directory) nosetests matplotlib.tests and this automatically picks up nose parameters, so you can do multiprocessing, pdb, coverage and other nose niceties. Strangely, I still can't make running "nosetests" from the lib/matplotlib/tests directory work. The imports seem to get all screwed up in that case, presumably because nose is messing with sys.path. Unfortunately, nosetests -P (which is supposed to not touch sys.path) doesn't seem to help. I made sure that "import matplotlib; matplotlib.test()" still works, so the buildbots should be unaffected. As a side note, I disabled the xmllint testing since curl (which xmllint uses to fetch the SVG DTD) has some problems with caching in a multiprocess situation. It gives random "Operation in progress" errors. Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
Jouni K. Seppänen wrote: > Andrew Straw <str...@as...> writes: > > >> Sorry for not noticing this earlier, but I'm looking in the baseline >> image directory, and I see a bunch of *_pdf.png files. I guess these >> have been converted to png from pdf on the tester's machine. Do you think >> it makes more sense to have the .pdf files in the test repo and convert >> to png at test run time? This way we don't become dependent on gs >> rendering quirks or differences across pdf renderers. Maybe the files >> are also smaller. >> > > That's exactly how it works now, isn't it? You are seeing the *_pdf.png > files because those get produced while running the tests from the *.pdf > files, and they are not checked into the svn repository. > Sorry for the false alarm. My image browser was showing the .png and .svg files, but not the .pdfs for some reason. I assumed it was showing all the files in the directory, which led to this false conclusion. -Andrew
Andrew Straw <str...@as...> writes: > I just noticed the test_axes/hexbin_extent.svg baseline image is more > than 5 MB. I wonder if we can somehow simplify the svg generated to > reduce the file size or if we should perhaps just not do the svg > extension on this test? How about keeping them gzipped? If you gzip an svg file and name it something.svgz, inkscape seems to open it fine. -- Jouni K. Seppänen http://www.iki.fi/jks
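Jouni's .svgz suggestion can be mechanized with nothing but the standard library, since .svgz is simply gzip-compressed SVG (which Inkscape and most browsers open transparently). A minimal sketch; the file name and the tiny SVG payload are made up for illustration:

```python
import gzip
import os
import tempfile

# A trivial SVG document standing in for a large baseline image.
svg = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="1" height="1"/></svg>'

path = os.path.join(tempfile.mkdtemp(), "hexbin_extent.svgz")

# Write the baseline compressed.
with gzip.open(path, "wt", encoding="utf-8") as f:
    f.write(svg)

# Reading it back yields the original markup unchanged.
with gzip.open(path, "rt", encoding="utf-8") as f:
    roundtrip = f.read()

print(roundtrip == svg)  # True
```

For highly repetitive output like the 5 MB hexbin baseline, gzip typically shrinks the file dramatically, which is the point of the suggestion.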
Andrew Straw <str...@as...> writes: > Sorry for not noticing this earlier, but I'm looking in the baseline > image directory, and I see a bunch of *_pdf.png files. I guess these > have been converted to png from pdf on the tester's machine. Do you think > it makes more sense to have the .pdf files in the test repo and convert > to png at test run time? This way we don't become dependent on gs > rendering quirks or differences across pdf renderers. Maybe the files > are also smaller. That's exactly how it works now, isn't it? You are seeing the *_pdf.png files because those get produced while running the tests from the *.pdf files, and they are not checked into the svn repository. -- Jouni K. Seppänen http://www.iki.fi/jks
I suspect for that one we can just do without it. It isn't really testing anything SVG-specific. Mike Andrew Straw wrote: > Michael Droettboom wrote: >> I've committed support for comparing SVG files using Inkscape and >> verifying them against the official SVG DTD using xmllint. > I just noticed the test_axes/hexbin_extent.svg baseline image is more > than 5 MB. I wonder if we can somehow simplify the svg generated to > reduce the file size or if we should perhaps just not do the svg > extension on this test? > > -Andrew -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
Michael Droettboom wrote: > I've committed support for comparing SVG files using Inkscape and > verifying them against the official SVG DTD using xmllint. I just noticed the test_axes/hexbin_extent.svg baseline image is more than 5 MB. I wonder if we can somehow simplify the svg generated to reduce the file size or if we should perhaps just not do the svg extension on this test? -Andrew
Jouni K. Seppänen wrote: > I am thinking about adding pdf comparison ability to compare_images. One > simple way to do this would be to convert pdf files to pngs using > Ghostscript: if we store reference pdf files, and both the reference > file and the result of the test are converted with exactly the > same version of gs, there should be no font-rendering or antialiasing > mismatches. > > Can we assume that all test computers will have some version of > Ghostscript installed and callable as "gs"? > > Hi Jouni, Sorry for not noticing this earlier, but I'm looking in the baseline image directory, and I see a bunch of *_pdf.png files. I guess these have been converted to png from pdf on the tester's machine. Do you think it makes more sense to have the .pdf files in the test repo and convert to png at test run time? This way we don't become dependent on gs rendering quirks or differences across pdf renderers. Maybe the files are also smaller. -Andrew
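For the record, a sketch of what such a pdf-to-png conversion call might look like. The helper name is made up (the real test framework may invoke gs differently); the flags themselves are standard Ghostscript options, and actually running the command of course requires gs on the PATH:

```python
def gs_pdf_to_png_argv(pdf_path, png_path, dpi=100):
    """Build an argv list that rasterizes a PDF to PNG via Ghostscript.

    Hypothetical helper, sketched for illustration only.
    """
    return [
        "gs",
        "-dSAFER", "-dBATCH", "-dNOPAUSE",  # non-interactive, restricted mode
        "-sDEVICE=png16m",                  # 24-bit RGB PNG output device
        "-r%d" % dpi,                       # rasterization resolution in dpi
        "-sOutputFile=%s" % png_path,
        pdf_path,
    ]

argv = gs_pdf_to_png_argv("baseline.pdf", "baseline_pdf.png")
print(" ".join(argv))
# subprocess.check_call(argv) would perform the actual conversion
```

Using the same gs version on both the reference and test files, as Jouni proposes, keeps rendering differences out of the comparison entirely.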
Gellule Xg wrote: >>> This is a bug report for matplotlib version 0.99.1.1 >>> >>> The clip_path keyword of imshow() does not work when setting it to >>> (Path, Transform) for two reasons: >>> >> Hi, Thanks for the report. Do you have a simple test script that we can >> use to see the problem and then fix it? We will then use this as the >> basis to create a test function to make sure the issue never "crops" up >> again. >> >> -Andrew >> > > Hi Andrew, > Please try the following. It's a script that clips an NxN image based on > its contour. > Thanks, > -Julien > Hi Julien, I made a couple of fixes to the MPL 0.99 maintenance branch and then added a modified version of your example in the tests. See http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/lib/matplotlib/tests/test_axes.py?r1=7870&r2=7869&pathrev=7870 You should be able to use this approach to do what you were originally after. My actual fixes to MPL were in r7866 and r7867. I had to take a slightly different approach than your suggestion. The approach I implemented based on your initial patch doesn't access private variables or change the way calling setter methods works. -Andrew
Michael Droettboom wrote: > I've committed support for comparing SVG files using Inkscape and > verifying them against the official SVG DTD using xmllint. > Man, are we standards compliant around here or what? :) Cool. > Michael Droettboom wrote: > >> Andrew Straw wrote: >> >> >>> Done in r7863. To make use of it, do something like the following patch >>> (and don't forget to delete the baseline .pdf files from the repository): >>> >>> -@image_comparison(baseline_images=['simplify_curve']) >>> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png']) >>> >>> >>> >> Great! >> >> > This is a nice feature. However, in hindsight, I may not use it right > away -- I actually found a bug in the SVG backend using one of the tests > I assumed would only affect the Agg backend. :) > I think it's good not to use the feature very much. I've already found it handy when developing against a test -- you only need to generate that test's image once. > A couple more comments about the test framework -- which has already > paid for itself ten times over. In Numpy (and a number of local Python > projects), I can 'cd' to the tests directory and do something like: > > nosetests test_simplification.py:test_hatch_simplify > > and run one particular test, or a single file of tests. It's a huge time > saver when trying to fix a bug. However, with matplotlib I get: > > > nosetests test_simplification.py > E > ====================================================================== > ERROR: Failure: ImportError (cannot import name cbook) > <snip> > I suspect this is something peculiar to how matplotlib gets imported. > Yes, it would be very nice, I absolutely agree. I'm not sure what's going on, either, but I agree that it would be nice to fix. See below for an idea. 
> Also, I have a quad-core machine, so I put this in my .noserc, which > will run tests in parallel: > > [nosetests] > processes=4 > > Unfortunately, due to however matplotlib is delegating to nose, this > doesn't seem to get picked up. > > I don't know if I'll get a chance to look at these things right away, > but thought I'd share in case the solutions are obvious to anyone else > (which I know isn't good form, but hey... ;) > My guess is that this may actually be related to the first issue. On this second issue, though, I have a specific idea -- in order for MPL to pick up the nose plugin, I had to do the song and dance in test() of matplotlib/__init__.py in which I create a nose.config.Config instance. I suspect this is why your processes argument isn't getting through -- we're completely bypassing any local nose config and creating ours programmatically. I'm definitely not in favor of the big song and dance, so if you can rip it out and still get the plugin to load, that would be super. Once that is figured out, presumably the direct call to "nosetests test_simplification.py:test_hatch_simplify" will also load the nose plugins and thus not exhibit weird behavior when a known failure is encountered. I almost certainly won't get a chance to look at these right away, so if anyone wants to go spelunking in the nose/mpl interaction, feel free. -Andrew
I've committed support for comparing SVG files using Inkscape and verifying them against the official SVG DTD using xmllint. Michael Droettboom wrote: > Andrew Straw wrote: > >> Done in r7863. To make use of it, do something like the following patch >> (and don't forget to delete the baseline .pdf files from the repository): >> >> -@image_comparison(baseline_images=['simplify_curve']) >> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png']) >> >> > Great! > This is a nice feature. However, in hindsight, I may not use it right away -- I actually found a bug in the SVG backend using one of the tests I assumed would only affect the Agg backend. :) A couple more comments about the test framework -- which has already paid for itself ten times over. In Numpy (and a number of local Python projects), I can 'cd' to the tests directory and do something like: nosetests test_simplification.py:test_hatch_simplify and run one particular test, or a single file of tests. It's a huge time saver when trying to fix a bug. 
However, with matplotlib I get: > nosetests test_simplification.py E ====================================================================== ERROR: Failure: ImportError (cannot import name cbook) ---------------------------------------------------------------------- Traceback (most recent call last): File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/loader.py", line 379, in loadTestsFromName addr.filename, addr.module) File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 39, in importFromPath return self.importFromDir(dir_path, fqname) File "/wonkabar/data1/usr/lib/python2.5/site-packages/nose-0.11.1-py2.5.egg/nose/importer.py", line 86, in importFromDir mod = load_module(part_fqname, fh, filename, desc) File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/tests/test_simplification.py", line 4, in <module> import matplotlib.pyplot as plt File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/pyplot.py", line 6, in <module> from matplotlib import docstring File "/wonkabar/data1/builds/matplotlib/build/lib.linux-i686-2.5/matplotlib/docstring.py", line 1, in <module> from matplotlib import cbook ImportError: cannot import name cbook ---------------------------------------------------------------------- Ran 1 test in 0.009s FAILED (errors=1) I suspect this is something peculiar to how matplotlib gets imported. Also, I have a quad-core machine, so I put this in my .noserc, which will run tests in parallel: [nosetests] processes=4 Unfortunately, due to however matplotlib is delegating to nose, this doesn't seem to get picked up. I don't know if I'll get a chance to look at these things right away, but thought I'd share in case the solutions are obvious to anyone else (which I know isn't good form, but hey... 
;) Cheers, Mike -- Michael Droettboom Science Software Branch Operations and Engineering Division Space Telescope Science Institute Operated by AURA for NASA
Hello, With the caveat that I have not closely followed this mailing list, I thought I would offer an alternative to what seems to be the preferred method for building matplotlib on Mac OS X. I tried to skim through recent mailing list posts to see if anything like this had been done, and did not immediately find anything. Forgive me if I'm repeating! As of Mac OS X 10.6.1, if you have X11 installed, you can build and use matplotlib without downloading anything else. (Unfortunately I have not yet had the opportunity to try this procedure out on a Leopard system.) First, I created a new setup.cfg on full auto: $ diff setup.cfg.template setup.cfg 25,26c25,26 < #pytz = False < #dateutil = False --- > pytz = auto > dateutil = auto 53,57c53,57 < #gtk = False < #gtkagg = False < #tkagg = False < #wxagg = False < #macosx = False --- > gtk = auto > gtkagg = auto > tkagg = auto > wxagg = auto > macosx = auto Then, I altered setupext.py so that the darwin config works like other operating systems. The difference is that I'm telling the script to look in /usr/X11, where freetype and libpng can be found on Mac OS X 10.6.1. $ diff setupext.py.orig setupext.py 53,54c53,54 < '_darwin' : ['/sw/lib/freetype2', '/sw/lib/freetype219', '/usr/local', < '/usr', '/sw'], --- > # '_darwin' : ['/sw/lib/freetype2', '/sw/lib/freetype219', '/usr/local', > # '/usr', '/sw'], 60c60 < 'darwin' : [], --- > 'darwin' : ['/usr', '/usr/X11',], Then, I run: $ sudo python setupegg.py install And matplotlib builds as a universal binary and installs an egg into /Library/Python/2.6/site-packages/ The setup process detects the included numpy 1.2.1 and freetype2 (unknown version*), and also detects libpng (unknown version*), Tkinter 67083, Tk 8.5, and Tcl 8.5. 
* freetype is 2.3.9; see /usr/X11/include/freetype2/freetype/freetype.h * libpng is 1.2.35; see /usr/X11/include/libpng12/png.h On a 64-bit system in the default configuration (like both of mine), there is no 64-bit version of the wxPython libraries to load, and so it is not detected. However, if I do: $ export VERSIONER_PYTHON_PREFER_32_BIT=yes ### see the python man page Then the setup process detects wxPython 2.8.8.1. I filed a bug with Apple about the lack of a 64-bit wxPython library, and it is Apple Problem ID # 7294359, in case anyone cares. The setup process does not detect Gtk+, Qt(4), or Cairo, because these are not included with Mac OS X (that I know of). Hope this is helpful! -Dan Huff ps -- As an aside, I create & edit ~/.pydistutils.cfg to allow a per-user install as follows: $ cat ~/.pydistutils.cfg [install] install_lib = $HOME/Library/Python/$py_version_short/site-packages install_scripts = $HOME/usr/local/bin $ mkdir -p ~/Library/Python/2.6/site-packages $ python setupegg.py install This allows for more isolated testing in a sandbox user account. (end aside)
Andrew Straw wrote: > David Cournapeau wrote: > >> Hi, >> >> I don't know if that's of any interest for matplotlib developers, >> but I added scripts to build matplotlib with numscons: >> >> http://github.com/cournape/matplotlib/tree/scons_build >> >> > OK, I managed to clone your repo -- I cloned mine, then added yours as a > remote, fetched it, and pushed the results to a new branch on my github > repo: http://github.com/astraw/matplotlib/tree/dev/cournapeau-scons-build > Ok - I still don't understand why it does not work the usual way, though. > But having done that, now I'm having trouble building. Calling with > "python setupscons.py install", I get: > > Traceback (most recent call last): > File "setupscons.py", line 232, in <module> > setup_package() > File "setupscons.py", line 228, in setup_package > configuration=configuration) > File "/usr/lib/python2.6/dist-packages/numpy/distutils/core.py", line > 150, in setup > config = configuration() > File "setupscons.py", line 197, in configuration > config.add_sconscript('SConstruct', package_path='lib/matplotlib') > TypeError: add_sconscript() got an unexpected keyword argument > 'package_path' > > What version of numpy do I need for this? You need a very recent one (a few days old): the package_path was added for matplotlib, as it uses a slightly unusual source organization. cheers, David
I'm not sure whether I'm correctly understanding you. Let's consider a hypothetical engineering performance-versus-requirements scatter plot or contour plot of a performance metric that takes positive values. I'd like to map values to colors so that anything below 50 gets solid red, values from 50 to 70 get a color between red and orange (linear interpolation), values from 70 to 90 get a color between orange and yellow, values from 90 to 105 get a color between yellow and green, and anything greater than or equal to 105 gets solid green. Is this considered "smooth variation"? If so, how would I implement something like this? Thanks! Phillip Eric Firing wrote: > Phillip M. Feldman wrote: >> Eric and Reinier- >> >> It seems to me that continuous (piecewise-linear) colormaps could >> work in much the same fashion. One would specify n boundary colors >> and n thresholds (for continuous colormaps, I believe that the number >> of thresholds and colors must be the same), and for any value between >> two thresholds, the colors associated with the bounding thresholds >> would be automatically interpolated. What do you think? > > How does this differ from LinearSegmentedColormap.from_list()? I > guess what you are getting at is the quantization problem I mentioned > in connection with discrete colormaps. But it is not a problem when > the colors are linearly interpolated--that is, smoothly varying from > one end of the map to the other. It is only a problem when there are > jumps. > > Eric > >> >> Phillip >> >> Eric Firing wrote: >>> What does allow you to specify the transitions exactly (to within >>> the limits of double precision) is this: >>> >>> cmap = ListedColormap(['r','g','b']) >>> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N) >> > >
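Independent of matplotlib's API, the mapping Phillip describes is plain piecewise-linear interpolation over anchor points, with clamping below the first threshold and above the last. A minimal sketch of just that logic (the RGB tuples chosen for red, orange, yellow, and green are assumptions; in matplotlib one would build the equivalent with LinearSegmentedColormap.from_list plus set_under/set_over):

```python
# Anchor thresholds and (assumed) RGB colors, in increasing order.
ANCHORS = [
    (50.0,  (1.0, 0.0, 0.0)),   # red
    (70.0,  (1.0, 0.5, 0.0)),   # orange
    (90.0,  (1.0, 1.0, 0.0)),   # yellow
    (105.0, (0.0, 1.0, 0.0)),   # green
]

def value_to_rgb(v):
    """Clamp outside the anchor range; interpolate linearly inside it."""
    if v <= ANCHORS[0][0]:
        return ANCHORS[0][1]
    if v >= ANCHORS[-1][0]:
        return ANCHORS[-1][1]
    for (x0, c0), (x1, c1) in zip(ANCHORS, ANCHORS[1:]):
        if x0 <= v <= x1:
            t = (v - x0) / (x1 - x0)
            return tuple(a + t * (b - a) for a, b in zip(c0, c1))

print(value_to_rgb(40))   # solid red: (1.0, 0.0, 0.0)
print(value_to_rgb(60))   # halfway red -> orange: (1.0, 0.25, 0.0)
print(value_to_rgb(120))  # solid green: (0.0, 1.0, 0.0)
```

This is exactly the "smooth variation" case Eric mentions: colors change continuously between anchors, with hard clamps only at the two ends.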
Phillip M. Feldman wrote: > Eric and Reinier- > > It seems to me that continuous (piecewise-linear) colormaps could work > in much the same fashion. One would specify n boundary colors and n > thresholds (for continuous colormaps, I believe that the number of > thresholds and colors must be the same), and for any value between two > thresholds, the colors associated with the bounding thresholds would be > automatically interpolated. What do you think? How does this differ from LinearSegmentedColormap.from_list()? I guess what you are getting at is the quantization problem I mentioned in connection with discrete colormaps. But it is not a problem when the colors are linearly interpolated--that is, smoothly varying from one end of the map to the other. It is only a problem when there are jumps. Eric > > Phillip > > Eric Firing wrote: >> What does allow you to specify the transitions exactly (to within the >> limits of double precision) is this: >> >> cmap = ListedColormap(['r','g','b']) >> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N) >
Phillip M. Feldman wrote: > > When I look at the online documentation for from_list, here's what I > see: "Make a linear segmented colormap with /name/ from a sequence of > /colors/ which evenly transitions from colors[0] at val=1 to colors[-1] > at val=1. N is the number of rgb quantization levels." There must be a > mistake here, because val=1 at both ends. Also, is there web > documentation for Reinier's new version? I'm not sure I know what you mean by web documentation, but in any case, I think all there is at present is the docstring. And yes, that docstring needs work. I'll try to correct and clarify it. Eric
David Cournapeau wrote: > Hi, > > I don't know if that's of any interest for matplotlib developers, > but I added scripts to build matplotlib with numscons: > > http://github.com/cournape/matplotlib/tree/scons_build > OK, I managed to clone your repo -- I cloned mine, then added yours as a remote, fetched it, and pushed the results to a new branch on my github repo: http://github.com/astraw/matplotlib/tree/dev/cournapeau-scons-build But having done that, now I'm having trouble building. Calling with "python setupscons.py install", I get: Traceback (most recent call last): File "setupscons.py", line 232, in <module> setup_package() File "setupscons.py", line 228, in setup_package configuration=configuration) File "/usr/lib/python2.6/dist-packages/numpy/distutils/core.py", line 150, in setup config = configuration() File "setupscons.py", line 197, in configuration config.add_sconscript('SConstruct', package_path='lib/matplotlib') TypeError: add_sconscript() got an unexpected keyword argument 'package_path' What version of numpy do I need for this? I might have to build a new chroot, since I want the Ubuntu Hardy chroot I'm currently using to stick with Hardy's default numpy installation. -Andrew
Hi, I'm trying to get something like texture mapping to work. (I don't need any fancy transforms between texel location and image location, though. I'm happy to specify just a 2D grid of pixel colors that appear onto a rectangle positioned in 3D space.) Given that, I made a demo based on pcolor which I think should do the job with a little work. I based this mainly on examples/mplot3d/pathpatch3d_demo.py, which is pretty close to this. Here's my demo: http://github.com/astraw/matplotlib/blob/dev/straw-poormans-texture/examples/mplot3d/poor_mans_texture_map.py Right now, though, running this results in the following traceback: $ python poor_mans_texture_map.py Traceback (most recent call last): File "poor_mans_texture_map.py", line 21, in <module> art3d.patch_collection_2d_to_3d(s, zdir="x") File "/usr/local/lib/python2.6/dist-packages/mpl_toolkits/mplot3d/art3d.py", line 301, in patch_collection_2d_to_3d col.set_3d_properties(zs, zdir) File "/usr/local/lib/python2.6/dist-packages/mpl_toolkits/mplot3d/art3d.py", line 279, in set_3d_properties xs, ys = zip(*self.get_offsets()) ValueError: need more than 0 values to unpack Reinier -- can you comment on how the patch offsets are supposed to be interpreted as x,y values in this context? From the docstring at lib/matplotlib/collections.py, I read "*offsets* and *transOffset* are used to translate the patch after rendering (default no offsets)". It's not immediately clear to me how the offsets should then be resulting in X,Y values for 3D plotting. Or is something else entirely going on here? I'm happy to do some leg-work here if you point me in the right direction. -Andrew
I'd like to see a single function that combines ListedColormap and BoundaryNorm. This function could compare the lengths of the color list and threshold list to determine what type of colormap is desired. (If the lengths are the same, then the calling program wants a continuous colormap; if there is one more color than boundaries, the calling program wants a discrete colormap). If this function had optional arguments to specify the `under` and `over` colors, that would be even better. Phillip Phillip M. Feldman wrote: > Eric and Reinier- > > It seems to me that continuous (piecewise-linear) colormaps could work > in much the same fashion. One would specify n boundary colors and n > thresholds (for continuous colormaps, I believe that the number of > thresholds and colors must be the same), and for any value between two > thresholds, the colors associated with the bounding thresholds would > be automatically interpolated. What do you think? > > Phillip > > Eric Firing wrote: >> What does allow you to specify the transitions exactly (to within the >> limits of double precision) is this: >> >> cmap = ListedColormap(['r','g','b']) >> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N) > >
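The dispatch rule Phillip proposes is easy to state in code. This sketch only classifies the request from the list lengths (the function name and return values are hypothetical; a real implementation would go on to build the matplotlib cmap/norm pair and apply the optional under/over colors):

```python
def classify_colormap_request(colors, thresholds):
    """Decide, from list lengths alone, which kind of colormap is wanted."""
    if len(colors) == len(thresholds):
        return "continuous"   # interpolate between boundary colors
    if len(colors) == len(thresholds) + 1:
        return "discrete"     # one constant color per interval
    raise ValueError(
        "need len(colors) == len(thresholds) or len(thresholds) + 1"
    )

print(classify_colormap_request(["r", "g", "b"], [0.0, 0.5, 1.0]))  # continuous
print(classify_colormap_request(["r", "g", "b"], [0.5, 1.0]))       # discrete
```

The same length comparison would also make malformed input an immediate, explicit error rather than a silently wrong colormap.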
Eric and Reinier- It seems to me that continuous (piecewise-linear) colormaps could work in much the same fashion. One would specify n boundary colors and n thresholds (for continuous colormaps, I believe that the number of thresholds and colors must be the same), and for any value between two thresholds, the colors associated with the bounding thresholds would be automatically interpolated. What do you think? Phillip Eric Firing wrote: > What does allow you to specify the transitions exactly (to within the > limits of double precision) is this: > > cmap = ListedColormap(['r','g','b']) > norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N)
Eric Firing wrote:
> Phillip M. Feldman wrote:
>> Hello Eric-
>>
>> I'd like to understand the reason why you object to
>> piecewise-constant colormaps. I have found these to be useful for
>> some types of plots.
>
> It is a crude and indirect way of achieving a result that can be
> achieved precisely and directly using ListedColormap and BoundaryNorm,
> or possibly a subclass. The problem is that discretization is being
> done at the wrong stage.
>
> Suppose you have data in the range 1.5 to 2.5, and you want the low
> third of that range to be red, the middle green, and the upper blue.
> If you use LinearSegmentedColormap and one of your functions to make a
> discrete map, then you will have divided your interval of length 1
> into 256 segments, which does not allow you to specify exactly
> 1.5 + 1.0/3 as a transition point.

You have only 8 bits of precision available. OK. Good explanation.

> What does allow you to specify the transitions exactly (to within the
> limits of double precision) is this:
>
> cmap = ListedColormap(['r','g','b'])
> norm = BoundaryNorm([1.5+1.0/3, 1.5+2.0/3], cmap.N)
>
> Simple, readable, flexible: choose any boundaries you like, specify
> the colors any way you like, including pulling them out of an existing
> colormap. Efficient: the cmap lookup table has only as many entries
> as it needs, and the index into that table is calculated directly in a
> single step.
>
> Now if you need autoscaling, with the boundaries calculated from vmin
> and vmax, then this can be done by subclassing BoundaryNorm. In both
> cases, after using this cmap and norm for a mappable, passing that
> mappable to colorbar will yield a reasonable result, because colorbar
> has special code to handle the BoundaryNorm.
>
>> Also, the functionality to create piecewise-constant colormaps is
>> already provided by LinearSegmentedColormap, so "the cat is already
>> out of the bag", so to speak. (I created my functions mainly because I
>> found the LinearSegmentedColormap interface painful to use. Since then,
>
> LinearSegmentedColormap is very general, so yes, it can be used for this.
>
> That painfulness is exactly the reason why John Hunter added the
> from_list method 8 months ago (I forgot it had been there that long),
> and Reinier recently made it more flexible.

When I look at the online documentation for from_list, here's what I see:

"Make a linear segmented colormap with /name/ from a sequence of /colors/ which evenly transitions from colors[0] at val=1 to colors[-1] at val=1. N is the number of rgb quantization levels."

There must be a mistake here, because it says val=1 at both ends (presumably the first should read val=0). Also, is there web documentation for Reinier's new version?

Thanks!

Phillip
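For contrast with the continuous proposal, what the ListedColormap/BoundaryNorm pair in Eric's example computes is essentially a binned lookup; a minimal numpy sketch of that idea (not the actual matplotlib implementation, which also handles normalization, clipping, and over/under colors):

```python
import numpy as np

# The colors of ListedColormap(['r', 'g', 'b']) as RGB tuples.
colors = np.array([(1, 0, 0), (0, 1, 0), (0, 0, 1)], dtype=float)

def boundary_lookup(values, boundaries, colors):
    """Sketch of a BoundaryNorm-style lookup: values below the first
    boundary get colors[0], values at or above the last boundary get
    colors[-1], and each interior bin selects exactly one color --
    no interpolation, and the boundaries are exact floats."""
    idx = np.searchsorted(boundaries, np.atleast_1d(values), side='right')
    return colors[idx]

boundaries = [1.5 + 1.0/3, 1.5 + 2.0/3]
out = boundary_lookup([1.6, 2.0, 2.4], boundaries, colors)
# 1.6 falls in the low third (red), 2.0 in the middle (green),
# 2.4 in the upper third (blue)
```

This is why the boundaries can be specified to double precision: the bin edges are stored as-is instead of being quantized into a 256-entry lookup table.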
Andrew Straw wrote:
>>> As far as the test data -- I agree this is an issue. One point in favor
>>> of the status quo is that it's really nice to have the test data
>>> included with the source code so there are no configuration hassles. I'm
>>> not sure how well the buildbot infrastructure would cope with anything
>>> else. For example, to my knowledge, there is no Buildbot precedent to
>>> automatically pull from two branches to execute a single test run. But
>>> in general I think this does bear thinking about.
>>
>> An easy improvement may be having an extra kwarg on the image_comparison
>> decorator to select a subset of backends. For example, many of the ones
>> in test_simplification.py only apply to the Agg backend.
>
> Done in r7863. To make use of it, do something like the following patch
> (and don't forget to delete the baseline .pdf files from the repository):
>
> -@image_comparison(baseline_images=['simplify_curve'])
> +@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

Great!

>> While I'm sharing my wish list out loud, I think it would also be highly
>> cool to get the native Mac OS backend in the buildbot tests, as that's
>> one I can't test easily myself.
>
> That would require the Mac OS X buildslave to start working again too,
> as I assume the backend actually requires the OS. And that would require
> building on Snow Leopard to work, as I understand it.

Oh yeah. Forgot that detail. Well -- something to think about when the other pieces fall into place.

Mike
Michael Droettboom wrote:
>> "inkscape input.svg --export-png=output.png" works very well as an svg
>> renderer.
>
> I'd also like to run SVG through xmllint against the SVG schema as
> another sanity check. I may get to this if I can find the time.

That'd be great. I just installed inkscape and xmllint on the non-bare buildslave machine.

>> As far as the test data -- I agree this is an issue. One point in favor
>> of the status quo is that it's really nice to have the test data
>> included with the source code so there are no configuration hassles. I'm
>> not sure how well the buildbot infrastructure would cope with anything
>> else. For example, to my knowledge, there is no Buildbot precedent to
>> automatically pull from two branches to execute a single test run. But
>> in general I think this does bear thinking about.
>
> An easy improvement may be having an extra kwarg on the image_comparison
> decorator to select a subset of backends. For example, many of the ones
> in test_simplification.py only apply to the Agg backend.

Done in r7863. To make use of it, do something like the following patch (and don't forget to delete the baseline .pdf files from the repository):

-@image_comparison(baseline_images=['simplify_curve'])
+@image_comparison(baseline_images=['simplify_curve'],extensions=['png'])

> While I'm sharing my wish list out loud, I think it would also be highly
> cool to get the native Mac OS backend in the buildbot tests, as that's
> one I can't test easily myself.

That would require the Mac OS X buildslave to start working again too, as I assume the backend actually requires the OS. And that would require building on Snow Leopard to work, as I understand it.

-Andrew
Andrew Straw wrote:
> Jouni K. Seppänen wrote:
>> Jouni K. Seppänen <jk...@ik...> writes:
>>
>> When we add new formats (comparing postscript files could easily be done
>> using the same ghostscript command as used for pdf files, and some svg
>> renderer could also be added)
>
> "inkscape input.svg --export-png=output.png" works very well as an svg
> renderer.

I'd also like to run SVG through xmllint against the SVG schema as another sanity check. I may get to this if I can find the time.

>> and new tests, we'll have to think about
>> if we want to run all tests on all backends, since the amount of data in
>> the repository will start growing pretty fast.
>
> As far as the test data -- I agree this is an issue. One point in favor
> of the status quo is that it's really nice to have the test data
> included with the source code so there are no configuration hassles. I'm
> not sure how well the buildbot infrastructure would cope with anything
> else. For example, to my knowledge, there is no Buildbot precedent to
> automatically pull from two branches to execute a single test run. But
> in general I think this does bear thinking about.

An easy improvement may be having an extra kwarg on the image_comparison decorator to select a subset of backends. For example, many of the ones in test_simplification.py only apply to the Agg backend.

While I'm sharing my wish list out loud, I think it would also be highly cool to get the native Mac OS backend in the buildbot tests, as that's one I can't test easily myself.

Cheers,
Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
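The extensions kwarg discussed above (added in r7863) can be sketched as a plain-Python decorator that restricts which output formats a test generates. The sketch below is illustrative only and is not the actual matplotlib testing machinery, which renders real figures and diffs them against baseline files:

```python
def image_comparison(baseline_images, extensions=('png', 'pdf', 'svg')):
    """Illustrative sketch of a decorator with an `extensions` kwarg.
    Instead of rendering and comparing files, it just returns the
    (image, extension) pairs that would be compared."""
    def decorator(func):
        def wrapper(*args, **kwargs):
            func(*args, **kwargs)  # the test draws its figure here
            return [(img, ext) for img in baseline_images
                               for ext in extensions]
        return wrapper
    return decorator

@image_comparison(baseline_images=['simplify_curve'], extensions=['png'])
def test_simplify_curve():
    pass  # a real test would build an Agg-only figure here

comparisons = test_simplify_curve()
# only the png baseline is compared; no pdf/svg baselines are needed
```

Filtering at the decorator level is what lets Agg-only tests (like those in test_simplification.py) skip generating and storing pdf/svg baseline images entirely.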
Jouni K. Seppänen wrote:
> Jouni K. Seppänen <jk...@ik...> writes:
>
>> Oh, right. My fault: when I implemented the pdf comparison, I made it
>> run the test for only those formats for which a baseline image exists,
>
> I committed a change to make it run both png and pdf tests all the time.

Thanks for the fix, Jouni. (My svn commit was rejected because you did exactly the same thing as me.)

> When we add new formats (comparing postscript files could easily be done
> using the same ghostscript command as used for pdf files, and some svg
> renderer could also be added)

"inkscape input.svg --export-png=output.png" works very well as an svg renderer.

> and new tests, we'll have to think about
> if we want to run all tests on all backends, since the amount of data in
> the repository will start growing pretty fast.

As far as the test data -- I agree this is an issue. One point in favor of the status quo is that it's really nice to have the test data included with the source code so there are no configuration hassles. I'm not sure how well the buildbot infrastructure would cope with anything else. For example, to my knowledge, there is no Buildbot precedent to automatically pull from two branches to execute a single test run. But in general I think this does bear thinking about.

-Andrew
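Once a pdf, ps, or svg file has been rasterized to png (via ghostscript or inkscape, as above), the comparison itself reduces to a pixel difference against the baseline image. A minimal numpy sketch of such an RMS check (the function name and tolerance value here are illustrative, not matplotlib's actual comparison code):

```python
import numpy as np

def rms_difference(expected, actual):
    """Root-mean-square pixel difference between two equal-shape images,
    as a single scalar (0.0 means the images are identical)."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((expected - actual) ** 2)))

baseline = np.zeros((4, 4))   # stand-in for a decoded baseline png
rendered = baseline.copy()
rendered[0, 0] = 16.0         # one pixel differs after rendering
err = rms_difference(baseline, rendered)
# sqrt(16**2 / 16) = 4.0
TOLERANCE = 10.0              # illustrative threshold, not matplotlib's
assert err < TOLERANCE
```

A scalar tolerance like this is what allows small rasterizer-to-rasterizer differences (antialiasing, font rendering) to pass while genuine regressions fail.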