Interesting result. I pulled all of the new "actual" files from the 21 failing tests on the buildbots to my local machine, and all of those tests now pass for me. Good.

Interestingly, there are still two tests failing on my machine which did not fail on the buildbots, so I can't grab the buildbots' new output. Could this just be a thresholding issue for the tolerance value? I'm a little wary of "polluting" the baseline images with images from my machine, which doesn't have our "standard" version of Freetype, so I'll leave those out of SVN for now, but will go ahead and commit the new baseline images from the buildbots. Assuming these two mystery failures are resolved by pulling new images from the buildbots, I think this experiment with turning off hinting is a success.

As an aside, is there an easy way to update the baselines I'm missing? At the moment I'm copying each result file to the correct folder under tests/baseline_images, but it takes me a while because I don't know the hierarchy by heart and there are 22 failures. I was expecting to just manually verify everything was ok, then "cp *.png" from my scratch tests folder to baseline_images and let SVN take care of which files had actually changed.

This is just the naive feedback of a new set of eyes: it's extremely useful and powerful what you've put together here.

Mike

On 09/08/2009 12:06 PM, Andrew Straw wrote:
> Michael Droettboom wrote:
>> Doing so, my results are even *less* in agreement with the baseline, but
>> the real question is whether my results are in agreement with those on
>> the buildbot machines with this change to forcibly turn hinting off. I
>> should know pretty quickly when the buildbots start complaining in a few
>> minutes and I can look at the results ;)
>
> Yes, even though the waterfall is showing green (for the next 2 minutes
> until my buildbot script bugfix gets run), it's pretty clear from the
> image failure page that disabling hinting introduced changes to the
> generated figure appearance. It will be interesting to see if, after
> checking in the newly generated actual images as the new baseline, the
> tests start passing on your machine with the newer freetype.
>
> In a footnote to myself, I think the ImageComparisonFailure exception
> should tell nose that the test failed, not that there was an error.
>
> -Andrew
Michael Droettboom wrote:
> Doing so, my results are even *less* in agreement with the baseline, but
> the real question is whether my results are in agreement with those on
> the buildbot machines with this change to forcibly turn hinting off. I
> should know pretty quickly when the buildbots start complaining in a few
> minutes and I can look at the results ;)

Yes, even though the waterfall is showing green (for the next 2 minutes until my buildbot script bugfix gets run), it's pretty clear from the image failure page that disabling hinting introduced changes to the generated figure appearance. It will be interesting to see if, after checking in the newly generated actual images as the new baseline, the tests start passing on your machine with the newer freetype.

In a footnote to myself, I think the ImageComparisonFailure exception should tell nose that the test failed, not that there was an error.

-Andrew
John Hunter wrote:
> Perhaps with hinting turned off this won't be necessary. Ie, maybe we
> can get more agreement across a wide range of freetype versions w/o
> hinting. Are you planning on committing the unhinted baselines?

I have a presentation to give tomorrow, so I'd just as soon let you and Michael fight the flood of red that is about to occur! :) But I can step up again later in the week with more time. In the meantime, why don't I just keep my eye on my email inbox but stay out of the code and baseline images for the most part?

-Andrew
On Tue, Sep 8, 2009 at 11:46 AM, Andrew Straw<str...@as...> wrote: > Michael Droettboom wrote: >> On 09/08/2009 10:24 AM, John Hunter wrote: >> >>> On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote: >>> >>> >>>> I've been only skimming the surface of the discussion about the new test >>>> framework up until now. >>>> >>>> Just got around to trying it, and every comparison failed because it was >>>> selecting a different font than that used in the baseline images. (My >>>> matplotlibrc customizes the fonts). >>>> >>>> It seems we should probably force "font.family" to "Bitstream Vera Sans" >>>> when running the tests. Adding "rcParam['font.family'] = 'Bitstream Vera >>>> Sans'" to the "test" function seems to do the trick, but I'll let Andrew >>>> make the final call about whether that's the right change. Perhaps we >>>> should (as with the documentation build) provide a stock matplotlibrc >>>> specifically for testing, since there will be other things like this? Of >>>> course, all of these options cause matplotlib.test() to have rcParam >>>> side-effects. Probably not worth addressing now, but perhaps worth noting. >>>> >>>> >>> We do have a matplotlibrc file in the "test" dir (the dir that lives >>> next to setup.py, not lib/matplotlib/tests. This is where we run the >>> buildbot tests from. It might be a good idea to set the font >>> explicitly in the test code itself so people can run the tests from >>> any dir, but I'll leave it to Andrew to weigh in on that. >>> >>> >> Sure. If we *don't* decide to set it in the code, we should perhaps add >> a line suggesting to "run the tests from lib/matplotlib/tests" in the >> documentation. An even better solution might be to forcibly load the >> matplotlibrc in that directory (even if it's an install directory) when >> the tests are run. 
>
> While the default test usage should probably set as much as possible to
> ensure things are identical, we also want to be able to test other code
> paths, so I think I'll add some kind of kwarg to matplotlib.test() to
> handle non-testing-default rcParams. I think setting lots of things,
> including the font, explicitly in the default case is a good idea.

I think the defaults should be used, and any non-standard settings should explicitly define the rc settings. Perhaps a decorator is needed that lets you pass the rc values, runs the test, and then calls rcdefaults on the way out? I haven't been following your progress closely (sorry about that; I am very grateful for all the work you are doing).

> Question for the rcParams experts: Can we save a copy of it so that we
> can restore its state after matplotlib.test() is done? (It's just a
> dictionary, right?)

rcdefaults() should reset all the values for you.

Darren
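The decorator Darren suggests could be sketched roughly as below. This is a hypothetical illustration, not matplotlib API: the name `with_rc` and the stand-in dict are mine, and instead of calling rcdefaults() (which resets to defaults rather than the user's prior settings) it answers Andrew's question by snapshotting the dict-like rcParams and restoring it on the way out.

```python
import copy
import functools

# Stand-in for matplotlib.rcParams; the real object is dict-like,
# which is why a plain copy of it works for saving and restoring state.
rcParams = {"font.family": "sans-serif", "text.hinting": True}

def with_rc(overrides):
    """Hypothetical decorator: apply rc overrides for one test, restore afterwards."""
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            saved = copy.deepcopy(rcParams)   # save a copy of the current state
            rcParams.update(overrides)
            try:
                return func(*args, **kwargs)
            finally:
                rcParams.clear()
                rcParams.update(saved)        # restore on the way out
        return wrapper
    return decorate

@with_rc({"font.family": "Bitstream Vera Sans", "text.hinting": False})
def test_unhinted_text():
    assert rcParams["text.hinting"] is False

test_unhinted_text()
assert rcParams["text.hinting"] is True       # prior state restored
```

This avoids the rcParams side-effects of matplotlib.test() mentioned earlier in the thread, since whatever the user had configured survives the test run.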
On Tue, Sep 8, 2009 at 10:46 AM, Andrew Straw<str...@as...> wrote:
> While the default test usage should probably set as much as possible to
> ensure things are identical, we also want to be able to test other code
> paths, so I think I'll add some kind of kwarg to matplotlib.test() to handle
> non-testing-default rcParams. I think setting lots of things, including the
> font, explicitly in the default case is a good idea.
>
> Question for the rcParams experts: Can we save a copy of it so that we can
> restore its state after matplotlib.test() is done? (It's just a dictionary,
> right?)

I committed this change.

> Yes, I completely agree. In the matplotlib.testing.image_comparison()
> decorator, we right now have only a single image comparison algorithm based
> on RMS error. Perhaps we could try the perceptual difference code you linked
> to? Also, maybe we could simply turn font rendering off completely for a
> majority of the tests? Or maybe the tests should be run with and without
> text drawn, with much lower error tolerances when there's no text?

Perhaps with hinting turned off this won't be necessary. I.e., maybe we can get more agreement across a wide range of freetype versions w/o hinting. Are you planning on committing the unhinted baselines?

JDH
Michael Droettboom wrote: > On 09/08/2009 10:24 AM, John Hunter wrote: > >> On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote: >> >> >>> I've been only skimming the surface of the discussion about the new test >>> framework up until now. >>> >>> Just got around to trying it, and every comparison failed because it was >>> selecting a different font than that used in the baseline images. (My >>> matplotlibrc customizes the fonts). >>> >>> It seems we should probably force "font.family" to "Bitstream Vera Sans" >>> when running the tests. Adding "rcParam['font.family'] = 'Bitstream Vera >>> Sans'" to the "test" function seems to do the trick, but I'll let Andrew >>> make the final call about whether that's the right change. Perhaps we >>> should (as with the documentation build) provide a stock matplotlibrc >>> specifically for testing, since there will be other things like this? Of >>> course, all of these options cause matplotlib.test() to have rcParam >>> side-effects. Probably not worth addressing now, but perhaps worth noting. >>> >>> >> We do have a matplotlibrc file in the "test" dir (the dir that lives >> next to setup.py, not lib/matplotlib/tests. This is where we run the >> buildbot tests from. It might be a good idea to set the font >> explicitly in the test code itself so people can run the tests from >> any dir, but I'll leave it to Andrew to weigh in on that. >> >> > Sure. If we *don't* decide to set it in the code, we should perhaps add > a line suggesting to "run the tests from lib/matplotlib/tests" in the > documentation. An even better solution might be to forcibly load the > matplotlibrc in that directory (even if it's an install directory) when > the tests are run. > While the default test usage should probably set as much as possible to ensure things are identical, we also want to be able to test other code paths, so I think I'll add some kind of kwarg to matplotlib.test() to handle non-testing-default rcParams. 
I think setting lots of things, including the font, explicitly in the default case is a good idea.

Question for the rcParams experts: Can we save a copy of it so that we can restore its state after matplotlib.test() is done? (It's just a dictionary, right?)

>>> I am also still getting 6 image comparison failures due to hinting
>>> differences (I've attached one of the diffs as an example). Since I haven't
>>> been following closely, what's the status on that? Should we be seeing
>>> these as failures? What type of hinting are the baseline images produced
>>> with?
>>
>> We ended up deciding to do identical source builds of freetype to make
>> sure there were no version differences or freetype configuration
>> differences. We are using freetype 2.3.5 with the default
>> configuration. We have seen other versions, e.g. 2.3.7, even in the
>> default configuration, give rise to different font renderings, as you
>> are seeing. This will make testing hard for plain-ol-users, since it
>> is a lot to ask them to install a special version of freetype for
>> testing. The alternative, which we discussed before, is to expose the
>> unhinted option to the frontend, and do all testing with unhinted
>> text.
>
> I just committed a change to add a "text.hinting" rcParam (which is
> currently only followed by the Agg backend, though it might make sense
> for Cairo and macosx to also obey it). This param is then forcibly set
> to False when the tests are run.
>
> Doing so, my results are even *less* in agreement with the baseline, but
> the real question is whether my results are in agreement with those on
> the buildbot machines with this change to forcibly turn hinting off. I
> should know pretty quickly when the buildbots start complaining in a few
> minutes and I can look at the results ;)

I think we compiled freetype with no hinting as a configuration option, so I don't anticipate a failure.
Of course, now I look at the waterfall display, see a bunch of green, think "this looks suspicious" (what does that say about my personality?), click the log of the stdio of the "test" components, and see a whole bunch of errors. It seems when I switched over to the matplotlib.test() call for running the tests, I forgot to set the exit code. Let me do that right now. Expect a flood of buildbot errors in the near future...

> Hopefully we can find a way for Joe Developer to run these tests without
> a custom build of freetype.

Yes, I completely agree. In the matplotlib.testing.image_comparison() decorator, we right now have only a single image comparison algorithm based on RMS error. Perhaps we could try the perceptual difference code you linked to? Also, maybe we could simply turn font rendering off completely for a majority of the tests? Or maybe the tests should be run with and without text drawn, with much lower error tolerances when there's no text?

The nice thing about our test infrastructure now is that it's pretty small, lightweight, and flexible. The image comparison stuff is just done in a single decorator function, and the only nose plugin is the "known failure" plugin. We can continue writing tests which I hope will be mostly independent from improving the infrastructure.
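For reference, the RMS comparison being discussed can be sketched in a few lines. This is an illustrative stand-in, not the actual matplotlib.testing code (which also handles PNG loading and failure reporting); the function names and the tolerance value are assumptions of mine.

```python
import math

def rms_error(expected, actual):
    """Root-mean-square difference between two images given as flat pixel sequences."""
    if len(expected) != len(actual):
        raise ValueError("image sizes differ")
    sq = sum((e - a) ** 2 for e, a in zip(expected, actual))
    return math.sqrt(sq / len(expected))

def images_match(expected, actual, tol=10.0):
    """Pass the comparison when the RMS error is within the tolerance."""
    return rms_error(expected, actual) <= tol

# Identical images have zero error; a small antialiasing-style difference
# in a couple of pixels stays well under a loose tolerance.
baseline = [0, 0, 255, 255]
rendered = [0, 2, 253, 255]
# rms_error(baseline, rendered) == sqrt((0 + 4 + 4 + 0) / 4) == sqrt(2)
```

A global RMS threshold is exactly what makes font-rendering differences so noisy here: hinting changes touch many pixels slightly, which a perceptual metric would weight differently.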
On 09/08/2009 10:24 AM, John Hunter wrote: > On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote: > >> I've been only skimming the surface of the discussion about the new test >> framework up until now. >> >> Just got around to trying it, and every comparison failed because it was >> selecting a different font than that used in the baseline images. (My >> matplotlibrc customizes the fonts). >> >> It seems we should probably force "font.family" to "Bitstream Vera Sans" >> when running the tests. Adding "rcParam['font.family'] = 'Bitstream Vera >> Sans'" to the "test" function seems to do the trick, but I'll let Andrew >> make the final call about whether that's the right change. Perhaps we >> should (as with the documentation build) provide a stock matplotlibrc >> specifically for testing, since there will be other things like this? Of >> course, all of these options cause matplotlib.test() to have rcParam >> side-effects. Probably not worth addressing now, but perhaps worth noting. >> > We do have a matplotlibrc file in the "test" dir (the dir that lives > next to setup.py, not lib/matplotlib/tests. This is where we run the > buildbot tests from. It might be a good idea to set the font > explicitly in the test code itself so people can run the tests from > any dir, but I'll leave it to Andrew to weigh in on that. > Sure. If we *don't* decide to set it in the code, we should perhaps add a line suggesting to "run the tests from lib/matplotlib/tests" in the documentation. An even better solution might be to forcibly load the matplotlibrc in that directory (even if it's an install directory) when the tests are run. > >> I am also still getting 6 image comparison failures due to hinting >> differences (I've attached one of the diffs as an example). Since I haven't >> been following closely, what's the status on that? Should we be seeing >> these as failures? What type of hinting are the baseline images produced >> with? 
>
> We ended up deciding to do identical source builds of freetype to make
> sure there were no version differences or freetype configuration
> differences. We are using freetype 2.3.5 with the default
> configuration. We have seen other versions, e.g. 2.3.7, even in the
> default configuration, give rise to different font renderings, as you
> are seeing. This will make testing hard for plain-ol-users, since it
> is a lot to ask them to install a special version of freetype for
> testing. The alternative, which we discussed before, is to expose the
> unhinted option to the frontend, and do all testing with unhinted
> text.

I just committed a change to add a "text.hinting" rcParam (which is currently only followed by the Agg backend, though it might make sense for Cairo and macosx to also obey it). This param is then forcibly set to False when the tests are run.

Doing so, my results are even *less* in agreement with the baseline, but the real question is whether my results are in agreement with those on the buildbot machines with this change to forcibly turn hinting off. I should know pretty quickly when the buildbots start complaining in a few minutes and I can look at the results ;)

Hopefully we can find a way for Joe Developer to run these tests without a custom build of freetype. FWIW, I'm using the freetype 2.3.9 packaged with FC11.

Cheers,
Mike
On Tue, Sep 8, 2009 at 10:14 AM, Andrew Straw<str...@as...> wrote: >> but I do not see an additional test being run (I still get the usual >> 26 tests). Is there another step to getting this to be picked up by >> the test harness? >> > > As described in the "Creating a new module in matplotlib.tests" of the > developer coding guide (see line 780 of > http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7664&view=markup > ): > > Let's say you've added a new module named > ``matplotlib.tests.test_whizbang_features``. To add this module to the list > of default tests, append its name to ``default_test_modules`` in > :file:`lib/matplotlib/__init__.py`. Thanks, missed that. Since I added test_dates which automagically worked, I didn't know I needed to add the module anywhere, but I'm guessing you added that for me. I added test_image to the default, so it should fire off shortly. JDH
John Hunter wrote:
> I must be missing something obvious, but I tried to add a new module
> to lib/matplotlib/tests called test_image, which has a single method
> so far, test_image_interps. I added the standard decorator and
> baseline image, and I can see it being installed in the stdio on the
> sage buildbot
>
> http://mpl-buildbot.code.astraw.com/builders/Mac%20OS%20X%2C%20Python%202.6%2C%20x86/builds/109/steps/test/logs/stdio
>
> but I do not see an additional test being run (I still get the usual
> 26 tests). Is there another step to getting this to be picked up by
> the test harness?

As described in the "Creating a new module in matplotlib.tests" section of the developer coding guide (see line 780 of http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7664&view=markup ):

Let's say you've added a new module named ``matplotlib.tests.test_whizbang_features``. To add this module to the list of default tests, append its name to ``default_test_modules`` in :file:`lib/matplotlib/__init__.py`.
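Concretely, the registration step described above amounts to a one-line list edit. The variable name below is the one the coding guide gives; the existing list contents shown here are only illustrative.

```python
# Sketch of the registration list in lib/matplotlib/__init__.py.
# The two entries shown are illustrative, not the actual contents.
default_test_modules = [
    'matplotlib.tests.test_axes',
    'matplotlib.tests.test_dates',
]

# Adding a new test module: append its dotted name so that
# matplotlib.test() (and hence the buildbots) will collect it.
default_test_modules.append('matplotlib.tests.test_image')
```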
On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote:
> I've been only skimming the surface of the discussion about the new test
> framework up until now.
>
> Just got around to trying it, and every comparison failed because it was
> selecting a different font than that used in the baseline images. (My
> matplotlibrc customizes the fonts.)
>
> It seems we should probably force "font.family" to "Bitstream Vera Sans"
> when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera
> Sans'" to the "test" function seems to do the trick, but I'll let Andrew
> make the final call about whether that's the right change. Perhaps we
> should (as with the documentation build) provide a stock matplotlibrc
> specifically for testing, since there will be other things like this? Of
> course, all of these options cause matplotlib.test() to have rcParams
> side-effects. Probably not worth addressing now, but perhaps worth noting.

We do have a matplotlibrc file in the "test" dir (the dir that lives next to setup.py, not lib/matplotlib/tests). This is where we run the buildbot tests from. It might be a good idea to set the font explicitly in the test code itself so people can run the tests from any dir, but I'll leave it to Andrew to weigh in on that.

> I am also still getting 6 image comparison failures due to hinting
> differences (I've attached one of the diffs as an example). Since I haven't
> been following closely, what's the status on that? Should we be seeing
> these as failures? What type of hinting are the baseline images produced
> with?

We ended up deciding to do identical source builds of freetype to make sure there were no version differences or freetype configuration differences. We are using freetype 2.3.5 with the default configuration. We have seen other versions, e.g. 2.3.7, even in the default configuration, give rise to different font renderings, as you are seeing.
This will make testing hard for plain-ol-users, since it is a lot to ask them to install a special version of freetype for testing. The alternative, which we discussed before, is to expose the unhinted option to the frontend, and do all testing with unhinted text. JDH
I've been only skimming the surface of the discussion about the new test framework up until now.

Just got around to trying it, and every comparison failed because it was selecting a different font than that used in the baseline images. (My matplotlibrc customizes the fonts.)

It seems we should probably force "font.family" to "Bitstream Vera Sans" when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera Sans'" to the "test" function seems to do the trick, but I'll let Andrew make the final call about whether that's the right change. Perhaps we should (as with the documentation build) provide a stock matplotlibrc specifically for testing, since there will be other things like this? Of course, all of these options cause matplotlib.test() to have rcParams side-effects. Probably not worth addressing now, but perhaps worth noting.

I am also still getting 6 image comparison failures due to hinting differences (I've attached one of the diffs as an example). Since I haven't been following closely, what's the status on that? Should we be seeing these as failures? What type of hinting are the baseline images produced with?

Mike
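The fix described above can be sketched as follows, using a plain dict in place of matplotlib.rcParams (which is dict-like); `run_tests` is a hypothetical stand-in for the real matplotlib.test() entry point, not its actual implementation.

```python
# Stand-in for matplotlib.rcParams; a user's matplotlibrc may have
# customized this before the test suite runs.
rcParams = {"font.family": "my-custom-font"}

def run_tests():
    """Hypothetical stand-in for matplotlib.test(): pin the font so the
    rendered output matches the baseline images regardless of the user's
    matplotlibrc."""
    rcParams["font.family"] = "Bitstream Vera Sans"
    # ... the real function would hand off to the nose test runner here ...
    return rcParams["font.family"]
```

Note the side effect Mike mentions: after run_tests() returns, rcParams still holds the pinned value rather than the user's original setting, which is what motivates the save/restore discussion later in the thread.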
I must be missing something obvious, but I tried to add a new module to lib/matplotlib/tests called test_image, which has a single method so far, test_image_interps. I added the standard decorator and baseline image, and I can see it being installed in the stdio on the sage buildbot http://mpl-buildbot.code.astraw.com/builders/Mac%20OS%20X%2C%20Python%202.6%2C%20x86/builds/109/steps/test/logs/stdio but I do not see an additional test being run (I still get the usual 26 tests). Is there another step to getting this to be picked up by the test harness? JDH
The EMF backend of matplotlib is broken, which is a real problem if you want to put matplotlib vector graphics into MS Word documents. I have just released a patch which fixes the backend, providing the basic features required, such as drawing paths and text in different styles and respecting clipping. The EMFs created for most of the demos are correctly displayed in OpenOffice and MS Word (e.g. alignment_test.py, csd_demo.py, line_styles.py, simple_plot.py, arctest.py, errorbar_demo.py, log_demo.py, subplot_demo.py, axes_demo.py, figlegend_demo.py, logo.py, text_handles.py, barchart_demo.py, histogram_demo.py, pcolor_demo.py, vline_demo.py, bar_stacked.py, image.py, psd_demo.py, color_demo.py, legend_demo.py, scatter_demo.py).

Features not yet supported are bitmaps, math text (TeX-style), and Unicode text with characters beyond the ISO-8859-1 character set.

You can find the patch at https://sourceforge.net/tracker/?func=detail&aid=2853659&group_id=80706&atid=560722

Regards,
Urs
John Hunter wrote: > On Sun, Sep 6, 2009 at 3:44 PM, John Hunter<jd...@gm...> wrote: > >> I am working on this stuff now and am near a solution for the empty >> datetime bug which is cleaner and more general. I'll populate tests >> for this stuff so just let me know where to put the baselines. >> > > Hey Andrew -- I finally got the new date tests working. Just run > lib/matplotlib/tests/test_dates.py to generate the new baseline images > and add them wherever you want them to go. Or if you'd rather I add > them create the baseline images dirs in svn and I'll add/commit them. > But I'll be offline for a bit so go ahead and add them if you want. > OK, great, we now have several examples of tests in matplotlib.tests, and they seem to be passing on the buildbot slaves now. I added the images to lib/matplotlib/tests/baseline_images. I also removed ".png" from the filenames in anticipation of auto-backend selection and comparison of other formats. (However, that's not something I plan to implement any time soon.) We now have 26 tests running on the buildbots and 11 with the plain "import matplotlib; matplotlib.test()", which tests only the new simplified tests. That means there are 15 tests left in the original testing infrastructure that remain to be ported over, which I don't think will be too hard now that I ported over JPL's units stuff into matplotlib.testing.jpl_units and test_matplotlib/TestAxes.py to matplotlib.tests.test_axes.py. -Andrew
On Sun, Sep 6, 2009 at 3:44 PM, John Hunter<jd...@gm...> wrote:
>
> I am working on this stuff now and am near a solution for the empty
> datetime bug which is cleaner and more general. I'll populate tests
> for this stuff so just let me know where to put the baselines.

Hey Andrew -- I finally got the new date tests working. Just run lib/matplotlib/tests/test_dates.py to generate the new baseline images and add them wherever you want them to go. Or if you'd rather I add them, create the baseline image dirs in svn and I'll add/commit them. But I'll be offline for a bit, so go ahead and add them if you want.

JDH
On Sun, Sep 6, 2009 at 11:17 AM, Andrew Straw<str...@as...> wrote:
> After letting the implications settle a bit, I think I'm in favor of
> baseline images living in the matplotlib svn trunk so that they're
> always in sync with the test scripts and available to those who have
> done svn checkouts. Another important consideration is that this
> approach plays well with branching the repo.

OK, we can put them in the trunk alongside the tests, anywhere you think they should go, and we just won't ship them by default in all releases. We can decide on that at release time, but I had gotten trapped into thinking that if they lived alongside the tests we would need to ship the baseline images too, but of course we do not. It will make svn checkouts a little slower, but this is a rare event and worth the cost.

I am working on this stuff now and am near a solution for the empty datetime bug which is cleaner and more general. I'll populate tests for this stuff so just let me know where to put the baselines.

We should probably have a quick call if you are working on this stuff now -- 773-955-7825
John Hunter wrote:
> I was able to run the buildbot mac script when logged into sage with::

So it seems the bus error on Mac is due to networking (DNS lookups) being broken in non-interactive logins. This is a pain for the get_sample_data() approach. (Although I suspect we could work around it by giving the IP address of the svn repo just like we did for the main MPL checkout. In this case, however, the IP address would be hardcoded into cbook.)

> I am not sure I want to distribute the baseline images with the main
> mpl distribution, but I am open to considering it. As the number of
> tests and baseline images grows, which hopefully will happen soon,
> this could potentially become large -- the reason I added
> get_sampledata in the first place was to get the distribution size
> down. We could add support to get_sampledata to use an environment
> variable for the cache directory. Then I could do an svn checkout of
> the sample_data tree and svn up this dir in my buildbot script. If I
> point the sample data environment var to this directory, it would have
> the latest data for the buildbot and would not need to make an http
> request (it is odd though that svn checkouts on the sage buildbot work
> fine even when not interactively logged in but http requests
> apparently do not).

After letting the implications settle a bit, I think I'm in favor of baseline images living in the matplotlib svn trunk so that they're always in sync with the test scripts and available to those who have done svn checkouts. Another important consideration is that this approach plays well with branching the repo. Just because they'd be in the main repository directory, though, doesn't mean that we have to ship source or binaries with them in place -- that's a decision that could be discussed when release day gets closer. Many of these images will be .pngs with large regions of white, so they're relatively small files.
But, I agree, hopefully there will be a lot of tests and thus a lot of images, which will add up. As far as the linux packaging goes -- the packagers can decide how to ship their own binaries, but I'm sure they'd appreciate a mechanism for shipping the test image data separately from the main binary package. This could cause us to come up with a nice mechanism which we enable when building Mac and Windows binary packages. As for the source packages, I think I'd tend toward including the test images for more or less the same reasons as including them in the svn trunk. Also, we could set it up such that we skip image_comparison tests if the baseline images weren't available (or simply not compare the results). > If you think the sample_data w/ support for local svn checkouts is the > way to go for the baseline data and images, let me know. I would like > to utilize a subdir, eg, sample_data/baseline, if we go this route, to > keep the top-level directory a bit cleaner for user data. We could > also release a tarball of the sample_data/baseline directory with each > release, so people who want to untar, set the environment var and test > could do so. > OK, I will move them to a new subdir if we decide to keep the sample_data approach. I thought I read a preference to keep sample_data flat, and I wasn't sure about Windows path names. > I am not sure this is the right approach by any means, just putting it > up for consideration. One disadvantage of the sample_data approach is > that it would probably work well with HEAD but not with releases, > because as the baseline images changes, it becomes difficult to test > existing releases against it, which may be assuming a prior baseline. > This is why I mentioned releasing the baseline images too, but it does > raise the barrier for doing tests. > Likewise, I'm not sure my idea is best, either, but I think it plays best with version control, which IMO is a substantial benefit. 
> I should have some time today to play as well. One thing I would like
> to do is to continue the clean up on naming conventions to make them
> compliant with the coding guide. Thanks for your efforts so far on
> this -- one thing left to do here that I can see is to rename the
> modules to test_axes.py rather than TestAxes.py, etc..., and to finish
> renaming the methods which use the wrong convention, eg
> TestAxes.TestAxes.tearDown should be test_axes.TestAxes.tear_down
> (module_lower_under.ClassMixedUpper.method_lower_under).

I think we should forget about subclassing unittest.TestCase and simply use flat functions as our tests. In particular, we should drop setUp and tearDown altogether (which, IIRC, have to be named as they are because the classes subclass unittest.TestCase and override base-class methods). If we need any setup and teardown functionality, let's write a new decorator to support it. Then, I think the tests in test/test_matplotlib/TestAxes.py should go into lib/matplotlib/tests/test_axes.py, and each test should be its own function at module level, such as test_empty_datetime().

-Andrew
On Sun, Sep 6, 2009 at 1:00 AM, Andrew Straw <str...@as...> wrote:
> Andrew Straw wrote:
>> Today I committed to svn a simplified testing infrastructure, which I've
>> committed to matplotlib/testing/*. A few sample tests are in
>> matplotlib/tests/*. I also wrote some docs, which are now in the
>> developer coding guide. See that (
>> http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7654&view=markup
>> , starting at line 675) for information about how to write tests.
>>
> I should also say that I plan to migrate the existing tests to this new
> simplified architecture. You can get your testing feet wet by joining
> the fun.

I should have some time today to play as well. One thing I would like to do is to continue the cleanup on naming conventions to make them compliant with the coding guide. Thanks for your efforts so far on this -- one thing left to do here that I can see is to rename the modules to test_axes.py rather than TestAxes.py, etc., and to finish renaming the methods which use the wrong convention, eg TestAxes.TestAxes.tearDown should be test_axes.TestAxes.tear_down (module_lower_under.ClassMixedUpper.method_lower_under).

When I get some time to work later I'll ping you to make sure we aren't working on the same thing at the same time.

JDH
On Sun, Sep 6, 2009 at 12:58 AM, Andrew Straw <str...@as...> wrote:
> Today I committed to svn a simplified testing infrastructure, which I've
> committed to matplotlib/testing/*. A few sample tests are in
> matplotlib/tests/*. I also wrote some docs, which are now in the
> developer coding guide. See that (
> http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7654&view=markup
> , starting at line 675) for information about how to write tests.
>
> Now, I have a question. As currently written, the baseline (a.k.a.
> expected) images are stored in the sample_data directory and
> matplotlib.cbook.get_sample_data() downloads the images. However, I
> suspect that the Mac Sage buildslave doesn't like to download stuff
> while not in an interactive login. (Remember the initial problems
> running tests on that machine?) That's a good indication that we don't
> want to require network access to run the tests. So the next question
> is whether we want to install baseline images with standard MPL
> installs so that any user can run the full test suite. That would be
> my preference, as it would be the simplest and most robust to
> implement, but it comes at the cost of additional disk space.
> Otherwise, I'm open to suggestions.
>
> (John, to confirm my suspicions about the network access issue, could
> you ssh into the Sage Mac and run test/_buildbot_mac_sage.sh by hand to
> see if that eliminates the bus error we're getting when run from the
> buildbot?)

I was able to run the buildbot mac script when logged into sage with::

    Test plotting empty axes with dates along one axis. ... KNOWNFAIL: Fails due to SF bug 2850075
    Test Some formatter and ticker issues. ... ok
    Test the 'is_string_like cookbook' function. ... ok
    Test DateFormatter ... ok
    Test RRuleLocator ... ok
    Basic Annotations ... ok
    Polar Plot Annotations ... ok
    Polar Coordinate Annotations ... ok
    Test the fill method with unitized-data. ... ok
    Test constant xy data. ... ok
    Test numpy shaped data. ... ok
    Test single-point date plots. ... ok
    Test single-point plots. ... ok
    Test polar plots with unitized data. ... ok
    Test polar plots where data crosses 0 degrees. ... ok
    Test the axhspan method with Epochs. ... ok
    Test the axvspan method with Epochs. ... ok
    very simple example test ... ok
    very simple example test that should fail ... KNOWNFAIL: Test known to fail
    matplotlib.tests.test_transforms.test_Affine2D_from_values ... ok
    matplotlib.tests.test_spines.test_spines_axes_positions ... ok

    ----------------------------------------------------------------------
    Ran 21 tests in 8.342s

    OK (KNOWNFAIL=2)

I am not sure I want to distribute the baseline images with the main mpl distribution, but I am open to considering it. As the number of tests and baseline images grows, which hopefully will happen soon, this could potentially become large -- the reason I added get_sample_data in the first place was to get the distribution size down. We could add support to get_sample_data to use an environment variable for the cache directory. Then I could do an svn checkout of the sample_data tree and svn up this dir in my buildbot script. If I point the sample data environment var to this directory, it would have the latest data for the buildbot and would not need to make an http request (it is odd, though, that svn checkouts on the sage buildbot work fine even when not interactively logged in, but http requests apparently do not).

If you think sample_data with support for local svn checkouts is the way to go for the baseline data and images, let me know. I would like to utilize a subdir, eg sample_data/baseline, if we go this route, to keep the top-level directory a bit cleaner for user data. We could also release a tarball of the sample_data/baseline directory with each release, so people who want to untar, set the environment var, and test could do so.
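The environment-variable idea sketched above -- check a local checkout first, only hit the network as a last resort -- could look something like this. The variable name MPL_SAMPLE_DATA and the error-on-miss fallback are assumptions for illustration, not cbook's actual API:

```python
import os

def get_sample_data_path(filename):
    """Return a local path for *filename*, preferring a local checkout
    of the sample_data tree so no network access is needed."""
    checkout = os.environ.get('MPL_SAMPLE_DATA')
    if checkout:
        path = os.path.join(checkout, filename)
        if os.path.exists(path):
            return path
    # Fall back to the HTTP download into the default cache (not shown).
    raise IOError('%s not available locally; network fetch required'
                  % filename)
```

The buildbot would then just svn up its sample_data checkout and export MPL_SAMPLE_DATA before running the tests.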
I am not sure this is the right approach by any means, just putting it up for consideration. One disadvantage of the sample_data approach is that it would probably work well with HEAD but not with releases, because as the baseline images change, it becomes difficult to test existing releases against them, since a release may assume a prior baseline. This is why I mentioned releasing the baseline images too, but it does raise the barrier for doing tests.

JDH
Andrew Straw wrote:
> Today I committed to svn a simplified testing infrastructure, which I've
> committed to matplotlib/testing/*. A few sample tests are in
> matplotlib/tests/*. I also wrote some docs, which are now in the
> developer coding guide. See that (
> http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7654&view=markup
> , starting at line 675) for information about how to write tests.
>
I should also say that I plan to migrate the existing tests to this new simplified architecture. You can get your testing feet wet by joining the fun.

-Andrew
Today I committed to svn a simplified testing infrastructure, which I've committed to matplotlib/testing/*. A few sample tests are in matplotlib/tests/*. I also wrote some docs, which are now in the developer coding guide. See that ( http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7654&view=markup , starting at line 675) for information about how to write tests.

Now, I have a question. As currently written, the baseline (a.k.a. expected) images are stored in the sample_data directory and matplotlib.cbook.get_sample_data() downloads the images. However, I suspect that the Mac Sage buildslave doesn't like to download stuff while not in an interactive login. (Remember the initial problems running tests on that machine?) That's a good indication that we don't want to require network access to run the tests. So the next question is whether we want to install baseline images with standard MPL installs so that any user can run the full test suite. That would be my preference, as it would be the simplest and most robust to implement, but it comes at the cost of additional disk space. Otherwise, I'm open to suggestions.

(John, to confirm my suspicions about the network access issue, could you ssh into the Sage Mac and run test/_buildbot_mac_sage.sh by hand to see if that eliminates the bus error we're getting when run from the buildbot?)

-Andrew
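The baseline-image testing described above boils down to "render the plot, then compare the result against a stored expected image". A rough, self-contained sketch of the comparison step only, using plain nested lists of pixel values so it runs without dependencies -- the trunk code operates on rendered PNGs and its tolerance handling differs, and tol=10.0 is an arbitrary placeholder:

```python
import math

def images_match(actual, expected, tol=10.0):
    """Return True if two images (nested lists of pixel values) agree
    to within an RMS-difference tolerance."""
    flat_a = [p for row in actual for p in row]
    flat_b = [p for row in expected for p in row]
    if len(flat_a) != len(flat_b):
        # Different dimensions can never match.
        return False
    rms = math.sqrt(sum((a - b) ** 2
                        for a, b in zip(flat_a, flat_b)) / len(flat_a))
    return rms <= tol
```

The RMS threshold is what lets tests tolerate small, benign rendering differences (e.g. font antialiasing across platforms) while still catching real regressions.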
My kids have commandeered my computer for family movie night, but I'll merge this tomorrow if someone else hasn't gotten to it before then. I suspect the amount of time you must have sunk into determining the date dependency of the empty datetime bug has exhausted your free energy for mpl hassles, so I'm happy to take this one.

JDH

On Sep 5, 2009, at 7:09 PM, Andrew Straw <str...@as...> wrote:
> jas...@cr... wrote:
>> https://sourceforge.net/tracker/?func=detail&aid=2852168&group_id=80706&atid=560720
>
> I fixed this in svn r7638. Thanks for the report.
>
> Can a dev merge r7638 from the v0_99_maint branch into the trunk? I'm
> having a hard time figuring out svnmerge and I really don't feel like
> fighting it right now. (I read the docs at
> http://matplotlib.sourceforge.net/devel/coding_guide.html#using-svnmerge
> but I still can't get it to work.)
>
> -Andrew
jas...@cr... wrote:
> https://sourceforge.net/tracker/?func=detail&aid=2852168&group_id=80706&atid=560720

I fixed this in svn r7638. Thanks for the report.

Can a dev merge r7638 from the v0_99_maint branch into the trunk? I'm having a hard time figuring out svnmerge and I really don't feel like fighting it right now. (I read the docs at http://matplotlib.sourceforge.net/devel/coding_guide.html#using-svnmerge but I still can't get it to work.)

-Andrew
John Hunter wrote:
> On Thu, Sep 3, 2009 at 8:05 PM, <jas...@cr...> wrote:
>> This is just a friendly ping about the issue in this thread. I'm
>> delaying the patch that shifts Sage's graphics to using the new
>> matplotlib and spines, and I think resolving this issue would probably
>> resolve the remaining big problem (placing axes labels, as mentioned in
>> the thread "setting axis label offset from end of spine").
>
> To make sure this doesn't get misplaced, please post it on the bug tracker:
>
> https://sourceforge.net/tracker/?group_id=80706&atid=560720
>
> and I'll make sure it gets assigned to Andrew :-)

Done: https://sourceforge.net/tracker/?func=detail&aid=2852168&group_id=80706&atid=560720

Thanks,
Jason
> From: Andrew Straw [mailto:str...@as...]
> Sent: Thursday, September 03, 2009 13:50
>
> I am interested in getting the buildbot infrastructure to
> build automatic nightly binaries for Windows (XP was my
> thought, but 7 would also be good). If you'd be willing
> to perform the work to automate build and installation from
> the svn repo on either your own machine or a virtual machine
> running in my linux box (presuming I could get Windows 7
> running in VirtualBox), you could scratch your own itch as
> well as help the MPL community.
>
> . . . .
>
> Regardless of whether you can help with the buildbot
> automation part, if you take notes about what you did, I'm
> sure it will help whomever comes after you in the process.
>
> -Andrew

Hi, Andrew. I'm happy to share what I learn along the way. At this point, I'm trying to get clear about the various build approaches that are in play. The impression I'm forming is that what I called method (a) is the legacy method, if you will, and the developers are trying to implement two automated build systems: one for releases -- method (b) -- and one for the buildbot, and apparently there are some distinctions between the two. Is that right?

By the way, my XP partition is still bootable, so I'm not restricted to Win 7 and may be able to help with an XP buildbot.