I'm noticing that SVN r8269 is failing a great number of regression tests -- with pretty major things like the number of digits in the formatter being different. The buildbot seems to be getting the same failures I am, but I don't see any buildbot e-mails since Wednesday. Does anyone know the source of these errors? It seems to have to do with changes to the formatters.

Mike
Michael Droettboom wrote:
> I'm noticing that SVN r8269 is failing a great number of regression
> tests -- with pretty major things like the number of digits in the
> formatter being different. The buildbot seems to be getting the same
> failures I am, but I don't see any buildbot e-mails since Wednesday.
> Does anyone know the source of these errors? It seems to have to do
> with changes to the formatters.

I suspect I'm the culprit. I will look into it.

Eric

> Mike
>
> ------------------------------------------------------------------------------
> _______________________________________________
> Matplotlib-devel mailing list
> Mat...@li...
> https://lists.sourceforge.net/lists/listinfo/matplotlib-devel
Michael Droettboom wrote:
> I'm noticing that SVN r8269 is failing a great number of regression
> tests -- with pretty major things like the number of digits in the
> formatter being different. The buildbot seems to be getting the same
> failures I am, but I don't see any buildbot e-mails since Wednesday.
> Does anyone know the source of these errors? It seems to have to do
> with changes to the formatters.

Mike,

What is the easiest way to run a single test? I want to work on one failure at a time.

Eric

> Mike
On 04/26/2010 02:37 PM, Eric Firing wrote:
> Mike,
>
> What is the easiest way to run a single test? I want to work on one
> failure at a time.

You can provide a dot-separated path to the module and function, e.g. (this assumes the tests are installed):

nosetests matplotlib.tests.test_simplification:test_clipping

Mike

> Eric
Michael Droettboom wrote:
> I'm noticing that SVN r8269 is failing a great number of regression
> tests -- with pretty major things like the number of digits in the
> formatter being different. The buildbot seems to be getting the same
> failures I am, but I don't see any buildbot e-mails since Wednesday.
> Does anyone know the source of these errors? It seems to have to do
> with changes to the formatters.

Mike,

I have fixed all but one test -- in some cases, partly by fixing the tests and/or the baseline images.

The one exception is figimage. I don't think I have changed anything that would affect it in any obvious way, but certainly I could be wrong, and I haven't looked into it. The immediate problem is that I can't see what the difference is between the baseline and what I am generating now. They look identical to my eye, on my screen. The diff plot shows up as a black square. The png files are different, differing in length by 2 bytes, so something has changed -- but what? Given that I can't *see* any difference, I am tempted to just change the baseline plot so that the test will pass, but maybe that would be a mistake.

Building mpl from a version 2 months ago, I am still seeing the figimage failure.

Eric

> Mike
Thanks, Eric. I can confirm the failure on figimage, but with the same seemingly minuscule difference.

It looks like Jouni wrote that test... Jouni: could you verify that the difference isn't significant? If it's not, we can just create a new baseline image.

Mike

Eric Firing wrote:
> Mike,
>
> I have fixed all but one test -- in some cases, partly by fixing the
> tests and/or the baseline images.
>
> The one exception is figimage. I don't think I have changed anything
> that would affect it in any obvious way, but certainly I could be
> wrong, and I haven't looked into it. The immediate problem is that I
> can't see what the difference is between the baseline and what I am
> generating now. They look identical to my eye, on my screen. The diff
> plot shows up as a black square. The png files are different,
> differing in length by 2 bytes, so something has changed -- but what?
> Given that I can't *see* any difference, I am tempted to just change
> the baseline plot so that the test will pass, but maybe that would be
> a mistake.
>
> Building mpl from a version 2 months ago, I am still seeing the
> figimage failure.
>
> Eric

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
Michael Droettboom <md...@st...> writes:

> Thanks, Eric. I can confirm the failure on figimage, but with the same
> seemingly minuscule difference.
>
> It looks like Jouni wrote that test... Jouni: could you verify that the
> difference isn't significant? If it's not, we can just create a new
> baseline image.

I can't replicate the problem: for me "nosetests matplotlib.tests.test_image:test_figimage" passes (Python 2.6.1, matplotlib revision 8276, OS X 10.6.3). Could you post the failing images somewhere?

-- 
Jouni K. Seppänen
http://www.iki.fi/jks
Jouni K. Seppänen wrote:
> The following message is a courtesy copy of an article
> that has been posted to gmane.comp.python.matplotlib.devel as well.
>
> Thanks for the images Michael. If I change all E1 bytes in the current
> baseline image to E0 bytes, the images match perfectly. Sounds like a
> change in the discretization of floating-point values to integer bytes.
>
> The difference is definitely not significant, but as I said, I get the
> old image on my system. I don't object to changing the baseline image,
> but the best solution would be to make the image diff less sensitive.
> Unfortunately I don't have the time to pursue this now.

Thanks for the analysis. I'll look into the image diffing and see what can be done.

Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
I made a small change to the test harness so that the tolerance can be set on a per-test basis. I increased the tolerance for the figimage test so it passes on my machine.

Cheers,
Mike

Michael Droettboom wrote:
> Jouni K. Seppänen wrote:
>> Thanks for the images Michael. If I change all E1 bytes in the current
>> baseline image to E0 bytes, the images match perfectly. Sounds like a
>> change in the discretization of floating-point values to integer bytes.
>>
>> The difference is definitely not significant, but as I said, I get the
>> old image on my system. I don't object to changing the baseline image,
>> but the best solution would be to make the image diff less sensitive.
>> Unfortunately I don't have the time to pursue this now.
>
> Thanks for the analysis. I'll look into the image diffing and see what
> can be done.
>
> Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
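Mike's per-test tolerance change amounts to comparing images against a numeric threshold rather than requiring exact equality. A rough sketch of the idea, using plain lists of 0-255 pixel values; the function names here are illustrative and are not matplotlib's actual test-harness API:

```python
import math

def rms_difference(expected, actual):
    """Root-mean-square difference between two equal-length pixel sequences."""
    if len(expected) != len(actual):
        raise ValueError("image sizes differ")
    total = sum((e - a) ** 2 for e, a in zip(expected, actual))
    return math.sqrt(total / len(expected))

def images_match(expected, actual, tol=0.0):
    """Accept the result if its RMS difference from the baseline is within tol."""
    return rms_difference(expected, actual) <= tol
```

With this scheme, an image that is off by one count in every pixel (the E0-versus-E1 case) has an RMS difference of exactly 1.0, so a per-test tolerance of 1 or slightly above lets that discrepancy pass while genuinely different images still fail.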
Jouni K. Seppänen <jk...@ik...> writes:

>> It looks like Jouni wrote that test... Jouni: could you verify that the
>> difference isn't significant? If it's not, we can just create a new
>> baseline image.
>
> I can't replicate the problem: for me "nosetests
> matplotlib.tests.test_image:test_figimage" passes (Python 2.6.1,
> matplotlib revision 8276, OS X 10.6.3). Could you post the failing
> images somewhere?

Thanks for the images Michael. If I change all E1 bytes in the current baseline image to E0 bytes, the images match perfectly. Sounds like a change in the discretization of floating-point values to integer bytes.

The difference is definitely not significant, but as I said, I get the old image on my system. I don't object to changing the baseline image, but the best solution would be to make the image diff less sensitive. Unfortunately I don't have the time to pursue this now.

-- 
Jouni K. Seppänen
http://www.iki.fi/jks
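Jouni's diagnosis -- every 0xE1 byte becoming 0xE0 -- is consistent with a change in how normalized floats are converted to 8-bit values. A toy illustration of how truncation versus rounding alone can produce exactly this off-by-one; the conversion functions are made up for the example and are not matplotlib's actual code:

```python
# Two common float-to-uint8 conversions that can disagree by one count.
# Illustrative only; not matplotlib's conversion routines.

def to_byte_truncate(x):
    return int(x * 255)          # truncate toward zero

def to_byte_round(x):
    return int(x * 255 + 0.5)    # round half up

x = 0.881                        # a hypothetical normalized pixel value
print(hex(to_byte_truncate(x)))  # prints 0xe0
print(hex(to_byte_round(x)))     # prints 0xe1
```

For a whole image rendered through one convention and compared against a baseline rendered through the other, every affected pixel shifts by at most one count -- visually indistinguishable, but enough to change the compressed PNG byte stream, which matches what Eric and Mike observed.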