matplotlib-devel — matplotlib developers


From: Darren D. <dsd...@gm...> - 2009-09-08 23:16:55
On Tue, Sep 8, 2009 at 5:57 PM, David Warde-Farley<dw...@cs...> wrote:
> Howdy,
>
> The Qt4 backend appears to be broken in the Mac py2.6 binaries, using
> OS X 10.5.7 and the latest version of the Qt SDK from qt.nokia.com. I
> don't have the machine handy right this second but using plot() from
> an IPython interpreter with my backend set to Qt4Agg causes a hard
> crash for me every single time.
>
> I should note that TraitsBackendQt seems to work fine, so at least
> PyQt doesn't (on the surface) appear to be at the root of it.
>
> I'm wondering if it's a peculiarity of my system. Can anyone reproduce
> this? (this is an Intel Mac, by the way)
I would be very surprised if this is due to the backend. More likely a
mismatch between sip and pyqt versions.
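A quick way to check for such a mismatch is to print the versions that sip and PyQt4 report (the attributes below are the standard sip/PyQt4 version attributes):

import sip
from PyQt4 import QtCore

print("sip:  " + sip.SIP_VERSION_STR)
print("PyQt: " + QtCore.PYQT_VERSION_STR)
print("Qt:   " + QtCore.QT_VERSION_STR)

A sip/PyQt version mismatch often shows up as an error or crash as soon as PyQt4 is imported.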
From: David Warde-F. <dw...@cs...> - 2009-09-08 22:31:56
Howdy,
The Qt4 backend appears to be broken in the Mac py2.6 binaries, using 
OS X 10.5.7 and the latest version of the Qt SDK from qt.nokia.com. I 
don't have the machine handy right this second but using plot() from 
an IPython interpreter with my backend set to Qt4Agg causes a hard 
crash for me every single time.
I should note that TraitsBackendQt seems to work fine, so at least 
PyQt doesn't (on the surface) appear to be at the root of it.
I'm wondering if it's a peculiarity of my system. Can anyone reproduce 
this? (this is an Intel Mac, by the way)
David
From: Brian G. <ell...@gm...> - 2009-09-08 21:52:07
You also may need to do:
plt.interactive(True)
Cheers,
Brian
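plt.interactive(True) flips the same interactive flag that plt.ion() turns on, so either call works here; a minimal check:

import matplotlib.pyplot as plt
plt.interactive(True)
print(plt.isinteractive())   # True; plt.ion() has the same effect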
On Tue, Sep 8, 2009 at 12:45 PM, Gökhan Sever <gok...@gm...> wrote:
> Hello,
>
> The thread switches will be gone by the release of the new IPython. I am
> assuming that some extra work needs to be done on both sides in preparation
> for the new release. See the following test cases:
>
>
> ### This one locks up IPython unless the figure window is killed. If you
> do an additional plt.show() when no figure is up, you get a complete
> lock-up of the shell.
>
> I[1]: import matplotlib.pyplot as plt
>
> I[2]: %gui qt
>
> I[3]: plt.plot(range(10))
> O[3]: [<matplotlib.lines.Line2D object at 0xab2686c>]
>
> I[4]: plt.show()
>
>
>
>
> ### The following cannot resolve that issue
>
> I[5]: %gui #disable event loops
>
> I[6]: %gui -a qt
> O[6]: <PyQt4.QtGui.QApplication object at 0xaa477ac>
>
> I[7]: plt.plot(range(10))
> O[7]: [<matplotlib.lines.Line2D object at 0xaf237ac>]
>
> I[8]: plt.show()
>
>
>
> ### In a new IPython, these lines work --no locking after plt.show() "-a"
> makes the difference.
>
> I[1]: import matplotlib.pyplot as plt
>
> I[2]: %gui -a qt
> O[2]: <PyQt4.QtGui.QApplication object at 0x8fdceac>
>
> I[3]: plt.plot(range(10))
> O[3]: [<matplotlib.lines.Line2D object at 0x9a2c84c>]
>
> I[4]: plt.show()
>
>
>
>
> ================================================================================
> Platform :
> Linux-2.6.29.6-217.2.3.fc11.i686.PAE-i686-with-fedora-11-Leonidas
> Python : ('CPython', 'tags/r26', '66714')
> IPython : 0.11.bzr.r1205
> NumPy : 1.4.0.dev
> Matplotlib : 1.0.svn
>
> ================================================================================
>
> --
> Gökhan
>
From: Fernando P. <fpe...@gm...> - 2009-09-08 20:46:30
Hey Gokhan,
thanks for the summary.
On Tue, Sep 8, 2009 at 12:45 PM, Gökhan Sever <gok...@gm...> wrote:
> ### In a new IPython, these lines work --no locking after plt.show() "-a"
> makes the difference.
>
> I[1]: import matplotlib.pyplot as plt
>
> I[2]: %gui -a qt
> O[2]: <PyQt4.QtGui.QApplication object at 0x8fdceac>
>
> I[3]: plt.plot(range(10))
> O[3]: [<matplotlib.lines.Line2D object at 0x9a2c84c>]
>
> I[4]: plt.show()
If you do
plt.ion()
right after you import it, then you don't need to do 'show'
explicitly anymore. Basically what today's '-pylab' does is:
- a bunch of imports
- the equivalent of %gui, but uglier and at startup
- do plt.ion() for you
- patch %run a little so it does ioff() before starting up and ion() at the end.
As you can see, even with trunk in its current state of upheaval,
you can get almost all of this back with this snippet. This is pretty
much what we'll make available built-in when the dust settles (with
the 'import *' being optional, as they are today):
%gui -a qt
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.pylab as pylab
import matplotlib.mlab as mlab
from numpy import *
from matplotlib.pyplot import *
plt.ion()
### END CODE
Cheers,
f
From: Gökhan S. <gok...@gm...> - 2009-09-08 19:46:11
Hello,
The thread switches will be gone by the release of the new IPython. I am
assuming that some extra work needs to be done on both sides in preparation
for the new release. See the following test cases:
### This one locks up IPython unless the figure window is killed. If you do
an additional plt.show() when no figure is up, you get a complete
lock-up of the shell.
I[1]: import matplotlib.pyplot as plt
I[2]: %gui qt
I[3]: plt.plot(range(10))
O[3]: [<matplotlib.lines.Line2D object at 0xab2686c>]
I[4]: plt.show()
### The following cannot resolve that issue
I[5]: %gui #disable event loops
I[6]: %gui -a qt
O[6]: <PyQt4.QtGui.QApplication object at 0xaa477ac>
I[7]: plt.plot(range(10))
O[7]: [<matplotlib.lines.Line2D object at 0xaf237ac>]
I[8]: plt.show()
### In a new IPython, these lines work --no locking after plt.show() "-a"
makes the difference.
I[1]: import matplotlib.pyplot as plt
I[2]: %gui -a qt
O[2]: <PyQt4.QtGui.QApplication object at 0x8fdceac>
I[3]: plt.plot(range(10))
O[3]: [<matplotlib.lines.Line2D object at 0x9a2c84c>]
I[4]: plt.show()
================================================================================
Platform :
Linux-2.6.29.6-217.2.3.fc11.i686.PAE-i686-with-fedora-11-Leonidas
Python : ('CPython', 'tags/r26', '66714')
IPython : 0.11.bzr.r1205
NumPy : 1.4.0.dev
Matplotlib : 1.0.svn
================================================================================
-- 
Gökhan
From: Andrew S. <str...@as...> - 2009-09-08 18:11:30
John Hunter wrote:
> On Tue, Sep 8, 2009 at 12:34 PM, Andrew Straw<str...@as...> wrote:
> 
>> Michael Droettboom wrote:
>> 
>>> More information after another build iteration.
>>>
>>> The two tests that failed after updating to the unhinted images were
>>> subtests of tests that were failing earlier. If a single test
>>> function outputs multiple images, image comparison stops after the
>>> first mismatched image. So there's nothing peculiar about these
>>> tests, it's just that the system wasn't saying they were failing
>>> before since they were short-circuited by earlier failures. I wonder
>>> if it's possible to run through all the images and batch up all the
>>> failures together, so we don't have these "hidden" failures -- might
>>> mean fewer iterations with the buildbots down the road.
>>> 
>> Ahh, good point. I can collect the failures in the image_comparison()
>> decorator and raise one failure that describes all the failed images.
>> Right now the loop that iterates over the images raises an exception on
>> the first failure, which clearly breaks out of the loop. I'd added it to
>> the nascent TODO list, which I'll check into the repo next to
>> _buildbot_test.py.
>> 
>
> Should I hold off on committing the other formatter baselines until
> you have made these changes so you can test, or do you want me to go
> ahead and commit the rest of these now?
> 
Go ahead -- please don't wait for me. I have many means of causing image
comparison failures when the time comes. :)
-Andrew
From: John H. <jd...@gm...> - 2009-09-08 18:00:55
On Tue, Sep 8, 2009 at 12:34 PM, Andrew Straw<str...@as...> wrote:
> Michael Droettboom wrote:
>> More information after another build iteration.
>>
>> The two tests that failed after updating to the unhinted images were
>> subtests of tests that were failing earlier. If a single test
>> function outputs multiple images, image comparison stops after the
>> first mismatched image. So there's nothing peculiar about these
>> tests, it's just that the system wasn't saying they were failing
>> before since they were short-circuited by earlier failures. I wonder
>> if it's possible to run through all the images and batch up all the
>> failures together, so we don't have these "hidden" failures -- might
>> mean fewer iterations with the buildbots down the road.
> Ahh, good point. I can collect the failures in the image_comparison()
> decorator and raise one failure that describes all the failed images.
> Right now the loop that iterates over the images raises an exception on
> the first failure, which clearly breaks out of the loop. I'd added it to
> the nascent TODO list, which I'll check into the repo next to
> _buildbot_test.py.
Should I hold off on committing the other formatter baselines until
you have made these changes so you can test, or do you want me to go
ahead and commit the rest of these now?
From: Andrew S. <str...@as...> - 2009-09-08 17:36:21
John Hunter wrote:
> I wrote a script at scipy when Andrew and I worked on this to
> recursively move known good actuals into the baselines directory, with
> some yes/no prompting, but it looks like it did not survive the test
> code migration, so we may want to develop something to replace it.
Yes, we do. But I think we should hold off a bit until I get a slightly
better output image hierarchy established. (See my other post for more
detailed thoughts -- our emails crossed in the ether.)
-Andrew
From: Andrew S. <str...@as...> - 2009-09-08 17:34:47
Michael Droettboom wrote:
> More information after another build iteration.
>
> The two tests that failed after updating to the unhinted images were
> subtests of tests that were failing earlier. If a single test
> function outputs multiple images, image comparison stops after the
> first mismatched image. So there's nothing peculiar about these
> tests, it's just that the system wasn't saying they were failing
> before since they were short-circuited by earlier failures. I wonder
> if it's possible to run through all the images and batch up all the
> failures together, so we don't have these "hidden" failures -- might
> mean fewer iterations with the buildbots down the road.
Ahh, good point. I can collect the failures in the image_comparison()
decorator and raise one failure that describes all the failed images.
Right now the loop that iterates over the images raises an exception on
the first failure, which clearly breaks out of the loop. I'd added it to
the nascent TODO list, which I'll check into the repo next to
_buildbot_test.py.
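A minimal sketch of the idea (compare_images and ImageComparisonFailure as in matplotlib.testing; the exact code below is illustrative, not what will be committed):

from matplotlib.testing.compare import compare_images
from matplotlib.testing.exceptions import ImageComparisonFailure

def compare_all(image_pairs, tol):
    # Check every (expected, actual) pair and collect the mismatches
    # instead of raising on the first one.
    failures = []
    for expected, actual in image_pairs:
        err = compare_images(expected, actual, tol)  # None when the images match
        if err is not None:
            failures.append(err)
    if failures:
        raise ImageComparisonFailure(
            "%d image(s) failed comparison:\n%s"
            % (len(failures), "\n".join(str(f) for f in failures)))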
>
> Good news is this does point to having the font problem licked.
Very good news indeed.
From: Michael D. <md...@st...> - 2009-09-08 17:29:15
More information after another build iteration.
The two tests that failed after updating to the unhinted images were 
subtests of tests that were failing earlier. If a single test function 
outputs multiple images, image comparison stops after the first 
mismatched image. So there's nothing peculiar about these tests, it's 
just that the system wasn't saying they were failing before since they 
were short-circuited by earlier failures. I wonder if it's possible to 
run through all the images and batch up all the failures together, so we 
don't have these "hidden" failures -- might mean fewer iterations with 
the buildbots down the road.
Good news is this does point to having the font problem licked.
Mike
On 09/08/2009 12:47 PM, Michael Droettboom wrote:
> Interesting result. I pulled all of the new "actual" files from the 21
> failing tests on the buildbots to my local machine and all of those
> tests now pass for me. Good. Interestingly, there are still two tests
> failing on my machine which did not fail on the buildbots, so I can't
> grab the buildbots' new output. Could this just be a thresholding issue
> for the tolerance value? I'm a little wary of "polluting" the baseline
> images with images from my machine which doesn't have our "standard"
> version of Freetype, so I'll leave those out of SVN for now, but will go
> ahead and commit the new baseline images from the buildbots. Assuming
> these two mystery failures are resolved by pulling new images from the
> buildbots, I think this experiment with turning off hinting is a success.
>
> As an aside, is there an easy way to update the baselines I'm missing?
> At the moment, I'm copying each result file to the correct folder under
> tests/baseline_images, but it takes me a while because I don't know the
> hierarchy by heart and there are 22 failures. I was expecting to just
> manually verify everything was ok and then "cp *.png" from my scratch
> tests folder to baseline_images and let SVN take care of which files had
> actually changed. This is just the naive feedback of a new set of eyes:
> it's extremely useful and powerful what you've put together here.
>
> Mike
>
> On 09/08/2009 12:06 PM, Andrew Straw wrote:
> 
>> Michael Droettboom wrote:
>>
>> 
>>> Doing so, my results are even *less* in agreement with the baseline, but
>>> the real question is whether my results are in agreement with those on
>>> the buildbot machines with this change to forcibly turn hinting off. I
>>> should know pretty quickly when the buildbots start complaining in a few
>>> minutes and I can look at the results ;)
>>>
>>>
>>> 
>> Yes, even though the waterfall is showing green (for the next 2 minutes
>> until my buildbot script bugfix gets run), it's pretty clear from the
>> image failure page that disabling hinting introduced changes to the
>> generated figure appearance. It will be interesting to see if, after
>> checking in the newly generated actual images as the new baseline, the
>> tests start passing on your machine with the newer freetype.
>>
>> In a footnote to myself, I think the ImageComparisonFailure exception
>> should tell nose that the test failed, not that there was an error.
>>
>> -Andrew
>>
>> 
>
From: John H. <jd...@gm...> - 2009-09-08 17:28:32
On Tue, Sep 8, 2009 at 11:47 AM, Michael Droettboom<md...@st...> wrote:
> Interesting result. I pulled all of the new "actual" files from the 21
> failing tests on the buildbots to my local machine and all of those tests
> now pass for me. Good. Interestingly, there are still two tests failing on
> my machine which did not fail on the buildbots, so I can't grab the
> buildbots' new output. Could this just be a thresholding issue for the
> tolerance value? I'm a little wary of "polluting" the baseline images with
> images from my machine which doesn't have our "standard" version of
> Freetype, so I'll leave those out of SVN for now, but will go ahead and
> commit the new baseline images from the buildbots. Assuming these two
> mystery failures are resolved by pulling new images from the buildbots, I
> think this experiment with turning off hinting is a success.
>
Are these two images you are referring to the formatter_ticker_002.png
and polar_wrap_360.png failures? I just committed those from the
actual output on the sage buildbot. But I am curious why you couldn't
pull these down from the buildbot, eg
 http://mpl.code.astraw.com/hardy-py24-amd64-chroot/formatter_ticker_002/actual.png
 http://mpl.code.astraw.com/hardy-py24-amd64-chroot/polar_wrap_360/actual.gif
> As an aside, is there an easy way to update the baselines I'm missing? At
> the moment, I'm copying each result file to the correct folder under
> tests/baseline_images, but it takes me a while because I don't know the
> hierarchy by heart and there are 22 failures. I was expecting to just
> manually verify everything was ok and then "cp *.png" from my scratch tests
> folder to baseline_images and let SVN take care of which files had actually
> changed. This is just the naive feedback of a new set of eyes: it's
> extremely useful and powerful what you've put together here.
I wrote a script at scipy when Andrew and I worked on this to
recursively move known good actuals into the baselines directory, with
some yes/no prompting, but it looks like it did not survive the test
code migration, so we may want to develop something to replace it.
JDH
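A replacement could be fairly small; a rough sketch (directory layout, file extensions and prompt text are assumptions, not John's lost script):

import os
import shutil

def promote_actuals(actual_root, baseline_root):
    # Walk the tree of actual test output and, after a yes/no prompt,
    # copy each image over the matching file under baseline_images.
    # Assumes the destination directories already exist.
    for dirpath, _dirnames, filenames in os.walk(actual_root):
        for name in filenames:
            if not name.endswith(('.png', '.gif', '.svg', '.pdf')):
                continue
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, actual_root)
            dst = os.path.join(baseline_root, rel)
            answer = raw_input('promote %s -> %s? [y/N] ' % (rel, dst))
            if answer.strip().lower().startswith('y'):
                shutil.copyfile(src, dst)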
From: Andrew S. <str...@as...> - 2009-09-08 17:28:27
Michael Droettboom wrote:
> Interesting result. I pulled all of the new "actual" files from the 21 
> failing tests on the buildbots to my local machine and all of those 
> tests now pass for me. Good. Interestingly, there are still two tests 
> failing on my machine which did not fail on the buildbots, so I can't 
> grab the buildbots' new output.
Well, if they're not failing on the buildbots, that means the baseline
in svn can't be too different than what they generate. But it's a good
point that we want the actual output of the buildbots regardless of
whether the test failed.
> Could this just be a thresholding issue 
> for the tolerance value? I'm a little wary of "polluting" the baseline 
> images with images from my machine which doesn't have our "standard" 
> version of Freetype, so I'll leave those out of SVN for now, but will go 
> ahead and commit the new baseline images from the buildbots.
Looking at the 2 images failing on the buildbots, I'm reasonably sure
they were generated by James Evans when he created the first test
infrastructure. So I say go ahead and check in the actual images
generated by the buildbots. (Or did you recently re-upload those images?)
> Assuming 
> these two mystery failures are resolved by pulling new images from the 
> buildbots, I think this experiment with turning off hinting is a success.
> 
Yes, I think so, too. I was going to suggest getting on the freetype
email list to ask them about their opinion on what we're doing.
> As an aside, is there an easy way to update the baselines I'm missing? 
> At the moment, I'm copying each result file to the correct folder under 
> tests/baseline_images, but it takes me a while because I don't know the 
> hierarchy by heart and there are 22 failures. I was expecting to just
> manually verify everything was ok and then "cp *.png" from my scratch 
> tests folder to baseline_images and let SVN take care of which files had 
> actually changed.
Unfortunately, there's no easy baseline update yet. John wrote one for
the old test infrastructure, but I ended up dropping that in the
switchover to the simplified infrastructure. The reason was that the
image comparison mechanism, and the directories to which they were
saved, changed, and thus his script would have required a re-working.
Given that I don't consider the current mechanism for this particularly
good, I was hesitant to invest the effort to port over support for a
crappy layout.
(The trouble with the current actual/baseline/diff result gathering
mechanism is that it uses the filesystem as a means for communication
within the nose test running process in addition to communication with
the buildbot process through hard-coded assumptions about paths and
filenames. If the only concern was within nose, we could presumably
re-work some of the old MplNoseTester plugin to handle the new case, but
given the buildbot consideration it gets more difficult to get these
frameworks talking through supported API calls. Thus, although the
hardcoded path and filename stuff is a hack, it will require some
serious nose and buildbot learning to figure out how to do it the
"right" way. So I'm all for sticking with the hack right now, and making
a bit nicer by doing things like having a better directory hierarchy
layout for the actual result images.)
> This is just the naive feedback of a new set of eyes: 
> it's extremely useful and powerful what you've put together here.
> 
Thanks for the feedback.
The goal is that Joe Dev would think it's easy and useful and thus start
using it. Tests should be simple to write and run so that we actually do
that. Like I wrote earlier, by keeping the tests themselves simple and
clean, I hope we can improve the testing infrastructure mostly
independently of changes to the tests themselves.
-Andrew
From: Michael D. <md...@st...> - 2009-09-08 16:47:30
Interesting result. I pulled all of the new "actual" files from the 21 
failing tests on the buildbots to my local machine and all of those 
tests now pass for me. Good. Interestingly, there are still two tests 
failing on my machine which did not fail on the buildbots, so I can't 
grab the buildbots' new output. Could this just be a thresholding issue 
for the tolerance value? I'm a little wary of "polluting" the baseline 
images with images from my machine which doesn't have our "standard" 
version of Freetype, so I'll leave those out of SVN for now, but will go 
ahead and commit the new baseline images from the buildbots. Assuming 
these two mystery failures are resolved by pulling new images from the 
buildbots, I think this experiment with turning off hinting is a success.
As an aside, is there an easy way to update the baselines I'm missing? 
At the moment, I'm copying each result file to the correct folder under 
tests/baseline_images, but it takes me a while because I don't know the 
hierarchy by heart and there are 22 failures. I was expecting to just
manually verify everything was ok and then "cp *.png" from my scratch 
tests folder to baseline_images and let SVN take care of which files had 
actually changed. This is just the naive feedback of a new set of eyes: 
it's extremely useful and powerful what you've put together here.
Mike
On 09/08/2009 12:06 PM, Andrew Straw wrote:
> Michael Droettboom wrote:
> 
>> Doing so, my results are even *less* in agreement with the baseline, but
>> the real question is whether my results are in agreement with those on
>> the buildbot machines with this change to forcibly turn hinting off. I
>> should know pretty quickly when the buildbots start complaining in a few
>> minutes and I can look at the results ;)
>>
>> 
> Yes, even though the waterfall is showing green (for the next 2 minutes
> until my buildbot script bugfix gets run), it's pretty clear from the
> image failure page that disabling hinting introduced changes to the
> generated figure appearance. It will be interesting to see if, after
> checking in the newly generated actual images as the new baseline, the
> tests start passing on your machine with the newer freetype.
>
> In a footnote to myself, I think the ImageComparisonFailure exception
> should tell nose that the test failed, not that there was an error.
>
> -Andrew
> 
From: Andrew S. <str...@as...> - 2009-09-08 16:07:05
Michael Droettboom wrote:
> Doing so, my results are even *less* in agreement with the baseline, but 
> the real question is whether my results are in agreement with those on 
> the buildbot machines with this change to forcibly turn hinting off. I 
> should know pretty quickly when the buildbots start complaining in a few
> minutes and I can look at the results ;)
> 
Yes, even though the waterfall is showing green (for the next 2 minutes
until my buildbot script bugfix gets run), it's pretty clear from the
image failure page that disabling hinting introduced changes to the
generated figure appearance. It will be interesting to see if, after
checking in the newly generated actual images as the new baseline, the
tests start passing on your machine with the newer freetype.
In a footnote to myself, I think the ImageComparisonFailure exception
should tell nose that the test failed, not that there was an error.
-Andrew
From: Andrew S. <str...@as...> - 2009-09-08 16:00:41
John Hunter wrote:
> Perhaps with hinting turned off this won't be necessary. Ie, maybe we
> can get more agreement across a wide range of freetype versions w/o
> hinting. Are you planning on committing the unhinted baselines?
I have a presentation to give tomorrow, so I'd just as soon let you and
Michael fight the flood of red that is about to occur! :)
But I can step up again later in the week with more time. In the
meantime, why don't I just keep my eye on my email inbox but stay out of
the code and baseline images for the most part?
-Andrew
From: Darren D. <dsd...@gm...> - 2009-09-08 15:57:15
On Tue, Sep 8, 2009 at 11:46 AM, Andrew Straw<str...@as...> wrote:
> Michael Droettboom wrote:
>> On 09/08/2009 10:24 AM, John Hunter wrote:
>>
>>> On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote:
>>>
>>>
>>>> I've been only skimming the surface of the discussion about the new test
>>>> framework up until now.
>>>>
>>>> Just got around to trying it, and every comparison failed because it was
>>>> selecting a different font than that used in the baseline images. (My
>>>> matplotlibrc customizes the fonts).
>>>>
>>>> It seems we should probably force "font.family" to "Bitstream Vera Sans"
>>>> when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera
>>>> Sans'" to the "test" function seems to do the trick, but I'll let Andrew
>>>> make the final call about whether that's the right change. Perhaps we
>>>> should (as with the documentation build) provide a stock matplotlibrc
>>>> specifically for testing, since there will be other things like this? Of
>>>> course, all of these options cause matplotlib.test() to have rcParams
>>>> side-effects. Probably not worth addressing now, but perhaps worth noting.
>>>>
>>>>
>>> We do have a matplotlibrc file in the "test" dir (the dir that lives
>>> next to setup.py, not lib/matplotlib/tests). This is where we run the
>>> buildbot tests from. It might be a good idea to set the font
>>> explicitly in the test code itself so people can run the tests from
>>> any dir, but I'll leave it to Andrew to weigh in on that.
>>>
>>>
>> Sure. If we *don't* decide to set it in the code, we should perhaps add
>> a line suggesting to "run the tests from lib/matplotlib/tests" in the
>> documentation. An even better solution might be to forcibly load the
>> matplotlibrc in that directory (even if it's an install directory) when
>> the tests are run.
>>
> While the default test usage should probably set as much as possible to
> ensure things are identical, we also want to be able to test other code
> paths, so I think I'll add some kind of kwarg to matplotlib.test() to
> handle non-testing-default rcParams. I think setting lots of things,
> including the font, explicitly in the default case is a good idea.
I think the defaults should be used and any non-standard settings
should explicitly define the rc settings. Perhaps a decorator is needed
that lets you pass the rc values, runs the test, and then calls
rcdefaults on the way out? I haven't been following your progress
closely (sorry about that, I am very grateful for all the work you are
doing.)
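Such a decorator could be just a few lines; a sketch (the name with_rc is a placeholder, and the real thing may end up living in matplotlib.testing):

import functools
import matplotlib

def with_rc(rc_overrides):
    # Run the decorated test with the given rc values (a dict such as
    # {'font.family': 'Bitstream Vera Sans'}), restoring the caller's
    # settings on the way out.
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            saved = dict(matplotlib.rcParams)
            matplotlib.rcParams.update(rc_overrides)
            try:
                return func(*args, **kwargs)
            finally:
                matplotlib.rcParams.update(saved)  # or matplotlib.rcdefaults()
        return wrapper
    return decorate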
> Question for the rcParams experts: Can we save a copy of it so that we
> can restore its state after matplotlib.test() is done? (It's just a
> dictionary, right?)
rcdefaults() should reset all the values for you.
Darren
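For the copy-and-restore variant Andrew asks about, rcParams does behave like a dictionary for this purpose; a sketch:

import matplotlib

saved = dict(matplotlib.rcParams)   # snapshot of the user's settings
matplotlib.test()                   # run the suite, which may change rc values
matplotlib.rcParams.update(saved)   # put the user's settings back
# matplotlib.rcdefaults() would instead reset to the built-in defaults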
From: John H. <jd...@gm...> - 2009-09-08 15:52:20
On Tue, Sep 8, 2009 at 10:46 AM, Andrew Straw<str...@as...> wrote:
> While the default test usage should probably set as much as possible to
> ensure things are identical, we also want to be able to test other code
> paths, so I think I'll add some kind of kwarg to matplotlib.test() to handle
> non-testing-default rcParams. I think setting lots of things, including the
> font, explicitly in the default case is a good idea.
>
> Question for the rcParams experts: Can we save a copy of it so that we can
> restore its state after matplotlib.test() is done? (It's just a dictionary,
> right?)
I committed this change
> Yes, I completely agree. In the matplotlib.testing.image_comparison()
> decorator, we right now have only a single image comparison algorithm based
> on RMS error. Perhaps we could try the perceptual difference code you linked
> to? Also, maybe we could simply turn font rendering off completely for a
> majority of the tests? Or maybe the tests should be run with and without
> text drawn, with much lower error tolerances when there's no text?
Perhaps with hinting turned off this won't be necessary. Ie, maybe we
can get more agreement across a wide range of freetype versions w/o
hinting. Are you planning on committing the unhinted baselines?
JDH
From: Andrew S. <str...@as...> - 2009-09-08 15:46:53
Michael Droettboom wrote:
> On 09/08/2009 10:24 AM, John Hunter wrote:
> 
>> On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote:
>> 
>> 
>>> I've been only skimming the surface of the discussion about the new test
>>> framework up until now.
>>>
>>> Just got around to trying it, and every comparison failed because it was
>>> selecting a different font than that used in the baseline images. (My
>>> matplotlibrc customizes the fonts).
>>>
>>> It seems we should probably force "font.family" to "Bitstream Vera Sans"
>>> when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera
>>> Sans'" to the "test" function seems to do the trick, but I'll let Andrew
>>> make the final call about whether that's the right change. Perhaps we
>>> should (as with the documentation build) provide a stock matplotlibrc
>>> specifically for testing, since there will be other things like this? Of
>>> course, all of these options cause matplotlib.test() to have rcParams
>>> side-effects. Probably not worth addressing now, but perhaps worth noting.
>>> 
>>> 
>> We do have a matplotlibrc file in the "test" dir (the dir that lives
>> next to setup.py, not lib/matplotlib/tests). This is where we run the
>> buildbot tests from. It might be a good idea to set the font
>> explicitly in the test code itself so people can run the tests from
>> any dir, but I'll leave it to Andrew to weigh in on that.
>> 
>> 
> Sure. If we *don't* decide to set it in the code, we should perhaps add 
> a line suggesting to "run the tests from lib/matplotlib/tests" in the 
> documentation. An even better solution might be to forcibly load the 
> matplotlibrc in that directory (even if it's an install directory) when 
> the tests are run.
> 
While the default test usage should probably set as much as possible to 
ensure things are identical, we also want to be able to test other code 
paths, so I think I'll add some kind of kwarg to matplotlib.test() to 
handle non-testing-default rcParams. I think setting lots of things, 
including the font, explicitly in the default case is a good idea.
Question for the rcParams experts: Can we save a copy of it so that we 
can restore its state after matplotlib.test() is done? (It's just a 
dictionary, right?)
>> 
>> 
>>> I am also still getting 6 image comparison failures due to hinting
>>> differences (I've attached one of the diffs as an example). Since I haven't
>>> been following closely, what's the status on that? Should we be seeing
>>> these as failures? What type of hinting are the baseline images produced
>>> with?
>>> 
>>> 
>> We ended up deciding to do identical source builds of freetype to make
>> sure there were no version differences or freetype configuration
>> differences. We are using freetype 2.3.5 with the default
>> configuration. We have seen other versions, eg 2.3.7, even in the
>> default configuration, give rise to different font renderings, as you
>> are seeing. This will make testing hard for plain-ol-users, since it
>> is a lot to ask them to install a special version of freetype for
>> testing. The alternative, which we discussed before, is to expose the
>> unhinted option to the frontend, and do all testing with unhinted
>> text.
>> 
>> 
> I just committed a change to add a "text.hinting" rcParam (which is 
> currently only followed by the Agg backend, though it might make sense 
> for Cairo and macosx to also obey it). This param is then forcibly set 
> to False when the tests are run.
>
> Doing so, my results are even *less* in agreement with the baseline, but 
> the real question is whether my results are in agreement with those on 
> the buildbot machines with this change to forcibly turn hinting off. I 
> should know pretty quickly when the buildbots start complaining in a few
> minutes and I can look at the results ;)
> 
I think we compiled freetype with no hinting as a configuration option, 
so I don't anticipate a failure.
Of course, now I look at the waterfall display, see a bunch of green, 
think "this looks suspicious" (what does that say about my 
personality?), click the log of the stdio of the "test" components and 
see a whole bunch of errors. It seems when I switched over to the 
matplotlib.test() call for running the tests, I forgot to set the exit 
code. Let me do that right now. Expect a flood of buildbot errors in the 
near future...
> Hopefully we can find a way for Joe Developer to run these tests without 
> a custom build of freetype.
> 
Yes, I completely agree. In the matplotlib.testing.image_comparison() 
decorator, we right now have only a single image comparison algorithm 
based on RMS error. Perhaps we could try the perceptual difference code 
you linked to? Also, maybe we could simply turn font rendering off 
completely for a majority of the tests? Or maybe the tests should be run 
with and without text drawn, with much lower error tolerances when 
there's no text?
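For reference, the RMS comparison mentioned above boils down to something like this (the idea only, not the actual matplotlib.testing code):

import numpy as np

def rms_error(expected, actual):
    # Root-mean-square pixel difference between two images given as
    # equal-shaped uint8 arrays.
    diff = expected.astype(np.float64) - actual.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

A comparison passes when rms_error(expected, actual) is at or below the chosen tolerance.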
The nice thing about our test infrastructure now is that it's pretty 
small, lightweight, and flexible. The image comparison stuff is just 
done in a single decorator function, and the only nose plugin is the 
"known failure" plugin. We can continue writing tests which I hope will 
be mostly independent of improving the infrastructure.
From: Michael D. <md...@st...> - 2009-09-08 15:26:13
On 09/08/2009 10:24 AM, John Hunter wrote:
> On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote:
> 
>> I've been only skimming the surface of the discussion about the new test
>> framework up until now.
>>
>> Just got around to trying it, and every comparison failed because it was
>> selecting a different font than that used in the baseline images. (My
>> matplotlibrc customizes the fonts).
>>
>> It seems we should probably force "font.family" to "Bitstream Vera Sans"
>> when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera
>> Sans'" to the "test" function seems to do the trick, but I'll let Andrew
>> make the final call about whether that's the right change. Perhaps we
>> should (as with the documentation build) provide a stock matplotlibrc
>> specifically for testing, since there will be other things like this? Of
>> course, all of these options cause matplotlib.test() to have rcParams
>> side-effects. Probably not worth addressing now, but perhaps worth noting.
>> 
> We do have a matplotlibrc file in the "test" dir (the dir that lives
> next to setup.py, not lib/matplotlib/tests). This is where we run the
> buildbot tests from. It might be a good idea to set the font
> explicitly in the test code itself so people can run the tests from
> any dir, but I'll leave it to Andrew to weigh in on that.
> 
Sure. If we *don't* decide to set it in the code, we should perhaps add 
a line suggesting to "run the tests from lib/matplotlib/tests" in the 
documentation. An even better solution might be to forcibly load the 
matplotlibrc in that directory (even if it's an install directory) when 
the tests are run.
> 
>> I am also still getting 6 image comparison failures due to hinting
>> differences (I've attached one of the diffs as an example). Since I haven't
>> been following closely, what's the status on that? Should we be seeing
>> these as failures? What type of hinting are the baseline images produced
>> with?
>> 
> We ended up deciding to do identical source builds of freetype to make
> sure there were no version differences or freetype configuration
> differences. We are using freetype 2.3.5 with the default
> configuration. We have seen other versions, eg 2.3.7, even in the
> default configuration, give rise to different font renderings, as you
> are seeing. This will make testing hard for plain-ol-users, since it
> is a lot to ask them to install a special version of freetype for
> testing. The alternative, which we discussed before, is to expose the
> unhinted option to the frontend, and do all testing with unhinted
> text.
> 
I just committed a change to add a "text.hinting" rcParam (which is 
currently only followed by the Agg backend, though it might make sense 
for Cairo and macosx to also obey it). This param is then forcibly set 
to False when the tests are run.
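In rc terms the effect is simply (a sketch; where exactly the test harness sets it is an implementation detail):

import matplotlib
matplotlib.rcParams['text.hinting'] = False   # currently honoured only by the Agg backend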
Doing so, my results are even *less* in agreement with the baseline, but 
the real question is whether my results are in agreement with those on 
the buildbot machines with this change to forcibly turn hinting off. I 
should know pretty quickly when the buildbots start complaining in a few
minutes and I can look at the results ;)
Hopefully we can find a way for Joe Developer to run these tests without 
a custom build of freetype. FWIW, I'm using the freetype 2.3.9 packaged 
with FC11.
Cheers,
Mike
From: John H. <jd...@gm...> - 2009-09-08 15:24:54
On Tue, Sep 8, 2009 at 10:14 AM, Andrew Straw<str...@as...> wrote:
>> but I do not see an additional test being run (I still get the usual
>> 26 tests). Is there another step to getting this to be picked up by
>> the test harness?
>>
>
> As described in the "Creating a new module in matplotlib.tests" of the
> developer coding guide (see line 780 of
> http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7664&view=markup
> ):
>
> Let's say you've added a new module named
> ``matplotlib.tests.test_whizbang_features``. To add this module to the list
> of default tests, append its name to ``default_test_modules`` in
> :file:`lib/matplotlib/__init__.py`.
Thanks, missed that. Since I added test_dates which automagically
worked, I didn't know I needed to add the module anywhere, but I'm
guessing you added that for me. I added test_image to the default, so
it should fire off shortly.
JDH
From: Andrew S. <str...@as...> - 2009-09-08 15:14:18
John Hunter wrote:
> I must be missing something obvious, but I tried to add a new module
> to lib/matplotlib/tests called test_image, which has a single method
> so far, test_image_interps. I added the standard decorator and
> baseline image, and I can see it being installed in the stdio on the
> sage buildbot
>
> http://mpl-buildbot.code.astraw.com/builders/Mac%20OS%20X%2C%20Python%202.6%2C%20x86/builds/109/steps/test/logs/stdio
>
> but I do not see an additional test being run (I still get the usual
> 26 tests). Is there another step to getting this to be picked up by
> the test harness?
> 
As described in the "Creating a new module in matplotlib.tests" of the 
developer coding guide (see line 780 of 
http://matplotlib.svn.sourceforge.net/viewvc/matplotlib/trunk/matplotlib/doc/devel/coding_guide.rst?revision=7664&view=markup 
):
Let's say you've added a new module named 
``matplotlib.tests.test_whizbang_features``. To add this module to the 
list of default tests, append its name to ``default_test_modules`` in 
:file:`lib/matplotlib/__init__.py`.
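Concretely, that list lives in lib/matplotlib/__init__.py and looks roughly like this (only the modules mentioned in this thread are shown; the real list has more entries):

default_test_modules = [
    'matplotlib.tests.test_dates',
    'matplotlib.tests.test_image',   # the new module John mentions adding
    ]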
From: John H. <jd...@gm...> - 2009-09-08 14:24:44
On Tue, Sep 8, 2009 at 8:54 AM, Michael Droettboom<md...@st...> wrote:
> I've been only skimming the surface of the discussion about the new test
> framework up until now.
>
> Just got around to trying it, and every comparison failed because it was
> selecting a different font than that used in the baseline images. (My
> matplotlibrc customizes the fonts).
>
> It seems we should probably force "font.family" to "Bitstream Vera Sans"
> when running the tests. Adding "rcParams['font.family'] = 'Bitstream Vera
> Sans'" to the "test" function seems to do the trick, but I'll let Andrew
> make the final call about whether that's the right change. Perhaps we
> should (as with the documentation build) provide a stock matplotlibrc
> specifically for testing, since there will be other things like this? Of
> course, all of these options cause matplotlib.test() to have rcParams
> side-effects. Probably not worth addressing now, but perhaps worth noting.
We do have a matplotlibrc file in the "test" dir (the dir that lives
next to setup.py, not lib/matplotlib/tests). This is where we run the
buildbot tests from. It might be a good idea to set the font
explicitly in the test code itself so people can run the tests from
any dir, but I'll leave it to Andrew to weigh in on that.
> I am also still getting 6 image comparison failures due to hinting
> differences (I've attached one of the diffs as an example). Since I haven't
> been following closely, what's the status on that? Should we be seeing
> these as failures? What type of hinting are the baseline images produced
> with?
We ended up deciding to do identical source builds of freetype to make
sure there were no version differences or freetype configuration
differences. We are using freetype 2.3.5 with the default
configuration. We have seen other versions, eg 2.3.7, even in the
default configuration, give rise to different font renderings, as you
are seeing. This will make testing hard for plain-ol-users, since it
is a lot to ask them to install a special version of freetype for
testing. The alternative, which we discussed before, is to expose the
unhinted option to the frontend, and do all testing with unhinted
text.
JDH
From: Michael D. <md...@st...> - 2009-09-08 13:55:48
I've been only skimming the surface of the discussion about the new test 
framework up until now.
Just got around to trying it, and every comparison failed because it was 
selecting a different font than that used in the baseline images. (My 
matplotlibrc customizes the fonts).
It seems we should probably force "font.family" to "Bitstream Vera Sans" 
when running the tests. Adding "rcParams['font.family'] = 'Bitstream
Vera Sans'" to the "test" function seems to do the trick, but I'll let 
Andrew make the final call about whether that's the right change. 
Perhaps we should (as with the documentation build) provide a stock 
matplotlibrc specifically for testing, since there will be other things 
like this? Of course, all of these options cause matplotlib.test() to 
have rcParams side-effects. Probably not worth addressing now, but
perhaps worth noting.
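A stock test matplotlibrc would only need to pin the handful of settings the image comparisons depend on, for example (contents illustrative; text.hinting is the rcParam added elsewhere in this thread):

backend      : Agg
font.family  : Bitstream Vera Sans
text.hinting : False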
I am also still getting 6 image comparison failures due to hinting 
differences (I've attached one of the diffs as an example). Since I 
haven't been following closely, what's the status on that? Should we be 
seeing these as failures? What type of hinting are the baseline images 
produced with?
Mike
From: John H. <jd...@gm...> - 2009-09-08 11:51:21
I must be missing something obvious, but I tried to add a new module
to lib/matplotlib/tests called test_image, which has a single method
so far, test_image_interps. I added the standard decorator and
baseline image, and I can see it being installed in the stdio on the
sage buildbot
http://mpl-buildbot.code.astraw.com/builders/Mac%20OS%20X%2C%20Python%202.6%2C%20x86/builds/109/steps/test/logs/stdio
but I do not see an additional test being run (I still get the usual
26 tests). Is there another step to getting this to be picked up by
the test harness?
JDH
