matplotlib-devel

From: Jouni K. S. <jk...@ik...> - 2009年09月22日 19:09:37
I am thinking about adding pdf comparison ability to compare_images. One
simple way to do this would be to convert pdf files to pngs using
Ghostscript: if we store reference pdf files, and both the reference
file and the result of the test are converted with exactly the
same version of gs, there should be no font-rendering or antialiasing
mismatches.
Can we assume that all test computers will have some version of
Ghostscript installed and callable as "gs"?
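For concreteness, the conversion step would amount to something along these
lines (flags and resolution here are illustrative, not a settled interface):

  import subprocess

  def pdf_to_png(pdf_path, png_path, dpi=100):
      """Rasterize a pdf with Ghostscript so two pdfs can be compared as pngs."""
      subprocess.check_call([
          'gs', '-dNOPAUSE', '-dBATCH', '-dSAFER',
          '-sDEVICE=png16m', '-r%d' % dpi,
          '-sOutputFile=%s' % png_path,
          pdf_path,
      ])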
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Michael D. <md...@st...> - 2009年09月22日 19:24:30
Jouni K. Seppänen wrote:
> I am thinking about adding pdf comparison ability to compare_images. One
> simple way to do this would be to convert pdf files to pngs using
> Ghostscript: if we store reference pdf files, and both the reference
> file and the result of the test are converted with exactly the
> same version of gs, there should be no font-rendering or antialiasing
> mismatches.
>
> Can we assume that all test computers will have some version of
> Ghostscript installed and callable as "gs"?
> 
We can probably standardize the version of gs on the buildbot machines, 
but it's been very useful up to now to have tests that can run on a 
variety of developer machines as well. 
I don't know how different the output will be from different versions of 
gs -- maybe we should just try it and see. I have a pretty old version 
of gs on my RHEL4 box (7.07). If you want me to send you a png of a 
particular pdf to directly compare with yours before you even start with 
the test infrastructure, I'm happy to do that.
Mike
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: Jouni K. S. <jk...@ik...> - 2009年09月23日 15:49:15
Andrew Straw <str...@as...> writes:
> Jouni - I don't think this would be hard to add, but I'm swamped at
> work. If this is an itch you'd like to scratch, feel free to hack away
> on the image_comparison() function in
> lib/matplotlib/testing/decorators.py -- it's a pretty straightforward
> piece of code.
Changing that is probably easy enough, but how should the overall code
path look?
I was planning to switch backends in matplotlib.test after it runs the
Agg tests, so that the same test cases could be used to produce pdf
files (that's why they save files without extensions, right?) but this
seems to be impossible. The matplotlib.use function is a no-op if
matplotlib.backends has been imported, regardless of the warn argument.
For example, the following code produces two png files:
 #!/usr/bin/env python
 import matplotlib
 matplotlib.use('agg')
 from matplotlib import pyplot
 pyplot.plot([3,1,4,1])
 pyplot.savefig('foo1')
 pyplot.switch_backend('pdf')
 pyplot.plot([5,9,2,6])
 pyplot.savefig('foo2')
If you interchange the 'agg' and 'pdf' strings, you get two pdf files.
It looks like the following change to matplotlib/__init__.py would fix
this, but I'm a little doubtful since maybe there was a good reason to
make it like it is:
--- __init__.py	(revision 7815)
+++ __init__.py	(working copy)
@@ -822,8 +822,8 @@
     make the backend switch work (in some cases, eg pure image
     backends) so one can set warn=False to supporess the warnings
     """
-    if 'matplotlib.backends' in sys.modules:
-        if warn: warnings.warn(_use_error_msg)
+    if 'matplotlib.backends' in sys.modules and warn:
+        warnings.warn(_use_error_msg)
         return
     arg = arg.lower()
     if arg.startswith('module://'):
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: John H. <jd...@gm...> - 2009年09月23日 16:06:56
On Wed, Sep 23, 2009 at 10:48 AM, Jouni K. Seppänen <jk...@ik...> wrote:
> Andrew Straw <str...@as...> writes:
>
>> Jouni - I don't think this would be hard to add, but I'm swamped at
>> work. If this is an itch you'd like to scratch, feel free to hack away
>> on the image_comparison() function in
>> lib/matplotlib/testing/decorators.py -- it's a pretty straightforward
>> piece of code.
>
> Changing that is probably easy enough, but how should the overall code
> path look?
>
> I was planning to switch backends in matplotlib.test after it runs the
> Agg tests, so that the same test cases could be used to produce pdf
> files (that's why they save files without extensions, right?) but this
> seems to be impossible. The matplotlib.use function is a no-op if
> matplotlib.backends has been imported, regardless of the warn argument.
> For example, the following code produces two png files:
>
>  #!/usr/bin/env python
>  import matplotlib
>
>  matplotlib.use('agg')
>  from matplotlib import pyplot
>  pyplot.plot([3,1,4,1])
>  pyplot.savefig('foo1')
Take a look at the pyplot "switch_backends" function.
Alternatively, agg knows how to save pdf if given the extension, so we
could wire up the testing to use a module level extension set
somewhere which could be updated for each backend. This is probably
safer and cleaner than switch_backends.
JDH
From: Andrew S. <str...@as...> - 2009年09月23日 17:43:10
Attachments: test-pdf-idea.patch
Jouni K. Seppänen wrote:
> John Hunter <jd...@gm...> writes:
>
> 
>>> pyplot.savefig('foo1')
>>> 
>> Take a look at the pyplot "switch_backends" function.
>> 
>
> Yes, that function was on the next line after the part you quoted. :-)
> It calls matplotlib.use with warn=False, but that function ends up doing
> nothing.
>
> 
>> Alternatively, agg knows how to save pdf if given the extension, so we
>> could wire up the testing to use a module level extension set
>> somewhere which could be updated for each backend. This is probably
>> safer and cleaner than switch_backends
>> 
>
> That sounds complicated. How about having the test cases call savefig
> with all the relevant file formats? That doesn't look so nice if the
> test cases end up with a big block of savefig calls, but it has the
> advantage that there is no magic involved and it is very obvious what is
> going on.
> 
Sorry, I should have been more clear. I was thinking that the
image_compare() decorator would call the test function multiple times,
having switched the backend between invocations. Thus, the call to
savefig() would continue not to explicitly set the extension. I've
quickly modified the source to reflect my idea, but I haven't had a
chance to flesh it out or test it. It should show the idea, though. See
attached.
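Roughly, the idea is something like the following (a sketch with made-up
details, not the attached patch; it assumes switching backends actually takes
effect, cf. the __init__.py change above):

  import matplotlib.pyplot as plt

  def image_comparison(baseline, backends=('agg', 'pdf')):
      """Sketch: run the decorated test once per backend and compare each output."""
      def decorator(draw_func):
          def wrapper():
              for backend in backends:
                  plt.switch_backend(backend)
                  plt.close('all')
                  draw_func()   # the test saves its figure with a bare filename
                  # ...compare the file just written against the stored baseline...
          wrapper.__name__ = draw_func.__name__
          return wrapper
      return decorator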
-Andrew
From: Jouni K. S. <jk...@ik...> - 2009年09月27日 19:09:40
Andrew Straw <str...@as...> writes:
> Thus, the call to savefig() would continue not to explicitly set the
> extension. I've quickly modified the source to reflect my idea, but I
> haven't had a chance to flesh it out or test it. It should show the
> idea, though. See attached.
I committed something based on this, and a new rc parameter
savefig.extension that sets the filename extension when you call savefig
with a bare filename. The pdf tests seem to be working, at least for me,
but I am sure that the code can be improved.
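Based on that description, usage is roughly along these lines (a minimal
sketch; file names are arbitrary):

  import matplotlib
  matplotlib.rcParams['savefig.extension'] = 'pdf'   # the new rc parameter
  from matplotlib import pyplot as plt

  plt.plot([1, 2, 3])
  plt.savefig('test_output')   # bare filename -> written out as test_output.pdf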
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Andrew S. <str...@as...> - 2009年09月22日 20:51:43
Michael Droettboom wrote:
> Jouni K. Seppänen wrote:
> 
>> I am thinking about adding pdf comparison ability to compare_images. One
>> simple way to do this would be to convert pdf files to pngs using
>> Ghostscript: if we store reference pdf files, and both the reference
>> file and the result of the test are converted with exactly the
>> same version of gs, there should be no font-rendering or antialiasing
>> mismatches.
>>
>> Can we assume that all test computers will have some version of
>> Ghostscript installed and callable as "gs"?
>> 
>> 
> We can probably standardize the version of gs on the buildbot machines, 
> but it's been very useful up to now to have tests that can run on a 
> variety of developer machines as well. 
>
> I don't know how different the output will be from different versions of 
> gs -- maybe we should just try it and see. I have a pretty old version 
> of gs on my RHEL4 box (7.07). If you want me to send you a png of a 
> particular pdf to directly compare with yours before you even start with 
> the test infrastructure, I'm happy to do that.
> 
I understood Jouni's idea to be to save the .pdfs as baseline images --
then the same version of gs would be used to generate the rasterized
images for the baseline and test result -- the version on your computer.
I think this is the way to go (either that or compare the PDFs directly
somehow).
Anyhow, allowing the test infrastructure to support testing multiple
backends is why I removed file extensions from the test image name in
the first place, so anything along these lines should hopefully be quite
doable. We could add a keyword arg to the image comparison decorator
that specifies which image formats to test. Alternatively, it could
perform comparisons based on the presence of baseline images with known
extensions.
-Andrew
From: Andrew S. <str...@as...> - 2009年09月23日 17:57:01
John Hunter wrote:
> On Wed, Sep 23, 2009 at 12:42 PM, Andrew Straw <str...@as...> wrote:
>
> 
>> Sorry, I should have been more clear. I was thinking that the
>> image_compare() decorator would call the test function multiple times,
>> having switched the backend between invocations. Thus, the call to
>> savefig() would continue not to explicitly set the extension. I've
>> quickly modified the source to reflect my idea, but I haven't had a
>> chance to flesh it out or test it. It should show the idea, though. See
>> attached.
>> 
>
> Why not have the decorator pass the extension in to the test funcs --
> agg can print to pdf, ps, svg and png
> 
I'm not sure what you're suggesting. Presumably if we're driving agg OK
to draw .png, it will also draw .pdf OK (or does it have a pdf vector
backend independent of the MPL pdf backend that we want to test separately?)
I was just thinking it would be easiest to have test functions that look
like:
@image_comparison('my_figure')
def my_figure_test():
    plt.plot([1,2,3],[4,5,6])
    plt.savefig('my_figure')
This could automatically test all backends we have the infrastructure
and the baseline images for. It doesn't force the test writer to worry
about that stuff.
-Andrew
From: John H. <jd...@gm...> - 2009年09月23日 18:13:10
On Wed, Sep 23, 2009 at 12:56 PM, Andrew Straw <str...@as...> wrote:
> John Hunter wrote:
>> On Wed, Sep 23, 2009 at 12:42 PM, Andrew Straw <str...@as...> wrote:
>>
>>
>>> Sorry, I should have been more clear. I was thinking that the
>>> image_compare() decorator would call the test function multiple times,
>>> having switched the backend between invocations. Thus, the call to
>>> savefig() would continue not to explicitly set the extension. I've
>>> quickly modified the source to reflect my idea, but I haven't had a
>>> chance to flesh it out or test it. It should show the idea, though. See
>>> attached.
>>>
>>
>> Why not have the decorator pass the extension in to the test funcs --
>> agg can print to pdf, ps, svg and png
>>
> I'm not sure what you're suggesting. Presumably if we're driving agg OK
> to draw .png, it will also draw .pdf OK (or does it have a pdf vector
> backend independent of the MPL pdf backend that we want to test separately?)
No, it doesn't have a separate backend, but the backend_agg figure
canvas savefig method knows how to create FigureCanvasPDF etc to use
that backend to write the file w/o having to switch the default
backend with all the attendant hassles. So if you are using *Agg, and
do
    savefig('somefile.pdf')
agg will load the native pdf backend and use it. So I was envisioning
def test_something(ext):
    make_plot()
    fig.savefig('myfile.%s' % ext)
and having the decorator pass in the extensions it wants one-by-one
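To illustrate the point about Agg dispatching on the extension (file names
here are arbitrary):

  import matplotlib
  matplotlib.use('Agg')
  from matplotlib import pyplot as plt

  plt.plot([1, 2, 3])
  plt.savefig('demo.png')   # rendered by Agg itself
  plt.savefig('demo.pdf')   # the Agg canvas hands off to the pdf backend
  plt.savefig('demo.svg')   # likewise for svg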
JDH
From: Andrew S. <str...@as...> - 2009年09月23日 18:22:00
John Hunter wrote:
> On Wed, Sep 23, 2009 at 12:56 PM, Andrew Straw <str...@as...> wrote:
> 
>> John Hunter wrote:
>> 
>>> On Wed, Sep 23, 2009 at 12:42 PM, Andrew Straw <str...@as...> wrote:
>>>
>>>
>>> 
>>>> Sorry, I should have been more clear. I was thinking that the
>>>> image_compare() decorator would call the test function multiple times,
>>>> having switched the backend between invocations. Thus, the call to
>>>> savefig() would continue not to explicitly set the extension. I've
>>>> quickly modified the source to reflect my idea, but I haven't had a
>>>> chance to flesh it out or test it. It should show the idea, though. See
>>>> attached.
>>>>
>>>> 
>>> Why not have the decorator pass the extension in to the test funcs --
>>> agg can print to pdf, ps, svg and png
>>>
>>> 
>> I'm not sure what you're suggesting. Presumably if we're driving agg OK
>> to draw .png, it will also draw .pdf OK (or does it have a pdf vector
>> backend independent of the MPL pdf backend that we want to test separately?)
>> 
>
> No, it doesn't have a separate backend, but the backend_agg figure
> canvas savefig method knows how to create FigureCanvasPDF etc to use
> that backend to write the file w/o having to switch the default
> backend with all the attendant hassles. So if you are using *Agg, and
> do
>
> savefig(somefile.pdf)
>
> agg will load the native pdf backend and use it. So I was envisioning
>
> def test_something(ext):
>     make_plot()
>     fig.savefig('myfile.%s' % ext)
>
> and having the decorator pass in the extensions it wants one-by-one
> 
I see. Is there something like
backend_agg.set_default_savefig_extension()? That would achieve both of
our goals. If it doesn't exist, maybe it would be easy to add?
-Andrew
From: Jouni K. S. <jk...@ik...> - 2009年09月23日 05:17:34
Andrew Straw <str...@as...> writes:
> Michael Droettboom wrote:
>> We can probably standardize the version of gs on the buildbot machines, 
>> but it's been very useful up to now to have tests that can run on a 
>> variety of developer machines as well. 
>>
> I understood Jouni's idea to be to save the .pdfs as baseline images --
> then the same version of gs would be used to generate the rasterized
> images for the baseline and test result -- the version on your computer.
> I think this is the way to go (either that or compare the PDFs directly
> somehow).
Yes, that's what I meant: we want to test that the PDF file generated by
the code is "equivalent" to the baseline, and aside from some metadata
in the files, I think "equivalence" should mean that the files generate
the same rasterized output on some particular PDF renderer.
I suppose Ghostscript is widespread enough that we can assume that it
exists in the test environment? Or is there some buildout magic that we
should add in some file?
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Andrew S. <str...@as...> - 2009年09月23日 06:26:17
Jouni K. Seppänen wrote:
> Andrew Straw <str...@as...> writes:
>
> 
>> Michael Droettboom wrote:
>> 
>>> We can probably standardize the version of gs on the buildbot machines, 
>>> but it's been very useful up to now to have tests that can run on a 
>>> variety of developer machines as well. 
>>>
>>> 
>> I understood Jouni's idea to be to save the .pdfs as baseline images --
>> then the same version of gs would be used to generate the rasterized
>> images for the baseline and test result -- the version on your computer.
>> I think this is the way to go (either that or compare the PDFs directly
>> somehow).
>> 
>
> Yes, that's what I meant: we want to test that the PDF file generated by
> the code is "equivalent" to the baseline, and aside from some metadata
> in the files, I think "equivalence" should mean that the files generate
> the same rasterized output on some particular PDF renderer.
>
> I suppose Ghostscript is widespread enough that we can assume that it
> exists in the test environment? Or is there some buildout magic that we
> should add in some file?
> 
We can always use "gs --version" in subprocess.check_call() and if it's
not installed (or if there's some other error with it) just not compare
the pdf output. That way we still test that the pdf generation at least
doesn't raise an exception, and a test pdf is generated for later
inspection if need be.
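A minimal sketch of that check (the helper name is made up; any failure is
treated as "no usable gs"):

  import subprocess

  def have_ghostscript():
      """Return True if a 'gs' executable responds to --version."""
      try:
          subprocess.check_call(['gs', '--version'],
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
          return True
      except (OSError, subprocess.CalledProcessError):
          return False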
Jouni - I don't think this would be hard to add, but I'm swamped at
work. If this is an itch you'd like to scratch, feel free to hack away
on the image_comparison() function in
lib/matplotlib/testing/decorators.py -- it's a pretty straightforward
piece of code.
-Andrew
From: Andrew S. <str...@as...> - 2009年10月02日 05:27:46
Jouni K. Seppänen wrote:
> Jouni K. Seppänen <jk...@ik...> writes:
>
> 
>> I committed something based on this, and a new rc parameter
>> savefig.extension that sets the filename extension when you call savefig
>> with a bare filename. The pdf tests seem to be working, at least for me,
>> but I am sure that the code can be improved.
>> 
>
> The buildbot was getting errors, since the build environments don't have
> gs. I changed the tests so that this isn't an error. It might be better
> to make it a known fail, but is it possible for the image comparison
> decorator to turn one test function into several cases? I.e., the png
> case could be pass/fail, and the pdf case a known fail if there is no
> Ghostscript.
> 
Hi Jouni,
I just installed gs on one of the buildbots -- so at least the .pdf
generation should get tested on one machine. (The one running the py24
and py25 tests.)
As far as the decorator turning one test in into multiple tests out --
it may be possible. Nose does this automatically for tests like:
import numpy as np
import pylab

def check_sum(func):
    a = 10; b = 20
    assert a + b == func(a, b)

def test_sum():
    for func in [np.add, pylab.add]:
        yield check_sum, func
This test function is a generator that nose will then generate two test
cases out of. So, perhaps the image_comparison decorator could be
changed to become a generator? I'm not 100% sure it will work, but I
don't see why it won't. If it does work, I think this is a good idea.
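Concretely, the decorator could return a generator along these lines (a
sketch only; the comparison helper is hypothetical and stands in for the
existing code in decorators.py):

  def image_comparison(baseline, extensions=('png', 'pdf')):
      """Sketch: turn one figure-drawing test into one nose case per file format."""
      def decorator(draw_func):
          def test_generator():
              for ext in extensions:
                  yield _check_one, draw_func, baseline, ext
          test_generator.__name__ = draw_func.__name__   # keep the test_* name nose looks for
          return test_generator
      return decorator

  def _check_one(draw_func, baseline, ext):
      draw_func()   # draws and saves the figure
      # here: rasterize if needed (gs for pdf) and compare '<baseline>.<ext>'
      # against the stored baseline image -- comparison helper omitted.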
-Andrew
From: John H. <jd...@gm...> - 2009年10月05日 11:26:12
On Fri, Oct 2, 2009 at 12:27 AM, Andrew Straw <str...@as...> wrote:
> I just installed gs on one of the buildbots -- so at least the .pdf
> generation should get tested on one machine. (The one running the py24
> and py25 tests.)
The OSX build bot has been down ever since the build machine was
upgraded to 10.6. Unfortunately, this has triggered some build and
runtime problems for mpl that I haven't been able to crack yet.
From: Jouni K. S. <jk...@ik...> - 2009年09月23日 16:25:28
John Hunter <jd...@gm...> writes:
>>  pyplot.savefig('foo1')
>
> Take a look at the pyplot "switch_backends" function.
Yes, that function was on the next line after the part you quoted. :-)
It calls matplotlib.use with warn=False, but that function ends up doing
nothing.
> Alternatively, agg knows how to save pdf if given the extension, so we
> could wire up the testing to use a module level extension set
> somewhere which could be updated for each backend. This is probably
> safer and cleaner than switch_backends
That sounds complicated. How about having the test cases call savefig
with all the relevant file formats? That doesn't look so nice if the
test cases end up with a big block of savefig calls, but it has the
advantage that there is no magic involved and it is very obvious what is
going on.
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: John H. <jd...@gm...> - 2009年09月23日 17:50:46
On Wed, Sep 23, 2009 at 12:42 PM, Andrew Straw <str...@as...> wrote:
> Sorry, I should have been more clear. I was thinking that the
> image_compare() decorator would call the test function multiple times,
> having switched the backend between invocations. Thus, the call to
> savefig() would continue not to explicitly set the extension. I've
> quickly modified the source to reflect my idea, but I haven't had a
> chance to flesh it out or test it. It should show the idea, though. See
> attached.
Why not have the decorator pass the extension in to the test funcs --
agg can print to pdf, ps, svg and png
From: Jouni K. S. <jk...@ik...> - 2009年09月28日 17:50:31
Jouni K. Seppänen <jk...@ik...> writes:
> I committed something based on this, and a new rc parameter
> savefig.extension that sets the filename extension when you call savefig
> with a bare filename. The pdf tests seem to be working, at least for me,
> but I am sure that the code can be improved.
The buildbot was getting errors, since the build environments don't have
gs. I changed the tests so that this isn't an error. It might be better
to make it a known fail, but is it possible for the image comparison
decorator to turn one test function into several cases? I.e., the png
case could be pass/fail, and the pdf case a known fail if there is no
Ghostscript.
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Jouni K. S. <jk...@ik...> - 2009年10月04日 19:40:12
Andrew Straw <str...@as...> writes:
> This test function is a generator that nose will then generate two test
> cases out of. So, perhaps the image_comparison decorator could be
> changed to become a generator? I'm not 100% sure it will work, but I
> don't see why it won't. If it does work, I think this is a good idea.
It seems to have worked. The build slave with gs shows 65 tests with 2
known failures, and the one without gs shows 30 known failures:
http://mpl-buildbot.code.astraw.com/builders/Ubuntu%208.04%2C%20Python%202.5%2C%20amd64/builds/167/steps/test/logs/stdio
http://mpl-buildbot.code.astraw.com/builders/Ubuntu%208.04%2C%20Python%202.5%2C%20amd64%2C%20no%20dvipng%2C%20no%20latex/builds/91/steps/test/logs/stdio
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Andrew S. <str...@as...> - 2009年10月05日 15:38:54
Jouni K. Seppänen wrote:
> Andrew Straw <str...@as...> writes:
>
> 
>> This test function is a generator that nose will then generate two test
>> cases out of. So, perhaps the image_comparison decorator could be
>> changed to become a generator? I'm not 100% sure it will work, but I
>> don't see why it won't. If it does work, I think this is a good idea.
>> 
>
> It seems to have worked. The build slave with gs shows 65 tests with 2
> known failures, and the one without gs shows 30 known failures:
>
> http://mpl-buildbot.code.astraw.com/builders/Ubuntu%208.04%2C%20Python%202.5%2C%20amd64/builds/167/steps/test/logs/stdio
> http://mpl-buildbot.code.astraw.com/builders/Ubuntu%208.04%2C%20Python%202.5%2C%20amd64%2C%20no%20dvipng%2C%20no%20latex/builds/91/steps/test/logs/stdio
>
> 
Great -- I think this is nice from a test-writer perspective, and I
think it tests just what we want to test -- the appearance of the pdfs.
I think this idea could be easily extended to the ps format and, if
inkscape were installed, we could use the --export-png option to test svg.
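For svg that conversion step might look something like this (old-style
Inkscape command-line flags, as current at the time; paths illustrative):

  import subprocess

  def svg_to_png(svg_path, png_path, dpi=100):
      """Rasterize an svg with Inkscape so it can be compared like the pdfs."""
      subprocess.check_call([
          'inkscape', '--without-gui',
          '--export-png=%s' % png_path,
          '--export-dpi=%d' % dpi,
          svg_path,
      ])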
Thanks,
Andrew
From: Andrew S. <str...@as...> - 2009年10月12日 16:09:50
Jouni K. Seppänen wrote:
> I am thinking about adding pdf comparison ability to compare_images. One
> simple way to do this would be to convert pdf files to pngs using
> Ghostscript: if we store reference pdf files, and both the reference
> file and the result of the test are converted with exactly the
> same version of gs, there should be no font-rendering or antialiasing
> mismatches.
>
> Can we assume that all test computers will have some version of
> Ghostscript installed and callable as "gs"?
>
> 
Hi Jouni,
Sorry for not noticing this earlier, but I'm looking in the baseline 
image directory, and I see a bunch of *_pdf.png files. I guess these 
have been converted to png from pdf on the tester's machine. Do you think 
it makes more sense to have the .pdf files in the test repo and convert 
to png at test run time? This way we don't become dependent on gs 
rendering quirks or differences across pdf renderers. Maybe the files 
are also smaller.
-Andrew
From: Jouni K. S. <jk...@ik...> - 2009年10月12日 16:32:16
Andrew Straw <str...@as...> writes:
> Sorry for not noticing this earlier, but I'm looking in the baseline 
> image directory, and I see a bunch of *_pdf.png files. I guess these 
> have been converted to png from pdf on the tester's machine. Do you think 
> it makes more sense to have the .pdf files in the test repo and convert 
> to png at test run time? This way we don't become dependent on gs 
> rendering quirks or differences across pdf renderers. Maybe the files 
> are also smaller.
That's exactly how it works now, isn't it? You are seeing the *_pdf.png
files because those get produced while running the tests from the *.pdf
files, and they are not checked into the svn repository.
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
From: Andrew S. <str...@as...> - 2009年10月12日 20:14:18
Jouni K. Seppänen wrote:
> Andrew Straw <str...@as...> writes:
>
> 
>> Sorry for not noticing this earlier, but I'm looking in the baseline 
>> image directory, and I see a bunch of *_pdf.png files. I guess these 
>> have been converted to png from pdf on the tester's machine. Do you think 
>> it makes more sense to have the .pdf files in the test repo and convert 
>> to png at test run time? This way we don't become dependent on gs 
>> rendering quirks or differences across pdf renderers. Maybe the files 
>> are also smaller.
>> 
>
> That's exactly how it works now, isn't it? You are seeing the *_pdf.png
> files because those get produced while running the tests from the *.pdf
> files, and they are not checked into the svn repository.
> 
Sorry for the false alarm. My image browser was showing the .png and
.svg files, but not the .pdfs for some reason. I assumed it was showing
all the files in the directory, which led to this false conclusion.
-Andrew