matplotlib-devel — matplotlib developers

Archive for January 2009 (showing 11 results of 11)

From: John H. <jd...@gm...> - 2009-01-06 22:30:11
On Tue, Jan 6, 2009 at 3:07 PM, Andrew Straw <str...@as...> wrote:
> Hi Mike, This sounds like good news. I am swamped right now, but I hope
> to get the git mirror on github up-to-date early next week when work is
> a little less busy.
>
> John (or any developer with SF admin capabilities), if we could set up
> some kind of auto-notification on SVN commits, I could possibly set up a
> script to update the git repo. The thing I've seen so far is to go to
> the Sourceforge project page and under "Admin" select "Subversion",
> then at the bottom in the "Hooks" section, change the entry to
> "ciabot_svn.py - Send commit stats to cia.navi.cx". If you could do
> that, from there I will attempt to take things to auto-update the git
> mirror.
I think you should have the permissions to do this Andrew so give it a
shot and let me know if you encounter any problems.
BTW, hexbin has been really useful to me -- I made a couple of
enhancements (adding the marginal distributions, suppressing cells
whose count falls below some mincnt threshold) so check them out with your use cases.
JDH
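For illustration, a minimal sketch of the two hexbin options mentioned above, assuming the current keyword names (marginals and mincnt)::

    import numpy as np
    import matplotlib.pyplot as plt

    # Dense core with sparse tails makes the effect of mincnt visible.
    x, y = np.random.standard_normal((2, 10000))

    # mincnt hides hexagons whose count falls below the threshold;
    # marginals adds the marginal distributions along each axis.
    plt.hexbin(x, y, gridsize=40, mincnt=3, marginals=True)
    plt.colorbar()
    plt.show()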
From: Andrew S. <str...@as...> - 2009-01-06 21:07:33
Hi Mike, This sounds like good news. I am swamped right now, but I hope
to get the git mirror on github up-to-date early next week when work is
a little less busy.
John (or any developer with SF admin capabilities), if we could set up
some kind of auto-notification on SVN commits, I could possibly set up a
script to update the git repo. The thing I've seen so far is to go to
the Sourceforge project page and under "Admin" select "Subversion",
then at the bottom in the "Hooks" section, change the entry to
"ciabot_svn.py - Send commit stats to cia.navi.cx". If you could do
that, from there I will attempt to take things to auto-update the git
mirror.
-Andrew
Michael Droettboom wrote:
> I have successfully used the git mirror to commit changes to the 
> maintenance branch. I've updated the matplotlib developer docs to 
> describe how to do it (not that bad really), though it takes a while 
> given the v0_98_4 "oops" branch ;) I have yet to figure out all the 
> loop-de-loops required to merge from the maintenance branch to the trunk 
> in git and then push that all back to SVN (should be possible, but may 
> not play well with svnmerge, anyway). The good news is that, as always, 
> svnmerge still works for that purpose.
> 
> Mike
> 
> Michael Droettboom wrote:
>> Thanks. These are really helpful pointers. For me, this is the one 
>> missing piece that would help me use git full-time, particularly with 
>> the way matplotlib and other projects I work on are laid out in SVN. So 
>> I'm pretty motivated to figure this out.
>>
>> I'll certainly share any findings in this regard.
>>
>> Cheers,
>> Mike
>>
>> Andrew Straw wrote:
>> 
>>> Hi Mike,
>>>
>>> I have not imported the branches. (IIRC, this was because there were
>>> several that weren't MPL but other parts of the repo, such as py4science,
>>> toolkits and so on). It may be possible to add just the 0.98.5
>>> maintenance branch without the others, but I won't have a chance
>>> immediately to play around with that.
>>>
>>> To add all the branches to your git repo, you might be able to add
>>> something like "branches = branches/*:refs/remotes/branches/*" to the
>>> [svn-remote "svn"] section of .git/config and re-do "git svn fetch"...
>>> This will grab all the branches over all svn history, which will, I
>>> think, pull in py4science and toolkits branches... And I guess the
>>> download time from svn will be extremely long... In that case it's
>>> probably better to rsync from sourceforge's server to a local disk and
>>> do the git svn checkout that way making a whole new git repo.
>>>
>>> It may be worth attempting to talk to some real git/svn gurus at this
>>> point about tracking (only one or a couple) svn branches with git
>>> branches. So far, I've only dealt with the trunk in my git/svn
>>> interoperation experience.
>>>
>>> -Andrew
>>>
>>> Michael Droettboom wrote:
>>> 
>>> 
>>>> Thanks. I've incorporated your docs into the developer documentation.
>>>>
>>>> My next experiment will be to see if I can track the 0.98.5 maintenance 
>>>> branch with git. SVN tags/* show up as available remote branches, but 
>>>> not branches/*, which leaves me a bit stumped? If you've done this and 
>>>> there's a quick answer, let me know, otherwise I'll do a little digging 
>>>> to see if it's possible.
>>>>
>>>> Mike
>>>>
>>>> Andrew Straw wrote:
>>>> 
>>>> 
>>>>> Hi Michael,
>>>>>
>>>>> The main issue is that we can't use git "normally" because the main
>>>>> history will be kept with svn. Thus, there's going to be a lot of
>>>>> history rewriting going on through the rebase command. (These
>>>>> complications are obviously not the ideal scenario for git newbies...)
>>>>> Rather than answer your questions directly, I'll walk you through how I
>>>>> do things. (This is not tried on the MPL svn repo, but on a private
>>>>> svn repo. I don't see any fundamental differences, though. So this
>>>>> should work.)
>>>>>
>>>>> (Hopefully this will be cut-and-pasteable ReST, which could go in the
>>>>> docs somewhere):
>>>>>
>>>>> Making a git feature branch and committing to svn trunk
>>>>> -------------------------------------------------------
>>>>>
>>>>> Start with a virgin tree in sync with the svn trunk on the git branch
>>>>> "master"::
>>>>>
>>>>> git checkout master
>>>>> git svn rebase
>>>>>
>>>>> To create a new, local branch called "whizbang-branch"::
>>>>>
>>>>> git checkout -b whizbang-branch
>>>>>
>>>>> Do make commits to the local branch::
>>>>>
>>>>> # hack on a bunch of files
>>>>> git add bunch of files
>>>>> git commit -m "modified a bunch of files"
>>>>> # repeat this as necessary
>>>>>
>>>>> Now, go back to the master branch and append the history of your branch
>>>>> to the master branch, which will end up as the svn trunk::
>>>>>
>>>>> git checkout master
>>>>> git svn rebase # Ensure we have most recent svn
>>>>> git rebase whizbang-branch # Append whizbang changes to master branch
>>>>> git svn dcommit -n # Check that this will apply to svn
>>>>> git svn dcommit # Actually apply to svn
>>>>>
>>>>> Finally, you may want to continue working on your whizbang-branch, so
>>>>> rebase it to the new master::
>>>>>
>>>>> git checkout whizbang-branch
>>>>> git rebase master
>>>>>
>>>>> Michael Droettboom wrote:
>>>>> 
>>>>> 
>>>>> 
>>>>>> This is mostly for Andrew Straw, but thought anyone else experimenting
>>>>>> with git may be interested. I'm going through some real newbie pains
>>>>>> here, and I don't think what I'm doing is all that advanced.
>>>>>>
>>>>>> So, I've had a local git repository cloned from github (as per Andrew's
>>>>>> instructions), made a branch, started hacking, all is well. Now, I
>>>>>> would like to update my master branch from SVN to get some of the recent
>>>>>> changes others have been making.
>>>>>>
>>>>>> Following the instructions in the FAQ,
>>>>>>
>>>>>> git svn rebase
>>>>>>
>>>>>> actually results in a number of conflicts in files I didn't touch. I
>>>>>> shouldn't have to resolve these conflicts, right? 'git status' shows no
>>>>>> local changes, nothing staged -- nothing that should conflict.
>>>>>>
>>>>>> It turns out, if I do
>>>>>>
>>>>>> git pull
>>>>>>
>>>>>> then,
>>>>>>
>>>>>> git svn rebase
>>>>>>
>>>>>> all is well.
>>>>>>
>>>>>> Any idea why? Should I add that to the instructions in the FAQ?
>>>>>>
>>>>>> Now, here's where I'm really stumped. I finished my experimental
>>>>>> branch, and I would like to commit it back to SVN.
>>>>>>
>>>>>> This is what I did, with comments preceded by '#'
>>>>>>
>>>>>> # Go back to the master branch
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git checkout master
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> # Merge in experimental
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git merge experimental
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> # Ok -- looks good: experimental new feature is integrated, there were
>>>>>> no conflicts
>>>>>> # However...
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git svn dcommit
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> Committing to
>>>>>> https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib
>>>>>> ...
>>>>>> Merge conflict during commit: File or directory
>>>>>> 'doc/users/whats_new.rst' is out of date; try updating: resource out of
>>>>>> date; try updating at /home/mdroe/usr/libexec/git-core//git-svn line 467
>>>>>> # 1) I didn't change that file, why should I care?
>>>>>> # 2) I don't know how to update it
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git pull
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> Already up-to-date.
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git svn rebase
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> First, rewinding head to replay your work on top of it...
>>>>>> Applying: more doc adds
>>>>>> /home/mdroe/builds/matplotlib.git/.git/rebase-apply/patch:14: trailing
>>>>>> whitespace.
>>>>>> a lot of new features and bug-fixes. warning: 1 line adds whitespace
>>>>>> errors.
>>>>>> Applying: added some docs for linestyles and markers
>>>>>> Applying: Remove trailing whitespace.
>>>>>> Applying: figure/subplot and font_manager bugfixes
>>>>>> Applying: added support for xlwt in exceltools
>>>>>> Applying: fixed a typo in whats_new_98_4_legend.py
>>>>>> Applying: fixed typo in Line2D.set_marker doc.
>>>>>> Applying: /matplotlib/__init__.py: catch OSError when calling subprocess.
>>>>>> Applying: more doc adds
>>>>>> /home/mdroe/builds/matplotlib.git/.git/rebase-apply/patch:14: trailing
>>>>>> whitespace.
>>>>>> a lot of new features and bug-fixes. error: patch failed:
>>>>>> doc/users/whats_new.rst:10
>>>>>> error: doc/users/whats_new.rst: patch does not apply
>>>>>> Using index info to reconstruct a base tree...
>>>>>> <stdin>:14: trailing whitespace.
>>>>>> a lot of new features and bug-fixes. warning: 1 line adds whitespace
>>>>>> errors.
>>>>>> Falling back to patching base and 3-way merge...
>>>>>> No changes -- Patch already applied.
>>>>>> Applying: added some docs for linestyles and markers
>>>>>> error: patch failed: doc/devel/coding_guide.rst:62
>>>>>> error: doc/devel/coding_guide.rst: patch does not apply
>>>>>> error: patch failed: doc/matplotlibrc:43
>>>>>> error: doc/matplotlibrc: patch does not apply
>>>>>> error: patch failed: doc/pyplots/whats_new_98_4_legend.py:4
>>>>>> error: doc/pyplots/whats_new_98_4_legend.py: patch does not apply
>>>>>> error: patch failed: lib/matplotlib/lines.py:313
>>>>>> error: lib/matplotlib/lines.py: patch does not apply
>>>>>> Using index info to reconstruct a base tree...
>>>>>> Falling back to patching base and 3-way merge...
>>>>>> Auto-merged doc/pyplots/whats_new_98_4_legend.py
>>>>>> CONFLICT (content): Merge conflict in doc/pyplots/whats_new_98_4_legend.py
>>>>>> Auto-merged lib/matplotlib/lines.py
>>>>>> Failed to merge in the changes.
>>>>>> Patch failed at 0010.
>>>>>>
>>>>>> When you have resolved this problem run "git rebase --continue".
>>>>>> If you would prefer to skip this patch, instead run "git rebase --skip".
>>>>>> To restore the original branch and stop rebasing run "git rebase --abort".
>>>>>>
>>>>>> rebase refs/remotes/trunk: command returned error: 1
>>>>>> # Ok, I'm back to merging files that don't conflict with my changes!
>>>>>> # I shouldn't have to do that, right?
>>>>>> # And FYI:
>>>>>> 
>>>>>> 
>>>>>> 
>>>>>>> git status
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>> doc/pyplots/whats_new_98_4_legend.py: needs merge
>>>>>> # Not currently on any branch.
>>>>>> # Changes to be committed:
>>>>>> # (use "git reset HEAD <file>..." to unstage)
>>>>>> #
>>>>>> # modified: lib/matplotlib/lines.py
>>>>>> #
>>>>>> # Changed but not updated:
>>>>>> # (use "git add <file>..." to update what will be committed)
>>>>>> #
>>>>>> # unmerged: doc/pyplots/whats_new_98_4_legend.py
>>>>>> #
>>>>>> # Untracked files:
>>>>>> # (use "git add <file>..." to include in what will be committed)
>>>>>> #
>>>>>> # lib/matplotlib/mpl-data/matplotlib.conf
>>>>>> # lib/matplotlib/mpl-data/matplotlibrc
>>>>>> # setupext.pyc
>>>>>> # src/backend_agg.cpp~
>>>>>>
>>>>>> Now I feel stuck. How do I "undo" the merge from experimental to master?
>>>>>>
>>>>>> Sorry if these are obvious questions, but I think I've followed the
>>>>>> git-svn instructions -- I must have made a mistake somewhere along the
>>>>>> way, but I'm not sure how to debug and/or fix it.
>>>>>>
>>>>>> Mike
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>> 
>>> 
>> 
> 
From: Michael D. <md...@st...> - 2009-01-06 20:59:25
I have successfully used the git mirror to commit changes to the 
maintenance branch. I've updated the matplotlib developer docs to 
describe how to do it (not that bad really), though it takes a while 
given the v0_98_4 "oops" branch ;) I have yet to figure out all the 
loop-de-loops required to merge from the maintenance branch to the trunk 
in git and then push that all back to SVN (should be possible, but may 
not play well with svnmerge, anyway). The good news is that, as always, 
svnmerge still works for that purpose.
Mike
Michael Droettboom wrote:
> Thanks. These are really helpful pointers. For me, this is the one 
> missing piece that would help me use git full-time, particularly with 
> the way matplotlib and other projects I work on are laid out in SVN. So 
> I'm pretty motivated to figure this out.
>
> I'll certainly share any findings in this regard.
>
> Cheers,
> Mike
>
> Andrew Straw wrote:
> 
>> Hi Mike,
>>
>> I have not imported the branches. (IIRC, this was because there were
>> several that weren't MPL but other parts of the repo, such as py4science,
>> toolkits and so on). It may be possible to add just the 0.98.5
>> maintenance branch without the others, but I won't have a chance
>> immediately to play around with that.
>>
>> To add all the branches to your git repo, you might be able to add
>> something like "branches = branches/*:refs/remotes/branches/*" to the
>> [svn-remote "svn"] section of .git/config and re-do "git svn fetch"...
>> This will grab all the branches over all svn history, which will, I
>> think, pull in py4science and toolkits branches... And I guess the
>> download time from svn will be extremely long... In that case it's
>> probably better to rsync from sourceforge's server to a local disk and
>> do the git svn checkout that way making a whole new git repo.
>>
>> It may be worth attempting to talk to some real git/svn gurus at this
>> point about tracking (only one or a couple) svn branches with git
>> branches. So far, I've only dealt with the trunk in my git/svn
>> interoperation experience.
>>
>> -Andrew
>>
>> Michael Droettboom wrote:
>> 
>> 
>>> Thanks. I've incorporated your docs into the developer documentation.
>>>
>>> My next experiment will be to see if I can track the 0.98.5 maintenance 
>>> branch with git. SVN tags/* show up as available remote branches, but 
>>> not branches/*, which leaves me a bit stumped? If you've done this and 
>>> there's a quick answer, let me know, otherwise I'll do a little digging 
>>> to see if it's possible.
>>>
>>> Mike
>>>
>>> Andrew Straw wrote:
>>> 
>>> 
>>>> Hi Michael,
>>>>
>>>> The main issue is that we can't use git "normally" because the main
>>>> history will be kept with svn. Thus, there's going to be a lot of
>>>> history rewriting going on through the rebase command. (These
>>>> complications are obviously not the ideal scenario for git newbies...)
>>>> Rather than answer your questions directly, I'll walk you through how I
>>>> do things. (This is not tried on the MPL svn repo, but on a private
>>>> svn repo. I don't see any fundamental differences, though. So this
>>>> should work.)
>>>>
>>>> (Hopefully this will be cut-and-pasteable ReST, which could go in the
>>>> docs somewhere):
>>>>
>>>> Making a git feature branch and committing to svn trunk
>>>> -------------------------------------------------------
>>>>
>>>> Start with a virgin tree in sync with the svn trunk on the git branch
>>>> "master"::
>>>>
>>>> git checkout master
>>>> git svn rebase
>>>>
>>>> To create a new, local branch called "whizbang-branch"::
>>>>
>>>> git checkout -b whizbang-branch
>>>>
>>>> Do make commits to the local branch::
>>>>
>>>> # hack on a bunch of files
>>>> git add bunch of files
>>>> git commit -m "modified a bunch of files"
>>>> # repeat this as necessary
>>>>
>>>> Now, go back to the master branch and append the history of your branch
>>>> to the master branch, which will end up as the svn trunk::
>>>>
>>>> git checkout master
>>>> git svn rebase # Ensure we have most recent svn
>>>> git rebase whizbang-branch # Append whizbang changes to master branch
>>>> git svn dcommit -n # Check that this will apply to svn
>>>> git svn dcommit # Actually apply to svn
>>>>
>>>> Finally, you may want to continue working on your whizbang-branch, so
>>>> rebase it to the new master::
>>>>
>>>> git checkout whizbang-branch
>>>> git rebase master
>>>>
>>>> Michael Droettboom wrote:
>>>> 
>>>> 
>>>> 
>>>>> This is mostly for Andrew Straw, but thought anyone else experimenting
>>>>> with git may be interested. I'm going through some real newbie pains
>>>>> here, and I don't think what I'm doing is all that advanced.
>>>>>
>>>>> So, I've had a local git repository cloned from github (as per Andrew's
>>>>> instructions), made a branch, started hacking, all is well. Now, I
>>>>> would like to update my master branch from SVN to get some of the recent
>>>>> changes others have been making.
>>>>>
>>>>> Following the instructions in the FAQ,
>>>>>
>>>>> git svn rebase
>>>>>
>>>>> actually results in a number of conflicts in files I didn't touch. I
>>>>> shouldn't have to resolve these conflicts, right? 'git status' shows no
>>>>> local changes, nothing staged -- nothing that should conflict.
>>>>>
>>>>> It turns out, if I do
>>>>>
>>>>> git pull
>>>>>
>>>>> then,
>>>>>
>>>>> git svn rebase
>>>>>
>>>>> all is well.
>>>>>
>>>>> Any idea why? Should I add that to the instructions in the FAQ?
>>>>>
>>>>> Now, here's where I'm really stumped. I finished my experimental
>>>>> branch, and I would like to commit it back to SVN.
>>>>>
>>>>> This is what I did, with comments preceded by '#'
>>>>>
>>>>> # Go back to the master branch
>>>>> 
>>>>> 
>>>>> 
>>>>>> git checkout master
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> # Merge in experimental
>>>>> 
>>>>> 
>>>>> 
>>>>>> git merge experimental
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> # Ok -- looks good: experimental new feature is integrated, there were
>>>>> no conflicts
>>>>> # However...
>>>>> 
>>>>> 
>>>>> 
>>>>>> git svn dcommit
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> Committing to
>>>>> https://matplotlib.svn.sourceforge.net/svnroot/matplotlib/trunk/matplotlib
>>>>> ...
>>>>> Merge conflict during commit: File or directory
>>>>> 'doc/users/whats_new.rst' is out of date; try updating: resource out of
>>>>> date; try updating at /home/mdroe/usr/libexec/git-core//git-svn line 467
>>>>> # 1) I didn't change that file, why should I care?
>>>>> # 2) I don't know how to update it
>>>>> 
>>>>> 
>>>>> 
>>>>>> git pull
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> Already up-to-date.
>>>>> 
>>>>> 
>>>>> 
>>>>>> git svn rebase
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> First, rewinding head to replay your work on top of it...
>>>>> Applying: more doc adds
>>>>> /home/mdroe/builds/matplotlib.git/.git/rebase-apply/patch:14: trailing
>>>>> whitespace.
>>>>> a lot of new features and bug-fixes. warning: 1 line adds whitespace
>>>>> errors.
>>>>> Applying: added some docs for linestyles and markers
>>>>> Applying: Remove trailing whitespace.
>>>>> Applying: figure/subplot and font_manager bugfixes
>>>>> Applying: added support for xlwt in exceltools
>>>>> Applying: fixed a typo in whats_new_98_4_legend.py
>>>>> Applying: fixed typo in Line2D.set_marker doc.
>>>>> Applying: /matplotlib/__init__.py: catch OSError when calling subprocess.
>>>>> Applying: more doc adds
>>>>> /home/mdroe/builds/matplotlib.git/.git/rebase-apply/patch:14: trailing
>>>>> whitespace.
>>>>> a lot of new features and bug-fixes. error: patch failed:
>>>>> doc/users/whats_new.rst:10
>>>>> error: doc/users/whats_new.rst: patch does not apply
>>>>> Using index info to reconstruct a base tree...
>>>>> <stdin>:14: trailing whitespace.
>>>>> a lot of new features and bug-fixes. warning: 1 line adds whitespace
>>>>> errors.
>>>>> Falling back to patching base and 3-way merge...
>>>>> No changes -- Patch already applied.
>>>>> Applying: added some docs for linestyles and markers
>>>>> error: patch failed: doc/devel/coding_guide.rst:62
>>>>> error: doc/devel/coding_guide.rst: patch does not apply
>>>>> error: patch failed: doc/matplotlibrc:43
>>>>> error: doc/matplotlibrc: patch does not apply
>>>>> error: patch failed: doc/pyplots/whats_new_98_4_legend.py:4
>>>>> error: doc/pyplots/whats_new_98_4_legend.py: patch does not apply
>>>>> error: patch failed: lib/matplotlib/lines.py:313
>>>>> error: lib/matplotlib/lines.py: patch does not apply
>>>>> Using index info to reconstruct a base tree...
>>>>> Falling back to patching base and 3-way merge...
>>>>> Auto-merged doc/pyplots/whats_new_98_4_legend.py
>>>>> CONFLICT (content): Merge conflict in doc/pyplots/whats_new_98_4_legend.py
>>>>> Auto-merged lib/matplotlib/lines.py
>>>>> Failed to merge in the changes.
>>>>> Patch failed at 0010.
>>>>>
>>>>> When you have resolved this problem run "git rebase --continue".
>>>>> If you would prefer to skip this patch, instead run "git rebase --skip".
>>>>> To restore the original branch and stop rebasing run "git rebase --abort".
>>>>>
>>>>> rebase refs/remotes/trunk: command returned error: 1
>>>>> # Ok, I'm back to merging files that don't conflict with my changes!
>>>>> # I shouldn't have to do that, right?
>>>>> # And FYI:
>>>>> 
>>>>> 
>>>>> 
>>>>>> git status
>>>>>> 
>>>>>> 
>>>>>> 
>>>>> doc/pyplots/whats_new_98_4_legend.py: needs merge
>>>>> # Not currently on any branch.
>>>>> # Changes to be committed:
>>>>> # (use "git reset HEAD <file>..." to unstage)
>>>>> #
>>>>> # modified: lib/matplotlib/lines.py
>>>>> #
>>>>> # Changed but not updated:
>>>>> # (use "git add <file>..." to update what will be committed)
>>>>> #
>>>>> # unmerged: doc/pyplots/whats_new_98_4_legend.py
>>>>> #
>>>>> # Untracked files:
>>>>> # (use "git add <file>..." to include in what will be committed)
>>>>> #
>>>>> # lib/matplotlib/mpl-data/matplotlib.conf
>>>>> # lib/matplotlib/mpl-data/matplotlibrc
>>>>> # setupext.pyc
>>>>> # src/backend_agg.cpp~
>>>>>
>>>>> Now I feel stuck. How do I "undo" the merge from experimental to master?
>>>>>
>>>>> Sorry if these are obvious questions, but I think I've followed the
>>>>> git-svn instructions -- I must have made a mistake somewhere along the
>>>>> way, but I'm not sure how to debug and/or fix it.
>>>>>
>>>>> Mike
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
>
> 
-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA
From: Manuel M. <mm...@as...> - 2009-01-06 20:02:50
Eric Firing wrote:
> Manuel Metz wrote:
>> Hi,
>> I just noted a strange behavior of contour(). When a contour plot is
>> created with negative values and using a single color instead of a cmap,
>> contour _always_ uses the contour.negative_linestyle, even if linestyles
>> are specifically provided:
>>
>> x = linspace(-pi,pi,100)
>> X,Y = meshgrid(x,x)
>> Z = cos(X) + 0.5*cos(Y)
>>
>> contour(X,Y,Z, colors='k', linestyles='solid')
>>
>> I would expect the contour.negative_linestyle to be used when
>> linestyles=None, but not if it is explicitly specified. More seriously,
>> the effect occurs even in the following case:
>>
>> contour_ls = ['solid']*9
>> contour(X,Y,Z,10, colors='k', linestyles=contour_ls)
>>
>>
>> Should we consider this as a bug and fix it ???
> 
> Fixed. Thank you. Oh, rats. I forgot again--I need to put this in the
> maintenance branch, and your immediately preceding change probably
> should go there, too, so I will take care of both now.
Thanks ...
> Eric
From: Eric F. <ef...@ha...> - 2009-01-06 19:42:38
Manuel Metz wrote:
> Hi,
> I just noted a strange behavior of contour(). When a contour plot is
> created with negative values and using a single color instead of a cmap,
> contour _always_ uses the contour.negative_linestyle, even if linestyles
> are specifically provided:
> 
> x = linspace(-pi,pi,100)
> X,Y = meshgrid(x,x)
> Z = cos(X) + 0.5*cos(Y)
> 
> contour(X,Y,Z, colors='k', linestyles='solid')
> 
> I would expect the contour.negative_linestyle to be used when
> linestyles=None, but not if it is explicitly specified. More seriously,
> the effect occurs even in the following case:
> 
> contour_ls = ['solid']*9
> contour(X,Y,Z,10, colors='k', linestyles=contour_ls)
> 
> 
> Should we consider this as a bug and fix it ???
Fixed. Thank you. Oh, rats. I forgot again--I need to put this in the 
maintenance branch, and your immediately preceding change probably 
should go there, too, so I will take care of both now.
Eric
From: Manuel M. <mm...@as...> - 2009-01-06 18:53:12
Hi,
 I just noted a strange behavior of contour(). When a contour plot is
created with negative values and using a single color instead of a cmap,
contour _always_ uses the contour.negative_linestyle, even if linestyles
are specifically provided:
 x = linspace(-pi,pi,100)
 X,Y = meshgrid(x,x)
 Z = cos(X) + 0.5*cos(Y)
 contour(X,Y,Z, colors='k', linestyles='solid')
I would expect the contour.negative_linestyle to be used when
linestyles=None, but not if it is explicitly specified. More seriously,
the effect occurs even in the following case:
 contour_ls = ['solid']*9
 contour(X,Y,Z,10, colors='k', linestyles=contour_ls)
Should we consider this as a bug and fix it ???
mm
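For reference, a short sketch of the behavior being reported, together with the rc setting that controls how negative contour levels are drawn when a single color is used (key name as in matplotlib's rc system)::

    import numpy as np
    import matplotlib as mpl
    import matplotlib.pyplot as plt

    x = np.linspace(-np.pi, np.pi, 100)
    X, Y = np.meshgrid(x, x)
    Z = np.cos(X) + 0.5 * np.cos(Y)

    # rc default that governs how negative levels are drawn when a
    # single color is used instead of a colormap.
    mpl.rcParams['contour.negative_linestyle'] = 'dashed'

    # The report: with a single color, this explicit linestyles argument
    # was being ignored in favor of the rc default for negative levels.
    plt.contour(X, Y, Z, colors='k', linestyles='solid')
    plt.show()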
From: John H. <jd...@gm...> - 2009-01-06 16:11:31
On Tue, Jan 6, 2009 at 9:58 AM, Drain, Theodore R
<the...@jp...> wrote:
> OK - nose it is. How do you want to handle the dependency? My opinion is that since tests are development tools, it's not unreasonable to require that nose be installed by the developer and not as an embedded dependency in MPL (or at least that should be an option).
agreed
JDH
From: Drain, T. R <the...@jp...> - 2009-01-06 15:59:02
OK - nose it is. How do you want to handle the dependency? My opinion is that since tests are development tools, it's not unreasonable to require that nose be installed by the developer and not as an embedded dependency in MPL (or at least that should be an option).
> -----Original Message-----
> From: John Hunter [mailto:jd...@gm...]
> Sent: Tuesday, January 06, 2009 7:03 AM
> To: Drain, Theodore R
> Cc: mat...@li...
> Subject: Re: [matplotlib-devel] John: Thoughts on a standard test
> system
>
> On Mon, Dec 22, 2008 at 11:45 AM, Drain, Theodore R
> <the...@jp...> wrote:
> > John,
> > Sometime in January, we are going to spend some time fixing a few
> minor MPL bugs we've hit and probably work on a few enhancements
> (I'll send you a list in Jan before we start anything - it's nothing
> major). We're also going to work on writing a set of tests that try
> various plots w/ units. I was thinking this would be a good time to
> introduce a standard test harness into the MPL CM tree.
>
> Hey Ted -- Sorry I haven't gotten back to you yet. These proposals
> sound good. I have only very limited experience with unit testing and
> you have tons, so I don't have a lot to add to what you've already
> written, but I have a few inline comments below.
>
>
> > I think we should:
> >
> > 1) Select a standard test harness. The two big hitters seem to be
> unittest and nose. unittest has the advantage that it's shipped w/
> Python. nose seems to do better with automatic discovery of test
> cases.
>
> I prefer nose. I've used both a bit and find nose much more intuitive
> and easy to use. The fact that ipython, numpy, and scipy are all
> using nose makes the choice fairly compelling, especially if some of
> your image specific tests could be ported w/o too much headache.
>
>
> > 2) Establish a set of testing requirements. Naming conventions,
> usage conventions, etc. Things like tests should never print anything
> to the screen (i.e. correct behavior is encoded in the test case) or
> rely on a GUI unless that's what is being tested (allows tests to be
> run w/o an X-server). Basically write some documentation for the test
> system that includes how to use it and what's required of people when
> they add tests.
> >
> > 3) Write a test 'template' for people to use. This would define a
> test case and put TODO statements or something like it in place for
> people to fill in. More than one might be good for various classes of
> tests (maybe an image comparison template for testing agg drawing and a
> non-plot template for testing basic computations like transforms?).
> >
> > Some things we do on my project for our Python test systems:
> >
> > We put all unit tests in a 'test' directory inside the python package
> being tested. The disadvantage of this is that potentially large tests
> are inside the code to be delivered (though a nice delivery script can
> easily strip them out). The advantage of this is that it makes
> coverage checking easier. You can run the test case for a package and
> then check the coverage in the module w/o trying to figure out which
> things should be coverage checked or not. If you put the test cases in
> a different directory tree, then it's much harder to identify coverage
> sources. Though in our case we have 100's of python modules - in MPL's
> case, there is really just MPL, projections, backends, and numerix so
> maybe that's not too much of a problem.
> >
> > Automatic coverage isn't something that is a must-have, but it is
> really nice. I've found that it actually causes developers to write
> more tests because they can run the coverage and get a "score" that
> other people will see. It's also a good way to check a new submission
> to see if the developer has done basic testing of the code.
>
> All of the above sounds reasonable and I don't have strong opinions on
> any of it, so I will defer to those who write the initial framework
> and tests.
>
>
> > For our tests, we require that the test never print anything to the
> screen, clean up any of its output files (i.e. leave the directory in
> the same state it was before), and only report that the test passed or
> failed and if it failed, add some error message. The key thing is that
> the conditions for correctness are encoded into the test itself. We
> have a command line option that gets passed to the test cases to say
> "don't clean up" so that you can examine the output from a failing test
> case w/o modifying the test code. This option is really useful when an
> image comparison fails.
> > We've wrapped the basic python unittest package. It's pretty simple
> and reasonably powerful. I doubt there is anything MPL would be doing
> that it can't handle. The auto-discovery of nose is nice but
> unnecessary in my opinion. As long as people follow a standard way of
> doing things, auto-discovery is fairly easy. Of course if you prefer
> nose and don't mind the additional tool requirement, that's fine too.
> Some things that are probably needed:
> >
> > - command line executable that runs the tests.
> > - support flags for running only some tests
> > - support flags for running only tests that don't need a GUI
> backend
> > (require Agg?). This allows automated testing and visual
> testing to be
> > combined. GUI tests could be placed in identified
> directories and then
> > only run when requested since by their nature they require
> specific backends
> > and user interaction.
> > - nice report on test pass/fail status
> > - hooks to add coverage checking and reporting in the future
> > - test utilities
> > - image comparison tools
> > - ??? basically anything that helps w/ testing and could be
> common across
> > test cases
> >
> > As a first cut, I would suggest something like this:
> >
> > .../test/run.py
> > mplTest/
> > test_unit/
> > test_transform/
> > test_...
> >
> > The run script would execute all/some of the tests. Any common test
> code would be put in the mplTest directory. Any directory named
> 'test_XXX' is for test cases where 'XXX' is some category name that can
> be used in the run script to run a subset of cases. Inside each
> test_XXX directory, one unittest class per file. The run script would
> find the .py files in the test_XXX directories, import them, find all
> the unittest classes, and run them. The run script also sets up
> sys.path so that the mplTest package is available.
> >
> > Links:
> > http://docs.python.org/library/unittest.html
> > http://somethingaboutorange.com/mrl/projects/nose/
> > http://kbyanc.blogspot.com/2007/06/pythons-unittest-module-aint-that-
> bad.html
> >
> > coverage checking:
> > http://nedbatchelder.com/code/modules/coverage.html
> > http://darcs.idyll.org/~t/projects/figleaf/doc/
> >
> > Thoughts?
> > Ted
> > ps: looking at the current unit directory, it looks like at least one
> test (nose_tests) is using nose even though it's not supplied w/ MPL.
> Most of the tests do something and show a plot but the correct behavior
> is never written into the test.
>
>
> My fault -- I wrote some tests to make sure all the different kwargs
> variants were processed properly, but since we did not have a
> "correctness of output" framework in place, punted on that part. I
> think having coverage of the myriad ways of setting properties is of
> some value.
>
> On the issue of units (not unit testing but unit support which is
> motivating your writing of unit test) I think we may need a new
> approach. The current approach is to put unitized data into the
> artists, and update the converted data at the artist layer. I don't
> know that this is the proper design. For this approach to work, every
> scalar and array quantity must support units at the artist layer, and
> all the calculations that are done at the plotting layer (eg error
> bar) to setup these artists must be careful to preserve unitized data
> throughout. So it is burdensome on the artist layer and on the
> plotting function layer.
>
> The problem is compounded because most of the other developers are not
> really aware of how to use the units interface, which I take
> responsibility for because they have oft asked for a design document,
> which I have yet to provide because I am unhappy with the design. So
> new code tends to break functions that once had unit support. Which
> is why we need unit tests ....
>
> I think everything might be easier if mpl had an intermediate class
> layer PlotItem for plot types, eg XYPlot, BarChart, ErrorBar as we
> already do for Legend. The plotting functions would instantiate these
> objects with the input arguments and track unit data through the
> reference to the axis. These plot objects would contain all the
> artist primitives which would store their data in native floating
> point, which would remove the burden on the artists from handling
> units and put it all in the plot creation/update logic. The objects
> would store references to all of the original inputs, and would update
> the primitive artists on unit changes. The basic problem is that the
> unitized data must live somewhere, and I am not sure that the low
> level primitive artists are the best place for that -- it may be a
> better idea to keep this data at the level of a PlotItem and let the
> primitive artists handle already converted floating point data. This
> is analogous to the current approach of passing transformed data to
> the backends to make it easier to write new backends. I need to chew
> on this some more.
>
>
> But this question aside, by all means fire away on creating the unit
> tests.
>
> JDH
From: Darren D. <dsd...@gm...> - 2009-01-06 15:37:39
On Tue, Jan 6, 2009 at 10:02 AM, John Hunter <jd...@gm...> wrote:
> On Mon, Dec 22, 2008 at 11:45 AM, Drain, Theodore R
> <the...@jp...> wrote:
> > John,
> > Sometime in January, we are going to spend some time fixing a few minor
> MPL bugs we've hit and probably work on a few enhancements (I'll send you
> a list in Jan before we start anything - it's nothing major). We're also
> going to work on writing a set of tests that try various plots w/ units. I
> was thinking this would be a good time to introduce a standard test harness
> into the MPL CM tree.
>
> Hey Ted -- Sorry I haven't gotten back to you yet. These proposals
> sound good. I have only very limited experience with unit testing and
> you have tons, so I don't have a lot to add to what you've already
> written, but I have a few inline comments below.
>
>
> > I think we should:
> >
> > 1) Select a standard test harness. The two big hitters seem to be
> unittest and nose. unittest has the advantage that it's shipped w/ Python.
> nose seems to do better with automatic discovery of test cases.
>
> I prefer nose. I've used both a bit and find nose much more intuitive
> and easy to use. The fact that ipython, numpy, and scipy are all
> using nose makes the choice fairly compelling, especially if some of
> your image specific tests could be ported w/o too much headache.
>
I also prefer to use nose. nose can still be used to discover tests written
with unittest.
>
> > 2) Establish a set of testing requirements. Naming conventions, usage
> conventions, etc. Things like tests should never print anything to the
> screen (i.e. correct behavior is encoded in the test case) or rely on a GUI
> unless that's what is being tested (allows tests to be run w/o an X-server).
> Basically write some documentation for the test system that includes how to
> use it and what's required of people when they add tests.
> >
> > 3) Write a test 'template' for people to use. This would define a test
> case and put TODO statements or something like it in place for people to
> fill in. More than one might be good for various classes of tests (maybe an
> image comparison template for testing agg drawing and a non-plot template
> for testing basic computations like transforms?).
> >
> > Some things we do on my project for our Python test systems:
> >
> > We put all unit tests in a 'test' directory inside the python package
> being tested. The disadvantage of this is that potentially large tests are
> inside the code to be delivered (though a nice delivery script can easily
> strip them out). The advantage of this is that it makes coverage checking
> easier. You can run the test case for a package and then check the coverage
> in the module w/o trying to figure out which things should be coverage
> checked or not. If you put the test cases in a different directory tree,
> then it's much harder to identify coverage sources. Though in our case we
> have 100's of python modules - in MPL's case, there is really just MPL,
> projections, backends, and numerix so maybe that's not too much of a
> problem.
> >
> > Automatic coverage isn't something that is a must-have, but it is really
> nice. I've found that it actually causes developers to write more tests
> because they can run the coverage and get a "score" that other people will
> see. It's also a good way to check a new submission to see if the developer
> has done basic testing of the code.
>
> All of the above sounds reasonable and I don't have strong opinions on
> any of it, so I will defer to those who write the initial framework
> and tests.
>
>
> > For our tests, we require that the test never print anything to the
> screen, clean up any of its output files (i.e. leave the directory in the
> same state it was before), and only report that the test passed or failed
> and if it failed, add some error message. The key thing is that the
> conditions for correctness are encoded into the test itself. We have a
> command line option that gets passed to the test cases to say "don't clean
> up" so that you can examine the output from a failing test case w/o
> modifying the test code. This option is really useful when an image
> comparison fails.
> > We've wrapped the basic python unittest package. It's pretty simple and
> reasonably powerful. I doubt there is anything MPL would be doing that it
> can't handle. The auto-discovery of nose is nice but unnecessary in my
> opinion. As long as people follow a standard way of doing things,
> auto-discovery is fairly easy. Of course if you prefer nose and don't mind
> the additional tool requirement, that's fine too. Some things that are
> probably needed:
> >
> > - command line executable that runs the tests.
> > - support flags for running only some tests
> > - support flags for running only tests that don't need a GUI
> backend
> > (require Agg?). This allows automated testing and visual
> testing to be
> > combined. GUI tests could be placed in identified directories
> and then
> > only run when requested since by their nature they require
> specific backends
> > and user interaction.
> > - nice report on test pass/fail status
> > - hooks to add coverage checking and reporting in the future
> > - test utilities
> > - image comparison tools
> > - ??? basically anything that helps w/ testing and could be common
> across
> > test cases
> >
> > As a first cut, I would suggest something like this:
> >
> > .../test/run.py
> > mplTest/
> > test_unit/
> > test_transform/
> > test_...
> >
> > The run script would execute all/some of the tests. Any common test code
> would be put in the mplTest directory. Any directory named 'test_XXX' is
> for test cases where 'XXX' is some category name that can be used in the run
> script to run a subset of cases. Inside each test_XXX directory, one
> unittest class per file. The run script would find the .py files in the
> test_XXX directories, import them, find all the unittest classes, and run
> them. The run script also sets up sys.path so that the mplTest package is
> available.
> >
> > Links:
> > http://docs.python.org/library/unittest.html
> > http://somethingaboutorange.com/mrl/projects/nose/
> >
> http://kbyanc.blogspot.com/2007/06/pythons-unittest-module-aint-that-bad.html
> >
> > coverage checking:
> > http://nedbatchelder.com/code/modules/coverage.html
> http://darcs.idyll.org/~t/projects/figleaf/doc/
> >
> > Thoughts?
> > Ted
> > ps: looking at the current unit directory, it looks like at least one
> test (nose_tests) is using nose even though it's not supplied w/ MPL. Most
> of the tests do something and show a plot but the correct behavior is never
> written into the test.
>
>
> My fault -- I wrote some tests to make sure all the different kwargs
> variants were processed properly, but since we did not have a
> "correctness of output" framework in place, punted on that part. I
> think having coverage of the myriad ways of setting properties is of
> some value.
>
> On the issue of units (not unit testing but unit support which is
> motivating your writing of unit test) I think we may need a new
> approach. The current approach is to put unitized data into the
> artists, and update the converted data at the artist layer. I don't
> know that this is the proper design. For this approach to work, every
> scalar and array quantity must support units at the artist layer, and
> all the calculations that are done at the plotting layer (eg error
> bar) to setup these artists must be careful to preserve unitized data
> throughout. So it is burdensome on the artist layer and on the
> plotting function layer.
>
> The problem is compounded because most of the other developers are not
> really aware of how to use the units interface, which I take
> responsibility for because they have oft asked for a design document,
> which I have yet to provide because I am unhappy with the design. So
> new code tends to break functions that once had unit support. Which
> is why we need unit tests ....
>
I'm not ready to make a proper announcement yet, but I spent most of my free
time over the break continuing development on my Quantities package, which
supports units and unit transformations, and some new support for error
propagation that I just put together yesterday. There is some information at
packages.python.org/quantities, if anyone is interested. Probably the best
place to start is the tutorial in the documentation. I'm planning on
requesting comments and feedback from scipy-dev or numpy-discussion after a
little more development and optimization. The basic interface and
functionality are in place, though. If you guys have a chance to try out the
package and provide some early feedback, I would really appreciate it.
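For anyone curious, a minimal sketch of unit-aware arithmetic with the quantities package; the import name, unit constants, and rescale call are assumptions about its API, not taken from this post::

    import quantities as pq  # assumed import name for the package

    distance = 3.0 * pq.km           # a quantity carries its units
    elapsed = 90.0 * pq.s

    speed = (distance / elapsed).rescale('m/s')  # unit conversion
    print(speed)                     # roughly 33.33 m/s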
>
> I think everything might be easier if mpl had an intermediate class
> layer PlotItem for plot types, eg XYPlot, BarChart, ErrorBar as we
> already do for Legend. The plotting functions would instantiate these
> objects with the input arguments and track unit data through the
> reference to the axis. These plot objects would contain all the
> artist primitives which would store their data in native floating
> point, which would remove the burden on the artists from handling
> units and put it all in the plot creation/update logic. The objects
> would store references to all of the original inputs, and would update
> the primitive artists on unit changes. The basic problem is that the
> unitized data must live somewhere, and I am not sure that the low
> level primitive artists are the best place for that -- it may be a
> better idea to keep this data at the level of a PlotItem and let the
> primitive artists handle already converted floating point data. This
> is analogous to the current approach of passing transformed data to
> the backends to make it easier to write new backends. I need to chew
> on this some more.
>
>
> But this question aside, by all means fire away on creating the unit tests.
>
> JDH
>
>
>
From: John H. <jd...@gm...> - 2009-01-06 15:04:45
On Mon, Dec 22, 2008 at 11:45 AM, Drain, Theodore R
<the...@jp...> wrote:
> John,
> Sometime in January, we are going to spend some time fixing a few minor MPL bugs we've hit and probably work on a few enhancements (I'll send you a list in Jan before we start anything - it's nothing major). We're also going to work on writing a set of tests that try various plots w/ units. I was thinking this would be a good time to introduce a standard test harness into the MPL CM tree.
Hey Ted -- Sorry I haven't gotten back to you yet. These proposals
sound good. I have only very limited experience with unit testing and
you have tons, so I don't have a lot to add to what you've already
written, but I have a few inline comments below.
> I think we should:
>
> 1) Select a standard test harness. The two big hitters seem to be unittest and nose. unittest has the advantage that it's shipped w/ Python. nose seems to do better with automatic discovery of test cases.
I prefer nose. I've used both a bit and find nose much more intuitive
and easy to use. The fact that ipython, numpy, and scipy are all
using nose makes the choice fairly compelling, especially if some of
your image specific tests could be ported w/o too much headache.
> 2) Establish a set of testing requirements. Naming conventions, usage conventions, etc. Things like tests should never print anything to the screen (i.e. correct behavior is encoded in the test case) or rely on a GUI unless that's what is being tested (allows tests to be run w/o an X-server). Basically write some documentation for the test system that includes how to use it and what's required of people when they add tests.
>
> 3) Write a test 'template' for people to use. This would define a test case and put TODO statements or something like it in place for people to fill in. More than one might be good for various classes of tests (maybe an image comparison template for testing agg drawing and a non-plot template for testing basic computations like transforms?).
>
> Some things we do on my project for our Python test systems:
>
> We put all unit tests in a 'test' directory inside the python package being tested. The disadvantage of this is that potentially large tests are inside the code to be delivered (though a nice delivery script can easily strip them out). The advantage of this is that it makes coverage checking easier. You can run the test case for a package and then check the coverage in the module w/o trying to figure out which things should be coverage checked or not. If you put the test cases in a different directory tree, then it's much harder to identify coverage sources. Though in our case we have 100's of python modules - in MPL's case, there is really just MPL, projections, backends, and numerix so maybe that's not too much of a problem.
>
> Automatic coverage isn't something that is a must-have, but it is really nice. I've found that it actually causes developers to write more tests because they can run the coverage and get a "score" that other people will see. It's also a good way to check a new submission to see if the developer has done basic testing of the code.
All of the above sounds reasonable and I don't have strong opinions on
any of it, so I will defer to those who write the initial framework
and tests.
> For our tests, we require that the test never print anything to the screen, clean up any of its output files (i.e. leave the directory in the same state it was before), and only report that the test passed or failed and if it failed, add some error message. The key thing is that the conditions for correctness are encoded into the test itself. We have a command line option that gets passed to the test cases to say "don't clean up" so that you can examine the output from a failing test case w/o modifying the test code. This option is really useful when an image comparison fails.
> We've wrapped the basic python unittest package. It's pretty simple and reasonably powerful. I doubt there is anything MPL would be doing that it can't handle. The auto-discovery of nose is nice but unnecessary in my opinion. As long as people follow a standard way of doing things, auto-discovery is fairly easy. Of course if you prefer nose and don't mind the additional tool requirement, that's fine too. Some things that are probably needed:
>
> - command line executable that runs the tests.
> - support flags for running only some tests
> - support flags for running only tests that don't need a GUI backend
> (require Agg?). This allows automated testing and visual testing to be
> combined. GUI tests could be placed in identified directories and then
> only run when requested since by their nature they require specific backends
> and user interaction.
> - nice report on test pass/fail status
> - hooks to add coverage checking and reporting in the future
> - test utilities
> - image comparison tools
> - ??? basically anything that helps w/ testing and could be common across
> test cases
>
> As a first cut, I would suggest something like this:
>
> .../test/run.py
> mplTest/
> test_unit/
> test_transform/
> test_...
>
> The run script would execute all/some of the tests. Any common test code would be put in the mplTest directory. Any directory named 'test_XXX' is for test cases where 'XXX' is some category name that can be used in the run script to run a subset of cases. Inside each test_XXX directory, one unittest class per file. The run script would find the .py files in the test_XXX directories, import them, find all the unittest classes, and run them. The run script also sets up sys.path so that the mplTest package is available.
>
> Links:
> http://docs.python.org/library/unittest.html
> http://somethingaboutorange.com/mrl/projects/nose/
> http://kbyanc.blogspot.com/2007/06/pythons-unittest-module-aint-that-bad.html
>
> coverage checking:
> http://nedbatchelder.com/code/modules/coverage.html
> http://darcs.idyll.org/~t/projects/figleaf/doc/
>
> Thoughts?
> Ted
> ps: looking at the current unit directory, it looks like at least one test (nose_tests) is using nose even though it's not supplied w/ MPL. Most of the tests do something and show a plot but the correct behavior is never written into the test.
My fault -- I wrote some tests to make sure all the different kwargs
variants were processed properly, but since we did not have a
"correctness of output" framework in place, punted on that part. I
think having coverage of the myriad ways of setting properties is of
some value.
On the issue of units (not unit testing but unit support which is
motivating your writing of unit test) I think we may need a new
approach. The current approach is to put unitized data into the
artists, and update the converted data at the artist layer. I don't
know that this is the proper design. For this approach to work, every
scalar and array quantity must support units at the artist layer, and
all the calculations that are done at the plotting layer (eg error
bar) to setup these artists must be careful to preserve unitized data
throughout. So it is burdensome on the artist layer and on the
plotting function layer.
The problem is compounded because most of the other developers are not
really aware of how to use the units interface, which I take
responsibility for because they have oft asked for a design document,
which I have yet to provide because I am unhappy with the design. So
new code tends to break functions that once had unit support. Which
is why we need unit tests ....
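As a rough sketch of the units interface being referred to here (matplotlib.units), with the Temperature type and converter purely illustrative::

    import matplotlib.units as munits

    class Temperature:
        def __init__(self, kelvin):
            self.kelvin = kelvin

    class TemperatureConverter(munits.ConversionInterface):
        @staticmethod
        def convert(value, unit, axis):
            # Turn unitized values into plain floats for the artists.
            if isinstance(value, Temperature):
                return value.kelvin
            return [v.kelvin for v in value]

        @staticmethod
        def axisinfo(unit, axis):
            return munits.AxisInfo(label='temperature (K)')

        @staticmethod
        def default_units(x, axis):
            return 'kelvin'

    # Registering the converter is what lets unitized data reach the artists.
    munits.registry[Temperature] = TemperatureConverter()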
I think everything might be easier if mpl had an intermediate class
layer PlotItem for plot types, eg XYPlot, BarChart, ErrorBar as we
already do for Legend. The plotting functions would instantiate these
objects with the input arguments and track unit data through the
reference to the axis. These plot objects would contain all the
artist primitives which would store their data in native floating
point, which would remove the burden on the artists from handling
units and put it all in the plot creation/update logic. The objects
would store references to all of the original inputs, and would update
the primitive artists on unit changes. The basic problem is that the
unitized data must live somewhere, and I am not sure that the low
level primitive artists are the best place for that -- it may be a
better idea to keep this data at the level of a PlotItem and let the
primitive artists handle already converted floating point data. This
is analogous to the current approach of passing transformed data to
the backends to make it easier to write new backends. I need to chew
on this some more.
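Purely to make the idea concrete, a hypothetical sketch of such a PlotItem; nothing like this exists in matplotlib, and the names and methods are invented::

    class PlotItem:
        """Hypothetical plot-level object that owns the unitized inputs."""

        def __init__(self, ax, x, y):
            self.ax = ax
            self.x_orig, self.y_orig = x, y      # keep unitized data here
            self.line, = ax.plot(self._floats(x, ax.xaxis),
                                 self._floats(y, ax.yaxis))

        @staticmethod
        def _floats(data, axis):
            # Convert once at the plot layer; the primitive artists only
            # ever see native floating point.
            return axis.convert_units(data)

        def on_unit_change(self):
            # Reconvert from the stored originals and push new floats down.
            self.line.set_data(self._floats(self.x_orig, self.ax.xaxis),
                               self._floats(self.y_orig, self.ax.yaxis))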
But this question aside, by all means fire away on creating the unit tests.
JDH
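To make the proposed test conventions concrete, a hypothetical nose-style test sketch (file and function names invented): Agg only, nothing printed, correctness asserted, and output cleaned up::

    # e.g. test_unit/test_basic_plot.py (hypothetical path)
    import os
    import tempfile

    import matplotlib
    matplotlib.use('Agg')            # never require a GUI or X server
    import matplotlib.pyplot as plt
    import numpy as np

    def test_line_roundtrip():
        fig, ax = plt.subplots()
        ax.plot([0, 1, 2], [0, 1, 4])
        # Correctness is encoded in the test, not printed to the screen.
        assert len(ax.lines) == 1
        np.testing.assert_allclose(ax.lines[0].get_ydata(), [0, 1, 4])

        # Write output to a temp file and clean up afterwards.
        fd, path = tempfile.mkstemp(suffix='.png')
        os.close(fd)
        try:
            fig.savefig(path)
            assert os.path.getsize(path) > 0
        finally:
            os.remove(path)
            plt.close(fig)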
From: Jouni K. S. <jk...@ik...> - 2009-01-06 08:39:30
Jouni K. Seppänen <jk...@ik...> writes:
> Or perhaps the user-visible object doesn't need to be the same PdfFile
> that is used internally
I ended up doing this, calling the object PdfPages. I also gave it a
savefig method so you can do pdfpages.savefig(figure) in addition to
figure.savefig(pdfpages, format='pdf').
-- 
Jouni K. Seppänen
http://www.iki.fi/jks
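A short sketch of the two call styles described above, using the PdfPages object from matplotlib's pdf backend::

    import matplotlib
    matplotlib.use('Agg')
    import matplotlib.pyplot as plt
    from matplotlib.backends.backend_pdf import PdfPages

    pdf = PdfPages('multipage.pdf')
    for power in (1, 2, 3):
        fig = plt.figure()
        plt.plot(range(10), [x ** power for x in range(10)])
        pdf.savefig(fig)                 # pdfpages.savefig(figure)
        # figure.savefig(pdf, format='pdf') is the equivalent spelling
        plt.close(fig)
    pdf.close()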
