Darren Dale wrote:

> Hi Andrew,
>
> On Wednesday 18 July 2007 4:00:05 pm Andrew Straw wrote:
>
>> I don't have time to root around for the cause right now, but the PDF backend isn't working. See the attachment for tracebacks generated with "python backend_driver.py pdf". This is with recent svn (3565).
>
> It looks like your errors are related to the changes I made to rcParams, but I can't reproduce the problem on my machine. backend_driver reports no problems here, pdf or otherwise. Would you try deleting your build and site-packages/matplotlib* directories, and building from scratch?

You're right -- it's a silly mistake on my end -- I had detritus left in site-packages. Apologies for the false alarm. I do try to be careful.

Hi Andrew,

On Wednesday 18 July 2007 4:00:05 pm Andrew Straw wrote:

> I don't have time to root around for the cause right now, but the PDF backend isn't working. See the attachment for tracebacks generated with "python backend_driver.py pdf". This is with recent svn (3565).

It looks like your errors are related to the changes I made to rcParams, but I can't reproduce the problem on my machine. backend_driver reports no problems here, pdf or otherwise. Would you try deleting your build and site-packages/matplotlib* directories, and building from scratch?

Darren

Hi,

I don't have time to root around for the cause right now, but the PDF backend isn't working. See the attachment for tracebacks generated with "python backend_driver.py pdf". This is with recent svn (3565).

-Andrew

> 3. Traits. We (Brian and I) have gone back and forth a lot on Traits, and we've come very close to just making them a dependency. The only real issue holding us back is that ipython so far has exactly *zero* extension code, which is a plus in terms of ease of installation/deployment. Having said that, as far as extension code is concerned, Traits is light and clean, and nowhere near the kinds of complexities that MPL deals with. But since anything is more than zero, it is a bit of an issue for us. We may tip over though, I'm just stating what our reasoning so far has been.
>
> In terms of Traits, point (2) above makes them even more attractive. The delegation aspect of Traits is a very appealing way of combining validation with additional action for the runtime modification of an object. For example:
>
>     ipython.color_scheme = "foo"
>
> If color_scheme were a Trait, the above could simply:
>
> a) validate that "foo" was acceptable as a value
> b) trigger the chain of other trait updates (dependent color schemes for exceptions, prompts, etc.)

At some level, though, configuration is a very different thing from an application's runtime API. While they may be related (by exposing common functionality), not everything that can be configured would appear in a runtime API, and vice versa. Also, some events that need to happen when an attribute is changed at runtime can't happen at config time, as the application might not be up and running yet.

Using Traits in the runtime API is an entirely different ballgame than simply using it for type validation in configuration. When using Traits for validation+configuration, the objects that inherit from HasTraits are simply bunch/dict-like objects that do type validation -- but they don't contain any application logic. The actual application logic is contained in classes that aren't traited themselves, but that consume traited config objects to configure themselves at startup.

If traited objects are exposed in a runtime API, all of a sudden the application logic moves to the traited classes themselves. Then the entire application (any object that needs config or has a runtime API) is built upon Traits at its core. This is very different from the previous case, where traited classes are used as a minor implementation detail of the config system. I am not saying that it is bad to build an application with Traits at its core, only that it is very different from the path that ipython and matplotlib have taken thus far. Also, it makes the commitment level to Traits much higher than if it is used merely as a component in the config system -- which could easily be swapped out if desired.

Brian

> All of this can obviously be done with python properties, but before I write Traits again, poorly (I'm no David Morrill), I'd rather ride Enthought's back.
>
> This approach via Traits (or something like them) also ensures that the work done in ensuring the consistency/robustness of an object's *runtime* configuration API becomes automatically useful to users, and it simplifies the init-time config API, since it gives us the option to defer more complicated tasks to runtime.
>
> So this is a summary of where we stand. It's surprising that the 'simple question' of configuring an application can turn out to be such a can of worms, but for us it has proven to be. Input/ideas from you all would be very welcome, as it's quite likely we've missed possible solutions.
> I'd love to get out of this discussion some ideas, clarity and ultimately a plan for how to proceed in the long haul. If in addition this means that ipython, mpl and others end up uniformizing further our approach to this problem, even better! Having our end users find familiar configuration mechanisms across their various libraries would only be a good thing in the long run.
>
> Cheers,
>
> Brian and f.
>
> ps - I debated on having this discussion on ipython-dev, but for now I'm going to not cross-post. The MPL team is attentive to the issue now, so I'd rather collect your wisdom here, and I'll take it upon myself to summarize our conclusions on ipython-dev later. I just want to avoid list cross-posting.

--
Brian E. Granger, Ph.D.
Research Scientist
Tech-X Corporation
phone: 720-974-1850
bgr...@tx...
ell...@gm...

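To make the distinction Brian draws concrete, here is a minimal sketch -- illustrative only, not IPython's actual code; the class and trait names are invented -- of Traits used purely as a validated config "bunch" that a plain, untraited class consumes at startup:

    from enthought.traits.api import HasTraits, Str, Int

    class ShellConfig(HasTraits):
        # pure data: type-validated settings, no application logic
        color_scheme = Str('Linux')
        cache_size = Int(1000)

    class Shell(object):
        # plain class holding the real application logic; it only consumes
        # the traited config object when it is constructed
        def __init__(self, config):
            self.color_scheme = config.color_scheme
            self.cache_size = config.cache_size

    shell = Shell(ShellConfig(color_scheme='LightBG'))
    # ShellConfig(color_scheme=42) would raise a TraitError at config time,
    # while Shell itself stays free of any Traits machinery.

In this arrangement Traits could be swapped out of the config layer without touching the application classes, which is exactly the lower level of commitment Brian is describing.
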
Hi,

I wrote (on -users, but have moved the discussion here to -devel):

>> I was wondering, though, whether there'd be any support for some work which tidied up the near-duplicate code in axes.py.

John Hunter replied:

> Certainly, but probably not using meta-classes.
> [...]
> I'm disinclined to use python black magic -- I find it makes the code harder to grok for the typical scientist, and these are our main developers.

Also, Darren Dale replied:

> It's true. That bit of code looks like Perl to me.

There's no need to be rude :-)

The generalised docstrings are a bit ugly because they're unavoidably filled with phrases like "set the x-axis limits to go from xmin to xmax", which need to be changed to "set the y-axis limits to go from ymin to ymax" for the other function. The %(axis_letter)s stuff is quite verbose, unfortunately, but it's the same as in things like

    kwargs set the Text properties. Valid properties are
    %(Text)s

This does make the docstring a bit harder to work with, but the alternative is to write it out twice, and then ensure the two near-copies are kept in sync as changes are made. Now of course this isn't impossible, but that kind of thing does add to the maintenance burden in my experience.

Keeping the two copies of the code in sync requires effort, too, and it's easy to miss things. E.g., set_xlim() includes

    self._process_unit_info(xdata=(xmin, xmax))

but set_ylim() doesn't --- which is correct? panx() and zoomx() include

    xmin, xmax = self.viewLim.intervalx().get_bounds()

but pany() and zoomy() have no analogous calls --- which is correct? The docstrings of set_xlabel() and set_ylabel() are formatted differently (obviously this is not a critical problem, but still), and the examples in axhline() and axvline()'s docstrings are slightly different. There was the bug with hlines() and vlines() that led to my query.

I'm not trying to be annoying in pointing out these things, just saying that although there is a bit of a learning curve in the 'only write the function once' approach, I've found that it does save trouble in the long run.

The metaclass itself is only really used to automate the process of creating the two specific functions from the one generic one. It could be left out, instead meaning that you'd write something like

    class Axes:
        [...]
        def set__AXISLETTER_scale(self, [...]):
            # [...]
            # do stuff
            # [...]
        set_xscale = make_specific_function(set__AXISLETTER_scale, X_TRAITS)
        set_yscale = make_specific_function(set__AXISLETTER_scale, Y_TRAITS)

The make_specific_function() function is a little bit hairy and there might well be a better way to do it. My current implementation fiddles with the bytecode of the general function, and while I think it works correctly, this kind of malarkey can certainly lead to bugs which are very difficult to unravel if the "behind the scenes" stuff (i.e., make_specific_function() in this case) has problems.

Without using something like make_specific_function(), you could do something more like John's suggestion:

    def set_xscale(self, value, basex=10, subsx=None):
        """
        [...] x docstring [...]
        """
        self._set_scale(self.X_TRAITS, value, basex, subsx)

    def set_yscale(self, value, basey=10, subsy=None):
        """
        [...] y docstring [...]
        """
        self._set_scale(self.Y_TRAITS, value, basey, subsy)

    def _set_scale(self, traits, value, base, subs):
        assert value.lower() in ('log', 'linear')
        if value == 'log':
            traits.axis.set_major_locator(mticker.LogLocator(base))
            traits.axis.set_major_formatter(mticker.LogFormatterMathtext(base))
            traits.axis.set_minor_locator(mticker.LogLocator(base, subs))
            traits.transfunc.set_type(mtrans.LOG10)
            min_val, max_val = traits.limfunc()
            if min(min_val, max_val) <= 0:
                self.autoscale_view()
        elif value == 'linear':
            traits.axis.set_major_locator(mticker.AutoLocator())
            traits.axis.set_major_formatter(mticker.ScalarFormatter())
            traits.axis.set_minor_locator(mticker.NullLocator())
            traits.axis.set_minor_formatter(mticker.NullFormatter())
            traits.transfunc.set_type(mtrans.IDENTITY)

where the 'traits' stuff allows you to pass around the 'limfunc = self.get_xlim' etc. all in one go. 'X_TRAITS' and 'Y_TRAITS' would be set up in __init__(). Maybe 'X_METHODS' and 'Y_METHODS' would be better names.

With the above approach, you do still end up with two near-copies of the docstring, but at least the actual code is all in one place.

Anyway. Just thought I'd make the suggestion. If there's interest I can tidy up what I've got and post it. Or a patch which takes the last-mentioned approach above.

Ben.

On Wed, Jul 18, 2007 at 01:49:19AM -0500, Robert Kern wrote:

> Gael Varoquaux wrote:
>
>> On Tue, Jul 17, 2007 at 04:54:49PM -1000, Eric Firing wrote:
>>
>>> The 2.0 branch is not new enough, though; we need the trunk.
>>
>> Nonononono. Believe me. Unless you want to live on the bleeding edge. All the QA is done on the branches. The trunk is for active development (some active development happens in the branches, like for mayavi2, so far). It is not even guaranteed to work (it does, most of the time).
>
> Well, I do suggest that matplotlib start using Traits 3 rather than Traits 2. Otherwise, they will need to switch later. Also, we've tried to minimize the dependencies for Traits 3, but less so for Traits 2.

Well, if _you_ say this, then I'll trust you. Especially since there are some really nifty features in Traits 3.

Cheers,

Gaël

Gael Varoquaux wrote:

> On Tue, Jul 17, 2007 at 04:54:49PM -1000, Eric Firing wrote:
>
>> The 2.0 branch is not new enough, though; we need the trunk.
>
> Nonononono. Believe me. Unless you want to live on the bleeding edge. All the QA is done on the branches. The trunk is for active development (some active development happens in the branches, like for mayavi2, so far). It is not even guaranteed to work (it does, most of the time).

Well, I do suggest that matplotlib start using Traits 3 rather than Traits 2. Otherwise, they will need to switch later. Also, we've tried to minimize the dependencies for Traits 3, but less so for Traits 2. Another way to put Eric's suggestion is, "We need Traits 3," rather than, "We should always use the trunk." matplotlib might want to aim for a released version of Traits 3.

>> (I tried the 2.0 branch, but it turned out to depend on enthought.util, which depended on enthought.somethingelse... so I stopped. Also it was using numerix; it is not numpified.)
>
> Traits depends on enthought.etsconfig and enthought.util. Once traits is on pypi, these dependencies should install automatically. I am wondering if there is a possibility of shipping a "fat" egg that provides the dependencies, just in case the user does not have internet access, and that first checks pypi before using the local dependencies.
>
> I am very surprised about the comment on traits numpification. I just tried out my local copy, and it is clearly numpified. I didn't do anything special to install it, just built from source. What makes you think the branches are not numpified?

The Array trait has been updated in Traits 3 to use numpy idioms (dtypes instead of typecodes, mostly). Traits 2 happily uses numpy for the Array trait, but you're stuck using icky typecodes that you need to import from numpy.oldnumeric.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth."
-- Umberto Eco

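For what it's worth, a rough sketch of the difference Robert describes. The exact keyword spellings of the Array trait in each Traits generation are stated here from memory and should be treated as an assumption, not as documented API:

    from enthought.traits.api import HasTraits, Array

    class Samples(HasTraits):
        # Traits 3 style: an ordinary numpy dtype name
        data = Array(dtype='float64')

    # Under Traits 2 the same trait would instead be spelled with a
    # typecode, something like Array(typecode='d'), with the typecode
    # constants living in numpy.oldnumeric.
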
On Tue, Jul 17, 2007 at 04:54:49PM -1000, Eric Firing wrote:

> The 2.0 branch is not new enough, though; we need the trunk.

Nonononono. Believe me. Unless you want to live on the bleeding edge. All the QA is done on the branches. The trunk is for active development (some active development happens in the branches, like for mayavi2, so far). It is not even guaranteed to work (it does, most of the time).

> (I tried the 2.0 branch, but it turned out to depend on enthought.util, which depended on enthought.somethingelse... so I stopped. Also it was using numerix; it is not numpified.)

Traits depends on enthought.etsconfig and enthought.util. Once traits is on pypi, these dependencies should install automatically. I am wondering if there is a possibility of shipping a "fat" egg that provides the dependencies, just in case the user does not have internet access, and that first checks pypi before using the local dependencies.

I am very surprised about the comment on traits numpification. I just tried out my local copy, and it is clearly numpified. I didn't do anything special to install it, just built from source. What makes you think the branches are not numpified?

Cheers,

Gaël

Bryce Hendrix wrote:

> John, you're trying to check out the code from Trac, not svn :) Try this URL:
>
> https://svn.enthought.com/svn/enthought/branches/enthought.traits_2.0

I just did this. There is still an authentication problem, but manually accepting the cert makes it work.

The 2.0 branch is not new enough, though; we need the trunk. (I tried the 2.0 branch, but it turned out to depend on enthought.util, which depended on enthought.somethingelse... so I stopped. Also it was using numerix; it is not numpified.)

Then I checked out the trunk:

https://svn.enthought.com/svn/enthought/trunk/enthought.traits/

and ran "python setup.py build" and "python setup.py install --prefix=/usr/local". No glitches so far; but when I tried to run tests from the tests directory, it failed. It was looking for ui components, even for non-ui tests. So I commented out the ui parts of api.py, and that seems to have done the trick.

Eric

Hi all,

this is an important discussion also for us (ipython), so I'll go into some detail on things. It would be great if out of this we got something that both ipython and matplotlib could reuse for the long haul, though I'm not sure (in a sense, ipython has some nastier requirements that mpl may not need to worry about).

On 7/17/07, John Hunter <jd...@gm...> wrote:

> On 7/17/07, Darren Dale <dd...@co...> wrote:
>
>> I'm really impressed with how readable and well organized the code is in ipython1. It looks like their approach to configuration has been carefully considered. Any chance we can follow their lead? It looks like it would be a
>
> I haven't looked at it closely, but I am willing to trust Fernando on this for the most part. I know he has put a lot of thought into it, and it's something we've talked about for years.

Thanks for the compliments. We have put much thought into it, and Brian deserves all the implementation credit. But as you'll see in a moment, all is not quite well; hopefully after this discussion and feedback from you guys we'll find a good solution so we can (finally!) consider this problem fixed and move on to other things.

> I am not too fond of the dictionary usage here:
>
>     controllerConfig.listenForEnginesOn['ip'] = '127.0.0.1'
>     controllerConfig.listenForEnginesOn['port'] = 20000
>
> I prefer
>
>     controllerConfig.listenForEnginesOn.ip = '127.0.0.1'
>     controllerConfig.listenForEnginesOn.port = 20000
>
> Since dicts and attrs are intimately linked, e.g. with something like Bunch, this should be a minor tweak. Fernando, why did you prefer dict semantics? And are you happy with the state of your config system in ipython1?

Let's not worry about this for now: I agree with you, but it's an implementation detail that is trivial to fix with a light wrapper, which already basically exists in IPython.ipstruct. The real problems lie elsewhere.

Brian and I spent a lot of time discussing this today, and here's the summary of the whole thing. Again, some of these issues may not affect mpl nearly as seriously as they do ipython.

Having config files be pure python has some nice benefits:

- familiar syntax for users
- the ability for users to put in complex logic when declaring their configuration (like making decisions based on OS, hostname, etc.)
- if developers want, wrapping the underlying objects in something like Traits provides automatic validation mechanisms for configuration, which is important.

The security issue is for *me* moot: ipython is a full-access Python system where people are *expected* to run code. So I don't care about letting them run real code at config time. But for other projects, the story is completely different, obviously. I suspect MPL falls into ipython's category here, though.

So what is the problem? The main one is that we have a real difficult time with circular import dependencies and init-time logic. Basically, you need these configuration objects to exist at startup for other objects to be built, but the configuration *itself* may need to import various things if it is to work as real Python code. In IPython (the dev branches), for example, the user-visible system is assembled out of multiple objects, which themselves need to build their own internal sub-objects.
Some of these simply require value parameters at init time, but in some cases even which *class* to use is tweakable at init time (for example, so users can choose amongst various classes that implement different network transport protocols but with identical APIs as far as IPython goes). While in any one case one can probably disentangle the various dependency chains that arise, it's a major pain to have to deal with this constantly. The fact that the problem is a recurring one is to us an indicator of a design problem that we should address differently.

There are other issues as well, besides circular imports:

A. Initialization-time modification of live objects: while config objects are obviously meant to be built *before* the 'real' objects (which in turn use the config ones to assemble themselves), there are cases where one would like to deal with the 'real thing'. For example, in today's IPython (not the dev branch, but the one y'all run) you can add things like magics at init time. This is only possible because when the rc file is parsed, the real object is already up and alive as __IPYTHON__ (also visible via ipapi.get()). Obviously this is a hack, and a cleaner design would probably separate the declaration of code objects (say magics) that are meant to be fed into the live instance as part of the construction of the config object, and then would have the real ipython load up its tweaks 100% from this config object. But since ipython is also meant to be *runtime* modifiable, it feels natural to let users access such an API for their own tweaks, yet there is a conflict with doing this purely on a config object that is basically just a fancy 'data bag'.

B. Automatic creation of default config files: with pure python, this is just hard to do, since you'd be in the business of auto-generating valid code from the structure of an object. For a pure 'class Bunch: pass' type thing this may not be *too* hard, as long as all allowed values are such that eval(repr(foo)) == foo, which isn't really always the case.

C. Complexity: even for Brian, who wrote the config code, it's hard to figure out where all the details go, since declaring new flags typically ends up requiring fishing inside the config classes themselves. A more plain-text, purely declarative file would summarize these things in a way that would probably be easier for both developers and end users.

Given all this, we are currently looking at going back to a mixture of text-based config and real code, as a two-stage process. The ConfigObj library:

http://www.voidspace.org.uk/python/configobj.html

looks like a nice, self-contained step above hand-parsing rc files or Python's rather primitive ConfigParser module. It has a lot of good features, is mature, and is small enough that one can ship it inside the project (license is BSD).

Our current plan is something along these lines:

1. Expose for all configuration objects the part that is really just a key=value API over ConfigObj. This would dispense with all the circular import problems, and let users specify the most common things they want to tweak with a really simple syntax (that of .ini files). We would then load these and build all necessary objects.

2. Make sure that the part of any given object that we mean to expose for *runtime* modification (say adding magics to a running ipython, for example) is easily available. This would let users modify or extend those aspects of the system that can't just be easily declared as simple key=value settings.
The mechanism for this would simply be to have a filename in each object's ConfigObj section, like:

    [SomeObject]
    post_init_file = "my_post_init.py"

This file would be run via execfile() *once SomeObject* were guaranteed to be up and running. This would then allow users to have code that manipulates SomeObject programmatically, via its runtime API.

All this conversation brings us to...

3. Traits. We (Brian and I) have gone back and forth a lot on Traits, and we've come very close to just making them a dependency. The only real issue holding us back is that ipython so far has exactly *zero* extension code, which is a plus in terms of ease of installation/deployment. Having said that, as far as extension code is concerned, Traits is light and clean, and nowhere near the kinds of complexities that MPL deals with. But since anything is more than zero, it is a bit of an issue for us. We may tip over though, I'm just stating what our reasoning so far has been.

In terms of Traits, point (2) above makes them even more attractive. The delegation aspect of Traits is a very appealing way of combining validation with additional action for the runtime modification of an object. For example:

    ipython.color_scheme = "foo"

If color_scheme were a Trait, the above could simply:

a) validate that "foo" was acceptable as a value
b) trigger the chain of other trait updates (dependent color schemes for exceptions, prompts, etc.)

All of this can obviously be done with python properties, but before I write Traits again, poorly (I'm no David Morrill), I'd rather ride Enthought's back.

This approach via Traits (or something like them) also ensures that the work done in ensuring the consistency/robustness of an object's *runtime* configuration API becomes automatically useful to users, and it simplifies the init-time config API, since it gives us the option to defer more complicated tasks to runtime.

So this is a summary of where we stand. It's surprising that the 'simple question' of configuring an application can turn out to be such a can of worms, but for us it has proven to be. Input/ideas from you all would be very welcome, as it's quite likely we've missed possible solutions.

I'd love to get out of this discussion some ideas, clarity and ultimately a plan for how to proceed in the long haul. If in addition this means that ipython, mpl and others end up uniformizing further our approach to this problem, even better! Having our end users find familiar configuration mechanisms across their various libraries would only be a good thing in the long run.

Cheers,

Brian and f.

ps - I debated on having this discussion on ipython-dev, but for now I'm going to not cross-post. The MPL team is attentive to the issue now, so I'd rather collect your wisdom here, and I'll take it upon myself to summarize our conclusions on ipython-dev later. I just want to avoid list cross-posting.

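For concreteness, here is a minimal sketch of the two-stage scheme described above. The file name, the SomeObject class, its attributes and the name passed into the post-init namespace are all invented for illustration; this is not IPython's implementation:

    from configobj import ConfigObj

    class SomeObject(object):
        def __init__(self, ip='127.0.0.1', port=20000):
            self.ip = ip
            self.port = port

    # Stage 1: plain key=value configuration read from an .ini-style file.
    cfg = ConfigObj('myapp.ini')
    section = cfg.get('SomeObject', {})
    obj = SomeObject(ip=section.get('ip', '127.0.0.1'),
                     port=int(section.get('port', 20000)))

    # Stage 2: once the real object is up and running, execute the user's
    # post-init script against it, exposing the full runtime API.
    post_init = section.get('post_init_file')
    if post_init:
        exec(open(post_init).read(), {'some_object': obj})

The circular-import problem goes away because stage 1 needs nothing but strings, and any code that genuinely needs the live object is deferred to stage 2.
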
On 7/17/07, Gael Varoquaux <gae...@no...> wrote:

> I have written a small script that uses pylab to retrieve the data for the MPL colormaps. I am thinking of checking in both the script and the resulting data in Mayavi2 (FBSD licensed). Can I do such a thing (stealing colormaps)?

You are welcome to do this, but you might be better served stealing our colormapping infrastructure and data directly (_cm.py, cm.py and colors.py) rather than trying to extract it via pylab. Some of our colormap data is borrowed from other sources (colorbrewer, yorick). They have BSD-compatible licenses, but with some restrictions requiring that you distribute their licenses. matplotlib does so in the licenses subdir of the svn repo. Search _cm.py for "license" for more details.

JDH

I have written a small script that uses pylab to retrieve the data for the MPL colormaps. I am thinking of checking in both the script and the resulting data in Mayavi2 (FBSD licensed). Can I do such a thing (stealing colormaps)?

Cheers

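For reference, the kind of extraction Gael describes can be done in a few lines against matplotlib's colormap API rather than via pylab. This is a sketch only -- dump_colormap is a made-up name and this is not the actual script:

    import numpy as np
    import matplotlib.cm as cm

    def dump_colormap(name, n=256):
        # sample the named colormap at n evenly spaced points and drop the
        # alpha channel, leaving an (n, 3) array of RGB rows
        cmap = cm.get_cmap(name)
        return cmap(np.linspace(0.0, 1.0, n))[:, :3]

    if __name__ == '__main__':
        for name in ('jet', 'gray', 'hot'):
            print(name, dump_colormap(name).shape)

As John notes, taking the tabulated data in _cm.py directly (together with the relevant licenses) avoids the sampling step entirely.
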