matplotlib-devel — matplotlib developers

From: Fernando P. <fpe...@gm...> - 2009-12-31 22:00:21
On Thu, Dec 31, 2009 at 4:54 AM, Darren Dale <dsd...@gm...> wrote:
> I have been resistant to committing this patch because (in my opinion)
> mpl should not have to provide workarounds for bugs in package X on OS
> Y, distribution Z. I think this particular issue was fixed when
> PyQt4-4.6.2 was released. But it's time to get practical, I suppose.
> The patch looks fine, I just checked it into the trunk.
Thanks! As the zen goes, practicality beats purity :) I understand
your reluctance though, it's annoying to pepper mpl's code with this
kind of junk.
Happy New Year!
f
From: Darren D. <dsd...@gm...> - 2009-12-31 12:55:12
On Wed, Dec 30, 2009 at 11:11 PM, Fernando Perez <fpe...@gm...> wrote:
> Howdy,
>
> On Sat, Nov 7, 2009 at 12:30 PM, Darren Dale <dsd...@gm...> wrote:
>> Me too. And thank you for posting the report and a workaround.
>
> Quick question: would it be worth adding this monkeypatch to mpl
> proper? Right now, the qt4 backend is effectively unusable out of the
> box in distros like Karmic.
I have been resistant to committing this patch because (in my opinion)
mpl should not have to provide workarounds for bugs in package X on OS
Y, distribution Z. I think this particular issue was fixed when
PyQt4-4.6.2 was released. But it's time to get practical, I suppose.
The patch looks fine, I just checked it into the trunk.
Darren
From: Fernando P. <fpe...@gm...> - 2009-12-31 04:40:53
Howdy,
are there fundamental reasons to keep this code around in
backends/__init__.py:use()?
    if 'matplotlib.backends' in sys.modules:
        if warn: warnings.warn(_use_error_msg)
        return
I am now testing the new IPython gui stuff, and if I comment that out
and call pyplot.switch_backend() when I load mpl (along with the qt
patch I sent earlier), I can hop around backends, at least with Qt4
and Wx:
In [10]: %pylab qt
Welcome to pylab, a matplotlib-based Python environment.
Backend in use: Qt4Agg
For more information, type 'help(pylab)'.
In [11]: run simpleplot.py
In [13]: %pylab wx
Welcome to pylab, a matplotlib-based Python environment.
Backend in use: WXAgg
For more information, type 'help(pylab)'.
In [16]: run simpleplot.py
In [17]: figure()
Out[17]: <matplotlib.figure.Figure object at 0xa311b0c>
In [18]: plot(sin(linspace(0,2*pi,200)**2))
Out[18]: [<matplotlib.lines.Line2D object at 0xa49862c>]
In [19]: %pylab qt
Welcome to pylab, a matplotlib-based Python environment.
Backend in use: Qt4Agg
For more information, type 'help(pylab)'.
In [20]: run simpleplot.py
In [21]: figure()
Out[21]: <matplotlib.figure.Figure object at 0xa64fb2c>
In [22]: plot(sin(linspace(0,2*pi,200)))
Out[22]: [<matplotlib.lines.Line2D object at 0xa67d26c>]
etc...
I see lockups trying to throw Tk or Gtk in the mix, but being able to
switch from Wx to Qt and back would be nice, especially since it would
allow using Mayavi and other enthought apps together with Qt-based
tools. mpl closes open windows on backend switch, but at least you
can go back and forth, which is better than nothing and can be useful
for debugging.
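For reference, the same hop can be driven from plain python once the
guard above is relaxed - a minimal sketch (it assumes the patched
use()/switch_backend behavior described here, not stock mpl):
import matplotlib
matplotlib.use('Qt4Agg')
import matplotlib.pyplot as plt
plt.plot([0, 1, 2], [0, 1, 4])
plt.show()  # blocks until the Qt window is closed
# switch_backend() closes any open figures before installing the new
# backend, as noted above
plt.switch_backend('WXAgg')
plt.plot([0, 1, 2], [0, 1, 4])
plt.show()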
This is still very early/experimental ipython code, I just want to
start getting a feel for the mpl-side of the equation...
Cheers,
f
From: Fernando P. <fpe...@gm...> - 2009-12-31 04:11:16
Attachments: mpl_qt4.diff
Howdy,
On Sat, Nov 7, 2009 at 12:30 PM, Darren Dale <dsd...@gm...> wrote:
> Me too. And thank you for posting the report and a workaround.
Quick question: would it be worth adding this monkeypatch to mpl
proper? Right now, the qt4 backend is effectively unusable out of the
box in distros like Karmic. Which is a bummer, because with the
ipython sitting on my laptop, one can now load 'pylab' at any time
during a session:
maqroll[scratch]> ip
Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15)
Type "copyright", "credits" or "license" for more information.
IPython 0.11.bzr.r1219 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object'. ?object also works, ?? prints more.
In [1]: import sys
In [2]: 'matplotlib' in sys.modules
Out[2]: False
In [3]: %pylab wx
Activating matplotlib with backend: WXAgg
 Welcome to pylab, a matplotlib-based Python environment.
 For more information, type 'help(pylab)'.
Switching IPython gui support to: wx True
In [4]: 'matplotlib' in sys.modules
Out[4]: True
In [5]: plot(sin(linspace(0,2*pi,200)))
Out[5]: [<matplotlib.lines.Line2D object at 0xae0dccc>]
This is starting to look very promising, but unfortunately:
- right now we don't have gui switching support for Qt3 at all in
ipython. Help is welcome, but I have no clue if it's easy/hard or
even needed much anymore...
- qt4 is unusable with the system's qt/pyqt...
So perhaps a local patch would be worth it, no? I can confirm that
with the attached patch, the new ipython support works:
In [1]: %pylab qt
Activating matplotlib with backend: Qt4Agg
 Welcome to pylab, a matplotlib-based Python environment.
 For more information, type 'help(pylab)'.
In [2]: run simpleplot.py
In [3]: close('all')
In [4]: run simpleplot.py
whereas before, I'd get the same nasty error mentioned above.
The patch now has no run-time impact (I modified Pierre's code a bit
so the check is done only once), but I'm not about to commit something
in the Qt backend without someone else's eyes, especially Darren's :)
Cheers,
f
On Wed, Dec 30, 2009 at 11:16 AM, David Cournapeau <cou...@gm...> wrote:
> On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale <dsd...@gm...> wrote:
>> Hi David,
>>
>> On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau <cou...@gm...> wrote:
>>> Executable: grin
>>>  module: grin
>>>  function: grin_main
>>>
>>> Executable: grind
>>>  module: grin
>>>  function: grind_main
>>
>> Have you thought at all about operations that are currently performed
>> by post-installation scripts? For example, it might be desirable for
>> the ipython or MayaVi windows installers to create a folder in the
>> Start menu that contains links to the executable and the
>> documentation. This is probably a secondary issue at this point in
>> toydist's development, but I think it is an important feature in the
>> long run.
>>
>> Also, have you considered support for package extras (package variants
>> in Ports, allowing you to specify features that pull in additional
>> dependencies like traits[qt4])? Enthought makes good use of them in
>> ETS, and I think they would be worth keeping.
>
> Does this example cover what you have in mind? I am not so familiar
> with this feature of setuptools:
>
> Name: hello
> Version: 1.0
>
> Library:
>  BuildRequires: paver, sphinx, numpy
>  if os(windows)
>    BuildRequires: pywin32
>  Packages:
>    hello
>  Extension: hello._bar
>    sources:
>      src/hellomodule.c
>  if os(linux)
>    Extension: hello._linux_backend
>      sources:
>        src/linbackend.c
>
> Note that instead of os(os_name), you can use flag(flag_name), where
> flags are boolean variables which can be user-defined:
>
> http://github.com/cournape/toydist/blob/master/examples/simples/conditional/toysetup.info
>
> http://github.com/cournape/toydist/blob/master/examples/var_example/toysetup.info
I should defer to the description of extras in the setuptools
documentation. It is only a few paragraphs long:
http://peak.telecommunity.com/DevCenter/setuptools#declaring-extras-optional-features-with-their-own-dependencies
Darren
On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale <dsd...@gm...> wrote:
> Hi David,
>
> On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau <cou...@gm...> wrote:
>> Executable: grin
>>  module: grin
>>  function: grin_main
>>
>> Executable: grind
>>  module: grin
>>  function: grind_main
>
> Have you thought at all about operations that are currently performed
> by post-installation scripts? For example, it might be desirable for
> the ipython or MayaVi windows installers to create a folder in the
> Start menu that contains links to the executable and the
> documentation. This is probably a secondary issue at this point in
> toydist's development, but I think it is an important feature in the
> long run.
>
> Also, have you considered support for package extras (package variants
> in Ports, allowing you to specify features that pull in additional
> dependencies like traits[qt4])? Enthought makes good use of them in
> ETS, and I think they would be worth keeping.
Does this example cover what you have in mind? I am not so familiar
with this feature of setuptools:
Name: hello
Version: 1.0
Library:
  BuildRequires: paver, sphinx, numpy
  if os(windows)
    BuildRequires: pywin32
  Packages:
    hello
  Extension: hello._bar
    sources:
      src/hellomodule.c
  if os(linux)
    Extension: hello._linux_backend
      sources:
        src/linbackend.c
Note that instead of os(os_name), you can use flag(flag_name), where
flags are boolean variables which can be user-defined:
http://github.com/cournape/toydist/blob/master/examples/simples/conditional/toysetup.info
http://github.com/cournape/toydist/blob/master/examples/var_example/toysetup.info
David
On Wed, Dec 30, 2009 at 11:26 PM, Darren Dale <dsd...@gm...> wrote:
> Hi David,
>
> On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau <cou...@gm...> wrote:
>> Executable: grin
>>  module: grin
>>  function: grin_main
>>
>> Executable: grind
>>  module: grin
>>  function: grind_main
>
> Have you thought at all about operations that are currently performed
> by post-installation scripts? For example, it might be desirable for
> the ipython or MayaVi windows installers to create a folder in the
> Start menu that contains links the the executable and the
> documentation. This is probably a secondary issue at this point in
> toydist's development, but I think it is an important feature in the
> long run.
The main problem I see with post hooks is how to support them in
installers. For example, you would have a function which does the
post-install work, and declare it as a post-install hook through a
decorator:
@hook.post_install
def myfunc():
    pass
The main issue is how to communicate data - that's a major issue in
every build system I know of (scons' solution is ugly: every function
takes an env argument, which is basically a giant global variable).
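For example, a sketch of passing that data explicitly instead
(everything below is hypothetical, not existing toydist API):
_post_install_hooks = []
def post_install(func):
    # hypothetical registration decorator
    _post_install_hooks.append(func)
    return func
@post_install
def create_start_menu_entry(ctx):
    # ctx is an explicit mapping of install-time data (paths, etc...),
    # rather than a scons-style global env
    print("would link %s into the Start menu" % ctx["scripts_dir"])
def run_post_install(ctx):
    for hook in _post_install_hooks:
        hook(ctx)
run_post_install({"scripts_dir": "/usr/local/bin"})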
>
> Also, have you considered support for package extras (package variants
> in Ports, allowing you to specify features that pull in additional
> dependencies like traits[qt4])? Enthought makes good use of them in
> ETS, and I think they would be worth keeping.
The declarative format may declare flags as follows:
Flag: c_exts
  Description: Build (optional) C extensions
  Default: false
Library:
  if flag(c_exts):
    Extension: foo
      sources: foo.c
And this is automatically available at configure stage. It can be used
anywhere in Library, not just for Extension (you could use is within
the Requires section). I am considering adding more than Flag (flag
are boolean), if it does not make the format too complex. The use case
I have in mind is something like:
toydist configure --with-lapack-dir=/opt/mkl/lib
which I have wished to implement for numpy for ages.
David
On Wed, Dec 30, 2009 at 8:15 PM, René Dudfield <re...@gm...> wrote:
>
> Sitting down with Tarek(who is one of the current distutils
> maintainers) in Berlin we had a little discussion about packaging over
> pizza and beer... and he was quite mindful of OS packagers problems
> and issues.
This has been said many times on distutils-sig, but no concrete action
has ever been taken in that direction. For example, toydist already
supports the FHS better than distutils, and is more flexible. I have
tried several times to explain why this matters on distutils-sig, but
you then have the peanut gallery interfering with unrelated nonsense
(like it would break windows, as if it could not be implemented
independently).
Also, retrofitting support for --*dir in distutils would be *very*
difficult, unless you are ready to break backward compatibility (there
are 6 ways to install data files, and each of them has some corner
cases, for example - it is a real pain to support this correctly in
the convert command of toydist, and you simply cannot recover missing
information to comply with the FHS in every case).
>
> However these systems were developed by the zope/plone/web crowd, so
> they are naturally going to be thinking a lot about zope/plone/web
> issues.
Agreed - it is natural that they care about their problems first,
that's how it works in open source. What I find difficult is when our
concerns are constantly dismissed by people who have no clue about our
issues - and later claim we are not cooperative.
> Debian, and ubuntu packages for them are mostly useless
> because of the age.
That's where the build farm comes in. This is a known issue; that's why
the build service or PPA exist in the first place.
> I think
> perhaps if toydist included something like stdeb as not an extension
> to distutils, but a standalone tool (like toydist) there would be less
> problems with it.
That's pretty much how I intend to do things. Currently, in toydist,
you can do something like:
from toydist.core import PackageDescription
pkg = PackageDescription.from_file("toysetup.info")
# pkg now gives you access to metadata, as well as extensions, python
# modules, etc...
I think this gives almost everything that is needed to implement a
sdist_dsc command. Contrary to the Distribution class in distutils,
this class would not need to be subclassed/monkey-patched by
extensions, as it only cares about the description, and is 100%
decoupled from the build part.
> yes, I have also battled with distutils over the years. However it is
> simpler than autotools (for me... maybe distutils has perverted my
> fragile mind), and works on more platforms for python than any other
> current system.
Autotools certainly works on more platforms (windows notwithstanding),
if only because python itself is built with autoconf. Distutils
simplicity is a trap: it is simpler only if you restrict to what
distutils gives you. Don't get me wrong, autotools are horrible, but I
have never encountered cases where I had to spend hours to do trivial
tasks, as has been the case with distutils. Numpy build system would
be much, much easier to implement through autotools, and would be much
more reliable.
> However
> distutils has had more tests and testing systems added, so that
> refactoring/cleaning up of distutils can happen more so.
You can't refactor distutils without breaking backward compatibility,
because distutils has no API. The whole implementation is the API.
That's one of the fundamental disagreements I and other scipy devs have
with current contributors on distutils-sig: the starting point
(distutils) and the goal are so far away from each other that getting
there step by step is hopeless.
> I agree with many things in that post. Except your conclusion on
> multiple versions of packages in isolation. Package isolation is like
> processes, and package sharing is like threads - and threads are evil!
I don't find the comparison very helpful (for one, you can share data
between processes, whereas virtualenvs cannot see each other AFAIK).
> Science is supposed to allow repeatability. Without the same versions
> of packages, repeating experiments is harder. This is a big problem
> in science that multiple versions of packages in _isolation_ can help
> get to a solution to the repeatability problem.
I don't think that's true - at least it does not reflect my experience
at all. But then, I don't pretend to have an extensive experience
either. From most of my discussions at scipy conferences, I know most
people are dissatisfied with the current python solutions.
>
>>> Plenty of good work is going on with python packaging.
>>
>> That's the opposite of my experience. What I care about is:
>> - tools which are hackable and easily extensible
>> - robust install/uninstall
>> - real, DAG-based build system
>> - explicit and repeatability
>>
>> None of this is supported by the tools, and the current directions go
>> even further away. When I have to explain at length why the
>> command-based design of distutils is a nightmare to work with, I don't
>> feel very confident that the current maintainers are aware of the
>> issues, for example. It shows that they never had to extend distutils
>> much.
>>
>
> All agreed! I'd add to the list parallel builds/tests (make -j 16),
> and outputting to native build systems. eg, xcode, msvc projects, and
> makefiles.
Yep - I got quite far with numscons already. It cannot be used as a
general solution, but as a dev tool for my own work on numpy/scipy, it
has been a huge time saver, especially given the top notch dependency
tracking system. It supports parallel builds, and I can do full debug
builds of scipy in under a minute on a fast machine. That's a real
productivity booster.
>
> How will you handle toydist extensions so that multiple extensions do
> not have problems with each other? I don't think this is possible
> without isolation, and even then it's still a problem.
By doing it mostly the Unix way, through protocols and file format,
not through API. Good API is hard, but for build tools, it is much,
much harder. When talking about extensions, I mostly think about the
following:
 - adding a new compiler/new platform
 - adding a new installer format
 - adding a new kind of source file/target (say ctypes extension,
cython compilation, etc...)
Instead of using classes for compilers/tools, I am considering using
python modules for each tool, and each tool would be registered
through a source file extension (associate a function to ".c", for
example). Actual compilation steps would be done through strings ("$CC
...."). The system would be kept simple, because for complex projects,
one should forward all this to a real build system (like waf or
scons).
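Concretely, the registration could look something like this (purely
illustrative - none of this is actual toydist API):
import subprocess
_tools = {}
def register_tool(extension, command_template):
    # associate a source extension with a command string like "$CC -c"
    _tools[extension] = command_template
def compile_source(path, env):
    command = _tools[path[path.rfind('.'):]]
    # substitute $VAR placeholders from an explicit env mapping
    for name, value in env.items():
        command = command.replace('$' + name, value)
    subprocess.check_call(command.split() + [path])
register_tool('.c', '$CC -c')
#compile_source('src/hellomodule.c', {'CC': 'gcc'})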
There is also the problem of post/pre hooks, adding new steps in
toymaker: I have not thought much about this, but I like waf's way of
doing it, and it may be applicable. In waf, the main script (called
wscript) defines a function for each build step:
def configure():
    pass
def build():
    pass
....
And undefined functions are considered unmodified.
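A sketch of how that lookup could work (illustrative only):
DEFAULT_STEPS = ('configure', 'build')
def _default_step(name):
    def step():
        print("default %s step" % name)
    return step
def load_steps(script_path):
    # run the user script, then fall back to a default for any step
    # it does not define
    namespace = {}
    with open(script_path) as script:
        exec(script.read(), namespace)
    return dict((name, namespace.get(name, _default_step(name)))
                for name in DEFAULT_STEPS)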
What I know for sure is that the distutils-way of extending through
inheritance does not work at all. As soon as two extensions subclass
the same base class, you're done.
>
> Yeah, cool. Many other projects have their own servers too.
> pygame.org, plone, etc etc, which meet their own needs. Patches are
> accepted for pypi btw.
Yes, but how long before the patch is accepted and deployed?
> What type of enforcements of meta data, and how would they help? I
> imagine this could be done in a number of ways to pypi.
> - a distutils command extension that people could use.
> - change pypi source code.
> - check the metadata for certain packages, then email their authors
> telling them about issues.
First, packages with malformed metadata would be rejected, and it
would not be possible to register a package without uploading the
sources. I simply do not want to publish a package which does not even
have a name or a version, for example.
The current way of doing things in pypi is insane if you ask me. For
example, if you want to install a package with its dependencies, you
need to download the package, which may be in another website, and you
need to execute setup.py just to know its dependencies. This has so
many failure modes, I don't understand how this can seriously be
considered, really. Every other system has an index to do this kind of
thing (curiously, both EPD and pypm have an index as well AFAIK).
Again, a typical example of NIH, with inferior solutions implemented
in the case of python.
>
> yeah, cool. That would let you develop things incrementally too, and
> still have toydist be useful for the whole development period until it
> catches up with the features of distutils needed.
Initially, toydist was started to show that writing something
compatible with distutils without being tied to distutils was
possible.
> If you execute build tools on arbitrary code, then arbitrary code
> execution is easy for someone who wants to do bad things.
Well, you could surely exploit build tool bugs. But at least, I can
query metadata and packages features in a safe way - and this is very
useful already (cf my points about being able to query packages
metadata in one "query").
> and many times I still
> get errors on different platforms, despite many years of multi
> platform coding.
Yes, that's a difficult process. We cannot fix this - but having
automatically built (and hopefully tested) installers on major
platforms would be a significant step in the right direction. That's
one of the killer features of CRAN (whenever you submit a package to
CRAN, a windows installer is built, and tested).
cheers,
David
Hi David,
On Mon, Dec 28, 2009 at 9:03 AM, David Cournapeau <cou...@gm...> wrote:
> Executable: grin
>  module: grin
>  function: grin_main
>
> Executable: grind
>  module: grin
>  function: grind_main
Have you thought at all about operations that are currently performed
by post-installation scripts? For example, it might be desirable for
the ipython or MayaVi windows installers to create a folder in the
Start menu that contains links to the executable and the
documentation. This is probably a secondary issue at this point in
toydist's development, but I think it is an important feature in the
long run.
Also, have you considered support for package extras (package variants
in Ports, allowing you to specify features that pull in additional
dependencies like traits[qt4])? Enthought makes good use of them in
ETS, and I think they would be worth keeping.
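For reference, an extra is declared in a setuptools setup.py roughly
like this (a minimal sketch, with illustrative package names):
from setuptools import setup
setup(
    name='traits',
    version='1.0',
    packages=['traits'],
    # 'qt4' is an extra: easy_install "traits[qt4]" pulls in PyQt4 too
    extras_require={'qt4': ['PyQt4']},
)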
Darren
On Wed, Dec 30, 2009 at 3:36 AM, René Dudfield <re...@gm...> wrote:
> On Tue, Dec 29, 2009 at 2:34 PM, David Cournapeau <cou...@gm...> wrote:
>> On Tue, Dec 29, 2009 at 10:27 PM, René Dudfield <re...@gm...> wrote:
>>
>>> Buildout is what a lot of the python community are using now.
>>
>> I would like to note that buildout is a solution to a problem that I
>> don't care to solve. This issue is particularly difficult to explain
>> to people accustomed to buildout in my experience - I have not found
>> a way to explain it very well yet.
>
> Hello,
>
> The main problem buildout solves is getting developers up to speed
> very quickly on a project. They should be able to call one command
> and get dozens of packages, and everything else needed ready to go,
> completely isolated from the rest of the system.
>
> If a project does not want to upgrade to the latest versions of
> packages, they do not have to. This reduces the dependency problem a
> lot. As one package does not have to block on waiting for 20 other
> packages. It makes iterating packages daily, or even hourly to not be
> a problem - even with dozens of different packages used. This is not
> theoretical, many projects iterate this quickly, and do not have
> problems.
>
> Backwards compatibility is of course a great thing to keep up... but
> harder to do with dozens of packages, some of which are third party
> ones. For example, some people are running pygame applications
> written 8 years ago that are still running today on the latest
> versions of pygame. I don't think people in the python world
> understand API, and ABI compatibility as much as those in the C world.
>
> However buildout is a solution to their problem, and allows them to
> iterate quickly with many participants, on many different projects.
> Many of these people work on maybe 20-100 different projects at once,
> and some machines may be running that many applications at once too.
> So using the system python's packages is completely out of the question
> for them.
This is all great, but I don't care about solving this issue, this is
a *developer* issue. I don't mean this is not an important issue, it
is just totally out of scope.
The developer issues I care about are much more fine-grained (correct
dependency handling between targets, toolchain customization, etc...).
Note however that hopefully, by simplifying the packaging tools, the
problems you see with numpy on 2.6 would be less common. The whole
distutils/setuptools/distribute stack is hopelessly intractable, given
how messy the code is.
>
> It is very easy to include a dozen packages in a buildout, so that you
> have all the packages required.
I think there is a confusion - I mostly care about *end users*. People
who may not have compilers, who want to be able to easily upgrade one
package, etc...
David
David Cournapeau wrote:
> Buildout, virtualenv all work by sandboxing from the system python:
> they do not see each other, which may be useful for
> development,
And certain kinds of deployment, like web servers or installed tools.
> but as a deployment solution to the casual user who may
> not be familiar with python, it is useless. A scientist who installs
> numpy, scipy, etc... to try things out want to have everything
> available in one python interpreter, and does not want to jump to
> different virtualenvs and whatnot to try different packages.
Absolutely true -- which is why Python desperately needs package version 
selection of some sort. I've been tooting this horn on and off for years 
but never got any interest at all from the core python developers.
I see putting packages in with no version like having non-versioned 
dynamic libraries in a system -- i.e. dll hell. If I have a bunch of 
stuff running just fine with the various package versions I've 
installed, but then I start working on something (maybe just testing, 
maybe something more real) that requires the latest version of a 
package, I have a few choices:
 - install the new package and hope I don't break too much
 - use something like virtualenv, which requires a lot of overhead to 
setup and use (my evidence is personal, despite working with a team that 
uses it, somehow I've never gotten around to using it for my dev work, even
though, in theory, it should be a good solution)
 - setuptools does supposedly support multiple version installs and 
selection, but it's ugly and poorly documented enough that I've never 
figured out how to use it.
This has been addressed with a handful of ad-hoc solutions: wxPython has
wxversion.select, and I think PyGTK has something, and who knows what 
else. It would be really nice to have a standard solution available.
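For the record, the wxPython one amounts to:
import wxversion
wxversion.select('2.8')  # must happen before the first "import wx"
import wx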
Note that the usual response I've gotten is to use py2exe or something 
to distribute, so you're defining the whole stack. That's good for some 
things, but not all (though py2app's "alias" bundles are nice), and 
really pretty worthless for development. Also, many, many packages are
a pain to use with py2exe and friends anyway (see my forthcoming other
long post...)
> - you cannot use sandboxing as a replacement for backward
> compatibility (that's why I don't care much about all the discussion
> about versioning - I don't think it is very useful as long as python
> itself does not support it natively).
could be -- I'd love to have Python support it natively, though 
wxversion isn't too bad.
-Chris
-- 
Christopher Barker, Ph.D.
Oceanographer
Emergency Response Division
NOAA/NOS/OR&R (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chr...@no...
On Tue, Dec 29, 2009 at 11:34:44PM +0900, David Cournapeau wrote:
> Buildout, virtualenv all work by sandboxing from the system python:
> they do not see each other, which may be useful for
> development, but as a deployment solution to the casual user who may
> not be familiar with python, it is useless. A scientist who installs
> numpy, scipy, etc... to try things out wants to have everything
> available in one python interpreter, and does not want to jump to
> different virtualenvs and whatnot to try different packages.
I think that you are pointing out a large source of misunderstanding
in packaging discussions. People behind setuptools, pip or buildout care
about having a working ensemble of packages that delivers an application (often
a web application)[1]. You and I, and many scientific developers see
libraries as building blocks that need to be assembled by the user, the
scientist using them to do new science. Thus the idea of isolation is not
something that we can accept, because it means that we are restricting
the user to a set of libraries.
Our definition of user is not the same as the user targeted by buildout.
Our user does not push buttons, but he writes code. However, unlike the
developer targeted by buildout and distutils, our user does not want or
need to learn about packaging.
Trying to make the debate clearer...
Gaël
[1] I know your position on why simply focusing on sandboxing working
ensemble of libraries is not a replacement for backward compatibility,
and will only create impossible problems in the long run. While I agree
with you, this is not my point here.
On Tue, Dec 29, 2009 at 10:27 PM, René Dudfield <re...@gm...> wrote:
> Buildout is what a lot of the python community are using now.
I would like to note that buildout is a solution to a problem that I
don't care to solve. This issue is particularly difficult to explain
to people accustomed to buildout in my experience - I have not found
a way to explain it very well yet.
Buildout, virtualenv all work by sandboxing from the system python:
they do not see each other, which may be useful for
development, but as a deployment solution to the casual user who may
not be familiar with python, it is useless. A scientist who installs
numpy, scipy, etc... to try things out wants to have everything
available in one python interpreter, and does not want to jump to
different virtualenvs and whatnot to try different packages.
This has strong consequences on how you look at things from a packaging POV:
 - uninstall is crucial
 - a package bringing down python is a big no-no (this happens way too
often when you install things through setuptools)
 - if something fails, the recovery should be trivial - the person
doing the installation may not know much about python
 - you cannot use sandboxing as a replacement for backward
compatibility (that's why I don't care much about all the discussion
about versioning - I don't think it is very useful as long as python
itself does not support it natively).
In the context of ruby, this article makes a similar point:
http://www.madstop.com/ruby/ruby_has_a_distribution_problem.html
David
On Tue, Dec 29, 2009 at 10:27 PM, René Dudfield <re...@gm...> wrote:
> Hi,
>
> In the toydist proposal/release notes, I would address 'what does
> toydist do better' more explicitly.
>
>
>
> **** A big problem for science users is that numpy does not work with
> pypi + (easy_install, buildout or pip) and python 2.6. ****
>
>
>
> Working with the rest of the python community as much as possible is
> likely a good goal.
Yes, but it is hopeless. Most of what is being discussed on
distutils-sig is useless for us, and what matters is ignored at best.
I think most people on distutils-sig are misguided, and I don't think
the community is representative of people concerned with packaging
anyway - most of the participants seem to be around web development,
and are mostly dismissive of others' concerns (OS packagers, etc...).
I want to note that I am not starting this out of thin air - I know
most of distutils code very well, I have been mostly the sole
maintainer of numpy.distutils for 2 years now. I have written
extensive distutils extensions, in particular numscons which is able
to fully build numpy, scipy and matplotlib on every platform that
matters.
Simply put, distutils code is horrible (this is an objective fact) and
flawed beyond repair (this is more controversial). IMHO, it has
almost no useful feature, except being standard.
If you want a more detailed explanation of why I think distutils and
all tools on top are deeply flawed, you can look here:
http://cournape.wordpress.com/2009/04/01/python-packaging-a-few-observations-cabal-for-a-solution/
> numpy used to work with buildout in python2.5, but not with 2.6.
> buildout lets other team members get up to speed with a project by
> running one command. It installs things in the local directory, not
> system wide. So you can have different dependencies per project.
I don't think it is a very useful feature, honestly. It seems to me
that they created a huge infrastructure to split packages into tiny
pieces, and then try to get them back together, imagining that
multiple installed versions is a replacement for backward
compatibility. Anyone with extensive packaging experience knows that's
a deeply flawed model in general.
> Plenty of good work is going on with python packaging.
That's the opposite of my experience. What I care about is:
 - tools which are hackable and easily extensible
 - robust install/uninstall
 - real, DAG-based build system
 - explicit and repeatability
None of this is supported by the tools, and the current directions go
even further away. When I have to explain at length why the
command-based design of distutils is a nightmare to work with, I don't
feel very confident that the current maintainers are aware of the
issues, for example. It shows that they never had to extend distutils
much.
>
> There are build farms for windows packages and OSX uploaded to pypi.
> Start uploading pre releases to pypi, and you get these for free (once
> you make numpy compile out of the box on those compile farms). There
> are compile farms for other OSes too... like ubuntu/debian, macports
> etc. Some distributions even automatically download, compile and
> package new releases once they spot a new file on your ftp/web site.
I am familiar with some of those systems (PPA and opensuse build
service in particular). One of the goals of my proposal is to make it
easier to interoperate with those tools.
I think Pypi is mostly useless. The lack of enforced metadata is a big
no-no IMHO. The fact that Pypi is miles behind CRAN for example is
quite significant. I want CRAN for scientific python, and I don't see
Pypi becoming it in the near future.
The point of having our own Pypi-like server is that we could do the following:
 - enforcing metadata
 - making it easy to extend the service to support our needs
>
> pypm: http://pypm.activestate.com/list-n.html#numpy
It is interesting to note that one of the maintainers of pypm has
recently quit the discussion about Pypi, most likely out of
frustration with the other participants.
> Documentation projects are being worked on to document, give tutorials
> and make python packaging be easier all round. As witnessed by 20 or
> so releases on pypi every day (and growing), lots of people are using
> the python packaging tools successfully.
This does not mean much IMO. Uploading on Pypi is almost required to
use virtualenv, buildout, etc. An interesting metric is not how many
packages are uploaded, but how much they are used outside of development.
>
> I'm not sure making a separate build tool is a good idea. I think
> going with the rest of the python community, and improving the tools
> there is a better idea.
It has been tried, and IMHO has been shown to fail. You can
look at the recent discussion (the one started by Guido in
particular).
> pps. some notes on toydist itself.
> - toydist convert is cool for people converting a setup.py . This
> means that most people can try out toydist right away. But what does
> it gain these people who convert their setup.py files?
Not much ATM, except that it is easier to write a toysetup.info
compared to setup.py IMO, and that it supports a simple way to include
data files (something which is currently *impossible* to do without
writing your own distutils extensions). It has also the ability to
build eggs without using setuptools (I consider not using setuptools a
feature, given the many failure modes of this package).
The main goals though are to make it easier to build your own tools on
top of if, and to integrate with real build systems.
> - a toydist convert that generates a setup.py file might be cool :)
toydist started like this, actually: you would write a setup.py file
which loads the package from toysetup.info, and can be converted to a
dict argument to distutils.core.setup. I have not updated it recently,
but that's definitely on the TODO list for a first alpha, as it would
enable people to benefit from the format, with 100% backward
compatibility with distutils.
> - arbitrary code execution happens when building or testing with
> toydist.
You are right for testing, but wrong for building. As long as the
build is entirely driven by toysetup.info, you only have to trust
toydist (which is not safe ATM, but that's an implementation detail),
and your build tools of course.
Obviously, if you have a package which uses an external build tool on
top of toysetup.info (as will be required for numpy itself for
example), all bets are off. But I think that's a tiny fraction of the
interesting packages for scientific computing.
Sandboxing is particularly an issue on windows - I don't know a good
solution for windows sandboxing, outside of full vms, which are
heavyweight.
> - it should be possible to build this toydist functionality as a
> distutils/distribute/buildout extension.
No, it cannot, at least as far as distutils/distribute are concerned
(I know nothing about buildout). Extending distutils is horrible, and
fragile in general. Even autotools with its mix of generated sh
scripts through m4 and perl is a breeze compared to distutils.
> - extending toydist? How are extensions made? there are 175 buildout
> packages which extend buildout, and many that extend
> distutils/setuptools - so extension of build tools is a necessary
> thing.
See my answer earlier about interoperation with build tools.
cheers,
David
On Tue, Dec 29, 2009 at 8:02 AM, Gael Varoquaux
<gae...@no...> wrote:
> On Mon, Dec 28, 2009 at 02:29:24PM -0500, Neal Becker wrote:
>> Perhaps this could be useful:
>> http://checkinstall.izto.org/
>
> Yes, checkinstall is really cool. However, I tend to prefer things with
> no magic that I don't have to sandbox to know what they are doing.
I am still not sure the design is entirely right, but the install
command in toymaker just reads a build manifest, which is a file
containing all the files necessary for install. It is explicit, and
lists every file to be installed. By design, it cannot install anything
outside this manifest.
That's also how eggs are built (and soon win installers and mac os x pkg).
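A sketch of what reading such a manifest amounts to (the format and
names here are illustrative, not the actual toymaker manifest):
import os
import shutil
def install_from_manifest(manifest_path, prefix):
    # install exactly the files listed in the manifest, nothing else
    with open(manifest_path) as manifest:
        for line in manifest:
            src, dest = [part.strip() for part in line.split('->')]
            target = os.path.join(prefix, dest)
            target_dir = os.path.dirname(target)
            if not os.path.isdir(target_dir):
                os.makedirs(target_dir)
            shutil.copy(src, target)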
cheers,
David
On Mon, Dec 28, 2009 at 02:29:24PM -0500, Neal Becker wrote:
> Perhaps this could be useful:
> http://checkinstall.izto.org/
Yes, checkinstall is really cool. However, I tend to prefer things with
no magic that I don't have to sandbox to know what they are doing. This
is why I am also happy to hear about toydist.
Gaël
David Cournapeau wrote:
> The idea would be that for a few major distributions at least, you
> would have .rpm available on the repository. If you install from
> sources, there would be a few mechanisms to avoid your exact issue
> (like maybe defaulting to --user kind of installs). Of course, it can
> only be dealt up to a point.
> 
> David
> 
Perhaps this could be useful:
http://checkinstall.izto.org/
> On Tue, Dec 29, 2009 at 3:49 AM, Dag Sverre Seljebotn
> <da...@st...> wrote:
>
>>
>> Do you here mean automatic generation of Ubuntu debs, Debian debs,
>> Windows
>> MSI installer, Windows EXE installer, and so on? (If so then great!)
>
> Yes (although this is not yet implemented). In particular on windows,
> I want to implement a scheme so that you can convert from eggs to .exe
> and vice versa, so people can still install as exe (or msi), even
> though the method would default to eggs.
>
>> If this is the goal, I wonder if one looks outside of Python-land one
>> might find something that already does this -- there's a lot of
>> different
>> package format, "Linux meta-distributions", "install everywhere
>> packages"
>> and so on.
>
> Yes, there are things like 0install or autopackage. I think those are
> doomed to fail, as long as they are not supported thoroughly by the
> distribution. Instead, my goal here is much simpler: producing
> rpm/deb. It does not solve every issue (install by non-root users,
> multiple parallel versions), but one has to be realistic :)
>
> I think automatically built rpm/deb, easy integration with native
> method can solve a lot of issues already.
>
>>
>> - Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets
>> the
>> job done by not bothering about the problem, not even using the
>> OS-installed Python.
>>
>> Something that would spit out both Sage SPKGs, Ubuntu debs, Windows
>> installers, both with Python code and C/Fortran code or a mix (and put
>> both in the place preferred by the system in question), seems ideal. Of
>> course one would still need to make sure that the code builds properly
>> everywhere, but just solving the distribution part of this would be a
>> huge
>> step ahead.
>
> On windows, this issue may be solved using eggs: enstaller has a
> feature where DLLs put in a special location of an egg are installed in
> python such that they are found by the OS loader. One could have
> mechanisms based on $ORIGIN + rpath on linux to solve this issue for
> local installs on Linux, etc...
>
> But again, one has to be realistic on the goals. With toydist, I want
> to remove all the pile of magic, hacks built on top of distutils so
> that people can again hack their own solutions, as it should have been
> from the start (that's a big plus of python in general). It won't
> magically solve every issue out there, but it would hopefully help
> people to make their own.
>
> Bundling solutions like SAGE, EPD, etc... are still the most robust
> ways to deal with those issues in general, and I do not intend to
> replace those.
>
>> What I'm saying is that this is a software distribution problem in
>> general, and I'm afraid that Python-specific solutions are too narrow.
>
> Distribution is a hard problem. Instead of pushing a very narrow (and
> mostly ill-founded) view of how people should do things like
> distutils/setuptools/pip/buildout do, I want people to be able
> to build their own solutions. No more "use this magic stick v
> 4.0.3.3.14svn1234, trust me it works, you don't have to understand"
> which is too prevalent with those tools, which has always felt deeply
> unpythonic to me.
Thanks, this cleared things up, and I like the direction this is heading.
Thanks a lot for doing this!
Dag Sverre
On Tue, Dec 29, 2009 at 3:49 AM, Dag Sverre Seljebotn
<da...@st...> wrote:
>
> Do you here mean automatic generation of Ubuntu debs, Debian debs, Windows
> MSI installer, Windows EXE installer, and so on? (If so then great!)
Yes (although this is not yet implemented). In particular on windows,
I want to implement a scheme so that you can convert from eggs to .exe
and vice versa, so people can still install as exe (or msi), even
though the method would default to eggs.
> If this is the goal, I wonder if one looks outside of Python-land one
> might find something that already does this -- there's a lot of different
> package format, "Linux meta-distributions", "install everywhere packages"
> and so on.
Yes, there are things like 0install or autopackage. I think those are
doomed to fail, as long as they are not supported thoroughly by the
distribution. Instead, my goal here is much simpler: producing
rpm/deb. It does not solve every issue (install by non-root users,
multiple parallel versions), but one has to be realistic :)
I think automatically built rpm/deb, easy integration with native
method can solve a lot of issues already.
>
> - Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets the
> job done by not bothering about the problem, not even using the
> OS-installed Python.
>
> Something that would spit out both Sage SPKGs, Ubuntu debs, Windows
> installers, both with Python code and C/Fortran code or a mix (and put
> both in the place preferred by the system in question), seems ideal. Of
> course one would still need to make sure that the code builds properly
> everywhere, but just solving the distribution part of this would be a huge
> step ahead.
On windows, this issue may be solved using eggs: enstaller has a
feature where DLLs put in a special location of an egg are installed in
python such that they are found by the OS loader. One could have
mechanisms based on $ORIGIN + rpath on linux to solve this issue for
local installs on Linux, etc...
But again, one has to be realistic on the goals. With toydist, I want
to remove all the pile of magic, hacks built on top of distutils so
that people can again hack their own solutions, as it should have been
from the start (that's a big plus of python in general). It won't
magically solve every issue out there, but it would hopefully help
people to make their own.
Bundling solutions like SAGE, EPD, etc... are still the most robust
ways to deal with those issues in general, and I do not intend to
replace those.
> What I'm saying is that this is a software distribution problem in
> general, and I'm afraid that Python-specific solutions are too narrow.
Distribution is a hard problem. Instead of pushing a very narrow (and
mostly ill-founded) view of how people should do things like
distutils/setuptools/pip/buildout do, I want people to be able
to build their own solutions. No more "use this magic stick v
4.0.3.3.14svn1234, trust me it works, you don't have to understand"
which is too prevalent with those tools, which has always felt deeply
unpythonic to me.
David
David wrote:
> Repository
> ========
>
> The goal here is to have something like CRAN
> (http://cran.r-project.org/web/views/), ideally with a build farm so
> that whenever anyone submits a package to our repository, it would
> automatically be checked, and built for windows/mac os x and maybe a
> few major linux distributions. One could investigate the build service
> from open suse to that end (http://en.opensuse.org/Build_Service),
> which is based on xen VM to build installers in a reproducible way.
Do you here mean automatic generation of Ubuntu debs, Debian debs, Windows
MSI installer, Windows EXE installer, and so on? (If so then great!)
If this is the goal, I wonder if one looks outside of Python-land one
might find something that already does this -- there's a lot of different
package format, "Linux meta-distributions", "install everywhere packages"
and so on.
Of course, toydist could have such any such tool as a backend/in a pipeline.
> What's next ?
> ==========
>
> At this point, I would like to ask for help and comments, in particular:
> - Does all this make sense, or hopelessly intractable ?
> - Besides the points I have mentioned, what else do you think is needed ?
Hmm. What I miss is the discussion of other native libraries which the
Python libraries need to bundle. Is it assumed that one want to continue
linking C and Fortran code directly into Python .so modules, like the
scipy library currently does?
Let me take CHOLMOD (sparse Cholesky) as an example.
 - The Python package cvxopt uses it, simply by linking about 20 C files
directly into the Python-loadable module (.so) which goes into the Python
site-packages (or wherever). This makes sure it just works. But, it
doesn't feel like the right way at all.
 - scikits.sparse.cholmod OTOH simply specifies libraries=["cholmod"], and
leaves it up to the end-user to make sure it is installed. Linux users
with root access can simply apt-get, but it is a pain for everybody else
(Windows, Mac, non-root Linux).
 - Currently I'm making a Sage SPKG for CHOLMOD. This essentially gets the
job done by not bothering about the problem, not even using the
OS-installed Python.
Something that would spit out both Sage SPKGs, Ubuntu debs, Windows
installers, both with Python code and C/Fortran code or a mix (and put
both in the place preferred by the system in question), seems ideal. Of
course one would still need to make sure that the code builds properly
everywhere, but just solving the distribution part of this would be a huge
step ahead.
What I'm saying is that this is a software distribution problem in
general, and I'm afraid that Python-specific solutions are too narrow.
Dag Sverre
On Tue, Dec 29, 2009 at 3:03 AM, Neal Becker <ndb...@gm...> wrote:
> David Cournapeau wrote:
>
>> On Mon, Dec 28, 2009 at 11:47 PM, Stefan Schwarzburg
>> <ste...@go...> wrote:
>>> Hi,
>>> I would like to add a comment from the user perspective:
>>>
>>> - the main reason why I'm not satisfied with pypi/distutils/etc. and why
>>> I will not be satisfied with toydist (with the features you listed), is
>>> that they break my installation (debian/ubuntu).
>>
>> Toydist (or distutils) does not break anything as is. It would be like
>> saying make breaks debian - it does not make much sense. As stated,
>> one of the goal of giving up distutils is to make packaging by os
>> vendors easier. In particular, by allowing to follow the FHS, and
>> making things more consistent. It should be possible to automatically
>> convert most packages to .deb (or .rpm) relatively easily. When you
>> look at the numpy .deb package, most of the issues are distutils
>> issues, and almost everything else can be done automatically.
>>
>> Note that even ignoring the windows problem, there are systems to do
>> the kind of things I am talking about for linux-only systems (the
>> opensuse build service), because distributions are not always really
>> good at tracking fast changing softwares. IOW, traditional linux
>> packaging has some issues as well. And anyway, nothing prevents debian
>> or other OS vendors to package things as they want (as they do for R
>> packages).
>>
>> David
>
> I think the breakage that is referred to I can describe on my favorite
> system, fedora.
>
> I can install the fedora numpy rpm using yum. I could also use
> easy_install. Unfortunately:
> 1) Each one knows nothing about the other
> 2) They may install things into conflicting paths. In particular, on fedora
> arch-dependent things go in /usr/lib64/python<version>/site-packages while
> arch-independent goes into /usr/lib/python<version>... If you mix yum with
> easy_install (or setuptools), you many times wind up with 2 versions and a
> lot of confusion.
>
> This is NOT unusual. Let's say I have numpy-1.3.0 installed from rpms. I
> see the announcement of numpy-1.4.0, and decide I want it, before the rpm is
> available, so I use easy_install. Now numpy-1.4.0 shows up as a standard
> rpm, and a subsequent update (which could be automatic!) could produce a
> broken system.
Several points:
 - First, this is caused by the distutils misfeature of defaulting to
/usr. This is a mistake. It should default to /usr/local, as does
every other install method from sources.
 - A lot of instructions start with sudo easy_install... This is very
bad advice, especially given the previous issue.
> I don't really know what could be done about it. Perhaps a design that
> attempts to use native backends for installation where available?
The idea would be that for a few major distributions at least, you
would have .rpm available on the repository. If you install from
sources, there would be a few mechanisms to avoid your exact issue
(like maybe defaulting to --user kind of installs). Of course, it can
only be dealt up to a point.
David
From: Andrew S. <str...@as...> - 2009-12-28 16:32:50
Stefan Schwarzburg wrote:
> Hi,
> I would like to add a comment from the user perspective:
>
> - the main reason why I'm not satisfied with pypi/distutils/etc. and
> why I will not be satisfied with toydist (with the features you
> listed), is that they break my installation (debian/ubuntu). The main
> advantage of these kinds of linux is their packaging system. So even
> if there is a way to query if a python package is installed, I will
> need three or more commands to check if it really is there (toydist,
> distutils, aptitude and other packages installed by hand).
I am interested in adapting stdeb ( http://github.com/astraw/stdeb ) to
handle toydist distributions. Nevertheless, the goals of toydist seem
mostly unrelated to your issue with the exception that toydist will
hopefully make generating .deb packages easier and more robust.
See http://github.com/astraw/stdeb/issues#issue/16 for a wishlist item
that I think would solve your issue. In that ticket, I give the steps
required to solve the issue -- I don't think it looks too hard, and
patches will be reviewed with a goal of producing something that gets
merged.
-Andrew
From: David C. <cou...@gm...> - 2009-12-28 15:03:31
On Mon, Dec 28, 2009 at 11:47 PM, Stefan Schwarzburg
<ste...@go...> wrote:
> Hi,
> I would like to add a comment from the user perspective:
>
> - the main reason why I'm not satisfied with pypi/distutils/etc. and why I
> will not be satisfied with toydist (with the features you listed), is that
> they break my installation (debian/ubuntu).
Toydist (or distutils) does not break anything as is. It would be like
saying make breaks debian - it does not make much sense. As stated,
one of the goals of giving up distutils is to make packaging by os
vendors easier. In particular, by allowing to follow the FHS, and
making things more consistent. It should be possible to automatically
convert most packages to .deb (or .rpm) relatively easily. When you
look at the numpy .deb package, most of the issues are distutils
issues, and almost everything else can be done automatically.
Note that even ignoring the windows problem, there are systems to do
the kind of things I am talking about for linux-only systems (the
opensuse build service), because distributions are not always really
good at tracking fast-changing software. IOW, traditional linux
packaging has some issues as well. And anyway, nothing prevents debian
or other OS vendors to package things as they want (as they do for R
packages).
David
From: Stefan S. <ste...@go...> - 2009-12-28 14:47:24
Hi,
I would like to add a comment from the user perspective:
- the main reason why I'm not satisfied with pypi/distutils/etc. and why I
will not be satisfied with toydist (with the features you listed), is that
they break my installation (debian/ubuntu). The main advantage of these
kinds of linux is their packaging system. So even if there is a way to query
if a python package is installed, I will need three or more commands to
check if it really is there (toydist, distutils, aptitude and other packages
installed by hand).
I know you don't want to hear this, but my suggestion (as a user) would be:
use the debian package system. If you introduce a new system anyway, why not
use the best there is and remove at least one incompatibility with at least
one system?
It shouldn't make a big difference on windows, because users there would
have to install/learn/use your new system anyway. The same is true for
OS X, the BSDs and RPM-based linuxes, so they won't have any disadvantage
when debs are used. But deb-based linuxes would have a big advantage:
their system wouldn't be broken.
And with debs, you would have:
- http-based package repository ala debian/ubuntu, which would be easy to
mirror and backup (through rsync-like tools)
- decoupling the building, packaging and distribution of code and data
- reliable install/uninstall/query of what is installed locally
- making the life of OS vendors (Linux, *BSD, etc...) easier
You wouldn't get automatic compilation for windows, and I'm not sure the
tools are as easy to understand as your toydist would be. But they are
powerful, ready to use, heavily tested, and maintained even if you don't
find the time to contribute.
I'm aware that there might be issues I did not mention here; this is
meant as an idea, not as a complete solution...
Cheers,
Stefan
On Mon, Dec 28, 2009 at 15:03, David Cournapeau <cou...@gm...> wrote:
> (warning, long post)
> <snip - quoted in full; see the original message below>
From: David C. <cou...@gm...> - 2009年12月28日 14:03:25
(warning, long post)
Hi there,
As some of you already know, the packaging and distribution of
scientific python packages have been a constant source of frustration.
Open source is about making it easy for anyone to use software how
they see fit, and I think the python packaging infrastructure has not
been very successful for people not intimately familiar with python. A
few weeks ago, after Guido visited Berkeley and was told how those
issues were still there for the scientific community, he wrote an email
asking whether the current efforts on distutils-sig would be enough (see
http://aspn.activestate.com/ASPN/Mail/Message/distutils-sig/3775972).
Several of us have been participating in this discussion, but I feel
like the divide between the current efforts on distutils-sig and us (the
SciPy community) is not getting smaller. At best, their efforts will
mean more work for us to track the new distribute fork, and more likely
it will all be for nothing, as it won't solve any deep issue. To be
honest, most of what is considered on distutils-sig sounds like
anti-goals to me.
Instead of keeping up with the frustrating process of "improving"
distutils, I think we have enough smart people and manpower in the
scientific community to go with our own solution. I am convinced it is
doable because R and haskell, each with a much smaller community than
python, managed to pull off something that is miles ahead of pypi. The
SciPy community is hopefully big enough that a SciPy-specific solution
may reach critical mass. Ideally, I wish we had something with the
following capabilities:
 - easy to understand tools
 - http-based package repository ala CRAN, which would be easy to
mirror and backup (through rsync-like tools)
 - decoupling the building, packaging and distribution of code and data
 - reliable install/uninstall/query of what is installed locally
 - facilities for building windows/mac os x binaries
 - making the life of OS vendors (Linux, *BSD, etc...) easier
The packaging part
==============
Speaking is easy, so I started coding part of this toolset, called
toydist (temporary name), which I presented at Scipy India a few days
ago:
http://github.com/cournape/toydist/
Toydist is more or less a rip-off of cabal
(http://www.haskell.org/cabal/), and consists of three parts:
 - a core which builds a package description from a declarative file
similar to cabal files. The file is almost purely declarative and can
be parsed without executing any arbitrary code, which makes it easy to
sandbox package builds (e.g. on a build farm) - see the parser sketch
after this list.
 - a set of command line tools to configure, build, install, and build
installers (egg only for now) from the declarative file
 - backward compatibility tools: a tool to convert an existing setup.py
to the new format has been written, and a tool to use distutils
through the new format, for backward compatibility with complex
distutils extensions, should be relatively easy to write.
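As a hedged illustration of "parsed, not executed": a tiny Python parser
for a flat Key: value subset of such a file, with indented continuation
lines. The real format has nested sections (Library, Executable, etc.),
so this is a sketch of the idea, not toydist's actual parser:

def parse_declarative(path):
    """Parse 'Key: value' fields with indented continuation lines.
    Pure data in, pure data out: nothing is exec'd or imported, so
    a build farm can safely parse untrusted package descriptions."""
    fields = {}
    current = None
    for line in open(path):
        if not line.strip():
            continue
        if line[0] in ' \t' and current is not None:
            # an indented line continues the previous field
            fields[current] += '\n' + line.strip()
        elif ':' in line:
            current, _, value = line.partition(':')
            current = current.strip()
            fields[current] = value.strip()
    return fields

print(parse_declarative('toysetup.info')['Name'])  # file name assumed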
The core idea is to make the format just rich enough to describe most
packages out there, but simple enough that interfacing it with external
tools is possible and reliable. As a regular contributor to scons, I
am all too aware that a build tool is a very complex beast to get
right, and repeating that effort does not make sense. Typically, I
envision that complex packages such as numpy, scipy or matplotlib
would use make/waf/scons for the build - in a sense, toydist is
written so that writing something like numscons would be easier. OTOH,
most if not all scikits should be buildable from a purely declarative
file.
To give you a feel for the format, here is a snippet for the grin
package from Robert K. (automatically converted); a sketch of how a
build tool might consume the Executable sections follows the snippet:
Name: grin
Version: 1.1.1
Summary: A grep program configured the way I like it.
Description:
 ====
 grin
 ====
 I wrote grin to help me search directories full of source code.
The venerable
 GNU grep_ and find_ are great tools, but they fall just a little
short for my
 normal use cases.
 <snip>
License: BSD
Platforms: UNKNOWN
Classifiers:
 License :: OSI Approved :: BSD License,
 Development Status :: 5 - Production/Stable,
 Environment :: Console,
 Intended Audience :: Developers,
 Operating System :: OS Independent,
 Programming Language :: Python,
 Topic :: Utilities,
ExtraSourceFiles:
 README.txt,
 setup.cfg,
 setup.py,
Library:
 InstallDepends:
 argparse,
 Modules:
 grin,
Executable: grin
 module: grin
 function: grin_main
Executable: grind
 module: grin
 function: grind_main
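One hedged sketch of what a build tool could do with those Executable
sections: generate console scripts that import the named module and
call the named function. The template and output layout are assumptions,
not toydist's actual behaviour:

import os

SCRIPT_TEMPLATE = """\
#!/usr/bin/env python
from %(module)s import %(function)s
if __name__ == '__main__':
    %(function)s()
"""

def write_script(name, module, function, bindir='build/bin'):
    if not os.path.isdir(bindir):
        os.makedirs(bindir)
    path = os.path.join(bindir, name)
    with open(path, 'w') as f:
        f.write(SCRIPT_TEMPLATE % {'module': module, 'function': function})
    os.chmod(path, 0o755)  # mark the generated script executable

# driven by the two Executable sections above
write_script('grin', 'grin', 'grin_main')
write_script('grind', 'grin', 'grind_main')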
Although still very much experimental at this point, toydist already
makes some things much easier than with distutils/setuptools:
 - path customization for any target can be done easily: you can
easily add an option in the file so that configure --mynewdir=value
works and is accessible at every step (see the sketch after this list).
 - making packages FHS compliant is no longer a PITA, and the scheme
can be adapted to any OS, be it traditional FHS-like unix, mac os x,
windows, etc.
 - all the options are accessible at every step (no more distutils
commands nonsense)
 - data files can finally be handled correctly and consistently,
instead of the 5 or 6 magic methods currently available in
distutils/setuptools/numpy.distutils
 - building eggs no longer involves setuptools
 - there is not much coupling between the package description and the
build infrastructure (building extensions is actually done through
distutils ATM).
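A hedged sketch of the "options accessible at every step" idea: a
configure step records every --key=value option (including custom paths
like --mynewdir) into a state file, and later steps read it back instead
of re-parsing their own command lines. Names and file layout are
assumptions:

import json

def configure(argv):
    """Record all --key=value options for the build/install steps."""
    options = {'prefix': '/usr/local', 'mynewdir': None}
    for arg in argv:
        if arg.startswith('--') and '=' in arg:
            key, _, value = arg[2:].partition('=')
            options[key] = value
    with open('configure_state.json', 'w') as f:
        json.dump(options, f)

def get_option(key):
    """Called by build/install: every configure-time option is here."""
    with open('configure_state.json') as f:
        return json.load(f)[key]

configure(['--mynewdir=/opt/mydata'])
print(get_option('mynewdir'))  # -> /opt/mydata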
Repository
========
The goal here is to have something like CRAN
(http://cran.r-project.org/web/views/), ideally with a build farm, so
that whenever anyone submits a package to our repository it would
automatically be checked and built for windows/mac os x and maybe a
few major linux distributions. One could investigate the opensuse
build service to that end (http://en.opensuse.org/Build_Service),
which uses xen VMs to build installers in a reproducible way.
Installed package db
===============
I believe that the current open source enstaller package from
Enthought can be a good starting point. It is based on eggs, but eggs
are only used as a distribution format (eggs are never installed as
eggs AFAIK). You can easily remove packages, query installed versions,
etc... Since toydist produces eggs, interoperation between toydist and
enstaller should not be too difficult.
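For illustration, a minimal sketch of the kind of installed-package
database this implies: each install records its version and file list,
so query and uninstall become simple lookups. The JSON layout and the
file path are assumptions, not enstaller's actual format:

import json, os

DB = os.path.expanduser('~/.pkgdb.json')

def _load():
    return json.load(open(DB)) if os.path.exists(DB) else {}

def record_install(name, version, files):
    db = _load()
    db[name] = {'version': version, 'files': files}
    json.dump(db, open(DB, 'w'))

def query(name):
    """Return {'version': ..., 'files': [...]} or None."""
    return _load().get(name)

def uninstall(name):
    db = _load()
    for path in db.pop(name)['files']:
        if os.path.exists(path):
            os.remove(path)
    json.dump(db, open(DB, 'w'))

record_install('grin', '1.1.1', ['/tmp/example-grin-script'])
print(query('grin'))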
What's next?
==========
At this point, I would like to ask for help and comments, in particular:
 - Does all this make sense, or is it hopelessly intractable?
 - Besides the points I have mentioned, what else do you think is needed?
 - There has already been some work on the scikits webportal, but I
think we should bypass pypi entirely (the current philosophy of not
enforcing consistent metadata does not make much sense to me, and is
the opposite of most other similar systems out there).
 - I think a build farm for at least windows packages would be a
killer feature, and enough of an incentive to push some people to use
our new infrastructure. It would be good to have a windows person
familiar with windows sandboxing/virtualization to do something there.
The people working on the opensuse build service have started working
on windows support.
 - I think being able to automatically convert most scientific
packages is a significant feature, and it needs to become more robust -
so anyone is welcome to try converting existing setup.py files with
toydist (see the toydist readme).
thanks,
David
