
matplotlib-users — Discussion related to using matplotlib


Showing 22 results of 22

From: John H. <jd...@gm...> - 2009-01-12 22:59:21
On Mon, Jan 12, 2009 at 4:35 PM, Michael Hearne <mh...@us...> wrote:
> Will fontdict continue to be a valid keyword argument for the text()
> function? It seems to work now (in version 0.98.5.1), but the help on
> this page:
>
> http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.text
>
> is ambiguous, as it lists two keyword arguments at the top
> ("fontdict","withdash"), but then does NOT show either of those two
> under the list of valid kwargs later on.
This is a very old property that we've never removed, but it predates
support for kwargs for properties and there is no reason to use it.
Just put your properties in a dict if you need to::
 d = dict(fontsize=15, color='red')
and pass them to text (or xlabel, etc) like::
 text(1, 2, 'hi', **d)
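For completeness, a self-contained version of the same idea (a minimal
sketch; the property values are arbitrary)::
 import matplotlib.pyplot as plt
 d = dict(fontsize=15, color='red')  # any valid Text properties
 plt.text(1, 2, 'hi', **d)           # equivalent to passing fontdict=d
 plt.xlabel('x label', **d)          # the same dict works for xlabel, title, etc.
 plt.show()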
JDH
From: Michael H. <mh...@us...> - 2009-01-12 22:35:52
Will fontdict continue to be a valid keyword argument for the text() 
function? It seems to work now (in version 0.98.5.1), but the help on 
this page:
http://matplotlib.sourceforge.net/api/pyplot_api.html#matplotlib.pyplot.text
is ambiguous, as it lists two keyword arguments at the top 
("fontdict","withdash"), but then does NOT show either of those two 
under the list of valid kwargs later on.
Thanks,
Mike
From: John H. <jd...@gm...> - 2009-01-12 21:03:15
On Mon, Jan 12, 2009 at 2:51 PM, Gregory S Morin <gs...@cs...> wrote:
> Anyone who got this to build on Solaris have any info on which
> compiler/libraries you used, env vars, etc?
>
> I've been trying a few different configs, but haven't made any real
> progress. I'll have to poke at it when I get some more free time, but I'm
> hoping maybe someone else has run into this.
>
> Thanks John, for the comment, it is a good start.
I'll attach the Makefile I use on our solaris x86 system -- your
mileage may vary! The makefile is for updating, building, and
installing ipython, numpy, mpl and scipy from svn/bzr head:
HOME = /home/titan/johnh
SRCDIR = ${HOME}/python/svn
PREFIX=${HOME}/dev
INSTALL=/opt/basedir/research/site-packages
SVN = /opt/app/bin/svn
BZR = /opt/app/bin/bzr
CC=/opt/app/g++lib6/gcc-3.4/bin/gcc
CXX=/opt/app/g++lib6/gcc-3.4/bin/g++
PKG_CONFIG_PATH=/opt/app/gnome-2.6/lib/pkgconfig
INSTALL_LOCAL=${PREFIX}/lib/python2.4/site-packages

clean_ipython:
	rm -rf ${SRCDIR}/ipython/build; true
clean_numpy:
	rm -rf ${SRCDIR}/numpy/build; true
clean_scipy:
	rm -rf ${SRCDIR}/scipy/build; true
clean_mpl:
	rm -rf ${SRCDIR}/mpl/build; true

clean: clean_ipython clean_numpy clean_scipy clean_mpl

up_ipython:
	${BZR} merge --pull --directory=${SRCDIR}/ipython
up_mpl:
	${SVN} up ${SRCDIR}/mpl
up_numpy:
	${SVN} up ${SRCDIR}/numpy
up_scipy:
	${SVN} up ${SRCDIR}/scipy

up: up_ipython up_numpy up_scipy up_mpl

local_ipython:
	cd ${SRCDIR}/ipython; python setup.py build
	rm -rf ${INSTALL_LOCAL}/ipython; true
	cd ${SRCDIR}/ipython; python setup.py install --prefix=${PREFIX}
local_numpy:
	cd ${SRCDIR}/numpy; CC=${CC} CXX=${CXX} python setup.py build
	rm -rf ${INSTALL_LOCAL}/numpy; true
	cd ${SRCDIR}/numpy; python setup.py install --prefix=${PREFIX}
local_scipy:
	cd ${SRCDIR}/scipy; CC=${CC} CXX=${CXX} ATLAS=/opt/app/nonc++/atlas-3.7/lib \
	  PYTHONPATH=${INSTALL_LOCAL} python setup.py build
	rm -rf ${INSTALL_LOCAL}/scipy; true
	cd ${SRCDIR}/scipy; CC=${CC} CXX=${CXX} PYTHONPATH=${INSTALL_LOCAL} \
	  python setup.py install --prefix=${PREFIX}
local_mpl:
	cd ${SRCDIR}/mpl; CC=${CC} CXX=${CXX} PKG_CONFIG_PATH=${PKG_CONFIG_PATH} \
	  PYTHONPATH=${INSTALL_LOCAL}:/opt/app/g++lib6/python-2.4/lib/python2.4/site-packages/3rdParty/gtk-2.6 \
	  python setup.py build_ext \
	  -I /opt/app/gnome-2.6/include:/opt/app/libpng/include:/opt/app/freetype-2.1.10/include:/opt/app/freetype-2.1.10/include/freetype2 \
	  -L /opt/app/gnome-2.6/lib:/opt/app/libpng/lib:/opt/app/freetype-2.1.10/lib \
	  -R /opt/app/gnome-2.6/lib:/opt/app/freetype-2.1.10/lib:/opt/app/libpng/lib \
	  build
	rm -rf ${INSTALL_LOCAL}/matplotlib; true
	cd ${SRCDIR}/mpl; PKG_CONFIG_PATH=${PKG_CONFIG_PATH} PYTHONPATH=${INSTALL_LOCAL} \
	  python setup.py install --prefix=${PREFIX}

local: local_ipython local_numpy local_scipy local_mpl

all: clean up local

install:
	cp -r ${INSTALL} ${INSTALL}.bak
	rm -rf ${INSTALL}/IPython; true
	cd ${INSTALL_LOCAL}; cp -r IPython ${INSTALL}/
	rm -rf ${INSTALL}/numpy; true
	cd ${INSTALL_LOCAL}; cp -r numpy ${INSTALL}/
	rm -rf ${INSTALL}/scipy; true
	cd ${INSTALL_LOCAL}; cp -r scipy ${INSTALL}/
	rm -rf ${INSTALL}/matplotlib ${INSTALL}/pylab* ${INSTALL}/dateutil ${INSTALL}/pytz; true
	cd ${INSTALL_LOCAL}; cp -r matplotlib pylab* dateutil pytz ${INSTALL}/
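For reference, the intended entry points, going by the targets above, would be:
 make all      # clean, update everything from svn/bzr, rebuild and install under ${PREFIX}
 make install  # back up ${INSTALL} and copy the freshly built packages into it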
From: Gregory S M. <gs...@cs...> - 2009-01-12 20:51:56
Does anyone who got this to build on Solaris have any info on which
compiler/libraries they used, env vars, etc.?
I've been trying a few different configs, but haven't made any real 
progress. I'll have to poke at it when I get some more free time, but I'm 
hoping maybe someone else has run into this.
Thanks John, for the comment, it is a good start.
+----------------------------------+
| Gregory S. Morin |
| Graduate Assistant |
| RIT CS Dept. |
| |
| "Quis custodiet ipsos custodes?" |
+----------------------------------+
On Fri, 9 Jan 2009, John Hunter wrote:
> On Fri, Jan 9, 2009 at 1:13 PM, Gregory S Morin <gs...@cs...> wrote:
>
>
>> ImportError: ld.so.1: python: fatal: relocation error: file
>> /usr/local/versions/python-2.5.1/lib/python2.5/site-packages/matplotlib/_path.so:
>> symbol
>> __1cDstdMbasic_string4Ccn0ALchar_traits4Cc__n0AJallocator4Cc___J__nullref_:
>> referenced symbol not found
>
>
> Hard to say for sure, but this looks like a C++ name mangling error.
> These occur when you compile with one compiler or version (eg g++) and
> try to link with a lib compiled with another C++ compiler (eg the
> solaris compiler). In this case, since the name mangling looks like
> the stl string container, my guess is you are picking up the solaris
> c++ stdlib and compiling with g++.
>
> JDH
>
From: Jae-Joon L. <lee...@gm...> - 2009-01-12 20:21:33
>
> For my current use it would be enough if savefig had an option
> bbox = 'tight'
> so that only the area actually drawn in was written
> to file.
>
> The problem is that if you set the fig size and then
> set the axis size proportionately, you must fiddle
> with things to get a tight fit to what's actually drawn
> (including the tick labels and axis labels).
It seems that you want not only to fix the size of the axes but also
to adjust their position (and the figure size accordingly). Adjusting
the position of the axes is not straightforward, but take a look at
the following tutorial (if you haven't):
http://matplotlib.sourceforge.net/faq/howto_faq.html#automatically-make-room-for-tick-labels
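In case it helps, here is a minimal sketch of the manual approach that
FAQ describes (the figure size and margins below are arbitrary numbers,
not a recommendation)::
 import matplotlib.pyplot as plt
 fig = plt.figure(figsize=(4, 3))           # fix the figure size in inches
 fig.subplots_adjust(left=0.2, bottom=0.2)  # reserve room for tick and axis labels
 ax = fig.add_subplot(111)
 ax.plot(range(10))
 ax.set_xlabel('x')
 fig.savefig('out.png')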
Anyhow, there might not be a neat way to do this within the current
mpl framework.
On the other hand, I think it might be possible to implement the
"bbox=tight" option by tweaking the figure class (or renderer), and
I'll think about it later if possible. Meanwhile, I'm afraid there is
not much I can do to help.
Regards,
-JJ
From: Eric F. <ef...@ha...> - 2009-01-12 19:53:32
Simson Garfinkel wrote:
> Hi. It's me again, asking about dates again.
> 
> is there any easy way to draw a collection using dates on the X axis?
> I've taken the collection example from the website and adapted it so
> that there is a use_dates flag. Set it to False and spirals demo 
> appears. Set it to True and I get this error:
Yes, it looks like a bug in the handling of units in the Collection base 
class; unit conversion is done at drawing time, but needs either to be 
done earlier, or to be done independently in the get_datalim method.
Maybe one of the units-support experts will pick this up and fix it. I 
can't do more now.
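In the meantime, a possible workaround (an untested sketch, not a fix
for the underlying bug) is to convert the dates to matplotlib's numeric
date representation yourself, so the offsets reaching get_datalim are
already plain floats::
 import matplotlib.dates as mdates
 xo_num = mdates.date2num(xo)  # floats (days), instead of datetime.date objects
 xyo = zip(xo_num, yo)
 # build the LineCollection with these numeric offsets, then set the
 # locators, formatter and x-limits exactly as in the quoted script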
Eric
> 
> Traceback (most recent call last):
>   File "mpl_collection2.py", line 51, in <module>
>     ax.add_collection(col, autolim=True)
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axes.py", line 1312, in add_collection
>     self.update_datalim(collection.get_datalim(self.transData))
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/collections.py", line 144, in get_datalim
>     offsets = transOffset.transform_non_affine(offsets)
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/transforms.py", line 1914, in transform_non_affine
>     self._a.transform(points))
>   File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/transforms.py", line 1408, in transform
>     return affine_transform(points, mtx)
> ValueError: Invalid vertices array.
> 
> 
> The code is below.
> 
> Thanks!
> 
> =========================
> import matplotlib
> import matplotlib.pyplot
> from matplotlib import collections, transforms
> from matplotlib.colors import colorConverter
> import numpy as N
> 
> import datetime
> 
> use_dates = False
> 
> nverts = 50
> npts = 100
> 
> # Make some spirals
> r = N.array(range(nverts))
> theta = N.array(range(nverts)) * (2*N.pi)/(nverts-1)
> xx = r * N.sin(theta)
> yy = r * N.cos(theta)
> spiral = zip(xx,yy)
> 
> # Make some offsets
> rs = N.random.RandomState([12345678])
> 
> if not use_dates:
>     xo = [i for i in range(0,100)]
> else:
>     xo = [datetime.date(1990,1,1)+datetime.timedelta(10)*i for i in range(0,100)]  # new version
> 
> yo = rs.randn(npts)
> xyo = zip(xo, yo)
> colors = [colorConverter.to_rgba(c) for c in ('r','g','b','c','y','m','k')]
> 
> fig = matplotlib.pyplot.figure()
> ax = fig.add_subplot(1,1,1)
> 
> if use_dates:
>     import matplotlib.dates as mdates
>     years = mdates.YearLocator()    # every year
>     months = mdates.MonthLocator() # every month
>     yearsFmt = mdates.DateFormatter('%Y')
>     ax.xaxis.set_major_locator(years)
>     ax.xaxis.set_major_formatter(yearsFmt)
>     ax.set_xlim(datetime.date(1990,1,1), datetime.date(1992,12,31))
> 
> col = collections.LineCollection([spiral], offsets=xyo, transOffset=ax.transData)
> trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0)
> col.set_transform(trans)  # the points to pixels transform
> ax.add_collection(col, autolim=True)
> col.set_color(colors)
> 
> ax.autoscale_view()
> ax.set_title('LineCollection using offsets')
> matplotlib.pyplot.show()
From: Sandro T. <mo...@de...> - 2009-01-12 19:22:00
On Mon, Jan 12, 2009 at 19:52, Mauro Cavalcanti <mau...@gm...> wrote:
> I must say I feel truly honoured. I never expected my humble complaint
> would merit the attention of one of the Debian maintainers!
eheh, well, we are not some sort of gods or anything: we do talk to mortals :D
> I will follow the directions you provided - as I understood, they will
> result in the creation of a deb package for the latest version of
> Matplotlib? This will be great!
Yes: Ubuntu is a Debian-based distribution, so once you have the deb
files (built on your Ubuntu box) it is enough to run
sudo dpkg -i python-matplotlib*.deb
and it will install them.
> I will report the results to you (and the Matplotlib-users lists) as
> soon as possible.
Great. The only problem I see is that there are a lot of caveats in
building packages, and you may hit some corner case (which I couldn't
cover in my uber-fast introduction to deb building :) ).
Regards,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
From: Mauro C. <mau...@gm...> - 2009-01-12 18:56:32
Dear Sandro,
I must say I feel truly honoured. I never expected my humble complaint
would merit the attention of one of the Debian maintainers!
Well, thank you very much!
I will follow the directions you provided -- as I understand it, they
will result in the creation of a deb package for the latest version of
Matplotlib? This will be great!
I will report the results to you (and the Matplotlib-users lists) as
soon as possible.
With warmest regards,
On 2009-01-12, Sandro Tosi <mo...@de...> wrote:
> Hello Mauro,
> (thanks John for highlighting me :) )
>
> On Mon, Jan 12, 2009 at 17:50, Mauro Cavalcanti <mau...@gm...> wrote:
>> 3) I am running MPL 0.98.5.1, installed with Ubuntu itself. I ran sudo
>> easy_install matplotlib in an attempt to install version 0.98.5.2, but
>> this gave no result (easy_install tells me that I already have the
>> latest version of MPL)
>
> I'm the Debian maintainer (while working with the Ubuntu one), so I suggest
> using the code in our SVN repository to build the package:
>
> <get the mpl tarball, v0.98.5.2 should be fine, name it
> matplotlib_0.98.5.2.orig.tar.gz>
> <untar it in a dir called matplotlib-0.98.5.2, that will contain the mpl files>
> cd matplotlib-0.98.5.2
> svn co svn://svn.debian.org/svn/python-modules/packages/matplotlib/trunk/debian/
> sudo apt-get install build-essential
> debuild -us -uc
> <install the missing build-dependencies returned from above commands
> and reiter if needed>
>
> It's a little bit tedious (and the build will take a while) but in '..'
> (the parent directory) you should find the .deb to install.
>
> If "someone" would release .3, we could package it for Debian systems :) :)
>
>> Well, I have googled the error message: "ImportError: libffi.so.4:
>> cannot open shared object file: No such file or directory" and it
>> seems that a few other Python applications are crashing with this
>> error under Ubuntu Intrepid (but I could not find any solution, just
>> error reports).
>
> libffi received a SONAME bump to .5 (in fact, the package is now called
> libffi5), so there might be something still referring to the old library,
> which probably got removed during the upgrade (Ubuntu QA work.....)
>
>> I am not sure if compiling from source would help in this case,
>> because the error seems to be related to a missing system (?) library
>> and not to MPL or any of the backends.
>
> Yes, it would help, because relinking to the new lib will fix those broken links.
>
> Let us know any progress.
>
> Regards,
> --
> Sandro Tosi (aka morph, morpheus, matrixhasu)
> My website: http://matrixhasu.altervista.org/
> Me at Debian: http://wiki.debian.org/SandroTosi
>
-- 
Dr. Mauro J. Cavalcanti
Ecoinformatics Studio
P.O. Box 46521, CEP 20551-970
Rio de Janeiro, RJ, BRASIL
E-mail: mau...@gm...
Web: http://studio.infobio.net
Linux Registered User #473524 * Ubuntu User #22717
"Life is complex. It consists of real and imaginary parts."
From: Sandro T. <mo...@de...> - 2009-01-12 18:21:06
Hello Mauro,
(thanks John for highlighting me :) )
On Mon, Jan 12, 2009 at 17:50, Mauro Cavalcanti <mau...@gm...> wrote:
> 3) I am running MPL 0.98.5.1, installed with Ubuntu itself. I ran sudo
> easy_install matplotlib in an attempt to install version 0.98.5.2, but
> this gave no result (easy_install tells me that I already have the
> latest version of MPL)
I'm the Debian maintainer (while working with the Ubuntu one), so I suggest
using the code in our SVN repository to build the package:
<get the mpl tarball, v0.98.5.2 should be fine, name it
matplotlib_0.98.5.2.orig.tar.gz>
<untar it in a dir called matplotlib-0.98.5.2, that will contain the mpl files>
cd matplotlib-0.98.5.2
svn co svn://svn.debian.org/svn/python-modules/packages/matplotlib/trunk/debian/
sudo apt-get install build-essential
debuild -us -uc
<install the missing build-dependencies returned from above commands
and reiter if needed>
It's a little bit tedious (and the build will take a while) but in '..'
(the parent directory) you should find the .deb to install.
If "someone" would release .3, we could package it for Debian systems :) :)
> Well, I have googled the error message: "ImportError: libffi.so.4:
> cannot open shared object file: No such file or directory" and it
> seems that a few other Python applications are crashing with this
> error under Ubuntu Intrepid (but I could not find any solution, just
> error reports).
libffi received a SONAME bump to .5 (in fact, the package is now called
libffi5), so there might be something still referring to the old library,
which probably got removed during the upgrade (Ubuntu QA work.....)
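A quick way to verify (a sketch; the .so path comes from the traceback,
and the package names are my guess from the SONAMEs):
 ldd /usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/_backend_gdk.so | grep libffi
 apt-cache policy libffi4 libffi5
If ldd shows something like "libffi.so.4 => not found", that confirms a
stale link rather than an mpl bug.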
> I am not sure if compiling from source would help in this case,
> because the error seems to be related to a missing system (?) library
> and not to MPL or any of the backends.
Yes, it would help, because relinking to the new lib will fix those broken links.
Let us know any progress.
Regards,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi
From: Mauro C. <mau...@gm...> - 2009-01-12 16:54:37
Dear John & Darren,
Thanks for your reply -- the "panic" I alluded to is simply the
result of three months' strain working day-by-day to develop a very
large application using wxPython/MPL/Basemap under Linux, along with a
RAM crash and, more recently, an upgrade from Ubuntu 8.04 (Hardy
Heron) to Ubuntu 8.10 (Intrepid Ibex -- I love the fondness the guys
at Canonical have for zoology).
Well, that said, here are my replies to the questions and comments:
1) I am not using pyplot in an embedded environment. Indeed,
MPL/Basemap is running very well embedded in my wxPython application,
*without* pyplot of course. The reported error appears only when I try
any test script using pyplot (these scripts are useful for testing
ideas in an interpreted language like Python, before putting them to
work in a GUI application). My application uses the wxagg backend,
which works well.
2) All scripts using pyplot ran quite well under Ubuntu Hardy; they
only started presenting these issues after I upgraded to Intrepid
(which, BTW, looks much more stable on my system than Hardy was --
except for this problem with MPL).
3) I am running MPL 0.98.5.1, installed with Ubuntu itself. I ran sudo
easy_install matplotlib in an attempt to install version 0.98.5.2, but
this gave no result (easy_install tells me that I already have the
latest version of MPL)
Well, I have googled the error message: "ImportError: libffi.so.4:
cannot open shared object file: No such file or directory" and it
seems that a few other Python applications are crashing with this
error under Ubuntu Intrepid (but I could not find any solution, just
error reports).
I am not sure if compiling from source would help in this case,
because the error seems to be related to a missing system (?) library
and not to MPL or any of the backends.
So, for the time being, it looks like I cannot use MPL with pyplot
anymore, which is not a huge problem (as I do not use the library
MATLAB-style anyway), but it prevents me from using it for fast
testing of Basemap scripts.
With best regards,
On 2009-01-12, John Hunter <jd...@gm...> wrote:
> On Mon, Jan 12, 2009 at 7:44 AM, Mauro Cavalcanti <mau...@gm...> wrote:
>> Dear ALL,
>>
>> I just found the following error when trying to run a very simple test
>> MPL/Basemap script under Ubuntu Linux Intrepid. This does *not* happen
>> when not importing MPL pyplot (for example, in a wxPython embedded
>> app).
>>
>> Traceback (most recent call last):
>> from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK,
>> FigureCanvasGTK,\
>> File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtk.py",
>> line 25, in <module>
>> from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
>> File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gdk.py",
>> line 29, in <module>
>> from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
>> ImportError: libffi.so.4: cannot open shared object file: No such file
>> or directory
>>
>> Any hints??? I am near panic now...
>
> No one ever made a dime by panicking :-)
>
> Did you compile mpl yourself or are you using a prepackaged version?
> If you are using the ubuntu package of mpl, this is likely a bug with
> the package and should be reported to ubuntu. If you compiled
> yourself, what version are you using? Can you attach the output of a
> clean build process?
>
> See::
>
> http://matplotlib.sourceforge.net/faq/troubleshooting_faq.html#reporting-problems
>
>
> Note that the problem appears in the gtk backend, so if it turns out
> to be an ubuntu problem and you need a solution right away, you should
> be able to use a different backend, eg tkagg, but please take the time
> to report it to the ubuntu package maintainer if it has not been
> already. It looks like it is related to this debian bug
>
> http://osdir.com/ml/linux.debian.devel.python.modules/2008-05/msg00009.html
>
> I've CCd Sandro (the debian, not the ubuntu, maintainer) who may have
> more information.
>
> JDH
>
-- 
Dr. Mauro J. Cavalcanti
Ecoinformatics Studio
P.O. Box 46521, CEP 20551-970
Rio de Janeiro, RJ, BRASIL
E-mail: mau...@gm...
Web: http://studio.infobio.net
Linux Registered User #473524 * Ubuntu User #22717
"Life is complex. It consists of real and imaginary parts."
From: Laurent D. <lau...@gm...> - 2009-01-12 16:00:18
Thanks John, for your fast reply (as always).
You're right, it seems to be a py2exe optimization problem; I've found
the true origin of the problem.
So, you're right, you don't have to include my ugly patch :)
Thanks for the link, I will write a little tutorial on making .exe files!
Regards,
Laurent
> -----Original Message-----
> From: John Hunter [mailto:jd...@gm...]
> Sent: Monday, January 12, 2009 15:34
> To: Laurent Dufrechou
> Cc: mat...@li...
> Subject: Re: [Matplotlib-users] Problem in mlab.py while creating .exe
> 
> On Mon, Jan 12, 2009 at 5:04 AM, Laurent Dufrechou
> <lau...@gm...> wrote:
> 
> > I've created a little software for data analysis with your fantastic
> > library.
> >
> > I've made .exe with py2exe and since I've upgraded to last revision
> my .exe
> > was no more working.
> > I know this is a quick ugly fix, but I had to do a release so… :)
> >
> > I don't use mlab.py but making a .exe seems to include it and
> import it…
> >
> > Any idea what should be THE good solution to correct mlab.py, and do
> you
> > mind to commit a patch for this issue?
> 
> I don't think we'll take a patch of the form you have provided. But
> I'm pretty sure there is a solution on the py2exe end that requires no
> changes to mpl. py2exe is removing the doc, probably as a space
> optimization. If you disable optimization in py2exe, the problem
> should go away, eg
> 
> http://lists.wxwidgets.org/pipermail/wxpython-users/2005-
> May/039506.html
> 
> > On the website there is no doc to make a .exe.
> >
> > I've spent a lot of time making a setup.py script to do a .exe. Have
> some
> > interest in it?
> >
> > Perhaps adding a section in the doc?
> 
> Yes, definitely, I would be very interested in a FAQ with some example
> code we could put in the examples dir. See
> 
> http://matplotlib.sourceforge.net/faq/howto_faq.html#submit-a-patch
> http://matplotlib.sourceforge.net/faq/howto_faq.html#contribute-to-
> matplotlib-documentation
> 
> for instructions on how to contribute.
> 
> JDH
From: Hani N. <na...@gm...> - 2009-01-12 15:39:22
I did use sudo--here are the shell command and the install output:
sudo python setup.py install
============================================================================
BUILDING MATPLOTLIB
 matplotlib: 0.98.5.2
 python: 2.5.2 (r252:60911, Oct 5 2008, 19:29:17) [GCC 4.3.2]
 platform: linux2
REQUIRED DEPENDENCIES
 numpy: 1.2.1
 freetype2: 9.18.3
OPTIONAL BACKEND DEPENDENCIES
 libpng: 1.2.27
 Tkinter: Tkinter: 50704, Tk: 8.4, Tcl: 8.4
 wxPython: no
 * wxPython not found
 Gtk+: gtk+: 2.14.4, glib: 2.18.2, pygtk: 2.13.0,
 pygobject: 2.15.3
 Mac OS X native: no
 Qt: no
 Qt4: no
 Cairo: 1.4.12
OPTIONAL DATE/TIMEZONE DEPENDENCIES
 datetime: present, version unknown
 dateutil: 1.4
 pytz: 2008b
OPTIONAL USETEX DEPENDENCIES
 dvipng: 1.11
 ghostscript: 8.63
 latex: 3.141592
 pdftops: 3.00
[Edit setup.cfg to suppress the above messages]
============================================================================
pymods ['pylab']
packages ['matplotlib', 'matplotlib.backends', 'matplotlib.projections',
'mpl_toolkits', 'matplotlib.numerix', 'matplotlib.numerix.mlab', '
matplotlib.numerix.ma', 'matplotlib.numerix.npyma',
'matplotlib.numerix.linear_algebra', 'matplotlib.numerix.random_array',
'matplotlib.numerix.fft', 'matplotlib.delaunay']
running install
running build
running build_py
copying lib/matplotlib/mpl-data/matplotlibrc ->
build/lib.linux-x86_64-2.5/matplotlib/mpl-data
copying lib/matplotlib/mpl-data/matplotlib.conf ->
build/lib.linux-x86_64-2.5/matplotlib/mpl-data
running build_ext
running install_lib
copying build/lib.linux-x86_64-2.5/mpl_toolkits/gtktools.py ->
/usr/lib/python2.5/site-packages/mpl_toolkits
error: /usr/lib/python2.5/site-packages/mpl_toolkits/gtktools.py: No such
file or directory
On Fri, Jan 9, 2009 at 10:02 PM, John Hunter <jd...@gm...> wrote:
> On Fri, Jan 9, 2009 at 10:56 AM, Hani Nakhoul <na...@gm...> wrote:
> > Hi all,
> > I'm having trouble compiling Matplotlib (0.98.5.2) on Ubuntu Intrepid
> (Linux
> > ubuntu 2.6.27-9-generic #1 SMP Thu Nov 20 22:15:32 UTC 2008 x86_64
> > GNU/Linux). Running install ultimately gives
>
> I would like to see the shell command you type for the install (did
> you use sudo?) and the complete output. You showed us the build
> output, which looks fine, so please post the *entire* install command
> and output.
>
> JDH
>
From: Alan G I. <ala...@gm...> - 2009-01-12 15:10:21
>> The best I've come up with so far is to create a figure 
>> with a set size, and then add axes with a specified 
>> rectangle. 
On 1/12/2009 3:56 AM Jae-Joon Lee apparently wrote:
> I think the way you're doing it is the easiest way.
> Anyhow, can you provide an example of a "neat path"? In my view, fixed 
> axes size means fixed figure size (in most cases), and I'm not quite 
> sure how this can be improved. 
For my current use it would be enough if savefig had an option
 bbox = 'tight'
so that only the area actually drawn in was written
to file.
The problem is that if you set the fig size and then
set the axis size proportionately, you must fiddle
with things to get a tight fit to what's actually drawn
(including the tick labels and axis labels).
Cheers,
Alan Isaac
From: John H. <jd...@gm...> - 2009-01-12 14:33:57
On Mon, Jan 12, 2009 at 5:04 AM, Laurent Dufrechou
<lau...@gm...> wrote:
> I've created a little software for data analysis with your fantastic
> library.
>
> I've made .exe with py2exe and since I've upgraded to last revision my .exe
> was no more working.
> I know this is a quick ugly fix, but I had to do a release so... :)
>
> I don't use mlab.py but making a .exe seems to include it and import it...
>
> Any idea what should be THE good solution to correct mlab.py, and do you
> mind to commit a patch for this issue?
I don't think we'll take a patch of the form you have provided. But
I'm pretty sure there is a solution on the py2exe end that requires no
changes to mpl. py2exe is removing the docstrings, probably as a space
optimization. If you disable optimization in py2exe, the problem
should go away, eg
 http://lists.wxwidgets.org/pipermail/wxpython-users/2005-May/039506.html
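In case it helps, the py2exe side would look something like this (a
sketch; the entry-point script name is made up)::
 from distutils.core import setup
 import py2exe
 setup(
     console=['myscript.py'],              # your real entry point here
     options={'py2exe': {'optimize': 0}},  # 0 = no -O, so docstrings survive
 )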
> On the website there is no doc to make a .exe.
>
> I've spent a lot of time making a setup.py script to do a .exe. Have some
> interest in it?
>
> Perhaps adding a section in the doc?
Yes, definitely, I would be very interested in a FAQ with some example
code we could put in the examples dir. See
 http://matplotlib.sourceforge.net/faq/howto_faq.html#submit-a-patch
 http://matplotlib.sourceforge.net/faq/howto_faq.html#contribute-to-matplotlib-documentation
for instructions on how to contribute.
JDH
From: John H. <jd...@gm...> - 2009-01-12 14:18:49
On Mon, Jan 12, 2009 at 7:44 AM, Mauro Cavalcanti <mau...@gm...> wrote:
> Dear ALL,
>
> I just found the following error when trying to run a very simple test
> MPL/Basemap script under Ubuntu Linux Intrepid. This does *not* happen
> when not importing MPL pyplot (for example, in a wxPython embedded
> app).
>
> Traceback (most recent call last):
> from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK,
> FigureCanvasGTK,\
> File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtk.py",
> line 25, in <module>
> from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
> File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gdk.py",
> line 29, in <module>
> from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
> ImportError: libffi.so.4: cannot open shared object file: No such file
> or directory
>
> Any hints??? I am near panic now...
No one ever made a dime by panicking :-)
Did you compile mpl yourself or are you using a prepackaged version?
If you are using the ubuntu package of mpl, this is likely a bug with
the package and should be reported to ubuntu. If you compiled
yourself, what version are you using? Can you attach the output of a
clean build process?
See::
 http://matplotlib.sourceforge.net/faq/troubleshooting_faq.html#reporting-problems
Note that the problem appears in the gtk backend, so if it turns out
to be an ubuntu problem and you need a solution right away, you should
be able to use a different backend, eg tkagg, but please take the time
to report it to the ubuntu package maintainer if it has not been
already. It looks like it is related to this debian bug
 http://osdir.com/ml/linux.debian.devel.python.modules/2008-05/msg00009.html
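For the workaround, forcing a backend for a single script is one line
(a sketch; assumes Tkinter is available)::
 import matplotlib
 matplotlib.use('TkAgg')  # must come before importing pylab/pyplot
 from pylab import *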
I've CCd Sandro (the debian, not the ubuntu, maintainer) who may have
more information.
JDH
From: Darren D. <dsd...@gm...> - 2009-01-12 14:14:31
On Mon, Jan 12, 2009 at 8:44 AM, Mauro Cavalcanti <mau...@gm...> wrote:
> Dear ALL,
>
> I just found the following error when trying to run a very simple test
> MPL/Basemap script under Ubuntu Linux Intrepid. This does *not* happen
> when not importing MPL pyplot (for example, in a wxPython embedded
> app).
>
> Traceback (most recent call last):
> File "/home/maurobio/Projetos/Croizat/test/mexico.py", line 2, in <module>
> import matplotlib.pyplot as plt
> File
> "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/pyplot.py",
> line 75, in <module>
> new_figure_manager, draw_if_interactive, show = pylab_setup()
> File
> "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/__init__.py",
> line 25, in pylab_setup
> globals(),locals(),[backend_name])
> File
> "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtkagg.py",
> line 10, in <module>
> from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK,
> FigureCanvasGTK,\
> File
> "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtk.py",
> line 25, in <module>
> from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
> File
> "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gdk.py",
> line 29, in <module>
> from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
> ImportError: libffi.so.4: cannot open shared object file: No such file
> or directory
>
> Any hints??? I am near panic now...
>
No need to panic. It looks like your default backend has been set to gtkagg.
I think you just need to set your backend to wxagg in your matplotlibrc
file: http://matplotlib.sourceforge.net/users/customizing.html
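The relevant matplotlibrc line would be just (see that page for where
the file lives)::
 backend : WXAgg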
P.S. pyplot should not be used in an embedded environment; that interface is
meant for command line and scripting use. For embedded applications, you
should use the object-oriented interface. There are several wx examples at
http://matplotlib.sourceforge.net/examples/user_interfaces/index.html
From: Mauro C. <mau...@gm...> - 2009-01-12 13:48:07
Dear ALL,
I just found the following error when trying to run a very simple test
MPL/Basemap script under Ubuntu Linux Intrepid. This does *not* happen
when not importing MPL pyplot (for example, in a wxPython embedded
app).
Traceback (most recent call last):
  File "/home/maurobio/Projetos/Croizat/test/mexico.py", line 2, in <module>
    import matplotlib.pyplot as plt
  File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/pyplot.py", line 75, in <module>
    new_figure_manager, draw_if_interactive, show = pylab_setup()
  File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/__init__.py", line 25, in pylab_setup
    globals(),locals(),[backend_name])
  File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtkagg.py", line 10, in <module>
    from matplotlib.backends.backend_gtk import gtk, FigureManagerGTK, FigureCanvasGTK,\
  File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gtk.py", line 25, in <module>
    from matplotlib.backends.backend_gdk import RendererGDK, FigureCanvasGDK
  File "/usr/lib/python2.5/site-packages/matplotlib-0.98.5.1-py2.5-linux-i686.egg/matplotlib/backends/backend_gdk.py", line 29, in <module>
    from matplotlib.backends._backend_gdk import pixbuf_get_pixels_array
ImportError: libffi.so.4: cannot open shared object file: No such file
or directory
Any hints??? I am near panic now...
Thanks in advance!
-- 
Dr. Mauro J. Cavalcanti
Ecoinformatics Studio
P.O. Box 46521, CEP 20551-970
Rio de Janeiro, RJ, BRASIL
E-mail: mau...@gm...
Web: http://studio.infobio.net
Linux Registered User #473524 * Ubuntu User #22717
"Life is complex. It consists of real and imaginary parts."
From: Mauro C. <mau...@gm...> - 2009-01-12 12:22:06
Dear Jeff,
Thanks a lot, as always, for your help! I just expected there could
be some sort of 'one-line' solution for removing the
parallel/meridian lines, avoiding the need for a loop (as I want to
remove all the lines at once). But, of course, it works!
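(For the archives, the loop is short; a sketch built from the docstring
Jeff quotes below, where pd is the dict returned by drawparallels::
 for lines, texts in pd.values():
     for artist in lines + texts:
         artist.remove()
followed by a redraw.)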
I will take this opportunity to ask you if there are plans to release,
as a regular MS-Windows executable, the latest version of Basemap
(0.99.3). It happens that my biogeographic application is due for
official release in three days, and it makes use of some critical
functionality that is available only in the latest Basemap version.
Anyway, the online documentation for Basemap already indicates that
the current version is 0.99.3 (but the .exe files in the SF repository
are still from version 0.99.2).
With warmest regards,
On 2009-01-10, Jeff Whitaker <js...@fa...> wrote:
> Mauro Cavalcanti wrote:
>>
>> Dear Jeff & ALL,
>>
>> How can I get rid, programmatically, of lines drawn with the
>> drawparallels and drawmeridians in MPL/Basemap? These methods return
>> dictionaries, but calling the Python clear() method for dictionaries
>> (and redrawing the figure as usual, of course) does not work. No error
>> appears, simply nothing happens.
>>
>> Any hints?
>>
>> Thanks in advance!
>>
>> Best regards,
>>
>>
>
> Mauro: From the docstring
>
> "returns a dictionary whose keys are the parallel values, and whose values
> are tuples containing lists of the matplotlib.lines.Line2D and
> matplotlib.text.Text instances associated with each parallel."
>
> So, if "pd" is the dict returned by drawparallels, pd[30] would be a tuple
> of lists of Line2D and Text instances associated with the 30 degree
> parallel. You can call the remove() method on each of the items in those
> lists to remove them from the plot.
>
> For example,
>
> [jsws-MacBook:~] jwhitaker% ipython2.5 -pylab
> Python 2.5.4 (r254:67916, Jan 9 2009, 07:06:45)
> Type "copyright", "credits" or "license" for more information.
>
> IPython 0.9.1 -- An enhanced Interactive Python.
> ? -> Introduction and overview of IPython's features.
> %quickref -> Quick reference.
> help -> Python's own help system.
> object? -> Details about 'object'. ?object also works, ?? prints more.
>
> Welcome to pylab, a matplotlib-based Python environment.
> For more information, type 'help(pylab)'.
>
> In [1]: from mpl_toolkits.basemap import Basemap
>
> In [2]: m = Basemap()
> In [9]: pd=m.drawparallels(range(0,90,30),labels=[1,0,0,0])
>
> In [10]: pd
>
> Out[10]:
> {0: ([<matplotlib.lines.Line2D object at 0x17ea2bd0>],
> [<matplotlib.text.Text object at 0x17eab1f0>]),
> 30: ([<matplotlib.lines.Line2D object at 0x17ea2d30>],
> [<matplotlib.text.Text object at 0x17eab270>]),
> 60: ([<matplotlib.lines.Line2D object at 0x17ea2f30>],
> [<matplotlib.text.Text object at 0x17eab2f0>])}
>
> In [11]: pd[30]
>
> Out[11]:
> ([<matplotlib.lines.Line2D object at 0x17ea2d30>],
> [<matplotlib.text.Text object at 0x17eab270>])
>
> In [12]: pd[30][0][0]
>
> Out[12]: <matplotlib.lines.Line2D object at 0x17ea2d30>
>
> In [13]: pd[30][0][0].remove()
>
>
>
> HTH, -Jeff
>
-- 
Dr. Mauro J. Cavalcanti
Ecoinformatics Studio
P.O. Box 46521, CEP 20551-970
Rio de Janeiro, RJ, BRASIL
E-mail: mau...@gm...
Web: http://studio.infobio.net
Linux Registered User #473524 * Ubuntu User #22717
"Life is complex. It consists of real and imaginary parts."
From: Laurent D. <lau...@gm...> - 2009-01-12 11:04:53
"""
Numerical python functions written for compatibility with matlab(TM)
commands with the same names.
Matlab(TM) compatible functions
-------------------------------
:func:`cohere`
 Coherence (normalized cross spectral density)
:func:`csd`
 Cross spectral density using Welch's average periodogram
:func:`detrend`
 Remove the mean or best fit line from an array
:func:`find`
 Return the indices where some condition is true;
 numpy.nonzero is similar but more general.
:func:`griddata`
 interpolate irregularly distributed data to a
 regular grid.
:func:`prctile`
 find the percentiles of a sequence
:func:`prepca`
 Principal Component Analysis
:func:`psd`
 Power spectral density using Welch's average periodogram
:func:`rk4`
 A 4th-order Runge-Kutta integrator for 1D or ND systems
:func:`specgram`
 Spectrogram (power spectral density over segments of time)
Miscellaneous functions
-------------------------
Functions that don't exist in matlab(TM), but are useful anyway:
:meth:`cohere_pairs`
 Coherence over all pairs. This is not a matlab function, but we
 compute coherence a lot in my lab, and we compute it for a lot of
 pairs. This function is optimized to do this efficiently by
 caching the direct FFTs.
:meth:`rk4`
 A 4th order Runge-Kutta ODE integrator in case you ever find
 yourself stranded without scipy (and the far superior
 scipy.integrate tools)
record array helper functions
-------------------------------
A collection of helper methods for numpy record arrays
.. _htmlonly::
 See :ref:`misc-examples-index`
:meth:`rec2txt`
 pretty print a record array
:meth:`rec2csv`
 store record array in CSV file
:meth:`csv2rec`
 import record array from CSV file with type inspection
:meth:`rec_append_fields`
 adds field(s)/array(s) to record array
:meth:`rec_drop_fields`
 drop fields from record array
:meth:`rec_join`
 join two record arrays on sequence of fields
:meth:`rec_groupby`
 summarize data by groups (similar to SQL GROUP BY)
:meth:`rec_summarize`
 helper code to filter rec array fields into new fields
For the rec viewer functions (e.g. rec2csv), there are a bunch of Format
objects you can pass into the functions that will do things like color
negative values red, set percent formatting and scaling, etc.
Example usage::
 r = csv2rec('somefile.csv', checkrows=0)
 formatd = dict(
 weight = FormatFloat(2),
 change = FormatPercent(2),
 cost = FormatThousands(2),
 )
 rec2excel(r, 'test.xls', formatd=formatd)
 rec2csv(r, 'test.csv', formatd=formatd)
 scroll = rec2gtk(r, formatd=formatd)
 win = gtk.Window()
 win.set_size_request(600,800)
 win.add(scroll)
 win.show_all()
 gtk.main()
Deprecated functions
---------------------
The following are deprecated; please import directly from numpy (with
care--function signatures may differ):
:meth:`conv`
 convolution (numpy.convolve)
:meth:`corrcoef`
 The matrix of correlation coefficients
:meth:`hist`
 Histogram (numpy.histogram)
:meth:`linspace`
 Linear spaced array from min to max
:meth:`load`
 load ASCII file - use numpy.loadtxt
:meth:`meshgrid`
 Make a 2D grid from two 1-D arrays (numpy.meshgrid)
:meth:`polyfit`
 least squares best polynomial fit of x to y (numpy.polyfit)
:meth:`polyval`
 evaluate a vector for a vector of polynomial coeffs (numpy.polyval)
:meth:`save`
 save ASCII file - use numpy.savetxt
:meth:`trapz`
 trapezoidal integration (trapz(x,y) -> numpy.trapz(y,x))
:meth:`vander`
 the Vandermonde matrix (numpy.vander)
"""
from __future__ import division
import csv, warnings, copy, os

import numpy as np
ma = np.ma
from matplotlib import verbose

import matplotlib.nxutils as nxutils
import matplotlib.cbook as cbook

# set is a new builtin function in 2.4; delete the following when
# support for 2.3 is dropped.
try:
    set
except NameError:
    from sets import Set as set


def linspace(*args, **kw):
    warnings.warn("use numpy.linspace", DeprecationWarning)
    return np.linspace(*args, **kw)


def meshgrid(x, y):
    warnings.warn("use numpy.meshgrid", DeprecationWarning)
    return np.meshgrid(x, y)


def mean(x, dim=None):
    warnings.warn("Use numpy.mean(x) or x.mean()", DeprecationWarning)
    if len(x) == 0:
        return None
    return np.mean(x, axis=dim)


def logspace(xmin, xmax, N):
    return np.exp(np.linspace(np.log(xmin), np.log(xmax), N))


def _norm(x):
    "return sqrt(x dot x)"
    return np.sqrt(np.dot(x, x))


def window_hanning(x):
    "return x times the hanning window of len(x)"
    return np.hanning(len(x))*x


def window_none(x):
    "No window function; simply return x"
    return x


#from numpy import convolve as conv
def conv(x, y, mode=2):
    'convolve x with y'
    warnings.warn("Use numpy.convolve(x, y, mode='full')", DeprecationWarning)
    return np.convolve(x, y, mode)


def detrend(x, key=None):
    if key is None or key == 'constant':
        return detrend_mean(x)
    elif key == 'linear':
        return detrend_linear(x)


def demean(x, axis=0):
    "Return x minus its mean along the specified axis"
    x = np.asarray(x)
    if axis:
        ind = [slice(None)] * axis
        ind.append(np.newaxis)
        return x - x.mean(axis)[ind]
    return x - x.mean(axis)


def detrend_mean(x):
    "Return x minus the mean(x)"
    return x - x.mean()


def detrend_none(x):
    "Return x: no detrending"
    return x


def detrend_linear(y):
    "Return y minus best fit line; 'linear' detrending"
    # This is faster than an algorithm based on linalg.lstsq.
    x = np.arange(len(y), dtype=np.float_)
    C = np.cov(x, y, bias=1)
    b = C[0,1]/C[0,0]
    a = y.mean() - b*x.mean()
    return y - (b*x + a)
#This is a helper function that implements the commonality between the
#psd, csd, and spectrogram. It is *NOT* meant to be used outside of mlab
def _spectral_helper(x, y, NFFT=256, Fs=2, detrend=detrend_none,
                     window=window_hanning, noverlap=0, pad_to=None,
                     sides='default', scale_by_freq=None):
    #The checks for if y is x are so that we can use the same function to
    #implement the core of psd(), csd(), and spectrogram() without doing
    #extra calculations. We return the unaveraged Pxy, freqs, and t.
    same_data = y is x

    #Make sure we're dealing with a numpy array. If y and x were the same
    #object to start with, keep them that way
    x = np.asarray(x)
    if not same_data:
        y = np.asarray(y)

    # zero pad x and y up to NFFT if they are shorter than NFFT
    if len(x) < NFFT:
        n = len(x)
        x = np.resize(x, (NFFT,))
        x[n:] = 0
    if not same_data and len(y) < NFFT:
        n = len(y)
        y = np.resize(y, (NFFT,))
        y[n:] = 0

    if pad_to is None:
        pad_to = NFFT

    if scale_by_freq is None:
        warnings.warn("psd, csd, and specgram have changed to scale their "
            "densities by the sampling frequency for better MatLab "
            "compatibility. You can pass scale_by_freq=False to disable "
            "this behavior. Also, one-sided densities are scaled by a "
            "factor of 2.")
        scale_by_freq = True

    # For real x, ignore the negative frequencies unless told otherwise
    if (sides == 'default' and np.iscomplexobj(x)) or sides == 'twosided':
        numFreqs = pad_to
        scaling_factor = 1.
    elif sides in ('default', 'onesided'):
        numFreqs = pad_to//2 + 1
        scaling_factor = 2.
    else:
        raise ValueError("sides must be one of: 'default', 'onesided', or "
            "'twosided'")

    # Matlab divides by the sampling frequency so that density function
    # has units of dB/Hz and can be integrated by the plotted frequency
    # values. Perform the same scaling here.
    if scale_by_freq:
        scaling_factor /= Fs

    if cbook.iterable(window):
        assert(len(window) == NFFT)
        windowVals = window
    else:
        windowVals = window(np.ones((NFFT,), x.dtype))

    step = NFFT - noverlap
    ind = np.arange(0, len(x) - NFFT + 1, step)
    n = len(ind)
    Pxy = np.zeros((numFreqs, n), np.complex_)

    # do the ffts of the slices
    for i in range(n):
        thisX = x[ind[i]:ind[i]+NFFT]
        thisX = windowVals * detrend(thisX)
        fx = np.fft.fft(thisX, n=pad_to)

        if same_data:
            fy = fx
        else:
            thisY = y[ind[i]:ind[i]+NFFT]
            thisY = windowVals * detrend(thisY)
            fy = np.fft.fft(thisY, n=pad_to)
        Pxy[:,i] = np.conjugate(fx[:numFreqs]) * fy[:numFreqs]

    # Scale the spectrum by the norm of the window to compensate for
    # windowing loss; see Bendat & Piersol Sec 11.5.2. Also include
    # scaling factors for one-sided densities and dividing by the sampling
    # frequency, if desired.
    Pxy *= scaling_factor / (np.abs(windowVals)**2).sum()
    t = 1./Fs * (ind + NFFT / 2.)
    freqs = float(Fs) / pad_to * np.arange(numFreqs)

    return Pxy, freqs, t
#Split out these keyword docs so that they can be used elsewhere
kwdocd = dict()
kwdocd['PSD'] = """
    Keyword arguments:

      *NFFT*: integer
          The number of data points used in each block for the FFT.
          Must be even; a power of 2 is most efficient. The default
          value is 256.

      *Fs*: scalar
          The sampling frequency (samples per time unit). It is used
          to calculate the Fourier frequencies, freqs, in cycles per
          time unit. The default value is 2.

      *detrend*: callable
          The function applied to each segment before fft-ing,
          designed to remove the mean or linear trend. Unlike in
          matlab, where the *detrend* parameter is a vector, in
          matplotlib it is a function. The :mod:`~matplotlib.pylab`
          module defines :func:`~matplotlib.pylab.detrend_none`,
          :func:`~matplotlib.pylab.detrend_mean`, and
          :func:`~matplotlib.pylab.detrend_linear`, but you can use
          a custom function as well.

      *window*: callable or ndarray
          A function or a vector of length *NFFT*. To create window
          vectors see :func:`window_hanning`, :func:`window_none`,
          :func:`numpy.blackman`, :func:`numpy.hamming`,
          :func:`numpy.bartlett`, :func:`scipy.signal`,
          :func:`scipy.signal.get_window`, etc. The default is
          :func:`window_hanning`. If a function is passed as the
          argument, it must take a data segment as an argument and
          return the windowed version of the segment.

      *noverlap*: integer
          The number of points of overlap between blocks. The default
          value is 0 (no overlap).

      *pad_to*: integer
          The number of points to which the data segment is padded when
          performing the FFT. This can be different from *NFFT*, which
          specifies the number of data points used. While not increasing
          the actual resolution of the psd (the minimum distance between
          resolvable peaks), this can give more points in the plot,
          allowing for more detail. This corresponds to the *n* parameter
          in the call to fft(). The default is None, which sets *pad_to*
          equal to *NFFT*.

      *sides*: [ 'default' | 'onesided' | 'twosided' ]
          Specifies which sides of the PSD to return. Default gives the
          default behavior, which returns one-sided for real data and
          both for complex data. 'onesided' forces the return of a
          one-sided PSD, while 'twosided' forces two-sided.

      *scale_by_freq*: boolean
          Specifies whether the resulting density values should be scaled
          by the scaling frequency, which gives density in units of Hz^-1.
          This allows for integration over the returned frequency values.
          The default is True for MatLab compatibility.
"""
def psd(x, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
        noverlap=0, pad_to=None, sides='default', scale_by_freq=None):
    """
    The power spectral density by Welch's average periodogram method.
    The vector *x* is divided into *NFFT* length blocks. Each block
    is detrended by the function *detrend* and windowed by the function
    *window*. *noverlap* gives the length of the overlap between blocks.
    The absolute(fft(block))**2 of each segment are averaged to compute
    *Pxx*, with a scaling to correct for power loss due to windowing.

    If len(*x*) < *NFFT*, it will be zero padded to *NFFT*.

    *x*
        Array or sequence containing the data
    %(PSD)s
    Returns the tuple (*Pxx*, *freqs*).

    Refs:
        Bendat & Piersol -- Random Data: Analysis and Measurement
        Procedures, John Wiley & Sons (1986)
    """
    Pxx, freqs = csd(x, x, NFFT, Fs, detrend, window, noverlap, pad_to,
                     sides, scale_by_freq)
    return Pxx.real, freqs

if psd.__doc__ is not None:
    psd.__doc__ = psd.__doc__ % kwdocd
else:
    psd.__doc__ = ""
def csd(x, y, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
        noverlap=0, pad_to=None, sides='default', scale_by_freq=None):
    """
    The cross power spectral density by Welch's average periodogram
    method. The vectors *x* and *y* are divided into *NFFT* length
    blocks. Each block is detrended by the function *detrend* and
    windowed by the function *window*. *noverlap* gives the length
    of the overlap between blocks. The product of the direct FFTs
    of *x* and *y* are averaged over each segment to compute *Pxy*,
    with a scaling to correct for power loss due to windowing.

    If len(*x*) < *NFFT* or len(*y*) < *NFFT*, they will be zero
    padded to *NFFT*.

    *x*, *y*
        Array or sequence containing the data
    %(PSD)s
    Returns the tuple (*Pxy*, *freqs*).

    Refs:
        Bendat & Piersol -- Random Data: Analysis and Measurement
        Procedures, John Wiley & Sons (1986)
    """
    Pxy, freqs, t = _spectral_helper(x, y, NFFT, Fs, detrend, window,
                                     noverlap, pad_to, sides, scale_by_freq)

    if len(Pxy.shape) == 2 and Pxy.shape[1] > 1:
        Pxy = Pxy.mean(axis=1)
    return Pxy, freqs

if csd.__doc__ is not None:
    csd.__doc__ = csd.__doc__ % kwdocd
else:
    csd.__doc__ = ""
def specgram(x, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
             noverlap=128, pad_to=None, sides='default', scale_by_freq=None):
    """
    Compute a spectrogram of data in *x*. Data are split into *NFFT*
    length segments and the PSD of each section is computed. The
    windowing function *window* is applied to each segment, and the
    amount of overlap of each segment is specified with *noverlap*.

    If *x* is real (i.e. non-complex) only the spectrum of the positive
    frequencies is returned. If *x* is complex then the complete
    spectrum is returned.
    %(PSD)s
    Returns a tuple (*Pxx*, *freqs*, *t*):

      - *Pxx*: 2-D array, columns are the periodograms of
        successive segments

      - *freqs*: 1-D array of frequencies corresponding to the rows
        in Pxx

      - *t*: 1-D array of times corresponding to midpoints of
        segments.

    .. seealso::
        :func:`psd`:
            :func:`psd` differs in the default overlap; in returning
            the mean of the segment periodograms; and in not returning
            times.
    """
    assert(NFFT > noverlap)

    Pxx, freqs, t = _spectral_helper(x, x, NFFT, Fs, detrend, window,
                                     noverlap, pad_to, sides, scale_by_freq)
    Pxx = Pxx.real  # Needed since helper implements generically

    if (np.iscomplexobj(x) and sides == 'default') or sides == 'twosided':
        # center the frequency range at zero
        freqs = np.concatenate((freqs[NFFT/2:]-Fs, freqs[:NFFT/2]))
        Pxx = np.concatenate((Pxx[NFFT/2:,:], Pxx[:NFFT/2,:]), 0)

    return Pxx, freqs, t

if specgram.__doc__ is not None:
    specgram.__doc__ = specgram.__doc__ % kwdocd
else:
    specgram.__doc__ = ""
_coh_error = """Coherence is calculated by averaging over *NFFT*
length segments. Your signal is too short for your choice of *NFFT*.
"""

def cohere(x, y, NFFT=256, Fs=2, detrend=detrend_none, window=window_hanning,
           noverlap=0, pad_to=None, sides='default', scale_by_freq=None):
    """
    The coherence between *x* and *y*. Coherence is the normalized
    cross spectral density:

    .. math::

        C_{xy} = \\frac{|P_{xy}|^2}{P_{xx}P_{yy}}

    *x*, *y*
        Array or sequence containing the data
    %(PSD)s
    The return value is the tuple (*Cxy*, *f*), where *f* are the
    frequencies of the coherence vector. For cohere, scaling the
    individual densities by the sampling frequency has no effect, since
    the factors cancel out.

    .. seealso::
        :func:`psd` and :func:`csd`:
            For information about the methods used to compute
            :math:`P_{xy}`, :math:`P_{xx}` and :math:`P_{yy}`.
    """
    if len(x) < 2*NFFT:
        raise ValueError(_coh_error)
    Pxx, f = psd(x, NFFT, Fs, detrend, window, noverlap, pad_to, sides,
                 scale_by_freq)
    Pyy, f = psd(y, NFFT, Fs, detrend, window, noverlap, pad_to, sides,
                 scale_by_freq)
    Pxy, f = csd(x, y, NFFT, Fs, detrend, window, noverlap, pad_to, sides,
                 scale_by_freq)

    Cxy = np.divide(np.absolute(Pxy)**2, Pxx*Pyy)
    Cxy.shape = (len(f),)
    return Cxy, f

if cohere.__doc__ is not None:
    cohere.__doc__ = cohere.__doc__ % kwdocd
else:
    cohere.__doc__ = ""
def corrcoef(*args):
    """
    corrcoef(*X*) where *X* is a matrix returns a matrix of correlation
    coefficients for the columns of *X*.

    corrcoef(*x*, *y*) where *x* and *y* are vectors returns the matrix
    of correlation coefficients for *x* and *y*.

    Numpy arrays can be real or complex.

    The correlation matrix is defined from the covariance matrix *C* as

    .. math::

        r_{ij} = \\frac{C_{ij}}{\\sqrt{C_{ii}C_{jj}}}
    """
    warnings.warn("Use numpy.corrcoef", DeprecationWarning)
    kw = dict(rowvar=False)
    return np.corrcoef(*args, **kw)
def polyfit(*args, **kwargs):
 u"""
 polyfit(*x*, *y*, *N*)
 Do a best fit polynomial of order *N* of *y* to *x*. Return value
 is a vector of polynomial coefficients [pk ... p1 p0]. Eg, for
 *N*=2::
 p2*x0^2 + p1*x0 + p0 = y1
 p2*x1^2 + p1*x1 + p0 = y1
 p2*x2^2 + p1*x2 + p0 = y2
 .....
 p2*xk^2 + p1*xk + p0 = yk
 Method: if *X* is a the Vandermonde Matrix computed from *x* (see
 `vandermonds
 <http://mathworld.wolfram.com/VandermondeMatrix.html>`_), then the
 polynomial least squares solution is given by the '*p*' in
 X*p = y
 where *X* is a (len(*x*) \N{MULTIPLICATION SIGN} *N* + 1) matrix,
 *p* is a *N*+1 length vector, and *y* is a (len(*x*)
 \N{MULTIPLICATION SIGN} 1) vector.
 This equation can be solved as
 .. math::
 p = (X_t X)^-1 X_t y
 where :math:`X_t` is the transpose of *X* and -1 denotes the
 inverse. Numerically, however, this is not a good method, so we
 use :func:`numpy.linalg.lstsq`.
 For more info, see `least squares fitting
 <http://mathworld.wolfram.com/LeastSquaresFittingPolynomial.html>`_,
 but note that the *k*'s and *n*'s in the superscripts and
 subscripts on that page are swapped. The linear algebra is
 correct, however.
 .. seealso::
 :func:`polyval`
 """
 warnings.warn("use numpy.poyfit", DeprecationWarning)
 return np.polyfit(*args, **kwargs)
def polyval(*args, **kwargs):
 """
 *y* = polyval(*p*, *x*)
 *p* is a vector of polynomial coeffients and *y* is the polynomial
 evaluated at *x*.
 Example code to remove a polynomial (quadratic) trend from y::
 p = polyfit(x, y, 2)
 trend = polyval(p, x)
 resid = y - trend
 .. seealso::
 :func:`polyfit`
 """
 warnings.warn("use numpy.polyval", DeprecationWarning)
 return np.polyval(*args, **kwargs)
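# Runnable version of the detrending example from the polyval docstring
# (an illustrative sketch; the quadratic coefficients and noise level are
# arbitrary). Note both wrappers emit a DeprecationWarning in favor of numpy.
def _detrend_quadratic_example():
    x = np.linspace(0.0, 10.0, 101)
    y = 3.0*x**2 - 2.0*x + 1.0 + np.random.randn(101)
    p = polyfit(x, y, 2) # fit a quadratic ...
    trend = polyval(p, x) # ... and evaluate it at x
    return y - trend # residual is approximately the noise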
def vander(*args, **kwargs):
 """
 *X* = vander(*x*, *N* = *None*)
 The Vandermonde matrix of vector *x*. The *i*-th column of *X* is the
 the *i*-th power of *x*. *N* is the maximum power to compute; if *N* is
 *None* it defaults to len(*x*).
 """
 warnings.warn("Use numpy.vander()", DeprecationWarning)
 return np.vander(*args, **kwargs)
def donothing_callback(*args):
 pass
def cohere_pairs( X, ij, NFFT=256, Fs=2, detrend=detrend_none,
 window=window_hanning, noverlap=0,
 preferSpeedOverMemory=True,
 progressCallback=donothing_callback,
 returnPxx=False):
 u"""
 Cxy, Phase, freqs = cohere_pairs(X, ij, ...)
 Compute the coherence for all pairs in *ij*. *X* is a
 (*numSamples*, *numCols*) numpy array. *ij* is a list of tuples
 (*i*, *j*). Each tuple is a pair of indexes into the columns of *X*
 for which you want to compute coherence. For example, if *X* has 64
 columns, and you want to compute all nonredundant pairs, define *ij*
 as::
 ij = []
 for i in range(64):
 for j in range(i+1,64):
 ij.append( (i, j) )
 The other function arguments, except for *preferSpeedOverMemory*
 (see below), are explained in the help string of :func:`psd`.
 Return value is a tuple (*Cxy*, *Phase*, *freqs*).
 - *Cxy*: a dictionary of (*i*, *j*) tuples -> coherence vector for that
 pair. I.e., ``Cxy[(i,j)] = cohere(X[:,i], X[:,j])``. Number of
 dictionary keys is ``len(ij)``.
 - *Phase*: a dictionary of phases of the cross spectral density at
 each frequency for each pair. The keys are ``(i,j)``.
 - *freqs*: a vector of frequencies, equal in length to either
 the coherence or phase vectors for any (*i*, *j*) key. Eg,
 to make a coherence Bode plot::
 subplot(211)
 plot( freqs, Cxy[(12,19)])
 subplot(212)
 plot( freqs, Phase[(12,19)])
 For a large number of pairs, :func:`cohere_pairs` can be much more
 efficient than just calling :func:`cohere` for each pair, because
 it caches most of the intensive computations. If *N* is the
 number of pairs, this function is O(N) for most of the heavy
 lifting, whereas calling cohere for each pair is
 O(N\N{SUPERSCRIPT TWO}). However, because of the caching, it is
 also more memory intensive, making 2 additional complex arrays
 with approximately the same number of elements as *X*.
 The parameter *preferSpeedOverMemory*, if *False*, limits the
 caching by only making one, rather than two, complex cache arrays.
 This is useful if memory becomes critical. Even when
 *preferSpeedOverMemory* is *False*, :func:`cohere_pairs` will
 still give significant performance gains over calling
 :func:`cohere` for each pair, and will use substantially less
 memory than if *preferSpeedOverMemory* is *True*. In my tests
 with a (43000, 64) array over all non-redundant pairs,
 *preferSpeedOverMemory* = *True* delivered a 33% performance boost
 on a 1.7 GHz Athlon with 512 MB RAM compared with
 *preferSpeedOverMemory* = *False*. But both solutions were more
 than 10x faster than naively crunching all possible pairs through
 cohere.
 .. seealso::
 :file:`test/cohere_pairs_test.py` in the src tree:
 For an example script that shows that
 :func:`cohere_pairs` and :func:`cohere` give the same
 results for a given pair.
 """
 numRows, numCols = X.shape
 # zero pad if X is too short
 if numRows < NFFT:
 tmp = X
 X = np.zeros( (NFFT, numCols), X.dtype)
 X[:numRows,:] = tmp
 del tmp
 numRows, numCols = X.shape
 # get all the columns of X that we are interested in by checking
 # the ij tuples
 seen = {}
 for i,j in ij:
 seen[i]=1; seen[j] = 1
 allColumns = seen.keys()
 Ncols = len(allColumns)
 del seen
 # for real X, ignore the negative frequencies
 if np.iscomplexobj(X): numFreqs = NFFT
 else: numFreqs = NFFT//2+1
 # cache the FFT of every windowed, detrended NFFT length segement
 # of every channel. If preferSpeedOverMemory, cache the conjugate
 # as well
 if cbook.iterable(window):
 assert(len(window) == NFFT)
 windowVals = window
 else:
 windowVals = window(np.ones((NFFT,), X.dtype))
 ind = range(0, numRows-NFFT+1, NFFT-noverlap)
 numSlices = len(ind)
 FFTSlices = {}
 FFTConjSlices = {}
 Pxx = {}
 slices = range(numSlices)
 normVal = np.linalg.norm(windowVals)**2
 for iCount, iCol in enumerate(allColumns):
 progressCallback(iCount/float(Ncols), 'Caching FFTs')
 Slices = np.zeros( (numSlices,numFreqs), dtype=np.complex_)
 for iSlice in slices:
 thisSlice = X[ind[iSlice]:ind[iSlice]+NFFT, iCol]
 thisSlice = windowVals*detrend(thisSlice)
 Slices[iSlice,:] = np.fft.fft(thisSlice)[:numFreqs]
 FFTSlices[iCol] = Slices
 if preferSpeedOverMemory:
 FFTConjSlices[iCol] = np.conjugate(Slices)
 Pxx[iCol] = np.divide(np.mean(np.absolute(Slices)**2, axis=0), normVal)
 del Slices, ind, windowVals
 # compute the coherences and phases for all pairs using the
 # cached FFTs
 Cxy = {}
 Phase = {}
 count = 0
 N = len(ij)
 for i,j in ij:
 count +=1
 if count%10==0:
 progressCallback(count/float(N), 'Computing coherences')
 if preferSpeedOverMemory:
 Pxy = FFTSlices[i] * FFTConjSlices[j]
 else:
 Pxy = FFTSlices[i] * np.conjugate(FFTSlices[j])
 if numSlices>1: Pxy = np.mean(Pxy, axis=0)
 Pxy = np.divide(Pxy, normVal)
 Cxy[(i,j)] = np.divide(np.absolute(Pxy)**2, Pxx[i]*Pxx[j])
 Phase[(i,j)] = np.arctan2(Pxy.imag, Pxy.real)
 freqs = Fs/NFFT*np.arange(numFreqs)
 if returnPxx:
 return Cxy, Phase, freqs, Pxx
 else:
 return Cxy, Phase, freqs
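# Illustrative usage sketch (editorial addition): coherence over all
# non-redundant column pairs of a small synthetic array, following the
# recipe in the docstring above.
def _cohere_pairs_example():
    X = np.random.randn(4096, 8) # (numSamples, numCols)
    ij = [(i, j) for i in range(8) for j in range(i+1, 8)]
    Cxy, Phase, freqs = cohere_pairs(X, ij, NFFT=256, Fs=1000.0)
    return Cxy[(0, 1)], Phase[(0, 1)], freqs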
def entropy(y, bins):
 r"""
 Return the entropy of the data in *y*.
 .. math::
 S = -\sum p_i \ln(p_i) + \ln(\Delta)
 where :math:`p_i` is the probability of observing *y* in the
 :math:`i^{th}` bin of *bins* and :math:`\Delta` is the bin width.
 *bins* can be a number of bins or a range of bins; see
 :func:`numpy.histogram`.
 Compare *S* with analytic calculation for a Gaussian::
 x = mu + sigma * randn(200000)
 Sanalytic = 0.5 * ( 1.0 + log(2*pi*sigma**2.0) )
 """
 n,bins = np.histogram(y, bins)
 n = n.astype(np.float_)
 n = np.take(n, np.nonzero(n)[0]) # keep only bins with nonzero counts
 p = np.divide(n, len(y))
 delta = bins[1]-bins[0]
 S = -1.0*np.sum(p*np.log(p)) + np.log(delta)
 #S = -1.0*np.sum(p*np.log(p))
 return S
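# Runnable version of the Gaussian sanity check suggested in the entropy
# docstring (illustrative; mu and sigma are arbitrary).
def _entropy_gaussian_check():
    mu, sigma = 0.0, 2.0
    y = mu + sigma*np.random.randn(200000)
    S = entropy(y, 200)
    Sanalytic = 0.5*(1.0 + np.log(2*np.pi*sigma**2))
    return S, Sanalytic # these should agree to a couple of decimals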
def hist(y, bins=10, normed=0):
 """
 Return the histogram of *y* with *bins* equally sized bins. If
 bins is an array, use those bins. Return value is (*n*, *x*)
 where *n* is the count for each bin in *x*.
 If *normed* is *False*, return the counts in the first element of
 the returned tuple. If *normed* is *True*, return the probability
 density :math:`\\frac{n}{len(y) \\cdot \\mathrm{dbin}}`.
 If *y* has rank > 1, it will be raveled. If *y* is masked, only the
 unmasked values will be used.
 Credits: the Numeric 22 documentation
 """
 warnings.warn("Use numpy.histogram()", DeprecationWarning)
 return np.histogram(y, bins=bins, range=None, normed=normed)
def normpdf(x, *args):
 "Return the normal pdf evaluated at *x*; args provides *mu*, *sigma*"
 mu, sigma = args
 return 1./(np.sqrt(2*np.pi)*sigma)*np.exp(-0.5 * (1./sigma*(x - mu))**2)
def levypdf(x, gamma, alpha):
 "Return the Levy pdf evaluated at *x* for params *gamma*, *alpha*"
 N = len(x)
 if N%2 != 0:
 raise ValueError, 'x must be an even length array; try\n' + \
 'x = np.linspace(minx, maxx, N), where N is even'
 dx = x[1]-x[0]
 f = 1/(N*dx)*np.arange(-N/2, N/2, dtype=np.float_)
 ind = np.concatenate([np.arange(N/2, N, dtype=int),
 np.arange(0, N/2, dtype=int)])
 df = f[1]-f[0]
 cfl = np.exp(-gamma*np.absolute(2*np.pi*f)**alpha)
 px = np.fft.fft(np.take(cfl,ind)*df).astype(np.float_)
 return np.take(px, ind)
def find(condition):
 "Return the indices where ravel(condition) is true"
 res, = np.nonzero(np.ravel(condition))
 return res
def trapz(x, y):
 """
 Trapezoidal integral of *y*(*x*).
 """
 warnings.warn("Use numpy.trapz(y,x) instead of trapz(x,y)", DeprecationWarning)
 return np.trapz(y, x)
 #if len(x)!=len(y):
 # raise ValueError, 'x and y must have the same length'
 #if len(x)<2:
 # raise ValueError, 'x and y must have > 1 element'
 #return np.sum(0.5*np.diff(x)*(y[1:]+y[:-1]))
def longest_contiguous_ones(x):
 """
 Return the indices of the longest stretch of contiguous ones in *x*,
 assuming *x* is a vector of zeros and ones. If there are two
 equally long stretches, pick the first.
 """
 x = np.ravel(x)
 if len(x)==0:
 return np.array([])
 ind = (x==0).nonzero()[0]
 if len(ind)==0:
 return np.arange(len(x))
 if len(ind)==len(x):
 return np.array([])
 y = np.zeros( (len(x)+2,), x.dtype)
 y[1:-1] = x
 dif = np.diff(y)
 up = (dif == 1).nonzero()[0];
 dn = (dif == -1).nonzero()[0];
 i = (dn-up == max(dn - up)).nonzero()[0][0]
 ind = np.arange(up[i], dn[i])
 return ind
def longest_ones(x):
 '''alias for longest_contiguous_ones'''
 return longest_contiguous_ones(x)
def prepca(P, frac=0):
 """
 Compute the principal components of *P*. *P* is a (*numVars*,
 *numObs*) array. *frac* is the minimum fraction of variance that a
 component must contain to be included.
 Return value is a tuple of the form (*Pcomponents*, *Trans*,
 *fracVar*) where:
 - *Pcomponents* : a (numVars, numObs) array
 - *Trans* : the weights matrix, ie, *Pcomponents* = *Trans* *
 *P*
 - *fracVar* : the fraction of the variance accounted for by each
 component returned
 A similar function of the same name was in the Matlab (TM)
 R13 Neural Network Toolbox but is not found in later versions;
 its successor seems to be called "processpcs".
 """
 U,s,v = np.linalg.svd(P)
 varEach = s**2/P.shape[1]
 totVar = varEach.sum()
 fracVar = varEach/totVar
 ind = slice((fracVar>=frac).sum())
 # select the components that are greater
 Trans = U[:,ind].transpose()
 # The transformed data
 Pcomponents = np.dot(Trans,P)
 return Pcomponents, Trans, fracVar[ind]
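# Usage sketch (editorial addition): a synthetic 3-variable data set whose
# third variable is a noisy copy of the first, so roughly two components
# should carry almost all of the variance.
def _prepca_example():
    P = np.random.randn(3, 1000) # (numVars, numObs)
    P[2] = P[0] + 0.1*np.random.randn(1000)
    Pcomponents, Trans, fracVar = prepca(P, frac=0.01)
    return Pcomponents.shape, fracVar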
def prctile(x, p = (0.0, 25.0, 50.0, 75.0, 100.0)):
 """
 Return the percentiles of *x*. *p* can either be a sequence of
 percentile values or a scalar. If *p* is a sequence, the ith
 element of the return sequence is the *p*(i)-th percentile of *x*.
 If *p* is a scalar, the largest value of *x* less than or equal to
 the *p* percentage point in the sequence is returned.
 """
 x = np.array(x).ravel() # we need a copy
 x.sort()
 Nx = len(x)
 if not cbook.iterable(p):
 return x[min(int(p*Nx/100.0), Nx-1)] # clamp so p=100 stays in bounds
 p = np.asarray(p)* Nx/100.0
 ind = p.astype(int)
 ind = np.where(ind>=Nx, Nx-1, ind)
 return x.take(ind)
def prctile_rank(x, p):
 """
 Return the rank for each element in *x*; ranks run from 0 to
 len(*p*). Eg if *p* = (25, 50, 75), the return value will be a
 len(*x*) array with values in [0,1,2,3] where 0 indicates the
 value is less than the 25th percentile, 1 indicates the value is
 >= the 25th and < 50th percentile, ... and 3 indicates the value
 is above the 75th percentile cutoff.
 *p* is either an array of percentiles in [0..100] or a scalar which
 indicates how many quantiles of data you want ranked.
 """
 if not cbook.iterable(p):
 p = np.arange(100.0/p, 100.0, 100.0/p)
 else:
 p = np.asarray(p)
 if p.max()<=1 or p.min()<0 or p.max()>100:
 raise ValueError('percentiles should be in range 0..100, not 0..1')
 ptiles = prctile(x, p)
 return np.searchsorted(ptiles, x)
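# Usage sketch (editorial addition): quartile ranks of uniform samples;
# the scalar form of *p* asks for 4 quantiles, so ranks fall in {0, 1, 2, 3}.
def _prctile_rank_example():
    x = np.random.rand(20)
    return prctile_rank(x, 4)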
def center_matrix(M, dim=0):
 """
 Return the matrix *M* with each row having zero mean and unit std.
 If *dim* = 1 operate on columns instead of rows. (*dim* is
 opposite to the numpy axis kwarg.)
 """
 M = np.asarray(M, np.float_)
 if dim:
 M = (M - M.mean(axis=0)) / M.std(axis=0)
 else:
 M = (M - M.mean(axis=1)[:,np.newaxis])
 M = M / M.std(axis=1)[:,np.newaxis]
 return M
def rk4(derivs, y0, t):
 """
 Integrate 1D or ND system of ODEs using 4-th order Runge-Kutta.
 This is a toy implementation which may be useful if you find
 yourself stranded on a system w/o scipy. Otherwise use
 :mod:`scipy.integrate`.
 *y0*
 initial state vector
 *t*
 sample times
 *derivs*
 returns the derivative of the system and has the
 signature ``dy = derivs(yi, ti)``
 Example 1 ::
 ## 2D system
 def derivs6(x,t):
 d1 = x[0] + 2*x[1]
 d2 = -3*x[0] + 4*x[1]
 return (d1, d2)
 dt = 0.0005
 t = arange(0.0, 2.0, dt)
 y0 = (1,2)
 yout = rk4(derivs6, y0, t)
 Example 2::
 ## 1D system
 alpha = 2
 def derivs(x,t):
 return -alpha*x + exp(-t)
 y0 = 1
 yout = rk4(derivs, y0, t)
 If you have access to scipy, you should probably be using the
 scipy.integrate tools rather than this function.
 """
 try: Ny = len(y0)
 except TypeError:
 yout = np.zeros( (len(t),), np.float_)
 else:
 yout = np.zeros( (len(t), Ny), np.float_)
 yout[0] = y0
 i = 0
 for i in np.arange(len(t)-1):
 thist = t[i]
 dt = t[i+1] - thist
 dt2 = dt/2.0
 y0 = yout[i]
 k1 = np.asarray(derivs(y0, thist))
 k2 = np.asarray(derivs(y0 + dt2*k1, thist+dt2))
 k3 = np.asarray(derivs(y0 + dt2*k2, thist+dt2))
 k4 = np.asarray(derivs(y0 + dt*k3, thist+dt))
 yout[i+1] = y0 + dt/6.0*(k1 + 2*k2 + 2*k3 + k4)
 return yout
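# Runnable version of Example 1 from the rk4 docstring (illustrative).
def _rk4_example():
    def derivs6(x, t):
        d1 = x[0] + 2*x[1]
        d2 = -3*x[0] + 4*x[1]
        return (d1, d2)
    dt = 0.0005
    t = np.arange(0.0, 2.0, dt)
    yout = rk4(derivs6, (1, 2), t)
    return yout[-1] # state at the final sample time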
def bivariate_normal(X, Y, sigmax=1.0, sigmay=1.0,
 mux=0.0, muy=0.0, sigmaxy=0.0):
 """
 Bivariate Gaussian distribution for equal shape *X*, *Y*.
 See `bivariate normal
 <http://mathworld.wolfram.com/BivariateNormalDistribution.html>`_
 at mathworld.
 """
 Xmu = X-mux
 Ymu = Y-muy
 rho = sigmaxy/(sigmax*sigmay)
 z = Xmu**2/sigmax**2 + Ymu**2/sigmay**2 - 2*rho*Xmu*Ymu/(sigmax*sigmay)
 denom = 2*np.pi*sigmax*sigmay*np.sqrt(1-rho**2)
 return np.exp( -z/(2*(1-rho**2))) / denom
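# Usage sketch (editorial addition): evaluate the density on a grid, as one
# would for a contour or image plot; the parameters here are arbitrary.
def _bivariate_normal_example():
    X, Y = np.meshgrid(np.linspace(-3, 3, 61), np.linspace(-3, 3, 61))
    return bivariate_normal(X, Y, sigmax=1.0, sigmay=1.5, sigmaxy=0.5)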
def get_xyz_where(Z, Cond):
 """
 *Z* and *Cond* are *M* x *N* matrices. *Z* are data and *Cond* is
 a boolean matrix where some condition is satisfied. Return value
 is (*x*, *y*, *z*) where *x* and *y* are the indices into *Z* and
 *z* are the values of *Z* at those indices. *x*, *y*, and *z* are
 1D arrays.
 """
 X,Y = np.indices(Z.shape)
 return X[Cond], Y[Cond], Z[Cond]
def get_sparse_matrix(M,N,frac=0.1):
 """
 Return a *M* x *N* sparse matrix with *frac* elements randomly
 filled.
 """
 data = np.zeros((M,N))*0.
 for i in range(int(M*N*frac)):
 x = np.random.randint(0,M-1)
 y = np.random.randint(0,N-1)
 data[x,y] = np.random.rand()
 return data
def dist(x,y):
 """
 Return the distance between two points.
 """
 d = x-y
 return np.sqrt(np.dot(d,d))
def dist_point_to_segment(p, s0, s1):
 """
 Get the distance of a point to a segment.
 *p*, *s0*, *s1* are *xy* sequences
 This algorithm from
 http://softsurfer.com/Archive/algorithm_0102/algorithm_0102.htm#Distance%20to%20Ray%20or%20Segment
 """
 p = np.asarray(p, np.float_)
 s0 = np.asarray(s0, np.float_)
 s1 = np.asarray(s1, np.float_)
 v = s1 - s0
 w = p - s0
 c1 = np.dot(w,v);
 if ( c1 <= 0 ):
 return dist(p, s0);
 c2 = np.dot(v,v)
 if ( c2 <= c1 ):
 return dist(p, s1);
 b = c1 / c2
 pb = s0 + b * v;
 return dist(p, pb)
def segments_intersect(s1, s2):
 """
 Return *True* if *s1* and *s2* intersect.
 *s1* and *s2* are defined as::
 s1: (x1, y1), (x2, y2)
 s2: (x3, y3), (x4, y4)
 """
 (x1, y1), (x2, y2) = s1
 (x3, y3), (x4, y4) = s2
 den = ((y4-y3) * (x2-x1)) - ((x4-x3)*(y2-y1))
 n1 = ((x4-x3) * (y1-y3)) - ((y4-y3)*(x1-x3))
 n2 = ((x2-x1) * (y1-y3)) - ((y2-y1)*(x1-x3))
 if den == 0:
 # lines parallel
 return False
 u1 = n1/float(den) # float() guards against Python 2 integer division
 u2 = n2/float(den)
 return 0.0 <= u1 <= 1.0 and 0.0 <= u2 <= 1.0
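# Usage sketch (editorial addition): a crossing pair and a disjoint pair.
def _segments_intersect_example():
    crossing = segments_intersect(((0., 0.), (1., 1.)),
                                  ((0., 1.), (1., 0.))) # True
    disjoint = segments_intersect(((0., 0.), (1., 0.)),
                                  ((0., 1.), (1., 1.))) # False
    return crossing, disjoint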
def fftsurr(x, detrend=detrend_none, window=window_none):
 """
 Compute an FFT phase randomized surrogate of *x*.
 """
 if cbook.iterable(window):
 x=window*detrend(x)
 else:
 x = window(detrend(x))
 z = np.fft.fft(x)
 a = 2.*np.pi*1j
 phase = a * np.random.rand(len(x))
 z = z*np.exp(phase)
 return np.fft.ifft(z).real
def liaupunov(x, fprime):
 """
 *x* is a very long trajectory from a map, and *fprime* returns the
 derivative of *x*.
 Returns :
 .. math::
 \\lambda = \\frac{1}{n}\\sum \\ln|f'(x_i)|
 .. seealso::
 Sec 10.5 Strogatz (1994) "Nonlinear Dynamics and Chaos".
 `Wikipedia article on Lyapunov Exponent
 <http://en.wikipedia.org/wiki/Lyapunov_exponent>`_.
 .. note::
 What the function here calculates may not be what you really want;
 *caveat emptor*.
 It also seems that this function's name is badly misspelled.
 """
 return np.mean(np.log(np.absolute(fprime(x))))
class FIFOBuffer:
 """
 A FIFO queue to hold incoming *x*, *y* data in a rotating buffer
 using numpy arrays under the hood. It is assumed that you will
 call asarrays much less frequently than you add data to the queue
 -- otherwise another data structure will be faster.
 This can be used to support plots where data is added from a real
 time feed and the plot object wants to grab data from the buffer
 and plot it to screen less frequently than the data arrives.
 If you set the *dataLim* attr to a
 :class:`~matplotlib.transforms.Bbox` (eg
 :attr:`matplotlib.Axes.dataLim`), the *dataLim* will be updated as
 new data come in.
 TODO: add a grow method that will extend nmax
 .. note::
 mlab seems like the wrong place for this class.
 """
 def __init__(self, nmax):
 """
 Buffer up to *nmax* points.
 """
 self._xa = np.zeros((nmax,), np.float_)
 self._ya = np.zeros((nmax,), np.float_)
 self._xs = np.zeros((nmax,), np.float_)
 self._ys = np.zeros((nmax,), np.float_)
 self._ind = 0
 self._nmax = nmax
 self.dataLim = None
 self.callbackd = {}
 def register(self, func, N):
 """
 Call *func* every time *N* events are passed; *func* signature
 is ``func(fifo)``.
 """
 self.callbackd.setdefault(N, []).append(func)
 def add(self, x, y):
 """
 Add scalar *x* and *y* to the queue.
 """
 if self.dataLim is not None:
 xys = ((x,y),)
 self.dataLim.update(xys, -1) #-1 means use the default ignore setting
 ind = self._ind % self._nmax
 #print 'adding to fifo:', ind, x, y
 self._xs[ind] = x
 self._ys[ind] = y
 for N,funcs in self.callbackd.items():
 if (self._ind%N)==0:
 for func in funcs:
 func(self)
 self._ind += 1
 def last(self):
 """
 Return the last *x*, *y* pair, or (*None*, *None*) if no data has been added.
 """
 if self._ind==0: return None, None
 ind = (self._ind-1) % self._nmax
 return self._xs[ind], self._ys[ind]
 def asarrays(self):
 """
 Return *x* and *y* as arrays; their length will be the number of
 points added, up to *nmax*.
 """
 if self._ind<self._nmax:
 return self._xs[:self._ind], self._ys[:self._ind]
 ind = self._ind % self._nmax
 self._xa[:self._nmax-ind] = self._xs[ind:]
 self._xa[self._nmax-ind:] = self._xs[:ind]
 self._ya[:self._nmax-ind] = self._ys[ind:]
 self._ya[self._nmax-ind:] = self._ys[:ind]
 return self._xa, self._ya
 def update_datalim_to_current(self):
 """
 Update *dataLim* from the current data in the FIFO.
 """
 if self.dataLim is None:
 raise ValueError('You must first set the dataLim attr')
 x, y = self.asarrays()
 self.dataLim.update_numerix(x, y, True)
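# Usage sketch (editorial addition): buffer the last 100 points of a longer
# stream, fire a no-op callback every 25 additions, and read the contents
# back as arrays (oldest point first).
def _fifo_example():
    fifo = FIFOBuffer(100)
    fifo.register(lambda f: None, 25) # placeholder callback
    for i in range(250):
        fifo.add(float(i), float(i)**2)
    return fifo.asarrays()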
def movavg(x,n):
 """
 Compute the length-*n* moving average of *x*.
 """
 w = np.empty((n,), dtype=np.float_)
 w[:] = 1.0/n
 return np.convolve(x, w, mode='valid')
def save(fname, X, fmt='%.18e',delimiter=' '):
 """
 Save the data in *X* to file *fname*, using the *fmt* string to
 convert the data to strings.
 *fname* can be a filename or a file handle. If the filename ends
 in '.gz', the file is automatically saved in compressed gzip
 format. The :func:`load` function understands gzipped files
 transparently.
 Example usage::
 save('test.out', X) # X is an array
 save('test1.out', (x,y,z)) # x,y,z equal sized 1D arrays
 save('test2.out', x) # x is 1D
 save('test3.out', x, fmt='%1.4e') # use exponential notation
 *delimiter* is used to separate the fields, e.g. *delimiter*=','
 for comma-separated values.
 """
 if cbook.is_string_like(fname):
 if fname.endswith('.gz'):
 import gzip
 fh = gzip.open(fname,'wb')
 else:
 fh = file(fname,'w')
 elif hasattr(fname, 'seek'):
 fh = fname
 else:
 raise ValueError('fname must be a string or file handle')
 X = np.asarray(X)
 origShape = None
 if X.ndim == 1:
 origShape = X.shape
 X.shape = len(X), 1
 for row in X:
 fh.write(delimiter.join([fmt%val for val in row]) + '\n')
 if origShape is not None:
 X.shape = origShape
def load(fname,comments='#',delimiter=None, converters=None,skiprows=0,
 usecols=None, unpack=False, dtype=np.float_):
 """
 Load ASCII data from *fname* into an array and return the array.
 The data must be regular: the same number of values in every row.
 *fname* can be a filename or a file handle. Support for gzipped
 files is automatic, if the filename ends in '.gz'.
 matfile data is not supported; for that, use the :mod:`scipy.io.mio`
 module.
 Example usage::
 X = load('test.dat') # data in two columns
 t = X[:,0]
 y = X[:,1]
 Alternatively, you can do the same with "unpack"; see below::
 X = load('test.dat') # a matrix of data
 x = load('test.dat') # a single column of data
 - *comments*: the character used to indicate the start of a comment
 in the file
 - *delimiter* is a string-like character used to separate values
 in the file. If *delimiter* is unspecified or *None*, any
 whitespace string is a separator.
 - *converters*, if not *None*, is a dictionary mapping column number to
 a function that will convert that column to a float (or the optional
 *dtype* if specified). Eg, if column 0 is a date string::
 converters = {0:datestr2num}
 - *skiprows* is the number of rows from the top to skip.
 - *usecols*, if not *None*, is a sequence of integer column indexes to
 extract where 0 is the first column, eg ``usecols=[1,4,5]`` to extract
 just the 2nd, 5th and 6th columns
 - *unpack*, if *True*, will transpose the matrix allowing you to unpack
 into named arguments on the left hand side::
 t,y = load('test.dat', unpack=True) # for two column data
 x,y,z = load('somefile.dat', usecols=[3,5,7], unpack=True)
 - *dtype*: the array will have this dtype. default: ``numpy.float_``
 .. seealso::
 See :file:`examples/pylab_examples/load_converter.py` in the source tree:
 Exercises many of these options.
 """
 if converters is None: converters = {}
 fh = cbook.to_filehandle(fname)
 X = []
 if delimiter==' ':
 # space splitting is a special case since x.split() is what
 # you want, not x.split(' ')
 def splitfunc(x):
 return x.split()
 else:
 def splitfunc(x):
 return x.split(delimiter)
 converterseq = None
 for i,line in enumerate(fh):
 if i<skiprows: continue
 line = line.split(comments, 1)[0].strip()
 if not len(line): continue
 if converterseq is None:
 converterseq = [converters.get(j,float)
 for j,val in enumerate(splitfunc(line))]
 if usecols is not None:
 vals = splitfunc(line)
 row = [converterseq[j](vals[j]) for j in usecols]
 else:
 row = [converterseq[j](val)
 for j,val in enumerate(splitfunc(line))]
 thisLen = len(row)
 X.append(row)
 X = np.array(X, dtype)
 r,c = X.shape
 if r==1 or c==1:
 X.shape = max(r,c),
 if unpack: return X.transpose()
 else: return X
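# Round-trip sketch (editorial addition): write two columns with save() and
# read them back with load(); the temporary file name is arbitrary.
def _save_load_example():
    import os, tempfile
    x = np.linspace(0.0, 1.0, 11)
    fname = os.path.join(tempfile.gettempdir(), 'mlab_roundtrip.out')
    save(fname, np.column_stack((x, x**2)))
    X = load(fname) # data in two columns
    return X[:, 0], X[:, 1]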
def slopes(x,y):
 """
 :func:`slopes` calculates the slope *y*'(*x*), i.e. the slope of a
 curve *y*(*x*), given data vectors *x* and *y*. The slope at each
 point is estimated from a parabola through that point and its two
 neighbours.
 This method should be superior to that described in the appendix
 of A CONSISTENTLY WELL BEHAVED METHOD OF INTERPOLATION by Russel
 W. Stineman (Creative Computing July 1980) in at least one aspect:
 Circles for interpolation demand a known aspect ratio between x-
 and y-values. For many functions, however, the abscissa are given
 in different dimensions, so an aspect ratio is completely
 arbitrary.
 The parabola method gives very similar results to the circle
 method for most regular cases but behaves much better in special
 cases.
 Norbert Nemec, Institute of Theoretical Physics, University of
 Regensburg, April 2006 Norbert.Nemec at physik.uni-regensburg.de
 (inspired by an original implementation by Halldor Bjornsson,
 Icelandic Meteorological Office, March 2006 halldor at vedur.is)
 """
 # Cast key variables as float.
 x=np.asarray(x, np.float_)
 y=np.asarray(y, np.float_)
 yp=np.zeros(y.shape, np.float_)
 dx=x[1:] - x[:-1]
 dy=y[1:] - y[:-1]
 dydx = dy/dx
 yp[1:-1] = (dydx[:-1] * dx[1:] + dydx[1:] * dx[:-1])/(dx[1:] + dx[:-1])
 yp[0] = 2.0 * dy[0]/dx[0] - yp[1]
 yp[-1] = 2.0 * dy[-1]/dx[-1] - yp[-2]
 return yp
def stineman_interp(xi,x,y,yp=None):
 """
 Well-behaved data interpolation. Given data vectors *x* and *y*,
 the slope vector *yp* and a new abscissa vector *xi*, the function
 stineman_interp(xi,x,y,yp) uses Stineman interpolation to
 calculate a vector *yi* corresponding to *xi*.
 Here's an example that generates a coarse sine curve, then
 interpolates over a finer abscissa:
 x = linspace(0,2*pi,20); y = sin(x); yp = cos(x)
 xi = linspace(0,2*pi,40);
 yi = stineman_interp(xi,x,y,yp);
 plot(x,y,'o',xi,yi)
 The interpolation method is described in the article A
 CONSISTENTLY WELL BEHAVED METHOD OF INTERPOLATION by Russell
 W. Stineman. The article appeared in the July 1980 issue of
 Creative Computing with a note from the editor stating that while
 they were not an academic journal, once in a while something
 serious and original comes in, and that this was "apparently a
 real solution" to a well known problem.
 For yp=None, the routine automatically determines the slopes using
 the :func:`slopes` routine.
 *x* is assumed to be sorted in increasing order.
 For values xi[j] < x[0] or xi[j] > x[-1], the routine attempts an
 extrapolation. The relevance of the data obtained from this is, of
 course, questionable...
 original implementation by Halldor Bjornsson, Icelandic
 Meteorological Office, March 2006 halldor at vedur.is
 completely reworked and optimized for Python by Norbert Nemec,
 Institute of Theoretical Physics, University of Regensburg, April
 2006 Norbert.Nemec at physik.uni-regensburg.de
 """
 # Cast key variables as float.
 x=np.asarray(x, np.float_)
 y=np.asarray(y, np.float_)
 assert x.shape == y.shape
 N=len(y)
 if yp is None:
 yp = slopes(x,y)
 else:
 yp=np.asarray(yp, np.float_)
 xi=np.asarray(xi, np.float_)
 yi=np.zeros(xi.shape, np.float_)
 # calculate linear slopes
 dx = x[1:] - x[:-1]
 dy = y[1:] - y[:-1]
 s = dy/dx # note: length of s is N-1, so its last element has index N-2
 # find the segment each xi is in
 # this line actually is the key to the efficiency of this implementation
 idx = np.searchsorted(x[1:-1], xi)
 # now we have generally: x[idx[j]] <= xi[j] <= x[idx[j]+1]
 # except at the boundaries, where it may be that xi[j] < x[0] or xi[j] > x[-1]
 # the y-values that would come out from a linear interpolation:
 sidx = s.take(idx)
 xidx = x.take(idx)
 yidx = y.take(idx)
 xidxp1 = x.take(idx+1)
 yo = yidx + sidx * (xi - xidx)
 # the difference that comes when using the slopes given in yp
 dy1 = (yp.take(idx)- sidx) * (xi - xidx) # using the yp slope of the left point
 dy2 = (yp.take(idx+1)-sidx) * (xi - xidxp1) # using the yp slope of the right point
 dy1dy2 = dy1*dy2
 # The following is optimized for Python. The solution actually
 # does more calculations than necessary but exploiting the power
 # of numpy, this is far more efficient than coding a loop by hand
 # in Python
 yi = yo + dy1dy2 * np.choose(np.array(np.sign(dy1dy2), np.int32)+1,
 ((2*xi-xidx-xidxp1)/((dy1-dy2)*(xidxp1-xidx)),
 0.0,
 1/(dy1+dy2),))
 return yi
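# Runnable version of the docstring example (illustrative): a coarse sine
# curve interpolated onto a finer abscissa using the exact slopes.
def _stineman_interp_example():
    x = np.linspace(0, 2*np.pi, 20)
    y = np.sin(x)
    xi = np.linspace(0, 2*np.pi, 40)
    yi = stineman_interp(xi, x, y, np.cos(x))
    return xi, yi # e.g. plot(x, y, 'o', xi, yi)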
def inside_poly(points, verts):
 """
 *points* is a sequence of *x*, *y* points.
 *verts* is a sequence of *x*, *y* vertices of a polygon.
 Return value is a sequence of indices into *points* for the points
 that are inside the polygon.
 res, = np.nonzero(nxutils.points_inside_poly(points, verts))
 return res
def poly_below(ymin, xs, ys):
 """
 Given arrays *xs* and *ys*, return the vertices of a polygon
 that has a scalar lower bound *ymin* and an upper bound at the *ys*.
 Intended for use with Axes.fill, eg::
 xv, yv = poly_below(0, x, y)
 ax.fill(xv, yv)
 """
 return poly_between(xs, ymin, ys)
def poly_between(x, ylower, yupper):
 """
 Given a sequence of *x*, *ylower* and *yupper*, return the polygon
 that fills the regions between them. *ylower* or *yupper* can be
 scalar or iterable. If they are iterable, they must be equal in
 length to *x*.
 Return value is *x*, *y* arrays for use with Axes.fill.
 """
 Nx = len(x)
 if not cbook.iterable(ylower):
 ylower = ylower*np.ones(Nx)
 if not cbook.iterable(yupper):
 yupper = yupper*np.ones(Nx)
 x = np.concatenate( (x, x[::-1]) )
 y = np.concatenate( (yupper, ylower[::-1]) )
 return x,y
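# Usage sketch (editorial addition): vertices for shading a band of width
# 0.4 around a sine curve with Axes.fill.
def _poly_between_example():
    x = np.linspace(0, 2*np.pi, 50)
    xv, yv = poly_between(x, np.sin(x) - 0.2, np.sin(x) + 0.2)
    return xv, yv # e.g. ax.fill(xv, yv, alpha=0.3)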
### the following code was written and submitted by Fernando Perez
### from the ipython numutils package under a BSD license
# begin fperez functions
"""
A set of convenient utilities for numerical work.
Most of this module requires numpy or is meant to be used with it.
Copyright (c) 2001-2004, Fernando Perez. <Fer...@co...>
All rights reserved.
This license was generated from the BSD license template as found in:
http://www.opensource.org/licenses/bsd-license.php
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
 * Redistributions of source code must retain the above copyright notice,
 this list of conditions and the following disclaimer.
 * Redistributions in binary form must reproduce the above copyright
 notice, this list of conditions and the following disclaimer in the
 documentation and/or other materials provided with the distribution.
 * Neither the name of the IPython project nor the names of its
 contributors may be used to endorse or promote products derived from
 this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""
import operator
import math
#*****************************************************************************
# Globals
#****************************************************************************
# function definitions
exp_safe_MIN = math.log(2.2250738585072014e-308)
exp_safe_MAX = 1.7976931348623157e+308
def exp_safe(x):
 """
 Compute exponentials which safely underflow to zero.
 Slow, but convenient to use. Note that numpy provides proper
 floating point exception handling with access to the underlying
 hardware.
 """
 if type(x) is np.ndarray:
 return np.exp(np.clip(x,exp_safe_MIN,exp_safe_MAX))
 else:
 return math.exp(x)
def amap(fn,*args):
 """
 amap(function, sequence[, sequence, ...]) -> array.
 Works like :func:`map`, but it returns an array. This is just a
 convenient shorthand for ``numpy.array(map(...))``.
 """
 return np.array(map(fn,*args))
#from numpy import zeros_like
def zeros_like(a):
 """
 Return an array of zeros of the shape and typecode of *a*.
 """
 warnings.warn("Use numpy.zeros_like(a)", DeprecationWarning)
 return np.zeros_like(a)
#from numpy import sum as sum_flat
def sum_flat(a):
 """
 Return the sum of all the elements of *a*, flattened out.
 It uses ``a.flat``, and if *a* is not contiguous, a call to
 ``ravel(a)`` is made.
 """
 warnings.warn("Use numpy.sum(a) or a.sum()", DeprecationWarning)
 return np.sum(a)
#from numpy import mean as mean_flat
def mean_flat(a):
 """
 Return the mean of all the elements of *a*, flattened out.
 """
 warnings.warn("Use numpy.mean(a) or a.mean()", DeprecationWarning)
 return np.mean(a)
def rms_flat(a):
 """
 Return the root mean square of all the elements of *a*, flattened out.
 """
 return np.sqrt(np.mean(np.absolute(a)**2))
def l1norm(a):
 """
 Return the *l1* norm of *a*, flattened out.
 Implemented as a separate function (not a call to :func:`norm`) for speed.
 """
 return np.sum(np.absolute(a))
def l2norm(a):
 """
 Return the *l2* norm of *a*, flattened out.
 Implemented as a separate function (not a call to :func:`norm`) for speed.
 """
 return np.sqrt(np.sum(np.absolute(a)**2))
def norm_flat(a,p=2):
 """
 norm(a,p=2) -> l-p norm of a.flat
 Return the l-p norm of *a*, considered as a flat array. This is NOT a true
 matrix norm, since arrays of arbitrary rank are always flattened.
 *p* can be a number or the string 'Infinity' to get the L-infinity norm.
 """
 # This function was being masked by a more general norm later in
 # the file. We may want to simply delete it.
 if p=='Infinity':
 return np.amax(np.absolute(a))
 else:
 return (np.sum(np.absolute(a)**p))**(1.0/p)
def frange(xini,xfin=None,delta=None,**kw):
 """
 frange([start,] stop[, step, keywords]) -> array of floats
 Return a numpy ndarray containing a progression of floats. Similar to
 :func:`numpy.arange`, but defaults to a closed interval.
 ``frange(x0, x1)`` returns ``[x0, x0+1, x0+2, ..., x1]``; *start*
 defaults to 0, and the endpoint *is included*. This behavior is
 different from that of :func:`range` and
 :func:`numpy.arange`. This is deliberate, since :func:`frange`
 will probably be more useful for generating lists of points for
 function evaluation, and endpoints are often desired in this
 use. The usual behavior of :func:`range` can be obtained by
 setting the keyword *closed* = 0; in this case, :func:`frange`
 basically becomes :func:`numpy.arange`.
 When *step* is given, it specifies the increment (or
 decrement). All arguments can be floating point numbers.
 ``frange(x0,x1,d)`` returns ``[x0,x0+d,x0+2d,...,xfin]`` where
 *xfin* <= *x1*.
 :func:`frange` can also be called with the keyword *npts*. This
 sets the number of points the list should contain (and overrides
 the value *step* might have been given). :func:`numpy.arange`
 doesn't offer this option.
 Examples::
 >>> frange(3)
 array([ 0., 1., 2., 3.])
 >>> frange(3,closed=0)
 array([ 0., 1., 2.])
 >>> frange(1,6,2)
 array([1, 3, 5]) or array([1, 3, 5, 7]), depending on floating point vagaries
 >>> frange(1,6.5,npts=5)
 array([ 1. , 2.375, 3.75 , 5.125, 6.5 ])
 """
 #defaults
 kw.setdefault('closed',1)
 endpoint = kw['closed'] != 0
 # funny logic to allow the *first* argument to be optional (like range())
 # This was modified with a simpler version from a similar frange() found
 # at http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/66472
 if xfin is None:
 xfin = xini + 0.0
 xini = 0.0
 if delta is None:
 delta = 1.0
 # compute # of points, spacing and return final list
 try:
 npts=kw['npts']
 delta=(xfin-xini)/float(npts-endpoint)
 except KeyError:
 npts = int(round((xfin-xini)/delta)) + endpoint
 #npts = int(floor((xfin-xini)/delta)*(1.0+1e-10)) + endpoint
 # round finds the nearest, so the endpoint can be up to
 # delta/2 larger than xfin.
 return np.arange(npts)*delta+xini
# end frange()
#import numpy.diag as diagonal_matrix
def diagonal_matrix(diag):
 """
 Return square diagonal matrix whose non-zero elements are given by the
 input array.
 """
 warnings.warn("Use numpy.diag(d)", DeprecationWarning)
 return np.diag(diag)
def identity(n, rank=2, dtype='l', typecode=None):
 """
 Returns the identity matrix of shape (*n*, *n*, ..., *n*) (rank *rank*).
 For ranks higher than 2, this object is simply a multi-index Kronecker
 delta::
 / 1 if i0=i1=...=iR,
 id[i0,i1,...,iR] = -|
 \ 0 otherwise.
 Optionally a *dtype* (or typecode) may be given (it defaults to 'l').
 Since rank defaults to 2, this function behaves in the default case (when
 only *n* is given) like ``numpy.identity(n)`` -- but surprisingly, it is
 much faster.
 """
 if typecode is not None:
 warnings.warn("Use dtype kwarg instead of typecode",
 DeprecationWarning)
 dtype = typecode
 iden = np.zeros((n,)*rank, dtype)
 for i in range(n):
 idx = (i,)*rank
 iden[idx] = 1
 return iden
def base_repr (number, base = 2, padding = 0):
 """
 Return the representation of a *number* in any given *base*.
 """
 chars = '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
 if number < base:
 return (padding - 1) * chars [0] + chars [int (number)]
 max_exponent = int (math.log (number)/math.log (base))
 max_power = long (base) ** max_exponent
 lead_digit = int (number/max_power)
 return chars [lead_digit] + \
 base_repr (number - max_power * lead_digit, base, \
 max (padding - 1, max_exponent))
def binary_repr(number, max_length = 1025):
 """
 Return the binary representation of the input *number* as a
 string.
 This is more efficient than using :func:`base_repr` with base 2.
 Increase the value of max_length for very large numbers. Note that
 on 32-bit machines, 2**1023 is the largest integer power of 2
 which can be converted to a Python float.
 """
 #assert number < 2L << max_length
 shifts = map (operator.rshift, max_le...
 
[truncated message content]
From: Jae-Joon L. <lee...@gm...> - 2009年01月12日 08:57:00
>
> The best I've come up with so far is to
> create a figure with a set size, and then
> add axes with a specified rectangle.
> But this does not give a neat path to
> what I want.
>
Alan,
I think the way you're doing it is the easiest way.
Anyhow, can you provide an example of a "neat path"? In my view, fixed
axes size means fixed figure size (in most cases), and I'm not quite
sure how this can be improved.
This might not be very helpful, but I recently added an
"axes_divider.py" in the example directory which includes helper
classes to calculate the axes size at drawing time. It is possible
to have the axes size fixed in inches (check the demo_fixed_size_axes
function in the file) regardless of the figure size, but the method is
basically the same as the one you're using.
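For reference, the fixed-figure-size route you describe looks roughly
like this (the 4x3 inch figure and the margins here are arbitrary)::
 import matplotlib.pyplot as plt
 fig = plt.figure(figsize=(4, 3)) # inches
 # rect is [left, bottom, width, height] in figure fraction; with the
 # figure size fixed, this pins the axes size in inches as well
 ax = fig.add_axes([0.15, 0.15, 0.7, 0.7])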
Regards,
-JJ
From: Simson G. <si...@ac...> - 2009年01月12日 06:33:44
Hi. It's me again, asking about dates again.
Is there any easy way to plot a collection using dates on the X axis?
I've taken the collection example from the website and adapted it so
that there is a use_dates flag. Set it to False and the spirals demo
appears. Set it to True and I get this error:
Traceback (most recent call last):
 File "mpl_collection2.py", line 51, in <module>
 ax.add_collection(col, autolim=True)
 File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axes.py", line 1312, in add_collection
 self.update_datalim(collection.get_datalim(self.transData))
 File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/collections.py", line 144, in get_datalim
 offsets = transOffset.transform_non_affine(offsets)
 File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/transforms.py", line 1914, in transform_non_affine
 self._a.transform(points))
 File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/transforms.py", line 1408, in transform
 return affine_transform(points, mtx)
ValueError: Invalid vertices array.
The code is below.
Thanks!
=========================
import matplotlib
import matplotlib.pyplot
from matplotlib import collections, transforms
from matplotlib.colors import colorConverter
import numpy as N
import datetime
use_dates = False
nverts = 50
npts = 100
# Make some spirals
r = N.array(range(nverts))
theta = N.array(range(nverts)) * (2*N.pi)/(nverts-1)
xx = r * N.sin(theta)
yy = r * N.cos(theta)
spiral = zip(xx,yy)
# Make some offsets
rs = N.random.RandomState([12345678])
if not use_dates:
 xo = [i for i in range(0,100)]
else:
 xo = [datetime.date(1990,1,1)+datetime.timedelta(10)*i for i in 
range(0,100)] # new version
yo = rs.randn(npts)
xyo = zip(xo, yo)
colors = [colorConverter.to_rgba(c) for c in 
('r','g','b','c','y','m','k')]
fig = matplotlib.pyplot.figure()
ax = fig.add_subplot(1,1,1)
if use_dates:
 import matplotlib.dates as mdates
 years = mdates.YearLocator() # every year
 months = mdates.MonthLocator() # every month
 yearsFmt = mdates.DateFormatter('%Y')
 ax.xaxis.set_major_locator(years)
 ax.xaxis.set_major_formatter(yearsFmt)
 ax.set_xlim(datetime.date(1990,1,1),datetime.date(1992,12,31))
col = collections.LineCollection([spiral], offsets=xyo, 
transOffset=ax.transData)
trans = fig.dpi_scale_trans + transforms.Affine2D().scale(1.0/72.0)
col.set_transform(trans) # the points to pixels transform
ax.add_collection(col, autolim=True)
col.set_color(colors)
ax.autoscale_view()
ax.set_title('LineCollection using offsets')
matplotlib.pyplot.show()
From: Tom K. <tp...@kr...> - 2009年01月12日 02:34:27
Hi, I am cross-posting this from wxPython users list since the demo is an
example wxPython app with embedded matplotlib objects.
To the wxpython / matplotlib community: 
I wanted to share the enclosed "Fourier Demo" GUI, which is a
reimplementation of one of the very first MATLAB GUIs that I worked on at
MathWorks in 1993 (right when Handle Graphics was introduced in 
MATLAB 4). It presents you with two waveforms - a Fourier transform pair -
and allows you to manipulate some parameters (via clicking the waveforms and
dragging, and controls) and shows how the waveforms are related. 
I was very happy about how easily it came together and the performance of
the resulting GUI. In particular the matplotlib events and interaction with
wx is quite nice. The 'hitlist' of matplotlib figures is very convenient.
Note this is some of my first wx GUI programming so if you see anything that
could be handled with a better pattern / class / etc, I am very open to
such suggestions! 
Sincerely, 
 Tom K. 
http://www.nabble.com/file/p21407452/sigdemo.py sigdemo.py 
-- 
View this message in context: http://www.nabble.com/fourier-demo-tp21407452p21407452.html
Sent from the matplotlib - users mailing list archive at Nabble.com.
