On Wed, Jun 6, 2012 at 3:32 PM, kamel maths <kam...@gm...> wrote:
> Hi,
>
> for this script:
> ------------------------------------
> from pylab import *
>
> fig = figure()
> ax = fig.add_subplot(111)
> ax.axis('equal')
>
> x = linspace(-2, 3, 50)
> ax.plot(x, sin(x))
>
> show()
> ---------------------------------
> If I try to get ymax with ax.get_ylim(), I obtain 1.0, whereas I observe
> that it is 2.0.
> How can I obtain 2.0 for ymax?
>
> Thanks.
>
> Kamel

Hi Kamel,

I'm not seeing the same result: I actually get back (-1.94, 1.94) from
`get_ylim`. When do you call `get_ylim`? Do you call it *after* calling
`plot`?

-Tony
Hi,

for this script:
------------------------------------
from pylab import *

fig = figure()
ax = fig.add_subplot(111)
ax.axis('equal')

x = linspace(-2, 3, 50)
ax.plot(x, sin(x))

show()
---------------------------------
If I try to get ymax with ax.get_ylim(), I obtain 1.0, whereas I observe
that it is 2.0. How can I obtain 2.0 for ymax?

Thanks.

Kamel
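For what it's worth, a minimal sketch of one way to see the on-screen limits
(assuming that with axis('equal') the data limits are only expanded when the
figure is drawn, so forcing a draw before querying them shows the adjusted
values; the exact numbers still depend on figure and axes geometry):
------------------------------------
from pylab import *

fig = figure()
ax = fig.add_subplot(111)
ax.axis('equal')

x = linspace(-2, 3, 50)
ax.plot(x, sin(x))

print(ax.get_ylim())   # autoscaled limits, before the aspect is applied

fig.canvas.draw()      # force a draw so 'equal' can expand the data limits
print(ax.get_ylim())   # limits as they will actually appear on screen

show()
---------------------------------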
> From: Eric Firing [mailto:ef...@ha...]
> Sent: Wednesday, June 06, 2012 13:41
> To: mat...@li...
> Subject: Re: [Matplotlib-users] scatter plot with constant x
>
> On 06/06/2012 06:42 AM, Ethan Gutmann wrote:
> >> ...
> >> No, but you can do this:
> >>
> >> plt.plot([3] * 4, [60, 80, 120, 180], ...)
> >
> > This started from a simple enough question, but it got me
> > thinking about what the fastest way to do this is (in case
> > you have HUGE arrays, or many loops over them). [...]
>
> Since we end up needing float64 anyway:
>
> In [3]: %timeit l=np.empty(10000,dtype=np.float64); l.fill(3)
> 100000 loops, best of 3: 14.1 us per loop
>
> In [4]: %timeit l=np.zeros(10000,dtype=np.float64);l[:]=3
> 10000 loops, best of 3: 26.6 us per loop
>
> Eric

Numpy's as_strided came to mind; it can make a large array that's really a
view of a one-element array:

In [1]: as_strided = np.lib.stride_tricks.as_strided

In [2]: s = as_strided(np.array([3], dtype=np.float64), shape=(10000,),
   ...:                strides=(0,))

In [3]: s[0] = 4

In [4]: s[9999]   # all elements share data
Out[4]: 4.0

It's somewhat slower to create the base array and the view than to create
and fill a 10000-element array:

In [5]: %timeit l = np.empty(10000, dtype=np.float64); l.fill(3)
100000 loops, best of 3: 10.1 us per loop

In [6]: %timeit s = as_strided(np.array([3], dtype=np.float64), shape=(10000,), strides=(0,))  # line broken for email
10000 loops, best of 3: 21.6 us per loop

However, once created, its contents may be changed much more quickly:

In [7]: l = np.empty(10000, dtype=np.float64)

In [8]: %timeit l.fill(3)
100000 loops, best of 3: 7.71 us per loop

In [9]: %timeit s[0] = 3
10000000 loops, best of 3: 116 ns per loop

Numpy's broadcast_arrays uses as_strided under the hood. Code could look
like:

x, y = np.broadcast_arrays(3, [60, 80, 120, 180])
plt.plot(x, y, '+')
x[0] = 21   # new x for all samples
plt.plot(x, y, 'x')
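As a side note for readers on NumPy versions newer than this thread, the same
constant-x idea can be written more directly; a minimal sketch, assuming
np.full (NumPy 1.8+) and np.broadcast_to (NumPy 1.10+) are available:

import numpy as np
import matplotlib.pyplot as plt

y = np.array([60, 80, 120, 180])

# np.full allocates and fills in a single call
x_full = np.full(y.shape, 3.0)

# np.broadcast_to returns a read-only, zero-stride view (no copy at all)
x_view = np.broadcast_to(3.0, y.shape)

plt.plot(x_full, y, '+', markersize=8, mec='k')
plt.plot(x_view + 1, y, 'x')   # offset only so both marker sets are visible
plt.show()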
On Jun 6, 2012, at 11:41 AM, Eric Firing wrote:
> Since we end up needing float64 anyway:
>
> In [3]: %timeit l=np.empty(10000,dtype=np.float64); l.fill(3)
> 100000 loops, best of 3: 14.1 us per loop

Nice; fill and empty each seem to be responsible for about half the
speed-up. Good tools to know about.
On 06/06/2012 06:42 AM, Ethan Gutmann wrote:
>> ...
>> No, but you can do this:
>>
>> plt.plot([3] * 4, [60, 80, 120, 180], ...)
>
> This started from a simple enough question, but it got me thinking about
> what the fastest way to do this is (in case you have HUGE arrays, or many
> loops over them). This may be old news to some of you, but I thought it
> was interesting.
>
> In ipython --pylab:
>
> In [1]: %timeit l=[3]*10000
> 10000 loops, best of 3: 53.3 us per loop
>
> In [2]: %timeit l=np.zeros(10000)+3
> 10000 loops, best of 3: 26.9 us per loop
>
> In [3]: %timeit l=np.ones(10000)*3
> 10000 loops, best of 3: 32.9 us per loop
>
> In [4]: %timeit l=(np.zeros(1)+3).repeat(10000)
> 10000 loops, best of 3: 87.4 us per loop
>
> In [5]: %timeit l=np.zeros(10000);l[:]=3
> 10000 loops, best of 3: 21.6 us per loop
>
> In [6]: %timeit l=np.zeros(10000,dtype=np.uint8);l[:]=3
> 100000 loops, best of 3: 13.9 us per loop
>
> Using int16, int32, or float32 gets progressively slower toward the
> default float64 case listed on line [5]; changing the datatype in the
> other methods doesn't give nearly as large a speed-up as it does in the
> last case.
>
> Ben's method is probably the most elegant for small arrays, but does
> anyone else have a faster way to do this? (I'm assuming no use of blitz,
> inline C, or f2py, but if you think you can do it faster in one of those,
> show me the way.)

Since we end up needing float64 anyway:

In [3]: %timeit l=np.empty(10000,dtype=np.float64); l.fill(3)
100000 loops, best of 3: 14.1 us per loop

In [4]: %timeit l=np.zeros(10000,dtype=np.float64);l[:]=3
10000 loops, best of 3: 26.6 us per loop

Eric

> Sorry, maybe this is more appropriate on the numpy list.
>
> Ethan
On 06/06/2012 12:54 PM, Ethan Gutmann wrote:
> On Jun 6, 2012, at 10:49 AM, Michael Droettboom wrote:
>> Interesting result. Note, however, that matplotlib will eventually turn
>> all data arrays into float64 at rendering time, so any speed advantage
>> to using integers will be lost by the subsequent conversion, I suspect.
>
> I don't think it does if you pass uint8 to imshow, but otherwise you
> might be right.

Sure. I was referring to scatter here. With imshow, of course, everything
is ultimately turned into 8-bits-per-plane RGBA.

Mike
On Jun 6, 2012, at 10:49 AM, Michael Droettboom wrote:
> Interesting result. Note, however, that matplotlib will eventually turn
> all data arrays into float64 at rendering time, so any speed advantage
> to using integers will be lost by the subsequent conversion, I suspect.

I don't think it does if you pass uint8 to imshow, but otherwise you might
be right.

Ethan
On 06/06/2012 12:42 PM, Ethan Gutmann wrote:
>> ...
>> No, but you can do this:
>>
>> plt.plot([3] * 4, [60, 80, 120, 180], ...)
>
> Using int16, int32, or float32 gets progressively slower toward the
> default float64 case listed on line [5]; changing the datatype in the
> other methods doesn't give nearly as large a speed-up as it does in the
> last case.

Interesting result. Note, however, that matplotlib will eventually turn
all data arrays into float64 at rendering time, so any speed advantage to
using integers will be lost by the subsequent conversion, I suspect.

Mike
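A quick way to sanity-check Mike's point is a rough sketch that adds the
uint8-to-float64 cast to the timing; this only times the NumPy cast itself,
not matplotlib's actual rendering path, so treat it as an approximation:

import numpy as np
import timeit

setup = "import numpy as np"

# Fill a uint8 array and then cast it to float64, which is roughly the
# extra work the renderer would have to do anyway.
t_uint8_then_cast = timeit.timeit(
    "l = np.zeros(10000, dtype=np.uint8); l[:] = 3; l = l.astype(np.float64)",
    setup=setup, number=10000)

# Fill a float64 array directly.
t_float64_direct = timeit.timeit(
    "l = np.empty(10000, dtype=np.float64); l.fill(3)",
    setup=setup, number=10000)

print(t_uint8_then_cast, t_float64_direct)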
> ...
> No, but you can do this:
>
> plt.plot([3] * 4, [60, 80, 120, 180], ...)

This started from a simple enough question, but it got me thinking about
what the fastest way to do this is (in case you have HUGE arrays, or many
loops over them). This may be old news to some of you, but I thought it
was interesting.

In ipython --pylab:

In [1]: %timeit l=[3]*10000
10000 loops, best of 3: 53.3 us per loop

In [2]: %timeit l=np.zeros(10000)+3
10000 loops, best of 3: 26.9 us per loop

In [3]: %timeit l=np.ones(10000)*3
10000 loops, best of 3: 32.9 us per loop

In [4]: %timeit l=(np.zeros(1)+3).repeat(10000)
10000 loops, best of 3: 87.4 us per loop

In [5]: %timeit l=np.zeros(10000);l[:]=3
10000 loops, best of 3: 21.6 us per loop

In [6]: %timeit l=np.zeros(10000,dtype=np.uint8);l[:]=3
100000 loops, best of 3: 13.9 us per loop

Using int16, int32, or float32 gets progressively slower toward the
default float64 case listed on line [5]; changing the datatype in the
other methods doesn't give nearly as large a speed-up as it does in the
last case.

Ben's method is probably the most elegant for small arrays, but does
anyone else have a faster way to do this? (I'm assuming no use of blitz,
inline C, or f2py, but if you think you can do it faster in one of those,
show me the way.)

Sorry, maybe this is more appropriate on the numpy list.

Ethan
On Tue, Jun 5, 2012 at 11:53 AM, Ulrich vor dem Esche <ulr...@go...> wrote:
> Hey! :o)
> This should be simple, but I can't manage it: I need to plot many dots
> with the same x, like
>
> plt.plot([3,3,3,3],[60,80,120,180],'+',markersize=8,mec='k')
>
> The array for x values is silly, especially since the number of y values
> may be rather large. Is there a way to enter a constant there?
>
> Cheers to you all!
> Ulli

No, but you can do this:

plt.plot([3] * 4, [60, 80, 120, 180], ...)

Does that help?

Ben Root
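For completeness, a self-contained version of Ben's suggestion, a minimal
sketch using the standard pyplot interface and the sample y values from
Ulli's message:

import matplotlib.pyplot as plt

y = [60, 80, 120, 180]

# Build the constant-x list from the length of y, so it scales with the data.
plt.plot([3] * len(y), y, '+', markersize=8, mec='k')
plt.show()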
On Wed, Jun 6, 2012 at 8:01 AM, Guillaume Gay <gui...@mi...> wrote:
> On 05/06/2012 16:25, Tom Dimiduk wrote:
>> Is any of this stuff I should be looking to upstream or split off into
>> the start of a scientific imaging library for Python?
>
> Have you had a look at skimage? https://github.com/scikits-image
>
> BTW, I use matplotlib (and the whole pylab suite) in my projects for all
> the visualisation. A (peer-reviewed, published) example here:
> https://github.com/Kinetochore-segregation
>
> Best,
>
> Guillaume

The Spyder (http://code.google.com/p/spyderlib/) Python-based MATLAB clone
uses matplotlib for plotting.

Python(X,Y) (http://code.google.com/p/pythonxy/) is an integrated Windows
Python release that includes a ton of science, engineering, and
mathematics-oriented Python packages, including matplotlib.

NumPy uses small bits of matplotlib when building its documentation, but I
don't know if that counts (I think it may even use it only for the
matplotlib-related parts of the documentation, in which case it really
doesn't count).

I know someone is working on a pure Python backend for the Cantor advanced
mathematics software (http://edu.kde.org/cantor/). The project only started
recently, however (see http://blog.filipesaraiva.info/?p=779). There is
also already a Sage backend for Cantor, which of course uses matplotlib for
plotting, because that is what Sage uses.

-Todd
On 05/06/2012 16:25, Tom Dimiduk wrote:
> Is any of this stuff I should be looking to upstream or split off into
> the start of a scientific imaging library for Python?

Have you had a look at skimage? https://github.com/scikits-image

BTW, I use matplotlib (and the whole pylab suite) in my projects for all
the visualisation. A (peer-reviewed, published) example here:
https://github.com/Kinetochore-segregation

Best,

Guillaume