I made quite a few changes and am not sure exactly what triggered it, but the charts using a derivative of plot_day_summary2 are taking about 0.9 seconds each, which is closer to what they used to take when John first wrote the function for me. I'll spend some time going through the profiler this weekend and will post the results to this list. Since I generate different size and format charts for the same stock and time range, I was going to move some of the price-array-related functions out of plot_day_summary so that they can be reused across the different charts.

This sounded rather interesting, since my charts, like most others, have only three colors in them. How much work would it be, and any guesses on how much time it would save?

>Most 2d line drawings (generally the kind you expect to go very
>fast) use very few colors (certainly fewer than 100). In that case,
>using a color table could cut the image size by ~1/3.
>
>That might actually help performance, as it seems that the size of the
>image (and the transfer to backend -- with wx, for example, this must
>be converted to a wx bitmap) may be a significant bottleneck, at least
>from what I see.
>
>I'm not saying that this would be easy to do, and it would not be very
>useful for most images, but it might help performance for 2d line
>drawings.

Thanks, VJ
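On the "different size and format charts for the same stock" point above, a rough sketch (not Vinj's actual code; draw_chart and the placeholder data are invented for illustration) of building the artists once and then saving the same figure at several sizes and formats:

import numpy as np
import matplotlib
matplotlib.use('Agg')                       # render off-screen, as in this thread
import matplotlib.pyplot as plt

def draw_chart(ax, prices):
    # Stand-in for a plot_day_summary2-derived routine.
    ax.plot(np.arange(len(prices)), prices, color='k')

prices = np.random.rand(500)                # placeholder price data
fig = plt.figure()
ax = fig.add_subplot(111)
draw_chart(ax, prices)                      # build the artists once

for (w, h), ext in [((10, 7), 'png'), ((5, 3.5), 'png'), ((10, 7), 'svg')]:
    fig.set_size_inches(w, h)
    fig.savefig('chart_%dx%d.%s' % (int(w * 100), int(h * 100), ext))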
Darren Dale wrote:
> 3) a script (not ipython output) that generates the eps file

Just so you know, if in ipython you type %history -n, you'll get a screen dump of your history without line numbers, so you can copy/paste easily. Or, if you type %logstart foo.py, ipython will start logging all input (it writes the previous session's input and continues from then on) to the file foo.py.

Cheers,
f
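For what it's worth, a tiny illustration of the two magics mentioned above (the flags are as described in the message and may differ between IPython versions):

In [1]: %logstart foo.py       # log all input (previous session plus everything from here on) to foo.py
In [2]: t = arange(0, 10, 0.01)
In [3]: plot(t, sin(t))
In [4]: %history -n            # dump the history without line numbers, for easy copy/paste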
Hi George,

On Tuesday 10 May 2005 6:37 pm, George Nurser wrote:
> I found that my earlier problem in which matplotlib failed when making
> the font database (in loading pylab the first time) came from a VT102
> font in my ~/Library/Fonts (on OSx 10.3). Moving the font to a
> directory off the font path allowed successful loading.
>
> Next problem .... interactive running, using the TkAgg backend, from
> ipython -pylab draws pretty graphs -- very impressive -- and I can save
> to .png fine. But saving to postscript gives a very large file, which
> moreover seems to be corrupted. Editing the .matplotlibrc file to use
> the PS backend directly & starting up ipython again gives the same very
> large, corrupt, PS file.
>
> matplotlib 0.80, ipython 0.6.13 + zlib, libpng, tk_inter, freetype
> 2.1.9, wx-2.6-mac-unicode, standard Apple Python 2.3, updated to
> MacPython, Numeric. OS X 10.3.9.

Would you send me, off list:
1) your .matplotlibrc file, if you have customized it
2) the corrupt eps file
3) a script (not ipython output) that generates the eps file

Also, if you set verbose.level : debug in your .matplotlibrc, does it tell you anything pertinent?

Darren
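For reference, a minimal sketch of the relevant lines in a 0.8x-era .matplotlibrc (the key names are as used in this thread; the rest of the file is unchanged):

backend       : PS       # or TkAgg, etc.
verbose.level : debug    # make matplotlib report what it is doing at import and save time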
I found that my earlier problem in which matplotlib failed when making the font database (in loading pylab the first time) came from a VT102 font in my ~/Library/Fonts (on OSx 10.3). Moving the font to a directory off the font path allowed successful loading.

Next problem .... interactive running, using the TkAgg backend, from ipython -pylab draws pretty graphs -- very impressive -- and I can save to .png fine. But saving to postscript gives a very large file, which moreover seems to be corrupted. Editing the .matplotlibrc file to use the PS backend directly & starting up ipython again gives the same very large, corrupt, PS file.

matplotlib 0.80, ipython 0.6.13 + zlib, libpng, tk_inter, freetype 2.1.9, wx-2.6-mac-unicode, standard Apple Python 2.3, updated to MacPython, Numeric. OS X 10.3.9.

-George Nurser.

E.g. (from ipython)

In [1]: dt = 0.01
In [2]: t = arange(0,10,dt)
In [3]: s = sin(t)
In [4]: subplot(211)
Out[4]: <matplotlib.axes.Subplot instance at 0x3fa8af8>
In [5]: plot(t,s)
Out[5]: [<matplotlib.lines.Line2D instance at 0x3fb73f0>]
In [6]: c = cos(t)
In [7]: subplot(212)
Out[7]: <matplotlib.axes.Subplot instance at 0x3fb7f30>
In [8]: plot(t,c)
Out[8]: [<matplotlib.lines.Line2D instance at 0x3fc0ee0>]

... draws 2 subplots fine.

In [9]: savefig('/Users/agn/sin.png')

... produces 28kB .png file

In [10]: savefig('/Users/agn/sin.ps')

... produces 4.7 MB ps file. Running ps2pdf sin.ps fails ... something to do with Lucida Grande

Error: /invalidfont in -dict-
Operand stack:
LucidaGrande --dict:10/14(L)-- Font LucidaGrande --dict:10/14(L)-- LucidaGrande
Execution stack:
%interp_exit .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- --nostringval-- --nostringval-- false 1 %stopped_push 1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop 1 3 %oparray_pop .runexec2 --nostringval-- --nostringval-- --nostringval-- 2 %stopped_push --nostringval-- 2 4 %oparray_pop 3 4 %oparray_pop --nostringval-- --nostringval-- --nostringval-- 7 5 %oparray_pop --nostringval-- 7 5 %oparray_pop
Dictionary stack:
--dict:1111/1686(ro)(G)-- --dict:0/20(G)-- --dict:70/200(L)-- --dict:6/7(L)-- --dict:17/17(ro)(G)--
Current allocation mode is local
Last OS error: 2
Current file position is 4703628
AFPL Ghostscript 8.13: Unrecoverable error, exit code 1
Segmentation fault
On May 10, 2005, at 4:37 PM, John Hunter wrote: > Ken> No, I am not using them, but I haven't been doing any plots > Ken> like that myself. I was hoping to be helpful, but that > Ken> appears to have backfired. Serves me right for not searching > Ken> the list and reading finance.py more closely. :-/ > > I wasn't being critical -- your profile numbers were right on the > money. I didn't feel that I had been criticized, although my annoyance with myself may have affected the tone of my email. I'm in the process of repeating my profiling with plot_day_summary2(), although the results of the first couple of runs indicate that Agg is dominating the run time. Ken
>>>>> "Ken" == Ken McIvor <mc...@ii...> writes: Ken> No, I am not using them, but I haven't been doing any plots Ken> like that myself. I was hoping to be helpful, but that Ken> appears to have backfired. Serves me right for not searching Ken> the list and reading finance.py more closely. :-/ I wasn't being critical -- your profile numbers were right on the money. JDH
On May 10, 2005, at 4:03 PM, John Hunter wrote: > Well, yes, if you are using plot_day_summary you are dead in the > water. I specifically wrote plot_day_summary2 using line collections > for Vinj to solve the performance problems that creating so many line > instances causes. In fact, it was a request by Vinj that led to the > creation of LineCollections in the first place. Are you not using > them? :-( No, I am not using them, but I haven't been doing any plots like that myself. I was hoping to be helpful, but that appears to have backfired. Serves me right for not searching the list and reading finance.py more closely. :-/ Ken
On Tuesday 10 May 2005 4:50 pm, John Hunter wrote: > A poor matlab user trying to figure out how to make his plots look as > good as matplotlib Congratulations to everyone who makes Matplotlib kick ass.
>>>>> "Ken" == Ken McIvor <mc...@ii...> writes: Ken> John Hunter wrote: >> But there are plenty of opportunities for optimization in >> matplotlib and the way it is used, so code snippets and >> profiler results will be most helpful. Ken> I spent a bit of time earlier today looking over the code to Ken> plot_day_summary(). My hunch at the time was that creating Ken> three Line2D objects per data point was a source of overhead, Ken> so I put together a script to profile things. The results Ken> I've gathered are rather inconclusive to me, but they may Ken> help others formulate an opinion. Ken> The script and results are at Ken> http://agni.phys.iit.edu/~kmcivor/plot_day/ Ken> Vinj, you said you were using a derivation of Ken> plot_day_summary() to plot charts of size 1000x700. I have Ken> no idea what you mean about the chart size. Having access to Ken> your plotting function and an example data set would allow me Ken> to write a more accurate profiling script. Well, yes, if you are using plot_day_summary you are dead in the water. I specifically wrote plot_day_summary2 using line collections for Vinj to solve the performance problems that creating so many line instances causes. In fact, it was a request by Vinj that led to the creation of LineCollections in the first place. Are you not using them? :-( See also examples/line_collections.py in CVS and http://matplotlib.sourceforge.net/matplotlib.collections.html JDH
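For anyone following along, here is a rough, self-contained sketch of the LineCollection approach described above (illustrative data and styling only; this is not the finance.plot_day_summary2 source, and it uses the current pyplot API rather than the 0.80-era pylab interface):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.collections import LineCollection

n = 5000
t = np.arange(n)
lows = np.random.rand(n)
highs = lows + np.random.rand(n)

# One (x, low) -> (x, high) segment per data point, drawn by a single artist
# instead of n separate Line2D instances.
segments = [((x, lo), (x, hi)) for x, lo, hi in zip(t, lows, highs)]
lc = LineCollection(segments, colors='k', linewidths=0.5)

fig = plt.figure()
ax = fig.add_subplot(111)
ax.add_collection(lc)
ax.set_xlim(t.min(), t.max())          # add_collection does not autoscale
ax.set_ylim(lows.min(), highs.max())
fig.savefig('day_summary.png')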
John Hunter wrote: > But there are plenty of opportunities for optimization in matplotlib > and the way it is used, so code snippets and profiler results will be > most helpful. I spent a bit of time earlier today looking over the code to plot_day_summary(). My hunch at the time was that creating three Line2D objects per data point was a source of overhead, so I put together a script to profile things. The results I've gathered are rather inconclusive to me, but they may help others formulate an opinion. The script and results are at http://agni.phys.iit.edu/~kmcivor/plot_day/ Vinj, you said you were using a derivation of plot_day_summary() to plot charts of size 1000x700. I have no idea what you mean about the chart size. Having access to your plotting function and an example data set would allow me to write a more accurate profiling script. Ken
A poor matlab user trying to figure out how to make his plots look as good as matplotlib http://groups-beta.google.com/group/comp.soft-sys.matlab/browse_thread/thread/49da04677b84c563/6eee1ce5e9c5d9b4?q=matplotlib&rnum=1&hl=en#6eee1ce5e9c5d9b4 :-) !! JDH
Something I mentioned before may be related: there is a size problem for SVG output, and certainly for EPS. Every point of a curve is present inside the eps file, even the points that fall outside the axes. To see this effect you can do as I explain in this message: http://sourceforge.net/mailarchive/message.php?msg_id=11005531

To avoid the problem I cut the spectra by hand before plotting them. It's not very convenient, and it means an extra step before plotting anything, but it works and the size of the output file decreases dramatically.

my 2 cents.
N.

Matt Newville wrote:
>John,
>
>If I understand correctly, the agg backend uses an image
>representation that has an rgb triple for each pixel. Is
>that correct?
>
>Most 2d line drawings (generally the kind you expect to go very
>fast) use very few colors (certainly fewer than 100). In that
>case, using a color table could cut the image size by ~1/3.
>
>That might actually help performance, as it seems that the size
>of the image (and the transfer to backend -- with wx, for
>example, this must be converted to a wx bitmap) may be a
>significant bottleneck, at least from what I see.
>
>I'm not saying that this would be easy to do, and it would not
>be very useful for most images, but it might help performance
>for 2d line drawings.
>
>--Matt
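A short sketch of the workaround described just above: mask the data to the x-range you actually intend to show before plotting, so the vector output does not carry every off-axes vertex (the data and window here are invented for illustration):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 1000, 200000)       # a long "spectrum"
y = np.sin(x) * np.exp(-x / 500.0)

xmin, xmax = 100, 120                  # the window the axes will actually show
keep = (x >= xmin) & (x <= xmax)       # cut the data before plotting

plt.plot(x[keep], y[keep])
plt.xlim(xmin, xmax)
plt.savefig('window.eps')              # far fewer points end up in the file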
John,

If I understand correctly, the agg backend uses an image representation that has an rgb triple for each pixel. Is that correct?

Most 2d line drawings (generally the kind you expect to go very fast) use very few colors (certainly fewer than 100). In that case, using a color table could cut the image size by ~1/3.

That might actually help performance, as it seems that the size of the image (and the transfer to backend -- with wx, for example, this must be converted to a wx bitmap) may be a significant bottleneck, at least from what I see.

I'm not saying that this would be easy to do, and it would not be very useful for most images, but it might help performance for 2d line drawings.

--Matt
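To make the color-table idea concrete without touching Agg itself, here is one post-hoc illustration using Pillow to quantize an already-saved PNG down to a small palette (this happens outside matplotlib; the Agg backend still renders into a full RGB(A) buffer internally, so this only shrinks the file on disk):

from PIL import Image

im = Image.open('chart.png')                                        # RGB(A) output from savefig
indexed = im.convert('RGB').convert('P', palette=Image.ADAPTIVE, colors=16)
indexed.save('chart_indexed.png')                                   # indexed PNG, typically much smaller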
>>>>> "Chris" == Chris Barker <Chr...@no...> writes: Chris> Someone here may have some ideas, but you're really going Chris> to need to do some profiling to find out where your Chris> bottlenecks are. If most of the time is spent in the actual Chris> Agg drawing calls, no amount of Pyrex, psyco, etc. will Chris> help. Seconded. Spend some time with the profiler and when you find the bottlenecks, post what you learn. Chris> By the way, I haven't looked in the source, but does Chris> matplotlib (and the Agg back-end) use "vectorized" calls to Chris> the drawing? As an example, with wxPython, calling Chris> DC.DrawPointList() with an NX2 array of point coordinates Chris> is orders of magnitude faster than looping through the Chris> array and calling DC.DrawPoint() thousands of times. The Chris> overhead of that round-trip between Python and C++ is Chris> substantial. Maybe tricks like that could speed up the Agg Chris> back-end too. No, there is no low hanging fruit like that for the typical use cases (line and marker drawing in agg). All of those loops happen in extension code over numerix arrays. But there are plenty of opportunities for optimization in matplotlib and the way it is used, so code snippets and profiler results will be most helpful. JDH
Someone here may have some ideas, but you're really going to need to do some profiling to find out where your bottlenecks are. If most of the time is spent in the actual Agg drawing calls, no amount of Pyrex, psyco, etc. will help.

By the way, I haven't looked in the source, but does matplotlib (and the Agg back-end) use "vectorized" calls to the drawing? As an example, with wxPython, calling DC.DrawPointList() with an NX2 array of point coordinates is orders of magnitude faster than looping through the array and calling DC.DrawPoint() thousands of times. The overhead of that round-trip between Python and C++ is substantial. Maybe tricks like that could speed up the Agg back-end too.

-Chris

Vinj Vinj wrote:
> I'm currently using the Agg backend, and my graphs
> are taking about 2.5 seconds each. I'm primarily using
> matplotlib to generate stock ticker graphs and will be
> generating tens of thousands of graphs throughout the
> day.
>
> I read that SVG was the fastest backend. I tried
> it, but the speed comes out to be around 4 seconds. The
> file size from Agg is about 50k and from SVG is about
> 400k, which might explain why it's taking so long.
>
> I'm using a derivation of plotDaySummary to do the
> charts. The charts are large: 1000x700.
>
> I tried psyco, but that improved the chart times by
> about 7%.
>
> Any and all suggestions would be appreciated. Do you
> think rewriting plot_day_summary with Pyrex would
> help?
>
> Thanks,
>
> VJ

--
Christopher Barker, Ph.D.
Oceanographer
NOAA/OR&R/HAZMAT (206) 526-6959 voice
7600 Sand Point Way NE (206) 526-6329 fax
Seattle, WA 98115 (206) 526-6317 main reception
Chr...@no...
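A rough sketch of the kind of profiling suggested above, using the standard library profiler (make_chart is a placeholder for the plot_day_summary-based routine, not a real matplotlib function):

import cProfile
import pstats
import numpy as np
import matplotlib
matplotlib.use('Agg')                   # render off-screen, as in the original report
import matplotlib.pyplot as plt

def make_chart():
    # Placeholder chart; swap in the real chart-generation code here.
    fig = plt.figure(figsize=(10, 7))
    ax = fig.add_subplot(111)
    ax.plot(np.arange(10000), np.random.rand(10000))
    fig.savefig('chart.png')
    plt.close(fig)

cProfile.run('make_chart()', 'chart.prof')
pstats.Stats('chart.prof').sort_stats('cumulative').print_stats(20)   # top 20 entries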
I'm currently using the Agg backend, and my graphs are taking about 2.5 seconds each. I'm primarily using matplotlib to generate stock ticker graphs and will be generating tens of thousands of graphs throughout the day.

I read that SVG was the fastest backend. I tried it, but the speed comes out to be around 4 seconds. The file size from Agg is about 50k and from SVG is about 400k, which might explain why it's taking so long.

I'm using a derivation of plotDaySummary to do the charts. The charts are large: 1000x700.

I tried psyco, but that improved the chart times by about 7%.

Any and all suggestions would be appreciated. Do you think rewriting plot_day_summary with Pyrex would help?

Thanks,

VJ