On Thu, Jul 5, 2012 at 12:17 PM, Benjamin Root <ben...@ou...> wrote: > > Actually, looking at Fabrice's code, you might be able to get it to be > slightly faster. Lines 39-41 should be protected by a "if i == 0" > statement because it only needs to be done once. Furthermore, you might > get some more improvements if you allow the subplots to share_all, in which > case, you only need to set the limits and maybe the scale and the locator > once. > > Cheers! > Ben Root > > Good catch. Moving lines 39-41 into the "if i == 0" block makes the label texts appear jagged. See my output for this case at -> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed3_jaggedlabels.pdf Putting these lines right below the main fig and grid object creations makes them look normal, and this case saves me 3-5 more seconds. Setting the share_all option to 1 places the x-ticks unreasonably on the axes, as if the share_all option were applied only to the first plot call. See the example output at -> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed3_badxaxes.pdf I actually started with share_all=1 from this example -> http://matplotlib.sourceforge.net/mpl_toolkits/axes_grid/examples/demo_axes_grid.py particularly the construction given in def demo_grid_with_single_cbar(fig). However, I noticed this behavior earlier, and explicit grid calls solved the issue.
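For readers without access to the linked scripts, the arrangement being discussed looks roughly like the sketch below: the figure and grid are created once, and the per-axes decorations are applied right below them rather than inside the plotting loop. The grid shape, limits, and locator settings are illustrative placeholders, not values taken from test_speed3.py.

    import matplotlib.pyplot as plt
    from matplotlib.ticker import LogLocator
    from mpl_toolkits.axes_grid1 import Grid

    fig = plt.figure(figsize=(11, 8))
    grid = Grid(fig, 111, nrows_ncols=(4, 4), axes_pad=0.3, share_all=False)

    # Decorate every panel exactly once, right below the fig/grid creation,
    # instead of repeating this work inside the per-page plotting loop.
    for i in range(16):
        ax = grid[i]
        ax.set_xscale('log')
        ax.set_xlim(1e-2, 1e2)
        ax.xaxis.set_major_locator(LogLocator(numticks=5))

    # With share_all=True the limits/scale/locator would only need to be set
    # on a single axes, but see the tick-placement issue described above.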
On Thu, Jul 5, 2012 at 1:55 PM, Gökhan Sever <gok...@gm...> wrote: > And you might get back more memory if you didn't have to have all the data >> in memory at once, but that may or may not help you. The only other >> suggestion I can make is to attempt to eliminate the overhead in the inner >> loop. Essentially, I would try making a single figure and a single >> AxesGrid object (before the outer loop). Then go over each subplot in the >> AxesGrid object and set the limits, the log scale, the ticks and the tick >> locater (I wouldn't be surprised if that is eating up cpu cycles). All of >> this would be done once before the loop you have right now. Then create >> the PdfPages object, and loop over all of the plots you have, essentially >> recycling the figure and AxesGrid object. >> >> At end of the outer loop, instead of closing the figure, you should call >> "remove()" for each plot element you made. Essentially, as you loop over >> the inner loop, save the output of the plot() call to a list, and then when >> done with those plots, pop each element of that list and call "remove()" to >> take it out of the subplot. This will let the subplot axes retain the >> properties you set earlier. >> >> I hope that made sense. >> Ben Root >> >> > Hi Ben, > > I should have data the available at once, as I directly read that array > from a netcdf file. The memory requirement for my data is small comparing > to overhead added once plot creation is started. Fabrice's reply includes > most of what you describe except the remove call part. These changes made > big impact to lower my execution times. Thank you again for your > explanation. > > > -- > Gökhan > Actually, looking at Fabrice's code, you might be able to get it to be slightly faster. Lines 39-41 should be protected by a "if i == 0" statement because it only needs to be done once. Furthermore, you might get some more improvements if you allow the subplots to share_all, in which case, you only need to set the limits and maybe the scale and the locator once. Cheers! Ben Root
> > >> 38 * 16 = 608 > 80 / 608 = 0.1316 seconds per plot > > At this point, I doubt you are going to get much more speed-ups. Glad to > be of help! > > Fabrice -- Good suggestion! I should have thought of that given how much > I use that technique in doing animation. > > Ben Root > >

I am including profiled runs for the record -- only the first 10 lines, to keep the e-mail shorter. Total times are longer than in the raw (non-profiled) executions; I believe the profiled run has its own call overhead.

I1 run -p test_speed.py

    171889738 function calls (169109959 primitive calls) in 374.311 seconds
    Ordered by: internal time

    ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
    4548012         34.583   0.000    34.583   0.000    {numpy.core.multiarray.array}
    1778401         21.012   0.000    46.227   0.000    path.py:86(__init__)
    521816          17.844   0.000    17.844   0.000    artist.py:74(__init__)
    2947090         15.432   0.000    15.432   0.000    weakref.py:243(__init__)
    1778401         9.515    0.000    9.515    0.000    {method 'all' of 'numpy.ndarray' objects}
    13691669        8.654    0.000    8.654    0.000    {getattr}
    1085280         8.550    0.000    17.629   0.000    core.py:2749(_update_from)
    1299904         7.809    0.000    76.060   0.000    markers.py:115(_recache)
    38              7.378    0.194    7.378    0.194    {gc.collect}
    13564851        6.768    0.000    6.768    0.000    {isinstance}

I1 run -p test_speed3.py

    61658708 function calls (60685172 primitive calls) in 100.934 seconds
    Ordered by: internal time

    ncalls          tottime  percall  cumtime  percall  filename:lineno(function)
    937414          6.638    0.000    6.638    0.000    {numpy.core.multiarray.array}
    374227          4.377    0.000    7.500    0.000    path.py:198(iter_segments)
    6974613         3.866    0.000    3.866    0.000    {getattr}
    542640          3.809    0.000    7.900    0.000    core.py:2749(_update_from)
    141361          3.665    0.000    7.136    0.000    transforms.py:99(invalidate)
    324688/161136   2.780    0.000    27.747   0.000    transforms.py:1729(transform)
    64448           2.753    0.000    64.921   0.001    lines.py:463(draw)
    231195          2.748    0.000    7.072    0.000    path.py:86(__init__)
    684970/679449   2.679    0.000    3.888    0.000    backend_pdf.py:128(pdfRepr)
    67526           2.651    0.000    7.522    0.000    backend_pdf.py:1226(pathOperations)

-- Gökhan
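For anyone who wants to reproduce this kind of measurement outside IPython's "run -p", a rough equivalent using the standard-library profiler is sketched below; the file names are placeholders.

    # Shell: profile the script and dump the statistics to a file:
    #     python -m cProfile -o speed3.prof test_speed3.py

    # Python: print the ten most expensive functions by internal time,
    # matching the "Ordered by: internal time" listings above.
    import pstats
    pstats.Stats('speed3.prof').sort_stats('time').print_stats(10)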
> > And you might get back more memory if you didn't have to have all the data > in memory at once, but that may or may not help you. The only other > suggestion I can make is to attempt to eliminate the overhead in the inner > loop. Essentially, I would try making a single figure and a single > AxesGrid object (before the outer loop). Then go over each subplot in the > AxesGrid object and set the limits, the log scale, the ticks and the tick > locater (I wouldn't be surprised if that is eating up cpu cycles). All of > this would be done once before the loop you have right now. Then create > the PdfPages object, and loop over all of the plots you have, essentially > recycling the figure and AxesGrid object. > > At end of the outer loop, instead of closing the figure, you should call > "remove()" for each plot element you made. Essentially, as you loop over > the inner loop, save the output of the plot() call to a list, and then when > done with those plots, pop each element of that list and call "remove()" to > take it out of the subplot. This will let the subplot axes retain the > properties you set earlier. > > I hope that made sense. > Ben Root > > Hi Ben, I need to have all the data available at once, as I read that array directly from a netCDF file. The memory requirement for my data is small compared to the overhead added once plot creation starts. Fabrice's reply includes most of what you describe, except for the remove() call part. These changes made a big impact on lowering my execution times. Thank you again for your explanation. -- Gökhan
On Thu, Jul 5, 2012 at 1:45 PM, Gökhan Sever <gok...@gm...> wrote: > > > On Thu, Jul 5, 2012 at 11:15 AM, Fabrice Silva <si...@lm...>wrote: > >> >> >> > At end of the outer loop, instead of closing the figure, you should >> > call "remove()" for each plot element you made. Essentially, as you >> > loop over the inner loop, save the output of the plot() call to a >> > list, and then when done with those plots, pop each element of that >> > list and call "remove()" to take it out of the subplot. This will let >> > the subplot axes retain the properties you set earlier. >> >> Instead of remove()'ing the graphical elements, you can also reuse them >> if the kind of plots you intend to do is the same along the figure >> for simple plots. See : http://paste.debian.net/177857/ > > > I was close to getting the script run as you pasted. (One minor correction > in your script is indexing L1 and L2, either L1[0] or L1, (comma) required > in the assignments since grid.plot returns a list) The key here was "reuse" > as you told. Memory consumption almost drops to half comparing to the > test_speed2.py script run. Now I am down to ~1 minutes from about ~4 > minutes execution times, which is indeed quite significant, provided that I > experiment on 6 such 38 pages plots. > > nums = 2 > I1 time run test_speed3.py > CPU times: user 8.19 s, sys: 0.07 s, total: 8.26 s > Wall time: 8.49 s > > nums=38 > I1 time run test_speed3.py > CPU times: user 78.84 s, sys: 0.19 s, total: 79.03 s > Wall time: 80.88 s > > Thanks Fabrice for your feedback. > > 38 * 16 = 608 80 / 608 = 0.1316 seconds per plot At this point, I doubt you are going to get much more speed-ups. Glad to be of help! Fabrice -- Good suggestion! I should have thought of that given how much I use that technique in doing animation. Ben Root
On Thu, Jul 5, 2012 at 11:15 AM, Fabrice Silva <si...@lm...> wrote: > > > > At end of the outer loop, instead of closing the figure, you should > > call "remove()" for each plot element you made. Essentially, as you > > loop over the inner loop, save the output of the plot() call to a > > list, and then when done with those plots, pop each element of that > > list and call "remove()" to take it out of the subplot. This will let > > the subplot axes retain the properties you set earlier. > > Instead of remove()'ing the graphical elements, you can also reuse them > if the kind of plots you intend to do is the same along the figure > for simple plots. See : http://paste.debian.net/177857/

I was close to getting the script to run as you pasted it. (One minor correction in your script: the L1 and L2 assignments need either indexing (L1[0]) or a trailing comma (L1, = ...), since grid.plot returns a list.) The key here was "reuse", as you said. Memory consumption drops almost to half compared to the test_speed2.py run. Now I am down to ~1 minute from about ~4 minutes of execution time, which is indeed quite significant, given that I produce 6 such 38-page plots.

nums = 2
I1 time run test_speed3.py
CPU times: user 8.19 s, sys: 0.07 s, total: 8.26 s
Wall time: 8.49 s

nums = 38
I1 time run test_speed3.py
CPU times: user 78.84 s, sys: 0.19 s, total: 79.03 s
Wall time: 80.88 s

Thanks, Fabrice, for your feedback.
> At end of the outer loop, instead of closing the figure, you should > call "remove()" for each plot element you made. Essentially, as you > loop over the inner loop, save the output of the plot() call to a > list, and then when done with those plots, pop each element of that > list and call "remove()" to take it out of the subplot. This will let > the subplot axes retain the properties you set earlier. Instead of remove()'ing the graphical elements, you can also reuse them if the kind of plot you intend to draw is the same from figure to figure (for simple plots). See: http://paste.debian.net/177857/ -- Fabrice Silva
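The paste link above may no longer resolve; the reuse idea is roughly the minimal sketch below, with made-up data, a single axes in place of the AxesGrid, and PNG output purely for illustration.

    import numpy as np
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    line, = ax.plot([], [], 'k.')   # note the trailing comma: plot() returns a list
    ax.set_xlim(0, 100)
    ax.set_ylim(0, 1)

    for page in range(3):
        y = np.random.rand(100)
        line.set_data(np.arange(y.size), y)   # reuse the same Line2D object
        fig.savefig('page_%d.png' % page)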
On Thu, Jul 5, 2012 at 11:36 AM, Gökhan Sever <gok...@gm...> wrote: > > > On Thu, Jul 5, 2012 at 8:45 AM, Gökhan Sever <gok...@gm...> wrote: > >> On Thu, Jul 5, 2012 at 7:29 AM, Benjamin Root <ben...@ou...> wrote: >> >>> >>> >>> On Wed, Jul 4, 2012 at 1:17 PM, Gökhan Sever <gok...@gm...> wrote: >>> >>>> Hello, >>>> >>>> I am working on creating some distribution plots to analyze cloud >>>> droplet and drop features. You can see one such plot at >>>> http://atmos.uwyo.edu/~gsever/data/rf06_1second/rf06_belowcloud_SurfaceArea_1second.pdf >>>> This file contains 38 pages and each page has 16 panels created via >>>> MPL's AxesGrid toolkit. I am using PdfPages from pdf backend profile to >>>> construct this multi-page plot. The original code that is used to create >>>> this plot is in >>>> http://code.google.com/p/ccnworks/source/browse/trunk/parcel_drizzle/rf06_moments.py >>>> >>>> The problem I am reporting is due to the lengthier plot creation times. >>>> It takes about 4 minutes to create such plot in my laptop. To better >>>> demonstrate the issue I created a sample script which you can use to >>>> reproduce my timing results --well based on pseudo/random data points. All >>>> my data points in the original script are float64 so I use float64 in the >>>> sample script as well. >>>> >>>> The script is at >>>> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I also >>>> included 2 pages output running the script with nums=2 setting >>>> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf >>>> Comparing my original output, indeed cloud particles are not from a >>>> normal distribution :) >>>> >>>> Joke aside, running with nums=2 for 2 pages >>>> >>>> time run test_speed.py >>>> CPU times: user 12.39 s, sys: 0.10 s, total: 12.49 s >>>> Wall time: 12.84 s >>>> >>>> when nums=38, just like my original script, then I get similar timing >>>> to my original run >>>> >>>> time run test.py >>>> CPU times: user 227.39 s, sys: 1.74 s, total: 229.13 s >>>> Wall time: 234.87 s >>>> >>>> In addition to these longer plot creation times, 38 pages plot creation >>>> consumes about 3 GB memory. I am wondering if there are tricks to improve >>>> plot creation times as well as more efficiently using the memory. >>>> Attempting to create two such distributions blocks my machine eating 6 GB >>>> of ram space. >>>> >>>> Using Python 2.7, NumPy 2.0.0.dev-7e202a2, IPython >>>> 0.13.beta1, matplotlib 1.1.1rc on Fedora 16 (x86_64) >>>> >>>> Thanks. >>>> >>>> -- >>>> Gökhan >>>> >>>> >>> Gokhan, >>> >>> Looking through your code, I see that you have all of the figure objects >>> available all at once, rather than one at a time. In belowcloud_M0(), you >>> create all of your figure objects and AxesGrid objects in list >>> comprehensions, and then you have multiple for-loops that performs a >>> particular action on each of these. Then you create your PdfPages object >>> and loop over each of the figures, saving it to the page. >>> >>> I would do it quite differently. At the beginning of the function, >>> create your PdfPages object. Then have a single loop over "range(nums)" >>> where you create a figure object and an AxesGrid object. Do your 16 (or >>> less) plots, and any other text you need for that figure. Save it to the >>> PdfPage object, and then close the figure object. When the loop is done, >>> close the PdfPages object. >>> >>> I think you will see huge performance improvement that way. >>> >>> Cheers! >>> Ben Root >>> >>> >> >> Hi, >> >> Could you try the files again? I believe I have given read permission for >> outside access. >> >> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py >> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf >> >> >> Ben, >> >> Thanks for your suggestion. I will give it a try and report back here. >> >> >> -- >> Gökhan >> > > Hi, > > Please ignore the "pass" line in the first for loop in > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py > > I have the 2nd version of this script at > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed2.py > following Ben's suggestion. Now there are two main loops, the former is > for one Figure and AxesGrid object creation, then the inner loop is for > plotting and decorating the grid objects. I am assuming that my inner loop > is correct. Tracking the xx variable helps me to plot the right index from > the concX data array on to the right grid element. > > Timings are below. As you can see, these are similar to my test_speed.py > version of the runs. > > nums = 2 > I1 time run test_speed2.py > CPU times: user 10.85 s, sys: 0.10 s, total: 10.95 s > Wall time: 11.19 s > > nums=38 > I1 time run test_speed2.py > CPU times: user 232.73 s, sys: 0.28 s, total: 233.01 s > Wall time: 238.75 s > > However, I have my 3GB memory back free. > > Any other suggestions? > > > -- > Gökhan > > And you might get back more memory if you didn't have to have all the data in memory at once, but that may or may not help you. The only other suggestion I can make is to attempt to eliminate the overhead in the inner loop. Essentially, I would try making a single figure and a single AxesGrid object (before the outer loop). Then go over each subplot in the AxesGrid object and set the limits, the log scale, the ticks and the tick locater (I wouldn't be surprised if that is eating up cpu cycles). All of this would be done once before the loop you have right now. Then create the PdfPages object, and loop over all of the plots you have, essentially recycling the figure and AxesGrid object. At end of the outer loop, instead of closing the figure, you should call "remove()" for each plot element you made. Essentially, as you loop over the inner loop, save the output of the plot() call to a list, and then when done with those plots, pop each element of that list and call "remove()" to take it out of the subplot. This will let the subplot axes retain the properties you set earlier. I hope that made sense. Ben Root
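In code, the recycling pattern described in the last paragraph looks roughly like the sketch below; a plain 4x4 subplot grid stands in for the AxesGrid, and the data are random placeholders.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.backends.backend_pdf import PdfPages

    fig, axes = plt.subplots(4, 4, figsize=(11, 8))
    # ... set the limits, scales and tick locators on each axes once, here ...

    pdf = PdfPages('pages.pdf')
    for page in range(38):
        artists = []
        for ax in axes.flat:
            artists.extend(ax.plot(np.random.rand(100), 'k.'))
        pdf.savefig(fig)
        while artists:
            artists.pop().remove()   # take the lines out; the axes keep their settings
    pdf.close()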
On Thu, Jul 5, 2012 at 8:45 AM, Gökhan Sever <gok...@gm...> wrote: > On Thu, Jul 5, 2012 at 7:29 AM, Benjamin Root <ben...@ou...> wrote: > >> >> >> On Wed, Jul 4, 2012 at 1:17 PM, Gökhan Sever <gok...@gm...> wrote: >> >>> Hello, >>> >>> I am working on creating some distribution plots to analyze cloud >>> droplet and drop features. You can see one such plot at >>> http://atmos.uwyo.edu/~gsever/data/rf06_1second/rf06_belowcloud_SurfaceArea_1second.pdf >>> This file contains 38 pages and each page has 16 panels created via >>> MPL's AxesGrid toolkit. I am using PdfPages from pdf backend profile to >>> construct this multi-page plot. The original code that is used to create >>> this plot is in >>> http://code.google.com/p/ccnworks/source/browse/trunk/parcel_drizzle/rf06_moments.py >>> >>> The problem I am reporting is due to the lengthier plot creation times. >>> It takes about 4 minutes to create such plot in my laptop. To better >>> demonstrate the issue I created a sample script which you can use to >>> reproduce my timing results --well based on pseudo/random data points. All >>> my data points in the original script are float64 so I use float64 in the >>> sample script as well. >>> >>> The script is at >>> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I also >>> included 2 pages output running the script with nums=2 setting >>> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf >>> Comparing my original output, indeed cloud particles are not from a >>> normal distribution :) >>> >>> Joke aside, running with nums=2 for 2 pages >>> >>> time run test_speed.py >>> CPU times: user 12.39 s, sys: 0.10 s, total: 12.49 s >>> Wall time: 12.84 s >>> >>> when nums=38, just like my original script, then I get similar timing to >>> my original run >>> >>> time run test.py >>> CPU times: user 227.39 s, sys: 1.74 s, total: 229.13 s >>> Wall time: 234.87 s >>> >>> In addition to these longer plot creation times, 38 pages plot creation >>> consumes about 3 GB memory. I am wondering if there are tricks to improve >>> plot creation times as well as more efficiently using the memory. >>> Attempting to create two such distributions blocks my machine eating 6 GB >>> of ram space. >>> >>> Using Python 2.7, NumPy 2.0.0.dev-7e202a2, IPython >>> 0.13.beta1, matplotlib 1.1.1rc on Fedora 16 (x86_64) >>> >>> Thanks. >>> >>> -- >>> Gökhan >>> >>> >> Gokhan, >> >> Looking through your code, I see that you have all of the figure objects >> available all at once, rather than one at a time. In belowcloud_M0(), you >> create all of your figure objects and AxesGrid objects in list >> comprehensions, and then you have multiple for-loops that performs a >> particular action on each of these. Then you create your PdfPages object >> and loop over each of the figures, saving it to the page. >> >> I would do it quite differently. At the beginning of the function, >> create your PdfPages object. Then have a single loop over "range(nums)" >> where you create a figure object and an AxesGrid object. Do your 16 (or >> less) plots, and any other text you need for that figure. Save it to the >> PdfPage object, and then close the figure object. When the loop is done, >> close the PdfPages object. >> >> I think you will see huge performance improvement that way. >> >> Cheers! >> Ben Root >> >> > > Hi, > > Could you try the files again? I believe I have given read permission for > outside access. > > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf > > > Ben, > > Thanks for your suggestion. I will give it a try and report back here. > > > -- > Gökhan >

Hi, Please ignore the "pass" line in the first for loop in http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I have the 2nd version of this script at http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed2.py following Ben's suggestion. Now there are two main loops: the outer one creates one Figure and one AxesGrid object, and the inner one plots on and decorates the grid elements. I am assuming that my inner loop is correct. Tracking the xx variable helps me plot the right index of the concX data array onto the right grid element. Timings are below. As you can see, these are similar to the runs of my test_speed.py version.

nums = 2
I1 time run test_speed2.py
CPU times: user 10.85 s, sys: 0.10 s, total: 10.95 s
Wall time: 11.19 s

nums = 38
I1 time run test_speed2.py
CPU times: user 232.73 s, sys: 0.28 s, total: 233.01 s
Wall time: 238.75 s

However, I now have my 3 GB of memory back free. Any other suggestions? -- Gökhan
On Thu, Jul 5, 2012 at 7:29 AM, Benjamin Root <ben...@ou...> wrote: > > > On Wed, Jul 4, 2012 at 1:17 PM, Gökhan Sever <gok...@gm...>wrote: > >> Hello, >> >> I am working on creating some distribution plots to analyze cloud droplet >> and drop features. You can see one such plot at >> http://atmos.uwyo.edu/~gsever/data/rf06_1second/rf06_belowcloud_SurfaceArea_1second.pdf >> This file contains 38 pages and each page has 16 panels created via >> MPL's AxesGrid toolkit. I am using PdfPages from pdf backend profile to >> construct this multi-page plot. The original code that is used to create >> this plot is in >> http://code.google.com/p/ccnworks/source/browse/trunk/parcel_drizzle/rf06_moments.py >> >> The problem I am reporting is due to the lengthier plot creation times. >> It takes about 4 minutes to create such plot in my laptop. To better >> demonstrate the issue I created a sample script which you can use to >> reproduce my timing results --well based on pseudo/random data points. All >> my data points in the original script are float64 so I use float64 in the >> sample script as well. >> >> The script is at >> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I also >> included 2 pages output running the script with nums=2 setting >> http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf >> Comparing my original output, indeed cloud particles are not from a >> normal distribution :) >> >> Joke aside, running with nums=2 for 2 pages >> >> time run test_speed.py >> CPU times: user 12.39 s, sys: 0.10 s, total: 12.49 s >> Wall time: 12.84 s >> >> when nums=38, just like my original script, then I get similar timing to >> my original run >> >> time run test.py >> CPU times: user 227.39 s, sys: 1.74 s, total: 229.13 s >> Wall time: 234.87 s >> >> In addition to these longer plot creation times, 38 pages plot creation >> consumes about 3 GB memory. I am wondering if there are tricks to improve >> plot creation times as well as more efficiently using the memory. >> Attempting to create two such distributions blocks my machine eating 6 GB >> of ram space. >> >> Using Python 2.7, NumPy 2.0.0.dev-7e202a2, IPython >> 0.13.beta1, matplotlib 1.1.1rc on Fedora 16 (x86_64) >> >> Thanks. >> >> -- >> Gökhan >> >> > Gokhan, > > Looking through your code, I see that you have all of the figure objects > available all at once, rather than one at a time. In belowcloud_M0(), you > create all of your figure objects and AxesGrid objects in list > comprehensions, and then you have multiple for-loops that performs a > particular action on each of these. Then you create your PdfPages object > and loop over each of the figures, saving it to the page. > > I would do it quite differently. At the beginning of the function, create > your PdfPages object. Then have a single loop over "range(nums)" where you > create a figure object and an AxesGrid object. Do your 16 (or less) plots, > and any other text you need for that figure. Save it to the PdfPage > object, and then close the figure object. When the loop is done, close the > PdfPages object. > > I think you will see huge performance improvement that way. > > Cheers! > Ben Root > > Hi, Could you try the files again? I believe I have given read permission for outside access. http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf Ben, Thanks for your suggestion. I will give it a try and report back here. -- Gökhan
On Thu, Jul 5, 2012 at 8:12 AM, David Kremer <dav...@gm...> wrote: > Hello, > > I know how one can increase the font size in the xlabel and ylabel > fields. I use the method presented in: > > > http://matplotlib.sourceforge.net/examples/pylab_examples/dannys_example.html > > But I don't know how to increase the font size for the numbers labelling > the xticks or the yticks, and found nothing in the doc. Can you help? > > David > > Hi David, You can call `tick_params`; e.g., plt.tick_params(labelsize=20). If you want to change just one axis: ax.xaxis.set_tick_params(labelsize=20). This assumes your axes object (from `plt.subplots`, `fig.add_subplot`, `plt.gca()`, etc.) is named `ax`. Best, -Tony
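Put together, the two calls mentioned above look like this in a minimal script; the sizes are arbitrary.

    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    ax.plot(range(10))
    ax.tick_params(labelsize=20)               # tick labels on both axes
    # or, to change only one axis:
    # ax.xaxis.set_tick_params(labelsize=20)
    plt.show()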
On Wed, Jul 4, 2012 at 1:17 PM, Gökhan Sever <gok...@gm...> wrote: > Hello, > > I am working on creating some distribution plots to analyze cloud droplet > and drop features. You can see one such plot at > http://atmos.uwyo.edu/~gsever/data/rf06_1second/rf06_belowcloud_SurfaceArea_1second.pdf > This file contains 38 pages and each page has 16 panels created via > MPL's AxesGrid toolkit. I am using PdfPages from pdf backend profile to > construct this multi-page plot. The original code that is used to create > this plot is in > http://code.google.com/p/ccnworks/source/browse/trunk/parcel_drizzle/rf06_moments.py > > The problem I am reporting is due to the lengthier plot creation times. It > takes about 4 minutes to create such plot in my laptop. To better > demonstrate the issue I created a sample script which you can use to > reproduce my timing results --well based on pseudo/random data points. All > my data points in the original script are float64 so I use float64 in the > sample script as well. > > The script is at > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I also > included 2 pages output running the script with nums=2 setting > http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf > Comparing my original output, indeed cloud particles are not from a normal > distribution :) > > Joke aside, running with nums=2 for 2 pages > > time run test_speed.py > CPU times: user 12.39 s, sys: 0.10 s, total: 12.49 s > Wall time: 12.84 s > > when nums=38, just like my original script, then I get similar timing to > my original run > > time run test.py > CPU times: user 227.39 s, sys: 1.74 s, total: 229.13 s > Wall time: 234.87 s > > In addition to these longer plot creation times, 38 pages plot creation > consumes about 3 GB memory. I am wondering if there are tricks to improve > plot creation times as well as more efficiently using the memory. > Attempting to create two such distributions blocks my machine eating 6 GB > of ram space. > > Using Python 2.7, NumPy 2.0.0.dev-7e202a2, IPython > 0.13.beta1, matplotlib 1.1.1rc on Fedora 16 (x86_64) > > Thanks. > > -- > Gökhan > > Gokhan, Looking through your code, I see that you have all of the figure objects available all at once, rather than one at a time. In belowcloud_M0(), you create all of your figure objects and AxesGrid objects in list comprehensions, and then you have multiple for-loops that performs a particular action on each of these. Then you create your PdfPages object and loop over each of the figures, saving it to the page. I would do it quite differently. At the beginning of the function, create your PdfPages object. Then have a single loop over "range(nums)" where you create a figure object and an AxesGrid object. Do your 16 (or less) plots, and any other text you need for that figure. Save it to the PdfPage object, and then close the figure object. When the loop is done, close the PdfPages object. I think you will see huge performance improvement that way. Cheers! Ben Root
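A compressed sketch of the per-page structure Ben describes follows; a plain subplot grid and random data stand in for the AxesGrid panels of the original script.

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.backends.backend_pdf import PdfPages

    nums = 38
    pdf = PdfPages('distributions.pdf')
    for page in range(nums):
        fig, axes = plt.subplots(4, 4, figsize=(11, 8))
        for ax in axes.flat:
            ax.plot(np.random.rand(100), 'k.')
        pdf.savefig(fig)
        plt.close(fig)   # release the figure before building the next page
    pdf.close()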
Hello, I know how one can increase the font size in the xlabel and ylabel fields. I use the method presented in: http://matplotlib.sourceforge.net/examples/pylab_examples/dannys_example.html But I don't know how to increase the font size for the numbers labelling the xticks or the yticks, and I found nothing in the docs. Can you help? David
Your files do not seem to be readable: http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf Nicolas On Jul 4, 2012, at 19:17, Gökhan Sever wrote: > Hello, > > I am working on creating some distribution plots to analyze cloud droplet and drop features. You can see one such plot at http://atmos.uwyo.edu/~gsever/data/rf06_1second/rf06_belowcloud_SurfaceArea_1second.pdf > This file contains 38 pages and each page has 16 panels created via MPL's AxesGrid toolkit. I am using PdfPages from pdf backend profile to construct this multi-page plot. The original code that is used to create this plot is in http://code.google.com/p/ccnworks/source/browse/trunk/parcel_drizzle/rf06_moments.py > > The problem I am reporting is due to the lengthier plot creation times. It takes about 4 minutes to create such plot in my laptop. To better demonstrate the issue I created a sample script which you can use to reproduce my timing results --well based on pseudo/random data points. All my data points in the original script are float64 so I use float64 in the sample script as well. > > The script is at http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.py I also included 2 pages output running the script with nums=2 setting http://atmos.uwyo.edu/~gsever/data/matplotlib/test_speed.pdf > Comparing my original output, indeed cloud particles are not from a normal distribution :) > > Joke aside, running with nums=2 for 2 pages > > time run test_speed.py > CPU times: user 12.39 s, sys: 0.10 s, total: 12.49 s > Wall time: 12.84 s > > when nums=38, just like my original script, then I get similar timing to my original run > > time run test.py > CPU times: user 227.39 s, sys: 1.74 s, total: 229.13 s > Wall time: 234.87 s > > In addition to these longer plot creation times, 38 pages plot creation consumes about 3 GB memory. I am wondering if there are tricks to improve plot creation times as well as more efficiently using the memory. Attempting to create two such distributions blocks my machine eating 6 GB of ram space. > > Using Python 2.7, NumPy 2.0.0.dev-7e202a2, IPython 0.13.beta1, matplotlib 1.1.1rc on Fedora 16 (x86_64) > > Thanks. > > -- > Gökhan
On Wed, Jul 4, 2012 at 11:11 PM, Jorge Scandaliaris <jor...@ya...> wrote: > > De: Tony Yu <ts...@gm...> > > > > > >Just a wild guess: Any chance you're using some GUI-toolkit-specific > functionality? > > > > > > > Can you elaborate please? I use the GTKAgg backend, and I guess IPython > has specific mechanisms (I don't know the details) for keeping both the > plot and the interpreter responsive, that's the --pylab option I mention. > However, why am I the only asking for help here? Is it that just a few > people use events and gtk backend and IPython? > My script, although larger and by now more complex, follows the lines of > the example I gave earlier... > > Jorge > Sorry, I wasn't suggesting that it was a problem with the GTK backend; I was asking if you were explicitly calling GTK. IPython --pylab starts an event loop for the default matplotlib backend (you can check this by printing `plt.rcParams['backend']`). If for some reason your default backend is not GTK and --pylab starts an event loop for that backend, then GTK-specific calls could cause problems. If this is the problem, you could set the backend in your matplotlibrc file. Alternatively, IPython has a -gthread command line option. Like I said, this is just a wild guess. Without example code, it's hard to pin down. -Tony P.S. sorry for the duplicate, Jorge. I forgot to reply-all.
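For reference, checking which backend the --pylab event loop picked up, and pinning it, looks like the sketch below; the matplotlibrc line goes in the configuration file, not in the script.

    import matplotlib
    # matplotlib.use('GTKAgg')        # one way to force a backend, before pyplot is imported
    import matplotlib.pyplot as plt

    print(plt.rcParams['backend'])    # the backend the --pylab event loop is using

    # In matplotlibrc:
    #     backend : GTKAgg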
> From: Tony Yu <ts...@gm...> > > >Just a wild guess: Any chance you're using some GUI-toolkit-specific functionality? > > Can you elaborate, please? I use the GTKAgg backend, and I guess IPython has specific mechanisms (I don't know the details) for keeping both the plot and the interpreter responsive; that's the --pylab option I mentioned. However, why am I the only one asking for help here? Is it that just a few people use events, the GTK backend, and IPython? My script, although larger and by now more complex, follows the lines of the example I gave earlier... Jorge
On Wed, Jul 4, 2012 at 10:14 PM, Jorge Scandaliaris <jor...@ya...>wrote: > Hi, > Are there any caveats when using events together with IPython that you > are aware of? I have some code that I use to interactively explore > images that works OK when run from a Python interpreter but does not > (see note) when run from IPython. With --pylab option the problems are > more evident, but don't disappear completely without it. If I test > with some of the examples (i.e. > http://matplotlib.sourceforge.net/examples/event_handling/data_browser.html > ), > they work OK in both cases. I can't really post a working example, as > my code has grown large, and I am still trying to pin point the root > cause. I am posting also at the IPython ml, who knows > > I'll appreciate any hints and or related problems you might have had, > that help me find a solution to this. > > Jorge > > > Note: Does not run refers to *some* of the event handling defined not > working, for example I have defined specific keys that trigger the > loading of a new image > > Just a wild guess: Any chance you're using some GUI-toolkit-specific functionality? -Tony
Hi, Are there any caveats that you are aware of when using events together with IPython? I have some code that I use to interactively explore images; it works OK when run from a Python interpreter but does not (see note) when run from IPython. With the --pylab option the problems are more evident, but they don't disappear completely without it. If I test with some of the examples (e.g. http://matplotlib.sourceforge.net/examples/event_handling/data_browser.html), they work OK in both cases. I can't really post a working example, as my code has grown large, and I am still trying to pinpoint the root cause. I am also posting at the IPython ML, who knows. I'll appreciate any hints and/or related problems you might have had that help me find a solution to this. Jorge Note: "Does not run" refers to *some* of the defined event handling not working; for example, I have defined specific keys that trigger the loading of a new image.
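For anyone trying to reproduce this kind of setup, a stripped-down version of the key handling being described is sketched below; the key choice and the random "image" are placeholders for the real image-loading step.

    import numpy as np
    import matplotlib.pyplot as plt

    fig, ax = plt.subplots()
    im = ax.imshow(np.random.rand(32, 32))

    def on_key(event):
        if event.key == 'n':                      # 'n' -> show the next image
            im.set_data(np.random.rand(32, 32))
            fig.canvas.draw_idle()

    fig.canvas.mpl_connect('key_press_event', on_key)
    plt.show()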