Can anyone recommend the best way to open a second Tk window from matplotlib? One where you might enter some data to be read by the main script in an interactive way. Currently, any Tk commands I attempt just readjust the plot window. J
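One possible approach, sketched below under the assumption that the TkAgg backend is in use: create a Tkinter Toplevel window parented to the figure's own Tk root, so it opens as a separate window instead of readjusting the plot. The dialog contents and the "d" key binding are only illustrative, not from the thread.

import tkinter as tk  # "Tkinter" on the Python 2 installs of that era
import matplotlib
matplotlib.use("TkAgg")
import matplotlib.pyplot as plt

def open_input_dialog(event):
    # A second, independent Tk window owned by the figure's Tk root.
    root = event.canvas.get_tk_widget().winfo_toplevel()
    dialog = tk.Toplevel(root)
    dialog.title("Enter a value")
    entry = tk.Entry(dialog)
    entry.pack(padx=10, pady=10)

    def submit(_evt=None):
        print("user entered:", entry.get())  # hand the value back to the main script here
        dialog.destroy()

    entry.bind("<Return>", submit)

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])
# Open the dialog when the "d" key is pressed over the figure.
fig.canvas.mpl_connect("key_press_event",
                       lambda ev: open_input_dialog(ev) if ev.key == "d" else None)
plt.show()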
Hi all, I use Matplotlib and LaTeX to produce essentially two types of documents: technical reports for a large corporation and scientific papers. Thus, I use several LaTeX cls files, each of which uses different fonts. What is the most convenient way to get Matplotlib to use the same fonts as my main document, and also to quickly switch between the different document types? So far I've tried reading a file with settings specific to the current document and using "rcParams.update(params)" to dynamically change the settings. This way I can get the right font for legends and labels, but I have not figured out how to get the correct fonts for the numbers on the x- and y-axes. Matplotlib uses whatever is the default in my LaTeX installation (Computer Modern?). I use "text.usetex: True". This must be a problem that I share with others. Does anyone have a good solution? Best regards, Johan
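A sketch of the rcParams.update approach described above. With text.usetex enabled, the tick labels are typeset by LaTeX itself, so the font has to be selected in text.latex.preamble as well as in font.family; otherwise LaTeX falls back to its default (Computer Modern) for the axis numbers. The two document styles and the font packages (times, lmodern) below are placeholders, not the poster's actual classes.

from matplotlib import rcParams

# One settings block per document type; swap in whatever packages your
# report and paper .cls files actually load.
DOC_STYLES = {
    "report": {"text.usetex": True,
               "font.family": "serif",
               # older matplotlib versions expect a list of strings here
               "text.latex.preamble": r"\usepackage{times}",
               "font.size": 10},
    "paper":  {"text.usetex": True,
               "font.family": "serif",
               "text.latex.preamble": r"\usepackage{lmodern}",
               "font.size": 11},
}

def use_document_style(name):
    # Updates every font-related setting at once, tick labels included.
    rcParams.update(DOC_STYLES[name])

use_document_style("report")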
On Sun, Jan 4, 2009 at 10:27 AM, Nitin Bhide <nit...@ya...> wrote: > I am using figure.legend function to create a legend. See the code line below. > > legend = fig.legend(ax.get_lines(), labellist, ncol=4, loc='upper center', prop=fontprop) > > However, using the loc='upper center' where legend overlapps the axes title. I also tried the setting loc='lower center'. In this case, figure legend overlapps the x-axis label. > > Am I missing some parameter ? How I can place the figure legend below the x-axis label ? Is there a way to place the figure legend below the axes title ? It always helps if you post complete working examples that illustrate your problem, then we can modify them to show you how to fix your problem. In this case you may want to look at fig.subplots_adjust to lower the top of your subplot so that the axes title does not overlap the figure legend:: fig.subplots_adjust(top=0.8) JDH
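For readers following along, here is a small self-contained version of the fix JDH describes: shrink the subplot area with fig.subplots_adjust so the figure-level legend at 'upper center' no longer collides with the axes title. The data and labels are invented for the example.

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
x = np.linspace(0, 10, 200)
for k in range(1, 5):
    ax.plot(x, np.sin(k * x), label="k=%d" % k)
ax.set_title("Axes title stays clear of the legend")

# Lower the top of the subplot, then put the figure legend in the freed space.
fig.subplots_adjust(top=0.8)
fig.legend(ax.get_lines(), [line.get_label() for line in ax.get_lines()],
           ncol=4, loc="upper center")
plt.show()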
I am using the figure.legend function to create a legend. See the code line below. legend = fig.legend(ax.get_lines(), labellist, ncol=4, loc='upper center', prop=fontprop) However, with loc='upper center' the legend overlaps the axes title. I also tried loc='lower center'; in that case the figure legend overlaps the x-axis label. Am I missing some parameter? How can I place the figure legend below the x-axis label? Is there a way to place the figure legend below the axes title? regards, Nitin -- Nitin Bhide -- "Never attribute to malice that which can be adequately explained by stupidity" -- Hanlon's razor
The recarrays were what csv2rec was returning, and that's why I was using it. This afternoon was the first time I had heard about recarrays, so I was trying to get my stuff done with the wrong tool. I've changed my code to use regular text-file loading and everything works great. Thanks, everybody, for the help! Anton Sent from my iPhone On Jan 3, 2009, at 5:00 PM, Ryan May <rm...@gm...> wrote: > True, but the problem in this case is that he wants to access by column number, which you can't really do with recarray or flexible dtype arrays.
Pierre GM wrote: > Note also that you are not limited to recarrays: you can use what's > called a flexible-type arrays, which still gives the possibility to > access individual fields by keys, without the overload of recarrays > (where fields can also be accessed as attributes). For example: > >>> x=np.array([(1,10.), (2,20.)], dtype=[('A',int),('B',float)]) > >>>x['A'] > array([1, 2]) True, but the problem in this case is that he wants to access by column number, which you can't really do with recarray or flexible dtype arrays. Ryan -- Ryan May Graduate Research Assistant School of Meteorology University of Oklahoma
Hi all, Does anyone know if it's possible to make the polar plot look like a 12- or 24-hr clockface? I.e. 0 (or 12) at the top rather than the right, and labelled in 12ths (or 24ths) instead of degrees? Thanks, G
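There is no built-in clock face, but a sketch of one way to get the effect on a polar axes is below: put zero at the top, make the angle run clockwise, and relabel the angular ticks as hours. set_theta_zero_location and set_theta_direction exist on polar axes in reasonably recent matplotlib versions; the data here is made up.

import numpy as np
import matplotlib.pyplot as plt

ax = plt.subplot(111, polar=True)
ax.set_theta_zero_location("N")   # 0 at the top, like 12 o'clock
ax.set_theta_direction(-1)        # angles increase clockwise

# Label the angular axis in 24ths (hours) instead of degrees.
hours = np.arange(24)
ax.set_xticks(2 * np.pi * hours / 24.0)
ax.set_xticklabels([str(h) for h in hours])

# Made-up values keyed by hour of day.
theta = 2 * np.pi * np.array([1, 6, 12, 18]) / 24.0
ax.plot(theta, [1, 3, 2, 4], "o-")
plt.show()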
FYI, I recoded np.loadtxt to handle missing data, automatic name definition and conversion functions, as a merge of np.loadtxt and mlab.csv2rec. You can access the code here: https://code.launchpad.net/~pierregm/numpy/numpy_addons Hopefully these functions will make it into numpy at one point or another. Note also that you are not limited to recarrays: you can use what's called a flexible-type array, which still gives you the possibility of accessing individual fields by key, without the overhead of recarrays (where fields can also be accessed as attributes). For example: >>> x = np.array([(1,10.), (2,20.)], dtype=[('A',int),('B',float)]) >>> x['A'] array([1, 2]) On Jan 3, 2009, at 12:59 PM, Patrick Marsh wrote: > In my limited opinion, numpy's loadtxt is the way to go.
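A small aside that may help the original poster: whether the array comes from csv2rec or has a flexible dtype, the field names are available in order through dtype.names, so the Nth column can be reached by position without knowing its per-file name. The file and column names below are the ones from the messages in this thread; note that csv2rec has since been removed from matplotlib, and np.genfromtxt(..., names=True) gives a similar structured array.

import numpy as np
from matplotlib import mlab

r = mlab.csv2rec('test.csv')
fifth_name = r.dtype.names[4]   # name of the 5th column, whatever it happens to be
z = r[fifth_name]               # same data as r.htsgw_12191800 in the example above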
Michael Droettboom wrote: > I think this is not a matplotlib-specific problem, but a Python one, since Python provides its own strtod definition -- in an apparent attempt to get around its differences on different platforms. Sorry for the noise, please ignore. The problem is caused by the fact that 'pylab' loads the 'gtk' module, and when 'gtk' is loaded it loads my locale (es_CL, with LC_NUMERIC=','), and that is fine! My module is the one that must not depend on the locale (LC_NUMERIC). Regards, Luis.
You're right! I read more about recarrays and they were built specifically to be accessed by column name, so I shouldn't have used csv2rec from the start! Thanks for the quick responses! Anton Patrick Marsh wrote: > In my limited opinion, numpy's loadtxt is the way to go.
In my limited opinion, numpy's loadtxt is the way to go. Loadtxt doesn't care about the header. You can read in the arrays like this: # read in all 5 columns as text col1, col2, col3, col4, col5 = np.loadtxt(filename, dtype=dtype, unpack=True) or, if you want to skip the column headings and read in just the last column as a specific dtype: # read in only the 5th column (index 4, zero-based), as a specific dtype, excluding the column heading col5_no_header = np.loadtxt(filename, skiprows=1, usecols=(4,), dtype=dtype, unpack=True) -Patrick On Sat, Jan 3, 2009 at 11:39 AM, antonv <vas...@ya...> wrote: > I am plotting the data in those csv files and the first 4 columns in the files have the same title but the 5th has the name based on the date and time so it would be unique in each of the files.
Hi, I am extensively using the great interactive features of pylab. I'd like to use text in my figures and have it become larger when I zoom into it (right now it always remains the same absolute size in the figure). Is there a way to do this? (Maybe it's even possible to only show the text when a certain zoom level is reached?) Thanks a lot for any hints, Ulrich
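There is no built-in text that rescales with zoom, but one workaround, sketched here, is to listen for changes to the axes limits and adjust the font size (and visibility) by hand. The scaling rule and the zoom threshold are arbitrary choices for the example.

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(np.arange(100), np.random.rand(100))
label = ax.text(50, 0.5, "zoom me", ha="center")

BASE_SIZE = 12.0
FULL_WIDTH = 100.0   # x-range at which the text has its base size

def rescale_text(axes):
    x0, x1 = axes.get_xlim()
    factor = FULL_WIDTH / max(x1 - x0, 1e-9)
    label.set_fontsize(BASE_SIZE * factor)
    label.set_visible(factor >= 1.0)   # optional: hide until zoomed in far enough
    axes.figure.canvas.draw_idle()

ax.callbacks.connect("xlim_changed", rescale_text)
plt.show()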
I am plotting the data in those csv files and the first 4 columns in the files have the same title, but the 5th has a name based on the date and time, so it is unique in each of the files. As I have about 600 files to batch process, adjusting my script manually is not an option. The way I have it for one test file is: r = mlab.csv2rec('test.csv') # I know that the column name for the 5th column is 'htsgw_12191800' # so to read the data in the 5th column I just use: z = r.htsgw_12191800 What I need is to be able to get that data by specifying the column number, as that stays the same in all files. I'll look at numpy but I hope there is a simpler way. Thanks, Anton Patrick Marsh wrote: > I'm not sure what you are needing it for, but I would suggest looking into numpy's loadtxt function.
I'm not sure what you are needing it for, but I would suggest looking into numpy's loadtxt function. You can use this to load the csv data into numpy arrays and pass the resulting arrays around. -Patrick On Sat, Jan 3, 2009 at 11:21 AM, antonv <vas...@ya...> wrote: > I have a lot of csv files to process, all of them with the same number of columns. The only issue is that each file has a unique column name for the fourth column.
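A minimal sketch of that loadtxt route, assuming a numeric, comma-separated file with one header row (the file name is a placeholder). Columns are addressed purely by zero-based index, so the per-file column name never matters.

import numpy as np

# skiprows=1 skips the header line; usecols picks columns by 0-based index.
data = np.loadtxt("test.csv", delimiter=",", skiprows=1, usecols=(0, 1, 2, 3, 4))
z = data[:, 4]   # the 5th column, whose header changes from file to file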
Hi all, I have a lot of csv files to process, all of them with the same number of columns. The only issue is that each file has a unique column name for the fourth column. All the csv2rec examples I found use the r.column_name format to access the data in that column, which is of no use to me because of the unique names. Is there a way to access that data using the column number? I bet this should be something simple, but I cannot figure it out... Thanks in advance, Anton
Luis Saavedra wrote: > Hi list, > > When the 'pylab' module is loaded the function 'strtod' does not work well. > Can you elaborate on how it doesn't work? > I suppose that this is not new: > http://www.python.org/search/hypermail/python-1994q2/0750.html > > and the question is: any solution? > There's a solution in that link: use one of the many alternatives that are known to be more consistent across various versions of UNIX. > In this link there is a better description of the problem: > > http://mbdynsimsuite.sourceforge.net/build_mbdyn_bindings.html > I think this is not a matplotlib-specific problem, but a Python one, since Python provides its own strtod definition -- in an apparent attempt to get around its differences on different platforms. If you grep over the matplotlib source code, "strtod" isn't even there. Cheers, Mike
Dear ALL, Is there any way to ***exclude*** (make invisible) one or more of the standard buttons which are displayed in the toolbar (either the "Classic" toolbar or "Toolbar2") of the MPL backends? Thanks in advance for any assistance you can provide! Best regards, -- Dr. Mauro J. Cavalcanti Ecoinformatics Studio P.O. Box 46521, CEP 20551-970 Rio de Janeiro, RJ, BRASIL E-mail: mau...@gm... Web: http://studio.infobio.net Linux Registered User #473524 * Ubuntu User #22717 "Life is complex. It consists of real and imaginary parts."
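One approach, sketched here for the Tk backend: the toolbar builds its buttons from a toolitems class attribute, so a subclass can keep only the entries it wants. The class names below are the modern ones (in the matplotlib of that era the Tk toolbar was called NavigationToolbar2TkAgg), and the exact button names can differ between versions, so treat this as something to verify against your backend.

import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2Tk
from matplotlib.figure import Figure

class SlimToolbar(NavigationToolbar2Tk):
    # Keep only the buttons named here; everything else (and the separators) is dropped.
    toolitems = [item for item in NavigationToolbar2Tk.toolitems
                 if item[0] in ("Home", "Pan", "Zoom", "Save")]

# Typical use when embedding the figure yourself and attaching the slim toolbar.
root = tk.Tk()
fig = Figure()
fig.add_subplot(111).plot([1, 2, 3])
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.draw()
toolbar = SlimToolbar(canvas, root)
canvas.get_tk_widget().pack(fill="both", expand=True)
tk.mainloop()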
Hi list, When the 'pylab' module is loaded, the function 'strtod' does not work well. I suppose that this is not new: http://www.python.org/search/hypermail/python-1994q2/0750.html and the question is: any solution? In this link there is a better description of the problem: http://mbdynsimsuite.sourceforge.net/build_mbdyn_bindings.html regards, Luis.
Hi. I found a bug in the macosx backend. I get this error when I turn on log plotting on the Y axes; it doesn't happen when I turn off log plotting. It also doesn't happen when I use the agg/pdf driver. Is this a well-known problem, or should I put together a little demo that demonstrates it?

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/figure.py", line 772, in draw
    for a in self.axes: a.draw(renderer)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axes.py", line 1601, in draw
    a.draw(renderer)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axis.py", line 710, in draw
    tick.draw(renderer)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/axis.py", line 193, in draw
    self.label1.draw(renderer)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/text.py", line 502, in draw
    ismath=ismath)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/backends/backend_macosx.py", line 120, in draw_text
    self._draw_mathtext(gc, x, y, s, prop, angle)
  File "/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/matplotlib/backends/backend_macosx.py", line 112, in _draw_mathtext
    gc.draw_mathtext(x, y, angle, 255 - image.as_array())
TypeError: image has incorrect type (should be uint8)
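Something along the lines of the little demo offered above -- a sketch only, not a confirmed test case: with the macosx backend selected, turning on a log scale makes the tick labels go through mathtext, which is where the traceback reportedly occurs.

import matplotlib
matplotlib.use("macosx")          # the backend the traceback comes from
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3], [10, 1000, 100000])
ax.set_yscale("log")              # log tick labels are rendered via mathtext
plt.show()                        # reportedly fails in gc.draw_mathtext on affected versions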
Eric Firing wrote: > Everything you said above sounds right to me, for plotting data--round joins are likely to provide a good combination of aesthetics and faithfulness to the data. Miters are needed for things like arrows (e.g. in quiver), and pcolor patches. Good "point" (pun intended). If we change the default from miter, we will need to ensure that arrows etc. are explicitly asking for miter joins. Cheers, Mike
Michael Droettboom wrote: > Now, if the user wants miter joins, they can, but I would suggest that we set the miter-limit threshold in all backends high enough such that it "always miters" and "never bevels". I can't see why (for other than artistic purposes), one would want to sometimes miter or bevel depending on angle in a scientific plot. We can expose miter limit as a parameter somehow, but for a default, I think it should be "always miter". What do you think? Mike, Everything you said above sounds right to me, for plotting data--round joins are likely to provide a good combination of aesthetics and faithfulness to the data. Miters are needed for things like arrows (e.g. in quiver), and pcolor patches. Eric
This is a very good test to have -- we should add it to backend_driver.py.

FWIW, SVG appears to behave similarly to PDF, and has a "miter-limit" property to control when to bevel vs. miter when the mode is set to "miter". (Though the default threshold appears to be different.) I didn't look into Ps, but it may have a similar configuration.

Agg also has a miter limit (which is not currently settable from Python), but it reverts to what it calls "smart" beveling, rather than regular beveling, when below that limit. It does, however, have another miter mode called "miter_join_revert" which doesn't do smart beveling. (Try the "conv_stroke" example in the Agg source.) In fact, it has this comment associated with it (agg_math_stroke.h:275):

// For the compatibility with SVG, PDF, etc,
// we use a simple bevel join instead of
// "smart" bevel

I think there is an argument to be made that we should use this mode instead in the Agg backend, to be consistent with PDF and SVG (see patch below).

On a higher level, however, I am a bit concerned about this miter limit concept. It seems a bit dangerous for scientific plots: there is a large change in the look of the corner after only a small change in angle around the miter limit threshold. It may make the data appear to have large changes where in fact the changes are small. This seems like an argument for "bevel" or "round" as the default line join (it is currently "miter"). I also like the idea of "round" join keeping the corner close to the actual data point.

Now, if the user wants miter joins, they can, but I would suggest that we set the miter-limit threshold in all backends high enough such that it "always miters" and "never bevels". I can't see why (for other than artistic purposes) one would want to sometimes miter or bevel depending on angle in a scientific plot. We can expose miter limit as a parameter somehow, but for a default, I think it should be "always miter". What do you think?

Cheers, Mike

Index: _backend_agg.cpp
===================================================================
--- _backend_agg.cpp (revision 6731)
+++ _backend_agg.cpp (working copy)
@@ -209,7 +209,7 @@
     std::string joinstyle = Py::String( gc.getAttr("_joinstyle") );

     if (joinstyle=="miter")
-      join = agg::miter_join;
+      join = agg::miter_join_revert;
     else if (joinstyle=="round")
       join = agg::round_join;
     else if(joinstyle=="bevel")

Jouni K. Seppänen wrote:
> Michael Droettboom <md...@st...> writes:
>> Passing solid_joinstyle='bevel' does resolve the problem on both 0.91.x
>> and 0.98.x. Additionally, path simplification (which is a new feature
>> on 0.98.x) also resolves this problem (set rcParam path.simplify to True).
> It seems that agg and pdf have different ways to render small angles
> when the join style is 'miter'. Pdf has a limit (settable in the pdf
> file) beyond which it reverts to bevel-style angles, agg seems to always
> do the miter join but cuts it off at some distance.
> Here's a test script, and screenshots of (a part of) the agg and pdf
> outputs when the join style is 'miter'.
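For users following the discussion: the join style can already be chosen per line, or globally through rcParams, so a plot that hits the miter artefact can opt into round or bevel joins without waiting for any backend change. A short example:

import matplotlib.pyplot as plt

# Per line: avoid long miter spikes at very acute angles.
plt.plot([0, 1, 2], [0, 10, 0.1], linewidth=8, solid_joinstyle="round")

# Or globally, for lines created from here on: 'miter', 'round' or 'bevel'.
plt.rcParams["lines.solid_joinstyle"] = "bevel"
plt.show()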
So my beginner saga continues with another question: I am trying to create a custom colormap using ListedColormap. The custom colormap is for a range of values between 0 and 20, while the data in my file is between 0 and 8. My issue is that when plotting the graph the colormap only uses the 0 to 8 values from the file. How can I force it to use all 21 values in the colormap? Thanks, Anton Jeff Whitaker wrote: > antonv wrote: >> My new issue is that I need to mask the land areas in the Z array so I would have a clean plot over the basemap. Any ideas on how to achieve that? > Create a masked array. Say the values in the Z array are set to 1.e30 over land areas in the CSV file: > from numpy import ma > Z = ma.masked_values(Z, 1.e30) > Then plot with contourf as before and the land areas will not be contoured. > -Jeff
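On the colormap question above, a sketch of one way to pin the mapping to the full 0-20 range rather than the 0-8 span of the data: either pass vmin/vmax, or pair the 21-entry ListedColormap with a BoundaryNorm so each integer value gets its own colour. The colour list here is a stand-in for the poster's custom colours.

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap, BoundaryNorm

colors = plt.cm.jet(np.linspace(0, 1, 21))   # 21 stand-in colours for the values 0..20
cmap = ListedColormap(colors)
data = 8 * np.random.rand(50, 50)            # the data only spans 0..8

# Option 1: simplest -- pin the colour scale with vmin/vmax.
plt.figure()
plt.pcolormesh(data, cmap=cmap, vmin=0, vmax=20)
plt.colorbar()

# Option 2: one discrete colour per integer value, via a BoundaryNorm.
norm = BoundaryNorm(np.arange(-0.5, 21.5, 1.0), cmap.N)
plt.figure()
plt.pcolormesh(data, cmap=cmap, norm=norm)
plt.colorbar(ticks=range(0, 21, 2))
plt.show()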