
matplotlib-users — Discussion related to using matplotlib

Archive: September 2012
From: Michael R. <raw...@ya...> - 2012年09月26日 22:23:12
>________________________________
> From: Jeff Whitaker <jef...@no...>
>To: Benjamin Root <ben...@ou...>; Michael Rawlins <ra...@ge...>; Matplotlib Users <mat...@li...> 
>Sent: Wednesday, September 26, 2012 5:10 PM
>Subject: Re: [Matplotlib-users] error reading netcdf file
> 
>
>Michael: You can't change an attribute in a read-only dataset. It should never have worked.
>
>-Jeff
>
>
Thanks Jeff and Ben.
Below is a complete version of one such program I use. I was given the program by a colleague and I do not know who wrote the code. My previous post showed only as far as the missing-data statement.
I'm relatively new to this software. It looks like the missing value goes into the maskdat array, and maskdat is an input to the pcolor function. Perhaps the best way is to get it working (shading a map) with just the array read from the netCDF file. Essentially the missing value is used to shade some grid cells outside the region white.
Mike
#!/usr/bin/env python
# v0.5 19 June 2010
# General purpose plotter of 2-D gridded data from NetCDF files,
# plotted with map boundaries.
# NetCDF file should have data in either 2-D, 3-D or 4-D arrays.
# Works with the netCDF files in the tutorial, and also the
# files available for download at:
# http://www.cdc.noaa.gov/cdc/data.ncep.reanalysis.html 
# Adapted from the basemap example plotmap_pcolor.py,
# Some of the variable names from that example are retained.
#
# Uses basemap's pcolor function. Pcolor accepts arrays
# of the longitude and latitude points of the vertices on the pixels,
# as well as an array with the numerical value in the pixel. 
verbose=0 #verbose=2 says a bit more
import sys,getopt
from mpl_toolkits.basemap import Basemap, shiftgrid, cm 
#from netCDF3 import Dataset as NetCDFFile 
from mpl_toolkits.basemap import NetCDFFile
from pylab import *
alloptions, otherargs= getopt.getopt(sys.argv[1:],'ro:p:X:Y:v:t:l:u:n:') # note the : after o and p
proj='lam'
#plotfile=None
#plotfile='testmap2.png'
usejetrev=False
colorbounds=[None,None]
extratext=""
xvar=None
yvar=None
thevar=None
thetitle='Bias - Winter Air Temperature'
#thetitle='Bias - Spring Air Temperature'
#thetitle='Bias - Summer Air Temperature'
#thetitle='Bias - Autumn Air Temperature'
therec=None
thelev=None
cbot=None
ctop=None
startlon=-180 #default assumption for starting longitude
for theopt,thearg in alloptions:
  print theopt,thearg
  if theopt=='-o': # -o needs filename after it, which is now thearg
    plotfile=thearg  
  elif theopt=='-p': 
    proj=thearg
  elif theopt=='-X': 
    xvar=thearg
  elif theopt=='-Y': 
    yvar=thearg
  elif theopt=='-v': 
    thevar=thearg
  elif theopt=='-t': 
    thetitle=thearg
  elif theopt=='-l': 
    cbot=thearg
  elif theopt=='-u': 
    ctop=thearg
  elif theopt=='-n': 
    therec=thearg
  elif theopt=='-m': 
    thelev=thearg
  elif theopt=='-r': 
    usejetrev=True
  else: #something went wrong
    print "hmm, what are these??? ", theopt, thearg
    sys.exit()
print otherargs
try:
  ncname=otherargs[0]
  ncfile = NetCDFFile(ncname, 'r')
except:
#  ncname = raw_input("\nenter NetCDF file name =>")
#  ncfile = NetCDFFile(ncname, 'r')
  ncfile = NetCDFFile('simple_xy.nc', 'r')  # Here's filename
  
if verbose>0: #examine the NetCDF file
  print "GLOBAL ATTRIBUTES:"
  allattr=dir(ncfile)
  normalattr=dir(NetCDFFile('/tmp/throwaway.nc','w'))
  for item in allattr:
    if item not in normalattr: print item,': ',getattr(ncfile,item)
#  for name in ncfile.ncattrs():
#      print name,'=', getattr(ncfile,name)
#      if name=='Conventions' and getattr(ncfile,name)=='COARDS':
#        nmcrean=True
#        startlon=0.
#        print "guessing this is an NMC reananalysis file"
#      else:
#        nmcrean=False
  print "\n\nDIMENSIONS:"
  for name in ncfile.dimensions.keys():
    print "\nDIMENSION:",
    try:
      print name, ncfile.variables[name][:]
    except:
      print name, ncfile.dimensions[name]
    try:
      print '  ',ncfile.variables[name].units
    except:
      pass
    try:
      print '  ',ncfile.variables[name].gridtype
    except:
      pass
  print "\n'*****************'\nVARIABLES:"
  for name in ncfile.variables.keys():
    if name in ncfile.dimensions.keys(): continue
    print "\nVARIABLE:",
    print name, ncfile.variables[name].dimensions,
    try:
      print '  ',ncfile.variables[name].units
    except:
      pass
    try:
      print "  missing value=",ncfile.variables[name].missing_value
    except:
      pass
#
print "********************\n"
#if not xvar: xvar=raw_input("\nenter X variable=>")
xvar='rlon'
print "shape of "+xvar+': ',ncfile.variables[xvar].shape
if ncfile.variables[xvar][:].ndim ==1:
  print "X is independent of Y"
  lon1d=ncfile.variables[xvar][:]
else:
  lon1d=False
  if ncfile.variables[xvar][:].ndim ==2: lon2d=ncfile.variables[xvar][:]
  if ncfile.variables[xvar][:].ndim ==3: lon2d=ncfile.variables[xvar][0,:,:] #WRF
  print "shape of lond2d:", lon2d.shape
#print type(lon1d),lon1d,type(lon1d[1]),lon1d[1]
#
#if not yvar: yvar=raw_input("\nenter Y variable=>")
yvar='rlat'
print "shape of "+yvar+': ',ncfile.variables[yvar].shape
if ncfile.variables[yvar][:].ndim ==1:
  print "Y is independent of X"
  lat1d=ncfile.variables[yvar][:]
else:
  lat1d=False
  if ncfile.variables[yvar][:].ndim ==2: lat2d=ncfile.variables[yvar][:]
  if ncfile.variables[yvar][:].ndim ==3: lat2d=ncfile.variables[yvar][0,:,:] #WRF
  print "shape of lat2d:", lat2d.shape
#
#if not thevar: thevar=raw_input("\nenter variable to plot=>")
#thevar='nseasondays'
thevar='diff'
#thevar='mean'
data=ncfile.variables[thevar]
dataa=data[:]
theshape=dataa.shape
try:
  add_offset=ncfile.variables[thevar].add_offset
  print "will use add_offset=",add_offset
except:
  add_offset=0.  
print "shape of "+thevar+': ',theshape
if len(theshape)>2:
  print "there are apparently",theshape[0],"records"
  if not therec: therec=raw_input("enter record number to plot=>")
  therec=int(therec)
  extratext=" rec=%d" %therec
if len(theshape)>3:
  print "there are apparently",theshape[1],"levels"
  if not thelev: thelev=raw_input("enter level number to plot=>")
  thelev=int(thelev)
  extratext=extratext+" lev=%d" %thelev
if len(theshape)>3: 
  slice=dataa[therec,thelev,:,:]+add_offset
elif len(theshape)>2: 
  slice=dataa[therec,:,:]+add_offset
else:
  slice=dataa+add_offset
#data.missing_value=0
#data.missing_value=-999
data.missing_value=-9.99
try:
  mval=data.missing_value
  print " using specified missing value =",mval
except:
  mval=1.e30
  print " faking missing value=",mval
# make a masked array: using missing value(s) in mval
maskdat=ma.masked_values(slice,mval)
datamax=max(maskdat.compressed().flat)
datamin=min(maskdat.compressed().flat)
print "\n"+thevar+" has maximum ", datamax," and minimum:", datamin
#colorbounds=[None,None]
#if not cbot: cbot=raw_input("enter lower bound on color scale =>")
#if cbot: colorbounds[0]=float(cbot)
#if not ctop: ctop=raw_input("enter upper bound on color scale =>")
#if ctop: colorbounds[1]=float(ctop)
colorbounds=[-3,3]      # for diff in C
print "using clim=",colorbounds
# shift lons and lats by 1/2 grid increment (so values represent the vertices
# of the grid box surrounding the data value, as pcolor expects).
if type(lon1d)!=type(False): #lon1d does exist
  delon = lon1d[1]-lon1d[0]
  delat = lat1d[1]-lat1d[0]
  lons = zeros(len(lon1d)+1,'d')
  lats = zeros(len(lat1d)+1,'d')
  lons[0:len(lon1d)] = lon1d-0.5*delon
  lons[-1] = lon1d[-1]+0.5*delon
  lats[0:len(lat1d)] = lat1d-0.5*delon
  lats[-1] = lat1d[-1]+0.5*delon
  lons, lats = meshgrid(lons, lats)
else:
  xdim,ydim=lon2d.shape
  lons=zeros([xdim+1,ydim+1],'d')
  lats=zeros([xdim+1,ydim+1],'d')
  for i in range(1,xdim):
    for j in range(1,ydim):
      lats[i,j]=.5*lat2d[i,j]+.5*lat2d[i-1,j-1]   
      lons[i,j]=.5*lon2d[i,j]+.5*lon2d[i-1,j-1]
  for i in range(1,xdim):
    lons[i,-1]=-lons[i,-3]+2*lons[i,-2]          
    lats[i,-1]=-lats[i,-3]+2*lats[i,-2]          
    lons[i,0]=-lons[i,2]+2*lons[i,1]          
    lats[i,0]=-lats[i,2]+2*lats[i,1]          
  for j in range(ydim+1):
    lons[-1,j]=-lons[-3,j]+2*lons[-2,j]          
    lats[-1,j]=-lats[-3,j]+2*lats[-2,j]          
    lons[0,j]=2*lons[1,j]-lons[2,j]          
    lats[0,j]=2*lats[1,j]-lats[2,j]
#
# alter a matplotlib color table, 
# cm.jet is very useful scheme, but reversed colors are better for drought
cmap = cm.get_cmap('jet', 10)  # 10 discrete colors
#cmap = cm.get_cmap('summer_r', 10)  # 10 discrete colors
print "\nPlotting, please wait...maybe more than 10 seconds"
if proj=='lam': #Lambert Conformal
#  m = Basemap(llcrnrlon=-118.0,llcrnrlat=24.,urcrnrlon=-60.0,urcrnrlat=48.0,\
#      resolution='l',area_thresh=1000.,projection='lcc',\
#      lat_1=70.,lon_0=-98.)
  m = Basemap(llcrnrlon=-80.6,llcrnrlat=38.4,urcrnrlon=-66.0,urcrnrlat=47.7,\
      resolution='l',area_thresh=1000.,projection='lcc',\
      lat_1=65.,lon_0=-73.3)
  xtxt=200000. #offset for text
  ytxt=200000.
  parallels = arange(38.,48.,2.)
  meridians = arange(-80.,-66.,2.)
else: #cylindrical is default
#  m = Basemap(llcrnrlon=-180.,llcrnrlat=-90,urcrnrlon=180.,urcrnrlat=90.,\
#      resolution='c',area_thresh=10000.,projection='cyl')
  m = Basemap(llcrnrlon=startlon,llcrnrlat=-90,urcrnrlon=startlon+360.,urcrnrlat=90.,\
      resolution='c',area_thresh=10000.,projection='cyl')
  xtxt=1.
  ytxt=0.
  parallels = arange(-90.,90.,30.)
  if startlon==-180:
    meridians = arange(-180.,180.,60.)
  else:
    meridians = arange(0.,360.,60.)
if verbose>1: print m.__doc__ 
plt.rcParams['font.size'] = 20
xsize = rcParams['figure.figsize'][0]
fig=figure(figsize=(xsize,m.aspect*xsize))
#ax = fig.add_axes([0.08,0.1,0.7,0.7],axisbg='white')
ax = fig.add_axes([0.07,0.00,0.8,1.0],axisbg='white')
# make a pcolor plot.
x, y = m(lons, lats)
p = m.pcolor(x,y,maskdat,shading='flat',cmap=cmap)
clim(*colorbounds)
# axes units units are left, bottom, width, height
#cax = axes([0.85, 0.1, 0.05, 0.7]) # colorbar axes for map w/ no graticule
cax = axes([0.88, 0.1, 0.06, 0.81]) # colorbar axes for map w/ graticule
#if datamax>1000.:
#  colorbar(format='%7.1e', cax=cax) # draw colorbar
#elif datamax>5.:
#  colorbar(format='%d', cax=cax) # draw colorbar
#else:
#  colorbar(format='%5.2f', cax=cax) # draw colorbar
colorbar(format='%3.1f', cax=cax)
#text(0.7,1.02,'mm')
text(0.3,1.02,'$^{o}$C',fontsize=20)
axes(ax) # make the original axes current again
# plot blue dot on Norman OK and label it as such.
#if startlon==-180:
#  xpt,ypt = m(-97.4836,35.2556) 
#else:
#  xpt,ypt = m(-97.4836+360.,35.2556) 
#m.plot([xpt],[ypt],'bo') 
#text(xpt+xtxt,ypt+ytxt,'Norman')
xpt,ypt = m(-70.7500,41.7500)
text(xpt,ypt,'N')
# draw coastlines and political boundaries.
m.drawcoastlines(linewidth=2.0)
m.drawcountries(linewidth=2.0)
m.drawstates(linewidth=2.0)
# draw parallels and meridians.
# label on left, right and bottom of map.
#m.drawparallels(parallels,labels=[1,0,0,0])
#m.drawmeridians(meridians,labels=[1,1,0,1])
if not thetitle:
  title(thevar+extratext,fontsize=20)
else:
  title(thetitle,fontsize=20)
  
#if plotfile:
#  savefig(plotfile, dpi=72, facecolor='w', bbox_inches='tight', edgecolor='w', orientation='portrait')
#else:
#  show()
plt.savefig('map.png')
plt.savefig('map.eps')
show()
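A minimal sketch of that masking-after-extraction approach, assuming the same file name, variable name, and sentinel value as the script above, and using the netCDF4 Dataset interface discussed in this thread:
import numpy.ma as ma
from netCDF4 import Dataset

nc = Dataset('simple_xy.nc', 'r')              # read-only is fine; nothing is written back
var = nc.variables['diff']
arr = var[:]                                   # extract the data first
mval = getattr(var, 'missing_value', -9.99)    # use the file's attribute if present, else the known sentinel
maskdat = ma.masked_values(arr, mval)          # mask in memory; pcolor leaves masked cells blank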
From: Jeff W. <jef...@no...> - 2012年09月26日 21:10:24
On 9/26/12 1:41 PM, Benjamin Root wrote:
>
>
> On Wed, Sep 26, 2012 at 2:07 PM, Michael Rawlins <raw...@ya... 
> <mailto:raw...@ya...>> wrote:
>
>
>
> ------------------------------------------------------------------------
> *From:* Benjamin Root <ben...@ou... <mailto:ben...@ou...>>
> *To:* Michael Rawlins <raw...@ya...
> <mailto:raw...@ya...>>
> *Cc:* "mat...@li...
> <mailto:mat...@li...>"
> <mat...@li...
> <mailto:mat...@li...>>
> *Sent:* Wednesday, September 26, 2012 10:33 AM
> *Subject:* Re: [Matplotlib-users] error reading netcdf file
>
>
>
> On Wed, Sep 26, 2012 at 10:27 AM, Michael Rawlins
> <raw...@ya... <mailto:raw...@ya...>> wrote:
>
> Recently built and installed netCDF4-1.0. I'm running a
> script that has worked on two other linux OS systems. Error:
>
> File "test.py", line 96, in <module>
> data.missing_value=-9.99
> File "netCDF4.pyx", line 2570, in
> netCDF4.Variable.__setattr__ (netCDF4.c:28242)
> File "netCDF4.pyx", line 2392, in
> netCDF4.Variable.setncattr (netCDF4.c:26309)
> File "netCDF4.pyx", line 1013, in netCDF4._set_att
> (netCDF4.c:12699)
> AttributeError: NetCDF: Write to read only
>
>
> The statement in the code that triggers the error is:
>
> data.missing_value=-9.99 .
>
> MR
>
>
>
>
> This typically happens when one opens a netCDF4 Dataset object
> in 'r' mode, and/or if the file permissions for the file were
> set to read-only. When modifying an attribute, it is
> technically trying to write to the file.
>
> Ben Root
>
>
>
> Here is the file open statement:
>
> ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
> This worked fine in earlier versions. Nowhere in the code do
> I try to write to a netCDF file. So I'm confused as to why
> specifying the read of the netCDF file would result in an
> error when designating the missing data value.
>
> Here's the code:
>
> # Works with the netCDF files in the tutorial, and also the
> # files available for download at:
> # http://www.cdc.noaa.gov/cdc/data.ncep.reanalysis.html
> # Adapted from the basemap example plotmap_pcolor.py,
> # Some of the variable names from that example are retained.
> #
> # Uses basemap's pcolor function. Pcolor accepts arrays
> # of the longitude and latitude points of the vertices on the
> pixels,
> # as well as an array with the numerical value in the pixel.
>
> verbose=0 #verbose=2 says a bit more
>
> import sys,getopt
> from mpl_toolkits.basemap import Basemap, shiftgrid, cm
> from netCDF4 import Dataset as NetCDFFile
> #from mpl_toolkits.basemap import NetCDFFile
> from pylab import *
> import matplotlib.pyplot as plt
>
> alloptions, otherargs=
> getopt.getopt(sys.argv[1:],'ro:p:X:Y:v:t:l:u:n:') # note the :
> after o and p
> proj='lam'
>
> cmap = cm.get_cmap('jet_r', 10) # 10 discrete colors
>
> print "\nPlotting, please wait...maybe more than 10 seconds"
> if proj=='lam': #Lambert Conformal
> m =
> Basemap(llcrnrlon=-80.6,llcrnrlat=38.4,urcrnrlon=-66.0,urcrnrlat=47.7,\
> resolution='l',area_thresh=1000.,projection='lcc',\
> lat_1=65.,lon_0=-73.3)
> xtxt=200000. #offset for text
> ytxt=200000.
> parallels = arange(38.,48.,2.)
> meridians = arange(-80.,-64.,2.)
>
> xsize = rcParams['figure.figsize'][0]
> fig=figure(figsize=(xsize,m.aspect*xsize))
> #cax = axes([0.88, 0.1, 0.06, 0.81]) # colorbar axes for map
> w/ graticule
>
> ############################################################################################
> subplots_adjust(left=None, bottom=None, right=0.9, top=None,
> wspace=0.05, hspace=0.03)
> # Make the first map at upper left
> plt.subplot(2,2,1)
>
> ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
> xvar='rlon'
> print "shape of "+xvar+': ',ncfile.variables[xvar].shape
> if ncfile.variables[xvar][:].ndim ==1:
> print "X is independent of Y"
> lon1d=ncfile.variables[xvar][:]
> else:
> lon1d=False
> if ncfile.variables[xvar][:].ndim ==2:
> lon2d=ncfile.variables[xvar][:]
> if ncfile.variables[xvar][:].ndim ==3:
> lon2d=ncfile.variables[xvar][0,:,:]
> print "shape of lond2d:", lon2d.shape
>
> yvar='rlat'
> print "shape of "+yvar+': ',ncfile.variables[yvar].shape
> if ncfile.variables[yvar][:].ndim ==1:
> print "Y is independent of X"
> lat1d=ncfile.variables[yvar][:]
> else:
> lat1d=False
> if ncfile.variables[yvar][:].ndim ==2:
> lat2d=ncfile.variables[yvar][:]
> if ncfile.variables[yvar][:].ndim ==3:
> lat2d=ncfile.variables[yvar][0,:,:]
> print "shape of lat2d:", lat2d.shape
>
> thevar='diff'
> data=ncfile.variables[thevar]
> dataa=data[:]
> theshape=dataa.shape
> try:
> add_offset=ncfile.variables[thevar].add_offset
> print "will use add_offset=",add_offset
> except:
> add_offset=0.
> print "shape of "+thevar+': ',theshape
> if len(theshape)>2:
> print "there are apparently",theshape[0],"records"
> if not therec: therec=raw_input("enter record number to
> plot=>")
> therec=int(therec)
> extratext=" rec=%d" %therec
> if len(theshape)>3:
> print "there are apparently",theshape[1],"levels"
> if not thelev: thelev=raw_input("enter level number to
> plot=>")
> thelev=int(thelev)
> extratext=extratext+" lev=%d" %thelev
> if len(theshape)>3:
> slice=dataa[therec,thelev,:,:]+add_offset
> elif len(theshape)>2:
> slice=dataa[therec,:,:]+add_offset
> else:
> slice=dataa+add_offset
>
> data.missing_value=-9.99
>
>
> I think I know why this used to work in the past. I think netCDF4 
> used to recognize the "missing_value" attribute as a special value. 
> It is now ._FillValue. But I am still not sure you can save a value 
> to that attribute in a read-only Dataset. You might have to perform 
> the mask after extracting the data.
>
> Note, setting the _FillValue *after* extracting the data to the 
> "dataa" variable won't change "dataa". I don't see how setting 
> data.missing_value is supposed to actually help you.
>
> Ben Root
Michael: You can't change an attribute in a read-only dataset. It 
should never have worked.
-Jeff
-- 
Jeffrey S. Whitaker Phone : (303)497-6313
Meteorologist FAX : (303)497-6449
NOAA/OAR/PSD R/PSD1 Email : Jef...@no...
325 Broadway Office : Skaggs Research Cntr 1D-113
Boulder, CO, USA 80303-3328 Web : http://tinyurl.com/5telg
From: Benjamin R. <ben...@ou...> - 2012年09月26日 19:41:54
On Wed, Sep 26, 2012 at 2:07 PM, Michael Rawlins <raw...@ya...>wrote:
>
>
> ------------------------------
> *From:* Benjamin Root <ben...@ou...>
> *To:* Michael Rawlins <raw...@ya...>
> *Cc:* "mat...@li..." <
> mat...@li...>
> *Sent:* Wednesday, September 26, 2012 10:33 AM
> *Subject:* Re: [Matplotlib-users] error reading netcdf file
>
>
>
> On Wed, Sep 26, 2012 at 10:27 AM, Michael Rawlins <raw...@ya...>wrote:
>
> Recently built and installed netCDF4-1.0. I'm running a script that has
> worked on two other linux OS systems. Error:
>
> File "test.py", line 96, in <module>
> data.missing_value=-9.99
> File "netCDF4.pyx", line 2570, in netCDF4.Variable.__setattr__
> (netCDF4.c:28242)
> File "netCDF4.pyx", line 2392, in netCDF4.Variable.setncattr
> (netCDF4.c:26309)
> File "netCDF4.pyx", line 1013, in netCDF4._set_att (netCDF4.c:12699)
> AttributeError: NetCDF: Write to read only
>
>
> The statement in the code that triggers the error is:
>
> data.missing_value=-9.99 .
>
> MR
>
>
>
>
> This typically happens when one opens a netCDF4 Dataset object in 'r'
> mode, and/or if the file permissions for the file were set to read-only.
> When modifying an attribute, it is technically trying to write to the file.
>
> Ben Root
>
>
>
> Here is the file open statement:
>
> ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
> This worked fine in earlier versions. Nowhere in the code do I try to
> write to a netCDF file. So I'm confused as to why specifying the read of
> the netCDF file would result in an error when designating the missing data
> value.
>
> Here's the code:
>
> # Works with the netCDF files in the tutorial, and also the
> # files available for download at:
> # http://www.cdc.noaa.gov/cdc/data.ncep.reanalysis.html
> # Adapted from the basemap example plotmap_pcolor.py,
> # Some of the variable names from that example are retained.
> #
> # Uses basemap's pcolor function. Pcolor accepts arrays
> # of the longitude and latitude points of the vertices on the pixels,
> # as well as an array with the numerical value in the pixel.
>
> verbose=0 #verbose=2 says a bit more
>
> import sys,getopt
> from mpl_toolkits.basemap import Basemap, shiftgrid, cm
> from netCDF4 import Dataset as NetCDFFile
> #from mpl_toolkits.basemap import NetCDFFile
> from pylab import *
> import matplotlib.pyplot as plt
>
> alloptions, otherargs= getopt.getopt(sys.argv[1:],'ro:p:X:Y:v:t:l:u:n:') #
> note the : after o and p
> proj='lam'
>
> cmap = cm.get_cmap('jet_r', 10) # 10 discrete colors
>
> print "\nPlotting, please wait...maybe more than 10 seconds"
> if proj=='lam': #Lambert Conformal
> m =
> Basemap(llcrnrlon=-80.6,llcrnrlat=38.4,urcrnrlon=-66.0,urcrnrlat=47.7,\
> resolution='l',area_thresh=1000.,projection='lcc',\
> lat_1=65.,lon_0=-73.3)
> xtxt=200000. #offset for text
> ytxt=200000.
> parallels = arange(38.,48.,2.)
> meridians = arange(-80.,-64.,2.)
>
> xsize = rcParams['figure.figsize'][0]
> fig=figure(figsize=(xsize,m.aspect*xsize))
> #cax = axes([0.88, 0.1, 0.06, 0.81]) # colorbar axes for map w/ graticule
>
>
> ############################################################################################
> subplots_adjust(left=None, bottom=None, right=0.9, top=None, wspace=0.05,
> hspace=0.03)
> # Make the first map at upper left
> plt.subplot(2,2,1)
>
> ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
> xvar='rlon'
> print "shape of "+xvar+': ',ncfile.variables[xvar].shape
> if ncfile.variables[xvar][:].ndim ==1:
> print "X is independent of Y"
> lon1d=ncfile.variables[xvar][:]
> else:
> lon1d=False
> if ncfile.variables[xvar][:].ndim ==2: lon2d=ncfile.variables[xvar][:]
> if ncfile.variables[xvar][:].ndim ==3:
> lon2d=ncfile.variables[xvar][0,:,:]
> print "shape of lond2d:", lon2d.shape
>
> yvar='rlat'
> print "shape of "+yvar+': ',ncfile.variables[yvar].shape
> if ncfile.variables[yvar][:].ndim ==1:
> print "Y is independent of X"
> lat1d=ncfile.variables[yvar][:]
> else:
> lat1d=False
> if ncfile.variables[yvar][:].ndim ==2: lat2d=ncfile.variables[yvar][:]
> if ncfile.variables[yvar][:].ndim ==3:
> lat2d=ncfile.variables[yvar][0,:,:]
> print "shape of lat2d:", lat2d.shape
>
> thevar='diff'
> data=ncfile.variables[thevar]
> dataa=data[:]
> theshape=dataa.shape
> try:
> add_offset=ncfile.variables[thevar].add_offset
> print "will use add_offset=",add_offset
> except:
> add_offset=0.
> print "shape of "+thevar+': ',theshape
> if len(theshape)>2:
> print "there are apparently",theshape[0],"records"
> if not therec: therec=raw_input("enter record number to plot=>")
> therec=int(therec)
> extratext=" rec=%d" %therec
> if len(theshape)>3:
> print "there are apparently",theshape[1],"levels"
> if not thelev: thelev=raw_input("enter level number to plot=>")
> thelev=int(thelev)
> extratext=extratext+" lev=%d" %thelev
> if len(theshape)>3:
> slice=dataa[therec,thelev,:,:]+add_offset
> elif len(theshape)>2:
> slice=dataa[therec,:,:]+add_offset
> else:
> slice=dataa+add_offset
>
> data.missing_value=-9.99
>
>
> I think I know why this used to work in the past. I think netCDF4 used to
recognize the "missing_value" attribute as a special value. It is now
._FillValue. But I am still not sure you can save a value to that
attribute in a read-only Dataset. You might have to perform the mask after
extracting the data.
Note, setting the _FillValue *after* extracting the data to the "dataa"
variable won't change "dataa". I don't see how setting data.missing_value
is supposed to actually help you.
Ben Root
From: Michael R. <raw...@ya...> - 2012年09月26日 18:07:15
>________________________________
> From: Benjamin Root <ben...@ou...>
>To: Michael Rawlins <raw...@ya...> 
>Cc: "mat...@li..." <mat...@li...> 
>Sent: Wednesday, September 26, 2012 10:33 AM
>Subject: Re: [Matplotlib-users] error reading netcdf file
> 
>
>
>
>
>On Wed, Sep 26, 2012 at 10:27 AM, Michael Rawlins <raw...@ya...> wrote:
>
>Recently built and installed netCDF4-1.0. I'm running a script that has worked on two other linux OS systems. Error:
>>
>>File "test.py", line 96, in <module>
>>  data.missing_value=-9.99
>> File "netCDF4.pyx", line 2570, in netCDF4.Variable.__setattr__ (netCDF4.c:28242)
>> File "netCDF4.pyx", line 2392, in netCDF4.Variable.setncattr (netCDF4.c:26309)
>> File "netCDF4.pyx", line 1013, in netCDF4._set_att (netCDF4.c:12699)
>>AttributeError: NetCDF: Write to read only
>>
>>
>>The statement in the code that triggers the error is:
>>
>>data.missing_value=-9.99 .
>>
>>MR
>>
>>
>
>
>This typically happens when one opens a netCDF4 Dataset object in 'r' mode, and/or if the file permissions for the file were set to read-only. When modifying an attribute, it is technically trying to write to the file.
>
>Ben Root
>
>
>
>Here is the file open statement:
>
>ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
>This worked fine in earlier versions. Nowhere in the code do I try to write to a netCDF file. So I'm confused as to why specifying the read of the netCDF file would result in an error when designating the missing data value.
>
>Here's the code:
>
># Works with the netCDF files in the tutorial, and also the
># files available for download at:
># http://www.cdc.noaa.gov/cdc/data.ncep.reanalysis.html 
># Adapted from the basemap example plotmap_pcolor.py,
># Some of the variable names from that example are retained.
>#
># Uses basemap's pcolor function. Pcolor accepts arrays
># of the longitude and latitude points of the vertices on the pixels,
># as well as an array with the numerical value in the pixel. 
>
>verbose=0 #verbose=2 says a bit more
>
>import sys,getopt
>from mpl_toolkits.basemap import Basemap, shiftgrid, cm 
>from netCDF4 import Dataset as NetCDFFile 
>#from mpl_toolkits.basemap import NetCDFFile
>from pylab import *
>import matplotlib.pyplot as plt
>
>alloptions, otherargs= getopt.getopt(sys.argv[1:],'ro:p:X:Y:v:t:l:u:n:') # note the : after o and p
>proj='lam'
>
>cmap = cm.get_cmap('jet_r', 10)  # 10 discrete colors
>
>print "\nPlotting, please wait...maybe more than 10 seconds"
>if proj=='lam': #Lambert Conformal
>  m = Basemap(llcrnrlon=-80.6,llcrnrlat=38.4,urcrnrlon=-66.0,urcrnrlat=47.7,\
>      resolution='l',area_thresh=1000.,projection='lcc',\
>      lat_1=65.,lon_0=-73.3)
>xtxt=200000. #offset for text
>ytxt=200000.
>parallels = arange(38.,48.,2.)
>meridians = arange(-80.,-64.,2.)
>
>xsize = rcParams['figure.figsize'][0]
>fig=figure(figsize=(xsize,m.aspect*xsize))
>#cax = axes([0.88, 0.1, 0.06, 0.81]) # colorbar axes for map w/ graticule
>
>############################################################################################
>subplots_adjust(left=None, bottom=None, right=0.9, top=None, wspace=0.05, hspace=0.03)
># Make the first map at upper left
>plt.subplot(2,2,1)
>
>ncfile = NetCDFFile('statsPrcp_Uncertainty2_winter.nc', 'r')
>
>xvar='rlon'
>print "shape of "+xvar+': ',ncfile.variables[xvar].shape
>if ncfile.variables[xvar][:].ndim ==1:
>  print "X is independent of Y"
>  lon1d=ncfile.variables[xvar][:]
>else:
>  lon1d=False
>  if ncfile.variables[xvar][:].ndim ==2: lon2d=ncfile.variables[xvar][:]
>  if ncfile.variables[xvar][:].ndim ==3: lon2d=ncfile.variables[xvar][0,:,:]
>  print "shape of lond2d:", lon2d.shape
>
>yvar='rlat'
>print "shape of "+yvar+': ',ncfile.variables[yvar].shape
>if ncfile.variables[yvar][:].ndim ==1:
>  print "Y is independent of X"
>  lat1d=ncfile.variables[yvar][:]
>else:
>  lat1d=False
>  if ncfile.variables[yvar][:].ndim ==2: lat2d=ncfile.variables[yvar][:]
>  if ncfile.variables[yvar][:].ndim ==3: lat2d=ncfile.variables[yvar][0,:,:] 
>  print "shape of lat2d:", lat2d.shape
>
>thevar='diff'
>data=ncfile.variables[thevar]
>dataa=data[:]
>theshape=dataa.shape
>try:
>  add_offset=ncfile.variables[thevar].add_offset
>  print "will use add_offset=",add_offset
>except:
>  add_offset=0.  
>print "shape of "+thevar+': ',theshape
>if len(theshape)>2:
>  print "there are apparently",theshape[0],"records"
>  if not therec: therec=raw_input("enter record number to plot=>")
>  therec=int(therec)
>  extratext=" rec=%d" %therec
>if len(theshape)>3:
>  print "there are apparently",theshape[1],"levels"
>  if not thelev: thelev=raw_input("enter level number to plot=>")
>  thelev=int(thelev)
>  extratext=extratext+" lev=%d" %thelev
>if len(theshape)>3: 
>  slice=dataa[therec,thelev,:,:]+add_offset
>elif len(theshape)>2: 
>  slice=dataa[therec,:,:]+add_offset
>else:
>  slice=dataa+add_offset
>
>data.missing_value=-9.99
>
>
>
>
>
>
From: Benjamin R. <ben...@ou...> - 2012年09月26日 16:32:05
On Wed, Sep 26, 2012 at 12:10 PM, Paul Tremblay <pau...@gm...>wrote:
> Thanks. I know when doing 8/9 in python 2.x you get 0. With python 3 you
> get a decimal (Hooray, Python 3!).
>
> I ran the script I submitted with python 3. Do I need to change the
> defects and totals from integers to floats to make my chart work
> universally?
>
> P.
>
>
That or just one of the two. Either way would work.
Ben Root
From: Craig t. D. <des...@gm...> - 2012年09月26日 16:22:19
So I rebuilt Python with Tkinter support, and using TkAgg I get the
behavior I want, so it's no longer mission-critical for me to
figure this out. But if anyone is interested in tracking it
down I will be glad to help.
On Wed, Sep 26, 2012 at 8:06 AM, Craig the Demolishor
<des...@gm...> wrote:
> Hi all,
> I've been having some trouble getting interactive mode to work
> correctly with the WxAgg backend. I have these versions:
>
> [craigb@xxxxxx hists]$ python
> Python 2.7.3 (default, Aug 13 2012, 16:13:38)
> [GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
> Type "help", "copyright", "credits" or "license" for more information.
>>>> import matplotlib
>>>> matplotlib.__version__
> '1.1.1rc'
>>>> import wx
>>>> wx.__version__
> '2.8.12.1'
>>>>
>
>
> When I do something like this,
>
>>>> from matplotlib import pyplot
>>>> import numpy
>>>> pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
> [<matplotlib.lines.Line2D object at 0x5561750>]
>>>> pyplot.show()
>
> I get a good-looking window with my plot. I can mouse over the window
> and get coordinates, there are buttons on the bottom of the window
> that I can push, etc. However, if I turn on interactive mode (I read
> somewhere that it's bad to turn on interactive mode after you've
> already made a plot, so below I've exited and started a new
> interactive session),
>
>>>> from matplotlib import pyplot
>>>> import numpy
>>>> pyplot.ion()
>>>> pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
> [<matplotlib.lines.Line2D object at 0x153e2ad0>]
>
> Nothing shows at this point. If I continue,
>
>>>> pyplot.show()
>
> Still nothing shows. If I exit right now, I get a brief glimpse of my
> plot before python exits completely. I can get the plot to show by
> doing
>
>>>> pyplot.draw()
>
> But it's only the top half of the plot, and if I drag another window
> on top of it, it doesn't automatically redraw. No buttons, no
> mouse-over niceness. Issuing pyplot.draw() again gets me the full
> plot, but no mouse-overs, no buttons, and no redraw.
>
> Both MPL and wxPython were built from source on a RHEL5.8 machine, so
> maybe it's the libraries I am linking against...? Anyway, I've
> attached two screenshots of what my plots look like when they finally
> are drawn. Many many thanks in advance!
>
> --cb
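For anyone else hitting this, a minimal sketch of forcing the TkAgg backend from the script itself (matplotlib.use must run before pyplot is first imported; setting backend : TkAgg in matplotlibrc has the same effect):
import matplotlib
matplotlib.use('TkAgg')            # must come before the first pyplot import

from matplotlib import pyplot
import numpy

pyplot.ion()                       # interactive mode on, then plot as usual
pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
pyplot.draw()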
From: Paul T. <pau...@gm...> - 2012年09月26日 16:14:58
I noticed today that when I create a bar graph with zero values, the
labels don't align correctly:
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure()
names = ['name a', 'name b', 'name c', 'named', 'name e', 'name f']
defects = [0, 0, 0, 5, 6, 7]
ax = fig.add_subplot(111)
widths=.75
ind = np.arange(len(defects)) # the x locations for the groups
ind = ind + widths
bar = ax.bar(ind, defects, width=widths)
plt.xticks(ind + widths/2.0 , names, rotation=90)
ax.set_xlabel('Sorter')
ax.set_ylabel('Percentage (Average)')
plt.subplots_adjust( bottom=0.30)
ax.yaxis.grid(True, linestyle='-', which='major', color='lightgrey',
alpha=0.5)
ax.set_axisbelow(True)
plt.savefig('temp.png')
defects = [1, 2, 3, 5, 6, 7]
# same code...
When run with non-zero defect values, the labels appear as I
expect; with zero values, the left label aligns incorrectly with the y-axis.
Thanks
P.
From: Paul T. <pau...@gm...> - 2012年09月26日 16:10:31
Thanks. I know when doing 8/9 in python 2.x you get 0. With python 3 you
get a decimal (Hooray, Python 3!).
I ran the script I submitted with python 3. Do I need to change the defects
and totals from integers to floats to make my chart work universally?
P.
On Wed, Sep 26, 2012 at 4:31 AM, Pierre Haessig <pie...@cr...>wrote:
> Hi,
>
> Just a detail :
>
> Le 26/09/2012 04:29, Paul Tremblay a écrit :
>
> percent = (np.divide(the_cumsum, the_sum)) * 100
>
> This line doesn't work on my computer (numpy 1.6.2)
>
> Indeed, there is a casting issue :
> In [2]: percent
> Out[2]: array([ 0, 0, 0, 0, 100])
>
> However, using the regular "/" operator instead of np.divide gives the
> proper result:
> In [8]: the_cumsum/the_sum*100
> Out[8]: array([ 42.10526316, 71.05263158, 90.78947368, 97.36842105,
> 100. ])
>
> Best,
> Pierre
>
>
From: Benjamin R. <ben...@ou...> - 2012年09月26日 15:10:39
On Wed, Sep 26, 2012 at 11:07 AM, Pierre Haessig
<pie...@cr...>wrote:
> Le 26/09/2012 15:30, Benjamin Root a écrit :
> >
> > Probably could have the two axes listen for an "xlim_changed" event,
> > check to see if it belongs to its twin, and update itself accordingly
> > (without emitting).
> I guess you mean "ylim_changed" event ?
>
> (Maybe my description was not very clear, but the issue I'm having is
> with respective placement of yticks on left and right sides.
> xticks are the same, so no problem with these.)
>
> Best,
> Pierre
>
> Correct... twinx should have them listen to each other's ylim_changed,
twiny should have them listen to each other's xlim_changed. Thanks for
catching that...
Ben Root
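A rough sketch of that idea, wired up by hand for Pierre's quick example: whenever ax1's y-limits change, the right-hand ticks are re-placed at the same axes fractions as the left-hand ticks (the right-hand tick values will generally not be round numbers):
import numpy as np
import matplotlib.pyplot as plt

ax1 = plt.subplot(111)
ax1.plot([1, 2, 3], 'bo-')
ax1.grid(True)

ax2 = plt.twinx(ax1)
ax2.plot([11, 12], 'rs-')
ax2.grid(True)

def align_yticks(ax):
    # place ax2's ticks at the same axes-fractions as ax1's ticks
    lo1, hi1 = ax.get_ylim()
    lo2, hi2 = ax2.get_ylim()
    fracs = (np.asarray(ax.get_yticks()) - lo1) / (hi1 - lo1)
    ax2.set_yticks(lo2 + fracs * (hi2 - lo2))

ax1.callbacks.connect('ylim_changed', align_yticks)
align_yticks(ax1)          # align once for the initial limits
ax1.set_xlim(-0.5, 2.5)
plt.show()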
From: Pierre H. <pie...@cr...> - 2012年09月26日 15:07:59
Attachments: signature.asc
Le 26/09/2012 15:30, Benjamin Root a écrit :
>
> Probably could have the two axes listen for an "xlim_changed" event,
> check to see if it belongs to its twin, and update itself accordingly
> (without emitting).
I guess you mean "ylim_changed" event ?
(Maybe my description was not very clear, but the issue I'm having is
with respective placement of yticks on left and right sides.
xticks are the same, so no problem with these.)
Best,
Pierre
From: Craig t. D. <des...@gm...> - 2012年09月26日 15:06:41
Hi all,
 I've been having some trouble getting interactive mode to work
correctly with the WxAgg backend. I have these versions:
[craigb@xxxxxx hists]$ python
Python 2.7.3 (default, Aug 13 2012, 16:13:38)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import matplotlib
>>> matplotlib.__version__
'1.1.1rc'
>>> import wx
>>> wx.__version__
'2.8.12.1'
>>>
When I do something like this,
>>> from matplotlib import pyplot
>>> import numpy
>>> pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
[<matplotlib.lines.Line2D object at 0x5561750>]
>>> pyplot.show()
I get a good-looking window with my plot. I can mouse over the window
and get coordinates, there are buttons on the bottom of the window
that I can push, etc. However, if I turn on interactive mode (I read
somewhere that it's bad to turn on interactive mode after you've
already made a plot, so below I've exited and started a new
interactive session),
>>> from matplotlib import pyplot
>>> import numpy
>>> pyplot.ion()
>>> pyplot.plot(numpy.arange(10), numpy.arange(10), 'r.')
[<matplotlib.lines.Line2D object at 0x153e2ad0>]
Nothing shows at this point. If I continue,
>>> pyplot.show()
Still nothing shows. If I exit right now, I get a brief glimpse of my
plot before python exits completely. I can get the plot to show by
doing
>>> pyplot.draw()
But it's only the top half of the plot, and if I drag another window
on top of it, it doesn't automatically redraw. No buttons, no
mouse-over niceness. Issuing pyplot.draw() again gets me the full
plot, but no mouse-overs, no buttons, and no redraw.
Both MPL and wxPython were built from source on a RHEL5.8 machine, so
maybe it's the libraries I am linking against...? Anyway, I've
attached two screenshots of what my plots look like when they finally
are drawn. Many many thanks in advance!
--cb
From: Pierre H. <pie...@cr...> - 2012年09月26日 15:04:55
Attachments: signature.asc
Le 26/09/2012 15:25, Benjamin Root a écrit :
>
> Actually, if you are using the latest numpy (the 1.7 beta), that will
> also not work unless you are using py3k or did "from __future__ import
> division". Well, actually, using np.divide will always result in
> integer division (this may or may not be a bug).
Good point, I forgot I had set "from __future__ import division" some
months ago in my IPython startup settings.
So indeed, explicit casting to float is the safest approach.
Best,
Pierre
From: Benjamin R. <ben...@ou...> - 2012年09月26日 14:33:36
On Wed, Sep 26, 2012 at 10:27 AM, Michael Rawlins <raw...@ya...>wrote:
> Recently built and installed netCDF4-1.0. I'm running a script that has
> worked on two other linux OS systems. Error:
>
> File "test.py", line 96, in <module>
> data.missing_value=-9.99
> File "netCDF4.pyx", line 2570, in netCDF4.Variable.__setattr__
> (netCDF4.c:28242)
> File "netCDF4.pyx", line 2392, in netCDF4.Variable.setncattr
> (netCDF4.c:26309)
> File "netCDF4.pyx", line 1013, in netCDF4._set_att (netCDF4.c:12699)
> AttributeError: NetCDF: Write to read only
>
>
> The statement in the code triggers the error is:
>
> data.missing_value=-9.99 .
>
> MR
>
>
>
This typically happens when one opens a netCDF4 Dataset object in 'r' mode,
and/or if the file permissions for the file were set to read-only. When
modifying an attribute, it is technically trying to write to the file.
Ben Root
From: Michael R. <raw...@ya...> - 2012年09月26日 14:27:11
Recently built and installed netCDF4-1.0. I'm running a script that has worked on two other linux OS systems. Error:
File "test.py", line 96, in <module>
  data.missing_value=-9.99
 File "netCDF4.pyx", line 2570, in netCDF4.Variable.__setattr__ (netCDF4.c:28242)
 File "netCDF4.pyx", line 2392, in netCDF4.Variable.setncattr (netCDF4.c:26309)
 File "netCDF4.pyx", line 1013, in netCDF4._set_att (netCDF4.c:12699)
AttributeError: NetCDF: Write to read only
The statement in the code that triggers the error is:
data.missing_value=-9.99 .
MR
From: Michael D. <md...@st...> - 2012年09月26日 14:18:24
On 09/26/2012 09:33 AM, Benjamin Root wrote:
>
>
> On Wed, Sep 26, 2012 at 9:10 AM, Michael Droettboom <md...@st... 
> <mailto:md...@st...>> wrote:
>
> On 09/26/2012 12:28 AM, jos...@gm...
> <mailto:jos...@gm...> wrote:
> > On Wed, Sep 26, 2012 at 12:05 AM, Paul Tremblay
> <pau...@gm... <mailto:pau...@gm...>> wrote:
> >> In R, there are many default data sets one can use to both
> illustrate code
> >> and explore the scripting language. Instead of having to fake
> data, one can
> >> pull from meaningful data sets, created in the real world. For
> example, this
> >> one liner actually produces a plot:
> >>
> >> plot(mtcars$hp~mtcars$mpg)
> >>
> >> where mtcars refers to a built-in data set taken from Motor
> Trend Magazine.
> >> I don't believe matplotlib has anything similar. I have started
> to download
> >> some of the R data sets and store them as pickles for my own
> use. Does
> >> anyone else have any interest in creating a repository for
> these data sets
> >> or otherwise sharing them in some way?
> > Vincent converted several R datasets back to csv, that can be easily
> > loaded from the web with, for example, pandas.
> > http://vincentarelbundock.github.com/Rdatasets/
> > The collection is a bit random.
> >
> > statsmodels has some datasets that we use for examples and tests
> > http://statsmodels.sourceforge.net/devel/datasets/index.html
> > We were always a bit slow with adding datasets because we were too
> > cautious about licensing issues. But R seems to get away with
> > considering most datasets to be public domain.
> > We keep adding datasets to statsmodels as we need them for new
> models.
> >
> > The machine learning packages like sklearn have packaged the typical
> > machine learning datasets.
> >
> > If you are interested, you could join up with statsmodels or with
> > Vincent to expand on what's available.
> >
> It seems to me like contributing to (rather than duplicating) the work
> of one of these projects would be a great idea. It would also be nice
> to add functionality in matplotlib to make it easier to download these
> things as a one-off -- obviously not exactly the same syntax as
> with R,
> but ideally with a single function call.
>
> Mike
>
>
> We did have such a thing. matplotlib.cbook.get_sample_data(). I think 
> we got rid of it for 1.2.0?
It was removed because the server side was a moving target and would 
constantly break. It was based on pulling files out of the svn (and 
later git) repository, and sourceforge and github have had a habit of 
changing the urls used to do so. All of the data that was there was 
moved into the main repository and is now installed alongside 
matplotlib, so get_sample_data() still works.
See this PR: https://github.com/matplotlib/matplotlib/pull/498
I should have mentioned it earlier, that we do have a very small set of 
standard data sets included there -- but these other projects linked to 
above are much better and more extensive. If we can rely on them to 
have static urls over time, I think they are much better options than 
anything matplotlib has had in the past.
Mike
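For reference, a small sketch of pulling one of those bundled files with get_sample_data (the file name 'msft.csv' here is an assumption; any file under mpl-data/sample_data will do):
import matplotlib.cbook as cbook

path = cbook.get_sample_data('msft.csv', asfileobj=False)   # absolute path to the installed copy
print path
with open(path) as f:
    print f.readline()                                      # first line of the csv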
From: <jos...@gm...> - 2012年09月26日 13:41:39
On Wed, Sep 26, 2012 at 9:33 AM, Benjamin Root <ben...@ou...> wrote:
>
>
> On Wed, Sep 26, 2012 at 9:10 AM, Michael Droettboom <md...@st...> wrote:
>>
>> On 09/26/2012 12:28 AM, jos...@gm... wrote:
>> > On Wed, Sep 26, 2012 at 12:05 AM, Paul Tremblay
>> > <pau...@gm...> wrote:
>> >> In R, there are many default data sets one can use to both illustrate
>> >> code
>> >> and explore the scripting language. Instead of having to fake data, one
>> >> can
>> >> pull from meaningful data sets, created in the real world. For example,
>> >> this
>> >> one liner actually produces a plot:
>> >>
>> >> plot(mtcars$hp~mtcars$mpg)
>> >>
>> >> where mtcars refers to a built-in data set taken from Motor Trend
>> >> Magazine.
>> >> I don't believe matplotlib has anything similar. I have started to
>> >> download
>> >> some of the R data sets and store them as pickles for my own use. Does
>> >> anyone else have any interest in creating a repository for these data
>> >> sets
>> >> or otherwise sharing them in some way?
>> > Vincent converted several R datasets back to csv, that can be easily
>> > loaded from the web with, for example, pandas.
>> > http://vincentarelbundock.github.com/Rdatasets/
>> > The collection is a bit random.
>> >
>> > statsmodels has some datasets that we use for examples and tests
>> > http://statsmodels.sourceforge.net/devel/datasets/index.html
>> > We were always a bit slow with adding datasets because we were too
>> > cautious about licensing issues. But R seems to get away with
>> > considering most datasets to be public domain.
>> > We keep adding datasets to statsmodels as we need them for new models.
>> >
>> > The machine learning packages like sklearn have packaged the typical
>> > machine learning datasets.
>> >
>> > If you are interested, you could join up with statsmodels or with
>> > Vincent to expand on what's available.
>> >
>> It seems to me like contributing to (rather than duplicating) the work
>> of one of these projects would be a great idea. It would also be nice
>> to add functionality in matplotlib to make it easier to download these
>> things as a one-off -- obviously not exactly the same syntax as with R,
>> but ideally with a single function call.
>>
>> Mike
>>
>
> We did have such a thing. matplotlib.cbook.get_sample_data(). I think we
> got rid of it for 1.2.0?
I don't know the details, but it looks like in pandas they spend some
time on python 3 compatibility, in case that was a problem
https://github.com/pydata/pandas/pull/970
Josef
>
> Ben
>
>
From: Benjamin R. <ben...@ou...> - 2012年09月26日 13:33:47
On Wed, Sep 26, 2012 at 9:10 AM, Michael Droettboom <md...@st...> wrote:
> On 09/26/2012 12:28 AM, jos...@gm... wrote:
> > On Wed, Sep 26, 2012 at 12:05 AM, Paul Tremblay <pau...@gm...>
> wrote:
> >> In R, there are many default data sets one can use to both illustrate
> code
> >> and explore the scripting language. Instead of having to fake data, one
> can
> >> pull from meaningful data sets, created in the real world. For example,
> this
> >> one liner actually produces a plot:
> >>
> >> plot(mtcars$hp~mtcars$mpg)
> >>
> >> where mtcars refers to a built-in data set taken from Motor Trend
> Magazine.
> >> I don't believe matplotlib has anything similar. I have started to
> download
> >> some of the R data sets and store them as pickles for my own use. Does
> >> anyone else have any interest in creating a repository for these data
> sets
> >> or otherwise sharing them in some way?
> > Vincent converted several R datasets back to csv, that can be easily
> > loaded from the web with, for example, pandas.
> > http://vincentarelbundock.github.com/Rdatasets/
> > The collection is a bit random.
> >
> > statsmodels has some datasets that we use for examples and tests
> > http://statsmodels.sourceforge.net/devel/datasets/index.html
> > We were always a bit slow with adding datasets because we were too
> > cautious about licensing issues. But R seems to get away with
> > considering most datasets to be public domain.
> > We keep adding datasets to statsmodels as we need them for new models.
> >
> > The machine learning packages like sklearn have packaged the typical
> > machine learning datasets.
> >
> > If you are interested, you could join up with statsmodels or with
> > Vincent to expand on what's available.
> >
> It seems to me like contributing to (rather than duplicating) the work
> of one of these projects would be a great idea. It would also be nice
> to add functionality in matplotlib to make it easier to download these
> things as a one-off -- obviously not exactly the same syntax as with R,
> but ideally with a single function call.
>
> Mike
>
>
We did have such a thing. matplotlib.cbook.get_sample_data(). I think we
got rid of it for 1.2.0?
Ben
From: Benjamin R. <ben...@ou...> - 2012年09月26日 13:30:54
On Wed, Sep 26, 2012 at 5:58 AM, Pierre Haessig <pie...@cr...>wrote:
> Hello,
>
> In relation to the recent thread on pareto chart, I have a question with
> regards to the synchronization of ticks location when using twinx plots.
> This question may have been adressed in the past, but my Google search
> on this topic was unfruitful. Sorry if this question was already answered.
>
> Basically, when using twinx two plot two different data sets, the scale
> of the data, in the general case, is to be different (thus the need for
> twinx). However, adding a grid to such a twinx plot leads to a very
> irregular placement because the ticks positions are not "synchronized".
> (I pasted a quick example to illustrate the tick placement issue at the
> end of this message)
>
> Ideally, I would like that both axis share a common placement for the
> ticks. Is there a way to do that ?
>
> I guess this would require that the two ticks Locators share "some
> algorithm" so that they would "agree" on the placement instead of
> working separetely. Other option may be that one Locator could work as a
> slave of the other one. (I'm just trying to draw a rough picture)
>
> Best,
> Pierre
>
> quick example to illustrate the tick placement issue :
>
> import matplotlib.pyplot as plt
>
> ax1 = plt.subplot(111)
> ax1.plot([1,2,3], 'bo-')
> ax1.grid(True)
>
> ax2 = plt.twinx(ax1)
> ax2.plot([11,12], 'rs-')
> ax2.grid(True)
>
> ax1.set_xlim(-0.5,2.5)
>
> plt.show()
>
>
>
Probably could have the two axes listen for an "xlim_changed" event, check
to see if it belongs to its twin, and update itself accordingly (without
emitting).
Ben Root
From: Benjamin R. <ben...@ou...> - 2012年09月26日 13:25:56
On Wed, Sep 26, 2012 at 4:31 AM, Pierre Haessig <pie...@cr...>wrote:
> Hi,
>
> Just a detail :
>
> Le 26/09/2012 04:29, Paul Tremblay a écrit :
>
> percent = (np.divide(the_cumsum, the_sum)) * 100
>
> This line doesn't work on my computer (numpy 1.6.2)
>
> Indeed, there is a casting issue :
> In [2]: percent
> Out[2]: array([ 0, 0, 0, 0, 100])
>
> However, using the regular "/" operator instead of np.divide gives the
> proper result:
> In [8]: the_cumsum/the_sum*100
> Out[8]: array([ 42.10526316, 71.05263158, 90.78947368, 97.36842105,
> 100. ])
>
> Best,
> Pierre
>
>
Actually, if you are using the latest numpy (the 1.7 beta), that will also
not work unless you are using py3k or did "from __future__ import
division". Well, actually, using np.divide will always result in integer
division (this may or may not be a bug).
The correct thing to do until we move on to py3k with its "true division"
is to make sure we cast one of the operands as floats:
the_sum.astype('f'). Plus, using '/' is more concise and readable.
Cheers!
Ben Root
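A tiny sketch of the difference, using hypothetical integer counts (under Python 2 with older numpy, np.divide on integer inputs floors the result):
import numpy as np

totals = np.array([3, 2, 1, 5, 6, 7])                   # hypothetical integer counts
the_cumsum = np.cumsum(totals)
the_sum = totals.sum()

percent_int = np.divide(the_cumsum, the_sum) * 100      # integer division: mostly zeros
percent_flt = the_cumsum.astype('f') / the_sum * 100    # cast one operand to float first
print percent_int
print percent_flt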
From: Michael D. <md...@st...> - 2012年09月26日 13:13:27
On 09/26/2012 12:28 AM, jos...@gm... wrote:
> On Wed, Sep 26, 2012 at 12:05 AM, Paul Tremblay <pau...@gm...> wrote:
>> In R, there are many default data sets one can use to both illustrate code
>> and explore the scripting language. Instead of having to fake data, one can
>> pull from meaningful data sets, created in the real world. For example, this
>> one liner actually produces a plot:
>>
>> plot(mtcars$hp~mtcars$mpg)
>>
>> where mtcars refers to a built-in data set taken from Motor Trend Magazine.
>> I don't believe matplotlib has anything similar. I have started to download
>> some of the R data sets and store them as pickles for my own use. Does
>> anyone else have any interest in creating a repository for these data sets
>> or otherwise sharing them in some way?
> Vincent converted several R datasets back to csv, that can be easily
> loaded from the web with, for example, pandas.
> http://vincentarelbundock.github.com/Rdatasets/
> The collection is a bit random.
>
> statsmodels has some datasets that we use for examples and tests
> http://statsmodels.sourceforge.net/devel/datasets/index.html
> We were always a bit slow with adding datasets because we were too
> cautious about licensing issues. But R seems to get away with
> considering most datasets to be public domain.
> We keep adding datasets to statsmodels as we need them for new models.
>
> The machine learning packages like sklearn have packaged the typical
> machine learning datasets.
>
> If you are interested, you could join up with statsmodels or with
> Vincent to expand on what's available.
>
It seems to me like contributing to (rather than duplicating) the work 
of one of these projects would be a great idea. It would also be nice 
to add functionality in matplotlib to make it easier to download these 
things as a one-off -- obviously not exactly the same syntax as with R, 
but ideally with a single function call.
Mike
From: Pierre H. <pie...@cr...> - 2012年09月26日 09:58:43
Attachments: signature.asc
Hello,
In relation to the recent thread on pareto chart, I have a question with
regards to the synchronization of ticks location when using twinx plots.
This question may have been addressed in the past, but my Google search
on this topic was unfruitful. Sorry if this question was already answered.
Basically, when using twinx to plot two different data sets, the scales
of the data are in general different (hence the need for twinx).
However, adding a grid to such a twinx plot leads to a very irregular
placement because the tick positions are not "synchronized".
(I pasted a quick example to illustrate the tick placement issue at the
end of this message.)
Ideally, I would like both axes to share a common placement for the
ticks. Is there a way to do that?
I guess this would require that the two tick Locators share "some
algorithm" so that they would "agree" on the placement instead of
working separately. Another option may be that one Locator could work as
a slave of the other one. (I'm just trying to draw a rough picture.)
Best,
Pierre
quick example to illustrate the tick placement issue :
import matplotlib.pyplot as plt
ax1 = plt.subplot(111)
ax1.plot([1,2,3], 'bo-')
ax1.grid(True)
ax2 = plt.twinx(ax1)
ax2.plot([11,12], 'rs-')
ax2.grid(True)
ax1.set_xlim(-0.5,2.5)
plt.show()
From: Pierre H. <pie...@cr...> - 2012年09月26日 08:31:56
Attachments: signature.asc
Hi,
Just a detail :
Le 26/09/2012 04:29, Paul Tremblay a écrit :
> percent = (np.divide(the_cumsum, the_sum)) * 100
This line doesn't work on my computer (numpy 1.6.2)
Indeed, there is a casting issue :
In [2]: percent
Out[2]: array([ 0, 0, 0, 0, 100])
However, using the regular "/" operator instead of np.divide gives the
proper result:
In [8]: the_cumsum/the_sum*100
Out[8]: array([ 42.10526316, 71.05263158, 90.78947368, 
97.36842105, 100. ])
Best,
Pierre
From: <jos...@gm...> - 2012年09月26日 04:28:48
On Wed, Sep 26, 2012 at 12:05 AM, Paul Tremblay <pau...@gm...> wrote:
> In R, there are many default data sets one can use to both illustrate code
> and explore the scripting language. Instead of having to fake data, one can
> pull from meaningful data sets, created in the real world. For example, this
> one liner actually produces a plot:
>
> plot(mtcars$hp~mtcars$mpg)
>
> where mtcars refers to a built-in data set taken from Motor Trend Magazine.
> I don't believe matplotlib has anything similar. I have started to download
> some of the R data sets and store them as pickles for my own use. Does
> anyone else have any interest in creating a repository for these data sets
> or otherwise sharing them in some way?
Vincent converted several R datasets back to csv, that can be easily
loaded from the web with, for example, pandas.
http://vincentarelbundock.github.com/Rdatasets/
The collection is a bit random.
statsmodels has some datasets that we use for examples and tests
http://statsmodels.sourceforge.net/devel/datasets/index.html
We were always a bit slow with adding datasets because we were too
cautious about licensing issues. But R seems to get away with
considering most datasets to be public domain.
We keep adding datasets to statsmodels as we need them for new models.
The machine learning packages like sklearn have packaged the typical
machine learning datasets.
If you are interested, you could join up with statsmodels or with
Vincent to expand on what's available.
Josef
>
> Paul
>
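A small sketch of the pandas route for the mtcars example mentioned above (the exact path under the Rdatasets mirror is an assumption; check the site's index for the real layout):
import pandas as pd
import matplotlib.pyplot as plt

url = 'http://vincentarelbundock.github.com/Rdatasets/csv/datasets/mtcars.csv'
mtcars = pd.read_csv(url)

plt.scatter(mtcars['mpg'], mtcars['hp'])    # roughly plot(mtcars$hp ~ mtcars$mpg) from R
plt.xlabel('mpg')
plt.ylabel('hp')
plt.show()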
From: Paul T. <pau...@gm...> - 2012年09月26日 04:05:48
In R, there are many default data sets one can use to both illustrate code
and explore the scripting language. Instead of having to fake data, one can
pull from meaningful data sets, created in the real world. For example,
this one liner actually produces a plot:
plot(mtcars$hp~mtcars$mpg)
where mtcars refers to a built-in data set taken from Motor Trend Magazine.
I don't believe matplotlib has anything similar. I have started to download
some of the R data sets and store them as pickles for my own use. Does
anyone else have any interest in creating a repository for these data sets
or otherwise sharing them in some way?
Paul
From: Michael D. <md...@st...> - 2012年09月26日 03:01:33
It doesn't have that functionality because we haven't needed it 
internally. Feel free to add it if you need it.
Mike
On 09/25/2012 07:13 PM, Damon McDougall wrote:
> Hi,
>
> I'm playing with cbook.Grouper(), and I see that join() adds elements.
> How do I remove elements?
>
> Best,
> Damon
>
