
matplotlib-checkins — Commit notification. DO NOT POST to this list, just subscribe to it.


Revision: 4459
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4459&view=rev
Author: jswhit
Date: 2007-11-26 15:06:15 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
docstring modifications.
Modified Paths:
--------------
 trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
Modified: trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
===================================================================
--- trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 22:53:37 UTC (rev 4458)
+++ trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 23:06:15 UTC (rev 4459)
@@ -53,8 +53,11 @@
 
 def NetCDFFile(file):
 """NetCDF File reader. API is the same as Scientific.IO.NetCDF.
- if 'file' is a URL that starts with 'http', the pydap client is
- is used to read the data over http."""
+ If 'file' is a URL that starts with 'http', it is assumed
+ to be a remote OPenDAP dataest, and the python dap client is used
+ to retrieve the data. Only the OPenDAP Array and Grid data
+ types are recognized. If file does not start with 'http', it
+ is assumed to be a local NetCDF file."""
 if file.startswith('http'):
 return _RemoteFile(file)
 else:
@@ -361,8 +364,3 @@
 
 def typecode(self):
 return ['b', 'c', 'h', 'i', 'f', 'd'][self._nc_type-1]
-
- 
-def _test():
- import doctest
- doctest.testmod()
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
Revision: 4458
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4458&view=rev
Author: jswhit
Date: 2007-11-26 14:53:37 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
make close method a no-op for remote datasets.
Modified Paths:
--------------
 trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
Modified: trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
===================================================================
--- trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 21:53:01 UTC (rev 4457)
+++ trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 22:53:37 UTC (rev 4458)
@@ -114,7 +114,8 @@
 self.variables[name] = _RemoveVariable(d)
 
 def close(self):
- self._buffer.close()
+ # this is a no-op provided for compatibility
+ pass
 
 
 class _RemoveVariable(object):
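The rationale for the no-op close above is that a remote dataset owns no file handle, while keeping the method lets callers close any dataset unconditionally. A minimal sketch of that pattern (class names are illustrative, not pupynere's):

```python
class RemoteDataset:
    """Remote datasets hold no open buffer, so close() is a no-op
    kept purely for API compatibility with the local reader."""
    def close(self):
        # this is a no-op provided for compatibility
        pass

class LocalDataset:
    """A local reader really does own a buffer that must be released."""
    def __init__(self):
        self.closed = False
    def close(self):
        self.closed = True  # a real reader would close its file buffer here

def finish(dataset):
    # callers close unconditionally, whatever the dataset type
    dataset.close()
```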
From: <jd...@us...> - 2007-11-26 21:53:07
Revision: 4457
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4457&view=rev
Author: jdh2358
Date: 2007-11-26 13:53:01 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
fixed a bug in unit processing -- thanks chris
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/axes.py
 trunk/matplotlib/lib/matplotlib/mlab.py
Modified: trunk/matplotlib/lib/matplotlib/axes.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/axes.py	2007-11-26 19:21:23 UTC (rev 4456)
+++ trunk/matplotlib/lib/matplotlib/axes.py	2007-11-26 21:53:01 UTC (rev 4457)
@@ -182,7 +182,7 @@
 
 def __call__(self, *args, **kwargs):
 
- if self.axes.xaxis is not None and self.axes.xaxis is not None:
+ if self.axes.xaxis is not None and self.axes.yaxis is not None:
 xunits = kwargs.pop( 'xunits', self.axes.xaxis.units)
 yunits = kwargs.pop( 'yunits', self.axes.yaxis.units)
 if xunits!=self.axes.xaxis.units:
@@ -1289,6 +1289,8 @@
 if self.axison and self._frameon: self.axesPatch.draw(renderer)
 artists = []
 
+
+
 if len(self.images)<=1 or renderer.option_image_nocomposite():
 for im in self.images:
 im.draw(renderer)
@@ -3313,7 +3315,7 @@
 self.hold(holdstate) # restore previous hold state
 
 if adjust_xlim:
- xmin, xmax = self.dataLim.intervalx().get_bounds() 
+ xmin, xmax = self.dataLim.intervalx().get_bounds()
 xmin = npy.amin(width)
 if xerr is not None:
 xmin = xmin - npy.amax(xerr)
Modified: trunk/matplotlib/lib/matplotlib/mlab.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/mlab.py	2007-11-26 19:21:23 UTC (rev 4456)
+++ trunk/matplotlib/lib/matplotlib/mlab.py	2007-11-26 21:53:01 UTC (rev 4457)
@@ -14,10 +14,7 @@
 * find - Return the indices where some condition is true;
 numpy.nonzero is similar but more general.
 
- * polyfit - least squares best polynomial fit of x to y
 
- * polyval - evaluate a vector for a vector of polynomial coeffs
-
 * prctile - find the percentiles of a sequence
 
 * prepca - Principal Component Analysis
@@ -29,11 +26,14 @@
 
 The following are deprecated; please import directly from numpy
 (with care--function signatures may differ):
+
 * conv - convolution (numpy.convolve)
 * corrcoef - The matrix of correlation coefficients
 * hist -- Histogram (numpy.histogram)
 * linspace -- Linear spaced array from min to max
 * meshgrid
+ * polyfit - least squares best polynomial fit of x to y
+ * polyval - evaluate a vector for a vector of polynomial coeffs
 * trapz - trapeziodal integration (trapz(x,y) -> numpy.trapz(y,x))
 * vander - the Vandermonde matrix
 
@@ -46,13 +46,13 @@
 
 = record array helper functions =
 
- rec2csv : store record array in CSV file
- rec2excel : store record array in excel worksheet - required pyExcelerator
- rec2gtk : put record array in GTK treeview - requires gtk
- csv2rec : import record array from CSV file with type inspection
- rec_append_field : add a field/array to record array
- rec_drop_fields : drop fields from record array
- rec_join : join two record arrays on sequence of fields
+ * rec2csv : store record array in CSV file
+ * rec2excel : store record array in excel worksheet - required pyExcelerator
+ * rec2gtk : put record array in GTK treeview - requires gtk
+ * csv2rec : import record array from CSV file with type inspection
+ * rec_append_field : add a field/array to record array
+ * rec_drop_fields : drop fields from record array
+ * rec_join : join two record arrays on sequence of fields
 
 For the rec viewer clases (rec2csv, rec2excel and rec2gtk), there are
 a bunch of Format objects you can pass into the functions that will do
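The one-character fix in the axes.py hunk above corrects a classic copy-paste guard: the original condition tested `xaxis` twice, so an Axes with no `yaxis` slipped past the check and later code would fail when reading `yaxis.units`. A toy reproduction, with a plain class standing in for matplotlib's Axes:

```python
class FakeAxes:
    """Minimal stand-in for matplotlib's Axes, just for the guard logic."""
    def __init__(self, xaxis, yaxis):
        self.xaxis = xaxis
        self.yaxis = yaxis

def guard_buggy(axes):
    # the original test: xaxis checked twice, yaxis never inspected
    return axes.xaxis is not None and axes.xaxis is not None

def guard_fixed(axes):
    # the committed fix: each axis checked exactly once
    return axes.xaxis is not None and axes.yaxis is not None

ax = FakeAxes(xaxis=object(), yaxis=None)
# guard_buggy(ax) is True, so guarded code would go on to touch
# axes.yaxis.units and raise; guard_fixed(ax) is False.
```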
Revision: 4456
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4456&view=rev
Author: jswhit
Date: 2007-11-26 11:21:23 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
add docstring.
Modified Paths:
--------------
 trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
Modified: trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
===================================================================
--- trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 19:14:29 UTC (rev 4455)
+++ trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py	2007-11-26 19:21:23 UTC (rev 4456)
@@ -52,10 +52,13 @@
 _typecodes = dict([[_v,_k] for _k,_v in typemap.items()])
 
 def NetCDFFile(file):
+ """NetCDF File reader. API is the same as Scientific.IO.NetCDF.
+ if 'file' is a URL that starts with 'http', the pydap client is
+ is used to read the data over http."""
 if file.startswith('http'):
- return RemoteFile(file)
+ return _RemoteFile(file)
 else:
- return LocalFile(file)
+ return _LocalFile(file)
 
 def _maskandscale(var,datout):
 if hasattr(var, 'missing_value') and (datout == var.missing_value).any():
@@ -68,7 +71,7 @@
 pass
 return datout
 
-class RemoteFile(object):
+class _RemoteFile(object):
 """A NetCDF file reader. API is the same as Scientific.IO.NetCDF."""
 
 def __init__(self, file):
@@ -108,13 +111,13 @@
 for k,d in self._buffer.iteritems():
 if isinstance(d, GridType) or isinstance(d, ArrayType):
 name = k
- self.variables[name] = RemoteVariable(d)
+ self.variables[name] = _RemoveVariable(d)
 
 def close(self):
 self._buffer.close()
 
 
-class RemoteVariable(object):
+class _RemoveVariable(object):
 def __init__(self, var):
 self._var = var
 self.dtype = var.type
@@ -134,7 +137,7 @@
 return _typecodes[self.dtype]
 
 
-class LocalFile(object):
+class _LocalFile(object):
 """A NetCDF file reader. API is the same as Scientific.IO.NetCDF."""
 
 def __init__(self, file):
@@ -266,7 +269,7 @@
 # Read offset.
 begin = [self._unpack_int, self._unpack_int64][self.version_byte-1]()
 
- return LocalVariable(self._buffer.fileno(), nc_type, vsize, begin, shape, dimensions, attributes, isrec, self._recsize)
+ return _LocalVariable(self._buffer.fileno(), nc_type, vsize, begin, shape, dimensions, attributes, isrec, self._recsize)
 
 def _read_values(self, n, nc_type):
 bytes = [1, 1, 2, 4, 4, 8]
@@ -305,7 +308,7 @@
 self._buffer.close()
 
 
-class LocalVariable(object):
+class _LocalVariable(object):
 def __init__(self, fileno, nc_type, vsize, begin, shape, dimensions, attributes, isrec=False, recsize=0):
 self._nc_type = nc_type
 self._vsize = vsize
From: <js...@us...> - 2007-11-26 19:14:42
Revision: 4455
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4455&view=rev
Author: jswhit
Date: 2007-11-26 11:14:29 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
adjust map projection region
Modified Paths:
--------------
 trunk/toolkits/basemap/examples/fcstmaps.py
Modified: trunk/toolkits/basemap/examples/fcstmaps.py
===================================================================
--- trunk/toolkits/basemap/examples/fcstmaps.py	2007-11-26 19:08:46 UTC (rev 4454)
+++ trunk/toolkits/basemap/examples/fcstmaps.py	2007-11-26 19:14:29 UTC (rev 4455)
@@ -84,9 +84,7 @@
 t2min = t2mvar[0:ntimes,:,:]
 t2m = numpy.zeros((ntimes,nlats,nlons+1),t2min.dtype)
 # create Basemap instance for Orthographic projection.
-m = Basemap(lon_0=-105,lat_0=40,
- rsphere=6371200.,
- resolution='c',area_thresh=5000.,projection='ortho')
+m = Basemap(lon_0=-90,lat_0=60,projection='ortho')
 # add wrap-around point in longitude.
 for nt in range(ntimes):
 t2m[nt,:,:], lons = addcyclic(t2min[nt,:,:], lons1)
From: <js...@us...> - 2007-11-26 19:09:49
Revision: 4454
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4454&view=rev
Author: jswhit
Date: 2007-11-26 11:08:46 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
bump version number
Modified Paths:
--------------
 trunk/toolkits/basemap/Changelog
 trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/basemap.py
 trunk/toolkits/basemap/setup.py
Modified: trunk/toolkits/basemap/Changelog
===================================================================
--- trunk/toolkits/basemap/Changelog	2007-11-26 19:03:56 UTC (rev 4453)
+++ trunk/toolkits/basemap/Changelog	2007-11-26 19:08:46 UTC (rev 4454)
@@ -1,5 +1,6 @@
+version 0.9.8 (not yet released)
 * modify NetCDFFile to use dap module to read remote
- datasets over http. Include dap module.
+ datasets over http. Include dap and httplib2 modules.
 * modify NetCDFFile to automatically apply scale_factor
 and add_offset, and return masked arrays masked where
 data == missing_value or _FillValue.
Modified: trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/basemap.py
===================================================================
--- trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/basemap.py	2007-11-26 19:03:56 UTC (rev 4453)
+++ trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/basemap.py	2007-11-26 19:08:46 UTC (rev 4454)
@@ -19,7 +19,7 @@
 # basemap data files now installed in lib/matplotlib/toolkits/basemap/data
 basemap_datadir = os.sep.join([os.path.dirname(__file__), 'data'])
 
-__version__ = '0.9.7'
+__version__ = '0.9.8'
 
 # supported map projections.
 _projnames = {'cyl' : 'Cylindrical Equidistant',
Modified: trunk/toolkits/basemap/setup.py
===================================================================
--- trunk/toolkits/basemap/setup.py	2007-11-26 19:03:56 UTC (rev 4453)
+++ trunk/toolkits/basemap/setup.py	2007-11-26 19:08:46 UTC (rev 4454)
@@ -134,7 +134,7 @@
 package_data = {'matplotlib.toolkits.basemap':pyproj_datafiles+basemap_datafiles}
 setup(
 name = "basemap",
- version = "0.9.7",
+ version = "0.9.8",
 description = "Plot data on map projections with matplotlib",
 long_description = """
 An add-on toolkit for matplotlib that lets you plot data
From: <js...@us...> - 2007-11-26 19:04:30
Revision: 4453
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4453&view=rev
Author: jswhit
Date: 2007-11-26 11:03:56 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
update fcstmaps
Modified Paths:
--------------
 trunk/toolkits/basemap/examples/README
Modified: trunk/toolkits/basemap/examples/README
===================================================================
--- trunk/toolkits/basemap/examples/README	2007-11-26 19:02:33 UTC (rev 4452)
+++ trunk/toolkits/basemap/examples/README	2007-11-26 19:03:56 UTC (rev 4453)
@@ -48,10 +48,10 @@
 geos_demo_2.py demonstrates how to read in a geostationary satellite image
 from a jpeg file, then plot only a portion of the full earth (contributed
 by Scott Sinclair, requires PIL).
-fcstmaps.py is a sample multi-panel plot. Care is taken to preserve the aspect ratio of the map in each panel, and a common title and colorbar is created. 
-Requires the opendap module (http://opendap.oceanografia.org) and an 
-active internet connection to fetch the data to be plotted.
 
+fcstmaps.py is a sample multi-panel plot that accesses
+data over http using the dap module. An internet connection is required.
+
 wiki_example.py is the example from the MatplotlibCookbook scipy wiki page
 (http://www.scipy.org/wikis/topical_software/MatplotlibCookbook/wikipage_view).
 
@@ -104,4 +104,4 @@
 
 run_all.py is a driver script that runs all the examples except fcstmaps.py,
 testgdal.py, geos_demo_2.py, warpimage.py, and pnganim.py (which
-rely on external dependencies).
+rely on external dependencies and/or an internet connection).
From: <js...@us...> - 2007-11-26 19:02:42
Revision: 4452
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4452&view=rev
Author: jswhit
Date: 2007-11-26 11:02:33 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
only install dap and httplib2 if not already available
Modified Paths:
--------------
 trunk/toolkits/basemap/setup.py
Modified: trunk/toolkits/basemap/setup.py
===================================================================
--- trunk/toolkits/basemap/setup.py	2007-11-26 18:58:41 UTC (rev 4451)
+++ trunk/toolkits/basemap/setup.py	2007-11-26 19:02:33 UTC (rev 4452)
@@ -102,16 +102,17 @@
 define_macros = dbf_macros()) ]
 
 # install dap and httplib2, if not already available.
-#try:
-# from dap import client
-#except ImportError:
-packages = packages + ['dap','dap.util','dap.parsers']
-package_dirs['dap'] = os.path.join('lib','dap')
-#try:
-# import httplib2
-#except ImportError:
-packages = packages + ['httplib2']
-package_dirs['httlib2'] = os.path.join('lib','httplib2')
+# only a subset of dap is installed (the client, not the server)
+try:
+ from dap import client
+except ImportError:
+ packages = packages + ['dap','dap.util','dap.parsers']
+ package_dirs['dap'] = os.path.join('lib','dap')
+try:
+ import httplib2
+except ImportError:
+ packages = packages + ['httplib2']
+ package_dirs['httlib2'] = os.path.join('lib','httplib2')
 
 if 'setuptools' in sys.modules:
 # Are we running with setuptools?
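The re-enabled try/except blocks above implement "bundle only if missing": each third-party package is added to the build only when an import of it fails. A standalone sketch of that pattern (the package names below are illustrative, not basemap's actual setup.py logic):

```python
import importlib

def packages_to_bundle(candidates):
    """For each (module_name, bundled_packages) pair, include the bundled
    packages only when the module is not already importable -- the same
    try/except ImportError pattern used in setup.py."""
    missing = []
    for name, bundled in candidates:
        try:
            importlib.import_module(name)
        except ImportError:
            missing.extend(bundled)
    return missing

# 'json' ships with Python, so only the made-up package gets bundled
to_bundle = packages_to_bundle([
    ("json", ["json"]),
    ("no_such_dap_client", ["dap", "dap.util", "dap.parsers"]),
])
```

One consequence of this approach: a system-installed copy always wins over the bundled one, which avoids shadowing a newer version the user already has.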
From: <js...@us...> - 2007-11-26 18:58:55
Revision: 4451
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4451&view=rev
Author: jswhit
Date: 2007-11-26 10:58:41 -0800 (Mon, 26 Nov 2007)
Log Message:
-----------
add httplib2, since dap requires it.
Modified Paths:
--------------
 trunk/toolkits/basemap/MANIFEST.in
 trunk/toolkits/basemap/setup.py
Added Paths:
-----------
 trunk/toolkits/basemap/lib/httplib2/
 trunk/toolkits/basemap/lib/httplib2/__init__.py
 trunk/toolkits/basemap/lib/httplib2/iri2uri.py
Modified: trunk/toolkits/basemap/MANIFEST.in
===================================================================
--- trunk/toolkits/basemap/MANIFEST.in	2007-11-26 18:48:01 UTC (rev 4450)
+++ trunk/toolkits/basemap/MANIFEST.in	2007-11-26 18:58:41 UTC (rev 4451)
@@ -75,6 +75,7 @@
 include MANIFEST.in 
 recursive-include geos-2.2.3 *
 recursive-include lib/dap *
+recursive-include lib/httplib2 *
 recursive-include lib/dbflib *
 recursive-include lib/shapelib *
 include lib/matplotlib/toolkits/basemap/data/5minmask.bin
Added: trunk/toolkits/basemap/lib/httplib2/__init__.py
===================================================================
--- trunk/toolkits/basemap/lib/httplib2/__init__.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/httplib2/__init__.py	2007-11-26 18:58:41 UTC (rev 4451)
@@ -0,0 +1,1123 @@
+from __future__ import generators
+"""
+httplib2
+
+A caching http interface that supports ETags and gzip
+to conserve bandwidth. 
+
+Requires Python 2.3 or later
+
+Changelog:
+2007-08-18, Rick: Modified so it's able to use a socks proxy if needed.
+
+"""
+
+__author__ = "Joe Gregorio (jo...@bi...)"
+__copyright__ = "Copyright 2006, Joe Gregorio"
+__contributors__ = ["Thomas Broyer (t.b...@lt...)",
+ "James Antill",
+ "Xavier Verges Farrero",
+ "Jonathan Feinberg",
+ "Blair Zajac",
+ "Sam Ruby",
+ "Louis Nyffenegger"]
+__license__ = "MIT"
+__version__ = "$Rev: 259 $"
+
+import re 
+import sys 
+import md5
+import email
+import email.Utils
+import email.Message
+import StringIO
+import gzip
+import zlib
+import httplib
+import urlparse
+import base64
+import os
+import copy
+import calendar
+import time
+import random
+import sha
+import hmac
+from gettext import gettext as _
+import socket
+
+try:
+ import socks
+except ImportError:
+ socks = None
+
+if sys.version_info >= (2,3):
+ from iri2uri import iri2uri
+else:
+ def iri2uri(uri):
+ return uri
+
+__all__ = ['Http', 'Response', 'ProxyInfo', 'HttpLib2Error',
+ 'RedirectMissingLocation', 'RedirectLimit', 'FailedToDecompressContent', 
+ 'UnimplementedDigestAuthOptionError', 'UnimplementedHmacDigestAuthOptionError',
+ 'debuglevel']
+
+
+# The httplib debug level, set to a non-zero value to get debug output
+debuglevel = 0
+
+# Python 2.3 support
+if sys.version_info < (2,4):
+ def sorted(seq):
+ seq.sort()
+ return seq
+
+# Python 2.3 support
+def HTTPResponse__getheaders(self):
+ """Return list of (header, value) tuples."""
+ if self.msg is None:
+ raise httplib.ResponseNotReady()
+ return self.msg.items()
+
+if not hasattr(httplib.HTTPResponse, 'getheaders'):
+ httplib.HTTPResponse.getheaders = HTTPResponse__getheaders
+
+# All exceptions raised here derive from HttpLib2Error
+class HttpLib2Error(Exception): pass
+
+# Some exceptions can be caught and optionally 
+# be turned back into responses. 
+class HttpLib2ErrorWithResponse(HttpLib2Error):
+ def __init__(self, desc, response, content):
+ self.response = response
+ self.content = content
+ HttpLib2Error.__init__(self, desc)
+
+class RedirectMissingLocation(HttpLib2ErrorWithResponse): pass
+class RedirectLimit(HttpLib2ErrorWithResponse): pass
+class FailedToDecompressContent(HttpLib2ErrorWithResponse): pass
+class UnimplementedDigestAuthOptionError(HttpLib2ErrorWithResponse): pass
+class UnimplementedHmacDigestAuthOptionError(HttpLib2ErrorWithResponse): pass
+
+class RelativeURIError(HttpLib2Error): pass
+class ServerNotFoundError(HttpLib2Error): pass
+
+# Open Items:
+# -----------
+# Proxy support
+
+# Are we removing the cached content too soon on PUT (only delete on 200 Maybe?)
+
+# Pluggable cache storage (supports storing the cache in
+# flat files by default. We need a plug-in architecture
+# that can support Berkeley DB and Squid)
+
+# == Known Issues ==
+# Does not handle a resource that uses conneg and Last-Modified but no ETag as a cache validator.
+# Does not handle Cache-Control: max-stale
+# Does not use Age: headers when calculating cache freshness.
+
+
+# The number of redirections to follow before giving up.
+# Note that only GET redirects are automatically followed.
+# Will also honor 301 requests by saving that info and never
+# requesting that URI again.
+DEFAULT_MAX_REDIRECTS = 5
+
+# Which headers are hop-by-hop headers by default
+HOP_BY_HOP = ['connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization', 'te', 'trailers', 'transfer-encoding', 'upgrade']
+
+def _get_end2end_headers(response):
+ hopbyhop = list(HOP_BY_HOP)
+ hopbyhop.extend([x.strip() for x in response.get('connection', '').split(',')])
+ return [header for header in response.keys() if header not in hopbyhop]
+
+URI = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")
+
+def parse_uri(uri):
+ """Parses a URI using the regex given in Appendix B of RFC 3986.
+
+ (scheme, authority, path, query, fragment) = parse_uri(uri)
+ """
+ groups = URI.match(uri).groups()
+ return (groups[1], groups[3], groups[4], groups[6], groups[8])
+
+def urlnorm(uri):
+ (scheme, authority, path, query, fragment) = parse_uri(uri)
+ if not scheme or not authority:
+ raise RelativeURIError("Only absolute URIs are allowed. uri = %s" % uri)
+ authority = authority.lower()
+ scheme = scheme.lower()
+ if not path: 
+ path = "/"
+ # Could do syntax based normalization of the URI before
+ # computing the digest. See Section 6.2.2 of Std 66.
+ request_uri = query and "?".join([path, query]) or path
+ scheme = scheme.lower()
+ defrag_uri = scheme + "://" + authority + request_uri
+ return scheme, authority, request_uri, defrag_uri
+
+
+# Cache filename construction (original borrowed from Venus http://intertwingly.net/code/venus/)
+re_url_scheme = re.compile(r'^\w+://')
+re_slash = re.compile(r'[?/:|]+')
+
+def safename(filename):
+ """Return a filename suitable for the cache.
+
+ Strips dangerous and common characters to create a filename we
+ can use to store the cache in.
+ """
+
+ try:
+ if re_url_scheme.match(filename):
+ if isinstance(filename,str):
+ filename = filename.decode('utf-8')
+ filename = filename.encode('idna')
+ else:
+ filename = filename.encode('idna')
+ except UnicodeError:
+ pass
+ if isinstance(filename,unicode):
+ filename=filename.encode('utf-8')
+ filemd5 = md5.new(filename).hexdigest()
+ filename = re_url_scheme.sub("", filename)
+ filename = re_slash.sub(",", filename)
+
+ # limit length of filename
+ if len(filename)>200:
+ filename=filename[:200]
+ return ",".join((filename, filemd5))
+
+NORMALIZE_SPACE = re.compile(r'(?:\r\n)?[ \t]+')
+def _normalize_headers(headers):
+ return dict([ (key.lower(), NORMALIZE_SPACE.sub(value, ' ').strip()) for (key, value) in headers.iteritems()])
+
+def _parse_cache_control(headers):
+ retval = {}
+ if headers.has_key('cache-control'):
+ parts = headers['cache-control'].split(',')
+ parts_with_args = [tuple([x.strip() for x in part.split("=")]) for part in parts if -1 != part.find("=")]
+ parts_wo_args = [(name.strip(), 1) for name in parts if -1 == name.find("=")]
+ retval = dict(parts_with_args + parts_wo_args)
+ return retval 
+
+# Whether to use a strict mode to parse WWW-Authenticate headers
+# Might lead to bad results in case of ill-formed header value,
+# so disabled by default, falling back to relaxed parsing.
+# Set to true to turn on, usefull for testing servers.
+USE_WWW_AUTH_STRICT_PARSING = 0
+
+# In regex below:
+# [^\0-\x1f\x7f-\xff()<>@,;:\\\"/[\]?={} \t]+ matches a "token" as defined by HTTP
+# "(?:[^\0-\x08\x0A-\x1f\x7f-\xff\\\"]|\\[\0-\x7f])*?" matches a "quoted-string" as defined by HTTP, when LWS have already been replaced by a single space
+# Actually, as an auth-param value can be either a token or a quoted-string, they are combined in a single pattern which matches both:
+# \"?((?<=\")(?:[^\0-\x1f\x7f-\xff\\\"]|\\[\0-\x7f])*?(?=\")|(?<!\")[^\0-\x08\x0A-\x1f\x7f-\xff()<>@,;:\\\"/[\]?={} \t]+(?!\"))\"?
+WWW_AUTH_STRICT = re.compile(r"^(?:\s*(?:,\s*)?([^\0-\x1f\x7f-\xff()<>@,;:\\\"/[\]?={} \t]+)\s*=\s*\"?((?<=\")(?:[^\0-\x08\x0A-\x1f\x7f-\xff\\\"]|\\[\0-\x7f])*?(?=\")|(?<!\")[^\0-\x1f\x7f-\xff()<>@,;:\\\"/[\]?={} \t]+(?!\"))\"?)(.*)$")
+WWW_AUTH_RELAXED = re.compile(r"^(?:\s*(?:,\s*)?([^ \t\r\n=]+)\s*=\s*\"?((?<=\")(?:[^\\\"]|\\.)*?(?=\")|(?<!\")[^ \t\r\n,]+(?!\"))\"?)(.*)$")
+UNQUOTE_PAIRS = re.compile(r'\\(.)')
+def _parse_www_authenticate(headers, headername='www-authenticate'):
+ """Returns a dictionary of dictionaries, one dict
+ per auth_scheme."""
+ retval = {}
+ if headers.has_key(headername):
+ authenticate = headers[headername].strip()
+ www_auth = USE_WWW_AUTH_STRICT_PARSING and WWW_AUTH_STRICT or WWW_AUTH_RELAXED
+ while authenticate:
+ # Break off the scheme at the beginning of the line
+ if headername == 'authentication-info':
+ (auth_scheme, the_rest) = ('digest', authenticate) 
+ else:
+ (auth_scheme, the_rest) = authenticate.split(" ", 1)
+ # Now loop over all the key value pairs that come after the scheme, 
+ # being careful not to roll into the next scheme
+ match = www_auth.search(the_rest)
+ auth_params = {}
+ while match:
+ if match and len(match.groups()) == 3:
+ (key, value, the_rest) = match.groups()
+ auth_params[key.lower()] = UNQUOTE_PAIRS.sub(r'\1', value) # '\\'.join([x.replace('\\', '') for x in value.split('\\\\')])
+ match = www_auth.search(the_rest)
+ retval[auth_scheme.lower()] = auth_params
+ authenticate = the_rest.strip()
+ return retval
+
+
+def _entry_disposition(response_headers, request_headers):
+ """Determine freshness from the Date, Expires and Cache-Control headers.
+
+ We don't handle the following:
+
+ 1. Cache-Control: max-stale
+ 2. Age: headers are not used in the calculations.
+
+ Not that this algorithm is simpler than you might think 
+ because we are operating as a private (non-shared) cache.
+ This lets us ignore 's-maxage'. We can also ignore
+ 'proxy-invalidate' since we aren't a proxy.
+ We will never return a stale document as 
+ fresh as a design decision, and thus the non-implementation 
+ of 'max-stale'. This also lets us safely ignore 'must-revalidate' 
+ since we operate as if every server has sent 'must-revalidate'.
+ Since we are private we get to ignore both 'public' and
+ 'private' parameters. We also ignore 'no-transform' since
+ we don't do any transformations. 
+ The 'no-store' parameter is handled at a higher level.
+ So the only Cache-Control parameters we look at are:
+
+ no-cache
+ only-if-cached
+ max-age
+ min-fresh
+ """
+ 
+ retval = "STALE"
+ cc = _parse_cache_control(request_headers)
+ cc_response = _parse_cache_control(response_headers)
+
+ if request_headers.has_key('pragma') and request_headers['pragma'].lower().find('no-cache') != -1:
+ retval = "TRANSPARENT"
+ if 'cache-control' not in request_headers:
+ request_headers['cache-control'] = 'no-cache'
+ elif cc.has_key('no-cache'):
+ retval = "TRANSPARENT"
+ elif cc_response.has_key('no-cache'):
+ retval = "STALE"
+ elif cc.has_key('only-if-cached'):
+ retval = "FRESH"
+ elif response_headers.has_key('date'):
+ date = calendar.timegm(email.Utils.parsedate_tz(response_headers['date']))
+ now = time.time()
+ current_age = max(0, now - date)
+ if cc_response.has_key('max-age'):
+ try:
+ freshness_lifetime = int(cc_response['max-age'])
+ except ValueError:
+ freshness_lifetime = 0
+ elif response_headers.has_key('expires'):
+ expires = email.Utils.parsedate_tz(response_headers['expires'])
+ if None == expires:
+ freshness_lifetime = 0
+ else:
+ freshness_lifetime = max(0, calendar.timegm(expires) - date)
+ else:
+ freshness_lifetime = 0
+ if cc.has_key('max-age'):
+ try:
+ freshness_lifetime = int(cc['max-age'])
+ except ValueError:
+ freshness_lifetime = 0
+ if cc.has_key('min-fresh'):
+ try:
+ min_fresh = int(cc['min-fresh'])
+ except ValueError:
+ min_fresh = 0
+ current_age += min_fresh 
+ if freshness_lifetime > current_age:
+ retval = "FRESH"
+ return retval 
+
+def _decompressContent(response, new_content):
+ content = new_content
+ try:
+ encoding = response.get('content-encoding', None)
+ if encoding in ['gzip', 'deflate']:
+ if encoding == 'gzip':
+ content = gzip.GzipFile(fileobj=StringIO.StringIO(new_content)).read()
+ if encoding == 'deflate':
+ content = zlib.decompress(content)
+ response['content-length'] = str(len(content))
+ del response['content-encoding']
+ except IOError:
+ content = ""
+ raise FailedToDecompressContent(_("Content purported to be compressed with %s but failed to decompress.") % response.get('content-encoding'), response, content)
+ return content
+
+def _updateCache(request_headers, response_headers, content, cache, cachekey):
+ if cachekey:
+ cc = _parse_cache_control(request_headers)
+ cc_response = _parse_cache_control(response_headers)
+ if cc.has_key('no-store') or cc_response.has_key('no-store'):
+ cache.delete(cachekey)
+ else:
+ info = email.Message.Message()
+ for key, value in response_headers.iteritems():
+ if key not in ['status','content-encoding','transfer-encoding']:
+ info[key] = value
+
+ status = response_headers.status
+ if status == 304:
+ status = 200
+
+ status_header = 'status: %d\r\n' % response_headers.status
+
+ header_str = info.as_string()
+
+ header_str = re.sub("\r(?!\n)|(?<!\r)\n", "\r\n", header_str)
+ text = "".join([status_header, header_str, content])
+
+ cache.set(cachekey, text)
+
+def _cnonce():
+ dig = md5.new("%s:%s" % (time.ctime(), ["0123456789"[random.randrange(0, 9)] for i in range(20)])).hexdigest()
+ return dig[:16]
+
+def _wsse_username_token(cnonce, iso_now, password):
+ return base64.encodestring(sha.new("%s%s%s" % (cnonce, iso_now, password)).digest()).strip()
+
+
+# For credentials we need two things, first 
+# a pool of credential to try (not necesarily tied to BAsic, Digest, etc.)
+# Then we also need a list of URIs that have already demanded authentication
+# That list is tricky since sub-URIs can take the same auth, or the 
+# auth scheme may change as you descend the tree.
+# So we also need each Auth instance to be able to tell us
+# how close to the 'top' it is.
+
+class Authentication(object):
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ (scheme, authority, path, query, fragment) = parse_uri(request_uri)
+ self.path = path
+ self.host = host
+ self.credentials = credentials
+ self.http = http
+
+ def depth(self, request_uri):
+ (scheme, authority, path, query, fragment) = parse_uri(request_uri)
+ return request_uri[len(self.path):].count("/")
+
+ def inscope(self, host, request_uri):
+ # XXX Should we normalize the request_uri?
+ (scheme, authority, path, query, fragment) = parse_uri(request_uri)
+ return (host == self.host) and path.startswith(self.path)
+
+ def request(self, method, request_uri, headers, content):
 + """Modify the request headers to add the appropriate
 + Authorization header. Override this in sub-classes."""
+ pass
+
+ def response(self, response, content):
 + """Gives us a chance to update with new nonces
 + or such returned from the last authorized response.
 + Override this in sub-classes if necessary.
 +
 + Return TRUE if the request is to be retried, for 
 + example Digest may return stale=true.
+ """
+ return False
+
+
+
+class BasicAuthentication(Authentication):
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ Authentication.__init__(self, credentials, host, request_uri, headers, response, content, http)
+
+ def request(self, method, request_uri, headers, content):
+ """Modify the request headers to add the appropriate
+ Authorization header."""
+ headers['authorization'] = 'Basic ' + base64.encodestring("%s:%s" % self.credentials).strip() 
+
+
+class DigestAuthentication(Authentication):
+ """Only do qop='auth' and MD5, since that 
+ is all Apache currently implements"""
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ Authentication.__init__(self, credentials, host, request_uri, headers, response, content, http)
+ challenge = _parse_www_authenticate(response, 'www-authenticate')
+ self.challenge = challenge['digest']
+ qop = self.challenge.get('qop')
+ self.challenge['qop'] = ('auth' in [x.strip() for x in qop.split()]) and 'auth' or None
+ if self.challenge['qop'] is None:
+ raise UnimplementedDigestAuthOptionError( _("Unsupported value for qop: %s." % qop))
+ self.challenge['algorithm'] = self.challenge.get('algorithm', 'MD5')
+ if self.challenge['algorithm'] != 'MD5':
+ raise UnimplementedDigestAuthOptionError( _("Unsupported value for algorithm: %s." % self.challenge['algorithm']))
+ self.A1 = "".join([self.credentials[0], ":", self.challenge['realm'], ":", self.credentials[1]]) 
+ self.challenge['nc'] = 1
+
+ def request(self, method, request_uri, headers, content, cnonce = None):
+ """Modify the request headers"""
+ H = lambda x: md5.new(x).hexdigest()
+ KD = lambda s, d: H("%s:%s" % (s, d))
+ A2 = "".join([method, ":", request_uri])
+ self.challenge['cnonce'] = cnonce or _cnonce() 
+ request_digest = '"%s"' % KD(H(self.A1), "%s:%s:%s:%s:%s" % (self.challenge['nonce'], 
+ '%08x' % self.challenge['nc'], 
+ self.challenge['cnonce'], 
+ self.challenge['qop'], H(A2)
+ )) 
+ headers['Authorization'] = 'Digest username="%s", realm="%s", nonce="%s", uri="%s", algorithm=%s, response=%s, qop=%s, nc=%08x, cnonce="%s"' % (
+ self.credentials[0], 
+ self.challenge['realm'],
+ self.challenge['nonce'],
+ request_uri, 
+ self.challenge['algorithm'],
+ request_digest,
+ self.challenge['qop'],
+ self.challenge['nc'],
+ self.challenge['cnonce'],
+ )
+ self.challenge['nc'] += 1
+
+ def response(self, response, content):
+ if not response.has_key('authentication-info'):
+ challenge = _parse_www_authenticate(response, 'www-authenticate').get('digest', {})
+ if 'true' == challenge.get('stale'):
+ self.challenge['nonce'] = challenge['nonce']
+ self.challenge['nc'] = 1 
+ return True
+ else:
+ updated_challenge = _parse_www_authenticate(response, 'authentication-info').get('digest', {})
+
+ if updated_challenge.has_key('nextnonce'):
+ self.challenge['nonce'] = updated_challenge['nextnonce']
+ self.challenge['nc'] = 1 
+ return False
+
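`DigestAuthentication.request` above implements the RFC 2617 qop="auth" request-digest: KD(H(A1), nonce:nc:cnonce:qop:H(A2)). A standalone Python 3 sketch of that computation with `hashlib` (all input values here are illustrative, not taken from a real challenge):

```python
# Standalone sketch of the RFC 2617 qop="auth" digest computed by
# DigestAuthentication.request above.
import hashlib

def digest_response(username, realm, password, method, uri,
                    nonce, nc, cnonce, qop="auth"):
    H = lambda s: hashlib.md5(s.encode("utf-8")).hexdigest()
    KD = lambda secret, data: H("%s:%s" % (secret, data))
    A1 = "%s:%s:%s" % (username, realm, password)          # user:realm:password
    A2 = "%s:%s" % (method, uri)                           # method:request-uri
    # nc is zero-padded to 8 hex digits, matching '%08x' in the code above.
    return KD(H(A1), "%s:%08x:%s:%s:%s" % (nonce, nc, cnonce, qop, H(A2)))

resp = digest_response("Mufasa", "testrealm@host.com", "Circle Of Life",
                       "GET", "/dir/index.html",
                       "dcd98b7102dd2f0e8b11d0f600bfb0c093", 1, "0a4f113b")
```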
+
+class HmacDigestAuthentication(Authentication):
+ """Adapted from Robert Sayre's code and DigestAuthentication above."""
+ __author__ = "Thomas Broyer (t.b...@lt...)"
+
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ Authentication.__init__(self, credentials, host, request_uri, headers, response, content, http)
+ challenge = _parse_www_authenticate(response, 'www-authenticate')
+ self.challenge = challenge['hmacdigest']
+ # TODO: self.challenge['domain']
+ self.challenge['reason'] = self.challenge.get('reason', 'unauthorized')
+ if self.challenge['reason'] not in ['unauthorized', 'integrity']:
+ self.challenge['reason'] = 'unauthorized'
+ self.challenge['salt'] = self.challenge.get('salt', '')
+ if not self.challenge.get('snonce'):
+ raise UnimplementedHmacDigestAuthOptionError( _("The challenge doesn't contain a server nonce, or this one is empty."))
+ self.challenge['algorithm'] = self.challenge.get('algorithm', 'HMAC-SHA-1')
+ if self.challenge['algorithm'] not in ['HMAC-SHA-1', 'HMAC-MD5']:
+ raise UnimplementedHmacDigestAuthOptionError( _("Unsupported value for algorithm: %s." % self.challenge['algorithm']))
+ self.challenge['pw-algorithm'] = self.challenge.get('pw-algorithm', 'SHA-1')
+ if self.challenge['pw-algorithm'] not in ['SHA-1', 'MD5']:
+ raise UnimplementedHmacDigestAuthOptionError( _("Unsupported value for pw-algorithm: %s." % self.challenge['pw-algorithm']))
+ if self.challenge['algorithm'] == 'HMAC-MD5':
+ self.hashmod = md5
+ else:
+ self.hashmod = sha
+ if self.challenge['pw-algorithm'] == 'MD5':
+ self.pwhashmod = md5
+ else:
+ self.pwhashmod = sha
+ self.key = "".join([self.credentials[0], ":",
+ self.pwhashmod.new("".join([self.credentials[1], self.challenge['salt']])).hexdigest().lower(),
+ ":", self.challenge['realm']
+ ])
+ self.key = self.pwhashmod.new(self.key).hexdigest().lower()
+
+ def request(self, method, request_uri, headers, content):
+ """Modify the request headers"""
+ keys = _get_end2end_headers(headers)
+ keylist = "".join(["%s " % k for k in keys])
+ headers_val = "".join([headers[k] for k in keys])
+ created = time.strftime('%Y-%m-%dT%H:%M:%SZ',time.gmtime())
+ cnonce = _cnonce()
+ request_digest = "%s:%s:%s:%s:%s" % (method, request_uri, cnonce, self.challenge['snonce'], headers_val)
+ request_digest = hmac.new(self.key, request_digest, self.hashmod).hexdigest().lower()
+ headers['Authorization'] = 'HMACDigest username="%s", realm="%s", snonce="%s", cnonce="%s", uri="%s", created="%s", response="%s", headers="%s"' % (
+ self.credentials[0], 
+ self.challenge['realm'],
+ self.challenge['snonce'],
+ cnonce,
+ request_uri, 
+ created,
+ request_digest,
+ keylist,
+ )
+
+ def response(self, response, content):
+ challenge = _parse_www_authenticate(response, 'www-authenticate').get('hmacdigest', {})
+ if challenge.get('reason') in ['integrity', 'stale']:
+ return True
+ return False
+
+
+class WsseAuthentication(Authentication):
+ """This is thinly tested and should not be relied upon.
+ At this time there isn't any third party server to test against.
+ Blogger and TypePad implemented this algorithm at one point
+ but Blogger has since switched to Basic over HTTPS and 
+ TypePad has implemented it wrong, by never issuing a 401
+ challenge but instead requiring your client to telepathically know that
+ their endpoint is expecting WSSE profile="UsernameToken"."""
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ Authentication.__init__(self, credentials, host, request_uri, headers, response, content, http)
+
+ def request(self, method, request_uri, headers, content):
+ """Modify the request headers to add the appropriate
+ Authorization header."""
+ headers['Authorization'] = 'WSSE profile="UsernameToken"'
+ iso_now = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
+ cnonce = _cnonce()
+ password_digest = _wsse_username_token(cnonce, iso_now, self.credentials[1])
+ headers['X-WSSE'] = 'UsernameToken Username="%s", PasswordDigest="%s", Nonce="%s", Created="%s"' % (
+ self.credentials[0],
+ password_digest,
+ cnonce,
+ iso_now)
+
+class GoogleLoginAuthentication(Authentication):
+ def __init__(self, credentials, host, request_uri, headers, response, content, http):
+ from urllib import urlencode
+ Authentication.__init__(self, credentials, host, request_uri, headers, response, content, http)
+ challenge = _parse_www_authenticate(response, 'www-authenticate')
+ service = challenge['googlelogin'].get('service', 'xapi')
 + # Blogger actually returns the service in the challenge.
 + # For the rest we guess based on the URI.
+ if service == 'xapi' and request_uri.find("calendar") > 0:
+ service = "cl"
+ # No point in guessing Base or Spreadsheet
+ #elif request_uri.find("spreadsheets") > 0:
+ # service = "wise"
+
+ auth = dict(Email=credentials[0], Passwd=credentials[1], service=service, source=headers['user-agent'])
+ resp, content = self.http.request("https://www.google.com/accounts/ClientLogin", method="POST", body=urlencode(auth), headers={'Content-Type': 'application/x-www-form-urlencoded'})
+ lines = content.split('\n')
+ d = dict([tuple(line.split("=", 1)) for line in lines if line])
+ if resp.status == 403:
+ self.Auth = ""
+ else:
+ self.Auth = d['Auth']
+
+ def request(self, method, request_uri, headers, content):
+ """Modify the request headers to add the appropriate
+ Authorization header."""
+ headers['authorization'] = 'GoogleLogin Auth=' + self.Auth 
+
+
+AUTH_SCHEME_CLASSES = {
+ "basic": BasicAuthentication,
+ "wsse": WsseAuthentication,
+ "digest": DigestAuthentication,
+ "hmacdigest": HmacDigestAuthentication,
+ "googlelogin": GoogleLoginAuthentication
+}
+
+AUTH_SCHEME_ORDER = ["hmacdigest", "googlelogin", "digest", "wsse", "basic"]
+
 +def _md5(s):
 + return md5.new(s).hexdigest()
+
+class FileCache(object):
+ """Uses a local directory as a store for cached files.
+ Not really safe to use if multiple threads or processes are going to 
+ be running on the same cache.
+ """
+ def __init__(self, cache, safe=safename): # use safe=lambda x: md5.new(x).hexdigest() for the old behavior
+ self.cache = cache
+ self.safe = safe
+ if not os.path.exists(cache): 
+ os.makedirs(self.cache)
+
+ def get(self, key):
+ retval = None
+ cacheFullPath = os.path.join(self.cache, self.safe(key))
+ try:
+ f = file(cacheFullPath, "r")
+ retval = f.read()
+ f.close()
+ except IOError:
+ pass
+ return retval
+
+ def set(self, key, value):
+ cacheFullPath = os.path.join(self.cache, self.safe(key))
+ f = file(cacheFullPath, "w")
+ f.write(value)
+ f.close()
+
+ def delete(self, key):
+ cacheFullPath = os.path.join(self.cache, self.safe(key))
+ if os.path.exists(cacheFullPath):
+ os.remove(cacheFullPath)
+
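`FileCache` above maps each key through `safe()` to a filename inside the cache directory. A self-contained Python 3 sketch of the same idea; `safename` is defined elsewhere in httplib2, so this stand-in uses the md5-hexdigest mapping the constructor comment calls "the old behavior":

```python
# Python 3 sketch of FileCache: one file per key, named via a safe()
# mapping (md5 hexdigest stand-in here, since safename lives elsewhere).
import hashlib
import os
import tempfile

class SimpleFileCache:
    def __init__(self, cache_dir,
                 safe=lambda k: hashlib.md5(k.encode("utf-8")).hexdigest()):
        self.cache = cache_dir
        self.safe = safe
        os.makedirs(self.cache, exist_ok=True)

    def _path(self, key):
        return os.path.join(self.cache, self.safe(key))

    def get(self, key):
        try:
            with open(self._path(key)) as f:
                return f.read()
        except IOError:            # missing entry -> cache miss
            return None

    def set(self, key, value):
        with open(self._path(key), "w") as f:
            f.write(value)

    def delete(self, key):
        if os.path.exists(self._path(key)):
            os.remove(self._path(key))

cache = SimpleFileCache(tempfile.mkdtemp())
cache.set("http://example.org/", "cached body")
```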
+class Credentials(object):
+ def __init__(self):
+ self.credentials = []
+
+ def add(self, name, password, domain=""):
+ self.credentials.append((domain.lower(), name, password))
+
+ def clear(self):
+ self.credentials = []
+
+ def iter(self, domain):
+ for (cdomain, name, password) in self.credentials:
+ if cdomain == "" or domain == cdomain:
+ yield (name, password) 
+
+class KeyCerts(Credentials):
+ """Identical to Credentials except that
+ name/password are mapped to key/cert."""
+ pass
+
+
+class ProxyInfo(object):
+ """Collect information required to use a proxy."""
+ def __init__(self, proxy_type, proxy_host, proxy_port, proxy_rdns=None, proxy_user=None, proxy_pass=None):
+ """The parameter proxy_type must be set to one of socks.PROXY_TYPE_XXX
+ constants. For example:
+
+p = ProxyInfo(proxy_type=socks.PROXY_TYPE_HTTP, proxy_host='localhost', proxy_port=8000)
+ """
+ self.proxy_type, self.proxy_host, self.proxy_port, self.proxy_rdns, self.proxy_user, self.proxy_pass = proxy_type, proxy_host, proxy_port, proxy_rdns, proxy_user, proxy_pass
+
+ def astuple(self):
+ return (self.proxy_type, self.proxy_host, self.proxy_port, self.proxy_rdns,
+ self.proxy_user, self.proxy_pass)
+
+ def isgood(self):
+ return socks and (self.proxy_host != None) and (self.proxy_port != None)
+
+
+class HTTPConnectionWithTimeout(httplib.HTTPConnection):
+ """HTTPConnection subclass that supports timeouts"""
+
+ def __init__(self, host, port=None, strict=None, timeout=None, proxy_info=None):
+ httplib.HTTPConnection.__init__(self, host, port, strict)
+ self.timeout = timeout
+ self.proxy_info = proxy_info
+
+ def connect(self):
+ """Connect to the host and port specified in __init__."""
+ # Mostly verbatim from httplib.py.
+ msg = "getaddrinfo returns an empty list"
+ for res in socket.getaddrinfo(self.host, self.port, 0,
+ socket.SOCK_STREAM):
+ af, socktype, proto, canonname, sa = res
+ try:
+ if self.proxy_info and self.proxy_info.isgood():
+ self.sock = socks.socksocket(af, socktype, proto)
+ self.sock.setproxy(*self.proxy_info.astuple())
+ else:
+ self.sock = socket.socket(af, socktype, proto)
+ # Different from httplib: support timeouts.
+ if self.timeout is not None:
+ self.sock.settimeout(self.timeout)
+ # End of difference from httplib.
+ if self.debuglevel > 0:
+ print "connect: (%s, %s)" % (self.host, self.port)
+ self.sock.connect(sa)
+ except socket.error, msg:
+ if self.debuglevel > 0:
+ print 'connect fail:', (self.host, self.port)
+ if self.sock:
+ self.sock.close()
+ self.sock = None
+ continue
+ break
+ if not self.sock:
+ raise socket.error, msg
+
+class HTTPSConnectionWithTimeout(httplib.HTTPSConnection):
+ "This class allows communication via SSL."
+
+ def __init__(self, host, port=None, key_file=None, cert_file=None,
+ strict=None, timeout=None, proxy_info=None):
+ self.timeout = timeout
+ self.proxy_info = proxy_info
+ httplib.HTTPSConnection.__init__(self, host, port=port, key_file=key_file,
+ cert_file=cert_file, strict=strict)
+
+ def connect(self):
+ "Connect to a host on a given (SSL) port."
+
 + if self.proxy_info and self.proxy_info.isgood():
 + sock = socks.socksocket(socket.AF_INET, socket.SOCK_STREAM)
 + sock.setproxy(*self.proxy_info.astuple())
 + else:
 + sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ if self.timeout is not None:
+ sock.settimeout(self.timeout)
+ sock.connect((self.host, self.port))
+ ssl = socket.ssl(sock, self.key_file, self.cert_file)
+ self.sock = httplib.FakeSocket(sock, ssl)
+
+
+
+class Http(object):
+ """An HTTP client that handles:
+- all methods
+- caching
+- ETags
+- compression,
+- HTTPS
+- Basic
+- Digest
+- WSSE
+
+and more.
+ """
+ def __init__(self, cache=None, timeout=None, proxy_info=None):
+ """The value of proxy_info is a ProxyInfo instance.
+
+If 'cache' is a string then it is used as a directory name
+for a disk cache. Otherwise it must be an object that supports
+the same interface as FileCache."""
+ self.proxy_info = proxy_info
+ # Map domain name to an httplib connection
+ self.connections = {}
+ # The location of the cache, for now a directory
+ # where cached responses are held.
+ if cache and isinstance(cache, str):
+ self.cache = FileCache(cache)
+ else:
+ self.cache = cache
+
+ # Name/password
+ self.credentials = Credentials()
+
+ # Key/cert
+ self.certificates = KeyCerts()
+
+ # authorization objects
+ self.authorizations = []
+
+ # If set to False then no redirects are followed, even safe ones.
+ self.follow_redirects = True
+
+ # If 'follow_redirects' is True, and this is set to True then
 + # all redirects are followed, including unsafe ones.
+ self.follow_all_redirects = False
+
+ self.ignore_etag = False
+
+ self.force_exception_to_status_code = False 
+
+ self.timeout = timeout
+
+ def _auth_from_challenge(self, host, request_uri, headers, response, content):
+ """A generator that creates Authorization objects
+ that can be applied to requests.
+ """
+ challenges = _parse_www_authenticate(response, 'www-authenticate')
+ for cred in self.credentials.iter(host):
+ for scheme in AUTH_SCHEME_ORDER:
+ if challenges.has_key(scheme):
+ yield AUTH_SCHEME_CLASSES[scheme](cred, host, request_uri, headers, response, content, self)
+
+ def add_credentials(self, name, password, domain=""):
+ """Add a name and password that will be used
+ any time a request requires authentication."""
+ self.credentials.add(name, password, domain)
+
+ def add_certificate(self, key, cert, domain):
+ """Add a key and cert that will be used
+ any time a request requires authentication."""
+ self.certificates.add(key, cert, domain)
+
+ def clear_credentials(self):
+ """Remove all the names and passwords
+ that are used for authentication"""
+ self.credentials.clear()
+ self.authorizations = []
+
+ def _conn_request(self, conn, request_uri, method, body, headers):
+ for i in range(2):
+ try:
+ conn.request(method, request_uri, body, headers)
+ response = conn.getresponse()
+ except socket.gaierror:
+ conn.close()
+ raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
+ except httplib.HTTPException, e:
+ if i == 0:
+ conn.close()
+ conn.connect()
+ continue
+ else:
+ raise
+ else:
+ content = response.read()
+ response = Response(response)
+ if method != "HEAD":
+ content = _decompressContent(response, content)
+
 + break
+ return (response, content)
+
+
+ def _request(self, conn, host, absolute_uri, request_uri, method, body, headers, redirections, cachekey):
+ """Do the actual request using the connection object
+ and also follow one level of redirects if necessary"""
+
+ auths = [(auth.depth(request_uri), auth) for auth in self.authorizations if auth.inscope(host, request_uri)]
+ auth = auths and sorted(auths)[0][1] or None
+ if auth: 
+ auth.request(method, request_uri, headers, body)
+
+ (response, content) = self._conn_request(conn, request_uri, method, body, headers)
+
+ if auth: 
+ if auth.response(response, body):
+ auth.request(method, request_uri, headers, body)
+ (response, content) = self._conn_request(conn, request_uri, method, body, headers )
+ response._stale_digest = 1
+
+ if response.status == 401:
+ for authorization in self._auth_from_challenge(host, request_uri, headers, response, content):
+ authorization.request(method, request_uri, headers, body) 
+ (response, content) = self._conn_request(conn, request_uri, method, body, headers, )
+ if response.status != 401:
+ self.authorizations.append(authorization)
+ authorization.response(response, body)
+ break
+
+ if (self.follow_all_redirects or (method in ["GET", "HEAD"]) or response.status == 303):
+ if self.follow_redirects and response.status in [300, 301, 302, 303, 307]:
+ # Pick out the location header and basically start from the beginning
+ # remembering first to strip the ETag header and decrement our 'depth'
+ if redirections:
+ if not response.has_key('location') and response.status != 300:
+ raise RedirectMissingLocation( _("Redirected but the response is missing a Location: header."), response, content)
+ # Fix-up relative redirects (which violate an RFC 2616 MUST)
+ if response.has_key('location'):
+ location = response['location']
+ (scheme, authority, path, query, fragment) = parse_uri(location)
+ if authority == None:
+ response['location'] = urlparse.urljoin(absolute_uri, location)
+ if response.status == 301 and method in ["GET", "HEAD"]:
+ response['-x-permanent-redirect-url'] = response['location']
+ if not response.has_key('content-location'):
+ response['content-location'] = absolute_uri 
+ _updateCache(headers, response, content, self.cache, cachekey)
+ if headers.has_key('if-none-match'):
+ del headers['if-none-match']
+ if headers.has_key('if-modified-since'):
+ del headers['if-modified-since']
+ if response.has_key('location'):
+ location = response['location']
+ old_response = copy.deepcopy(response)
+ if not old_response.has_key('content-location'):
+ old_response['content-location'] = absolute_uri 
+ redirect_method = ((response.status == 303) and (method not in ["GET", "HEAD"])) and "GET" or method
+ (response, content) = self.request(location, redirect_method, body=body, headers = headers, redirections = redirections - 1)
+ response.previous = old_response
+ else:
 + raise RedirectLimit( _("Redirected more times than redirection_limit allows."), response, content)
+ elif response.status in [200, 203] and method == "GET":
+ # Don't cache 206's since we aren't going to handle byte range requests
+ if not response.has_key('content-location'):
+ response['content-location'] = absolute_uri 
+ _updateCache(headers, response, content, self.cache, cachekey)
+
+ return (response, content)
+
+
+# Need to catch and rebrand some exceptions
+# Then need to optionally turn all exceptions into status codes
+# including all socket.* and httplib.* exceptions.
+
+
+ def request(self, uri, method="GET", body=None, headers=None, redirections=DEFAULT_MAX_REDIRECTS, connection_type=None):
+ """ Performs a single HTTP request.
+The 'uri' is the URI of the HTTP resource and can begin 
+with either 'http' or 'https'. The value of 'uri' must be an absolute URI.
+
+The 'method' is the HTTP method to perform, such as GET, POST, DELETE, etc. 
+There is no restriction on the methods allowed.
+
+The 'body' is the entity body to be sent with the request. It is a string
+object.
+
+Any extra headers that are to be sent with the request should be provided in the
+'headers' dictionary.
+
 +The maximum number of redirects to follow before raising an 
 +exception is 'redirections'. The default is 5.
+
+The return value is a tuple of (response, content), the first 
 +being an instance of the 'Response' class, the second being 
+a string that contains the response entity body.
+ """
+ try:
+ if headers is None:
+ headers = {}
+ else:
+ headers = _normalize_headers(headers)
+
+ if not headers.has_key('user-agent'):
+ headers['user-agent'] = "Python-httplib2/%s" % __version__
+
+ uri = iri2uri(uri)
+
+ (scheme, authority, request_uri, defrag_uri) = urlnorm(uri)
+
+ conn_key = scheme+":"+authority
+ if conn_key in self.connections:
+ conn = self.connections[conn_key]
+ else:
+ if not connection_type:
+ connection_type = (scheme == 'https') and HTTPSConnectionWithTimeout or HTTPConnectionWithTimeout
+ certs = list(self.certificates.iter(authority))
+ if scheme == 'https' and certs:
+ conn = self.connections[conn_key] = connection_type(authority, key_file=certs[0][0],
+ cert_file=certs[0][1], timeout=self.timeout, proxy_info=self.proxy_info)
+ else:
+ conn = self.connections[conn_key] = connection_type(authority, timeout=self.timeout, proxy_info=self.proxy_info)
+ conn.set_debuglevel(debuglevel)
+
+ if method in ["GET", "HEAD"] and 'range' not in headers:
+ headers['accept-encoding'] = 'compress, gzip'
+
+ info = email.Message.Message()
+ cached_value = None
+ if self.cache:
+ cachekey = defrag_uri
+ cached_value = self.cache.get(cachekey)
+ if cached_value:
+ info = email.message_from_string(cached_value)
+ try:
+ content = cached_value.split('\r\n\r\n', 1)[1]
+ except IndexError:
+ self.cache.delete(cachekey)
+ cachekey = None
+ cached_value = None
+ else:
+ cachekey = None
+
+ if method in ["PUT"] and self.cache and info.has_key('etag') and not self.ignore_etag and 'if-match' not in headers:
+ # http://www.w3.org/1999/04/Editing/
+ headers['if-match'] = info['etag']
+
+ if method not in ["GET", "HEAD"] and self.cache and cachekey:
+ # RFC 2616 Section 13.10
+ self.cache.delete(cachekey)
+
+ if cached_value and method in ["GET", "HEAD"] and self.cache and 'range' not in headers:
+ if info.has_key('-x-permanent-redirect-url'):
+ # Should cached permanent redirects be counted in our redirection count? For now, yes.
+ (response, new_content) = self.request(info['-x-permanent-redirect-url'], "GET", headers = headers, redirections = redirections - 1)
+ response.previous = Response(info)
+ response.previous.fromcache = True
+ else:
+ # Determine our course of action:
+ # Is the cached entry fresh or stale?
+ # Has the client requested a non-cached response?
+ # 
+ # There seems to be three possible answers: 
+ # 1. [FRESH] Return the cache entry w/o doing a GET
+ # 2. [STALE] Do the GET (but add in cache validators if available)
+ # 3. [TRANSPARENT] Do a GET w/o any cache validators (Cache-Control: no-cache) on the request
+ entry_disposition = _entry_disposition(info, headers) 
+ 
+ if entry_disposition == "FRESH":
+ if not cached_value:
+ info['status'] = '504'
+ content = ""
+ response = Response(info)
+ if cached_value:
+ response.fromcache = True
+ return (response, content)
+
+ if entry_disposition == "STALE":
+ if info.has_key('etag') and not self.ignore_etag and not 'if-none-match' in headers:
+ headers['if-none-match'] = info['etag']
+ if info.has_key('last-modified') and not 'last-modified' in headers:
+ headers['if-modified-since'] = info['last-modified']
+ elif entry_disposition == "TRANSPARENT":
+ pass
+
+ (response, new_content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
+
+ if response.status == 304 and method == "GET":
+ # Rewrite the cache entry with the new end-to-end headers
+ # Take all headers that are in response 
+ # and overwrite their values in info.
+ # unless they are hop-by-hop, or are listed in the connection header.
+
+ for key in _get_end2end_headers(response):
+ info[key] = response[key]
+ merged_response = Response(info)
+ if hasattr(response, "_stale_digest"):
+ merged_response._stale_digest = response._stale_digest
+ _updateCache(headers, merged_response, content, self.cache, cachekey)
+ response = merged_response
+ response.status = 200
+ response.fromcache = True 
+
+ elif response.status == 200:
+ content = new_content
+ else:
+ self.cache.delete(cachekey)
+ content = new_content 
+ else: 
+ (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
+ except Exception, e:
+ if self.force_exception_to_status_code:
+ if isinstance(e, HttpLib2ErrorWithResponse):
+ response = e.response
+ content = e.content
+ response.status = 500
+ response.reason = str(e) 
+ elif isinstance(e, socket.timeout):
+ content = "Request Timeout"
+ response = Response( {
+ "content-type": "text/plain",
+ "status": "408",
+ "content-length": len(content)
+ })
+ response.reason = "Request Timeout"
+ else:
+ content = str(e) 
+ response = Response( {
+ "content-type": "text/plain",
+ "status": "400",
+ "content-length": len(content)
+ })
+ response.reason = "Bad Request" 
+ else:
+ raise
+
+ 
+ return (response, content)
+
+ 
+
+class Response(dict):
+ """An object more like email.Message than httplib.HTTPResponse."""
+ 
+ """Is this response from our local cache"""
+ fromcache = False
+
+ """HTTP protocol version used by server. 10 for HTTP/1.0, 11 for HTTP/1.1. """
+ version = 11
+
+ "Status code returned by server. "
+ status = 200
+
+ """Reason phrase returned by server."""
+ reason = "Ok"
+
+ previous = None
+
+ def __init__(self, info):
+ # info is either an email.Message or 
+ # an httplib.HTTPResponse object.
+ if isinstance(info, httplib.HTTPResponse):
+ for key, value in info.getheaders(): 
+ self[key] = value 
+ self.status = info.status
+ self['status'] = str(self.status)
+ self.reason = info.reason
+ self.version = info.version
+ elif isinstance(info, email.Message.Message):
+ for key, value in info.items(): 
+ self[key] = value 
+ self.status = int(self['status'])
+ else:
+ for key, value in info.iteritems(): 
+ self[key] = value 
+ self.status = int(self.get('status', self.status))
+
+
+ def __getattr__(self, name):
+ if name == 'dict':
+ return self 
+ else: 
+ raise AttributeError, name 
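The `Response` class above is a header dict that also exposes `.status` as an integer, whether it was built from an `httplib.HTTPResponse`, a parsed cache entry, or a plain dict. A minimal Python 3 sketch of just the dict path (the real class also handles the httplib and email.Message cases):

```python
# Minimal Python 3 sketch of the dict-backed path of Response above:
# lowercase headers as dict entries, with .status coerced to int.
class SimpleResponse(dict):
    status = 200
    reason = "Ok"
    version = 11
    fromcache = False

    def __init__(self, info):
        super().__init__(info)
        self.status = int(self.get("status", self.status))

r = SimpleResponse({"status": "304", "content-type": "text/plain"})
```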
Added: trunk/toolkits/basemap/lib/httplib2/iri2uri.py
===================================================================
--- trunk/toolkits/basemap/lib/httplib2/iri2uri.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/httplib2/iri2uri.py	2007年11月26日 18:58:41 UTC (rev 4451)
@@ -0,0 +1,110 @@
+"""
+iri2uri
+
+Converts an IRI to a URI.
+
+"""
+__author__ = "Joe Gregorio (jo...@bi...)"
+__copyright__ = "Copyright 2006, Joe Gregorio"
+__contributors__ = []
+__version__ = "1.0.0"
+__license__ = "MIT"
+__history__ = """
+"""
+
+import urlparse
+
+
+# Convert an IRI to a URI following the rules in RFC 3987
+# 
 +# The characters we need to encode and escape are defined in the spec:
+#
+# iprivate = %xE000-F8FF / %xF0000-FFFFD / %x100000-10FFFD
+# ucschar = %xA0-D7FF / %xF900-FDCF / %xFDF0-FFEF
+# / %x10000-1FFFD / %x20000-2FFFD / %x30000-3FFFD
+# / %x40000-4FFFD / %x50000-5FFFD / %x60000-6FFFD
+# / %x70000-7FFFD / %x80000-8FFFD / %x90000-9FFFD
+# / %xA0000-AFFFD / %xB0000-BFFFD / %xC0000-CFFFD
+# / %xD0000-DFFFD / %xE1000-EFFFD
+
+escape_range = [
+ (0xA0, 0xD7FF ),
+ (0xE000, 0xF8FF ),
+ (0xF900, 0xFDCF ),
+ (0xFDF0, 0xFFEF),
+ (0x10000, 0x1FFFD ),
+ (0x20000, 0x2FFFD ),
+ (0x30000, 0x3FFFD),
+ (0x40000, 0x4FFFD ),
+ (0x50000, 0x5FFFD ),
+ (0x60000, 0x6FFFD),
+ (0x70000, 0x7FFFD ),
+ (0x80000, 0x8FFFD ),
+ (0x90000, 0x9FFFD),
+ (0xA0000, 0xAFFFD ),
+ (0xB0000, 0xBFFFD ),
+ (0xC0000, 0xCFFFD),
+ (0xD0000, 0xDFFFD ),
+ (0xE1000, 0xEFFFD),
+ (0xF0000, 0xFFFFD ),
+ (0x100000, 0x10FFFD)
+]
+ 
+def encode(c):
+ retval = c
+ i = ord(c)
+ for low, high in escape_range:
+ if i < low:
+ break
+ if i >= low and i <= high:
+ retval = "".join(["%%%2X" % ord(o) for o in c.encode('utf-8')])
+ break
+ return retval
+
+
+def iri2uri(uri):
 + """Convert an IRI to a URI. Note that IRIs must be 
 + passed in as unicode strings. That is, do not utf-8 encode
+ the IRI before passing it into the function.""" 
+ if isinstance(uri ,unicode):
+ (scheme, authority, path, query, fragment) = urlparse.urlsplit(uri)
+ authority = authority.encode('idna')
+ # For each character in 'ucschar' or 'iprivate'
+ # 1. encode as utf-8
+ # 2. then %-encode each octet of that utf-8 
+ uri = urlparse.urlunsplit((scheme, authority, path, query, fragment))
+ uri = "".join([encode(c) for c in uri])
+ return uri
+ 
+if __name__ == "__main__":
+ import unittest
+
+ class Test(unittest.TestCase):
+
+ def test_uris(self):
+ """Test that URIs are invariant under the transformation."""
+ invariant = [ 
+ u"ftp://ftp.is.co.za/rfc/rfc1808.txt",
+ u"http://www.ietf.org/rfc/rfc2396.txt",
+ u"ldap://[2001:db8::7]/c=GB?objectClass?one",
+ u"mailto:Joh...@ex...",
+ u"news:comp.infosystems.www.servers.unix",
+ u"tel:+1-816-555-1212",
+ u"telnet://192.0.2.16:80/",
+ u"urn:oasis:names:specification:docbook:dtd:xml:4.1.2" ]
+ for uri in invariant:
+ self.assertEqual(uri, iri2uri(uri))
+ 
+ def test_iri(self):
+ """ Test that the right type of escaping is done for each part of the URI."""
+ self.assertEqual("http://xn--o3h.com/%E2%98%84", iri2uri(u"http://\N{COMET}.com/\N{COMET}"))
+ self.assertEqual("http://bitworking.org/?fred=%E2%98%84", iri2uri(u"http://bitworking.org/?fred=\N{COMET}"))
+ self.assertEqual("http://bitworking.org/#%E2%98%84", iri2uri(u"http://bitworking.org/#\N{COMET}"))
+ self.assertEqual("#%E2%98%84", iri2uri(u"#\N{COMET}"))
+ self.assertEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}"))
+ self.assertEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}")))
+ self.assertNotEqual("/fred?bar=%E2%98%9A#%E2%98%84", iri2uri(u"/fred?bar=\N{BLACK LEFT POINTING INDEX}#\N{COMET}".encode('utf-8')))
+
+ unittest.main()
+
+ 
Modified: trunk/toolkits/basemap/setup.py
===================================================================
--- trunk/toolkits/basemap/setup.py	2007年11月26日 18:48:01 UTC (rev 4450)
+++ trunk/toolkits/basemap/setup.py	2007年11月26日 18:58:41 UTC (rev 4451)
@@ -101,12 +101,17 @@
 include_dirs = ["pyshapelib/shapelib"],
 define_macros = dbf_macros()) ]
 
-# install dap, if not already available.
-try:
- from dap import client
-except ImportError:
- packages = packages + ['dap','dap.util','dap.parsers']
- package_dirs['dap'] = os.path.join('lib','dap')
+# install dap and httplib2, if not already available.
+#try:
+# from dap import client
+#except ImportError:
+packages = packages + ['dap','dap.util','dap.parsers']
+package_dirs['dap'] = os.path.join('lib','dap')
+#try:
+# import httplib2
+#except ImportError:
+packages = packages + ['httplib2']
 +package_dirs['httplib2'] = os.path.join('lib','httplib2')
 
 if 'setuptools' in sys.modules:
 # Are we running with setuptools?
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <js...@us...> - 2007年11月26日 18:48:16
Revision: 4450
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4450&view=rev
Author: jswhit
Date: 2007年11月26日 10:48:01 -0800 (2007年11月26日)
Log Message:
-----------
add missing file to dap package
Added Paths:
-----------
 trunk/toolkits/basemap/lib/dap/proxy.py
Added: trunk/toolkits/basemap/lib/dap/proxy.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/proxy.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/proxy.py	2007年11月26日 18:48:01 UTC (rev 4450)
@@ -0,0 +1,147 @@
+from __future__ import division
+
+"""Proxy class to DAP data.
+
+This module implements a proxy object that behaves like an array and 
+downloads data transparently from a DAP server when sliced. It is used
+when building a representation of the dataset, and should not be
+used directly.
+"""
+
+__author__ = "Roberto De Almeida <ro...@py...>"
+
+import sys
+
+from dap.xdr import DapUnpacker
+from dap.helper import fix_slice
+from dap.util.http import openurl
+
+try:
+ from numpy import ndarray as ndarray_
+except ImportError:
+ ndarray_ = None
+
+
+class Proxy(object):
+ """
+ A proxy to data stored in a DODS server.
+ """
+ def __init__(self, url, id, shape, type, filters=None, cache=None, username=None, password=None):
+ self.url = url
+ self.id = id
+ self.shape = shape
+ self.type = type
+
+ self.filters = filters or []
+
+ self.cache = cache
+ self.username = username
+ self.password = password
+
+ def __iter__(self):
+ return iter(self[:])
+
+ def __getitem__(self, index):
+ """
+ Download data from DAP server.
+ 
+ When the proxy object is sliced, it builds a URL from the slice
+ and retrieves the data from the DAP server.
+ """
+ # Build the base URL.
+ url = '%s.dods?%s' % (self.url, self.id)
+
+ if self.shape:
+ # Fix the index for incomplete slices or ellipsis.
+ index = fix_slice(len(self.shape), index)
+
+ # Force index to tuple, to iterate over the slices.
+ if not isinstance(index, tuple):
+ index = index,
+
+ # Iterate over the sliced dimensions.
+ i = 0
+ outshape = []
+ for dimension in index:
+ # If dimension is a slice, get start, step and stop.
+ if isinstance(dimension, slice):
+ start = dimension.start or 0
+ step = dimension.step or 1
+ if dimension.stop:
+ stop = dimension.stop-1
+ else:
+ stop = self.shape[i]-1
+
+ # Otherwise, retrieve a single value.
+ else:
+ start = dimension
+ stop = dimension
+ step = 1
+
+ # When stop is not specified, use the shape.
+ if stop == sys.maxint or stop > self.shape[i]-1:
+ stop = self.shape[i]-1
+ # Negative slices. 
+ elif stop < 0:
+ stop = self.shape[i]+stop
+
+ # Negative starting slices.
+ if start < 0: start = self.shape[i]+start
+
+ # Build the URL used to retrieve the data.
+ url = '%s[%s:%s:%s]' % (url, str(start), str(step), str(stop))
+
+ # outshape is a list of the slice dimensions.
+ outshape.append(1+(stop-start)//step)
+ 
+ # Update to next dimension.
+ i += 1
+
+ else:
+ # No need to resize the data.
+ outshape = None
+
+ # Make the outshape consistent with the numpy and pytables conventions.
+ if outshape is not None:
+ outshape = self._reduce_outshape(outshape)
+
+ # Check for filters.
+ if self.filters:
+ ce = '&'.join(self.filters)
+ url = '%s&%s' % (url, ce)
+
+ # Fetch data.
+ resp, data = openurl(url, self.cache, self.username, self.password)
+
+ # First lines are ASCII information that end with 'Data:\n'.
+ start = data.index('Data:\n') + len('Data:\n')
+ xdrdata = data[start:]
+
+ # Unpack data.
+ output = DapUnpacker(xdrdata, self.shape, self.type, outshape).getvalue()
+
+ # Convert length 1 arrays to scalars
+ if ndarray_:
+ if outshape == () and isinstance(output, ndarray_):
+ output = output[0]
+
+ return output
+
+
+ def _reduce_outshape(self, outshape):
+ """Make the outshape consistent with the numpy and pytables conventions.
+
+ (1, N) -> (N,)
+ (N, 1) -> (N,)
+ (N, M, 1) -> (N, M)
+
+ Basically, all ones are removed from the shape.
+ """
+ return tuple([index for index in outshape if index != 1])
+
+def _test():
+ import doctest
+ doctest.testmod()
+
+if __name__ == "__main__":
+ _test()
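`Proxy.__getitem__` above maps each Python slice onto an OPeNDAP hyperslab of the form `[start:step:stop]`, where stop is inclusive. A simplified, self-contained sketch of that URL construction (server name and variable id are made up for illustration):

```python
def dods_url(base_url, var_id, shape, index):
    # Each dimension's slice becomes an OPeNDAP hyperslab
    # [start:step:stop]; stop is inclusive, hence the -1.
    url = '%s.dods?%s' % (base_url, var_id)
    for size, dim in zip(shape, index):
        if isinstance(dim, slice):
            start = dim.start or 0
            step = dim.step or 1
            stop = (dim.stop if dim.stop is not None else size) - 1
        else:                       # a single integer index
            start = stop = dim
            step = 1
        if start < 0:
            start += size           # negative indices count from
        if stop < 0:
            stop += size            # the end, as in numpy
        stop = min(stop, size - 1)
        url += '[%d:%d:%d]' % (start, step, stop)
    return url

print(dods_url('http://server/dataset', 'tmp2m', (8, 73, 144),
               (slice(0, 6), slice(None), slice(None))))
# http://server/dataset.dods?tmp2m[0:1:5][0:1:72][0:1:143]
```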
From: <js...@us...> - 2007年11月26日 18:30:20
Revision: 4449
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4449&view=rev
Author: jswhit
Date: 2007年11月26日 10:29:14 -0800 (2007年11月26日)
Log Message:
-----------
add the ability to read remote datasets over http using the opendap
client by Roberto D'Almeida (included). NetCDFFile modified so
that it uses the opendap client if the filename begins with 'http'.
Modified Paths:
--------------
 trunk/toolkits/basemap/Changelog
 trunk/toolkits/basemap/MANIFEST.in
 trunk/toolkits/basemap/examples/fcstmaps.py
 trunk/toolkits/basemap/lib/matplotlib/toolkits/basemap/pupynere.py
 trunk/toolkits/basemap/setup.py
Added Paths:
-----------
 trunk/toolkits/basemap/lib/dap/
 trunk/toolkits/basemap/lib/dap/__init__.py
 trunk/toolkits/basemap/lib/dap/client.py
 trunk/toolkits/basemap/lib/dap/dtypes.py
 trunk/toolkits/basemap/lib/dap/exceptions.py
 trunk/toolkits/basemap/lib/dap/helper.py
 trunk/toolkits/basemap/lib/dap/lib.py
 trunk/toolkits/basemap/lib/dap/parsers/
 trunk/toolkits/basemap/lib/dap/parsers/__init__.py
 trunk/toolkits/basemap/lib/dap/parsers/das.py
 trunk/toolkits/basemap/lib/dap/parsers/dds.py
 trunk/toolkits/basemap/lib/dap/util/
 trunk/toolkits/basemap/lib/dap/util/__init__.py
 trunk/toolkits/basemap/lib/dap/util/filter.py
 trunk/toolkits/basemap/lib/dap/util/http.py
 trunk/toolkits/basemap/lib/dap/util/ordereddict.py
 trunk/toolkits/basemap/lib/dap/util/safeeval.py
 trunk/toolkits/basemap/lib/dap/util/wsgi_intercept.py
 trunk/toolkits/basemap/lib/dap/xdr.py
Modified: trunk/toolkits/basemap/Changelog
===================================================================
--- trunk/toolkits/basemap/Changelog	2007年11月26日 17:23:18 UTC (rev 4448)
+++ trunk/toolkits/basemap/Changelog	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -1,3 +1,5 @@
+ * modify NetCDFFile to use dap module to read remote
+ datasets over http. Include dap module.
 * modify NetCDFFile to automatically apply scale_factor
 and add_offset, and return masked arrays masked where
 data == missing_value or _FillValue.
Modified: trunk/toolkits/basemap/MANIFEST.in
===================================================================
--- trunk/toolkits/basemap/MANIFEST.in	2007年11月26日 17:23:18 UTC (rev 4448)
+++ trunk/toolkits/basemap/MANIFEST.in	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -74,6 +74,9 @@
 include pyshapelib/shapelib/*.c pyshapelib/shapelib/*.h
 include MANIFEST.in 
 recursive-include geos-2.2.3 *
+recursive-include lib/dap *
+recursive-include lib/dbflib *
+recursive-include lib/shapelib *
 include lib/matplotlib/toolkits/basemap/data/5minmask.bin
 include lib/matplotlib/toolkits/basemap/data/GL27
 include lib/matplotlib/toolkits/basemap/data/countries_c.dat
Modified: trunk/toolkits/basemap/examples/fcstmaps.py
===================================================================
--- trunk/toolkits/basemap/examples/fcstmaps.py	2007年11月26日 17:23:18 UTC (rev 4448)
+++ trunk/toolkits/basemap/examples/fcstmaps.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -1,19 +1,14 @@
 # this example reads today's numerical weather forecasts 
 # from the NOAA OpenDAP servers and makes a multi-panel plot.
-# Requires the pyDAP module (a pure-python module)
-# from http://pydap.org, and an active intenet connection.
-
-try:
- from dap import client
-except:
- raise ImportError,"requires pyDAP module (version 2.1 or higher) from http://pydap.org"
-from pylab import title, show, figure, cm, arange, frange, figtext, \
- meshgrid, axes, colorbar, where, amin, amax, around
+from pylab import title, show, figure, cm, figtext, \
+ meshgrid, axes, colorbar
+import numpy
 import sys
 from matplotlib.numerix import ma
 import datetime
-from matplotlib.toolkits.basemap import Basemap
+from matplotlib.toolkits.basemap import Basemap, NetCDFFile, addcyclic
 
+
 hrsgregstart = 13865688 # hrs from 00010101 to 15821015 in Julian calendar.
 # times in many datasets use mixed Gregorian/Julian calendar, datetime 
 # module uses a proleptic Gregorian calendar. So, I use datetime to compute
@@ -48,12 +43,11 @@
 YYYYMM = YYYYMMDD[0:6]
 
 # set OpenDAP server URL.
-HH='09'
-URLbase="http://nomad3.ncep.noaa.gov:9090/dods/sref/sref"
-URL=URLbase+YYYYMMDD+"/sref_eta_ctl1_"+HH+"z"
+URLbase="http://nomad3.ncep.noaa.gov:9090/dods/mrf/mrf"
+URL=URLbase+YYYYMMDD+'/mrf'+YYYYMMDD
 print URL+'\n'
 try:
- data = client.open(URL)
+ data = NetCDFFile(URL)
 except:
 msg = """
 opendap server not providing the requested data.
@@ -61,14 +55,14 @@
 raise IOError, msg
 
 
-# read levels, lats,lons,times.
+# read lats,lons,times.
 
-print data.keys()
-levels = data['lev']
-latitudes = data['lat']
-longitudes = data['lon']
-fcsttimes = data['time']
-times = fcsttimes[:]
+print data.variables.keys()
+latitudes = data.variables['lat']
+longitudes = data.variables['lon']
+fcsttimes = data.variables['time']
+times = fcsttimes[0:6] # first 6 forecast times.
+ntimes = len(times)
 # put forecast times in YYYYMMDDHH format.
 verifdates = []
 fcsthrs=[]
@@ -79,55 +73,43 @@
 verifdates.append(fdate.strftime('%Y%m%d%H'))
 print fcsthrs
 print verifdates
-levs = levels[:]
 lats = latitudes[:]
-lons = longitudes[:]
-lons, lats = meshgrid(lons,lats)
+nlats = len(lats)
+lons1 = longitudes[:]
+nlons = len(lons1)
 
 # unpack 2-meter temp forecast data.
 
-t2mvar = data['tmp2m']
-missval = t2mvar.missing_value
-t2m = t2mvar[:,:,:]
-if missval < 0:
- t2m = ma.masked_values(where(t2m>-1.e20,t2m,1.e20), 1.e20)
-else:
- t2m = ma.masked_values(where(t2m<1.e20,t2m,1.e20), 1.e20)
-t2min = amin(t2m.compressed()); t2max= amax(t2m.compressed())
-print t2min,t2max
-clevs = frange(around(t2min/10.)*10.-5.,around(t2max/10.)*10.+5.,4)
-print clevs[0],clevs[-1]
-llcrnrlat = 22.0
-urcrnrlat = 48.0
-latminout = 22.0
-llcrnrlon = -125.0
-urcrnrlon = -60.0
-standardpar = 50.0
-centerlon=-105.
-# create Basemap instance for Lambert Conformal Conic projection.
-m = Basemap(llcrnrlon=llcrnrlon,llcrnrlat=llcrnrlat,
- urcrnrlon=urcrnrlon,urcrnrlat=urcrnrlat,
+t2mvar = data.variables['tmp2m']
+t2min = t2mvar[0:ntimes,:,:]
+t2m = numpy.zeros((ntimes,nlats,nlons+1),t2min.dtype)
+# create Basemap instance for Orthographic projection.
+m = Basemap(lon_0=-105,lat_0=40,
 rsphere=6371200.,
- resolution='l',area_thresh=5000.,projection='lcc',
- lat_1=standardpar,lon_0=centerlon)
+ resolution='c',area_thresh=5000.,projection='ortho')
+# add wrap-around point in longitude.
+for nt in range(ntimes):
+ t2m[nt,:,:], lons = addcyclic(t2min[nt,:,:], lons1)
+# convert to celsius.
+t2m = t2m-273.15
+# contour levels
+clevs = numpy.arange(-30,30.1,2.)
+lons, lats = meshgrid(lons, lats)
 x, y = m(lons, lats)
 # create figure.
-fig=figure(figsize=(8,8))
-yoffset = (m.urcrnry-m.llcrnry)/30.
-for npanel,fcsthr in enumerate(arange(0,72,12)):
- nt = fcsthrs.index(fcsthr)
- ax = fig.add_subplot(320+npanel+1)
- #cs = m.contour(x,y,t2m[nt,:,:],clevs,colors='k')
- cs = m.contourf(x,y,t2m[nt,:,:],clevs,cmap=cm.jet)
- m.drawcoastlines()
- m.drawstates()
+fig=figure(figsize=(6,8))
+# make subplots.
+for nt,fcsthr in enumerate(fcsthrs):
+ ax = fig.add_subplot(321+nt)
+ cs = m.contourf(x,y,t2m[nt,:,:],clevs,cmap=cm.jet,extend='both')
+ m.drawcoastlines(linewidth=0.5)
 m.drawcountries()
- m.drawparallels(arange(25,75,20),labels=[1,0,0,0],fontsize=8,fontstyle='oblique')
- m.drawmeridians(arange(-140,0,20),labels=[0,0,0,1],fontsize=8,yoffset=yoffset,fontstyle='oblique')
+ m.drawparallels(numpy.arange(-80,81,20))
+ m.drawmeridians(numpy.arange(0,360,20))
 # panel title
- title(repr(fcsthr)+'-h forecast valid '+verifdates[nt],fontsize=12)
+ title(repr(fcsthr)+'-h forecast valid '+verifdates[nt],fontsize=9)
 # figure title
-figtext(0.5,0.95,u"2-m temp (\N{DEGREE SIGN}K) forecasts from %s"%verifdates[0],
+figtext(0.5,0.95,u"2-m temp (\N{DEGREE SIGN}C) forecasts from %s"%verifdates[0],
 horizontalalignment='center',fontsize=14)
 # a single colorbar.
 cax = axes([0.1, 0.03, 0.8, 0.025])
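The rewritten example calls basemap's `addcyclic` to append a wrap-around longitude point, so global contour plots have no seam at the dateline. A pure-Python sketch of the idea (basemap's real helper operates on numpy arrays):

```python
def add_cyclic(rows, lons):
    # Duplicate the first longitude column at the end of each row and
    # extend the longitude axis by one full circle (360 degrees).
    rows_out = [row + [row[0]] for row in rows]
    lons_out = lons + [lons[0] + 360.0]
    return rows_out, lons_out

grid, lons = add_cyclic([[1, 2, 3], [4, 5, 6]], [0.0, 120.0, 240.0])
print(grid)  # [[1, 2, 3, 1], [4, 5, 6, 4]]
print(lons)  # [0.0, 120.0, 240.0, 360.0]
```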
Added: trunk/toolkits/basemap/lib/dap/__init__.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/__init__.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/__init__.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,12 @@
+"""A Python implementation of the Data Access Protocol (DAP).
+
+Pydap is a Python module implementing the Data Access Protocol (DAP)
+written from scratch. The module implements a DAP client, allowing
+transparent and efficient access to datasets stored in a DAP server, and
+also implements a DAP server able to serve data from a variety of
+formats.
+
+For more information about the protocol, please check http://opendap.org.
+"""
+
+__import__('pkg_resources').declare_namespace(__name__)
Added: trunk/toolkits/basemap/lib/dap/client.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/client.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/client.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,67 @@
+__author__ = "Roberto De Almeida <ro...@py...>"
+
+import dap.lib
+from dap.util.http import openurl
+from dap.exceptions import ClientError
+
+
+def open(url, cache=None, username=None, password=None, verbose=False):
+ """Connect to a remote dataset.
+
+ This function opens a dataset stored in a DAP server:
+
+ >>> dataset = open(url, cache=None, username=None, password=None, verbose=False)
+
+ You can specify a cache location (a directory), so that repeated
+ accesses to the same URL avoid the network.
+ 
+ The username and password may be necessary if the DAP server requires
+ authentication. The 'verbose' option will make pydap print all the 
+ URLs that are accessed.
+ """
+ # Set variables on module namespace.
+ dap.lib.VERBOSE = verbose
+
+ if url.startswith('http'):
+ for response in [_ddx, _ddsdas]:
+ dataset = response(url, cache, username, password)
+ if dataset: return dataset
+ else:
+ raise ClientError("Unable to open dataset.")
+ else:
+ from dap.plugins.lib import loadhandler
+ from dap.helper import walk
+
+ # Open a local file. This is a clever hack. :)
+ handler = loadhandler(url)
+ dataset = handler._parseconstraints()
+
+ # Unwrap any arrayterators in the dataset.
+ for var in walk(dataset):
+ try: var.data = var.data._var
+ except: pass
+
+ return dataset
+
+
+def _ddsdas(baseurl, cache, username, password):
+ ddsurl, dasurl = '%s.dds' % baseurl, '%s.das' % baseurl
+
+ # Get metadata.
+ respdds, dds = openurl(ddsurl, cache, username, password)
+ respdas, das = openurl(dasurl, cache, username, password)
+
+ if respdds['status'] == '200' and respdas['status'] == '200':
+ from dap.parsers.dds import DDSParser
+ from dap.parsers.das import DASParser
+
+ # Build dataset.
+ dataset = DDSParser(dds, ddsurl, cache, username, password).parse()
+
+ # Add attributes.
+ dataset = DASParser(das, dasurl, dataset).parse()
+ return dataset
+
+
+def _ddx(baseurl, cache, username, password):
+ pass
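`open()` above tries each response builder in turn (`_ddx` first, then the DDS/DAS pair) and keeps the first one that yields a dataset. The fallback loop, sketched with hypothetical stand-in builders:

```python
def open_dataset(url, builders):
    # Try each builder in order; the first non-None result wins.
    for build in builders:
        dataset = build(url)
        if dataset is not None:
            return dataset
    raise IOError('Unable to open dataset: %s' % url)

# Stand-ins: the first builder fails (like _ddx, which is a stub),
# the second succeeds (like _ddsdas).
builders = [lambda url: None, lambda url: {'url': url}]
print(open_dataset('http://example.org/data', builders))
# {'url': 'http://example.org/data'}
```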
Added: trunk/toolkits/basemap/lib/dap/dtypes.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/dtypes.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/dtypes.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,529 @@
+"""DAP variables.
+
+This module is a Python implementation of the DAP data model.
+"""
+
+__author__ = "Roberto De Almeida <ro...@py...>"
+
+import copy
+import itertools
+
+from dap.lib import quote, to_list, _quote
+from dap.util.ordereddict import odict
+from dap.util.filter import get_filters
+
+__all__ = ['StructureType', 'SequenceType', 'DatasetType', 'GridType', 'ArrayType', 'BaseType',
+ 'Float', 'Float0', 'Float8', 'Float16', 'Float32', 'Float64', 'Int', 'Int0', 'Int8',
+ 'Int16', 'Int32', 'Int64', 'UInt16', 'UInt32', 'UInt64', 'Byte', 'String', 'Url']
+
+_basetypes = ['Float32', 'Float64', 'Int16', 'Int32', 'UInt16', 'UInt32', 'Byte', 'String', 'Url']
+_constructors = ['StructureType', 'SequenceType', 'DatasetType', 'GridType', 'ArrayType']
+
+# Constants.
+Float = 'Float64'
+Float0 = 'Float64'
+Float8 = 'Float32'
+Float16 = 'Float32'
+Float32 = 'Float32'
+Float64 = 'Float64'
+Int = 'Int32'
+Int0 = 'Int32'
+Int8 = 'Byte'
+Int16 = 'Int16'
+Int32 = 'Int32'
+Int64 = 'Int32'
+UInt16 = 'UInt16'
+UInt32 = 'UInt32'
+UInt64 = 'UInt32'
+UInt8 = 'Byte'
+Byte = 'Byte'
+String = 'String'
+Url = 'Url'
+
+typemap = {
+ # numpy
+ 'd': Float64,
+ 'f': Float32,
+ 'l': Int32,
+ 'b': Byte,
+ 'h': Int16,
+ 'q': Int32,
+ 'H': UInt16,
+ 'L': UInt32,
+ 'Q': UInt32,
+ 'B': Byte,
+ 'S': String,
+ }
+
+
+class StructureType(odict):
+ """Structure contructor. 
+
+ A structure is a dict-like object, which can hold other DAP variables.
+ Structures have a 'data' attribute that combines the data from the
+ stored variables when read, and propagates the data to the variables
+ when set.
+ 
+ This behaviour can be bypassed by setting the '_data' attribute; in
+ this case, no data is propagated, and further reads do not combine the
+ data from the stored variables.
+ """
+
+ def __init__(self, name='', attributes=None):
+ odict.__init__(self)
+ self.name = quote(name)
+ self.attributes = attributes or {}
+
+ self._id = name
+ self._filters = []
+ self._data = None
+
+ def __iter__(self):
+ # Iterate over the variables contained in the structure.
+ return self.itervalues()
+ 
+ walk = __iter__
+
+ def __getattr__(self, attr):
+ # Try to return stored variable.
+ try:
+ return self[attr]
+ except KeyError:
+ # Try to return attribute from self.attributes.
+ try: return self.attributes[attr]
+ except KeyError: raise AttributeError
+
+ def __setitem__(self, key, item):
+ # Assign a new variable and apply the proper id.
+ self._dict.__setitem__(key, item)
+ if key not in self._keys: self._keys.append(key)
+
+ # Ensure that stored objects have the proper id.
+ item._set_id(self._id)
+
+ def _get_data(self):
+ if self._data is not None:
+ return self._data
+ else:
+ return [var.data for var in self.values()]
+ 
+ def _set_data(self, data):
+ # Propagate the data to the stored variables.
+ for data_, var in itertools.izip(data, self.values()):
+ var.data = data_
+ 
+ data = property(_get_data, _set_data)
+
+ def _get_id(self):
+ return self._id
+ 
+ def _set_id(self, parent=None):
+ if parent:
+ self._id = '%s.%s' % (parent, self.name)
+ else:
+ self._id = self.name
+
+ # Propagate id to stored variables.
+ for var in self.values(): var._set_id(self._id)
+ 
+ id = property(_get_id) # Read-only.
+ 
+ def _get_filters(self):
+ return self._filters
+ 
+ def _set_filters(self, f):
+ self._filters.append(f)
+
+ # Propagate filter to stored variables.
+ for var in self.values(): var._set_filters(f)
+ 
+ filters = property(_get_filters, _set_filters)
+
+ def __copy__(self):
+ out = self.__class__(name=self.name, attributes=self.attributes.copy())
+ out._id = self._id
+ out._filters = self._filters[:]
+ out._data = self._data
+
+ # Stored variables *are not* copied.
+ for k, v in self.items():
+ out[k] = v
+ return out
+
+ def __deepcopy__(self, memo=None, _nil=[]):
+ out = self.__class__(name=self.name, attributes=self.attributes.copy())
+ out._id = self._id
+ out._filters = self._filters[:]
+ out._data = self._data
+
+ # Stored variables *are* (deep) copied.
+ for k, v in self.items():
+ out[k] = copy.deepcopy(v)
+ return out
+
+
+class DatasetType(StructureType):
+ """Dataset constructor.
+
+ A dataset is very similar to a structure -- the main difference is that
+ its name is not used when composing the fully qualified name of stored
+ variables.
+ """
+ 
+ def __setitem__(self, key, item):
+ self._dict.__setitem__(key, item)
+ if key not in self._keys: self._keys.append(key)
+
+ # Set the id. Here the parent should be None, since the dataset
+ # id is not part of the fully qualified name.
+ item._set_id(None)
+
+ def _set_id(self, parent=None):
+ self._id = self.name
+
+ # Propagate id.
+ for var in self.values(): var._set_id(None)
+
+
+class SequenceType(StructureType):
+ """Sequence constructor.
+
+ A sequence contains ordered data, corresponding to the records in
+ a sequence of structures with the same stored variables.
+ """
+ # Nesting level. Sequences inside sequences have a level 2, and so on.
+ level = 1
+
+ def __setitem__(self, key, item):
+ # Assign a new variable and apply the proper id.
+ self._dict.__setitem__(key, item)
+ if key not in self._keys: self._keys.append(key)
+
+ # Ensure that stored objects have the proper id.
+ item._set_id(self._id)
+
+ # If the variable is a sequence, set the nesting level.
+ def set_level(seq, level):
+ if isinstance(seq, SequenceType):
+ seq.level = level
+ for child in seq.walk(): set_level(child, level+1)
+ set_level(item, self.level+1)
+
+ def walk(self):
+ # Walk over the variables contained in the structure.
+ return self.itervalues()
+ 
+ def _get_data(self):
+ # This is similar to the structure _get_data method, except that data
+ # is combined from stored variables using zip(), i.e., grouped values 
+ # from each variable.
+ if self._data is not None:
+ return self._data
+ else: 
+ return _build_data(self.level, *[var.data for var in self.values()])
+ 
+ def _set_data(self, data):
+ for data_, var in itertools.izip(_propagate_data(self.level, data), self.values()):
+ var.data = data_
+
+ data = property(_get_data, _set_data)
+
+ def __iter__(self):
+ """
+ When iterating over a sequence, we yield structures containing the
+ corresponding data (first record, second, etc.).
+ """
+ out = self.__deepcopy__()
+
+ # Set server-side filters. When the sequence is iterated in a
+ # listcomp/genexp, this function inspects the stack and tries to
+ # build a server-side filter from the client-side filter. This
+ # is voodoo black magic, take care.
+ filters = get_filters(out)
+ for filter_ in filters:
+ out._set_filters(filter_)
+
+ for values in out.data:
+ # Yield a nice structure.
+ struct_ = StructureType(name=out.name, attributes=out.attributes)
+ for data, name in zip(values, out.keys()):
+ var = struct_[name] = out[name].__deepcopy__()
+ var.data = data
+ # Set the id. This is necessary since the new structure is not
+ # contained inside a dataset.
+ parent = out._id[:-len(out.name)-1]
+ struct_._set_id(parent)
+ yield struct_
+
+ def filter(self, *filters):
+ # Make a copy of the sequence.
+ out = self.__deepcopy__()
+
+ # And filter it according to the selection expression.
+ for filter_ in filters:
+ out._set_filters(_quote(filter_))
+ return out
+
+
+class GridType(object):
+ """Grid constructor.
+
+ A grid is a constructor holding an 'array' variable. The array has its
+ dimensions mapped to 'maps' stored in the grid (lat, lon, time, etc.).
+ Most of the requests are simply passed onto the stored array.
+ """
+
+ def __init__(self, name='', array=None, maps=None, attributes=None):
+ self.name = quote(name)
+ self.array = array
+ self.maps = maps or odict()
+ self.attributes = attributes or {}
+
+ self._id = name
+ self._filters = []
+
+ def __len__(self):
+ return self.array.shape[0]
+
+ def __iter__(self):
+ # Iterate over the grid. Yield the array and then the maps.
+ yield self.array
+ for map_ in self.maps.values(): yield map_
+
+ walk = __iter__
+ 
+ def __getattr__(self, attr):
+ # Try to return attribute from self.attributes.
+ try:
+ return self.attributes[attr]
+ except KeyError:
+ raise AttributeError
+
+ def __getitem__(self, index):
+ # Return data from the array.
+ return self.array[index]
+
+ def _get_data(self):
+ return self.array.data
+
+ def _set_data(self, data):
+ self.array.data = data
+
+ data = property(_get_data, _set_data)
+
+ def _get_id(self):
+ return self._id
+
+ def _set_id(self, parent=None):
+ if parent: self._id = '%s.%s' % (parent, self.name)
+ else: self._id = self.name
+
+ # Propagate id to array and maps.
+ if self.array: self.array._set_id(self._id)
+ for map_ in self.maps.values():
+ map_._set_id(self._id)
+
+ id = property(_get_id)
+
+ def _get_filters(self):
+ return self._filters
+
+ def _set_filters(self, f):
+ self.filters.append(f)
+
+ # Propagate filter.
+ self.array._set_filters(f)
+ for map_ in self.maps.values():
+ map_._set_filters(f)
+
+ filters = property(_get_filters, _set_filters)
+
+ def _get_dimensions(self):
+ # Return dimensions from stored maps.
+ return tuple(self.maps.keys())
+
+ dimensions = property(_get_dimensions)
+
+ def _get_shape(self):
+ return self.array.shape
+
+ def _set_shape(self, shape):
+ self.array.shape = shape
+
+ shape = property(_get_shape, _set_shape)
+
+ def _get_type(self):
+ return self.array.type
+
+ def _set_type(self, type):
+ self.array.type = type
+
+ type = property(_get_type, _set_type)
+
+ def __copy__(self):
+ out = self.__class__(name=self.name, array=self.array, maps=self.maps, attributes=self.attributes.copy())
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+ def __deepcopy__(self, memo=None, _nil=[]):
+ out = self.__class__(name=self.name, attributes=self.attributes.copy())
+ out.array = copy.deepcopy(self.array)
+ out.maps = copy.deepcopy(self.maps)
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+
+class BaseType(object):
+ """DAP Base type.
+
+ Variable holding a single value, or an iterable if it's stored inside
+ a sequence. It's the fundamental DAP variable, which actually holds
+ data (together with arrays).
+ """
+
+ def __init__(self, name='', data=None, type=None, attributes=None):
+ self.name = quote(name)
+ self.data = data
+ self.attributes = attributes or {}
+ 
+ if type in _basetypes: self.type = type
+ else: self.type = typemap.get(type, Int32)
+
+ self._id = name
+ self._filters = []
+
+ def __iter__(self):
+ # Yield the stored value.
+ # Perhaps we should raise StopIteration?
+ yield self.data
+
+ def __getattr__(self, attr):
+ # Try to return attribute from self.attributes.
+ try:
+ return self.attributes[attr]
+ except KeyError:
+ raise AttributeError
+
+ def __getitem__(self, key):
+ # Return data from the array.
+ return self.data[key]
+
+ def __setitem__(self, key, item):
+ # Assign a new variable and apply the proper id.
+ self.data.__setitem__(key, item)
+
+ def _get_id(self):
+ return self._id
+
+ def _set_id(self, parent=None):
+ if parent: self._id = '%s.%s' % (parent, self.name)
+ else: self._id = self.name
+
+ id = property(_get_id) # Read-only.
+
+ def _get_filters(self):
+ return self._filters
+
+ def _set_filters(self, f):
+ self._filters.append(f)
+
+ # Propagate to data, if it's a Proxy object.
+ if hasattr(self.data, 'filters'):
+ self.data.filters = self._filters
+
+ filters = property(_get_filters, _set_filters)
+
+ def __copy__(self):
+ out = self.__class__(name=self.name, data=self.data, type=self.type, attributes=self.attributes.copy())
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+ def __deepcopy__(self, memo=None, _nil=[]):
+ out = self.__class__(name=self.name, type=self.type, attributes=self.attributes.copy())
+
+ try:
+ out.data = copy.copy(self.data)
+ except TypeError:
+ self.data = to_list(self.data)
+ out.data = copy.copy(self.data)
+ 
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+ # This allows the variable to be compared to numbers.
+ def __ge__(self, other): return self.data >= other
+ def __gt__(self, other): return self.data > other
+ def __le__(self, other): return self.data <= other
+ def __lt__(self, other): return self.data < other
+ def __eq__(self, other): return self.data == other
+
+
+class ArrayType(BaseType):
+ """An array of BaseType variables.
+ 
+ Although the DAP supports arrays of any DAP variables, pydap can only
+ handle arrays of base types. This makes the ArrayType class very
+ similar to a BaseType, with the difference that it'll hold an array
+ of data in its 'data' attribute.
+
+ Arrays of constructors will not be supported until Python has a
+ native multi-dimensional array type. 
+ """
+
+ def __init__(self, name='', data=None, shape=None, dimensions=None, type=None, attributes=None):
+ self.name = quote(name)
+ self.data = data
+ self.shape = shape or ()
+ self.dimensions = dimensions or ()
+ self.attributes = attributes or {}
+
+ if type in _basetypes: self.type = type
+ else: self.type = typemap.get(type, Int32)
+ 
+ self._id = name
+ self._filters = []
+
+ def __len__(self):
+ return self.shape[0]
+
+ def __copy__(self):
+ out = self.__class__(name=self.name, data=self.data, shape=self.shape, dimensions=self.dimensions, type=self.type, attributes=self.attributes.copy())
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+ def __deepcopy__(self, memo=None, _nil=[]):
+ out = self.__class__(name=self.name, shape=self.shape, dimensions=self.dimensions, type=self.type, attributes=self.attributes.copy())
+ 
+ try:
+ out.data = copy.copy(self.data)
+ except TypeError:
+ self.data = to_list(self.data)
+ out.data = copy.copy(self.data)
+ 
+ out._id = self._id
+ out._filters = self._filters[:]
+ return out
+
+
+# Functions for propagating data up and down in sequences.
+# I'm not 100% sure how this works.
+def _build_data(level, *vars_):
+ if level > 0:
+ out = [_build_data(level-1, *els) for els in itertools.izip(*vars_)]
+ else:
+ out = vars_
+
+ return out
+
+def _propagate_data(level, vars_):
+ if level > 0:
+ out = zip(*[_propagate_data(level-1, els) for els in vars_])
+ else:
+ out = vars_
+
+ return out
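The module closes with two helpers whose own comment admits they are opaque. `_build_data` regroups per-variable value lists into per-record tuples, one `zip()` per nesting level, and `_propagate_data` is its inverse. A sketch in modern Python:

```python
def build_data(level, *vars_):
    # level 1: zip the variables' value lists into record tuples;
    # deeper levels recurse into nested sequences.
    if level > 0:
        return [build_data(level - 1, *els) for els in zip(*vars_)]
    return vars_

def propagate_data(level, records):
    # Inverse of build_data: split record tuples back into one
    # value sequence per variable.
    if level > 0:
        return list(zip(*[propagate_data(level - 1, els) for els in records]))
    return records

records = build_data(1, [1, 2, 3], ['a', 'b', 'c'])
print(records)                     # [(1, 'a'), (2, 'b'), (3, 'c')]
print(propagate_data(1, records))  # [(1, 2, 3), ('a', 'b', 'c')]
```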
Added: trunk/toolkits/basemap/lib/dap/exceptions.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/exceptions.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/exceptions.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,49 @@
+"""DAP exceptions.
+
+These exceptions are mostly used by the server. When an exception is
+captured, a proper error message is displayed (according to the DAP
+2.0 spec), with information about the exception and the error code 
+associated with it.
+
+The error codes are attributed using the "first come, first serve"
+algorithm.
+"""
+
+__author__ = "Roberto De Almeida <ro...@py...>"
+
+
+class DapError(Exception):
+ """Base DAP exception."""
+ def __init__(self, value):
+ self.value = value
+
+ def __str__(self):
+ return repr(self.value)
+
+
+class ClientError(DapError):
+ """Generic error with the client."""
+ code = 100
+ 
+
+class ServerError(DapError):
+ """Generic error with the server."""
+ code = 200
+
+class ConstraintExpressionError(ServerError):
+ """Exception raised when an invalid constraint expression is given."""
+ code = 201
+
+
+class PluginError(DapError):
+ """Generic error with a plugin."""
+ code = 300
+
+class ExtensionNotSupportedError(PluginError):
+ """Exception raised when trying to open a file not supported by any plugins."""
+ code = 301
+
+class OpenFileError(PluginError):
+ """Exception raised when unable to open a file."""
+ code = 302
+
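Each exception class above carries a numeric `code` intended for the server's error response. A sketch of how a server might render one (the `Error { ... }` layout follows the DAP 2.0 spec; the `render_error` function itself is hypothetical):

```python
class DapError(Exception):
    """Base DAP exception (condensed from the module above)."""
    code = 0
    def __init__(self, value):
        self.value = value
    def __str__(self):
        return repr(self.value)

class ClientError(DapError):
    """Generic error with the client."""
    code = 100

def render_error(exc):
    # Format an exception as a DAP 2.0-style Error response block.
    return 'Error {\n    code = %d;\n    message = "%s";\n};' % (
        exc.code, exc.value)

print(render_error(ClientError('Unable to open dataset.')))
```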
Added: trunk/toolkits/basemap/lib/dap/helper.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/helper.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/helper.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,803 @@
+"""Helper functions.
+
+These are generic functions used mostly for writing plugins.
+"""
+
+__author__ = "Roberto De Almeida <ro...@py...>"
+
+import sys
+import re
+import operator
+import itertools
+import copy
+from urllib import quote, unquote
+
+from dap.dtypes import *
+from dap.dtypes import _basetypes
+from dap.exceptions import ConstraintExpressionError
+from dap.lib import isiterable
+from dap.util.safeeval import expr_eval
+from dap.util.ordereddict import odict
+
+
+def constrain(dataset, constraints):
+ """A simple example. We create a dataset holding three variables:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> dataset['a'] = BaseType(name='a', type='Byte')
+ >>> dataset['b'] = BaseType(name='b', type='Byte')
+ >>> dataset['c'] = BaseType(name='c', type='Byte')
+
+ Now we give it a CE requesting only the variables ``a`` and ``b``:
+ 
+ >>> dataset2 = constrain(dataset, 'a,b')
+ >>> print dataset2 #doctest: +ELLIPSIS
+ {'a': <dap.dtypes.BaseType object at ...>, 'b': <dap.dtypes.BaseType object at ...>}
+
+ We can also request the variables in a different order:
+
+ >>> dataset2 = constrain(dataset, 'b,a')
+ >>> print dataset2 #doctest: +ELLIPSIS
+ {'b': <dap.dtypes.BaseType object at ...>, 'a': <dap.dtypes.BaseType object at ...>}
+
+ Another example. A dataset with two structures ``a`` and ``b``:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> dataset['a'] = StructureType(name='a')
+ >>> dataset['a']['a1'] = BaseType(name='a1', type='Byte')
+ >>> dataset['b'] = StructureType(name='b')
+ >>> dataset['b']['b1'] = BaseType(name='b1', type='Byte')
+ >>> dataset['b']['b2'] = BaseType(name='b2', type='Byte')
+
+ If we request the structure ``b`` we should get it complete:
+
+ >>> dataset2 = constrain(dataset, 'a.a1,b')
+ >>> print dataset2 #doctest: +ELLIPSIS
+ {'a': {'a1': <dap.dtypes.BaseType object at ...>}, 'b': {'b1': <dap.dtypes.BaseType object at ...>, 'b2': <dap.dtypes.BaseType object at ...>}}
+
+ >>> dataset2 = constrain(dataset, 'b.b1')
+ >>> print dataset2 #doctest: +ELLIPSIS
+ {'b': {'b1': <dap.dtypes.BaseType object at ...>}}
+
+ Arrays can be sliced. Here we have a ``(2,3)`` array:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> from numpy import array
+ >>> data = array([1,2,3,4,5,6])
+ >>> data.shape = (2,3)
+ >>> dataset['array'] = ArrayType(data=data, name='array', shape=(2,3), type='Int32')
+ >>> dataset2 = constrain(dataset, 'array')
+ >>> from dap.server import SimpleHandler
+ >>> headers, output = SimpleHandler(dataset).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Int32 array[2][3];
+ } foo;
+ <BLANKLINE>
+ >>> print dataset2['array'].data
+ [[1 2 3]
+ [4 5 6]]
+
+ But we request only part of it:
+
+ >>> dataset2 = constrain(dataset, 'array[0:1:1][0:1:1]')
+ >>> headers, output = SimpleHandler(dataset2).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Int32 array[2][2];
+ } foo;
+ <BLANKLINE>
+ >>> print dataset2['array'].data
+ [[1 2]
+ [4 5]]
+
+ The same is valid for grids:
+
+ >>> dataset['grid'] = GridType(name='grid') 
+ >>> data = array([1,2,3,4,5,6])
+ >>> data.shape = (2,3)
+ >>> dataset['grid'].array = ArrayType(name='grid', data=data, shape=(2,3), dimensions=('x', 'y'))
+ >>> dataset['grid'].maps['x'] = ArrayType(name='x', data=array([1,2]), shape=(2,))
+ >>> dataset['grid'].maps['y'] = ArrayType(name='y', data=array([1,2,3]), shape=(3,))
+ >>> dataset._set_id()
+ >>> headers, output = SimpleHandler(dataset).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Int32 array[2][3];
+ Grid {
+ Array:
+ Int32 grid[x = 2][y = 3];
+ Maps:
+ Int32 x[x = 2];
+ Int32 y[y = 3];
+ } grid;
+ } foo;
+ <BLANKLINE>
+ >>> dataset2 = constrain(dataset, 'grid[0:1:0][0:1:0]')
+ >>> headers, output = SimpleHandler(dataset2).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Grid {
+ Array:
+ Int32 grid[x = 1][y = 1];
+ Maps:
+ Int32 x[x = 1];
+ Int32 y[y = 1];
+ } grid;
+ } foo;
+ <BLANKLINE>
+ >>> headers, output = SimpleHandler(dataset2).ascii()
+ >>> print ''.join(output)
+ Dataset {
+ Grid {
+ Array:
+ Int32 grid[x = 1][y = 1];
+ Maps:
+ Int32 x[x = 1];
+ Int32 y[y = 1];
+ } grid;
+ } foo;
+ ---------------------------------------------
+ grid.grid
+ [0][0] 1
+ <BLANKLINE>
+ grid.x
+ [0] 1
+ <BLANKLINE>
+ grid.y
+ [0] 1
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+ <BLANKLINE>
+
+ Selecting a map from a Grid should return a structure:
+
+ >>> dataset3 = constrain(dataset, 'grid.x')
+ >>> headers, output = SimpleHandler(dataset3).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Structure {
+ Int32 x[x = 2];
+ } grid;
+ } foo;
+ <BLANKLINE>
+
+ Short notation also works:
+
+ >>> dataset3 = constrain(dataset, 'x')
+ >>> headers, output = SimpleHandler(dataset3).dds()
+ >>> print ''.join(output)
+ Dataset {
+ Structure {
+ Int32 x[x = 2];
+ } grid;
+ } foo;
+ <BLANKLINE>
+
+ It also works with Sequences:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> dataset['seq'] = SequenceType(name='seq')
+ >>> dataset['seq']['a'] = BaseType(name='a')
+ >>> dataset['seq']['b'] = BaseType(name='b')
+ >>> dataset['seq']['a'].data = range(5)
+ >>> dataset['seq']['b'].data = range(5,10)
+ >>> for i in dataset['seq'].data:
+ ... print i
+ (0, 5)
+ (1, 6)
+ (2, 7)
+ (3, 8)
+ (4, 9)
+ >>> dataset2 = constrain(dataset, 'seq.a')
+ >>> for i in dataset2['seq'].data:
+ ... print i
+ (0,)
+ (1,)
+ (2,)
+ (3,)
+ (4,)
+ >>> dataset2 = constrain(dataset, 'seq.b')
+ >>> for i in dataset2['seq'].data:
+ ... print i
+ (5,)
+ (6,)
+ (7,)
+ (8,)
+ (9,)
+ >>> dataset2 = constrain(dataset, 'seq.b,seq.a')
+ >>> for i in dataset2['seq'].data:
+ ... print i
+ (5, 0)
+ (6, 1)
+ (7, 2)
+ (8, 3)
+ (9, 4)
+
+ The function also parses selection expressions. Let's create a
+ dataset with sequential data:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> dataset['seq'] = SequenceType(name='seq')
+ >>> dataset['seq']['index'] = BaseType(name='index', type='Int32')
+ >>> dataset['seq']['index'].data = [10, 11, 12, 13]
+ >>> dataset['seq']['temperature'] = BaseType(name='temperature', type='Float32')
+ >>> dataset['seq']['temperature'].data = [17.2, 15.1, 15.3, 15.1]
+ >>> dataset['seq']['site'] = BaseType(name='site', type='String')
+ >>> dataset['seq']['site'].data = ['Diamond_St', 'Blacktail_Loop', 'Platinum_St', 'Kodiak_Trail']
+
+ Here's the data:
+
+ >>> for i in dataset['seq'].data:
+ ... print i
+ (10, 17.199999999999999, 'Diamond_St')
+ (11, 15.1, 'Blacktail_Loop')
+ (12, 15.300000000000001, 'Platinum_St')
+ (13, 15.1, 'Kodiak_Trail')
+
+ Now suppose we only want data where ``index`` is greater than 11:
+
+ >>> dataset2 = constrain(dataset, 'seq&seq.index>11')
+ >>> for i in dataset2['seq'].data:
+ ... print i
+ (12, 15.300000000000001, 'Platinum_St')
+ (13, 15.1, 'Kodiak_Trail')
+
+ We can request only a few variables:
+
+ >>> dataset2 = constrain(dataset, 'seq.site&seq.index>11')
+ >>> for i in dataset2['seq'].data:
+ ... print i
+ ('Platinum_St',)
+ ('Kodiak_Trail',)
+
+ A few more tests:
+
+ >>> dataset = DatasetType(name='foo')
+ >>> dataset['a'] = StructureType(name='a')
+ >>> dataset['a']['shn'] = BaseType(name='shn')
+ >>> dataset['b'] = StructureType(name='b')
+ >>> dataset['b']['shn'] = BaseType(name='shn')
+ >>> dataset2 = constrain(dataset, 'a.shn')
+ >>> print dataset2 #doctest: +ELLIPSIS
+ {'a': {'shn': <dap.dtypes.BaseType object at ...>}}
+ >>> dataset3 = constrain(dataset, 'shn')
+ Traceback (most recent call last):
+ ...
+ ConstraintExpressionError: 'Ambiguous shorthand notation request: shn'
+ >>> dataset['shn'] = BaseType(name='shn')
+ >>> dataset3 = constrain(dataset, 'shn')
+ >>> print dataset3 #doctest: +ELLIPSIS
+ {'shn': <dap.dtypes.BaseType object at ...>}
+ """
+ # Parse constraints.
+ fields, queries = parse_querystring(constraints)
+
+ # Ids and names are used to check that requests made using the 
+ # shorthand notation are not ambiguous. Used names are stored to
+ # make sure that at most a single variable is returned for
+ # a given name.
+ ids = [var.id for var in walk(dataset)]
+ names = []
+ 
+ new = DatasetType(name=dataset.name, attributes=dataset.attributes.copy())
+ new = build(dataset, new, fields, queries, ids, names)
+
+ return new
+
+
+def build(dapvar, new, fields, queries, ids, names):
+ vars_ = fields.keys()
+ order = []
+ for var in dapvar.walk():
+ # Make a copy of the variable, so that later we can possibly add it 
+ # to the dataset we're building (that's why it's a candidate).
+ candidate = copy.deepcopy(var)
+
+ # We first filter the data in sequences. This has to be done 
+ # before variables are removed, since we can select values based
+ # on conditions on *other* variables. Eg: seq.a where seq.b > 1
+ if queries and isinstance(candidate, SequenceType):
+ # Filter candidate on the server-side, since the data may be 
+ # proxied using ``dap.proxy.Proxy``.
+ candidate = candidate.filter(*queries)
+ # And then filter on the client side.
+ candidate = filter_(candidate, queries)
+ 
+ # If the variable was requested, either by id or name, or if no
+ # variables were requested, we simply add this candidate to the
+ # dataset we're building.
+ if not vars_ or candidate.id in vars_ or (candidate.name in vars_ and candidate.name not in ids):
+ new[candidate.name] = candidate
+
+ # Check if requests done using shn are not ambiguous.
+ if vars_ and candidate.id not in vars_: # request by shn
+ if candidate.name in names:
+ raise ConstraintExpressionError("Ambiguous shorthand notation request: %s" % candidate.name)
+ names.append(candidate.name)
+
+ # We also need to store the order in which the variables were
+ # requested. Later, we'll rearrange the variables in our built
+ # dataset in the correct order.
+ if vars_:
+ if candidate.id in vars_: index = vars_.index(candidate.id)
+ else: index = vars_.index(candidate.name)
+ order.append((index, candidate.name))
+
+ # If the variable was not requested, but it's a constructor, it's
+ # possible that one of its children has been requested. We apply
+ # the algorithm recursively on the variable.
+ elif not isinstance(var, BaseType):
+ # We clear the candidate after storing a copy with the filtered
+ # data and children. We will then append the requested children
+ # to the cleared candidate.
+ ccopy = copy.deepcopy(candidate)
+ if isinstance(candidate, StructureType): 
+ candidate.clear()
+ else: 
+ # If the variable is a grid we should return it as a 
+ # structure with the requested fields.
+ parent = candidate._id[:-len(candidate.name)-1]
+ candidate = StructureType(name=candidate.name, attributes=candidate.attributes.copy())
+ candidate._set_id(parent)
+
+ # Check for requested children.
+ candidate = build(ccopy, candidate, fields, queries, ids, names)
+
+ # If the candidate has any keys, ie, stored variables, we add
+ # it to the dataset we are building.
+ if candidate.keys(): new[candidate.name] = candidate
+
+ # Check if we need to apply a slice to the variable.
+ slice_ = fields.get(candidate.id) or fields.get(candidate.name)
+ if slice_: candidate = slicevar(candidate, slice_)
+ 
+ # Sort variables according to order of requested variables.
+ if len(order) > 1:
+ order.sort()
+ new._keys = [item[1] for item in order]
+ 
+ return new
+
+
+def filter_(dapvar, queries):
+ # Get only the queries related to this variable.
+ queries_ = [q for q in queries if q.startswith(dapvar.id)]
+ if queries_:
+ # Build the filter and apply it to the data.
+ ids = [var.id for var in dapvar.values()]
+ f = buildfilter(queries_, ids)
+ data = itertools.ifilter(f, dapvar.data)
+
+ # Set the data in the stored variables.
+ data = list(data)
+ dapvar.data = data
+
+ return dapvar 
+
+
+def slicevar(dapvar, slice_):
+ if slice_ != (slice(None),):
+ dapvar.data = dapvar.data[slice_]
+
+ try:
+ dapvar.shape = getattr(dapvar.data, 'shape', (len(dapvar.data),))
+ except TypeError:
+ pass
+
+ if isinstance(dapvar, GridType):
+ if not isiterable(slice_): slice_ = (slice_,)
+
+ # Slice the maps.
+ for map_,mapslice in zip(dapvar.maps.values(), slice_):
+ map_.data = map_.data[mapslice]
+ map_.shape = map_.data.shape
+
+ return dapvar
+
+
+def order(dataset, fields):
+ """
+ Order a given dataset according to the requested order.
+
+ >>> d = DatasetType(name='d')
+ >>> d['a'] = BaseType(name='a')
+ >>> d['b'] = BaseType(name='b')
+ >>> d['c'] = SequenceType(name='c')
+ >>> d['c']['d'] = BaseType(name='d')
+ >>> d['c']['e'] = BaseType(name='e')
+ >>> print order(d, 'b,c.e,c.d,a'.split(',')) #doctest: +ELLIPSIS
+ {'b': <dap.dtypes.BaseType object at ...>, 'c': {'e': <dap.dtypes.BaseType object at ...>, 'd': <dap.dtypes.BaseType object at ...>}, 'a': <dap.dtypes.BaseType object at ...>}
+ >>> print order(d, 'c.e,c.d,a'.split(',')) #doctest: +ELLIPSIS
+ {'c': {'e': <dap.dtypes.BaseType object at ...>, 'd': <dap.dtypes.BaseType object at ...>}, 'a': <dap.dtypes.BaseType object at ...>, 'b': <dap.dtypes.BaseType object at ...>}
+ >>> print order(d, 'b,c,a'.split(',')) #doctest: +ELLIPSIS
+ {'b': <dap.dtypes.BaseType object at ...>, 'c': {'d': <dap.dtypes.BaseType object at ...>, 'e': <dap.dtypes.BaseType object at ...>}, 'a': <dap.dtypes.BaseType object at ...>}
+ """
+ # Order the dataset.
+ dataset = copy.copy(dataset)
+ orders = []
+ n = len(dataset._keys)
+ for var in dataset.walk():
+ # Search for id first.
+ fields_ = [field[:len(var.id)] for field in fields]
+ if var.id in fields_: index = fields_.index(var.id)
+ # Else search by name.
+ elif var.name in fields: index = fields.index(var.name)
+ # Else preserve original order.
+ else: index = n + dataset._keys.index(var.name)
+ orders.append((index, var.name))
+
+ # Sort children.
+ if isinstance(var, StructureType):
+ dataset[var.name] = order(var, fields)
+
+ # Sort dataset.
+ if len(orders) > 1:
+ orders.sort()
+ dataset._keys = [item[1] for item in orders]
+
+ return dataset
+
+
+def walk(dapvar):
+ """
+ Iterate over all variables, including dapvar.
+ """
+ yield dapvar
+ try:
+ for child in dapvar.walk():
+ for var in walk(child): yield var
+ except:
+ pass
+
+
+def getslice(hyperslab, shape=None):
+ """Parse a hyperslab.
+
+ Parse a hyperslab to a slice according to variable shape. The hyperslab
+ follows the DAP specification, and omitted dimensions are returned in 
+ their entirety.
+
+ >>> getslice('[0:1:2][0:1:2]')
+ (slice(0, 3, 1), slice(0, 3, 1))
+ >>> getslice('[0:2][0:2]')
+ (slice(0, 3, 1), slice(0, 3, 1))
+ >>> getslice('[0][2]')
+ (slice(0, 1, 1), slice(2, 3, 1))
+ >>> getslice('[0:1:1]')
+ (slice(0, 2, 1),)
+ >>> getslice('[0:2:1]')
+ (slice(0, 2, 2),)
+ >>> getslice('')
+ (slice(None, None, None),)
+ """
+ # Backwards compatibility. In pydap <= 2.2.3 the ``fields`` dict from 
+ # helper.parse_querystring returned the slices as strings (instead of
+ # Python slices). These strings had to be passed to getslice to get a 
+ # Python slice. Old plugins still do this, but with pydap >= 2.2.4 
+ # they are already passing the slices, so we simply return them.
+ if not isinstance(hyperslab, basestring): return hyperslab or slice(None)
+
+ if hyperslab:
+ output = []
+ dimslices = hyperslab[1:-1].split('][')
+ for dimslice in dimslices:
+ start, size, step = _getsize(dimslice)
+ output.append(slice(start, start+size, step))
+ output = tuple(output)
+ else:
+ output = (slice(None),)
+
+ return output
+
+
+def _getsize(dimslice):
+ """Parse a dimension from a hyperslab.
+
+ Calculates the start, size and step from a DAP formatted hyperslab.
+
+ >>> _getsize('0:1:9')
+ (0, 10, 1)
+ >>> _getsize('0:2:9')
+ (0, 10, 2)
+ >>> _getsize('0')
+ (0, 1, 1)
+ >>> _getsize('0:9')
+ (0, 10, 1)
+ """
+ size = dimslice.split(':')
+
+ start = int(size[0])
+ if len(size) == 1:
+ stop = start
+ step = 1
+ elif len(size) == 2:
+ stop = int(size[1])
+ step = 1
+ elif len(size) == 3:
+ step = int(size[1])
+ stop = int(size[2])
+ else:
+ raise ConstraintExpressionError('Invalid hyperslab: %s.' % dimslice)
+ size = (stop-start) + 1
+ 
+ return start, size, step
+
+
+def buildfilter(queries, vars_):
+ """This function is a filter builder.
+
+ Given a list of DAP formatted queries and a list of variable names,
+ this function returns a dynamic filter function to filter rows.
+
+ From the example in the DAP specification:
+
+ >>> vars_ = ['index', 'temperature', 'site']
+ >>> data = []
+ >>> data.append([10, 17.2, 'Diamond_St'])
+ >>> data.append([11, 15.1, 'Blacktail_Loop'])
+ >>> data.append([12, 15.3, 'Platinum_St'])
+ >>> data.append([13, 15.1, 'Kodiak_Trail'])
+
+ Rows where index is greater-than-or-equal 11:
+
+ >>> f = buildfilter(['index>=11'], vars_)
+ >>> for line in itertools.ifilter(f, data):
+ ... print line
+ [11, 15.1, 'Blacktail_Loop']
+ [12, 15.300000000000001, 'Platinum_St']
+ [13, 15.1, 'Kodiak_Trail']
+
+ Rows where site ends with '_St':
+
+ >>> f = buildfilter(['site=~".*_St"'], vars_)
+ >>> for line in itertools.ifilter(f, data):
+ ... print line
+ [10, 17.199999999999999, 'Diamond_St']
+ [12, 15.300000000000001, 'Platinum_St']
+
+ Index greater-than-or-equal-to 11 AND site ends with '_St':
+
+ >>> f = buildfilter(['site=~".*_St"', 'index>=11'], vars_)
+ >>> for line in itertools.ifilter(f, data):
+ ... print line
+ [12, 15.300000000000001, 'Platinum_St']
+
+ Site is either 'Diamond_St' OR 'Blacktail_Loop':
+ 
+ >>> f = buildfilter(['site={"Diamond_St", "Blacktail_Loop"}'], vars_)
+ >>> for line in itertools.ifilter(f, data):
+ ... print line
+ [10, 17.199999999999999, 'Diamond_St']
+ [11, 15.1, 'Blacktail_Loop']
+
+ Index is either 10 OR 12:
+
+ >>> f = buildfilter(['index={10, 12}'], vars_)
+ >>> for line in itertools.ifilter(f, data):
+ ... print line
+ [10, 17.199999999999999, 'Diamond_St']
+ [12, 15.300000000000001, 'Platinum_St']
+
+ Python is great, isn't it? :)
+ """
+ filters = []
+ p = re.compile(r'''^ # Start of selection
+ {? # Optional { for multi-valued constants
+ (?P<var1>.*?) # Anything
+ }? # Closing }
+ (?P<op><=|>=|!=|=~|>|<|=) # Operators
+ {? # {
+ (?P<var2>.*?) # Anything
+ }? # }
+ $ # EOL
+ ''', re.VERBOSE)
+ for query in queries:
+ m = p.match(query)
+ if not m: raise ConstraintExpressionError('Invalid constraint expression: %s.' % query)
+
+ # Functions associated with each operator.
+ op = {'<' : operator.lt,
+ '>' : operator.gt,
+ '!=': operator.ne,
+ '=' : operator.eq,
+ '>=': operator.ge,
+ '<=': operator.le,
+ '=~': lambda a,b: re.match(b,a),
+ }[m.group('op')]
+ # Allow multiple comparisons in one line. Python rulez!
+ op = multicomp(op)
+
+ # Build the filter for the first variable.
+ if m.group('var1') in vars_:
+ i = vars_.index(m.group('var1'))
+ var1 = lambda L, i=i: operator.getitem(L, i)
+
+ # Build the filter for the second variable. It could be either
+ # a name or a constant.
+ if m.group('var2') in vars_:
+ i = vars_.index(m.group('var2'))
+ var2 = lambda L, i=i: operator.getitem(L, i)
+ else:
+ var2 = lambda x, m=m: expr_eval(m.group('var2'))
+
+ # This is the filter. We apply the function (op) to the variable
+ # filters (var1 and var2).
+ filter0 = lambda x, op=op, var1=var1, var2=var2: op(var1(x), var2(x))
+ filters.append(filter0)
+
+ if filters:
+ # We have to join all the filters that were built, using the AND 
+ # operator. Believe me, this line does exactly that.
+ #
+ # You are not expected to understand this.
+ filter0 = lambda i: reduce(lambda x,y: x and y, [f(i) for f in filters])
+ else:
+ filter0 = bool
+
+ return filter0
+
+
+def multicomp(function):
+ """Multiple OR comparisons.
+
+ Given f(a,b), this function returns a new function g(a,b) which
+ performs multiple OR comparisons if b is a tuple.
+
+ >>> a = 1
+ >>> b = (0, 1, 2)
+ >>> operator.lt = multicomp(operator.lt)
+ >>> operator.lt(a, b)
+ True
+ """
+ def f(a, b):
+ if isinstance(b, tuple):
+ for i in b:
+ # Return True if any comparison is True.
+ if function(a, i): return True
+ return False
+ else:
+ return function(a, b)
+
+ return f
+
+
+def fix_slice(dims, index):
+ """Fix incomplete slices or slices with ellipsis.
+
+ The behaviour of this function was reverse-engineered from numpy.
+
+ >>> fix_slice(3, (0, Ellipsis, 0))
+ (0, slice(None, None, None), 0)
+ >>> fix_slice(4, (0, Ellipsis, 0))
+ (0, slice(None, None, None), slice(None, None, None), 0)
+ >>> fix_slice(4, (0, 0, Ellipsis, 0))
+ (0, 0, slice(None, None, None), 0)
+ >>> fix_slice(5, (0, Ellipsis, 0))
+ (0, slice(None, None, None), slice(None, None, None), slice(None, None, None), 0)
+ >>> fix_slice(5, (0, 0, Ellipsis, 0))
+ (0, 0, slice(None, None, None), slice(None, None, None), 0)
+ >>> fix_slice(5, (0, Ellipsis, 0, Ellipsis))
+ (0, slice(None, None, None), slice(None, None, None), 0, slice(None, None, None))
+ >>> fix_slice(4, slice(None, None, None))
+ (slice(None, None, None), slice(None, None, None), slice(None, None, None), slice(None, None, None))
+ >>> fix_slice(4, (slice(None, None, None), 0))
+ (slice(None, None, None), 0, slice(None, None, None), slice(None, None, None))
+ """
+ if not isinstance(index, tuple): index = (index,)
+
+ out = []
+ length = len(index)
+ for slice_ in index:
+ if slice_ is Ellipsis:
+ out.extend([slice(None)] * (dims - length + 1))
+ length += (dims - length)
+ else:
+ out.append(slice_)
+ index = tuple(out)
+
+ if len(index) < dims:
+ index += (slice(None),) * (dims - len(index))
+
+ return index
+
+
+def lenslice(slice_):
+ """
+ Return the number of values associated with a slice.
+ 
+ By Bob Drach.
+ """
+ step = slice_.step
+ if step is None: step = 1
+
+ if step > 0:
+ start = slice_.start
+ stop = slice_.stop
+ else:
+ start = slice_.stop
+ stop = slice_.start
+ step = -step
+ return ((stop-start-1)/step + 1)
+
+
+def parse_querystring(query):
+ """
+ Parse a query_string returning the requested variables, dimensions, and CEs.
+
+ >>> parse_querystring('a,b')
+ ({'a': (slice(None, None, None),), 'b': (slice(None, None, None),)}, [])
+ >>> parse_querystring('a[0],b[1]')
+ ({'a': (slice(0, 1, 1),), 'b': (slice(1, 2, 1),)}, [])
+ >>> parse_querystring('a[0],b[1]&foo.bar>1')
+ ({'a': (slice(0, 1, 1),), 'b': (slice(1, 2, 1),)}, ['foo.bar>1'])
+ >>> parse_querystring('a[0],b[1]&foo.bar>1&LAYERS=SST')
+ ({'a': (slice(0, 1, 1),), 'b': (slice(1, 2, 1),)}, ['foo.bar>1', 'LAYERS=SST'])
+ >>> parse_querystring('foo.bar>1&LAYERS=SST')
+ ({}, ['foo.bar>1', 'LAYERS=SST'])
+
+ """
+ if query is None: return {}, []
+
+ query = unquote(query)
+ constraints = query.split('&')
+
+ # Check if the first item is either a list of variables (projection)
+ # or a selection.
+ relops = ['<', '<=', '>', '>=', '=', '!=', '=~']
+ for relop in relops:
+ if relop in constraints[0]:
+ vars_ = []
+ queries = constraints[:]
+ break
+ else:
+ vars_ = constraints[0].split(',')
+ queries = constraints[1:]
+ 
+ fields = odict()
+ p = re.compile(r'(?P<name>[^[]+)(?P<shape>(\[[^\]]+\])*)')
+ for var in vars_:
+ if var:
+ # Check if the var has a slice.
+ c = p.match(var).groupdict()
+ id_ = quote(c['name'])
+ fields[id_] = getslice(c['shape'])
+
+ return fields, queries
+
+
+def escape_dods(dods, pad=''):
+ """
+ Escape a DODS response.
+
+ This is useful for debugging. You're probably spending too much time
+ with pydap if you need to use this.
+ """
+ if 'Data:\n' in dods:
+ index = dods.index('Data:\n') + len('Data:\n')
+ else:
+ index = 0
+
+ dds = dods[:index]
+ dods = dods[index:]
+
+ out = []
+ for i, char in enumerate(dods): 
+ char = hex(ord(char))
+ char = char.replace('0x', '\\x')
+ if len(char) < 4: char = char.replace('\\x', '\\x0')
+ out.append(char)
+ if pad and (i%4 == 3): out.append(pad)
+ out = ''.join(out)
+ out = out.replace(r'\x5a\x00\x00\x00', '<start of sequence>')
+ out = out.replace(r'\xa5\x00\x00\x00', '<end of sequence>\n')
+ return dds + out
+
+
+def _test():
+ import doctest
+ doctest.testmod()
+
+if __name__ == "__main__":
+ _test()
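The hyperslab handling in `getslice`/`_getsize` above can be condensed into a standalone sketch. This is a Python 3 re-expression under a hypothetical name (`parse_hyperslab`); the committed module targets Python 2 and lives in `dap.helper`:

```python
def parse_hyperslab(hyperslab):
    """DAP hyperslab string -> tuple of Python slices (sketch of getslice)."""
    if not hyperslab:
        return (slice(None),)
    out = []
    for dim in hyperslab[1:-1].split(']['):
        parts = [int(p) for p in dim.split(':')]
        if len(parts) == 1:       # [i]     -> single index
            start, step, stop = parts[0], 1, parts[0]
        elif len(parts) == 2:     # [i:j]   -> step defaults to 1
            start, step, stop = parts[0], 1, parts[1]
        else:                     # [i:k:j] -> start:step:stop, per the DAP
            start, step, stop = parts
        size = (stop - start) + 1  # DAP stop index is inclusive
        out.append(slice(start, start + size, step))
    return tuple(out)
```

This reproduces the doctest values in the diff, e.g. `'[0][2]'` yields `(slice(0, 1, 1), slice(2, 3, 1))` and an empty hyperslab selects everything.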
Added: trunk/toolkits/basemap/lib/dap/lib.py
===================================================================
--- trunk/toolkits/basemap/lib/dap/lib.py	 (rev 0)
+++ trunk/toolkits/basemap/lib/dap/lib.py	2007年11月26日 18:29:14 UTC (rev 4449)
@@ -0,0 +1,122 @@
+from __future__ import division
+
+"""Basic functions concerning the DAP.
+
+These functions are mostly related to encoding data according to the DAP.
+"""
+
+from urllib import quote as _quote
+
+__author__ = 'Roberto De Almeida <ro...@py...>'
+__version__ = (2,2,6,1) # module version
+__dap__ = (2,0) # protocol version
+
+# Constants that used to live in __init__.py but had to be moved
+# because we share the namespace with plugins and responses.
+USER_AGENT = 'pydap/%s' % '.'.join([str(_) for _ in __version__])
+INDENT = ' ' * 4
+VERBOSE = False
+CACHE = None
+TIMEOUT = None
+
+
+def isiterable(o):
+ """Tests if an object is iterable.
+
+ >>> print isiterable(range(10))
+ True
+ >>> print isiterable({})
+ True
+ >>> def a():
+ ... for i in range(10): yield i
+ >>> print isiterable(a())
+ True
+ >>> print isiterable('string')
+ False
+ >>> print isiterable(1)
+ False
+ """
+ # We DON'T want to iterate over strings.
+ if isinstance(o, basestring): return False
+
+ try:
+ iter(o)
+ return True
+ except TypeError:
+ return False
+
+
+def to_list(L):
+ if hasattr(L, 'tolist'): return L.tolist() # shortcut for numpy arrays
+ elif isiterable(L): return [to_list(item) for item in L]
+ else: return L
+
+
+def quote(name):
+ """Extended quote for the DAP spec.
+
+ The period MUST be escaped in names (DAP spec, item 5.1):
+
+ >>> quote("White space")
+ 'White%20space'
+ >>> _quote("Period.")
+ 'Period.'
+ >>> quote("Period.")
+ 'Period%2E'
+ """
+ return _quote(name).replace('.', '%2E')
+
+
+def encode_atom(atom):
+ r"""Atomic types encoding.
+
+ Encoding atomic types for the DAS. Integers should be printed using the
+ base 10 ASCII representation of its value:
+
...
 
[truncated message content]
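The `quote` helper shown in the `dap/lib.py` diff layers one extra escape on top of standard URL quoting. A minimal Python 3 re-expression (the original uses Python 2's `urllib.quote`; `dap_quote` is a hypothetical name for this sketch):

```python
# Python 3 re-expression of dap.lib.quote (original: Python 2 urllib.quote).
from urllib.parse import quote as _quote

def dap_quote(name):
    # Standard URL quoting leaves '.' alone (it is an unreserved character),
    # but the DAP spec (item 5.1) requires periods in names to be escaped.
    return _quote(name).replace('.', '%2E')
```

This matches the doctest in the diff: `'White space'` becomes `'White%20space'` and `'Period.'` becomes `'Period%2E'`.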
From: <md...@us...> - 2007年11月26日 17:23:26
Revision: 4448
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4448&view=rev
Author: mdboom
Date: 2007年11月26日 09:23:18 -0800 (2007年11月26日)
Log Message:
-----------
Still trying to fix compile error on 64-bit platforms...
Modified Paths:
--------------
 branches/transforms/src/_backend_agg.cpp
 branches/transforms/src/_path.cpp
Modified: branches/transforms/src/_backend_agg.cpp
===================================================================
--- branches/transforms/src/_backend_agg.cpp	2007年11月26日 16:59:29 UTC (rev 4447)
+++ branches/transforms/src/_backend_agg.cpp	2007年11月26日 17:23:18 UTC (rev 4448)
@@ -1203,14 +1203,14 @@
 antialiaseds[0] = Py::Int(antialiased ? 1 : 0);
 
 if (showedges) {
- int dims[] = { 1, 4, 0 };
+ npy_intp dims[] = { 1, 4, 0 };
 double data[] = { 0, 0, 0, 1 };
 edgecolors_obj = PyArray_SimpleNewFromData(2, dims, PyArray_DOUBLE, (char*)data);
 } else {
 if (antialiased) {
 edgecolors_obj = facecolors_obj;
 } else {
- int dims[] = { 0, 0 };
+ npy_intp dims[] = { 0, 0 };
 edgecolors_obj = PyArray_SimpleNew(1, dims, PyArray_DOUBLE);
 }
 }
Modified: branches/transforms/src/_path.cpp
===================================================================
--- branches/transforms/src/_path.cpp	2007年11月26日 16:59:29 UTC (rev 4447)
+++ branches/transforms/src/_path.cpp	2007年11月26日 17:23:18 UTC (rev 4448)
@@ -696,10 +696,12 @@
 f = *(double*)(row1);
 }
 
- result = (PyArrayObject*)PyArray_NewFromDescr
- (&PyArray_Type, PyArray_DescrFromType(PyArray_DOUBLE), 
- PyArray_NDIM(vertices), PyArray_DIMS(vertices), 
- NULL, NULL, 0, NULL);
+ // I would have preferred to use PyArray_FromDims here, but on
+ // 64-bit platforms, PyArray_DIMS() does not return the same thing
+ // that PyArray_FromDims wants, requiring a copy, which I would
+ // like to avoid in this critical section.
+ result = (PyArrayObject*)PyArray_SimpleNew
+ (PyArray_NDIM(vertices), PyArray_DIMS(vertices), PyArray_DOUBLE);
 if (PyArray_NDIM(vertices) == 2) {
 size_t n = PyArray_DIM(vertices, 0);
 char* vertex_in = PyArray_BYTES(vertices);
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
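The fix in r4448 swaps `int dims[]` for `npy_intp dims[]`. The reason: numpy's array-creation functions expect pointer-sized dimension arrays, and on LP64 platforms a C `int` stays 4 bytes while pointers grow to 8, so an `int[]` passed in its place misreads the shape. A small Python sketch of the size mismatch, using `ctypes.c_ssize_t` as a stand-in for `npy_intp` (an assumption; the exact typedef is platform-dependent):

```python
import ctypes

# numpy's PyArray_SimpleNew* functions take `npy_intp dims[]`, where
# npy_intp is a pointer-sized signed integer. On 64-bit LP64 platforms a
# C `int` is 4 bytes but a pointer is 8, so an `int dims[]` passed where
# `npy_intp dims[]` is expected reads dimension values out of alignment.
int_size = ctypes.sizeof(ctypes.c_int)       # C int
intp_size = ctypes.sizeof(ctypes.c_ssize_t)  # stand-in for npy_intp
ptr_size = ctypes.sizeof(ctypes.c_void_p)    # pointer width
print(int_size, intp_size, ptr_size)
```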
From: <md...@us...> - 2007年11月26日 16:59:31
Revision: 4447
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4447&view=rev
Author: mdboom
Date: 2007年11月26日 08:59:29 -0800 (2007年11月26日)
Log Message:
-----------
Fix compile error on 64-bit platforms.
Modified Paths:
--------------
 branches/transforms/src/_path.cpp
Modified: branches/transforms/src/_path.cpp
===================================================================
--- branches/transforms/src/_path.cpp	2007年11月26日 16:52:53 UTC (rev 4446)
+++ branches/transforms/src/_path.cpp	2007年11月26日 16:59:29 UTC (rev 4447)
@@ -696,9 +696,10 @@
 f = *(double*)(row1);
 }
 
- result = (PyArrayObject*)PyArray_FromDimsAndDataAndDescr
- (PyArray_NDIM(vertices), PyArray_DIMS(vertices), 
- PyArray_DescrFromType(PyArray_DOUBLE), NULL);
+ result = (PyArrayObject*)PyArray_NewFromDescr
+ (&PyArray_Type, PyArray_DescrFromType(PyArray_DOUBLE), 
+ PyArray_NDIM(vertices), PyArray_DIMS(vertices), 
+ NULL, NULL, 0, NULL);
 if (PyArray_NDIM(vertices) == 2) {
 size_t n = PyArray_DIM(vertices, 0);
 char* vertex_in = PyArray_BYTES(vertices);
From: <md...@us...> - 2007年11月26日 16:52:55
Revision: 4446
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4446&view=rev
Author: mdboom
Date: 2007年11月26日 08:52:53 -0800 (2007年11月26日)
Log Message:
-----------
Fix zooming with bounding box in Gtk and Qt backends (others seem to
already work). Fix text rotation in Wx (non-Agg) backend.
Modified Paths:
--------------
 branches/transforms/lib/matplotlib/backends/backend_gtk.py
 branches/transforms/lib/matplotlib/backends/backend_qt.py
 branches/transforms/lib/matplotlib/backends/backend_qt4.py
 branches/transforms/lib/matplotlib/backends/backend_wx.py
Modified: branches/transforms/lib/matplotlib/backends/backend_gtk.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_gtk.py	2007年11月26日 16:43:19 UTC (rev 4445)
+++ branches/transforms/lib/matplotlib/backends/backend_gtk.py	2007年11月26日 16:52:53 UTC (rev 4446)
@@ -549,7 +549,7 @@
 return
 
 ax = event.inaxes
- l,b,w,h = [int(val) for val in ax.bbox.get_bounds()]
+ l,b,w,h = [int(val) for val in ax.bbox.bounds]
 b = int(height)-(b+h)
 axrect = l,b,w,h
 self._imageBack = axrect, drawable.get_image(*axrect)
Modified: branches/transforms/lib/matplotlib/backends/backend_qt.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_qt.py	2007年11月26日 16:43:19 UTC (rev 4445)
+++ branches/transforms/lib/matplotlib/backends/backend_qt.py	2007年11月26日 16:52:53 UTC (rev 4446)
@@ -102,7 +102,7 @@
 def mousePressEvent( self, event ):
 x = event.pos().x()
 # flipy so y=0 is bottom of canvas
- y = self.figure.bbox.height() - event.pos().y()
+ y = self.figure.bbox.height - event.pos().y()
 button = self.buttond[event.button()]
 FigureCanvasBase.button_press_event( self, x, y, button )
 if DEBUG: print 'button pressed:', event.button()
Modified: branches/transforms/lib/matplotlib/backends/backend_qt4.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_qt4.py	2007年11月26日 16:43:19 UTC (rev 4445)
+++ branches/transforms/lib/matplotlib/backends/backend_qt4.py	2007年11月26日 16:52:53 UTC (rev 4446)
@@ -359,7 +359,7 @@
 QtGui.QApplication.setOverrideCursor( QtGui.QCursor( cursord[cursor] ) )
 
 def draw_rubberband( self, event, x0, y0, x1, y1 ):
- height = self.canvas.figure.bbox.height()
+ height = self.canvas.figure.bbox.height
 y1 = height - y1
 y0 = height - y0
 
Modified: branches/transforms/lib/matplotlib/backends/backend_wx.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_wx.py	2007年11月26日 16:43:19 UTC (rev 4445)
+++ branches/transforms/lib/matplotlib/backends/backend_wx.py	2007年11月26日 16:52:53 UTC (rev 4446)
@@ -343,10 +343,10 @@
 if angle == 0.0:
 gfx_ctx.DrawText(s, x, y)
 else:
- angle = angle / 180.0 * math.pi
+ rads = angle / 180.0 * math.pi
 xo = h * math.sin(rads)
 yo = h * math.cos(rads)
- gfx_ctx.DrawRotatedText(s, x - xo, y - yo, angle)
+ gfx_ctx.DrawRotatedText(s, x - xo, y - yo, rads)
 
 gc.unselect()
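The `backend_wx.py` change above fixes a shadowing bug: the degree value was converted to radians but stored back into `angle`, while the offset math read `rads`, and the radian value was never passed on consistently. A sketch of the corrected offset computation (`rotated_text_origin` is a hypothetical helper for illustration, not part of the backend API):

```python
import math

def rotated_text_origin(x, y, h, angle_deg):
    # Convert once into a separate name so the degree value is not
    # clobbered, then use the radian value everywhere, as the fix does.
    rads = angle_deg / 180.0 * math.pi
    xo = h * math.sin(rads)
    yo = h * math.cos(rads)
    return x - xo, y - yo, rads
```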
 
From: <md...@us...> - 2007年11月26日 16:43:24
Revision: 4445
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4445&view=rev
Author: mdboom
Date: 2007年11月26日 08:43:19 -0800 (2007年11月26日)
Log Message:
-----------
Fix compilation error on 64-bit platforms.
Modified Paths:
--------------
 branches/transforms/src/_path.cpp
Modified: branches/transforms/src/_path.cpp
===================================================================
--- branches/transforms/src/_path.cpp	2007年11月26日 15:46:17 UTC (rev 4444)
+++ branches/transforms/src/_path.cpp	2007年11月26日 16:43:19 UTC (rev 4445)
@@ -696,8 +696,9 @@
 f = *(double*)(row1);
 }
 
- result = (PyArrayObject*)PyArray_FromDims
- (PyArray_NDIM(vertices), PyArray_DIMS(vertices), PyArray_DOUBLE);
+ result = (PyArrayObject*)PyArray_FromDimsAndDataAndDescr
+ (PyArray_NDIM(vertices), PyArray_DIMS(vertices), 
+ PyArray_DescrFromType(PyArray_DOUBLE), NULL);
 if (PyArray_NDIM(vertices) == 2) {
 size_t n = PyArray_DIM(vertices, 0);
 char* vertex_in = PyArray_BYTES(vertices);
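The compile error fixed here is typical of the NumPy C API on 64-bit platforms: the legacy `PyArray_FromDims` takes `int*` dimensions, while `PyArray_DIMS` returns `npy_intp*` (64 bits wide on those platforms), so the two no longer mix. Setting the specific C call aside, the allocation the diff performs is simply "a new double array with the input's shape", which in Python terms is:

```python
import numpy as np

def alloc_result_like(vertices):
    # Python-level sketch of the allocation in _path.cpp: a fresh
    # float64 array matching the vertex array's shape, to be filled
    # with transformed coordinates.  Illustrative helper name only.
    return np.empty(vertices.shape, dtype=np.float64)
```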
From: <md...@us...> - 2007-11-26 15:46:30
Revision: 4444
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4444&view=rev
Author: mdboom
Date: 2007-11-26 07:46:17 -0800 (2007-11-26)
Log Message:
-----------
Merged revisions 4437-4443 via svnmerge from 
http://matplotlib.svn.sf.net/svnroot/matplotlib/trunk/matplotlib
........
 r4438 | mdboom | 2007-11-26 09:29:49 -0500 (2007-11-26) | 2 lines
 
 Minor speed improvements in mathtext. Removing trailing whitespace.
........
 r4441 | mdboom | 2007-11-26 10:31:54 -0500 (2007-11-26) | 2 lines
 
 Fix colored text in SVG backend.
........
 r4442 | mdboom | 2007-11-26 10:42:10 -0500 (2007-11-26) | 2 lines
 
 Reduce SVG file sizes.
........
 r4443 | mdboom | 2007-11-26 10:43:26 -0500 (2007-11-26) | 2 lines
 
 One more SVG color detail (in mathtext)
........
Modified Paths:
--------------
 branches/transforms/lib/matplotlib/backends/backend_svg.py
 branches/transforms/lib/matplotlib/mathtext.py
Property Changed:
----------------
 branches/transforms/
Property changes on: branches/transforms
___________________________________________________________________
Name: svnmerge-integrated
 - /trunk/matplotlib:1-4436
 + /trunk/matplotlib:1-4443
Modified: branches/transforms/lib/matplotlib/backends/backend_svg.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:43:26 UTC (rev 4443)
+++ branches/transforms/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:46:17 UTC (rev 4444)
@@ -29,7 +29,7 @@
 
 _capstyle_d = {'projecting' : 'square', 'butt' : 'butt', 'round': 'round',}
 class RendererSVG(RendererBase):
- FONT_SCALE = 1200.0
+ FONT_SCALE = 100.0
 
 def __init__(self, width, height, svgwriter, basename=None):
 self.width=width
@@ -293,7 +293,6 @@
 color = rgb2hex(gc.get_rgb()[:3])
 
 if rcParams['svg.embed_char_paths']:
-
 svg = ['<g style="fill: %s" transform="' % color]
 if angle != 0:
 svg.append('translate(%s,%s)rotate(%1.1f)' % (x,y,-angle))
@@ -453,7 +452,7 @@
 svg.append('</text>\n')
 
 if len(svg_rects):
- style = "fill: black; stroke: none"
+ style = "fill: %s; stroke: none" % color
 svg.append('<g style="%s" transform="' % style)
 if angle != 0:
 svg.append('translate(%s,%s) rotate(%1.1f)'
Modified: branches/transforms/lib/matplotlib/mathtext.py
===================================================================
--- branches/transforms/lib/matplotlib/mathtext.py	2007-11-26 15:43:26 UTC (rev 4443)
+++ branches/transforms/lib/matplotlib/mathtext.py	2007-11-26 15:46:17 UTC (rev 4444)
@@ -208,7 +208,7 @@
 class MathtextBackendBbox(MathtextBackend):
 """A backend whose only purpose is to get a precise bounding box.
 Only required for the Agg backend."""
- 
+
 def __init__(self, real_backend):
 MathtextBackend.__init__(self)
 self.bbox = [0, 0, 0, 0]
@@ -219,7 +219,7 @@
 min(self.bbox[1], y1),
 max(self.bbox[2], x2),
 max(self.bbox[3], y2)]
- 
+
 def render_glyph(self, ox, oy, info):
 self._update_bbox(ox + info.metrics.xmin,
 oy - info.metrics.ymax,
@@ -251,14 +251,14 @@
 self.real_backend.fonts_object = self.fonts_object
 self.real_backend.ox = self.bbox[0]
 self.real_backend.oy = self.bbox[1]
- 
+
 class MathtextBackendAggRender(MathtextBackend):
 def __init__(self):
 self.ox = 0
 self.oy = 0
 self.image = None
 MathtextBackend.__init__(self)
- 
+
 def set_canvas_size(self, w, h, d):
 MathtextBackend.set_canvas_size(self, w, h, d)
 self.image = FT2Image(ceil(w), ceil(h + d))
@@ -284,11 +284,11 @@
 
 def MathtextBackendAgg():
 return MathtextBackendBbox(MathtextBackendAggRender())
- 
+
 class MathtextBackendBitmapRender(MathtextBackendAggRender):
 def get_results(self, box):
 return self.image
- 
+
 def MathtextBackendBitmap():
 return MathtextBackendBbox(MathtextBackendBitmapRender())
 
@@ -310,7 +310,7 @@
 """ % locals()
 self.lastfont = postscript_name, fontsize
 self.pswriter.write(ps)
- 
+
 ps = """%(ox)f %(oy)f moveto
 /%(symbol_name)s glyphshow\n
 """ % locals()
@@ -426,7 +426,7 @@
 """Fix any cyclical references before the object is about
 to be destroyed."""
 self.used_characters = None
- 
+
 def get_kern(self, font1, sym1, fontsize1,
 font2, sym2, fontsize2, dpi):
 """
@@ -737,7 +737,7 @@
 
 fontmap = {}
 use_cmex = True
- 
+
 def __init__(self, *args, **kwargs):
 # This must come first so the backend's owner is set correctly
 if rcParams['mathtext.fallback_to_cm']:
@@ -758,7 +758,7 @@
 
 def _map_virtual_font(self, fontname, font_class, uniindex):
 return fontname, uniindex
- 
+
 def _get_glyph(self, fontname, font_class, sym, fontsize):
 found_symbol = False
 
@@ -767,7 +767,7 @@
 if uniindex is not None:
 fontname = 'ex'
 found_symbol = True
- 
+
 if not found_symbol:
 try:
 uniindex = get_unicode_index(sym)
@@ -780,7 +780,7 @@
 
 fontname, uniindex = self._map_virtual_font(
 fontname, font_class, uniindex)
- 
+
 # Only characters in the "Letter" class should be italicized in 'it'
 # mode. Greek capital letters should be Roman.
 if found_symbol:
@@ -830,13 +830,16 @@
 return [(fontname, sym)]
 
 class StixFonts(UnicodeFonts):
+ """
+ A font handling class for the STIX fonts
+ """
 _fontmap = { 'rm' : 'STIXGeneral',
 'it' : 'STIXGeneralItalic',
 'bf' : 'STIXGeneralBol',
 'nonunirm' : 'STIXNonUni',
 'nonuniit' : 'STIXNonUniIta',
 'nonunibf' : 'STIXNonUniBol',
- 
+
 0 : 'STIXGeneral',
 1 : 'STIXSiz1Sym',
 2 : 'STIXSiz2Sym',
@@ -849,7 +852,6 @@
 cm_fallback = False
 
 def __init__(self, *args, **kwargs):
- self._sans = kwargs.pop("sans", False)
 TruetypeFonts.__init__(self, *args, **kwargs)
 if not len(self.fontmap):
 for key, name in self._fontmap.iteritems():
@@ -891,14 +893,14 @@
 # This will generate a dummy character
 uniindex = 0x1
 fontname = 'it'
- 
+
 # Handle private use area glyphs
 if (fontname in ('it', 'rm', 'bf') and
 uniindex >= 0xe000 and uniindex <= 0xf8ff):
 fontname = 'nonuni' + fontname
 
 return fontname, uniindex
- 
+
 _size_alternatives = {}
 def get_sized_alternatives_for_symbol(self, fontname, sym):
 alternatives = self._size_alternatives.get(sym)
@@ -919,7 +921,14 @@
 
 self._size_alternatives[sym] = alternatives
 return alternatives
- 
+
+class StixSansFonts(StixFonts):
+ """
+ A font handling class for the STIX fonts (using sans-serif
+ characters by default).
+ """
+ _sans = True
+
 class StandardPsFonts(Fonts):
 """
 Use the standard postscript fonts for rendering to backend_ps
@@ -1085,7 +1094,8 @@
 # Note that (as TeX) y increases downward, unlike many other parts of
 # matplotlib.
 
-# How much text shrinks when going to the next-smallest level
+# How much text shrinks when going to the next-smallest level. GROW_FACTOR
+# must be the inverse of SHRINK_FACTOR.
 SHRINK_FACTOR = 0.7
 GROW_FACTOR = 1.0 / SHRINK_FACTOR
 # The number of different sizes of chars to use, beyond which they will not
@@ -1160,10 +1170,16 @@
 pass
 
 class Vbox(Box):
+ """
+ A box with only height (zero width).
+ """
 def __init__(self, height, depth):
 Box.__init__(self, 0., height, depth)
 
 class Hbox(Box):
+ """
+ A box with only width (zero height and depth).
+ """
 def __init__(self, width):
 Box.__init__(self, width, 0., 0.)
 
@@ -1241,8 +1257,9 @@
 self.depth *= GROW_FACTOR
 
 class Accent(Char):
- """The font metrics need to be dealt with differently for accents, since they
- are already offset correctly from the baseline in TrueType fonts."""
+ """The font metrics need to be dealt with differently for accents,
+ since they are already offset correctly from the baseline in
+ TrueType fonts."""
 def _update_metrics(self):
 metrics = self._metrics = self.font_output.get_metrics(
 self.font, self.font_class, self.c, self.fontsize, self.dpi)
@@ -1741,7 +1758,7 @@
 self.cur_s += 1
 self.max_push = max(self.cur_s, self.max_push)
 clamp = self.clamp
- 
+
 for p in box.children:
 if isinstance(p, Char):
 p.render(self.cur_h + self.off_h, self.cur_v + self.off_v)
@@ -1864,7 +1881,7 @@
 empty = Empty()
 empty.setParseAction(raise_error)
 return empty
- 
+
 class Parser(object):
 _binary_operators = Set(r'''
 + *
@@ -1922,7 +1939,7 @@
 _dropsub_symbols = Set(r'''\int \oint'''.split())
 
 _fontnames = Set("rm cal it tt sf bf default bb frak circled scr".split())
- 
+
 _function_names = Set("""
 arccos csc ker min arcsin deg lg Pr arctan det lim sec arg dim
 liminf sin cos exp limsup sinh cosh gcd ln sup cot hom log tan
@@ -1935,7 +1952,7 @@
 _leftDelim = Set(r"( [ { \lfloor \langle \lceil".split())
 
 _rightDelim = Set(r") ] } \rfloor \rangle \rceil".split())
- 
+
 def __init__(self):
 # All forward declarations are here
 font = Forward().setParseAction(self.font).setName("font")
@@ -1947,7 +1964,7 @@
 self._expression = Forward().setParseAction(self.finish).setName("finish")
 
 float = Regex(r"-?[0-9]+\.?[0-9]*")
- 
+
 lbrace = Literal('{').suppress()
 rbrace = Literal('}').suppress()
 start_group = (Optional(latexfont) + lbrace)
@@ -1993,7 +2010,7 @@
 c_over_c =(Suppress(bslash)
 + oneOf(self._char_over_chars.keys())
 ).setParseAction(self.char_over_chars)
- 
+
 accent = Group(
 Suppress(bslash)
 + accent
@@ -2055,7 +2072,7 @@
 ) | Error("Expected symbol or group")
 
 simple <<(space
- | customspace 
+ | customspace
 | font
 | subsuper
 )
@@ -2105,11 +2122,11 @@
 + (Suppress(math_delim)
 | Error("Expected end of math '$'"))
 + non_math
- ) 
+ )
 ) + StringEnd()
 
 self._expression.enablePackrat()
- 
+
 self.clear()
 
 def clear(self):
@@ -2156,7 +2173,7 @@
 self.font_class = name
 self._font = name
 font = property(_get_font, _set_font)
- 
+
 def get_state(self):
 return self._state_stack[-1]
 
@@ -2214,7 +2231,7 @@
 
 def customspace(self, s, loc, toks):
 return [self._make_space(float(toks[1]))]
- 
+
 def symbol(self, s, loc, toks):
 # print "symbol", toks
 c = toks[0]
@@ -2240,7 +2257,7 @@
 # (in multiples of underline height)
 r'AA' : ( ('rm', 'A', 1.0), (None, '\circ', 0.5), 0.0),
 }
- 
+
 def char_over_chars(self, s, loc, toks):
 sym = toks[0]
 state = self.get_state()
@@ -2251,7 +2268,7 @@
 self._char_over_chars.get(sym, (None, None, 0.0))
 if under_desc is None:
 raise ParseFatalException("Error parsing symbol")
- 
+
 over_state = state.copy()
 if over_desc[0] is not None:
 over_state.font = over_desc[0]
@@ -2265,19 +2282,19 @@
 under = Char(under_desc[1], under_state)
 
 width = max(over.width, under.width)
- 
+
 over_centered = HCentered([over])
 over_centered.hpack(width, 'exactly')
 
 under_centered = HCentered([under])
 under_centered.hpack(width, 'exactly')
- 
+
 return Vlist([
 over_centered,
 Vbox(0., thickness * space),
 under_centered
 ])
- 
+
 _accent_map = {
 r'hat' : r'\circumflexaccent',
 r'breve' : r'\combiningbreve',
@@ -2601,7 +2618,7 @@
 return is width, height, fonts
 """
 _parser = None
- 
+
 _backend_mapping = {
 'Bitmap': MathtextBackendBitmap,
 'Agg' : MathtextBackendAgg,
@@ -2611,6 +2628,13 @@
 'Cairo' : MathtextBackendCairo
 }
 
+ _font_type_mapping = {
+ 'cm' : BakomaFonts,
+ 'stix' : StixFonts,
+ 'stixsans' : StixSansFonts,
+ 'custom' : UnicodeFonts
+ }
+
 def __init__(self, output):
 self._output = output
 self._cache = {}
@@ -2618,7 +2642,7 @@
 def parse(self, s, dpi = 72, prop = None):
 if prop is None:
 prop = FontProperties()
- 
+
 cacheKey = (s, dpi, hash(prop))
 result = self._cache.get(cacheKey)
 if result is not None:
@@ -2629,16 +2653,13 @@
 else:
 backend = self._backend_mapping[self._output]()
 fontset = rcParams['mathtext.fontset']
- if fontset == 'cm':
- font_output = BakomaFonts(prop, backend)
- elif fontset == 'stix':
- font_output = StixFonts(prop, backend)
- elif fontset == 'stixsans':
- font_output = StixFonts(prop, backend, sans=True)
- elif fontset == 'custom':
- font_output = UnicodeFonts(prop, backend)
+ fontset_class = self._font_type_mapping.get(fontset)
+ if fontset_class is not None:
+ font_output = fontset_class(prop, backend)
 else:
- raise ValueError("mathtext.fontset must be either 'cm', 'stix', 'stixsans', or 'custom'")
+ raise ValueError(
+ "mathtext.fontset must be either 'cm', 'stix', "
+ "'stixsans', or 'custom'")
 
 fontsize = prop.get_size_in_points()
 
@@ -2646,7 +2667,7 @@
 # with each request.
 if self._parser is None:
 self.__class__._parser = Parser()
- 
+
 box = self._parser.parse(s, font_output, fontsize, dpi)
 font_output.set_canvas_size(box.width, box.height, box.depth)
 result = font_output.get_results(box)
@@ -2658,5 +2679,5 @@
 font_output.destroy()
 font_output.mathtext_backend.fonts_object = None
 font_output.mathtext_backend = None
- 
+
 return result
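The mathtext change in this merge replaces an if/elif chain over `rcParams['mathtext.fontset']` with a class dictionary. A standalone sketch of that dispatch pattern, with stub classes standing in for the real font classes:

```python
# Stub stand-ins for the font classes in mathtext.py.
class BakomaFonts: pass
class UnicodeFonts: pass
class StixFonts(UnicodeFonts): pass
class StixSansFonts(StixFonts):
    _sans = True  # the subclass replaces the old sans=True keyword

_font_type_mapping = {
    'cm': BakomaFonts,
    'stix': StixFonts,
    'stixsans': StixSansFonts,
    'custom': UnicodeFonts,
}

def make_font_output(fontset):
    # Dictionary dispatch: unknown keys all fall through to one
    # error path instead of a trailing else on a long elif chain.
    fontset_class = _font_type_mapping.get(fontset)
    if fontset_class is None:
        raise ValueError(
            "mathtext.fontset must be either 'cm', 'stix', "
            "'stixsans', or 'custom'")
    return fontset_class()
```

Adding a new fontset then only requires a new mapping entry, not another elif branch.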
Revision: 4443
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4443&view=rev
Author: mdboom
Date: 2007-11-26 07:43:26 -0800 (2007-11-26)
Log Message:
-----------
One more SVG color detail (in mathtext)
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
Modified: trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:42:10 UTC (rev 4442)
+++ trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:43:26 UTC (rev 4443)
@@ -409,7 +409,7 @@
 svg.append('</text>\n')
 
 if len(svg_rects):
- style = "fill: black; stroke: none"
+ style = "fill: %s; stroke: none" % color
 svg.append('<g style="%s" transform="' % style)
 if angle != 0:
 svg.append('translate(%s,%s) rotate(%1.1f)'
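Revisions 4440-4443 consistently slice `gc.get_rgb()[:3]` before hex conversion, so a four-component RGBA tuple never leaks its alpha component into the SVG color string. A minimal stand-in for `matplotlib.colors.rgb2hex` (illustrative, not the library source) shows the point:

```python
def rgb2hex(rgb):
    # Convert an (r, g, b) float triple in [0, 1] to '#rrggbb'.
    return '#%02x%02x%02x' % tuple(int(round(v * 255)) for v in rgb)

rgba = (1.0, 0.0, 0.0, 0.75)            # red at 75% alpha
style = "fill: %s; stroke: none" % rgb2hex(rgba[:3])
# Without the [:3] slice, the extra alpha component would break
# the three-pair format string.
```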
Revision: 4442
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4442&view=rev
Author: mdboom
Date: 2007-11-26 07:42:10 -0800 (2007-11-26)
Log Message:
-----------
Reduce SVG file sizes.
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
Modified: trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:31:54 UTC (rev 4441)
+++ trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:42:10 UTC (rev 4442)
@@ -26,7 +26,7 @@
 
 _capstyle_d = {'projecting' : 'square', 'butt' : 'butt', 'round': 'round',}
 class RendererSVG(RendererBase):
- FONT_SCALE = 1200.0
+ FONT_SCALE = 100.0
 
 def __init__(self, width, height, svgwriter, basename=None):
 self.width=width
Revision: 4441
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4441&view=rev
Author: mdboom
Date: 2007-11-26 07:31:54 -0800 (2007-11-26)
Log Message:
-----------
Fix colored text in SVG backend.
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
Modified: trunk/matplotlib/lib/matplotlib/backends/backend_svg.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:30:12 UTC (rev 4440)
+++ trunk/matplotlib/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:31:54 UTC (rev 4441)
@@ -246,14 +246,11 @@
 font.set_text(s, 0.0, flags=LOAD_NO_HINTING)
 y -= font.get_descent() / 64.0
 
- thetext = escape_xml_text(s)
- fontfamily = font.family_name
- fontstyle = prop.get_style()
 fontsize = prop.get_size_in_points()
 color = rgb2hex(gc.get_rgb())
 
 if rcParams['svg.embed_char_paths']:
- svg = ['<g transform="']
+ svg = ['<g style="fill: %s" transform="' % color]
 if angle != 0:
 svg.append('translate(%s,%s)rotate(%1.1f)' % (x,y,-angle))
 elif x != 0 or y != 0:
@@ -288,6 +285,10 @@
 svg.append('</g>\n')
 svg = ''.join(svg)
 else:
+ thetext = escape_xml_text(s)
+ fontfamily = font.family_name
+ fontstyle = prop.get_style()
+
 style = 'font-size: %f; font-family: %s; font-style: %s; fill: %s;'%(fontsize, fontfamily,fontstyle, color)
 if angle!=0:
 transform = 'transform="translate(%s,%s) rotate(%1.1f) translate(%s,%s)"' % (x,y,-angle,-x,-y)
From: <md...@us...> - 2007-11-26 15:30:15
Revision: 4440
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4440&view=rev
Author: mdboom
Date: 2007-11-26 07:30:12 -0800 (2007-11-26)
Log Message:
-----------
Support mixed-mode rendering in the SVG backend.
Modified Paths:
--------------
 branches/transforms/lib/matplotlib/backend_bases.py
 branches/transforms/lib/matplotlib/backends/backend_mixed.py
 branches/transforms/lib/matplotlib/backends/backend_svg.py
Modified: branches/transforms/lib/matplotlib/backend_bases.py
===================================================================
--- branches/transforms/lib/matplotlib/backend_bases.py	2007-11-26 15:18:40 UTC (rev 4439)
+++ branches/transforms/lib/matplotlib/backend_bases.py	2007-11-26 15:30:12 UTC (rev 4440)
@@ -380,6 +380,8 @@
 Return the alpha value used for blending - not supported on
 all backends
 """
+ if len(self._rgb) == 4:
+ return self._rgb[3]
 return self._alpha
 
 def get_antialiased(self):
Modified: branches/transforms/lib/matplotlib/backends/backend_mixed.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_mixed.py	2007-11-26 15:18:40 UTC (rev 4439)
+++ branches/transforms/lib/matplotlib/backends/backend_mixed.py	2007-11-26 15:30:12 UTC (rev 4440)
@@ -41,11 +41,11 @@
 self._set_current_renderer(vector_renderer)
 
 _methods = """
- open_group close_group draw_path draw_markers
- draw_path_collection draw_quad_mesh get_image_magnification
- draw_image draw_tex draw_text flipy option_image_nocomposite
- get_texmanager get_text_width_height_descent new_gc
- points_to_pixels strip_math finalize
+ close_group draw_image draw_markers draw_path
+ draw_path_collection draw_quad_mesh draw_tex draw_text
+ finalize flipy get_canvas_width_height get_image_magnification
+ get_texmanager get_text_width_height_descent new_gc open_group
+ option_image_nocomposite points_to_pixels strip_math
 """.split()
 def _set_current_renderer(self, renderer):
 self._renderer = renderer
@@ -93,7 +93,3 @@
 self._renderer.draw_image(l, height - b - h, image, None)
 self._raster_renderer = None
 self._rasterizing = False
-
- def get_canvas_width_height(self):
- 'return the canvas width and height in display coords'
- return self._width, self._height
Modified: branches/transforms/lib/matplotlib/backends/backend_svg.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:18:40 UTC (rev 4439)
+++ branches/transforms/lib/matplotlib/backends/backend_svg.py	2007-11-26 15:30:12 UTC (rev 4440)
@@ -5,6 +5,7 @@
 from matplotlib import verbose, __version__, rcParams
 from matplotlib.backend_bases import RendererBase, GraphicsContextBase,\
 FigureManagerBase, FigureCanvasBase
+from matplotlib.backends.backend_mixed import MixedModeRenderer
 from matplotlib.cbook import is_string_like, is_writable_file_like
 from matplotlib.colors import rgb2hex
 from matplotlib.figure import Figure
@@ -93,7 +94,7 @@
 return 'fill: %s; stroke: %s; stroke-width: %s; ' \
 'stroke-linejoin: %s; stroke-linecap: %s; %s opacity: %s' % (
 fill,
- rgb2hex(gc.get_rgb()),
+ rgb2hex(gc.get_rgb()[:3]),
 linewidth,
 gc.get_joinstyle(),
 _capstyle_d[gc.get_capstyle()],
@@ -288,14 +289,12 @@
 font.set_text(s, 0.0, flags=LOAD_NO_HINTING)
 y -= font.get_descent() / 64.0
 
- thetext = escape_xml_text(s)
- fontfamily = font.family_name
- fontstyle = prop.get_style()
 fontsize = prop.get_size_in_points()
- color = rgb2hex(gc.get_rgb())
+ color = rgb2hex(gc.get_rgb()[:3])
 
 if rcParams['svg.embed_char_paths']:
- svg = ['<g transform="']
+
+ svg = ['<g style="fill: %s" transform="' % color]
 if angle != 0:
 svg.append('translate(%s,%s)rotate(%1.1f)' % (x,y,-angle))
 elif x != 0 or y != 0:
@@ -330,6 +329,10 @@
 svg.append('</g>\n')
 svg = ''.join(svg)
 else:
+ thetext = escape_xml_text(s)
+ fontfamily = font.family_name
+ fontstyle = prop.get_style()
+
 style = 'font-size: %f; font-family: %s; font-style: %s; fill: %s;'%(fontsize, fontfamily,fontstyle, color)
 if angle!=0:
 transform = 'transform="translate(%s,%s) rotate(%1.1f) translate(%s,%s)"' % (x,y,-angle,-x,-y)
@@ -393,7 +396,7 @@
 self.mathtext_parser.parse(s, 72, prop)
 svg_glyphs = svg_elements.svg_glyphs
 svg_rects = svg_elements.svg_rects
- color = rgb2hex(gc.get_rgb())
+ color = rgb2hex(gc.get_rgb()[:3])
 
 self.open_group("mathtext")
 
@@ -466,7 +469,7 @@
 self._svgwriter.write (''.join(svg))
 self.close_group("mathtext")
 
- def finish(self):
+ def finalize(self):
 write = self._svgwriter.write
 if len(self._char_defs):
 write('<defs id="fontpaths">\n')
@@ -501,34 +504,23 @@
 'svgz': 'Scalable Vector Graphics'}
 
 def print_svg(self, filename, *args, **kwargs):
- if is_string_like(filename):
- fh_to_close = svgwriter = codecs.open(filename, 'w', 'utf-8')
- elif is_writable_file_like(filename):
- svgwriter = codecs.EncodedFile(filename, 'utf-8')
- fh_to_close = None
- else:
- raise ValueError("filename must be a path or a file-like object")
- return self._print_svg(filename, svgwriter, fh_to_close)
- 
+ svgwriter = codecs.open(filename, 'w', 'utf-8')
+ return self._print_svg(filename, svgwriter)
+
 def print_svgz(self, filename, *args, **kwargs):
- if is_string_like(filename):
- gzipwriter = gzip.GzipFile(filename, 'w')
- fh_to_close = svgwriter = codecs.EncodedFile(gzipwriter, 'utf-8')
- elif is_writable_file_like(filename):
- fh_to_close = gzipwriter = gzip.GzipFile(fileobj=filename, mode='w')
- svgwriter = codecs.EncodedFile(gzipwriter, 'utf-8')
- else:
- raise ValueError("filename must be a path or a file-like object")
- return self._print_svg(filename, svgwriter, fh_to_close)
+ gzipwriter = gzip.GzipFile(filename, 'w')
+ svgwriter = codecs.EncodedFile(gzipwriter, 'utf-8')
+ return self._print_svg(filename, svgwriter)
 
 def _print_svg(self, filename, svgwriter, fh_to_close=None):
 self.figure.set_dpi(72.0)
 width, height = self.figure.get_size_inches()
 w, h = width*72, height*72
 
- renderer = RendererSVG(w, h, svgwriter, filename)
+ renderer = MixedModeRenderer(
+ width, height, 72.0, RendererSVG(w, h, svgwriter, filename))
 self.figure.draw(renderer)
- renderer.finish()
+ renderer.finalize()
 if fh_to_close is not None:
 svgwriter.close()
 
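The `_methods` list in backend_mixed.py drives a delegation pattern: every listed name is rebound on the mixed renderer to point at whichever concrete renderer (vector or raster) is currently active. A simplified sketch of that mechanism (not the real class, which also manages DPI and rasterization state):

```python
class MixedModeRenderer:
    # Names forwarded to the active renderer; a short subset of the
    # real _methods list, for illustration.
    _methods = "draw_path draw_text finalize get_canvas_width_height".split()

    def __init__(self, vector_renderer):
        self._set_current_renderer(vector_renderer)

    def _set_current_renderer(self, renderer):
        # Rebinding instance attributes makes delegation a one-time
        # cost per renderer switch instead of a per-call lookup.
        self._renderer = renderer
        for method in self._methods:
            if hasattr(renderer, method):
                setattr(self, method, getattr(renderer, method))
```

Because `get_canvas_width_height` is now in `_methods`, the hand-written override removed at the bottom of the diff is no longer needed.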
Revision: 4439
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4439&view=rev
Author: mdboom
Date: 2007-11-26 07:18:40 -0800 (2007-11-26)
Log Message:
-----------
Remove draw_arc (which isn't in the new backend renderer interface).
Modified Paths:
--------------
 branches/transforms/lib/matplotlib/backends/backend_template.py
Modified: branches/transforms/lib/matplotlib/backends/backend_template.py
===================================================================
--- branches/transforms/lib/matplotlib/backends/backend_template.py	2007-11-26 14:29:49 UTC (rev 4438)
+++ branches/transforms/lib/matplotlib/backends/backend_template.py	2007-11-26 15:18:40 UTC (rev 4439)
@@ -62,10 +62,6 @@
 writing a new backend. Refer to backend_bases.RendererBase for
 documentation of the classes methods.
 """
- def draw_arc(self, gc, rgbFace, x, y, width, height, angle1, angle2,
- rotation):
- pass
-
 def draw_path(self, gc, path, transform, rgbFace=None):
 pass
 
From: <md...@us...> - 2007-11-26 14:29:52
Revision: 4438
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4438&view=rev
Author: mdboom
Date: 2007-11-26 06:29:49 -0800 (2007-11-26)
Log Message:
-----------
Minor speed improvements in mathtext. Removing trailing whitespace.
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/mathtext.py
Modified: trunk/matplotlib/lib/matplotlib/mathtext.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/mathtext.py	2007-11-26 14:10:11 UTC (rev 4437)
+++ trunk/matplotlib/lib/matplotlib/mathtext.py	2007-11-26 14:29:49 UTC (rev 4438)
@@ -210,7 +210,7 @@
 class MathtextBackendBbox(MathtextBackend):
 """A backend whose only purpose is to get a precise bounding box.
 Only required for the Agg backend."""
- 
+
 def __init__(self, real_backend):
 MathtextBackend.__init__(self)
 self.bbox = [0, 0, 0, 0]
@@ -221,7 +221,7 @@
 min(self.bbox[1], y1),
 max(self.bbox[2], x2),
 max(self.bbox[3], y2)]
- 
+
 def render_glyph(self, ox, oy, info):
 self._update_bbox(ox + info.metrics.xmin,
 oy - info.metrics.ymax,
@@ -253,14 +253,14 @@
 self.real_backend.fonts_object = self.fonts_object
 self.real_backend.ox = self.bbox[0]
 self.real_backend.oy = self.bbox[1]
- 
+
 class MathtextBackendAggRender(MathtextBackend):
 def __init__(self):
 self.ox = 0
 self.oy = 0
 self.image = None
 MathtextBackend.__init__(self)
- 
+
 def set_canvas_size(self, w, h, d):
 MathtextBackend.set_canvas_size(self, w, h, d)
 self.image = FT2Image(ceil(w), ceil(h + d))
@@ -286,11 +286,11 @@
 
 def MathtextBackendAgg():
 return MathtextBackendBbox(MathtextBackendAggRender())
- 
+
 class MathtextBackendBitmapRender(MathtextBackendAggRender):
 def get_results(self, box):
 return self.image
- 
+
 def MathtextBackendBitmap():
 return MathtextBackendBbox(MathtextBackendBitmapRender())
 
@@ -312,7 +312,7 @@
 """ % locals()
 self.lastfont = postscript_name, fontsize
 self.pswriter.write(ps)
- 
+
 ps = """%(ox)f %(oy)f moveto
 /%(symbol_name)s glyphshow\n
 """ % locals()
@@ -428,7 +428,7 @@
 """Fix any cyclical references before the object is about
 to be destroyed."""
 self.used_characters = None
- 
+
 def get_kern(self, font1, sym1, fontsize1,
 font2, sym2, fontsize2, dpi):
 """
@@ -739,7 +739,7 @@
 
 fontmap = {}
 use_cmex = True
- 
+
 def __init__(self, *args, **kwargs):
 # This must come first so the backend's owner is set correctly
 if rcParams['mathtext.fallback_to_cm']:
@@ -760,7 +760,7 @@
 
 def _map_virtual_font(self, fontname, font_class, uniindex):
 return fontname, uniindex
- 
+
 def _get_glyph(self, fontname, font_class, sym, fontsize):
 found_symbol = False
 
@@ -769,7 +769,7 @@
 if uniindex is not None:
 fontname = 'ex'
 found_symbol = True
- 
+
 if not found_symbol:
 try:
 uniindex = get_unicode_index(sym)
@@ -782,7 +782,7 @@
 
 fontname, uniindex = self._map_virtual_font(
 fontname, font_class, uniindex)
- 
+
 # Only characters in the "Letter" class should be italicized in 'it'
 # mode. Greek capital letters should be Roman.
 if found_symbol:
@@ -832,13 +832,16 @@
 return [(fontname, sym)]
 
 class StixFonts(UnicodeFonts):
+ """
+ A font handling class for the STIX fonts
+ """
 _fontmap = { 'rm' : 'STIXGeneral',
 'it' : 'STIXGeneralItalic',
 'bf' : 'STIXGeneralBol',
 'nonunirm' : 'STIXNonUni',
 'nonuniit' : 'STIXNonUniIta',
 'nonunibf' : 'STIXNonUniBol',
- 
+
 0 : 'STIXGeneral',
 1 : 'STIXSiz1Sym',
 2 : 'STIXSiz2Sym',
@@ -851,7 +854,6 @@
 cm_fallback = False
 
 def __init__(self, *args, **kwargs):
- self._sans = kwargs.pop("sans", False)
 TruetypeFonts.__init__(self, *args, **kwargs)
 if not len(self.fontmap):
 for key, name in self._fontmap.iteritems():
@@ -893,14 +895,14 @@
 # This will generate a dummy character
 uniindex = 0x1
 fontname = 'it'
- 
+
 # Handle private use area glyphs
 if (fontname in ('it', 'rm', 'bf') and
 uniindex >= 0xe000 and uniindex <= 0xf8ff):
 fontname = 'nonuni' + fontname
 
 return fontname, uniindex
- 
+
 _size_alternatives = {}
 def get_sized_alternatives_for_symbol(self, fontname, sym):
 alternatives = self._size_alternatives.get(sym)
@@ -921,7 +923,14 @@
 
 self._size_alternatives[sym] = alternatives
 return alternatives
- 
+
+class StixSansFonts(StixFonts):
+ """
+ A font handling class for the STIX fonts (using sans-serif
+ characters by default).
+ """
+ _sans = True
+
 class StandardPsFonts(Fonts):
 """
 Use the standard postscript fonts for rendering to backend_ps
@@ -1087,7 +1096,8 @@
 # Note that (as TeX) y increases downward, unlike many other parts of
 # matplotlib.
 
-# How much text shrinks when going to the next-smallest level
+# How much text shrinks when going to the next-smallest level. GROW_FACTOR
+# must be the inverse of SHRINK_FACTOR.
 SHRINK_FACTOR = 0.7
 GROW_FACTOR = 1.0 / SHRINK_FACTOR
 # The number of different sizes of chars to use, beyond which they will not
@@ -1162,10 +1172,16 @@
 pass
 
 class Vbox(Box):
+ """
+ A box with only height (zero width).
+ """
 def __init__(self, height, depth):
 Box.__init__(self, 0., height, depth)
 
 class Hbox(Box):
+ """
+ A box with only width (zero height and depth).
+ """
 def __init__(self, width):
 Box.__init__(self, width, 0., 0.)
 
@@ -1243,8 +1259,9 @@
 self.depth *= GROW_FACTOR
 
 class Accent(Char):
- """The font metrics need to be dealt with differently for accents, since they
- are already offset correctly from the baseline in TrueType fonts."""
+ """The font metrics need to be dealt with differently for accents,
+ since they are already offset correctly from the baseline in
+ TrueType fonts."""
 def _update_metrics(self):
 metrics = self._metrics = self.font_output.get_metrics(
 self.font, self.font_class, self.c, self.fontsize, self.dpi)
@@ -1743,7 +1760,7 @@
 self.cur_s += 1
 self.max_push = max(self.cur_s, self.max_push)
 clamp = self.clamp
- 
+
 for p in box.children:
 if isinstance(p, Char):
 p.render(self.cur_h + self.off_h, self.cur_v + self.off_v)
@@ -1866,7 +1883,7 @@
 empty = Empty()
 empty.setParseAction(raise_error)
 return empty
- 
+
 class Parser(object):
 _binary_operators = Set(r'''
 + *
@@ -1924,7 +1941,7 @@
 _dropsub_symbols = Set(r'''\int \oint'''.split())
 
 _fontnames = Set("rm cal it tt sf bf default bb frak circled scr".split())
- 
+
 _function_names = Set("""
 arccos csc ker min arcsin deg lg Pr arctan det lim sec arg dim
 liminf sin cos exp limsup sinh cosh gcd ln sup cot hom log tan
@@ -1937,7 +1954,7 @@
 _leftDelim = Set(r"( [ { \lfloor \langle \lceil".split())
 
 _rightDelim = Set(r") ] } \rfloor \rangle \rceil".split())
- 
+
 def __init__(self):
 # All forward declarations are here
 font = Forward().setParseAction(self.font).setName("font")
@@ -1949,7 +1966,7 @@
 self._expression = Forward().setParseAction(self.finish).setName("finish")
 
 float = Regex(r"-?[0-9]+\.?[0-9]*")
- 
+
 lbrace = Literal('{').suppress()
 rbrace = Literal('}').suppress()
 start_group = (Optional(latexfont) + lbrace)
@@ -1995,7 +2012,7 @@
 c_over_c =(Suppress(bslash)
 + oneOf(self._char_over_chars.keys())
 ).setParseAction(self.char_over_chars)
- 
+
 accent = Group(
 Suppress(bslash)
 + accent
@@ -2057,7 +2074,7 @@
 ) | Error("Expected symbol or group")
 
 simple <<(space
- | customspace 
+ | customspace
 | font
 | subsuper
 )
@@ -2107,11 +2124,11 @@
 + (Suppress(math_delim)
 | Error("Expected end of math '$'"))
 + non_math
- ) 
+ )
 ) + StringEnd()
 
 self._expression.enablePackrat()
- 
+
 self.clear()
 
 def clear(self):
@@ -2158,7 +2175,7 @@
 self.font_class = name
 self._font = name
 font = property(_get_font, _set_font)
- 
+
 def get_state(self):
 return self._state_stack[-1]
 
@@ -2216,7 +2233,7 @@
 
 def customspace(self, s, loc, toks):
 return [self._make_space(float(toks[1]))]
- 
+
 def symbol(self, s, loc, toks):
 # print "symbol", toks
 c = toks[0]
@@ -2242,7 +2259,7 @@
 # (in multiples of underline height)
 r'AA' : ( ('rm', 'A', 1.0), (None, '\circ', 0.5), 0.0),
 }
- 
+
 def char_over_chars(self, s, loc, toks):
 sym = toks[0]
 state = self.get_state()
@@ -2253,7 +2270,7 @@
 self._char_over_chars.get(sym, (None, None, 0.0))
 if under_desc is None:
 raise ParseFatalException("Error parsing symbol")
- 
+
 over_state = state.copy()
 if over_desc[0] is not None:
 over_state.font = over_desc[0]
@@ -2267,19 +2284,19 @@
 under = Char(under_desc[1], under_state)
 
 width = max(over.width, under.width)
- 
+
 over_centered = HCentered([over])
 over_centered.hpack(width, 'exactly')
 
 under_centered = HCentered([under])
 under_centered.hpack(width, 'exactly')
- 
+
 return Vlist([
 over_centered,
 Vbox(0., thickness * space),
 under_centered
 ])
- 
+
 _accent_map = {
 r'hat' : r'\circumflexaccent',
 r'breve' : r'\combiningbreve',
@@ -2603,7 +2620,7 @@
 return is width, height, fonts
 """
 _parser = None
- 
+
 _backend_mapping = {
 'Bitmap': MathtextBackendBitmap,
 'Agg' : MathtextBackendAgg,
@@ -2613,6 +2630,13 @@
 'Cairo' : MathtextBackendCairo
 }
 
+ _font_type_mapping = {
+ 'cm' : BakomaFonts,
+ 'stix' : StixFonts,
+ 'stixsans' : StixSansFonts,
+ 'custom' : UnicodeFonts
+ }
+
 def __init__(self, output):
 self._output = output
 self._cache = {}
@@ -2620,7 +2644,7 @@
 def parse(self, s, dpi = 72, prop = None):
 if prop is None:
 prop = FontProperties()
- 
+
 cacheKey = (s, dpi, hash(prop))
 result = self._cache.get(cacheKey)
 if result is not None:
@@ -2631,16 +2655,13 @@
 else:
 backend = self._backend_mapping[self._output]()
 fontset = rcParams['mathtext.fontset']
- if fontset == 'cm':
- font_output = BakomaFonts(prop, backend)
- elif fontset == 'stix':
- font_output = StixFonts(prop, backend)
- elif fontset == 'stixsans':
- font_output = StixFonts(prop, backend, sans=True)
- elif fontset == 'custom':
- font_output = UnicodeFonts(prop, backend)
+ fontset_class = self._font_type_mapping.get(fontset)
+ if fontset_class is not None:
+ font_output = fontset_class(prop, backend)
 else:
- raise ValueError("mathtext.fontset must be either 'cm', 'stix', 'stixsans', or 'custom'")
+ raise ValueError(
+ "mathtext.fontset must be either 'cm', 'stix', "
+ "'stixsans', or 'custom'")
 
 fontsize = prop.get_size_in_points()
 
@@ -2648,7 +2669,7 @@
 # with each request.
 if self._parser is None:
 self.__class__._parser = Parser()
- 
+
 box = self._parser.parse(s, font_output, fontsize, dpi)
 font_output.set_canvas_size(box.width, box.height, box.depth)
 result = font_output.get_results(box)
@@ -2660,5 +2681,5 @@
 font_output.destroy()
 font_output.mathtext_backend.fonts_object = None
 font_output.mathtext_backend = None
- 
+
 return result
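The last hunk above replaces an if/elif chain over `mathtext.fontset` with a class-level dispatch table (`_font_type_mapping`). A minimal, self-contained sketch of that dict-dispatch pattern follows; the classes here are stand-ins, not matplotlib's real font classes:

```python
# Dict-dispatch sketch: map a config string to a class, replacing
# an if/elif chain.  Class names below are hypothetical stand-ins.

class BakomaFonts:
    def __init__(self, prop, backend):
        self.prop, self.backend = prop, backend

class StixFonts(BakomaFonts): pass
class StixSansFonts(StixFonts): pass
class UnicodeFonts(BakomaFonts): pass

_font_type_mapping = {
    'cm'       : BakomaFonts,
    'stix'     : StixFonts,
    'stixsans' : StixSansFonts,
    'custom'   : UnicodeFonts,
}

def make_font_output(fontset, prop, backend):
    # Look up the class; an unknown key falls through to the error,
    # mirroring the ValueError raised in the diff above.
    fontset_class = _font_type_mapping.get(fontset)
    if fontset_class is None:
        raise ValueError(
            "mathtext.fontset must be either 'cm', 'stix', "
            "'stixsans', or 'custom'")
    return fontset_class(prop, backend)
```

One practical benefit of the table over the chain: adding a new fontset becomes a one-line dict entry rather than another branch, and subclasses can override the mapping.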
This was sent by the SourceForge.net collaborative development platform, the world's largest Open Source development site.
From: <md...@us...> - 2007年11月26日 14:10:14
Revision: 4437
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4437&view=rev
Author: mdboom
Date: 2007年11月26日 06:10:11 -0800 (2007年11月26日)
Log Message:
-----------
Merged revisions 4406-4436 via svnmerge from 
http://matplotlib.svn.sf.net/svnroot/matplotlib/trunk/matplotlib
........
 r4407 | mdboom | 2007年11月21日 11:35:38 -0500 (2007年11月21日) | 2 lines
 
 Mathtext speed improvement.
........
 r4434 | jdh2358 | 2007年11月25日 23:06:01 -0500 (2007年11月25日) | 1 line
 
 added x11 to default darwin list
........
 r4435 | mdboom | 2007年11月26日 09:06:57 -0500 (2007年11月26日) | 2 lines
 
 Fix stixsans mode: Upper case Greek should be non-slanted.
........
 r4436 | mdboom | 2007年11月26日 09:08:30 -0500 (2007年11月26日) | 2 lines
 
 Fix stixsans mode: Circled numerals should never be slanted.
........
Modified Paths:
--------------
 branches/transforms/lib/matplotlib/_mathtext_data.py
 branches/transforms/lib/matplotlib/mathtext.py
 branches/transforms/setupext.py
Property Changed:
----------------
 branches/transforms/
Property changes on: branches/transforms
___________________________________________________________________
Name: svnmerge-integrated
 - /trunk/matplotlib:1-4405
 + /trunk/matplotlib:1-4436
Modified: branches/transforms/lib/matplotlib/_mathtext_data.py
===================================================================
--- branches/transforms/lib/matplotlib/_mathtext_data.py	2007年11月26日 14:08:30 UTC (rev 4436)
+++ branches/transforms/lib/matplotlib/_mathtext_data.py	2007年11月26日 14:10:11 UTC (rev 4437)
@@ -2296,7 +2296,7 @@
 [
 (0x0041, 0x0041, 'it', 0xe154), # A-B
 (0x0043, 0x0043, 'it', 0x2102), # C (missing in beta STIX fonts)
- (0x0044, 0x0044, 'it', 0x2145), # D 
+ (0x0044, 0x0044, 'it', 0x2145), # D
 (0x0045, 0x0047, 'it', 0xe156), # E-G
 (0x0048, 0x0048, 'it', 0x210d), # H (missing in beta STIX fonts)
 (0x0049, 0x004d, 'it', 0xe159), # I-M
@@ -2344,8 +2344,8 @@
 ],
 'it':
 [
- (0x0030, 0x0030, 'it', 0x24ea), # 0
- (0x0031, 0x0039, 'it', 0x2460), # 1-9
+ (0x0030, 0x0030, 'rm', 0x24ea), # 0
+ (0x0031, 0x0039, 'rm', 0x2460), # 1-9
 (0x0041, 0x005a, 'it', 0x24b6), # A-Z
 (0x0061, 0x007a, 'it', 0x24d0) # a-z
 ],
@@ -2434,10 +2434,10 @@
 [
 # These numerals are actually upright. We don't actually
 # want italic numerals ever.
- (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9
+ (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9
 (0x0041, 0x005a, 'it', 0x1d608), # A-Z
 (0x0061, 0x007a, 'it', 0x1d622), # a-z
- (0x0391, 0x03a9, 'it', 0xe1bf), # \Alpha-\Omega
+ (0x0391, 0x03a9, 'rm', 0xe17d), # \Alpha-\Omega
 (0x03b1, 0x03c9, 'it', 0xe1d8), # \alpha-\omega
 (0x03d1, 0x03d1, 'it', 0xe1f2), # theta variant
 (0x03d5, 0x03d5, 'it', 0xe1f3), # phi variant
Modified: branches/transforms/lib/matplotlib/mathtext.py
===================================================================
--- branches/transforms/lib/matplotlib/mathtext.py	2007年11月26日 14:08:30 UTC (rev 4436)
+++ branches/transforms/lib/matplotlib/mathtext.py	2007年11月26日 14:10:11 UTC (rev 4437)
@@ -503,6 +503,7 @@
 (through ft2font)
 """
 basepath = os.path.join( get_data_path(), 'fonts' )
+ _fonts = {}
 
 class CachedFont:
 def __init__(self, font):
@@ -517,21 +518,17 @@
 def __init__(self, default_font_prop, mathtext_backend):
 Fonts.__init__(self, default_font_prop, mathtext_backend)
 self.glyphd = {}
- self.fonts = {}
 
- filename = findfont(default_font_prop)
- default_font = self.CachedFont(FT2Font(str(filename)))
+ if self._fonts == {}:
+ filename = findfont(default_font_prop)
+ default_font = self.CachedFont(FT2Font(str(filename)))
 
- self.fonts['default'] = default_font
+ self._fonts['default'] = default_font
 
 def destroy(self):
 self.glyphd = None
- for cached_font in self.fonts.values():
- cached_font.charmap = None
- cached_font.glyphmap = None
- cached_font.font = None
 Fonts.destroy(self)
- 
+
 def _get_font(self, font):
 """Looks up a CachedFont with its charmap and inverse charmap.
 font may be a TeX font name (cal, rm, it etc.), or postscript name."""
@@ -540,16 +537,16 @@
 else:
 basename = font
 
- cached_font = self.fonts.get(basename)
+ cached_font = self._fonts.get(basename)
 if cached_font is None:
 try:
 font = FT2Font(basename)
 except RuntimeError:
 return None
 cached_font = self.CachedFont(font)
- self.fonts[basename] = cached_font
- self.fonts[font.postscript_name] = cached_font
- self.fonts[font.postscript_name.lower()] = cached_font
+ self._fonts[basename] = cached_font
+ self._fonts[font.postscript_name] = cached_font
+ self._fonts[font.postscript_name.lower()] = cached_font
 return cached_font
 
 def _get_offset(self, cached_font, glyph, fontsize, dpi):
Modified: branches/transforms/setupext.py
===================================================================
--- branches/transforms/setupext.py	2007年11月26日 14:08:30 UTC (rev 4436)
+++ branches/transforms/setupext.py	2007年11月26日 14:10:11 UTC (rev 4437)
@@ -51,7 +51,7 @@
 'linux' : ['/usr/local', '/usr',],
 'cygwin' : ['/usr/local', '/usr',],
 'darwin' : ['/sw/lib/freetype2', '/sw/lib/freetype219', '/usr/local',
- '/usr', '/sw'],
+ '/usr', '/sw', '/usr/X11R6'],
 'freebsd4' : ['/usr/local', '/usr'],
 'freebsd5' : ['/usr/local', '/usr'],
 'freebsd6' : ['/usr/local', '/usr'],
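The `mathtext.py` hunk in this commit moves the font dict from an instance attribute (`self.fonts`) to a class attribute (`_fonts`), so FT2Font objects are loaded once and shared across all parser instances. A rough sketch of that class-level caching idiom, with a generic `loader` callable standing in for `FT2Font`:

```python
# Class-level cache sketch: the dict lives on the class, not the
# instance, so every instance shares one cache and a font file is
# loaded from disk at most once per process.  `loader` is a
# hypothetical stand-in for FT2Font.

class TruetypeFonts:
    _fonts = {}  # shared across all instances

    def _get_font(self, basename, loader):
        cached_font = self._fonts.get(basename)
        if cached_font is None:
            cached_font = loader(basename)  # e.g. FT2Font(basename)
            self._fonts[basename] = cached_font
        return cached_font
```

The trade-off, visible in the diff's simplified `destroy()`, is that cached fonts are no longer torn down per-instance; they live for the life of the process.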
Revision: 4436
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4436&view=rev
Author: mdboom
Date: 2007年11月26日 06:08:30 -0800 (2007年11月26日)
Log Message:
-----------
Fix stixsans mode: Circled numerals should never be slanted.
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/_mathtext_data.py
Modified: trunk/matplotlib/lib/matplotlib/_mathtext_data.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/_mathtext_data.py	2007年11月26日 14:06:57 UTC (rev 4435)
+++ trunk/matplotlib/lib/matplotlib/_mathtext_data.py	2007年11月26日 14:08:30 UTC (rev 4436)
@@ -2298,7 +2298,7 @@
 [
 (0x0041, 0x0041, 'it', 0xe154), # A-B
 (0x0043, 0x0043, 'it', 0x2102), # C (missing in beta STIX fonts)
- (0x0044, 0x0044, 'it', 0x2145), # D 
+ (0x0044, 0x0044, 'it', 0x2145), # D
 (0x0045, 0x0047, 'it', 0xe156), # E-G
 (0x0048, 0x0048, 'it', 0x210d), # H (missing in beta STIX fonts)
 (0x0049, 0x004d, 'it', 0xe159), # I-M
@@ -2346,8 +2346,8 @@
 ],
 'it':
 [
- (0x0030, 0x0030, 'it', 0x24ea), # 0
- (0x0031, 0x0039, 'it', 0x2460), # 1-9
+ (0x0030, 0x0030, 'rm', 0x24ea), # 0
+ (0x0031, 0x0039, 'rm', 0x2460), # 1-9
 (0x0041, 0x005a, 'it', 0x24b6), # A-Z
 (0x0061, 0x007a, 'it', 0x24d0) # a-z
 ],
@@ -2436,7 +2436,7 @@
 [
 # These numerals are actually upright. We don't actually
 # want italic numerals ever.
- (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9
+ (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9
 (0x0041, 0x005a, 'it', 0x1d608), # A-Z
 (0x0061, 0x007a, 'it', 0x1d622), # a-z
 (0x0391, 0x03a9, 'rm', 0xe17d), # \Alpha-\Omega
Revision: 4435
 http://matplotlib.svn.sourceforge.net/matplotlib/?rev=4435&view=rev
Author: mdboom
Date: 2007年11月26日 06:06:57 -0800 (2007年11月26日)
Log Message:
-----------
Fix stixsans mode: Upper case Greek should be non-slanted.
Modified Paths:
--------------
 trunk/matplotlib/lib/matplotlib/_mathtext_data.py
Modified: trunk/matplotlib/lib/matplotlib/_mathtext_data.py
===================================================================
--- trunk/matplotlib/lib/matplotlib/_mathtext_data.py	2007年11月26日 04:06:01 UTC (rev 4434)
+++ trunk/matplotlib/lib/matplotlib/_mathtext_data.py	2007年11月26日 14:06:57 UTC (rev 4435)
@@ -2439,7 +2439,7 @@
 (0x0030, 0x0039, 'rm', 0x1d7e2), # 0-9
 (0x0041, 0x005a, 'it', 0x1d608), # A-Z
 (0x0061, 0x007a, 'it', 0x1d622), # a-z
- (0x0391, 0x03a9, 'it', 0xe1bf), # \Alpha-\Omega
+ (0x0391, 0x03a9, 'rm', 0xe17d), # \Alpha-\Omega
 (0x03b1, 0x03c9, 'it', 0xe1d8), # \alpha-\omega
 (0x03d1, 0x03d1, 'it', 0xe1f2), # theta variant
 (0x03d5, 0x03d5, 'it', 0xe1f3), # phi variant
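The `_mathtext_data` tables being patched here are range maps: each entry `(start, end, style, target)` remaps a contiguous run of input codepoints onto a run in the font, preserving the offset within the run. A small sketch of how such a table is consulted (the lookup function is illustrative; the table rows are taken from the diff above):

```python
# Range-table remap sketch.  Each row (start, end, style, target)
# maps codepoints start..end onto target..target+(end-start),
# keeping the relative offset.  remap() itself is a hypothetical
# helper, not matplotlib code.

def remap(codepoint, table):
    for start, end, style, target in table:
        if start <= codepoint <= end:
            return style, target + (codepoint - start)
    return None  # codepoint not covered by this table

sans_table = [
    (0x0030, 0x0039, 'rm', 0x1d7e2),  # 0-9, upright
    (0x0041, 0x005a, 'it', 0x1d608),  # A-Z, italic
    (0x0391, 0x03a9, 'rm', 0xe17d),   # \Alpha-\Omega, upright (this fix)
]
```

Under this scheme the fix in r4435 is just a two-field change in one row: switching the style from `'it'` to `'rm'` and pointing `target` at the upright glyph run.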
