From 4dc7880405a27865472fa7e489468bc5a82c5b9b Mon Sep 17 00:00:00 2001
From: Issac
Date: Wed, 12 Jul 2017 22:48:48 +0800
Subject: [PATCH 1/8] update readme
---
README.md | 9 ++++-----
1 file changed, 4 insertions(+), 5 deletions(-)
diff --git a/README.md b/README.md
index ef79281..7df7f62 100644
--- a/README.md
+++ b/README.md
@@ -10,8 +10,7 @@ This is a `Chinese tutorial` which is translated from [DeepLearning 0.1 document
This is a `Chinese tutorial` translated from the [DeepLearning 0.1 documentation](http://deeplearning.net/tutorial/contents.html). All the algorithms and models in this tutorial are implemented in Python with [Theano](http://deeplearning.net/software/theano/index.html). Theano is a well-known third-party library that lets programmers run their Python code on a GPU or a CPU.
-
-##内容/Contents
+## 内容/Contents
* [入门(Getting Started)](https://github.com/Syndrome777/DeepLearningTutorial/blob/master/1_Getting_Started_入门.md)
* [使用逻辑回归进行MNIST分类(Classifying MNIST digits using Logistic Regression)](https://github.com/Syndrome777/DeepLearningTutorial/blob/master/2_Classifying_MNIST_using_LR_逻辑回归进行MNIST分类.md)
@@ -27,10 +26,10 @@ This is a `Chinese tutorial` which is translated from [DeepLearning 0.1 document
* Miscellaneous
-##版权/Copyright
-####作者/Author
+## 版权/Copyright
+#### 作者/Author
[Theano Development Team](http://deeplearning.net/tutorial/LICENSE.html), LISA lab, University of Montreal
-####翻译者/Translator
+#### 翻译者/Translator
[Lifeng Hua](https://github.com/Syndrome777), Zhejiang University
From a7edadbf657d4965dfedfbf66ffb0fc7c9ce3017 Mon Sep 17 00:00:00 2001
From: Syndrome777
Date: Wed, 12 Jul 2017 22:53:37 +0800
Subject: [PATCH 2/8] remove other files
---
.DS_Store | Bin 0 -> 8196 bytes
LeNet-5/dA.py | 413 ----------------
LeNet-5/deep_learning_test1.py | 343 --------------
LeNet-5/logistic_sgd.py | 445 ------------------
LeNet-5/mlp.py | 404 ----------------
LeNet-5/runGPU.py | 22 -
LeNet-5/utils.py | 139 ------
.../Project/baidu_spider.py | 140 ------
.../Project/cloud_large.png | Bin 118853 -> 0 bytes
.../Project/myTest/TTT.txt | 105 -----
.../Project/myTest/ansj_dict.py | 149 ------
.../Project/myTest/get_word_length.py | 87 ----
.../Project/myTest/math1.py | 16 -
.../Project/myTest/nltk_test.py | 67 ---
.../Project/myTest/pachong_test.py | 34 --
.../Project/myTest/result.txt | 9 -
.../Project/myTest/split_sentence.py | 113 -----
.../Project/myTest/test.py | 35 --
.../Project/myTest/test2.py | 29 --
.../Project/myTest/test_dict_360.py | 38 --
.../Project/qiubai_spider.py | 141 ------
.../Project/snownlp_test.py | 65 ---
Mathematical-Modeling-2014/Project/spider.py | 50 --
Mathematical-Modeling-2014/Project/test1.py | 28 --
.../Project/test_test.py | 6 -
.../Project/wordcloud.py | 12 -
Mathematical-Modeling-2014/car.txt | 37 --
Mathematical-Modeling-2014/car45.txt | 45 --
Mathematical-Modeling-2014/test.py | 37 --
Mathematical-Modeling-2014/test2.py | 40 --
Mathematical-Modeling-2014/test3.py | 35 --
Mathematical-Modeling-2014/test4.py | 57 ---
images/.DS_Store | Bin 0 -> 10244 bytes
33 files changed, 3141 deletions(-)
create mode 100644 .DS_Store
delete mode 100644 LeNet-5/dA.py
delete mode 100644 LeNet-5/deep_learning_test1.py
delete mode 100644 LeNet-5/logistic_sgd.py
delete mode 100644 LeNet-5/mlp.py
delete mode 100644 LeNet-5/runGPU.py
delete mode 100644 LeNet-5/utils.py
delete mode 100644 Mathematical-Modeling-2014/Project/baidu_spider.py
delete mode 100644 Mathematical-Modeling-2014/Project/cloud_large.png
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/TTT.txt
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/ansj_dict.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/get_word_length.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/math1.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/nltk_test.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/pachong_test.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/result.txt
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/split_sentence.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/test.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/test2.py
delete mode 100644 Mathematical-Modeling-2014/Project/myTest/test_dict_360.py
delete mode 100644 Mathematical-Modeling-2014/Project/qiubai_spider.py
delete mode 100644 Mathematical-Modeling-2014/Project/snownlp_test.py
delete mode 100644 Mathematical-Modeling-2014/Project/spider.py
delete mode 100644 Mathematical-Modeling-2014/Project/test1.py
delete mode 100644 Mathematical-Modeling-2014/Project/test_test.py
delete mode 100644 Mathematical-Modeling-2014/Project/wordcloud.py
delete mode 100644 Mathematical-Modeling-2014/car.txt
delete mode 100644 Mathematical-Modeling-2014/car45.txt
delete mode 100644 Mathematical-Modeling-2014/test.py
delete mode 100644 Mathematical-Modeling-2014/test2.py
delete mode 100644 Mathematical-Modeling-2014/test3.py
delete mode 100644 Mathematical-Modeling-2014/test4.py
create mode 100644 images/.DS_Store
diff --git a/.DS_Store b/.DS_Store
new file mode 100644
index 0000000000000000000000000000000000000000..3e4a100e5628c2c17f5781d269e035009a19aa5c
GIT binary patch
literal 8196
zcmeHL%WoS+9R5vOg3}VRP0}QNQPW(g8lt4HAdtXu9IA+fx^|Q%s@rAl-8f6vyJo$P
zW;PZo1Nd|
zoB4Jgo*^O@E$V|rJwz13!=L0=XWQQwwyGyd_uj9=b2TJP~y|QpR#0zW*K6Q^fcGDnwaf%Gw
zLwIPPjaK@-(r_!`p#;IzVXnu3C5i&~vl_Pgchry@vcowz5dqy#I5D71=bJ)<41ドルgu70_bz~hx08x{dp9dmhghm9^c z@73aN&8$G)Basi&5Rei;EQqgT0$+|hilR@P=}Q=U8v2OuhHuPUI44Gx0XDzB7Po#% zb)G5D{?!;!@VS8R*&z{|;mhM?IcEKcW_(=n`MAH4ZMsy*%Q^laowv+_H!$!mYudK` zvF67^EjxDdpR#Mm`Q)5aN_v`CEJQW;oOm;JBQ48u&6`^*PdbiOEoJ)9gkV^XT&YHS6rz*SSt}zeSxMmwuX2DFGmg$v4E&Qh{J&TLS
z22b{d2cJ8-)E8bH8XD*eA3r&;w4|(rLi-M%7@nBPFTMTV&G$d}@S|IIK11h`nEcm3
z9sYYY{~V<{0?o|4b!%5ybjra$siebg)+gjpuf{v6ysuu3_uzra|3le}u&q*vwamq= z%R4mUSdJUDiiWDR6+qrt#<69erafc9opziuytw|hf!t%srg>jBY&zE~M9@>NVc6%g
zuBo4%oh=yNG)Ks|j%CH2g6Wx#JyWBdNeOh%$c@>v4iGu3r}NybFlD#}$W8v!Gn(sa
zcGifN&s$pBuuj`i$Mc-rw25BumJ5JnUdxzvw)dsqeJWdeBYd18RjZWsVy~})GII4(
z-)gNUq_C=X(spbEk>b9xH}Z*%pk$Ib2u%L{Bff$yH(MAg(pXg#4>XH*(H0fd4Hs7F!
zpZ`a=m
l1hq&ghj{@qB4x#ws6qO7F9L%7f5ZQWVE^ZTqaT6&|8Jwepd0`I
literal 0
HcmV?d00001
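Aside: although this patch is titled "remove other files", the two `create mode 100644` entries above add macOS Finder `.DS_Store` files to the repository. A single `.gitignore` pattern would keep them out of every directory (a suggested follow-up, not part of this patch):

    # .gitignore: ignore Finder metadata anywhere in the tree
    .DS_Store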
diff --git a/LeNet-5/dA.py b/LeNet-5/dA.py
deleted file mode 100644
index e1debf7..0000000
--- a/LeNet-5/dA.py
+++ /dev/null
@@ -1,413 +0,0 @@
-"""
- This tutorial introduces denoising auto-encoders (dA) using Theano.
-
- Denoising autoencoders are the building blocks for SdA.
- They are based on auto-encoders as the ones used in Bengio et al. 2007.
- An autoencoder takes an input x and first maps it to a hidden representation
- y = f_{\theta}(x) = s(Wx+b), parameterized by \theta={W,b}. The resulting
- latent representation y is then mapped back to a "reconstructed" vector
- z \in [0,1]^d in input space z = g_{\theta'}(y) = s(W'y + b'). The weight
- matrix W' can optionally be constrained such that W' = W^T, in which case
- the autoencoder is said to have tied weights. The network is trained
- to minimize the reconstruction error (the error between x and z).
-
- For the denoising autoencoder, during training, first x is corrupted into
- \tilde{x}, where \tilde{x} is a partially destroyed version of x by means
- of a stochastic mapping. Afterwards y is computed as before (using
- \tilde{x}), y = s(W\tilde{x} + b) and z as s(W'y + b'). The reconstruction
- error is now measured between z and the uncorrupted input x, which is
- computed as the cross-entropy :
- - \sum_{k=1}^d[ x_k \log z_k + (1-x_k) \log( 1-z_k)]
-
-
- References :
- - P. Vincent, H. Larochelle, Y. Bengio, P.A. Manzagol: Extracting and
- Composing Robust Features with Denoising Autoencoders, ICML'08, 1096-1103,
- 2008
- - Y. Bengio, P. Lamblin, D. Popovici, H. Larochelle: Greedy Layer-Wise
- Training of Deep Networks, Advances in Neural Information Processing
- Systems 19, 2007
-
-"""
-
-import os
-import sys
-import time
-
-import numpy
-
-import theano
-import theano.tensor as T
-from theano.tensor.shared_randomstreams import RandomStreams
-
-from logistic_sgd import load_data
-from utils import tile_raster_images
-
-try:
- import PIL.Image as Image
-except ImportError:
- import Image
-
-
-# start-snippet-1
-class dA(object):
- """Denoising Auto-Encoder class (dA)
-
- A denoising autoencoder tries to reconstruct the input from a corrupted
- version of it by first projecting it into a latent space and then
- reprojecting it back into the input space. Please refer to Vincent et al., 2008
- for more details. If x is the input then equation (1) computes a partially
- destroyed version of x by means of a stochastic mapping q_D. Equation (2)
- computes the projection of the input into the latent space. Equation (3)
- computes the reconstruction of the input, while equation (4) computes the
- reconstruction error.
-
- .. math::
-
- \tilde{x} ~ q_D(\tilde{x}|x) (1)
-
- y = s(W \tilde{x} + b) (2)
-
- z = s(W' y + b') (3)
-
- L(x,z) = -sum_{k=1}^d [x_k \log z_k + (1-x_k) \log( 1-z_k)] (4)
-
- """
-
- def __init__(
- self,
- numpy_rng,
- theano_rng=None,
- input=None,
- n_visible=784,
- n_hidden=500,
- W=None,
- bhid=None,
- bvis=None
- ):
- """
- Initialize the dA class by specifying the number of visible units (the
- dimension d of the input ), the number of hidden units ( the dimension
- d' of the latent or hidden space ) and the corruption level. The
- constructor also receives symbolic variables for the input, weights and
- bias. Such symbolic variables are useful when, for example, the input
- is the result of some computations, or when weights are shared between
- the dA and an MLP layer. When dealing with SdAs this always happens,
- the dA on layer 2 gets as input the output of the dA on layer 1,
- and the weights of the dA are used in the second stage of training
- to construct an MLP.
-
- :type numpy_rng: numpy.random.RandomState
- :param numpy_rng: numpy random generator used to generate weights
-
- :type theano_rng: theano.tensor.shared_randomstreams.RandomStreams
- :param theano_rng: Theano random generator; if None is given one is
- generated based on a seed drawn from `rng`
-
- :type input: theano.tensor.TensorType
- :param input: a symbolic description of the input or None for
- standalone dA
-
- :type n_visible: int
- :param n_visible: number of visible units
-
- :type n_hidden: int
- :param n_hidden: number of hidden units
-
- :type W: theano.tensor.TensorType
- :param W: Theano variable pointing to a set of weights that should be
- shared between the dA and another architecture; if dA should
- be standalone set this to None
-
- :type bhid: theano.tensor.TensorType
- :param bhid: Theano variable pointing to a set of biases values (for
- hidden units) that should be shared between the dA and another
- architecture; if dA should be standalone set this to None
-
- :type bvis: theano.tensor.TensorType
- :param bvis: Theano variable pointing to a set of biases values (for
- visible units) that should be shared between the dA and another
- architecture; if dA should be standalone set this to None
-
-
- """
- self.n_visible = n_visible
- self.n_hidden = n_hidden
-
- # create a Theano random generator that gives symbolic random values
- if not theano_rng:
- theano_rng = RandomStreams(numpy_rng.randint(2 ** 30))
-
- # note : W' was written as `W_prime` and b' as `b_prime`
- if not W:
- # W is initialized with `initial_W`, which is uniformly sampled
- # from -4*sqrt(6./(n_visible+n_hidden)) to
- # 4*sqrt(6./(n_hidden+n_visible)); the output of uniform is
- # converted using asarray to dtype
- # theano.config.floatX so that the code is runnable on GPU
- initial_W = numpy.asarray(
- numpy_rng.uniform(
- low=-4 * numpy.sqrt(6. / (n_hidden + n_visible)),
- high=4 * numpy.sqrt(6. / (n_hidden + n_visible)),
- size=(n_visible, n_hidden)
- ),
- dtype=theano.config.floatX
- )
- W = theano.shared(value=initial_W, name='W', borrow=True)
-
- if not bvis:
- bvis = theano.shared(
- value=numpy.zeros(
- n_visible,
- dtype=theano.config.floatX
- ),
- borrow=True
- )
-
- if not bhid:
- bhid = theano.shared(
- value=numpy.zeros(
- n_hidden,
- dtype=theano.config.floatX
- ),
- name='b',
- borrow=True
- )
-
- self.W = W
- # b corresponds to the bias of the hidden
- self.b = bhid
- # b_prime corresponds to the bias of the visible
- self.b_prime = bvis
- # tied weights, therefore W_prime is W transpose
- self.W_prime = self.W.T
- self.theano_rng = theano_rng
- # if no input is given, generate a variable representing the input
- if input is None:
- # we use a matrix because we expect a minibatch of several
- # examples, each example being a row
- self.x = T.dmatrix(name='input')
- else:
- self.x = input
-
- self.params = [self.W, self.b, self.b_prime]
- # end-snippet-1
-
- def get_corrupted_input(self, input, corruption_level):
- """This function keeps ``1-corruption_level`` entries of the inputs the
- same and zeroes out a randomly selected subset of size ``corruption_level``
- Note : first argument of theano.rng.binomial is the shape(size) of
- random numbers that it should produce
- second argument is the number of trials
- third argument is the probability of success of any trial
-
- this will produce an array of 0s and 1s where 1 has a
- probability of 1 - ``corruption_level`` and 0 with
- ``corruption_level``
-
- The binomial function returns int64 data type by
- default. int64 multiplied by the input
- type (floatX) always returns float64. To keep all data
- in floatX when floatX is float32, we set the dtype of
- the binomial to floatX. As in our case the value of
- the binomial is always 0 or 1, this doesn't change the
- result. This is needed to allow the gpu to work
- correctly, as it only supports float32 for now.
-
- """
- return self.theano_rng.binomial(size=input.shape, n=1,
- p=1 - corruption_level,
- dtype=theano.config.floatX) * input
-
- def get_hidden_values(self, input):
- """ Computes the values of the hidden layer """
- return T.nnet.sigmoid(T.dot(input, self.W) + self.b)
-
- def get_reconstructed_input(self, hidden):
- """Computes the reconstructed input given the values of the
- hidden layer
-
- """
- return T.nnet.sigmoid(T.dot(hidden, self.W_prime) + self.b_prime)
-
- def get_cost_updates(self, corruption_level, learning_rate):
- """ This function computes the cost and the updates for one trainng
- step of the dA """
-
- tilde_x = self.get_corrupted_input(self.x, corruption_level)
- y = self.get_hidden_values(tilde_x)
- z = self.get_reconstructed_input(y)
- # note : we sum over the size of a datapoint; if we are using
- # minibatches, L will be a vector, with one entry per
- # example in minibatch
- L = - T.sum(self.x * T.log(z) + (1 - self.x) * T.log(1 - z), axis=1)
- # note : L is now a vector, where each element is the
- # cross-entropy cost of the reconstruction of the
- # corresponding example of the minibatch. We need to
- # compute the average of all these to get the cost of
- # the minibatch
- cost = T.mean(L)
-
- # compute the gradients of the cost of the `dA` with respect
- # to its parameters
- gparams = T.grad(cost, self.params)
- # generate the list of updates
- updates = [
- (param, param - learning_rate * gparam)
- for param, gparam in zip(self.params, gparams)
- ]
-
- return (cost, updates)
-
-
-def test_dA(learning_rate=0.1, training_epochs=15,
- dataset='mnist.pkl.gz',
- batch_size=20, output_folder='dA_plots'):
-
- """
- This demo is tested on MNIST
-
- :type learning_rate: float
- :param learning_rate: learning rate used for training the Denoising
- AutoEncoder
-
- :type training_epochs: int
- :param training_epochs: number of epochs used for training
-
- :type dataset: string
- :param dataset: path to the pickled dataset
-
- """
- datasets = load_data(dataset)
- train_set_x, train_set_y = datasets[0]
-
- # compute number of minibatches for training, validation and testing
- n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
-
- # allocate symbolic variables for the data
- index = T.lscalar() # index to a [mini]batch
- x = T.matrix('x') # the data is presented as rasterized images
-
- if not os.path.isdir(output_folder):
- os.makedirs(output_folder)
- os.chdir(output_folder)
- ####################################
- # BUILDING THE MODEL NO CORRUPTION #
- ####################################
-
- rng = numpy.random.RandomState(123)
- theano_rng = RandomStreams(rng.randint(2 ** 30))
-
- da = dA(
- numpy_rng=rng,
- theano_rng=theano_rng,
- input=x,
- n_visible=28 * 28,
- n_hidden=500
- )
-
- cost, updates = da.get_cost_updates(
- corruption_level=0.,
- learning_rate=learning_rate
- )
-
- train_da = theano.function(
- [index],
- cost,
- updates=updates,
- givens={
- x: train_set_x[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- start_time = time.clock()
-
- ############
- # TRAINING #
- ############
-
- # go through training epochs
- for epoch in xrange(training_epochs):
- # go through the training set
- c = []
- for batch_index in xrange(n_train_batches):
- c.append(train_da(batch_index))
-
- print 'Training epoch %d, cost ' % epoch, numpy.mean(c)
-
- end_time = time.clock()
-
- training_time = (end_time - start_time)
-
- print>> sys.stderr, ('The no corruption code for file ' +
- os.path.split(__file__)[1] +
- ' ran for %.2fm' % ((training_time) / 60.))
- image = Image.fromarray(
- tile_raster_images(X=da.W.get_value(borrow=True).T,
- img_shape=(28, 28), tile_shape=(10, 10),
- tile_spacing=(1, 1)))
- image.save('filters_corruption_0.png')
-
- #####################################
- # BUILDING THE MODEL CORRUPTION 30% #
- #####################################
-
- rng = numpy.random.RandomState(123)
- theano_rng = RandomStreams(rng.randint(2 ** 30))
-
- da = dA(
- numpy_rng=rng,
- theano_rng=theano_rng,
- input=x,
- n_visible=28 * 28,
- n_hidden=500
- )
-
- cost, updates = da.get_cost_updates(
- corruption_level=0.3,
- learning_rate=learning_rate
- )
-
- train_da = theano.function(
- [index],
- cost,
- updates=updates,
- givens={
- x: train_set_x[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- start_time = time.clock()
-
- ############
- # TRAINING #
- ############
-
- # go through training epochs
- for epoch in xrange(training_epochs):
- # go through the training set
- c = []
- for batch_index in xrange(n_train_batches):
- c.append(train_da(batch_index))
-
- print 'Training epoch %d, cost ' % epoch, numpy.mean(c)
-
- end_time = time.clock()
-
- training_time = (end_time - start_time)
-
- print>> sys.stderr, ('The 30% corruption code for file ' +
- os.path.split(__file__)[1] +
- ' ran for %.2fm' % (training_time / 60.))
-
- image = Image.fromarray(tile_raster_images(
- X=da.W.get_value(borrow=True).T,
- img_shape=(28, 28), tile_shape=(10, 10),
- tile_spacing=(1, 1)))
- image.save('filters_corruption_30.png')
-
- os.chdir('../')
-
-
-if __name__ == '__main__':
- test_dA()
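For reference, the removed dA.py implemented exactly the recipe in its docstring: corrupt x into \tilde{x}, encode y = s(W \tilde{x} + b), decode z = s(W' y + b') with tied weights W' = W^T, and minimize the cross-entropy between x and z. A minimal NumPy sketch of that forward pass and cost (illustrative only; the names and shapes below are chosen for the example, not taken from the removed file):

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def da_cost(x, W, b_hid, b_vis, corruption_level, rng):
        # corrupt the input: keep each entry with prob. 1 - corruption_level
        mask = rng.binomial(n=1, p=1.0 - corruption_level, size=x.shape)
        x_tilde = mask * x
        # encode, then decode with tied weights (W' = W.T)
        y = sigmoid(x_tilde.dot(W) + b_hid)
        z = sigmoid(y.dot(W.T) + b_vis)
        # per-example cross-entropy, averaged over the minibatch
        L = -np.sum(x * np.log(z) + (1 - x) * np.log(1 - z), axis=1)
        return L.mean()

    rng = np.random.RandomState(123)
    x = rng.rand(20, 784)                   # a minibatch of 20 "images"
    W = rng.uniform(-0.1, 0.1, (784, 500))  # visible-by-hidden weight matrix
    print(da_cost(x, W, np.zeros(500), np.zeros(784), 0.3, rng))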
diff --git a/LeNet-5/deep_learning_test1.py b/LeNet-5/deep_learning_test1.py
deleted file mode 100644
index c281687..0000000
--- a/LeNet-5/deep_learning_test1.py
+++ /dev/null
@@ -1,343 +0,0 @@
-"""This tutorial introduces the LeNet5 neural network architecture
-using Theano. LeNet5 is a convolutional neural network, good for
-classifying images. This tutorial shows how to build the architecture,
-and comes with all the hyper-parameters you need to reproduce the
-paper's MNIST results.
-
-
-This implementation simplifies the model in the following ways:
-
- - LeNetConvPool doesn't implement location-specific gain and bias parameters
- - LeNetConvPool doesn't implement pooling by average, it implements pooling
- by max.
- - Digit classification is implemented with a logistic regression rather than
- an RBF network
- - LeNet5 did not use fully-connected convolutions at the second layer
-
-References:
- - Y. LeCun, L. Bottou, Y. Bengio and P. Haffner:
- Gradient-Based Learning Applied to Document
- Recognition, Proceedings of the IEEE, 86(11):2278-2324, November 1998.
- http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
-
-"""
-import os
-import sys
-import time
-
-import numpy
-
-import theano
-import theano.tensor as T
-from theano.tensor.signal import downsample
-from theano.tensor.nnet import conv
-
-from logistic_sgd import LogisticRegression, load_data
-from mlp import HiddenLayer
-
-
-class LeNetConvPoolLayer(object):
- """Pool Layer of a convolutional network """
-
- def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
- """
- Allocate a LeNetConvPoolLayer with shared variable internal parameters.
-
- :type rng: numpy.random.RandomState
- :param rng: a random number generator used to initialize weights
-
- :type input: theano.tensor.dtensor4
- :param input: symbolic image tensor, of shape image_shape
-
- :type filter_shape: tuple or list of length 4
- :param filter_shape: (number of filters, num input feature maps,
- filter height, filter width)
-
- :type image_shape: tuple or list of length 4
- :param image_shape: (batch size, num input feature maps,
- image height, image width)
-
- :type poolsize: tuple or list of length 2
- :param poolsize: the downsampling (pooling) factor (#rows, #cols)
- """
-
- assert image_shape[1] == filter_shape[1]
- self.input = input
-
- # there are "num input feature maps * filter height * filter width"
- # inputs to each hidden unit
- fan_in = numpy.prod(filter_shape[1:])
- # each unit in the lower layer receives a gradient from:
- # "num output feature maps * filter height * filter width" /
- # pooling size
- fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
- numpy.prod(poolsize))
- # initialize weights with random weights
- W_bound = numpy.sqrt(6. / (fan_in + fan_out))
- self.W = theano.shared(
- numpy.asarray(
- rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
- dtype=theano.config.floatX
- ),
- borrow=True
- )
-
- # the bias is a 1D tensor -- one bias per output feature map
- b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
- self.b = theano.shared(value=b_values, borrow=True)
-
- # convolve input feature maps with filters
- conv_out = conv.conv2d(
- input=input,
- filters=self.W,
- filter_shape=filter_shape,
- image_shape=image_shape
- )
-
- # downsample each feature map individually, using maxpooling
- pooled_out = downsample.max_pool_2d(
- input=conv_out,
- ds=poolsize,
- ignore_border=True
- )
-
- # add the bias term. Since the bias is a vector (1D array), we first
- # reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
- # thus be broadcasted across mini-batches and feature map
- # width & height
- self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
-
- # store parameters of this layer
- self.params = [self.W, self.b]
-
-
-def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
- dataset='mnist.pkl.gz',
- nkerns=[20, 50], batch_size=500):
- """ Demonstrates lenet on MNIST dataset
-
- :type learning_rate: float
- :param learning_rate: learning rate used (factor for the stochastic
- gradient)
-
- :type n_epochs: int
- :param n_epochs: maximal number of epochs to run the optimizer
-
- :type dataset: string
- :param dataset: path to the dataset used for training /testing (MNIST here)
-
- :type nkerns: list of ints
- :param nkerns: number of kernels on each layer
- """
-
- rng = numpy.random.RandomState(23455)
-
- datasets = load_data(dataset)
-
- train_set_x, train_set_y = datasets[0]
- valid_set_x, valid_set_y = datasets[1]
- test_set_x, test_set_y = datasets[2]
-
- # compute number of minibatches for training, validation and testing
- n_train_batches = train_set_x.get_value(borrow=True).shape[0]
- n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
- n_test_batches = test_set_x.get_value(borrow=True).shape[0]
- n_train_batches /= batch_size
- n_valid_batches /= batch_size
- n_test_batches /= batch_size
-
- # allocate symbolic variables for the data
- index = T.lscalar() # index to a [mini]batch
-
- # start-snippet-1
- x = T.matrix('x') # the data is presented as rasterized images
- y = T.ivector('y') # the labels are presented as 1D vector of
- # [int] labels
-
- ######################
- # BUILD ACTUAL MODEL #
- ######################
- print '... building the model'
-
- # Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
- # to a 4D tensor, compatible with our LeNetConvPoolLayer
- # (28, 28) is the size of MNIST images.
- layer0_input = x.reshape((batch_size, 1, 28, 28))
-
- # Construct the first convolutional pooling layer:
- # filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
- # maxpooling reduces this further to (24/2, 24/2) = (12, 12)
- # 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
- layer0 = LeNetConvPoolLayer(
- rng,
- input=layer0_input,
- image_shape=(batch_size, 1, 28, 28),
- filter_shape=(nkerns[0], 1, 5, 5),
- poolsize=(2, 2)
- )
-
- # Construct the second convolutional pooling layer
- # filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
- # maxpooling reduces this further to (8/2, 8/2) = (4, 4)
- # 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
- layer1 = LeNetConvPoolLayer(
- rng,
- input=layer0.output,
- image_shape=(batch_size, nkerns[0], 12, 12),
- filter_shape=(nkerns[1], nkerns[0], 5, 5),
- poolsize=(2, 2)
- )
-
- # the HiddenLayer being fully-connected, it operates on 2D matrices of
- # shape (batch_size, num_pixels) (i.e matrix of rasterized images).
- # This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
- # or (500, 50 * 4 * 4) = (500, 800) with the default values.
- layer2_input = layer1.output.flatten(2)
-
- # construct a fully-connected sigmoidal layer
- layer2 = HiddenLayer(
- rng,
- input=layer2_input,
- n_in=nkerns[1] * 4 * 4,
- n_out=500,
- activation=T.tanh
- )
-
- # classify the values of the fully-connected sigmoidal layer
- layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
-
- # the cost we minimize during training is the NLL of the model
- cost = layer3.negative_log_likelihood(y)
-
- # create a function to compute the mistakes that are made by the model
- test_model = theano.function(
- [index],
- layer3.errors(y),
- givens={
- x: test_set_x[index * batch_size: (index + 1) * batch_size],
- y: test_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- validate_model = theano.function(
- [index],
- layer3.errors(y),
- givens={
- x: valid_set_x[index * batch_size: (index + 1) * batch_size],
- y: valid_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- # create a list of all model parameters to be fit by gradient descent
- params = layer3.params + layer2.params + layer1.params + layer0.params
-
- # create a list of gradients for all model parameters
- grads = T.grad(cost, params)
-
- # train_model is a function that updates the model parameters by
- # SGD Since this model has many parameters, it would be tedious to
- # manually create an update rule for each model parameter. We thus
- # create the updates list by automatically looping over all
- # (params[i], grads[i]) pairs.
- updates = [
- (param_i, param_i - learning_rate * grad_i)
- for param_i, grad_i in zip(params, grads)
- ]
-
- train_model = theano.function(
- [index],
- cost,
- updates=updates,
- givens={
- x: train_set_x[index * batch_size: (index + 1) * batch_size],
- y: train_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
- # end-snippet-1
-
- ###############
- # TRAIN MODEL #
- ###############
- print '... training'
- # early-stopping parameters
- patience = 10000 # look as this many examples regardless
- patience_increase = 2 # wait this much longer when a new best is
- # found
- improvement_threshold = 0.995 # a relative improvement of this much is
- # considered significant
- validation_frequency = min(n_train_batches, patience / 2)
- # go through this many
- # minibatches before checking the network
- # on the validation set; in this case we
- # check every epoch
-
- best_validation_loss = numpy.inf
- best_iter = 0
- test_score = 0.
- start_time = time.clock()
-
- epoch = 0
- done_looping = False
-
-    while (epoch < n_epochs) and (not done_looping):
-        epoch = epoch + 1
-        for minibatch_index in xrange(n_train_batches):
-
-            iter = (epoch - 1) * n_train_batches + minibatch_index
-
-            if iter % 100 == 0:
-                print 'training @ iter = ', iter
-            cost_ij = train_model(minibatch_index)
-
-            if (iter + 1) % validation_frequency == 0:
-
-                # compute zero-one loss on validation set
-                validation_losses = [validate_model(i) for i
-                                     in xrange(n_valid_batches)]
-                this_validation_loss = numpy.mean(validation_losses)
-                print('epoch %i, minibatch %i/%i, validation error %f %%' %
-                      (epoch, minibatch_index + 1, n_train_batches,
-                       this_validation_loss * 100.))
-
-                # if we got the best validation score until now
-                if this_validation_loss < best_validation_loss:
-
-                    #improve patience if loss improvement is good enough
-                    if this_validation_loss < best_validation_loss *  \
-                       improvement_threshold:
-                        patience = max(patience, iter * patience_increase)
-
-                    # save best validation score and iteration number
-                    best_validation_loss = this_validation_loss
-                    best_iter = iter
-
-                    # test it on the test set
-                    test_losses = [
-                        test_model(i)
-                        for i in xrange(n_test_batches)
-                    ]
-                    test_score = numpy.mean(test_losses)
-                    print(('     epoch %i, minibatch %i/%i, test error of '
-                           'best model %f %%') %
-                          (epoch, minibatch_index + 1, n_train_batches,
-                           test_score * 100.))
-
-            if patience <= iter:
-                done_looping = True
-                break
-
-    end_time = time.clock()
-    print('Optimization complete.')
-    print('Best validation score of %f %% obtained at iteration %i, '
-          'with test performance %f %%' %
-          (best_validation_loss * 100., best_iter + 1, test_score * 100.))
-    print >> sys.stderr, ('The code for file ' +
- os.path.split(__file__)[1] +
- ' ran for %.2fm' % ((end_time - start_time) / 60.))
-
-if __name__ == '__main__':
- evaluate_lenet5()
-
-
-def experiment(state, channel):
- evaluate_lenet5(state.learning_rate, dataset=state.dataset)
\ No newline at end of file
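A quick sanity check on the shape comments in the removed evaluate_lenet5: each LeNetConvPoolLayer applies a 'valid' 5x5 convolution followed by 2x2 max-pooling, so a 28x28 MNIST image shrinks to 12x12 and then to 4x4, which is where the hidden layer's n_in = nkerns[1] * 4 * 4 comes from. A tiny helper for that arithmetic (illustrative, not from the removed file):

    def conv_pool_out(size, filter_size=5, pool_size=2):
        # 'valid' convolution, then non-overlapping max-pooling
        return (size - filter_size + 1) // pool_size

    size = 28
    for layer in (0, 1):
        size = conv_pool_out(size)
        print('after layer %d: %dx%d feature maps' % (layer, size, size))
    # prints 12x12, then 4x4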
diff --git a/LeNet-5/logistic_sgd.py b/LeNet-5/logistic_sgd.py
deleted file mode 100644
index 83f46d5..0000000
--- a/LeNet-5/logistic_sgd.py
+++ /dev/null
@@ -1,445 +0,0 @@
-#coding=UTF-8
-
-# logistic regression
-# http://deeplearning.net/tutorial/logreg.html
-# http://www.cnblogs.com/xueliangliu/archive/2013/04/07/3006014.html
-
-
-"""
-This tutorial introduces logistic regression using Theano and stochastic
-gradient descent.
-
-Logistic regression is a probabilistic, linear classifier. It is parametrized
-by a weight matrix :math:`W` and a bias vector :math:`b`. Classification is
-done by projecting data points onto a set of hyperplanes, the distance to
-which is used to determine a class membership probability.
-
-Mathematically, this can be written as:
-
-.. math::
- P(Y=i|x, W,b) &= softmax_i(W x + b) \\
- &= \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}
-
-
-The output of the model or prediction is then done by taking the argmax of
-the vector whose i'th element is P(Y=i|x).
-
-.. math::
-
- y_{pred} = argmax_i P(Y=i|x,W,b)
-
-
-This tutorial presents a stochastic gradient descent optimization method
-suitable for large datasets.
-
-
-References:
-
- - textbooks: "Pattern Recognition and Machine Learning" -
- Christopher M. Bishop, section 4.3.2
-
-"""
-__docformat__ = 'restructedtext en'
-
-import cPickle
-import gzip
-import os
-import sys
-import time
-
-import numpy
-
-import theano
-import theano.tensor as T
-
-
-class LogisticRegression(object):
- """Multi-class Logistic Regression Class
-
- The logistic regression is fully described by a weight matrix :math:`W`
- and bias vector :math:`b`. Classification is done by projecting data
- points onto a set of hyperplanes, the distance to which is used to
- determine a class membership probability.
- """
-
- def __init__(self, input, n_in, n_out):
- """ Initialize the parameters of the logistic regression
-
- :type input: theano.tensor.TensorType
- :param input: symbolic variable that describes the input of the
- architecture (one minibatch)
-
- :type n_in: int
- :param n_in: number of input units, the dimension of the space in
- which the datapoints lie
-
- :type n_out: int
- :param n_out: number of output units, the dimension of the space in
- which the labels lie
-
- """
- # start-snippet-1
- # initialize with 0 the weights W as a matrix of shape (n_in, n_out)
- self.W = theano.shared(
- value=numpy.zeros(
- (n_in, n_out),
- dtype=theano.config.floatX
- ),
- name='W',
- borrow=True
- )
- # initialize the baises b as a vector of n_out 0s
- self.b = theano.shared(
- value=numpy.zeros(
- (n_out,),
- dtype=theano.config.floatX
- ),
- name='b',
- borrow=True
- )
-
- # symbolic expression for computing the matrix of class-membership
- # probabilities
- # Where:
- # W is a matrix where column-k represents the separation hyperplane
- # for class-k
- # x is a matrix where row-j represents input training sample-j
- # b is a vector where element-k represents the free parameter of
- # hyperplane-k
- self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
-
- # symbolic description of how to compute prediction as class whose
- # probability is maximal
- self.y_pred = T.argmax(self.p_y_given_x, axis=1)
- # end-snippet-1
-
- # parameters of the model
- self.params = [self.W, self.b]
-
- def negative_log_likelihood(self, y):
- """Return the mean of the negative log-likelihood of the prediction
- of this model under a given target distribution.
-
- .. math::
-
- \frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
- \frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
- \log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
- \ell (\theta=\{W,b\}, \mathcal{D})
-
- :type y: theano.tensor.TensorType
- :param y: corresponds to a vector that gives for each example the
- correct label
-
- Note: we use the mean instead of the sum so that
- the learning rate is less dependent on the batch size
- """
- # start-snippet-2
- # y.shape[0] is (symbolically) the number of rows in y, i.e.,
- # number of examples (call it n) in the minibatch
- # T.arange(y.shape[0]) is a symbolic vector which will contain
- # [0,1,2,... n-1]. T.log(self.p_y_given_x) is a matrix of
- # Log-Probabilities (call it LP) with one row per example and
- # one column per class. LP[T.arange(y.shape[0]),y] is a vector
- # v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
- # LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
- # the mean (across minibatch examples) of the elements in v,
- # i.e., the mean log-likelihood across the minibatch.
- return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
- # end-snippet-2
-
- def errors(self, y):
- """Return a float representing the number of errors in the minibatch
- over the total number of examples of the minibatch; zero-one
- loss over the size of the minibatch
-
- :type y: theano.tensor.TensorType
- :param y: corresponds to a vector that gives for each example the
- correct label
- """
-
- # check if y has same dimension of y_pred
- if y.ndim != self.y_pred.ndim:
- raise TypeError(
- 'y should have the same shape as self.y_pred',
- ('y', y.type, 'y_pred', self.y_pred.type)
- )
- # check if y is of the correct datatype
- if y.dtype.startswith('int'):
- # the T.neq operator returns a vector of 0s and 1s, where 1
- # represents a mistake in prediction
- return T.mean(T.neq(self.y_pred, y))
- else:
- raise NotImplementedError()
-
-
-def load_data(dataset):
- ''' Loads the dataset
-
- :type dataset: string
- :param dataset: the path to the dataset (here MNIST)
- '''
-
- #############
- # LOAD DATA #
- #############
-
- # Download the MNIST dataset if it is not present
- data_dir, data_file = os.path.split(dataset)
- if data_dir == "" and not os.path.isfile(dataset):
- # Check if dataset is in the data directory.
- new_path = os.path.join(
- os.path.split(__file__)[0],
- "..",
- "data",
- dataset
- )
- if os.path.isfile(new_path) or data_file == 'mnist.pkl.gz':
- dataset = new_path
-
- if (not os.path.isfile(dataset)) and data_file == 'mnist.pkl.gz':
- import urllib
- origin = (
- 'http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz'
- )
- print 'Downloading data from %s' % origin
- urllib.urlretrieve(origin, dataset)
-
- print '... loading data'
-
- # Load the dataset
- f = gzip.open(dataset, 'rb')
- train_set, valid_set, test_set = cPickle.load(f)
- f.close()
- #train_set, valid_set, test_set format: tuple(input, target)
- #input is a numpy.ndarray of 2 dimensions (a matrix)
- #whose rows each correspond to an example. target is a
- #numpy.ndarray of 1 dimension (a vector) that has the same length as
- #the number of rows in the input. It gives the target
- #for the example with the same index in the input.
-
- def shared_dataset(data_xy, borrow=True):
- """ Function that loads the dataset into shared variables
-
- The reason we store our dataset in shared variables is to allow
- Theano to copy it into the GPU memory (when code is run on GPU).
- Since copying data into the GPU is slow, copying a minibatch every
- time it is needed (the default behaviour if the data is not in a
- shared variable) would lead to a large decrease in performance.
- """
- data_x, data_y = data_xy
- shared_x = theano.shared(numpy.asarray(data_x,
- dtype=theano.config.floatX),
- borrow=borrow)
- shared_y = theano.shared(numpy.asarray(data_y,
- dtype=theano.config.floatX),
- borrow=borrow)
- # When storing data on the GPU it has to be stored as floats
- # therefore we will store the labels as ``floatX`` as well
- # (``shared_y`` does exactly that). But during our computations
- # we need them as ints (we use labels as index, and if they are
- # floats it doesn't make sense) therefore instead of returning
- # ``shared_y`` we will have to cast it to int. This little hack
- # lets us get around this issue
- return shared_x, T.cast(shared_y, 'int32')
-
- test_set_x, test_set_y = shared_dataset(test_set)
- valid_set_x, valid_set_y = shared_dataset(valid_set)
- train_set_x, train_set_y = shared_dataset(train_set)
-
- rval = [(train_set_x, train_set_y), (valid_set_x, valid_set_y),
- (test_set_x, test_set_y)]
- return rval
-
-
-def sgd_optimization_mnist(learning_rate=0.13, n_epochs=1000,
- dataset='mnist.pkl.gz',
- batch_size=600):
- """
- Demonstrate stochastic gradient descent optimization of a log-linear
- model
-
- This is demonstrated on MNIST.
-
- :type learning_rate: float
- :param learning_rate: learning rate used (factor for the stochastic
- gradient)
-
- :type n_epochs: int
- :param n_epochs: maximal number of epochs to run the optimizer
-
- :type dataset: string
- :param dataset: the path of the MNIST dataset file from
- http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz
-
- """
- datasets = load_data(dataset)
-
- train_set_x, train_set_y = datasets[0]
- valid_set_x, valid_set_y = datasets[1]
- test_set_x, test_set_y = datasets[2]
-
- # compute number of minibatches for training, validation and testing
- n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
- n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size
- n_test_batches = test_set_x.get_value(borrow=True).shape[0] / batch_size
-
- ######################
- # BUILD ACTUAL MODEL #
- ######################
- print '... building the model'
-
- # allocate symbolic variables for the data
- index = T.lscalar() # index to a [mini]batch
-
- # generate symbolic variables for input (x and y represent a
- # minibatch)
- x = T.matrix('x') # data, presented as rasterized images
- y = T.ivector('y') # labels, presented as 1D vector of [int] labels
-
- # construct the logistic regression class
- # Each MNIST image has size 28*28
- classifier = LogisticRegression(input=x, n_in=28 * 28, n_out=10)
-
- # the cost we minimize during training is the negative log likelihood of
- # the model in symbolic format
- cost = classifier.negative_log_likelihood(y)
-
- # compiling a Theano function that computes the mistakes that are made by
- # the model on a minibatch
- test_model = theano.function(
- inputs=[index],
- outputs=classifier.errors(y),
- givens={
- x: test_set_x[index * batch_size: (index + 1) * batch_size],
- y: test_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- validate_model = theano.function(
- inputs=[index],
- outputs=classifier.errors(y),
- givens={
- x: valid_set_x[index * batch_size: (index + 1) * batch_size],
- y: valid_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
-
- # compute the gradient of cost with respect to theta = (W,b)
- g_W = T.grad(cost=cost, wrt=classifier.W)
- g_b = T.grad(cost=cost, wrt=classifier.b)
-
- # start-snippet-3
- # specify how to update the parameters of the model as a list of
- # (variable, update expression) pairs.
- updates = [(classifier.W, classifier.W - learning_rate * g_W),
- (classifier.b, classifier.b - learning_rate * g_b)]
-
- # compiling a Theano function `train_model` that returns the cost, but in
- # the same time updates the parameter of the model based on the rules
- # defined in `updates`
- train_model = theano.function(
- inputs=[index],
- outputs=cost,
- updates=updates,
- givens={
- x: train_set_x[index * batch_size: (index + 1) * batch_size],
- y: train_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
- # end-snippet-3
-
- ###############
- # TRAIN MODEL #
- ###############
- print '... training the model'
- # early-stopping parameters
- patience = 5000 # look as this many examples regardless
- patience_increase = 2 # wait this much longer when a new best is
- # found
- improvement_threshold = 0.995 # a relative improvement of this much is
- # considered significant
- validation_frequency = min(n_train_batches, patience / 2)
- # go through this many
- # minibatches before checking the network
- # on the validation set; in this case we
- # check every epoch
-
- best_validation_loss = numpy.inf
- test_score = 0.
- start_time = time.clock()
-
- done_looping = False
- epoch = 0
-    while (epoch < n_epochs) and (not done_looping):
-        epoch = epoch + 1
-        for minibatch_index in xrange(n_train_batches):
-
-            minibatch_avg_cost = train_model(minibatch_index)
-            # iteration number
-            iter = (epoch - 1) * n_train_batches + minibatch_index
-
-            if (iter + 1) % validation_frequency == 0:
-                # compute zero-one loss on validation set
-                validation_losses = [validate_model(i)
-                                     for i in xrange(n_valid_batches)]
-                this_validation_loss = numpy.mean(validation_losses)
-
-                print(
-                    'epoch %i, minibatch %i/%i, validation error %f %%' %
-                    (
-                        epoch,
-                        minibatch_index + 1,
-                        n_train_batches,
-                        this_validation_loss * 100.
-                    )
-                )
-
-                # if we got the best validation score until now
-                if this_validation_loss < best_validation_loss:
-                    #improve patience if loss improvement is good enough
-                    if this_validation_loss < best_validation_loss *  \
-                       improvement_threshold:
-                        patience = max(patience, iter * patience_increase)
-
-                    best_validation_loss = this_validation_loss
-                    # test it on the test set
-
-                    test_losses = [test_model(i)
-                                   for i in xrange(n_test_batches)]
-                    test_score = numpy.mean(test_losses)
-
-                    print(
-                        (
-                            '     epoch %i, minibatch %i/%i, test error of'
-                            ' best model %f %%'
-                        ) %
-                        (
-                            epoch,
-                            minibatch_index + 1,
-                            n_train_batches,
-                            test_score * 100.
-                        )
-                    )
-
-            if patience <= iter:
-                done_looping = True
-                break
-
-    end_time = time.clock()
-    print(
-        (
-            'Optimization complete with best validation score of %f %%,'
-            ' with test performance %f %%'
-        )
-        % (best_validation_loss * 100., test_score * 100.)
-    )
-    print 'The code ran for %d epochs, with %f epochs/sec' % (
-        epoch, 1. * epoch / (end_time - start_time))
-    print >> sys.stderr, ('The code for file ' +
- os.path.split(__file__)[1] +
- ' ran for %.1fs' % ((end_time - start_time)))
-
-if __name__ == '__main__':
- sgd_optimization_mnist()
-
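The model the removed logistic_sgd.py trains is, as its docstring states, P(Y=i|x,W,b) = softmax_i(Wx + b) with a mean negative log-likelihood cost. The same two quantities in plain NumPy (an illustrative sketch of the math, not the Theano graph the file actually compiled):

    import numpy as np

    def softmax(a):
        e = np.exp(a - a.max(axis=1, keepdims=True))  # shift for stability
        return e / e.sum(axis=1, keepdims=True)

    def negative_log_likelihood(x, y, W, b):
        # mean over the minibatch of -log P(Y = y | x)
        p_y_given_x = softmax(x.dot(W) + b)
        return -np.mean(np.log(p_y_given_x[np.arange(y.shape[0]), y]))

    rng = np.random.RandomState(0)
    x = rng.rand(600, 28 * 28)        # one minibatch (batch_size=600)
    y = rng.randint(0, 10, size=600)  # integer class labels
    W, b = np.zeros((28 * 28, 10)), np.zeros(10)
    print(negative_log_likelihood(x, y, W, b))  # ~log(10) at initialization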
diff --git a/LeNet-5/mlp.py b/LeNet-5/mlp.py
deleted file mode 100644
index 3efd0e4..0000000
--- a/LeNet-5/mlp.py
+++ /dev/null
@@ -1,404 +0,0 @@
-"""
-This tutorial introduces the multilayer perceptron using Theano.
-
- A multilayer perceptron is a logistic regressor where,
-instead of feeding the input to the logistic regression, you insert an
-intermediate layer, called the hidden layer, that has a nonlinear
-activation function (usually tanh or sigmoid). One can use many such
-hidden layers making the architecture deep. The tutorial will also tackle
-the problem of MNIST digit classification.
-
-.. math::
-
- f(x) = G( b^{(2)} + W^{(2)}( s( b^{(1)} + W^{(1)} x))),
-
-References:
-
- - textbooks: "Pattern Recognition and Machine Learning" -
- Christopher M. Bishop, section 5
-
-"""
-__docformat__ = 'restructedtext en'
-
-
-import os
-import sys
-import time
-
-import numpy
-
-import theano
-import theano.tensor as T
-
-
-from logistic_sgd import LogisticRegression, load_data
-
-
-# start-snippet-1
-class HiddenLayer(object):
- def __init__(self, rng, input, n_in, n_out, W=None, b=None,
- activation=T.tanh):
- """
- Typical hidden layer of an MLP: units are fully-connected and have
- sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
- and the bias vector b is of shape (n_out,).
-
- NOTE : The nonlinearity used here is tanh
-
- Hidden unit activation is given by: tanh(dot(input,W) + b)
-
- :type rng: numpy.random.RandomState
- :param rng: a random number generator used to initialize weights
-
- :type input: theano.tensor.dmatrix
- :param input: a symbolic tensor of shape (n_examples, n_in)
-
- :type n_in: int
- :param n_in: dimensionality of input
-
- :type n_out: int
- :param n_out: number of hidden units
-
- :type activation: theano.Op or function
- :param activation: Non linearity to be applied in the hidden
- layer
- """
- self.input = input
- # end-snippet-1
-
- # `W` is initialized with `W_values`, which is uniformly sampled
- # from -sqrt(6./(n_in+n_hidden)) to sqrt(6./(n_in+n_hidden))
- # for the tanh activation function.
- # The output of uniform is converted using asarray to dtype
- # theano.config.floatX so that the code is runnable on GPU.
- # Note : optimal initialization of weights is dependent on the
- # activation function used (among other things).
- # For example, results presented in [Xavier10] suggest that you
- # should use 4 times larger initial weights for sigmoid
- # compared to tanh
- # We have no info for other functions, so we use the same as
- # tanh.
- if W is None:
- W_values = numpy.asarray(
- rng.uniform(
- low=-numpy.sqrt(6. / (n_in + n_out)),
- high=numpy.sqrt(6. / (n_in + n_out)),
- size=(n_in, n_out)
- ),
- dtype=theano.config.floatX
- )
- if activation == theano.tensor.nnet.sigmoid:
- W_values *= 4
-
- W = theano.shared(value=W_values, name='W', borrow=True)
-
- if b is None:
- b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
- b = theano.shared(value=b_values, name='b', borrow=True)
-
- self.W = W
- self.b = b
-
- lin_output = T.dot(input, self.W) + self.b
- self.output = (
- lin_output if activation is None
- else activation(lin_output)
- )
- # parameters of the model
- self.params = [self.W, self.b]
-
-
-# start-snippet-2
-class MLP(object):
- """Multi-Layer Perceptron Class
-
- A multilayer perceptron is a feedforward artificial neural network model
- that has one layer or more of hidden units and nonlinear activations.
- Intermediate layers usually have as activation function tanh or the
- sigmoid function (defined here by a ``HiddenLayer`` class) while the
- top layer is a softmax layer (defined here by a ``LogisticRegression``
- class).
- """
-
- def __init__(self, rng, input, n_in, n_hidden, n_out):
- """Initialize the parameters for the multilayer perceptron
-
- :type rng: numpy.random.RandomState
- :param rng: a random number generator used to initialize weights
-
- :type input: theano.tensor.TensorType
- :param input: symbolic variable that describes the input of the
- architecture (one minibatch)
-
- :type n_in: int
- :param n_in: number of input units, the dimension of the space in
- which the datapoints lie
-
- :type n_hidden: int
- :param n_hidden: number of hidden units
-
- :type n_out: int
- :param n_out: number of output units, the dimension of the space in
- which the labels lie
-
- """
-
- # Since we are dealing with a one hidden layer MLP, this will translate
- # into a HiddenLayer with a tanh activation function connected to the
- # LogisticRegression layer; the activation function can be replaced by
- # sigmoid or any other nonlinear function
- self.hiddenLayer = HiddenLayer(
- rng=rng,
- input=input,
- n_in=n_in,
- n_out=n_hidden,
- activation=T.tanh
- )
-
- # The logistic regression layer gets as input the hidden units
- # of the hidden layer
- self.logRegressionLayer = LogisticRegression(
- input=self.hiddenLayer.output,
- n_in=n_hidden,
- n_out=n_out
- )
- # end-snippet-2 start-snippet-3
- # L1 norm ; one regularization option is to enforce L1 norm to
- # be small
- self.L1 = (
- abs(self.hiddenLayer.W).sum()
- + abs(self.logRegressionLayer.W).sum()
- )
-
- # square of L2 norm ; one regularization option is to enforce
- # square of L2 norm to be small
- self.L2_sqr = (
- (self.hiddenLayer.W ** 2).sum()
- + (self.logRegressionLayer.W ** 2).sum()
- )
-
- # negative log likelihood of the MLP is given by the negative
- # log likelihood of the output of the model, computed in the
- # logistic regression layer
- self.negative_log_likelihood = (
- self.logRegressionLayer.negative_log_likelihood
- )
- # same holds for the function computing the number of errors
- self.errors = self.logRegressionLayer.errors
-
- # the parameters of the model are the parameters of the two layers it is
- # made out of
- self.params = self.hiddenLayer.params + self.logRegressionLayer.params
- # end-snippet-3
-
-
-def test_mlp(learning_rate=0.01, L1_reg=0.00, L2_reg=0.0001, n_epochs=1000,
- dataset='mnist.pkl.gz', batch_size=20, n_hidden=500):
- """
- Demonstrate stochastic gradient descent optimization for a multilayer
- perceptron
-
- This is demonstrated on MNIST.
-
- :type learning_rate: float
- :param learning_rate: learning rate used (factor for the stochastic
- gradient)
-
- :type L1_reg: float
- :param L1_reg: L1-norm's weight when added to the cost (see
- regularization)
-
- :type L2_reg: float
- :param L2_reg: L2-norm's weight when added to the cost (see
- regularization)
-
- :type n_epochs: int
- :param n_epochs: maximal number of epochs to run the optimizer
-
- :type dataset: string
- :param dataset: the path of the MNIST dataset file from
- http://www.iro.umontreal.ca/~lisa/deep/data/mnist/mnist.pkl.gz
-
-
- """
- datasets = load_data(dataset)
-
- train_set_x, train_set_y = datasets[0]
- valid_set_x, valid_set_y = datasets[1]
- test_set_x, test_set_y = datasets[2]
-
- # compute number of minibatches for training, validation and testing
- n_train_batches = train_set_x.get_value(borrow=True).shape[0] / batch_size
- n_valid_batches = valid_set_x.get_value(borrow=True).shape[0] / batch_size
- n_test_batches = test_set_x.get_value(borrow=True).shape[0] / batch_size
-
- ######################
- # BUILD ACTUAL MODEL #
- ######################
- print '... building the model'
-
- # allocate symbolic variables for the data
- index = T.lscalar() # index to a [mini]batch
- x = T.matrix('x') # the data is presented as rasterized images
- y = T.ivector('y') # the labels are presented as 1D vector of
- # [int] labels
-
- rng = numpy.random.RandomState(1234)
-
- # construct the MLP class
- classifier = MLP(
- rng=rng,
- input=x,
- n_in=28 * 28,
- n_hidden=n_hidden,
- n_out=10
- )
-
- # start-snippet-4
- # the cost we minimize during training is the negative log likelihood of
- # the model plus the regularization terms (L1 and L2); cost is expressed
- # here symbolically
- cost = (
- classifier.negative_log_likelihood(y)
- + L1_reg * classifier.L1
- + L2_reg * classifier.L2_sqr
- )
- # end-snippet-4
-
- # compiling a Theano function that computes the mistakes that are made
- # by the model on a minibatch
- test_model = theano.function(
- inputs=[index],
- outputs=classifier.errors(y),
- givens={
- x: test_set_x[index * batch_size:(index + 1) * batch_size],
- y: test_set_y[index * batch_size:(index + 1) * batch_size]
- }
- )
-
- validate_model = theano.function(
- inputs=[index],
- outputs=classifier.errors(y),
- givens={
- x: valid_set_x[index * batch_size:(index + 1) * batch_size],
- y: valid_set_y[index * batch_size:(index + 1) * batch_size]
- }
- )
-
- # start-snippet-5
- # compute the gradient of cost with respect to theta (stored in params)
- # the resulting gradients will be stored in a list gparams
- gparams = [T.grad(cost, param) for param in classifier.params]
-
- # specify how to update the parameters of the model as a list of
- # (variable, update expression) pairs
-
- # given two lists of the same length, A = [a1, a2, a3, a4] and
- # B = [b1, b2, b3, b4], zip generates a list C of the same size, where
- # each element is a pair formed from the two lists:
- # C = [(a1, b1), (a2, b2), (a3, b3), (a4, b4)]
- updates = [
- (param, param - learning_rate * gparam)
- for param, gparam in zip(classifier.params, gparams)
- ]
-
- # compiling a Theano function `train_model` that returns the cost, but
- # in the same time updates the parameter of the model based on the rules
- # defined in `updates`
- train_model = theano.function(
- inputs=[index],
- outputs=cost,
- updates=updates,
- givens={
- x: train_set_x[index * batch_size: (index + 1) * batch_size],
- y: train_set_y[index * batch_size: (index + 1) * batch_size]
- }
- )
- # end-snippet-5
-
- ###############
- # TRAIN MODEL #
- ###############
- print '... training'
-
- # early-stopping parameters
- patience = 10000 # look as this many examples regardless
- patience_increase = 2 # wait this much longer when a new best is
- # found
- improvement_threshold = 0.995 # a relative improvement of this much is
- # considered significant
- validation_frequency = min(n_train_batches, patience / 2)
- # go through this many
- # minibatches before checking the network
- # on the validation set; in this case we
- # check every epoch
-
- best_validation_loss = numpy.inf
- best_iter = 0
- test_score = 0.
- start_time = time.clock()
-
- epoch = 0
- done_looping = False
-
-    while (epoch < n_epochs) and (not done_looping):
-        epoch = epoch + 1
-        for minibatch_index in xrange(n_train_batches):
-
-            minibatch_avg_cost = train_model(minibatch_index)
-            # iteration number
-            iter = (epoch - 1) * n_train_batches + minibatch_index
-
-            if (iter + 1) % validation_frequency == 0:
-                # compute zero-one loss on validation set
-                validation_losses = [validate_model(i) for i
-                                     in xrange(n_valid_batches)]
-                this_validation_loss = numpy.mean(validation_losses)
-
-                print(
-                    'epoch %i, minibatch %i/%i, validation error %f %%' %
-                    (
-                        epoch,
-                        minibatch_index + 1,
-                        n_train_batches,
-                        this_validation_loss * 100.
-                    )
-                )
-
-                # if we got the best validation score until now
-                if this_validation_loss < best_validation_loss:
-                    #improve patience if loss improvement is good enough
-                    if (
-                        this_validation_loss < best_validation_loss *
-                        improvement_threshold
-                    ):
-                        patience = max(patience, iter * patience_increase)
-
-                    best_validation_loss = this_validation_loss
-                    best_iter = iter
-
-                    # test it on the test set
-                    test_losses = [test_model(i) for i
-                                   in xrange(n_test_batches)]
-                    test_score = numpy.mean(test_losses)
-
-                    print(('     epoch %i, minibatch %i/%i, test error of '
-                           'best model %f %%') %
-                          (epoch, minibatch_index + 1, n_train_batches,
-                           test_score * 100.))
-
-            if patience <= iter:
-                done_looping = True
-                break
-
-    end_time = time.clock()
-    print(('Optimization complete. Best validation score of %f %% '
-           'obtained at iteration %i, with test performance %f %%') %
-          (best_validation_loss * 100., best_iter + 1, test_score * 100.))
-    print >> sys.stderr, ('The code for file ' +
- os.path.split(__file__)[1] +
- ' ran for %.2fm' % ((end_time - start_time) / 60.))
-
-
-if __name__ == '__main__':
- test_mlp()
\ No newline at end of file
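The removed mlp.py builds the function from its docstring, f(x) = G(b^(2) + W^(2) s(b^(1) + W^(1) x)), with tanh hidden units and the uniform initialization its comments describe. A compact NumPy sketch of that forward pass (illustrative; the real file compiled a Theano graph and trained it with SGD):

    import numpy as np

    def init_W(rng, n_in, n_out):
        # uniform in +/- sqrt(6 / (n_in + n_out)), the tanh heuristic
        # from [Xavier10] mentioned in the HiddenLayer comments
        bound = np.sqrt(6.0 / (n_in + n_out))
        return rng.uniform(-bound, bound, size=(n_in, n_out))

    def mlp_forward(x, W1, b1, W2, b2):
        h = np.tanh(x.dot(W1) + b1)              # hidden layer s(...)
        a = h.dot(W2) + b2                       # pre-softmax activations
        e = np.exp(a - a.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)  # softmax top layer G(...)

    rng = np.random.RandomState(1234)
    W1, b1 = init_W(rng, 28 * 28, 500), np.zeros(500)
    W2, b2 = init_W(rng, 500, 10), np.zeros(10)
    x = rng.rand(20, 28 * 28)
    print(mlp_forward(x, W1, b1, W2, b2).argmax(axis=1))  # predicted classes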
diff --git a/LeNet-5/runGPU.py b/LeNet-5/runGPU.py
deleted file mode 100644
index fbdcdae..0000000
--- a/LeNet-5/runGPU.py
+++ /dev/null
@@ -1,22 +0,0 @@
-from theano import function, config, shared, sandbox
-import theano.tensor as T
-import numpy
-import time
-
-vlen = 10 * 30 * 768 # 10 x #cores x # threads per core
-iters = 1000
-
-rng = numpy.random.RandomState(22)
-x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
-f = function([], T.exp(x))
-print f.maker.fgraph.toposort()
-t0 = time.time()
-for i in xrange(iters):
- r = f()
-t1 = time.time()
-print 'Looping %d times took' % iters, t1 - t0, 'seconds'
-print 'Result is', r
-if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
- print 'Used the cpu'
-else:
- print 'Used the gpu'
\ No newline at end of file
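The removed runGPU.py is the check script from Theano's "Using the GPU" guide: after compiling f, it inspects the optimized graph, and if only T.Elemwise ops remain, the computation stayed on the CPU. The device is chosen at launch time through THEANO_FLAGS, for example (flag values as in the Theano documentation of that era; later releases use device=cuda instead of device=gpu):

    THEANO_FLAGS=mode=FAST_RUN,device=cpu,floatX=float32 python runGPU.py
    THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python runGPU.py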
diff --git a/LeNet-5/utils.py b/LeNet-5/utils.py
deleted file mode 100644
index 3b50019..0000000
--- a/LeNet-5/utils.py
+++ /dev/null
@@ -1,139 +0,0 @@
-""" This file contains different utility functions that are not connected
-in any way to the networks presented in the tutorials, but rather help in
-processing the outputs into a more understandable way.
-
-For example ``tile_raster_images`` helps in generating an easy-to-grasp
-image from a set of samples or weights.
-"""
-
-
-import numpy
-
-
-def scale_to_unit_interval(ndar, eps=1e-8):
- """ Scales all values in the ndarray ndar to be between 0 and 1 """
- ndar = ndar.copy()
- ndar -= ndar.min()
- ndar *= 1.0 / (ndar.max() + eps)
- return ndar
-
-
-def tile_raster_images(X, img_shape, tile_shape, tile_spacing=(0, 0),
-                       scale_rows_to_unit_interval=True,
-                       output_pixel_vals=True):
- """
- Transform an array with one flattened image per row, into an array in
- which images are reshaped and layed out like tiles on a floor.
-
- This function is useful for visualizing datasets whose rows are images,
- and also columns of matrices for transforming those rows
- (such as the first layer of a neural net).
-
- :type X: a 2-D ndarray or a tuple of 4 channels, elements of which can
- be 2-D ndarrays or None;
- :param X: a 2-D array in which every row is a flattened image.
-
- :type img_shape: tuple; (height, width)
- :param img_shape: the original shape of each image
-
- :type tile_shape: tuple; (rows, cols)
- :param tile_shape: the number of images to tile (rows, cols)
-
- :param output_pixel_vals: if output should be pixel values (i.e. int8
- values) or floats
-
- :param scale_rows_to_unit_interval: if the values need to be scaled before
- being plotted to [0,1] or not
-
-
- :returns: array suitable for viewing as an image.
- (See:`Image.fromarray`.)
- :rtype: a 2-d array with same dtype as X.
-
- """
-
-    assert len(img_shape) == 2
-    assert len(tile_shape) == 2
-    assert len(tile_spacing) == 2
-
-    # The expression below can be re-written in a more C style as
-    # follows:
-    #
-    # out_shape = [0, 0]
-    # out_shape[0] = (img_shape[0] + tile_spacing[0]) * tile_shape[0] -
-    #                tile_spacing[0]
-    # out_shape[1] = (img_shape[1] + tile_spacing[1]) * tile_shape[1] -
-    #                tile_spacing[1]
-    out_shape = [
-        (ishp + tsp) * tshp - tsp
-        for ishp, tshp, tsp in zip(img_shape, tile_shape, tile_spacing)
-    ]
-
-    if isinstance(X, tuple):
-        assert len(X) == 4
-        # Create an output numpy ndarray to store the image
-        if output_pixel_vals:
-            out_array = numpy.zeros((out_shape[0], out_shape[1], 4),
-                                    dtype='uint8')
-        else:
-            out_array = numpy.zeros((out_shape[0], out_shape[1], 4),
-                                    dtype=X.dtype)
-
-        # colors default to 0, alpha defaults to 1 (opaque)
-        if output_pixel_vals:
-            channel_defaults = [0, 0, 0, 255]
-        else:
-            channel_defaults = [0., 0., 0., 1.]
-
-        for i in xrange(4):
-            if X[i] is None:
-                # if channel is None, fill it with zeros of the correct
-                # dtype
-                dt = out_array.dtype
-                if output_pixel_vals:
-                    dt = 'uint8'
-                out_array[:, :, i] = numpy.zeros(
-                    out_shape,
-                    dtype=dt
-                ) + channel_defaults[i]
-            else:
-                # use a recursive call to compute the channel and store it
-                # in the output
-                out_array[:, :, i] = tile_raster_images(
-                    X[i], img_shape, tile_shape, tile_spacing,
-                    scale_rows_to_unit_interval, output_pixel_vals)
-        return out_array
-
-    else:
-        # if we are dealing with only one channel
-        H, W = img_shape
-        Hs, Ws = tile_spacing
-
-        # generate a matrix to store the output
-        dt = X.dtype
-        if output_pixel_vals:
-            dt = 'uint8'
-        out_array = numpy.zeros(out_shape, dtype=dt)
-
-        for tile_row in xrange(tile_shape[0]):
-            for tile_col in xrange(tile_shape[1]):
-                if tile_row * tile_shape[1] + tile_col < X.shape[0]:
-                    this_x = X[tile_row * tile_shape[1] + tile_col]
-                    if scale_rows_to_unit_interval:
-                        # if we should scale values to be between 0 and 1
-                        # do this by calling the `scale_to_unit_interval`
-                        # function
-                        this_img = scale_to_unit_interval(
-                            this_x.reshape(img_shape))
-                    else:
-                        this_img = this_x.reshape(img_shape)
-                    # add the slice to the corresponding position in the
-                    # output array
-                    c = 1
-                    if output_pixel_vals:
-                        c = 255
-                    out_array[
-                        tile_row * (H + Hs): tile_row * (H + Hs) + H,
-                        tile_col * (W + Ws): tile_col * (W + Ws) + W
-                    ] = this_img * c
-        return out_array
diff --git a/Mathematical-Modeling-2014/Project/baidu_spider.py b/Mathematical-Modeling-2014/Project/baidu_spider.py
deleted file mode 100644
index 9f6124d..0000000
--- a/Mathematical-Modeling-2014/Project/baidu_spider.py
+++ /dev/null
@@ -1,140 +0,0 @@
-# -*- coding: utf-8 -*-
-#---------------------------------------
-#   Program: Baidu Tieba spider
-#   Version: 0.5
-#   Author: why
-#   Date: 2013-05-16
-#   Language: Python 2.7
-#   Usage: given a thread URL, automatically show only the original poster's posts and save them to a local file
-#   Function: pack the original poster's posts into a local txt file.
-#---------------------------------------
-
-import string
-import urllib2
-import re
-
-#----------- handle the various tags on the page -----------
-class HTML_Tool:
-    # non-greedily match \t, \n, spaces, hyperlinks or images
-    BgnCharToNoneRex = re.compile("(\t|\n| |<a.*?>|<img.*?>)")
-
-    # non-greedily match any <> tag
-    EndCharToNoneRex = re.compile("<.*?>")
-
-    # non-greedily match any <p> tag
-    BgnPartRex = re.compile("<p.*?>")
-    CharToNewLineRex = re.compile("(<br/>|</p>|<tr>|<div>|</div>)")
-    CharToNextTabRex = re.compile("<td>")
-
-    # convert some HTML character entities back to the original symbols
-    replaceTab = [("&lt;", "<"), ("&gt;", ">"), ("&amp;", "&"),
-                  ("&amp;quot", "\""), ("&nbsp;", " ")]
-
-    def Replace_Char(self,x):
-        x = self.BgnCharToNoneRex.sub("",x)
-        x = self.BgnPartRex.sub("\n    ",x)
-        x = self.CharToNewLineRex.sub("\n",x)
-        x = self.CharToNextTabRex.sub("\t",x)
-        x = self.EndCharToNoneRex.sub("",x)
-
-        for t in self.replaceTab:
-            x = x.replace(t[0],t[1])
-        return x
-
-class Baidu_Spider:
-    # declare the relevant attributes
-    def __init__(self,url):
-        self.myUrl = url + '?see_lz=1'
-        self.datas = []
-        self.myTool = HTML_Tool()
-        print u'The Baidu Tieba spider has started, click clack'
-
-    # initialization: load the page and store it after transcoding
-    def baidu_tieba(self):
-        # read the raw page and decode it from gbk
-        myPage = urllib2.urlopen(self.myUrl).read().decode("gbk")
-        # count how many pages of posts the original poster has
-        endPage = self.page_counter(myPage)
-        # get the title of the thread
-        title = self.find_title(myPage)
-        print u'Thread title: ' + title
-        # fetch the final data
-        self.save_data(self.myUrl,title,endPage)
-
-    # used to count how many pages there are in total
-    def page_counter(self,myPage):
-        # match '共有<span class="red">12</span>页' to get the total page count
-        myMatch = re.search(r'class="red">(\d+?)</span>', myPage, re.S)
-        if myMatch:
-            endPage = int(myMatch.group(1))
-            print u'Spider report: found %d pages of posts by the original poster' % endPage
-        else:
-            endPage = 0
-            print u'Spider report: could not determine how many pages of posts there are!'
-        return endPage
-
-    # used to find the title of the thread
-    def find_title(self,myPage):
-        # match <title>xxxxxxxxxx</title> to extract the title
-        myMatch = re.search(r'<title>(.*?)</title>', myPage, re.S)
-        title = u'No title yet'
-        if myMatch:
-            title = myMatch.group(1)
-        else:
-            print u'Spider report: cannot load the thread title!'
-        # file names may not contain any of these characters: \ / : * ? " < > |
-        title = title.replace('\\','').replace('/','').replace(':','').replace('*','').replace('?','').replace('"','').replace('>','').replace('<','').replace('|','')
-        return title
-
-
-    # store the content posted by the original poster
-    def save_data(self,url,title,endPage):
-        # load the page data into the array
-        self.get_data(url,endPage)
-        # open the local file
-        f = open(title+'.txt','w+')
-        f.writelines(self.datas)
-        f.close()
-        print u'Spider report: the file has been downloaded and packed into a txt file'
-        print u'Press any key to exit...'
-        raw_input()
-
-    # fetch the page source and store it in the array
-    def get_data(self,url,endPage):
-        url = url + '&pn='
-        for i in range(1,endPage+1):
-            print u'Spider report: spider #%d is loading...' % i
-            myPage = urllib2.urlopen(url + str(i)).read()
-            # process the html in myPage and store it into datas
-            self.deal_data(myPage.decode('gbk'))
-
-
-    # dig the content out of the page source
-    def deal_data(self,myPage):
-        myItems = re.findall('id="post_content.*?>(.*?)</div>',myPage,re.S)
-        for item in myItems:
-            data = self.myTool.Replace_Char(item.replace("\n","").encode('gbk'))
-            self.datas.append(data+'\n')
-
-
-
-#-------- program entry point ------------------
-print u"""#---------------------------------------
-#   Program: Baidu Tieba spider
-#   Version: 0.5
-#   Author: why
-#   Date: 2013-05-16
-#   Language: Python 2.7
-#   Usage: given a thread URL, automatically show only the original poster's posts and save them to a local file
-#   Function: pack the original poster's posts into a local txt file.
-#---------------------------------------
-"""
-
-# take some novel's tieba as an example
-# bdurl = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'
-
-print u'Please enter the digits at the end of the tieba URL:'
-bdurl = 'http://tieba.baidu.com/p/' + str(raw_input(u'http://tieba.baidu.com/p/'))
-
-# invoke the spider
-mySpider = Baidu_Spider(bdurl)
-mySpider.baidu_tieba()
\ No newline at end of file
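To make the tag handling concrete, a small sketch exercising `HTML_Tool.Replace_Char` on an invented post body (assuming the class above is in scope):

    tool = HTML_Tool()
    raw = '<div>Hello&nbsp;world<br/>&gt;quoted&lt;</div>'
    print(tool.Replace_Char(raw))
    # <div>, <br/> and </div> each become a newline and the entities are
    # unescaped, so this prints "Hello world" and ">quoted<" on separate lines.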
diff --git a/Mathematical-Modeling-2014/Project/cloud_large.png b/Mathematical-Modeling-2014/Project/cloud_large.png
deleted file mode 100644
index f8b17b99e9552eada19dd932cf8d5cec181b2025..0000000000000000000000000000000000000000
GIT binary patch
literal 0
HcmV?d00001
literal 118853
[118853 bytes of base85-encoded PNG data omitted]