Commit 5f66e7c (parent: cad93b5)

finish bn and fix bugs in CNN

File tree: 12 files changed, +1323 / -15 lines


README.md (10 additions, 7 deletions)
```diff
@@ -31,11 +31,14 @@ Learn Deep Learning with PyTorch
 
 - Chapter 4: Convolutional Neural Networks
   - [Convolution modules in PyTorch](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/basic_conv.ipynb)
+  - [Batch normalization](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/batch-normalization.ipynb)
   - [Deep networks built from repeated elements, VGG](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/vgg.ipynb)
   - [Networks with a richer structure, GoogLeNet](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/googlenet.ipynb)
   - [Deep residual networks, ResNet](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/resnet.ipynb)
   - [Densely connected convolutional networks, DenseNet](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/densenet.ipynb)
-  - Training convolutional networks better: data augmentation, batch normalization, dropout, regularization, and learning-rate decay
+  - Training convolutional networks better
+    - [Data augmentation](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/data-augumentation.ipynb)
+    - Dropout, regularization, and learning-rate decay
 
 - Chapter 5: Recurrent Neural Networks
   - LSTM and GRU
```
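The batch-normalization notebook added above is the "finish bn" part of this commit. As a rough illustration of the technique it covers (the function name and toy shapes below are hypothetical, not taken from the notebook), batch normalization standardizes each feature over the batch and then applies a learned scale and shift:

```python
import torch

def batch_norm_1d(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the batch, then scale and shift.

    Training-mode sketch only: a real layer (e.g. nn.BatchNorm1d)
    also tracks running statistics for use at eval time.
    """
    mean = x.mean(dim=0, keepdim=True)                # per-feature batch mean
    var = x.var(dim=0, unbiased=False, keepdim=True)  # per-feature batch variance
    x_hat = (x - mean) / torch.sqrt(var + eps)        # standardized activations
    return gamma * x_hat + beta                       # learned scale and shift

x = torch.randn(8, 4)            # toy batch: 8 samples, 4 features
gamma, beta = torch.ones(4), torch.zeros(4)
out = batch_norm_1d(x, gamma, beta)
print(out.mean(dim=0))           # ~0 for every feature
print(out.std(dim=0))            # ~1 for every feature
```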
```diff
@@ -53,12 +56,12 @@ Learn Deep Learning with PyTorch
 - Chapter 7: Advanced PyTorch
   - tensorboard visualization
   - Optimization algorithms
-    - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/sgd.ipynb)
-    - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/momentum.ipynb)
-    - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adagrad.ipynb)
-    - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/rmsprop.ipynb)
-    - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adadelta.ipynb)
-    - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adam.ipynb)
+    - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/sgd.ipynb)
+    - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/momentum.ipynb)
+    - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adagrad.ipynb)
+    - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/rmsprop.ipynb)
+    - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adadelta.ipynb)
+    - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adam.ipynb)
   - Flexible data loading
   - Introduction to autograd.function
   - Data parallelism and multiple GPUs
```
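This hunk only relocates the optimizer notebooks from chapter3_NN to chapter6_PyTorch-Advances. All six optimizers share the same torch.optim interface, so swapping between them is a one-line change; a minimal sketch (the toy model and hyperparameter values are illustrative, not the notebooks' settings):

```python
import torch
from torch import nn

model = nn.Linear(10, 2)  # toy model standing in for the notebooks' networks

# the six optimizers covered by the notebooks, all built the same way;
# learning rates here are common illustrative defaults
optimizers = {
    "sgd":      torch.optim.SGD(model.parameters(), lr=1e-1),
    "momentum": torch.optim.SGD(model.parameters(), lr=1e-1, momentum=0.9),
    "adagrad":  torch.optim.Adagrad(model.parameters(), lr=1e-2),
    "rmsprop":  torch.optim.RMSprop(model.parameters(), lr=1e-3),
    "adadelta": torch.optim.Adadelta(model.parameters()),
    "adam":     torch.optim.Adam(model.parameters(), lr=1e-3),
}

x, y = torch.randn(4, 10), torch.randn(4, 2)
for name, opt in optimizers.items():
    loss = nn.functional.mse_loss(model(x), y)  # toy regression loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(name, loss.item())
```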

chapter3_NN/deep-nn.ipynb (3 additions, 3 deletions)
```diff
@@ -391,7 +391,7 @@
    "source": [
     "# define the loss function\n",
     "criterion = nn.CrossEntropyLoss()\n",
-    "optimzier = torch.optim.SGD(net.parameters(), 1e-1) # stochastic gradient descent, learning rate 0.1"
+    "optimizer = torch.optim.SGD(net.parameters(), 1e-1) # stochastic gradient descent, learning rate 0.1"
    ]
   },
   {
@@ -447,9 +447,9 @@
    " out = net(im)\n",
    " loss = criterion(out, label)\n",
    " # backward pass\n",
-   " optimzier.zero_grad()\n",
+   " optimizer.zero_grad()\n",
    " loss.backward()\n",
-   " optimzier.step()\n",
+   " optimizer.step()\n",
    " # record the loss\n",
    " train_loss += loss.data[0]\n",
    " # compute classification accuracy\n",
```
