@@ -31,11 +31,14 @@ Learn Deep Learning with PyTorch
- Chapter 4: Convolutional Neural Networks
- [Convolution modules in PyTorch](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/basic_conv.ipynb)
+ - [Batch normalization](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/batch-normalization.ipynb)
- [VGG: deep networks built from repeated elements](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/vgg.ipynb)
- [GoogLeNet: networks with a richer structure](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/googlenet.ipynb)
- [ResNet: deep residual networks](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/resnet.ipynb)
- [DenseNet: densely connected convolutional networks](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/densenet.ipynb)
- - Training convolutional networks better: data augmentation, batch normalization, dropout, regularization methods, and learning rate decay
+ - Training convolutional networks better
+   - [Data augmentation](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_CNN/data-augumentation.ipynb)
+   - [dropout, regularization methods, and learning rate decay]( )
- Chapter 5: Recurrent Neural Networks
- LSTM and GRU
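The hunk above adds a batch-normalization notebook to Chapter 4. As a rough illustration of what that transform computes, here is a minimal pure-Python sketch of training-time batch normalization over a single feature; the function name, the fixed `gamma`/`beta` values, and the `eps` constant are illustrative, not taken from the repository's notebook:

```python
import math

def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize one feature across a batch, then scale and shift.

    batch: list of scalar feature values, one per sample.
    gamma, beta: learnable scale/shift parameters (held fixed here for clarity).
    eps: small constant that guards against division by zero.
    """
    n = len(batch)
    mean = sum(batch) / n
    # Biased (divide-by-n) variance, as used at training time.
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / math.sqrt(var + eps) + beta for x in batch]

normalized = batch_norm([1.0, 2.0, 3.0, 4.0])
# The result has approximately zero mean and unit variance.
```

At inference time, real implementations replace the per-batch statistics with running estimates accumulated during training; the notebook linked above covers the PyTorch module that handles this automatically.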
@@ -53,12 +56,12 @@ Learn Deep Learning with PyTorch
- Chapter 7: Advanced PyTorch
- tensorboard visualization
- Optimization algorithms
- - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/sgd.ipynb)
- - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/momentum.ipynb)
- - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adagrad.ipynb)
- - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/rmsprop.ipynb)
- - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adadelta.ipynb)
- - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adam.ipynb)
+ - [SGD](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/sgd.ipynb)
+ - [Momentum](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/momentum.ipynb)
+ - [Adagrad](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adagrad.ipynb)
+ - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/rmsprop.ipynb)
+ - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adadelta.ipynb)
+ - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_PyTorch-Advances/optimizer/adam.ipynb)
- Introduction to flexible data loading
- Introduction to autograd.function
- Data parallelism and multi-GPU
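The optimizer notebooks relocated in the hunk above (SGD, momentum, Adagrad, RMSProp, Adadelta, Adam) all follow the same update-rule pattern. As a small self-contained illustration, here is a pure-Python sketch of SGD with momentum minimizing a 1-D quadratic; the function name, learning rate, and momentum value are illustrative choices, not taken from the repository's notebooks:

```python
def sgd_momentum(grad, w0, lr=0.1, mu=0.9, steps=200):
    """Minimize an objective via SGD with momentum.

    grad: gradient function of the objective.
    w0: initial parameter value.
    lr: learning rate; mu: momentum coefficient.
    """
    w, v = w0, 0.0
    for _ in range(steps):
        v = mu * v + grad(w)  # momentum buffer: decayed sum of past gradients
        w = w - lr * v        # parameter update along the buffer
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2 * (w - 3);
# the iterates converge toward the minimizer w = 3.
w_star = sgd_momentum(lambda w: 2.0 * (w - 3.0), w0=0.0)
```

The other optimizers in the list differ mainly in how they rescale the step: Adagrad and RMSProp divide by a running norm of past gradients, and Adam combines that rescaling with a momentum-style first-moment estimate.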