
Commit 3ffe6aa

Author: xyliao
Message: finish fcn and fix some bugs
Parent: 38e2e34

10 files changed: +401, -437 lines

.gitignore

Lines changed: 1 addition & 7 deletions

@@ -1,8 +1,2 @@
-__pycache__
-*.pth
 .ipynb_checkpoints
-img
-chapter3_MLP/3_Neural_Network/.desktop
-.vscode
-data
-mnist
+.idea

README.md

Lines changed: 2 additions & 6 deletions

@@ -20,7 +20,7 @@ Learn Deep Learning with PyTorch
 - [Tensor和Variable](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/Tensor-and-Variable.ipynb)
 - [自动求导机制](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/autograd.ipynb)
 - [动态图与静态图](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter2_PyTorch-Basics/dynamic-graph.ipynb)
-
+
 
 - Chapter 3: 神经网络
 - [线性模型与梯度下降](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/linear-regression-gradient-descend.ipynb)
@@ -35,7 +35,6 @@ Learn Deep Learning with PyTorch
 - [RMSProp](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/rmsprop.ipynb)
 - [Adadelta](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adadelta.ipynb)
 - [Adam](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter3_NN/optimizer/adam.ipynb)
-
 - Chapter 4: 卷积神经网络
 - [PyTorch 中的卷积模块](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/basic_conv.ipynb)
 - [批标准化,batch normalization](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/batch-normalization.ipynb)
@@ -47,7 +46,6 @@ Learn Deep Learning with PyTorch
 - [数据增强](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/data-augumentation.ipynb)
 - [正则化](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/regularization.ipynb)
 - [学习率衰减](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter4_CNN/lr-decay.ipynb)
-
 - Chapter 5: 循环神经网络
 - [循环神经网络模块:LSTM 和 GRU](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/pytorch-rnn.ipynb)
 - [使用 RNN 进行图像分类](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/rnn-for-image.ipynb)
@@ -56,24 +54,22 @@ Learn Deep Learning with PyTorch
 - [Word Embedding](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/nlp/word-embedding.ipynb)
 - [N-Gram 模型](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/nlp/n-gram.ipynb)
 - [Seq-LSTM 做词性预测](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter5_RNN/nlp/seq-lstm.ipynb)
-
 - Chapter 6: 生成对抗网络
 - [自动编码器](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_GAN/autoencoder.ipynb)
 - [变分自动编码器](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_GAN/vae.ipynb)
 - [生成对抗网络](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter6_GAN/gan.ipynb)
 - 深度卷积对抗网络 (DCGANs) 生成人脸
-
 - Chapter 7: 深度强化学习
 - [Q Learning](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter7_RL/q-learning-intro.ipynb)
 - [Open AI gym](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter7_RL/open_ai_gym.ipynb)
 - [Deep Q-networks](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter7_RL/dqn.ipynb)
-
 - Chapter 8: PyTorch高级
 - [tensorboard 可视化](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter8_PyTorch-Advances/tensorboard.ipynb)
 - [灵活的数据读取介绍](https://github.com/SherlockLiao/code-of-learn-deep-learning-with-pytorch/blob/master/chapter8_PyTorch-Advances/data-io.ipynb)
 - autograd.function 的介绍
 - 数据并行和多 GPU
 - 使用 ONNX 转化为 Caffe2 模型
+- 如何部署训练好的神经网络
 - 打造属于自己的 PyTorch 的使用习惯
 
 ### part2: 深度学习的应用

chapter2_PyTorch-Basics/autograd.ipynb

Lines changed: 40 additions & 73 deletions

@@ -31,9 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": 2,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -74,9 +72,7 @@
 {
 "cell_type": "code",
 "execution_count": 3,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -105,9 +101,7 @@
 {
 "cell_type": "code",
 "execution_count": 4,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [],
 "source": [
 "x = Variable(torch.randn(10, 20), requires_grad=True)\n",
@@ -128,9 +122,7 @@
 {
 "cell_type": "code",
 "execution_count": 5,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -174,9 +166,7 @@
 {
 "cell_type": "code",
 "execution_count": 6,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -207,9 +197,7 @@
 {
 "cell_type": "code",
 "execution_count": 7,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -271,9 +259,7 @@
 {
 "cell_type": "code",
 "execution_count": 8,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -300,9 +286,7 @@
 {
 "cell_type": "code",
 "execution_count": 9,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -361,9 +345,7 @@
 {
 "cell_type": "code",
 "execution_count": 10,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [],
 "source": [
 "n.backward(torch.ones_like(n)) # 将 (w0, w1) 取成 (1, 1)"
@@ -372,9 +354,7 @@
 {
 "cell_type": "code",
 "execution_count": 11,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -423,9 +403,7 @@
 {
 "cell_type": "code",
 "execution_count": 12,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -447,9 +425,7 @@
 {
 "cell_type": "code",
 "execution_count": 13,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [],
 "source": [
 "y.backward(retain_graph=True) # 设置 retain_graph 为 True 来保留计算图"
@@ -458,9 +434,7 @@
 {
 "cell_type": "code",
 "execution_count": 14,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -491,9 +465,7 @@
 {
 "cell_type": "code",
 "execution_count": 16,
-"metadata": {
-"collapsed": false
-},
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
@@ -568,42 +540,41 @@
 "$$\n",
 "\\left[\n",
 "\\begin{matrix}\n",
-"4 & 6 \\\\\n",
-"3 & 9 \\\\\n",
+"4 & 3 \\\\\n",
+"2 & 6 \\\\\n",
 "\\end{matrix}\n",
 "\\right]\n",
 "$$"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 17,
+"execution_count": 6,
 "metadata": {
 "collapsed": true
 },
 "outputs": [],
 "source": [
-"x = Variable(torch.FloatTensor([[2, 3]]), requires_grad=True)\n",
-"k = Variable(torch.zeros(1, 2))\n",
+"x = Variable(torch.FloatTensor([2, 3]), requires_grad=True)\n",
+"k = Variable(torch.zeros(2))\n",
 "\n",
-"k[0, 0] = x[0, 0] ** 2 + 3 * x[0 ,1]\n",
-"k[0, 1] = x[0, 1] ** 2 + 2 * x[0, 0]"
+"k[0] = x[0] ** 2 + 3 * x[1]\n",
+"k[1] = x[1] ** 2 + 2 * x[0]"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 18,
-"metadata": {
-"collapsed": false
-},
+"execution_count": 7,
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "Variable containing:\n",
-" 13 13\n",
-"[torch.FloatTensor of size 1x2]\n",
+" 13\n",
+" 13\n",
+"[torch.FloatTensor of size 2]\n",
 "\n"
 ]
 }
@@ -614,37 +585,33 @@
 },
 {
 "cell_type": "code",
-"execution_count": 19,
-"metadata": {
-"collapsed": false
-},
+"execution_count": 8,
+"metadata": {},
 "outputs": [],
 "source": [
 "j = torch.zeros(2, 2)\n",
 "\n",
-"k.backward(torch.FloatTensor([[1, 0]]), retain_graph=True)\n",
-"j[:, 0] = x.grad.data\n",
+"k.backward(torch.FloatTensor([1, 0]), retain_graph=True)\n",
+"j[0] = x.grad.data\n",
 "\n",
-"m.grad.data.zero_() # 归零之前求得的梯度\n",
+"x.grad.data.zero_() # 归零之前求得的梯度\n",
 "\n",
-"k.backward(torch.FloatTensor([[0, 1]]))\n",
-"j[:, 1] = x.grad.data"
+"k.backward(torch.FloatTensor([0, 1]))\n",
+"j[1] = x.grad.data"
 ]
 },
 {
 "cell_type": "code",
-"execution_count": 20,
-"metadata": {
-"collapsed": false
-},
+"execution_count": 9,
+"metadata": {},
 "outputs": [
 {
 "name": "stdout",
 "output_type": "stream",
 "text": [
 "\n",
-" 4 6\n",
-" 3 9\n",
+" 4 3\n",
+" 2 6\n",
 "[torch.FloatTensor of size 2x2]\n",
 "\n"
 ]
@@ -664,9 +631,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "mx",
+"display_name": "Python 3",
 "language": "python",
-"name": "mx"
+"name": "python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -678,7 +645,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.0"
+"version": "3.6.3"
 }
 },
 "nbformat": 4,

chapter3_NN/nn-sequential-module.ipynb

Lines changed: 57 additions & 105 deletions
Large diffs are not rendered by default.

0 commit comments