# PyTorch Tutorial Notes: Building a Network

## Building a Network

A typical training procedure:

1. Define a network with some trainable parameters
2. Iterate over a dataset of inputs
3. Process the input through the network to compute the output
4. Compute the loss
5. Propagate gradients back through the network
6. Update the weights, typically with a simple rule:

`weight = weight - learning_rate * gradient`
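As a minimal sketch of this update rule on a single scalar weight (the gradient value here is made up purely for illustration):

```python
# One gradient-descent step on a single scalar weight.
# The gradient value is an assumption, just for illustration.
weight = 0.5
gradient = 2.0
learning_rate = 0.01

weight = weight - learning_rate * gradient
print(weight)  # 0.48
```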

## Defining a Network

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):

    def __init__(self):
        super(Net, self).__init__()
        # 1 input image channel, 6 output channels, 5x5 convolution kernel
        self.conv1 = nn.Conv2d(1, 6, 5)
        self.conv2 = nn.Conv2d(6, 16, 5)
        # an affine operation: y = Wx + b
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        # max pooling over a (2, 2) window
        x = F.max_pool2d(F.relu(self.conv1(x)), (2, 2))
        # if the window is square, a single number is enough
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        x = x.view(-1, self.num_flat_features(x))
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

    def num_flat_features(self, x):
        size = x.size()[1:]  # all dimensions except the batch dimension
        num_features = 1
        for s in size:
            num_features *= s
        return num_features

net = Net()
print(net)
```

Out:

```
Net(
(conv1): Conv2d(1, 6, kernel_size=(5, 5), stride=(1, 1))
(conv2): Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1))
(fc1): Linear(in_features=400, out_features=120, bias=True)
(fc2): Linear(in_features=120, out_features=84, bias=True)
(fc3): Linear(in_features=84, out_features=10, bias=True)
)
```

```
params = list(net.parameters())
print(len(params))
print(params[0].size())  # conv1's .weight
```

Out:

```
10
torch.Size([6, 1, 5, 5])
```
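Why 10 parameter tensors: each of the five layers (two Conv2d, three Linear) contributes a weight and a bias. A sketch of this count using `nn.Sequential` as a stand-in for the `Net` class above:

```python
import torch.nn as nn

# The same five layers as Net, as a stand-in model; each layer
# contributes a weight tensor and a bias tensor.
model = nn.Sequential(
    nn.Conv2d(1, 6, 5),
    nn.Conv2d(6, 16, 5),
    nn.Linear(16 * 5 * 5, 120),
    nn.Linear(120, 84),
    nn.Linear(84, 10),
)
for name, p in model.named_parameters():
    print(name, tuple(p.size()))
print(len(list(model.parameters())))  # 10
```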

```
input = torch.randn(1, 1, 32, 32)
out = net(input)
print(out)
```

Out:

```
tensor([[-0.1346,  0.0581, -0.0396, -0.1136, -0.1128,  0.0180, -0.1226,
-0.0419, -0.1150,  0.0278]])
```

```
net.zero_grad()
out.backward(torch.randn(1, 10))
```

`torch.nn` only supports mini-batches, not single samples. For example, `nn.Conv2d` expects a 4D tensor of shape:

`nSamples x nChannels x Height x Width`
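If you only have a single sample, `input.unsqueeze(0)` adds a fake batch dimension; a minimal sketch:

```python
import torch

# A single 32x32 grayscale sample without a batch dimension.
sample = torch.randn(1, 32, 32)   # nChannels x Height x Width
batch = sample.unsqueeze(0)       # insert a fake batch dimension at axis 0
print(batch.size())               # torch.Size([1, 1, 32, 32])
```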

Recap:

- `torch.Tensor` - A multi-dimensional array with support for autograd operations like `backward()`. Also holds the gradient w.r.t. the tensor.
- `nn.Module` - Neural network module. Convenient way of encapsulating parameters, with helpers for moving them to GPU, exporting, loading, etc.
- `nn.Parameter` - A kind of Tensor that is automatically registered as a parameter when assigned as an attribute to a Module.
- `autograd.Function` - Implements forward and backward definitions of an autograd operation. Every Tensor operation creates at least a single `Function` node that connects to the functions that created the Tensor and encodes its history.
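The `nn.Parameter` point can be seen directly: assigning an `nn.Parameter` as a module attribute registers it, while a plain tensor attribute does not. A small sketch (the `Scale` module is made up for illustration):

```python
import torch
import torch.nn as nn

class Scale(nn.Module):
    def __init__(self):
        super(Scale, self).__init__()
        # nn.Parameter assigned as an attribute: auto-registered.
        self.factor = nn.Parameter(torch.ones(1))
        # Plain tensor attribute: NOT registered as a parameter.
        self.offset = torch.zeros(1)

    def forward(self, x):
        return x * self.factor

m = Scale()
print([name for name, _ in m.named_parameters()])  # ['factor']
```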

At this point, we have covered:

1. Defining a neural network
2. Processing inputs and calling backward

Still left:

1. Computing the loss
2. Updating the weights

## Loss Function

A loss function takes the (output, target) pair as input and computes a value that estimates how far the output is from the target.

```
output = net(input)
target = torch.arange(1, 11, dtype=torch.float)  # a dummy target, for example
target = target.view(1, -1)  # make it the same shape as output
criterion = nn.MSELoss()

loss = criterion(output, target)
print(loss)
```

Out:

`tensor(39.1076)`
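Because `loss` was produced by autograd operations, its `grad_fn` links back through the computation graph. A sketch of walking one step back along that graph, on a throwaway `Linear` layer rather than the `Net` above (exact node names vary by PyTorch version, so none are asserted):

```python
import torch
import torch.nn as nn

# A tiny model just to build a graph: Linear -> MSELoss.
x = torch.randn(1, 3)
fc = nn.Linear(3, 2)
criterion = nn.MSELoss()

loss = criterion(fc(x), torch.zeros(1, 2))
print(loss.grad_fn)                       # the MSELoss node
print(loss.grad_fn.next_functions[0][0])  # the Linear (addmm) node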

## Updating the Weights

`weight = weight - learning_rate * gradient`

```
learning_rate = 0.01
for f in net.parameters():
    f.data.sub_(f.grad.data * learning_rate)
```

However, to use other update rules such as SGD with momentum, Adam, RMSProp, etc., `torch.optim` implements them for you:
```
import torch.optim as optim

# create your optimizer
optimizer = optim.SGD(net.parameters(), lr=0.01)

# in your training loop:
optimizer.zero_grad()  # zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step()  # does the update
```

```
net.zero_grad()  # zero the gradient buffers of all parameters

print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)

loss.backward()
```

Out:

```
conv1.bias.grad before backward
tensor([ 0.,  0.,  0.,  0.,  0.,  0.])
```