
Implementing Your First Neural Network in PyTorch

PyTorch provides built-in functionality for creating and training neural networks. This article builds a simple neural network with one hidden layer and a single output unit, and walks through implementing your first neural network in PyTorch using the steps below.

1. Import torch and torch.nn

First, import the PyTorch library with the following statements:

import torch
import torch.nn as nn

2. Define the variables

Define the layer sizes and the batch size needed to run the neural network, as shown below:

# Define input size, hidden layer size, output size and batch size respectively
n_in, n_h, n_out, batch_size = 10, 5, 1, 10

3. Input data

A neural network maps input data to corresponding output (target) data, so we create dummy input and target tensors, for example:

# Create dummy input and target tensors (data)
x = torch.randn(batch_size, n_in)
y = torch.tensor([[1.0], [0.0], [0.0], [1.0], [1.0], [1.0], [0.0], [0.0], [1.0], [1.0]])
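As a quick sanity check (an extra step, not part of the original tutorial), you can confirm that the tensor shapes match the sizes defined above:

# Optional: verify the tensor shapes against n_in, n_out and batch_size
print(x.shape)  # torch.Size([10, 10]) -> (batch_size, n_in)
print(y.shape)  # torch.Size([10, 1])  -> (batch_size, n_out)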

4. Create the model

Create a sequential model using PyTorch's built-in containers. The following lines build the model:

# Create a model
model = nn.Sequential(nn.Linear(n_in, n_h),
                      nn.ReLU(),
                      nn.Linear(n_h, n_out),
                      nn.Sigmoid())
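If you want to confirm the architecture before training (an extra step, not in the original tutorial), printing the model lists the layers in the order they will be applied:

# Optional: inspect the model structure
print(model)  # shows Linear -> ReLU -> Linear -> Sigmoid in order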

5. Construct the loss function

Construct the loss function together with a gradient descent optimizer, as shown below:

# Construct the loss function
criterion = torch.nn.MSELoss()
# Construct the optimizer (Stochastic Gradient Descent in this case)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
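Because the network ends in a Sigmoid and the targets are 0/1 labels, binary cross-entropy is a common alternative to mean squared error. This is a variation on the tutorial rather than part of it; the rest of the code works unchanged:

# Optional variation (not in the original tutorial): binary cross-entropy loss
criterion = torch.nn.BCELoss()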

6. Train with gradient descent

Run gradient descent in an iterative training loop using the following code:

# Gradient Descent
for epoch in range(50):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print('epoch: ', epoch, ' loss: ', loss.item())

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()

    # perform a backward pass (backpropagation)
    loss.backward()

    # Update the parameters
    optimizer.step()

7. Generated output

The generated output looks like this:

epoch: 0 loss: 0.2545787990093231
epoch: 1 loss: 0.2545052170753479
epoch: 2 loss: 0.254431813955307
epoch: 3 loss: 0.25435858964920044
epoch: 4 loss: 0.2542854845523834
epoch: 5 loss: 0.25421255826950073
epoch: 6 loss: 0.25413978099823
epoch: 7 loss: 0.25406715273857117
epoch: 8 loss: 0.2539947032928467
epoch: 9 loss: 0.25392240285873413
epoch: 10 loss: 0.25385022163391113
epoch: 11 loss: 0.25377824902534485
epoch: 12 loss: 0.2537063956260681
epoch: 13 loss: 0.2536346912384033
epoch: 14 loss: 0.25356316566467285
epoch: 15 loss: 0.25349172949790955
epoch: 16 loss: 0.25342053174972534
epoch: 17 loss: 0.2533493936061859
epoch: 18 loss: 0.2532784342765808
epoch: 19 loss: 0.25320762395858765
epoch: 20 loss: 0.2531369626522064
epoch: 21 loss: 0.25306645035743713
epoch: 22 loss: 0.252996027469635
epoch: 23 loss: 0.2529257833957672
epoch: 24 loss: 0.25285571813583374
epoch: 25 loss: 0.25278574228286743
epoch: 26 loss: 0.25271597504615784
epoch: 27 loss: 0.25264623761177063
epoch: 28 loss: 0.25257670879364014
epoch: 29 loss: 0.2525072991847992
epoch: 30 loss: 0.2524380087852478
epoch: 31 loss: 0.2523689270019531
epoch: 32 loss: 0.25229987502098083
epoch: 33 loss: 0.25223103165626526
epoch: 34 loss: 0.25216227769851685
epoch: 35 loss: 0.252093642950058
epoch: 36 loss: 0.25202515721321106
epoch: 37 loss: 0.2519568204879761
epoch: 38 loss: 0.251888632774353
epoch: 39 loss: 0.25182053446769714
epoch: 40 loss: 0.2517525553703308
epoch: 41 loss: 0.2516847252845764
epoch: 42 loss: 0.2516169846057892
epoch: 43 loss: 0.2515493929386139
epoch: 44 loss: 0.25148195028305054
epoch: 45 loss: 0.25141456723213196
epoch: 46 loss: 0.25148195028305054
epoch: 47 loss: 0.2512802183628082
epoch: 48 loss: 0.2512132525444031
epoch: 49 loss: 0.2511464059352875
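Note that the loss decreases only slowly here; with this small learning rate and MSE loss, 50 epochs barely move the weights. After training, you can look at the model's predictions on the input as a final check. This evaluation step is an addition to the tutorial, and the exact values will differ from run to run because the input and weights are randomly initialized:

# Optional: inspect predictions after training (additional step, not in the original tutorial)
with torch.no_grad():
    y_pred = model(x)
    print(y_pred)                   # probabilities between 0 and 1 from the Sigmoid output
    print((y_pred > 0.5).float())   # thresholded predictions for comparison with y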