Learning PyTorch with Examples (3) – nn module
http://pytorch.org/tutorials/beginner/pytorch_with_examples.html#nn-module
PyTorch: nn
Computational graphs and autograd are a very powerful paradigm for defining complex operations and automatically taking their derivatives. For large neural networks, however, raw autograd is too low-level to work with directly.
When building neural networks, we usually think of arranging the computation into layers, some of which have learnable parameters that are optimized during training.
In TensorFlow, packages like Keras, TensorFlow-Slim, and TFLearn provide higher-level abstractions over raw computational graphs that are useful for building neural networks.
In PyTorch, the nn package serves this same purpose. The nn package defines a set of Modules, which are roughly equivalent to neural network layers. A Module receives input Variables and computes output Variables, and may also hold internal state such as Variables containing learnable parameters. The nn package also defines a set of useful loss functions that are commonly used when training neural networks.
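As a minimal illustration of what a single Module does, here is a small sketch using the same pre-0.4 Variable API as the listings below (the layer sizes are arbitrary):

import torch
from torch.autograd import Variable

# A single Linear Module holds a weight matrix and a bias vector as learnable parameters.
linear = torch.nn.Linear(3, 2)

# Calling the Module on an input Variable produces an output Variable.
x = Variable(torch.randn(4, 3))  # a batch of 4 inputs with 3 features each
y = linear(x)                    # shape (4, 2)

# The learnable parameters are exposed through .parameters(): weight (2, 3) and bias (2,).
for param in linear.parameters():
    print(param.size())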
In the example below we use the nn package to implement our two-layer network:
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)

# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Variables for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)

# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(size_average=False)

learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Variable of input data to the Module and it produces
    # a Variable of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Variables containing the predicted and true
    # values of y, and the loss function returns a Variable containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Variables with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Variable, so
    # we can access its data and gradients like we did before.
    for param in model.parameters():
        param.data -= learning_rate * param.grad.data
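As an aside, on PyTorch 0.4 and later the Variable wrapper was merged into Tensor, MSELoss(size_average=False) became MSELoss(reduction='sum'), and loss.data[0] became loss.item(). Below is a minimal sketch of the same loop in that newer style; the listings in this post otherwise follow the 0.3-era API of the original tutorial.

import torch

N, D_in, H, D_out = 64, 1000, 100, 10
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)

model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')  # replaces size_average=False

learning_rate = 1e-4
for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    print(t, loss.item())  # replaces loss.data[0]

    model.zero_grad()
    loss.backward()

    # Manual gradient descent step; no_grad() keeps the update out of autograd.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad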
PyTorch: optim
Up to this point we have updated the weights of our models by manually mutating the .data member of the Variables holding the learnable parameters. This is not a huge burden for a simple optimization algorithm like stochastic gradient descent, but in practice we often train neural networks with more sophisticated optimizers such as AdaGrad, RMSProp, or Adam, where writing the update rule by hand quickly becomes tedious.
The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used algorithms.
In the example below we use the nn package to define our model as before, but this time we optimize it with the Adam algorithm provided by the optim package:
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable

# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)

# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(size_average=False)

# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Variables it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model.
    y_pred = model(x)

    # Compute and print loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])

    # Before the backward pass, use the optimizer object to zero all of the
    # gradients for the variables it will update (which are the learnable
    # weights of the model). This is because by default, gradients are
    # accumulated in buffers (i.e., not overwritten) whenever .backward()
    # is called. Check out the docs of torch.autograd.backward for more details.
    optimizer.zero_grad()

    # Backward pass: compute gradient of the loss with respect to model
    # parameters.
    loss.backward()

    # Calling the step function on an Optimizer makes an update to its
    # parameters.
    optimizer.step()
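Because every optimizer in the optim package exposes the same zero_grad() and step() interface, swapping in a different algorithm is a one-line change. For example (the learning rates here are only illustrative, not tuned for this problem):

# Any one of these could replace the Adam optimizer above without changing the
# training loop itself; the hyperparameter values are illustrative only.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
optimizer = torch.optim.Adagrad(model.parameters(), lr=1e-2)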
PyTorch: Custom nn Modules
Sometimes you will want to specify a model that is more complex than a simple chain of existing Modules. In these cases you can define your own Module by subclassing nn.Module and defining a forward method: it receives input Variables and produces output Variables using other Modules or other autograd operations on Variables.
In the example below we implement our two-layer network as a custom Module subclass:
# -*- coding: utf-8 -*-
import torch
from torch.autograd import Variable


class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we instantiate two nn.Linear modules and assign them as
        member variables.
        """
        super(TwoLayerNet, self).__init__()
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        In the forward function we accept a Variable of input data and we must return
        a Variable of output data. We can use Modules defined in the constructor as
        well as arbitrary operators on Variables.
        """
        h_relu = self.linear1(x).clamp(min=0)
        y_pred = self.linear2(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs, and wrap them in Variables
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)

# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.data[0])

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
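Because TwoLayerNet assigns its two Linear modules as member variables in the constructor, nn.Module registers their weights and biases automatically. A quick sanity check of what model.parameters() hands to the optimizer (shapes shown for the dimensions used above):

# Prints the parameters registered on the custom Module defined above:
# linear1.weight (100, 1000), linear1.bias (100,),
# linear2.weight (10, 100),   linear2.bias (10,)
for name, param in model.named_parameters():
    print(name, tuple(param.size()))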
PyTorch: Control Flow + Weight Sharing
As an example of dynamic graphs and weight sharing, let's implement a rather strange model: a fully connected ReLU network that on each forward pass picks a random number between 1 and 4, uses that many hidden layers, and reuses the same weights multiple times to compute the innermost hidden layers before producing the output.
To implement this model we use ordinary Python control flow (a loop) for the repetition, and implement weight sharing simply by reusing the same Module several times during the forward pass.
Below is the implementation as a Module subclass:
# -*- coding: utf-8 -*-
import random
import torch
from torch.autograd import Variable


class DynamicNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        """
        In the constructor we construct three nn.Linear instances that we will use
        in the forward pass.
        """
        super(DynamicNet, self).__init__()
        self.input_linear = torch.nn.Linear(D_in, H)
        self.middle_linear = torch.nn.Linear(H, H)
        self.output_linear = torch.nn.Linear(H, D_out)

    def forward(self, x):
        """
        For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
        and reuse the middle_linear Module that many times to compute hidden layer
        representations.

        Since each forward pass builds a dynamic computation graph, we can use normal
        Python control-flow operators like loops or conditional statements when
        defining the forward pass of the model.

        Here we also see that it is perfectly safe to reuse the same Module many
        times when defining a computational graph. This is a big improvement from Lua
        Torch, where each Module could be used only once.
        """
        h_relu = self.input_linear(x).clamp(min=0)
        for _ in range(random.randint(0, 3)):
            h_relu = self.middle_linear(h_relu).clamp(min=0)
        y_pred = self.output_linear(h_relu)
        return y_pred


# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10

# Create random Tensors to hold inputs and outputs, and wrap them in Variables
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out), requires_grad=False)

# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)

# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
    # Forward pass: Compute predicted y by passing x to the model
    y_pred = model(x)

    # Compute and print loss
    loss = criterion(y_pred, y)
    print(t, loss.data[0])

    # Zero gradients, perform a backward pass, and update the weights.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
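One way to convince yourself that the weights really are shared: however many times middle_linear is applied inside forward, the model still contains only the parameters of its three Linear modules, and after a backward pass the shared layer's gradient holds the accumulated contribution from every application. A small check you could run after the training loop above:

# The parameter count covers exactly three Linear layers, no matter how often
# middle_linear is reused inside forward():
#   input_linear:  1000 * 100 + 100 = 100100
#   middle_linear:  100 * 100 + 100 =  10100
#   output_linear:  100 *  10 +  10 =   1010
print(sum(p.numel() for p in model.parameters()))  # 111210

# The gradient of the shared layer (accumulated over every application of
# middle_linear in the last backward pass, assuming it was applied at least once
# during training) has the same shape as its weight.
print(model.middle_linear.weight.grad.size())  # (100, 100)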