In this article, we will provide a quick introduction to PyTorch and demonstrate how to use it.
Tensors are fundamental data structures in PyTorch. They are similar to NumPy's ndarrays but have the added advantage of being able to run computations on GPUs for accelerated performance.
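As a quick sketch of both points (assuming NumPy is installed), a tensor can be built from an ndarray, and moved to the GPU when one is available, falling back to the CPU otherwise:

```python
import numpy as np
import torch

# Create a tensor from a NumPy array; on CPU they share the same memory
arr = np.array([[1.0, 2.0], [3.0, 4.0]])
t = torch.from_numpy(arr)

# Move the tensor to the GPU if one is available, otherwise stay on the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
t = t.to(device)
print(t.device)
```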
To start using PyTorch, we first need to import the torch module:
import torch

Let's begin by creating a 5x3 matrix. We can initialize it without specifying any values, resulting in an uninitialized tensor:
x = torch.Tensor(5, 3)
print(x)
Output:
tensor([[9.6429e-39, 9.2755e-39, 1.0286e-38],
[9.0919e-39, 8.9082e-39, 9.2755e-39],
[8.4490e-39, 1.0194e-38, 9.0919e-39],
[8.4490e-39, 1.0653e-38, 9.9184e-39],
[8.4490e-39, 9.9184e-39, 8.9082e-39]])
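Note that torch.Tensor(5, 3) allocates memory without initializing it, so the values above are whatever happened to be in that memory. In current PyTorch versions, torch.empty is the explicit, idiomatic way to request an uninitialized tensor:

```python
import torch

# torch.empty is the modern equivalent of an uninitialized torch.Tensor(5, 3)
x = torch.empty(5, 3)
print(x.size())  # torch.Size([5, 3])
```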
We can also create a tensor with random values using the torch.rand() function:
x = torch.rand(5, 3)
print(x)
Output:
tensor([[0.9476, 0.8858, 0.3435],
[0.6950, 0.4547, 0.6225],
[0.4207, 0.4876, 0.6673],
[0.3921, 0.2402, 0.8420],
[0.8632, 0.6434, 0.9973]])
PyTorch provides various operations for inspecting tensor properties. Let's explore some of these:
Size of a Tensor
We can determine the size of a tensor using the size() method:
print(x.size())
Output:
torch.Size([5, 3])
Keep in mind that torch.Size is in fact a tuple, so it supports the same operations that a tuple supports.
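For example, all of the usual tuple operations, such as unpacking, len(), indexing, and equality comparison, work directly on the result of size():

```python
import torch

x = torch.rand(5, 3)
size = x.size()

# torch.Size is a subclass of tuple, so tuple operations apply
rows, cols = size       # unpacking
print(len(size))        # 2
print(size[0])          # 5
print(size == (5, 3))   # True
```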
Indexing and Slicing
Tensors support standard indexing and slicing operations, similar to NumPy:
x[1:3] = 2
print(x)
Output:
tensor([[0.9476, 0.8858, 0.3435],
[2.0000, 2.0000, 2.0000],
[2.0000, 2.0000, 2.0000],
[0.3921, 0.2402, 0.8420],
[0.8632, 0.6434, 0.9973]])
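Beyond slicing whole rows, the familiar NumPy-style patterns also work: single-element access, column slices, negative indices, and assignment to a slice. A few examples:

```python
import torch

x = torch.rand(5, 3)

print(x[0, 1])   # single element (row 0, column 1)
print(x[:, 2])   # every row of column 2
print(x[-1])     # last row
x[:, 0] = 0      # set the whole first column to zero
print(x)
```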
To get more comfortable with the syntax and the operations, let's try the following:
A = torch.Tensor(2, 17)
A, A.type()
# By default, torch.Tensor creates an (uninitialized) FloatTensor
Output:
(tensor([[8.4489e-39, 1.0102e-38, 9.0919e-39, 1.0102e-38, 8.9082e-39, 8.4489e-39,
9.6429e-39, 8.4490e-39, 9.6429e-39, 9.2755e-39, 1.0286e-38, 9.0919e-39,
8.9082e-39, 9.2755e-39, 8.4490e-39, 8.9082e-39, 9.1837e-39],
[1.0561e-38, 9.0918e-39, 1.0010e-38, 8.9082e-39, 8.4490e-39, 8.7245e-39,
1.1112e-38, 8.9082e-39, 9.5510e-39, 8.7245e-39, 8.4490e-39, 9.6429e-39,
9.8265e-39, 9.2755e-39, 9.0918e-39, 1.0010e-38, 8.9082e-39]]),
'torch.FloatTensor')
We can create a FloatTensor directly with:
B = torch.FloatTensor(3, 1)
B, B.type()
Output:
tensor([[-5.1665e-36],
[ 4.5907e-41],
[ 0.0000e+00]])
'torch.FloatTensor'
We can also create a zero tensor with an explicit dtype; note that float64 yields a DoubleTensor, not a FloatTensor:
B1 = torch.zeros([3,1], dtype=torch.float64)
B1, B1.type()
Output:
tensor([[0.],
[0.],
[0.]], dtype=torch.float64)
'torch.DoubleTensor'
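Once a tensor exists, its dtype can be changed with .to() or the shorthand conversion methods. A small sketch:

```python
import torch

B1 = torch.zeros([3, 1], dtype=torch.float64)

# Convert between dtypes with .to() or the shorthand methods
B2 = B1.to(torch.float32)  # DoubleTensor -> FloatTensor
B3 = B1.long()             # DoubleTensor -> LongTensor
print(B2.type())           # torch.FloatTensor
print(B3.type())           # torch.LongTensor
```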
A LongTensor holds 64-bit integers (torch.int64); uninitialized, it too contains arbitrary memory contents:
C = torch.LongTensor(5,2,1)
print(C)
Output:
tensor([[[30962664056029233],
[30962724186030177]],
[[25895916907069540],
[25896118771056748]],
[[28429470870863987],
[27866439313522733]],
[[28429415035764843],
[27303553783300211]],
[[32370038940106862],
[25896174605631580]]])
Let's fill the entire tensor with 7s. We can simply use slicing to assign the value everywhere:
C[:] = 7
print(C)
Output:
tensor([[[7],
[7]],
[[7],
[7]],
[[7],
[7]],
[[7],
[7]],
[[7],
[7]]])
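Filling via slicing works, but when the goal is a constant tensor from the start, torch.full builds it in one step (the shape and dtype below mirror the example above):

```python
import torch

# Build a 5x2x1 tensor of 7s directly, as a LongTensor
C = torch.full((5, 2, 1), 7, dtype=torch.long)
print(C)
```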
A ByteTensor, named after the byte unit, stores unsigned 8-bit integers (torch.uint8):
D = torch.ByteTensor(5,)
print(D)
Output:
tensor([ 16, 194, 219, 132, 248], dtype=torch.uint8)
To set the middle three entries to one, we can again use slicing (note that the first and last entries keep their uninitialized values):
D[1:4] = 1
print(D)
Output:
tensor([ 16, 1, 1, 1, 248], dtype=torch.uint8)
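uint8 tensors like this were historically used as masks for selecting elements; modern PyTorch prefers torch.bool for this. A hedged sketch of boolean masking:

```python
import torch

values = torch.tensor([10., 20., 30., 40., 50.])
mask = torch.tensor([0, 1, 1, 1, 0], dtype=torch.bool)

# Select only the elements where the mask is True
print(values[mask])  # tensor([20., 30., 40.])
```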
In the next post, we'll explore the operations we can perform on tensors.