import torch
import numpy as np
x = torch.tensor(
[
[1.0, 2, 3, 4, 5],
[6.0, 7, 8, 9, 10],
]
)
x
torch.Tensor vs torch.tensor vs torch.as_tensor¶
torch.Tensor always returns a torch.FloatTensor.

torch.tensor infers the data type and also allows users to specify the data type explicitly. It is suggested that you use torch.tensor instead of torch.Tensor.

torch.tensor always copies data, while torch.as_tensor avoids copying data if possible. One such example is when you convert a numpy array to a Tensor. However, notice that both torch.tensor and torch.as_tensor copy data if a list is fed to them (see the sketch after the examples below).

In most situations, you should use torch.tensor. Never use torch.Tensor. Be cautious if you use torch.as_tensor.
a1 = np.array([1, 2, 3])
t1 = torch.Tensor(a1)
t1
a1[0] = 1000
t1
a2 = np.array([1, 2, 3])
t2 = torch.tensor(a2)
t2
a2[0] = 1000
t2
a3 = np.array([1, 2, 3])
t3 = torch.as_tensor(a3)
t3
a3[0] = 1000
t3
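To verify the earlier claim that both torch.tensor and torch.as_tensor copy data when given a Python list, here is a quick sketch (the names lst and t4 are illustrative):

lst = [1, 2, 3]
t4 = torch.as_tensor(lst)  # a plain Python list has no buffer to share, so the data is copied
lst[0] = 1000
t4  # still tensor([1, 2, 3])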
torch.rand(10)
torch.stack¶
Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size.
data = [torch.tensor([1, 2, 3]), torch.tensor([4, 5, 6])]
torch.stack(data)
torch.stack takes a list/tuple of tensors. It does NOT work on a generator/iterator of tensors.
torch.stack(t for t in data)
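The call above fails because torch.stack expects a tuple or list of tensors, not a generator. If you start from a generator, materialize it first; a minimal sketch:

torch.stack(tuple(t for t in data))  # materialize the generator into a tuple, then stack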
Tensor.detach¶
Returns a new Tensor, detached from the current graph.
The result will never require gradient.
Note: the returned Tensor shares the same storage with the original one. In-place modifications on either of them will be seen, and may trigger errors in correctness checks.

IMPORTANT NOTE: Previously, in-place size / stride / storage changes (such as resize_ / resize_as_ / set_ / transpose_) to the returned tensor also updated the original tensor. Now, these in-place changes will not update the original tensor anymore, and will instead trigger an error.

For sparse tensors: in-place indices / values changes (such as zero_ / copy_ / add_) to the returned tensor will not update the original tensor anymore, and will instead trigger an error.
?x.detach
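A minimal sketch of the two points above (the names t and d are illustrative): the detached tensor does not require gradient, and because it shares storage with the original, an in-place change on one is visible on the other.

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
d = t.detach()
d.requires_grad  # False
d[0] = 100.0     # in-place change on the detached tensor ...
t                # ... is visible on the original: tensor([100., 2., 3.], requires_grad=True)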
Tensor.mean¶
x.mean()
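Without arguments, mean reduces over all elements. It also accepts a dim argument; for the 2x5 tensor x defined at the top, a quick sketch:

x.mean(dim=0)  # column means: tensor([3.5000, 4.5000, 5.5000, 6.5000, 7.5000])
x.mean(dim=1)  # row means: tensor([3., 8.])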
Tensor.item¶
x.item()
y = torch.tensor([2])
y
y.item()
y = torch.tensor(2)
y
y.item()
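item only works on tensors with exactly one element; calling it on a larger tensor raises an error. A minimal sketch:

try:
    torch.tensor([1, 2]).item()
except ValueError as e:
    print(e)  # only one element tensors can be converted to Python scalars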
Tensor.backward¶
Generally speaking, you only need to call this method on the loss tensor.
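A minimal sketch, using a made-up scalar loss, of what calling backward does: it accumulates gradients into .grad on the leaf tensors that require them.

w = torch.tensor([1.0, 2.0], requires_grad=True)
loss = (w ** 2).sum()  # a toy scalar loss
loss.backward()        # accumulates d(loss)/dw into w.grad
w.grad                 # tensor([2., 4.])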
Move a Tensor to a Specific Device in PyTorch¶
Please refer to Move a Tensor to a Specific Device in PyTorch for more details.
Convert a Tensor to a Numpy Array in PyTorch¶
Please refer to Convert a Tensor to a Numpy Array in PyTorch for more details.
Tensor Transformations in TorchVision¶
Please refer to Tensor Transformations in TorchVision for more details.
Resize a Tensor in PyTorch¶
Please refer to Resize a Tensor in PyTorch for more details.