In older versions of PyTorch, tensors with different data types can't operate with each other; for example, t1 (torch.int64) + t2 (torch.float32) is not allowed. If you try, you'll get an exception like:
Expected object of type torch.LongTensor but found type torch.FloatTensor for argument #3 'other'
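For illustration, here is a minimal sketch of the mixed-dtype case (on a recent PyTorch build this addition now succeeds through type promotion instead of raising):

import torch

t1 = torch.tensor([1, 2, 3])       # dtype inferred as torch.int64
t2 = torch.tensor([1., 2., 3.])    # dtype inferred as torch.float32

# Older releases raised the error above; recent releases promote
# the result to the wider floating-point type instead.
print((t1 + t2).dtype)             # torch.float32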
Note that tensors on different devices can't operate with each other either.
t1 = torch.tensor([1,2,3])
t2 = t1.cuda()
t1.device
device(type='cpu')
t2.device
device(type='cuda', index=0)
t1 + t2
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-20-9ac58c83af08> in <module>
----> 1 t1 + t2
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
See the difference? torch.Tensor() calls the constructor of the Tensor class, while torch.tensor() is a factory function. Their data types differ because Tensor() converts the integer data you pass in to a floating-point type, whereas tensor() doesn't perform this conversion.
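As a quick check (a minimal sketch; the exact dtypes assume the stock float32 default), compare the two calls on the same integer data:

import torch

data = [1, 2, 3]                 # plain Python ints
print(torch.Tensor(data).dtype)  # torch.float32 -- the constructor converts to the default float type
print(torch.tensor(data).dtype)  # torch.int64   -- the factory function keeps the inferred integer type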
This difference occurs because the constructor uses the global default data type when constructing a tensor, while the factory functions infer the data type from the input data.
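You can verify this by changing the global default; a small sketch (assuming a fresh interpreter session):

import torch

print(torch.get_default_dtype())        # torch.float32 out of the box

torch.set_default_dtype(torch.float64)
print(torch.Tensor([1, 2, 3]).dtype)    # torch.float64 -- the constructor follows the global default
print(torch.tensor([1, 2, 3]).dtype)    # torch.int64   -- the factory function still infers from the ints

torch.set_default_dtype(torch.float32)  # restore the default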
The best option is the lowercase-t function torch.tensor().
The second option is torch.as_tensor().
Since numpy.ndarray objects are allocated on the CPU, the as_tensor() function must copy the data from the CPU to the GPU when a GPU is being used.
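For example, a minimal sketch that only runs the GPU branch when a CUDA device is available:

import numpy as np
import torch

arr = np.array([1, 2, 3])
if torch.cuda.is_available():
    # The ndarray lives in CPU memory, so building a CUDA tensor
    # from it necessarily copies the data; no memory is shared.
    t_gpu = torch.as_tensor(arr, device='cuda')
    print(t_gpu.device)  # device(type='cuda', index=0)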
The memory sharing of as_tensor() doesn’t work with built-in Python data structures like lists.
The as_tensor() call requires developer knowledge of the sharing feature. This is necessary so we don’t inadvertently make an unwanted change in the underlying data without realizing the change impacts multiple objects.
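The sketch below (variable names are illustrative) shows the sharing behavior with an ndarray and the absence of sharing with a list:

import numpy as np
import torch

arr = np.array([1, 2, 3])
t_shared = torch.as_tensor(arr)   # shares memory with the ndarray (on the CPU)
t_copied = torch.tensor(arr)      # always copies the data

arr[0] = 99
print(t_shared)      # tensor([99,  2,  3]) -- the change shows up through the shared buffer
print(t_copied)      # tensor([1, 2, 3])    -- the independent copy is unaffected

lst = [1, 2, 3]
t_from_list = torch.as_tensor(lst)  # a Python list has no buffer to share, so this copies
lst[0] = 99
print(t_from_list)   # tensor([1, 2, 3])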
The as_tensor() performance improvement will be greater if there are a lot of back-and-forth operations between numpy.ndarray objects and tensor objects. However, if there is just a single load operation, there shouldn't be much of a performance impact.