Related reading: PyTorch tensor creation API
Wrapping a torch tensor
A neml2::Tensor can be created from a torch::Tensor by marking its batch dimensions:
C++
#include <iostream>
#include <torch/torch.h>
#include "neml2/tensors/Tensor.h"

int
main()
{
  auto A = torch::rand({5, 3, 2, 8});
  std::cout << "Shape of A: " << A.sizes() << '\n' << std::endl;

  // Mark the first 2 dimensions of A as batch dimensions
  auto B = neml2::Tensor(A, 2);
  std::cout << "      Shape of B: " << B.sizes() << std::endl;
  std::cout << "Batch shape of B: " << B.batch_sizes() << std::endl;
  std::cout << " Base shape of B: " << B.base_sizes() << std::endl;
}
Output:
Shape of A: [5, 3, 2, 8]
Shape of B: [5, 3, 2, 8]
Batch shape of B: [5, 3]
Base shape of B: [2, 8]
Python
import torch
from neml2.tensors import Tensor
A = torch.rand(5, 3, 2, 8)
print("Shape of A:", A.shape, "\n")
B = Tensor(A, 2)
print(" Shape of B:", B.shape)
print("Batch shape of B:", B.batch.shape)
print(" Base shape of B:", B.base.shape)
Output:
Shape of A: torch.Size([5, 3, 2, 8])
Shape of B: (5, 3, 2, 8)
Batch shape of B: (5, 3)
Base shape of B: (2, 8)
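Conceptually, wrapping does not touch the underlying storage; it only records how many leading dimensions are batch dimensions, which partitions the full shape into a batch part and a base part. A plain-torch sketch of that partition (the variable names here are illustrative, not part of the NEML2 API):

```python
import torch

A = torch.rand(5, 3, 2, 8)
D = 2  # number of leading batch dimensions

# Split the full shape into batch and base parts
batch_shape = tuple(A.shape[:D])
base_shape = tuple(A.shape[D:])

print(batch_shape)  # (5, 3)
print(base_shape)   # (2, 8)
```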
Factory methods
A factory tensor creation function produces a new tensor. All factory functions adhere to the same schema:
- C++
<TensorType>::<function_name>(<function-specific-options>, const neml2::TensorOptions & options);
- Python
<TensorType>.<function_name>(<function-specific-options>, *, dtype, device, requires_grad)
where <TensorType> is the class name of the primitive tensor type listed here, and <function_name> is the name of the factory function that produces the new tensor. <function-specific-options> are the positional arguments accepted by a particular factory function; refer to each tensor type's class documentation for the concrete signatures. The last argument, const TensorOptions & options, configures the data type, device, and other "meta" properties of the produced tensor. The commonly used meta properties are
- dtype: the data type of the elements stored in the tensor. Available options are kInt8, kInt16, kInt32, kInt64, kFloat32, and kFloat64. Support for unsigned integer types was added in recent versions of PyTorch.
- device: the compute device on which the tensor will be allocated. Available options are kCPU and kCUDA. On macOS, the device type torch::kMPS can be used but is not officially supported by NEML2.
- requires_grad: whether the tensor is part of a function graph used by automatic differentiation to track functional relationships. Available options are true and false.
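These meta properties map directly onto PyTorch's own tensor options, so they can be illustrated with plain torch (nothing in this snippet is NEML2-specific):

```python
import torch

# dtype: element data type; device: where storage is allocated;
# requires_grad: whether autograd tracks operations on the tensor
t = torch.zeros(5, 3,
                dtype=torch.float64,
                device=torch.device("cpu"),
                requires_grad=True)

print(t.dtype)          # torch.float64
print(t.device)         # cpu
print(t.requires_grad)  # True
```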
For example, the following code creates a statically (base) shaped, dense, single-precision tensor of type SR2 filled with zeros, with batch shape \((5, 3)\), allocated on the CPU.
C++
#include "neml2/tensors/SR2.h"
int
main()
{
}
static SR2 zeros(const TensorOptions &options=default_tensor_options())
Definition PrimitiveTensor.h:243
Definition DiagnosticsInterface.h:31
c10::TensorOptions TensorOptions
Definition types.h:66
constexpr auto kCPU
Definition types.h:57
constexpr auto kFloat32
Definition types.h:53
Python
import torch
from neml2.tensors import SR2
A = SR2.zeros((5, 3), dtype=torch.float32, device=torch.device("cpu"))
- Note: All the factory methods are listed here.
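For comparison, PyTorch's own factory functions follow the same schema: function-specific positional arguments first, then keyword-only meta-property arguments. A short torch-only illustration:

```python
import torch

# Each factory takes its specific options positionally,
# followed by keyword meta-property arguments
a = torch.zeros(2, 3, dtype=torch.float32)
b = torch.full((2, 3), 7.0, dtype=torch.float64)
c = torch.linspace(0.0, 1.0, 5, dtype=torch.float32)

print(a.shape)  # torch.Size([2, 3])
print(b.dtype)  # torch.float64
print(c.numel())  # 5
```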