NEML2 allows a tensor to be a view of an existing tensor. As the name suggests, a tensor view shares the underlying data with the tensor it views into; creating a view therefore involves no data copy. Moreover, tensor views can (and often do) reinterpret the shape and/or striding of the original data, allowing for fast and memory-efficient reshaping, slicing, and element-wise operations.
Remarks
Tensor views enable efficient assembly of implicit systems.
In fact, all indexing mechanisms covered in the previous tutorial create tensor views, i.e., they involve zero copy and negligible allocation. In addition to those indexing APIs, NEML2 also provides flexible tensor reshaping APIs (documented in neml2::TensorBase).
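As a rough illustration of why views make reshaping cheap, here is a minimal sketch using plain torch, on which NEML2's tensors are built (note the CPUDoubleType tags in the output below). Reshaping a contiguous tensor returns a view over the same storage, not a copy:

```python
import torch

# Reshaping a contiguous tensor produces a view: same storage,
# different shape and strides -- no data is copied.
a = torch.zeros(4, 3)
b = a.reshape(2, 6)

# Both tensors point at the same underlying buffer.
assert b.data_ptr() == a.data_ptr()

# A write through the view is visible through the original.
b[0, 0] = 1.0
assert a[0, 0].item() == 1.0
```

The same zero-copy behavior underlies NEML2's reshaping and indexing APIs.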
Who touched my data?
Tensor views avoid explicit data copying, which means:
Modifying the original data is reflected in the tensor view
Modifying the data viewed by the tensor view alters the original data
It is therefore important to understand the difference between a view and a copy, and to know when to take ownership of (i.e., copy or clone) the original data.
The following example demonstrates both directions, i.e., "change in the original data" <-> "change in the data viewed by the tensor view".
from neml2.tensors import Tensor

# Create a tensor with shape (; 4, 3) filled with zeros
a = Tensor.zeros((4, 3))
print("a =")
print(a)
# b is a view into the first row and the third row of a
b = a.base[::2]
print("b =")
print(b)
# Modification in a is reflected in b
a += 1.0
print("\nAfter first modification")
print("a =")
print(a)
print("b =")
print(b)
# Modification in data viewed by b is reflected in a
b += 1.0
print("\nAfter second modification")
print("a =")
print(a)
print("b =")
print(b)
Output:
a =
0 0 0
0 0 0
0 0 0
0 0 0
[ CPUDoubleType{4,3} ]
<Tensor of shape [][4, 3]>
b =
0 0 0
0 0 0
[ CPUDoubleType{2,3} ]
<Tensor of shape [][2, 3]>
After first modification
a =
1 1 1
1 1 1
1 1 1
1 1 1
[ CPUDoubleType{4,3} ]
<Tensor of shape [][4, 3]>
b =
1 1 1
1 1 1
[ CPUDoubleType{2,3} ]
<Tensor of shape [][2, 3]>
After second modification
a =
2 2 2
1 1 1
2 2 2
1 1 1
[ CPUDoubleType{4,3} ]
<Tensor of shape [][4, 3]>
b =
2 2 2
2 2 2
[ CPUDoubleType{2,3} ]
<Tensor of shape [][2, 3]>
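When a view's coupling to the original data is undesirable, taking ownership via clone decouples the two. A minimal sketch using plain torch (NEML2 tensors wrap ATen/torch tensors, hence the CPUDoubleType tags above); the variable names are illustrative only:

```python
import torch

a = torch.zeros(4, 3)
view = a[::2]          # shares storage with a
copy = a[::2].clone()  # clone() takes ownership: separate storage

a += 1.0

# The view reflects the in-place update; the clone does not.
assert view.sum().item() == 6.0  # 2 rows x 3 columns, all ones
assert copy.sum().item() == 0.0  # still zeros
```

Cloning trades a one-time copy for independence from subsequent modifications of the original data.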
Note
The same rules hold when multiple tensor views view into the same underlying data, even if they view different (and possibly overlapping) regions.
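For instance, a write through one view is visible through any other view that overlaps it. A sketch using plain torch, on which NEML2's tensors are built:

```python
import torch

a = torch.zeros(4, 3)
b = a[::2]  # views rows 0 and 2 of a
c = a[1:]   # views rows 1-3 of a; overlaps b at row 2

b += 1.0    # writes through b into a's rows 0 and 2...

# ...and the write to row 2 is visible through c as well.
assert c[1, 0].item() == 1.0
```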