The previous tutorials illustrated the use of NEML2 constitutive models in the "feed-forward" setting, i.e., the model maps input variables to output variables under a given parametrization:
\[ y = f(x; p, b). \]
Recall that \(p\) and \(b\) are respectively the parameters and the buffers of the model.
Another interesting use of NEML2 constitutive models is parameter calibration: given input variables \(x\), find the optimal parameter set \(p^*\) such that
\[ p^* = \mathop{\mathrm{argmin}}\limits_{p} \ l(y), \]
where \(l\) is oftentimes referred to as the loss (or objective) function defining optimality.
This set of tutorials demonstrates the use of PyTorch Autograd to calculate parameter derivatives (\(\pdv{l}{p}\)), which are a necessary ingredient in all gradient-based optimizers.
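
The sketch below illustrates the Autograd pattern these tutorials build on, using a hypothetical stand-in function `f` in place of an actual NEML2 model (the real NEML2 Python bindings are not invoked here). A parameter tensor with `requires_grad=True` plays the role of \(p\); calling `l.backward()` populates \(\pdv{l}{p}\), which a gradient-based optimizer then uses to update \(p\).

```python
import torch

# Hypothetical stand-in for a NEML2 model y = f(x; p); only the Autograd
# mechanics are demonstrated, not the NEML2 API itself.
def f(x, p):
    return p * x**2

x = torch.linspace(0.0, 1.0, 10)
y_target = 3.0 * x**2  # synthetic "observed" data generated with p = 3

# The parameter to calibrate; requires_grad=True enables dl/dp via Autograd.
p = torch.tensor(1.0, requires_grad=True)
optimizer = torch.optim.Adam([p], lr=0.1)

for _ in range(200):
    optimizer.zero_grad()
    y = f(x, p)
    l = torch.sum((y - y_target) ** 2)  # loss function l(y)
    l.backward()                        # Autograd computes dl/dp
    optimizer.step()                    # gradient-based parameter update

print(float(p))  # converges toward 3.0
```

The same loop structure applies when `f` is replaced by a NEML2 model whose parameters are exposed as differentiable tensors; only the forward evaluation and the definition of the loss change.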