# Background and methods

In this section we explain the mathematical background of forward, backward and central differences. The main ideas in this chapter are taken from [DS96]. We write $x$ for the pandas DataFrame with parameters and index its entries as an $n$-dimensional vector, where $n$ is the number of variables in params_sr. The forward difference for the gradient is given by:

$$
\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + h_i e_i) - f(x)}{h_i},
$$

where $e_i$ denotes the $i$-th unit vector and $h_i$ the stepsize for the $i$-th variable.

The backward difference for the gradient is given by:

$$
\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x) - f(x - h_i e_i)}{h_i}.
$$

The central difference for the gradient is given by:

$$
\frac{\partial f}{\partial x_i}(x) \approx \frac{f(x + h_i e_i) - f(x - h_i e_i)}{2 h_i}.
$$
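The three difference schemes above can be sketched with NumPy. This is a minimal illustration, not the library's actual implementation; the function name `gradient` and the callable signature `f(x) -> float` are assumptions made for this example.

```python
import numpy as np


def gradient(f, x, h, method="central"):
    """Approximate the gradient of f at x by finite differences.

    f: callable mapping a 1d array to a float (assumed signature).
    x: parameter vector as a 1d numpy array.
    h: stepsizes, one per entry of x.
    method: "forward", "backward" or "central".
    """
    grad = np.empty_like(x, dtype=float)
    for i in range(x.size):
        step = np.zeros_like(x, dtype=float)
        step[i] = h[i]  # h_i * e_i, a step along the i-th unit vector
        if method == "forward":
            grad[i] = (f(x + step) - f(x)) / h[i]
        elif method == "backward":
            grad[i] = (f(x) - f(x - step)) / h[i]
        else:  # central
            grad[i] = (f(x + step) - f(x - step)) / (2 * h[i])
    return grad
```

For example, for $f(x) = x_1^2 + x_2^2$ the approximation at $(1, 2)$ is close to the exact gradient $(2, 4)$.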

For the optimal stepsize $h_i$ the following rule of thumb is applied:

$$
h_i = \sqrt{\varepsilon_{\text{machine}}} \, (1 + |x_i|),
$$

where $\varepsilon_{\text{machine}}$ denotes the machine precision.
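The rule of thumb from [DS96] scales the square root of machine precision by the magnitude of each parameter. A minimal sketch, in which the function name `optimal_stepsize` is an assumption for illustration:

```python
import numpy as np


def optimal_stepsize(x):
    """Rule-of-thumb stepsizes for first-order finite differences.

    Scales sqrt(machine epsilon) by the magnitude of each parameter,
    so the step never degenerates for parameters near zero.
    """
    eps = np.finfo(float).eps  # machine precision for float64
    return np.sqrt(eps) * (1 + np.abs(x))
```

The `1 +` term keeps the stepsize bounded away from zero when a parameter itself is zero.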

With the above in mind, the Jacobian matrix is straightforward to calculate: for a vector-valued function, the finite difference w.r.t. the $i$-th variable of params_sr yields a vector, which forms the $i$-th column of the Jacobian matrix. The optimal stepsize rule remains the same.
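The column-by-column construction can be sketched as follows, here with central differences; `jacobian` and the signature `f(x) -> 1d array` are assumptions for this example, not the library's API.

```python
import numpy as np


def jacobian(f, x, h):
    """Approximate the Jacobian of a vector-valued f via central differences.

    Column i is the finite difference of f w.r.t. the i-th variable.
    """
    cols = []
    for i in range(x.size):
        step = np.zeros_like(x, dtype=float)
        step[i] = h[i]
        cols.append((f(x + step) - f(x - step)) / (2 * h[i]))
    return np.column_stack(cols)
```

For $f(x) = (x_1 x_2,\; x_1 + x_2)$ at $(2, 3)$ this is close to the exact Jacobian $\begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}$.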

For the Hessian matrix, we repeatedly call the finite differences functions. We show the second-order forward difference only; the analogous deductions for backward and central differences are left to the interested reader:

$$
\frac{\partial^2 f}{\partial x_i \partial x_j}(x) \approx \frac{f(x + h_i e_i + h_j e_j) - f(x + h_i e_i) - f(x + h_j e_j) + f(x)}{h_i h_j}.
$$
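The repeated application of forward differences for second derivatives can be sketched like this; `hessian_forward` is a hypothetical name, and the stepsizes `h` are passed in explicitly.

```python
import numpy as np


def hessian_forward(f, x, h):
    """Approximate the Hessian of f via repeated forward differences."""
    n = x.size
    hess = np.empty((n, n))
    f0 = f(x)  # reused in every entry
    for i in range(n):
        step_i = np.zeros(n)
        step_i[i] = h[i]
        for j in range(n):
            step_j = np.zeros(n)
            step_j[j] = h[j]
            hess[i, j] = (
                f(x + step_i + step_j) - f(x + step_i) - f(x + step_j) + f0
            ) / (h[i] * h[j])
    return hess
```

For $f(x) = x_1^2 x_2$ at $(1, 2)$ the result is close to the exact Hessian $\begin{pmatrix} 4 & 2 \\ 2 & 0 \end{pmatrix}$.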

For the optimal stepsize a different rule is used:

$$
h_i = \varepsilon_{\text{machine}}^{1/3} \, (1 + |x_i|).
$$
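The second-order rule replaces the square root with the cube root of machine precision, which balances truncation and rounding error for second derivatives. A small sketch under that assumption (`hessian_stepsize` is a name chosen for this example):

```python
import numpy as np


def hessian_stepsize(x):
    """Cube-root-of-eps rule of thumb for second-order differences."""
    eps = np.finfo(float).eps
    return eps ** (1 / 3) * (1 + np.abs(x))
```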

Similar derivations yield the elements of the Hessian matrix calculated with backward and central differences.

**References:**

- [DS96] J.E. Dennis and R.B. Schnabel. *Numerical Methods for Unconstrained Optimization and Nonlinear Equations*. Classics in Applied Mathematics. Society for Industrial and Applied Mathematics, 1996. ISBN 9780898713640. URL: https://books.google.de/books?id=RtxcWd0eBD0C&redir_esc=y.