
The Jacobian Matrix as a path to simplify a nonlinear function

Straight from the article Theory for PEST Users, by Zhulu Lin, University of Georgia:

Its log-transformed form gives a more manageable linear function.

b26.PNG

In the given equation, the Q and H variables are related through the b and d coefficients.

b30.PNG

This notation, known as a simple linear regression model, aids the estimation of the original b and d coefficients.
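As a sketch of that idea, assume the relation between Q and H is a power law, Q = b·H^d (the exact equation lives in the images above, so this form is an assumption). Taking logarithms gives log Q = log b + d·log H, a simple linear regression whose slope and intercept recover d and b:

```python
import numpy as np

# Assumed power-law relation Q = b * H**d; taking logs gives the
# linear model log(Q) = log(b) + d * log(H).

# Synthetic data generated with known coefficients b = 2.5, d = 1.6
rng = np.random.default_rng(0)
H = np.linspace(0.5, 4.0, 30)
Q = 2.5 * H**1.6 * np.exp(rng.normal(0, 0.02, H.size))  # small noise

# Ordinary least squares on the log-transformed variables
d_hat, log_b_hat = np.polyfit(np.log(H), np.log(Q), 1)
b_hat = np.exp(log_b_hat)

print(b_hat, d_hat)  # close to the true b = 2.5 and d = 1.6
```

The fit happens entirely in log space; only at the end is the intercept exponentiated back to recover b.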

In a multiple linear regression model,

the y is the dependent or response variable,

while any number of x's constitute the independent or predictor variables.

b29.PNG

Writing a set of n observations together, we have:

b31.PNG

... in matrix formulation it becomes:

b31.PNG

Or, more compactly, just:

b343.PNG
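The compact form y = Xβ + ε leads directly to the ordinary least-squares estimate β̂ = (XᵀX)⁻¹Xᵀy. A minimal sketch with synthetic data (the dimensions and true coefficients are arbitrary, chosen only for illustration):

```python
import numpy as np

# Matrix form y = X @ beta + eps for n observations and p predictors:
# each row of X is one observation (1, x1, ..., xp).
rng = np.random.default_rng(1)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.uniform(0, 1, (n, p))])
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(0, 0.01, n)

# Least-squares estimate: beta_hat = (X^T X)^-1 X^T y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(np.round(beta_hat, 2))  # close to beta_true
```

Solving the normal equations with `np.linalg.solve` avoids forming the matrix inverse explicitly, which is both cheaper and numerically safer.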

Here, finally,

we reach the X matrix

as the derivative of Xβ with respect to β.
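That claim can be checked numerically: perturb each component of β by a small amount and watch how Xβ responds. Column by column, the finite-difference derivative reproduces X itself (a sketch with an arbitrary random X):

```python
import numpy as np

# Numerical check that the derivative of X @ beta with respect to beta is X.
rng = np.random.default_rng(2)
X = rng.normal(size=(4, 3))
beta = rng.normal(size=3)
delta = 1e-6

J = np.empty_like(X)
for j in range(3):
    db = np.zeros(3)
    db[j] = delta
    J[:, j] = (X @ (beta + db) - X @ beta) / delta  # column j of the Jacobian

print(np.allclose(J, X, atol=1e-4))  # the Jacobian of X @ beta equals X
```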

The parameter estimation process seeks the best map of heads (observations) through various attempts to find a good distribution of its set of parameters or variables (k).

08_Numerical_covariance.PNG

In summary, PEST incrementally varies each of the predefined parameters (P1 to Pn) and registers the corresponding results for each observation (O1 to On).

This is the Jacobian Matrix: the first-order partial derivatives of a multivariate function.

35_ND.PNG

Here the derivatives provide a way to estimate the coefficients of a nonlinear equation.

36_ND.PNG
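The procedure PEST follows can be sketched in a few lines: run the model at the current parameter values, then increment each parameter in turn and record the change in every observation. The model function below is hypothetical (a stand-in for a groundwater model run producing heads), and the relative step size is an illustrative choice:

```python
import numpy as np

def model(params):
    """Hypothetical nonlinear model mapping parameters to simulated
    observations (stands in for a model run producing heads)."""
    b, d = params
    H = np.array([0.5, 1.0, 2.0, 3.0])
    return b * H**d

def jacobian(params, rel_step=0.01):
    """Finite-difference Jacobian, in the spirit of PEST: increment each
    parameter in turn and record the change in every observation."""
    base = model(params)
    J = np.empty((base.size, len(params)))
    for j, p in enumerate(params):
        step = rel_step * abs(p)
        perturbed = list(params)
        perturbed[j] = p + step
        J[:, j] = (model(perturbed) - base) / step
    return J

J = jacobian([2.5, 1.6])
print(np.round(J, 3))  # one row per observation, one column per parameter
```

Each column of J answers one question: how much does every observation move when this single parameter is nudged?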

Beta μ

Here we go!

One of the first PEST control variables is Noptmax. 

26.png
10.png
24.png
07.png
11.png

Use Noptmax = -1 to obtain a first, preliminary parameter sensitivity data set (SEN).
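In the PEST control file (.pst), NOPTMAX is the first value on one of the lines of the "* control data" section. A schematic fragment is shown below; the other values on that line are stopping criteria (named PHIREDSTP, NPHISTP, NPHINORED, RELPARSTP and NRELPAR in the PEST documentation), and their numbers here are illustrative, so check your own file for the exact layout:

```
* control data
restart estimation
...
  -1  0.005  4  4  0.005  4
```

With NOPTMAX = -1, PEST runs the model, computes the Jacobian and the associated sensitivity statistics, and then stops without attempting any parameter upgrade.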

37_ND.PNG
04+kz-1.png


But to grasp the meaning of these variables visually, a couple more concepts are needed, mainly about singular value decomposition.

25.png

Further on, the control variables Rlambda, RlamFAC and NumLAM come to enhance your chances of finding the best parameter set.

...
