ANN_JACOB — Function to calculate the Jacobian performance vector of a feed-forward artificial neural network
[JE,JJ,normJE,[JX]] = ANN_JACOB(IN,W,IN_W,HID_OUT,ERR_OUT)
IN      : Input data (matrix [P x N], where P is the number of input neurons and N the number of input patterns)
W       : Weight and bias values (3-D array [max(Nhid,M) x max(P+1,Nhid+1) x 2]; the layout is illustrated in the sketch after this listing)
          W(1:Nhid,1,1) are the biases of the hidden neurons
          W(1:Nhid,2:P+1,1) are the weights of the hidden neurons (P weights for each hidden neuron)
          W(1:M,1,2) are the biases of the output neurons
          W(1:M,2:Nhid+1,2) are the weights of the output neurons (Nhid weights for each output neuron)
IN_W    : Weighted inputs
HID_OUT : Outputs of the hidden layer
ERR_OUT : Errors (target - network output)
JE      : Jacobian times errors (matrix [Nb_par x Nb_out], where Nb_par is the number of parameters in the network and Nb_out the number of output neurons)
JJ      : Transposed Jacobian times Jacobian (matrix [Nb_par x Nb_par])
normJE  : Transposed JE matrix times JE
JX      : Full Jacobian matrix (optional output)
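
The padded layout of W can be illustrated with a small, purely hypothetical example. The sizes P = 3, Nhid = 4 and M = 2 are arbitrary choices, and W1/W2 are illustrative names rather than variables of ANN_JACOB; only the indexing scheme follows the description above.

P = 3;  Nhid = 4;  M = 2;                      % hypothetical network sizes
W = zeros(max(Nhid,M), max(P+1,Nhid+1), 2);    % padded 3-D weight/bias array
W(1:Nhid,1,1)     = randn(Nhid,1);             % biases of the hidden neurons
W(1:Nhid,2:P+1,1) = randn(Nhid,P);             % P weights for each hidden neuron
W(1:M,1,2)        = randn(M,1);                % biases of the output neurons
W(1:M,2:Nhid+1,2) = randn(M,Nhid);             % Nhid weights for each output neuron
W1 = W(1:Nhid,1:P+1,1);                        % [bias, weights] of the hidden layer, [Nhid x (P+1)]
W2 = W(1:M,1:Nhid+1,2);                        % [bias, weights] of the output layer, [M x (Nhid+1)]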
This function is only valid for feed-forward networks with one hidden layer.
The activation function of the hidden layer is the hyperbolic tangent; the output layer uses the identity function.
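
A minimal forward-pass sketch of this architecture is given below. It assumes that IN_W is the net input of the hidden layer, that targets are supplied as a hypothetical [M x N] matrix TARGET, and that the random data, sizes and the name NET_OUT are invented for illustration; the computation performed by ANN_JACOB's caller may differ.

P = 3; Nhid = 4; M = 2; N = 50;                         % hypothetical sizes
IN      = randn(P,N);                                   % input patterns [P x N]
TARGET  = randn(M,N);                                   % hypothetical targets [M x N]
W       = randn(max(Nhid,M), max(P+1,Nhid+1), 2);       % weights/biases in the layout above
IN_W    = W(1:Nhid,1:P+1,1) * [ones(1,N); IN];          % hidden-layer weighted inputs [Nhid x N]
HID_OUT = tanh(IN_W);                                   % hyperbolic tangent activation
NET_OUT = W(1:M,1:Nhid+1,2) * [ones(1,N); HID_OUT];     % identity activation on the output layer
ERR_OUT = TARGET - NET_OUT;                             % errors (target - network output)
[JE,JJ,normJE] = ANN_JACOB(IN,W,IN_W,HID_OUT,ERR_OUT);  % documented call signature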
This function is used by the function 'LMBR' to train feed-forward neural networks with the Levenberg-Marquardt algorithm under Bayesian regularization.
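
For context, a hedged sketch of how JJ and JE typically enter a Levenberg-Marquardt step is shown below for the single-output case (M = 1). The damping parameter mu, the stand-in random values and the sign convention are assumptions rather than LMBR's actual code; the parameter count follows from the weight layout above.

P = 3; Nhid = 4; M = 1;                     % hypothetical sizes, single output
Nb_par = Nhid*(P+1) + M*(Nhid+1);           % number of network parameters (21 here)
A  = randn(Nb_par);  JJ = A.'*A;            % stand-in for the transposed-Jacobian-times-Jacobian matrix
JE = randn(Nb_par,1);                       % stand-in for the Jacobian-times-errors vector (M = 1)
mu = 1e-2;                                  % Levenberg-Marquardt damping parameter
dw = (JJ + mu*eye(Nb_par)) \ JE;            % damped Gauss-Newton step in parameter space
% The step is added to the current parameters (reshaped back into the padded W
% array); mu is decreased when the error improves and increased otherwise. The
% sign above assumes JE was built from the Jacobian of the network output with
% errors defined as target - output; with the opposite convention, use -dw.
% Under Bayesian regularization, LMBR additionally weights these quantities
% with the hyperparameters alpha and beta.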