Name

ANN_OIPDERIV — Function estimating the influence of each input variable on the output variable of an ANN, based on its partial derivatives

Calling Sequence

[CI [,PD]]=ANN_OIPDERIV(IN,Nhid,W)

Parameters

IN

Input data (matrix [PxN] where P is the number of input neurons and N the number of input patterns)

Nhid

Number of neurons in the hidden layer

W

Weight and bias values (3-dimensional hypermatrix [max(Nhid,M) x max(P+1,Nhid+1) x 2], where M is the number of output neurons; here M=1 since the network has a single output). A sketch of this layout is given after the parameter list.

W(1:Nhid,1,1) are the biases of the hidden neurons

W(1:Nhid,2:P+1,1) are the weights of the hidden neurons (P weights for each hidden neuron)

W(1:M,1,2) are the biases of the output neurons

W(1:M,2:Nhid+1,2) are the weights of the output neurons (Nhid weights for each output neuron)

CI

Sum of squares over all patterns of the partial derivatives of the output with respect to each input (matrix [Px1])

PD

Partial derivatives of the output with respect to the input for each pattern (matrix [PxN])
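
For illustration, here is a minimal sketch of how a weight hypermatrix with the layout described for W could be built by hand (hypothetical random values; in practice W comes from a calibration function such as ANN_REPET, possibly reshaped with ANN_CONV_W):

    // Hypothetical sizes: P=2 inputs, Nhid=4 hidden neurons, M=1 output
    P=2; Nhid=4; M=1;
    W = zeros(max(Nhid,M), max(P+1,Nhid+1), 2);
    W(1:Nhid,1,1)     = rand(Nhid,1);   // biases of the hidden neurons
    W(1:Nhid,2:P+1,1) = rand(Nhid,P);   // weights of the hidden neurons
    W(1:M,1,2)        = rand(M,1);      // bias of the output neuron
    W(1:M,2:Nhid+1,2) = rand(M,Nhid);   // weights of the output neuron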

Description

  • The activation function is the hyperbolic tangent for the hidden layer and the identity for the output layer.

  • The network should have only ONE output.

  • The values of CI indicate the relative importance of each input in the computation of the output: the larger CI(i), the more the output depends on input i (see the sketch below).
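
With a tanh hidden layer and an identity output layer, the partial derivatives follow directly from the chain rule. A minimal sketch of the computation, assuming the weight layout described above (this mirrors what ANN_OIPDERIV returns, not necessarily its actual implementation):

    // a = tanh(b1 + W1*x)           hidden activations
    // y = b2 + W2*a                 network output (single output, M=1)
    // dy/dx = W1'*((1-a.^2).*W2')   chain rule
    [P,N] = size(IN);
    b1 = W(1:Nhid,1,1);  W1 = W(1:Nhid,2:P+1,1);
    b2 = W(1,1,2);       W2 = W(1,2:Nhid+1,2);
    PD = zeros(P,N);
    for n = 1:N
        a = tanh(b1 + W1*IN(:,n));           // hidden layer activations
        PD(:,n) = (W1')*((1-a.^2).*(W2'));   // partial derivatives for pattern n
    end
    CI = sum(PD.^2,'c');                     // sum of squares over the N patterns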

Examples

    // INPUT
    t=1:0.03:10;
    // Two input variables: the first explains the output values, the second is pure random noise:
    IN  = [sin(t)./(t+%eps)+rand(1,size(t,2))/10;rand(1,size(t,2))/2];
    TARG= [sin(t)+rand(1,size(t,2))/20];
    
    // Write a random number matrix to a temporary file (used by ANN_REPET below)
    ChemRAND  = TMPDIR+'/randMat.txt'; fprintfMat(ChemRAND,rand(1000000,1),'%5.4f');

    // Network calibration with 10 repetitions, 30 epochs (default options of ANN_REPET) and 4 hidden neurons
    Nhid=4;
    [W,OUT,C,RMSE,IN_Stat,TARG_Stat]=ANN_REPET(IN,TARG,Nhid,ChemRAND);
    
    // Delete the random number matrix
    mdelete(ChemRAND);
    
    [CI,PD]=ANN_OIPDERIV(IN,Nhid,ANN_CONV_W(W(:,3),size(IN,1),Nhid,1,'vector'));
    // Here:
    //  CI(1) = 857.57
    //  CI(2) = 1.16    >> the second variable has virtually no influence on the output variable
    
    // Results
    fig=scf(0);fig.figure_size=[500,700];
    subplot(3,1,1),plot(t,[TARG' OUT(:,4)]);legend(['Target' 'Network output (validation mode)'],a=1,%f);
    subplot(3,1,2),plot(t,PD');legend(['Partial derivative of output with resp. to 1st var.' 'Partial derivative of output with resp. to 2nd var.'],a=1,%f);
    subplot(3,1,3),plot2d2(1:3,[sum(PD.^2,'c')/sum(PD.^2);0]*100);gr  = gca();gr.auto_ticks=['off','on'];
    xtitle('','','% of total Partial derivative');
    XTK=tlist(['ticks','locations','labels'],[1.5,2.5],['Var 1' 'Var 2']);gr.x_ticks=XTK;
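
The relative contributions plotted in the last subplot can also be read directly from CI; a one-line check (the percentages are those implied by the CI values reported above):

    // Share of each input in the total sum of squared partial derivatives
    CI_pct = CI/sum(CI)*100;   // about [99.9; 0.1] here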
  

See Also

ANN_CONV_W, ANN_JACOB, ANN_NORM, ANN_LMBR, ANN_SIM

Authors

Julien Lerat

CEMAGREF Antony, HBAN Unit, julien.lerat@cemagref.fr

Bibliography

MacKay, D.J.C. (1992). Bayesian Interpolation. Neural Computation, vol. 4, no. 3, pp. 415-447.

Foresee, F.D. and Hagan, M.T. (1997). Gauss-Newton Approximation to Bayesian Learning. Proceedings of the International Joint Conference on Neural Networks, June 1997.