Name

nisp_sobolsa — Compute sensitivity indices by the Sobol, Ishigami and Homma method.

Calling Sequence

   s = nisp_sobolsa ( func , nx )
   s = nisp_sobolsa ( func , nx , randgen )
   s = nisp_sobolsa ( func , nx , randgen , n )
   [ s , nbevalf ] = nisp_sobolsa ( ... )
   [ s , nbevalf , st ] = nisp_sobolsa ( ... )
   [ s , nbevalf , st , mi , si ] = nisp_sobolsa ( ... )

Parameters

func :

a function or a list, the function to be evaluated.

nx :

a 1-by-1 matrix of floating point integers, the number of inputs of the function.

randgen :

a function or a list, the random number generator. (default = uniform random variables)

n :

a 1-by-1 matrix of floating point integers, the number of Monte-Carlo experiments for each sensitivity index (default n=10000).

s :

a nx-by-1 matrix of doubles, the first order sensitivity indices.

nbevalf :

a nx-by-1 matrix of doubles, the actual number of function evaluations.

st :

a nx-by-1 matrix of doubles, the total sensitivity indices.

mi :

a m-by-nx matrix of doubles, the multi-indices of the variables in si, where m=2^nx - 1. Each row in mi represents a group of variables. We have mi(k,i) = 1 if Xi is in the group of variables and 0 if not.

si :

a m-by-1 matrix of doubles, the partial sensitivity indices.

Description

The algorithm uses the Sobol method to compute the first order sensitivity indices.

This method assumes that all the input random variables are independent.

Any optional input argument equal to the empty matrix will be set to its default value.

The function should have header y = func ( x ) where x is a m-by-nx matrix of doubles, where m is the number of experiments to perform, nx is the number of input random variables, and y is a m-by-1 matrix of doubles.

It might happen that the function requires additional arguments to be evaluated. In this case, we can use the following feature. The function func can also be a list, with header y = f ( x , a1 , a2 , ... ). In this case the func variable should hold the list (f,a1,a2,...) and the input arguments a1, a2, will automatically be appended at the end of the calling sequence of f.
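As a sketch of this feature, consider a hypothetical function myfunc which depends on two extra parameters a and b (the names myfunc, a and b are illustrative, not part of the toolbox):

function y = myfunc ( x , a , b )
    // Vectorized evaluation: x is m-by-2, y is m-by-1.
    y = a .* sin(x(:,1)) + b .* x(:,2).^2
endfunction
// Wrap the function and its extra arguments in a list:
s = nisp_sobolsa ( list(myfunc,7,0.1) , 2 , myrandgen , 1000 );

Here myrandgen is a random number generator with the header described below.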

The random number generator must have header x = randgen ( m , i ) where

  • m is a 1-by-1 matrix of floating point integers, the number of experiments to perform
  • i is a 1-by-1 matrix of floating point integers representing the index of the input variable,
  • and x is a m-by-1 matrix of doubles.

On output, x must contain random numbers sampled from the distribution function associated with the input variable #i, where i is in the set {1,2,...,nx}.

Since the input random variables are independent, we can generate the samples associated with Xi independently from the samples for Xj, for i not equal to j. This is why the callback randgen only needs the index i of the input random variable. If there is some dependency between the inputs (e.g. correlation), then the randgen callback would need to generate all the samples jointly. In this case, the current function cannot be used.
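The index i lets the generator draw each input from its own distribution. A minimal sketch, assuming X1 is uniform in [-1,1] and X2 is standard normal (the name mymixedrg and the distributions are illustrative):

function x = mymixedrg ( m , i )
    // Return m samples of the i-th input variable.
    select i
    case 1 then
        x = grand(m,1,"unf",-1,1)
    case 2 then
        x = grand(m,1,"nor",0,1)
    end
endfunction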

It might happen that the random number generator requires additional arguments to be evaluated. In this case, we can use the following feature. The function randgen can also be a list, with header u = rg ( m , i , a1 , a2 , ... ). In this case the randgen variable should hold the list (rg,a1,a2,...) and the input arguments a1, a2, will automatically be appended at the end of the calling sequence of rg.
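For example, a generator parameterized by the bounds a and b of a uniform distribution might look as follows (the names myrg, myfunc, a and b are illustrative, not part of the toolbox):

function x = myrg ( m , i , a , b )
    // All inputs uniform in [a,b]; a and b come from the list.
    x = grand(m,1,"unf",a,b)
endfunction
// Wrap the generator and its extra arguments in a list:
s = nisp_sobolsa ( myfunc , nx , list(myrg,-%pi,%pi) , 1000 );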

TODO: test it on a test case with 4 input variables.

Examples

// Compute the first order sensitivity indices of the ishigami function.
// Three random variables uniform in [-pi,pi].
function y = ishigami (x)
    a = 7.
    b = 0.1
    s1 = sin(x(:,1))
    s2 = sin(x(:,2))
    x34 = x(:,3).^4
    y(:,1) = s1 + a.*s2.^2 + b.*x34.*s1
endfunction
function x = myrandgen ( m , i )
    x = grand(m,1,"unf",-%pi,%pi)
endfunction
a=7.;
b=0.1;
exact.expectation = a/2;
exact.var = 1/2 + a^2/8 + b*%pi^4/5 + b^2*%pi^8/18;
exact.S1 = (1/2 + b*%pi^4/5+b^2*%pi^8/50)/exact.var;
exact.S2 = (a^2/8)/exact.var;
exact.S3 = 0;
n = 1000;
nx = 3;
[ s , nbevalf ] = nisp_sobolsa ( ishigami , nx , myrandgen , n );
mprintf("S1 : %f (expected = %f)\n", s(1), exact.S1);
mprintf("S2 : %f (expected = %f)\n", s(2), exact.S2);
mprintf("S3 : %f (expected = %f)\n", s(3), exact.S3);

// See the variability of the sensitivity indices.
for k = 1 : 100
    [ s , nbevalf ] = nisp_sobolsa ( ishigami , nx , myrandgen , 1000 );
    sall(k,:) = s';
end
scf();
subplot(2,2,1);
histplot(10,sall(:,1));
xtitle("Variability of the sensitivity index for X1","S1","P(S1)");
subplot(2,2,2);
histplot(10,sall(:,2));
xtitle("Variability of the sensitivity index for X2","S2","P(S2)");
subplot(2,2,3);
histplot(10,sall(:,3));
xtitle("Variability of the sensitivity index for X3","S3","P(S3)");

// See the convergence of the sensitivity indices
n=10;
stacksize("max");
for k = 1 : 100
    tic();
    [ s , nbevalf ] = nisp_sobolsa ( ishigami , nx , myrandgen , n );
    sc(k,:) = s';
    t = toc();
    mprintf("Run #%d, n=%d, t=%.2f (s)\n",k,n,t);
    if ( t > 1 ) then
        break
    end
    n = 1.2*n;
end
h = scf();
subplot(1,2,1);
plot(1:k,sc(1:k,1),"bx-");
plot(1:k,sc(1:k,2),"ro-");
plot(1:k,sc(1:k,3),"g*-");
xtitle("Convergence of the sensitivity indices","Number of simulations","S");
legend(["S1","S2","S3"]);
subplot(1,2,2);
plot(1:k,abs(sc(1:k,1)-exact.S1),"bx-");
plot(1:k,abs(sc(1:k,2)-exact.S2),"ro-");
plot(1:k,abs(sc(1:k,3)-exact.S3),"g*-");
xtitle("Convergence of the sensitivity indices","Number of simulations","|S-exact|");
legend(["S1","S2","S3"]);
h.children(1).log_flags="lnn";

// Compute the total sensitivity indices.
[ s , nbevalf , st ] = nisp_sobolsa ( ishigami , nx , myrandgen )

// Compute all the sensitivity indices.
[ s , nbevalf , st , mi , si ] = nisp_sobolsa ( ishigami , nx , myrandgen )
// The ANOVA decomposition can be seen more easily in
[mi si]
// The sum of si is 1.
sum(si) // expected = 1.


Authors

Michael Baudin - 2011 - DIGITEO