
nisp_sobolsaFirst

Compute first order sensitivity indices by the method of Sobol, Ishigami and Homma.

Calling Sequence

s = nisp_sobolsaFirst ( func , nx )
s = nisp_sobolsaFirst ( func , nx , randgen )
s = nisp_sobolsaFirst ( func , nx , randgen , n )
s = nisp_sobolsaFirst ( func , nx , randgen , n , inrange )
s = nisp_sobolsaFirst ( func , nx , randgen , n , inrange , c )
[ s, nbevalf ] = nisp_sobolsaFirst ( ... )
[ s, nbevalf, smin] = nisp_sobolsaFirst ( ... )
[ s, nbevalf, smin, smax] = nisp_sobolsaFirst ( ... )

Parameters

func :

a function or a list, the function to be evaluated.

nx :

a 1-by-1 matrix of floating point integers, the number of inputs of the function.

randgen :

a function or a list, the random number generator (default: uniform random variables).

n :

a 1-by-1 matrix of floating point integers (default n=10000), the number of Monte-Carlo experiments for each sensitivity index.

inrange :

a 1-by-1 matrix of booleans (default inrange = %t), set to true to restrict the sensitivity indices to the interval [0,1].

c :

a 1-by-1 matrix of doubles (default c = 1-0.95), the complementary level of the confidence interval: the confidence level is 1-c.

s :

a nx-by-1 matrix of doubles, the first order sensitivity indices

nbevalf :

a nx-by-1 matrix of doubles, the actual number of function evaluations.

smin :

a nx-by-1 matrix of doubles, the lower bound of the confidence interval.

smax :

a nx-by-1 matrix of doubles, the upper bound of the confidence interval.

Description

The algorithm uses the Sobol method to compute the first order sensitivity indices. For the input variable Xi, the first order index is Si = V(E(Y|Xi)) / V(Y), that is, the fraction of the variance of the output Y which is explained by Xi alone.

This method assumes that all the input random variables are independent.

On output, if inrange is true, then the sensitivity indices are forced into the range [0,1]. Otherwise, each estimated index is a random variable which can fall below or above the exact value. For example, if the exact value is zero, then the estimate may be negative. The inrange option manages this situation.

The confidence level of the interval [smin,smax] is 1-c, that is, the interval is expected to contain the exact sensitivity index with probability 1-c. The interval is computed by the Fisher transformation of the sensitivity indices, as published by Martinez, 2005 (see below).
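As an illustration of the Fisher transformation, the following sketch computes a confidence interval for a correlation-type estimate r obtained from n samples; the function fisherci and its use of the classical 1/sqrt(n-3) standard error are assumptions made for illustration, not the internal code of nisp_sobolsaFirst.

function [smin, smax] = fisherci ( r , n , c )
    // Sketch only (not the internal code of nisp_sobolsaFirst)
    z = atanh(r)                         // Fisher transformation
    zc = cdfnor("X", 0, 1, 1-c/2, c/2)   // standard normal quantile of order 1-c/2
    delta = zc / sqrt(n - 3)             // approximate half-width in the transformed space
    smin = tanh(z - delta)               // back-transform the bounds
    smax = tanh(z + delta)
endfunction
// For example, with r = 0.3, n = 1000 and c = 1-0.95:
[smin, smax] = fisherci ( 0.3 , 1000 , 1-0.95 )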

Any optional input argument equal to the empty matrix will be set to its default value.

The function must have the header

y = func ( x )

where x is an m-by-nx matrix of doubles, m is the number of experiments to perform, nx is the number of input random variables, and y is an m-by-1 matrix of doubles.
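For example, a vectorized function with this header might look like the following sketch (the name myfunc and the choice nx = 3 are only illustrative):

function y = myfunc ( x )
    // x is an m-by-3 matrix of doubles, y is an m-by-1 matrix of doubles
    y = x(:,1) .* x(:,2) + x(:,3).^2
endfunction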

It might happen that the function requires additional arguments to be evaluated. In this case, we can use the following feature. The function func can also be the list (f,a1,a2,...), where f is the function with header:

y = f ( x , a1 , a2 , ... )

and the input arguments a1, a2, ... are automatically appended at the end of the calling sequence of f.
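For example, assuming a hypothetical function paramfunc which requires two extra parameters a and b, the extra arguments can be passed through a list (the names and values below are only illustrative):

function y = paramfunc ( x , a , b )
    // a and b are appended after x at each evaluation
    y = a .* x(:,1) + b .* x(:,2).^2
endfunction
a = 2.;
b = 3.;
s = nisp_sobolsaFirst ( list(paramfunc,a,b) , 2 );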

The random number generator must have the header

x = randgen ( m , i )

where m is the number of experiments to perform, i is the index of the input random variable, and x is an m-by-1 matrix of doubles.

On output, x must contain random numbers sampled from the distribution function associated with the input variable #i, where i is in the set {1,2,...,nx}.

Since the input random variables are independent, we can generate the samples associated with Xi independently from the samples for Xj, for i not equal to j. This is why the callback randgen only needs the index i of the input random variable. If there is some dependency between the inputs (e.g. correlation), then the randgen callback would have to generate all the samples jointly. In this case, the current function cannot be used.

It might happen that the random number generator requires additional arguments to be evaluated. In this case, we can use the following feature. The function randgen can also be the list (rg,a1,a2,...), where rg is a function with the header:

u = rg ( m , i , a1 , a2 , ... )

and the input arguments a1, a2, ... are automatically appended at the end of the calling sequence of rg.
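For example, assuming a hypothetical generator myunifrnd parametrized by the bounds lo and hi, the extra arguments can be passed through a list (the names below are only illustrative):

function y = sumsq ( x )
    y = x(:,1).^2 + x(:,2).^2
endfunction
function x = myunifrnd ( m , i , lo , hi )
    // lo and hi are appended after (m,i) at each call;
    // here every input variable #i is uniform in [lo,hi]
    x = distfun_unifrnd ( lo , hi , m , 1 )
endfunction
s = nisp_sobolsaFirst ( sumsq , 2 , list(myunifrnd,-1,1) );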

Examples

// Compute the first order sensitivity indices of the ishigami function.
// Three random variables uniform in [-pi,pi].
function y=ishigami(x)
    a=7.
    b=0.1
    s1=sin(x(:,1))
    s2=sin(x(:,2))
    x34 = x(:,3).^4
    y(:,1) = s1 + a.*s2.^2 + b.*x34.*s1
endfunction
function x=myrandgen(m, i)
    x = distfun_unifrnd(-%pi,%pi,m,1)
endfunction
a=7.;
b=0.1;
exact = nisp_ishigamisa ( a , b )
n = 1000;
nx = 3;
[s,nbevalf]=nisp_sobolsaFirst(ishigami,nx,myrandgen,n)

// See the confidence interval.
[s,nbevalf,smin,smax]=nisp_sobolsaFirst(ishigami,nx,myrandgen,n)

// Configure inrange
distfun_seedset(0);
// s(3) is zero
s=nisp_sobolsaFirst(ishigami,nx,myrandgen,n,%t)
// s(3) is negative
s=nisp_sobolsaFirst(ishigami,nx,myrandgen,n,%f)

// Configure the 95% confidence interval
[s,nbevalf,smin,smax]=nisp_sobolsaFirst(ishigami,nx,myrandgen,n,[],1-0.95)
// Configure the 99% confidence interval
[s,nbevalf,smin,smax]=nisp_sobolsaFirst(ishigami,nx,myrandgen,n,[],1-0.99)

// See the variability of the sensitivity indices.
for k = 1 : 100
    [s,nbevalf]=nisp_sobolsaFirst(ishigami,nx,myrandgen,1000);
    sall(k,:) = s';
end
scf();
subplot(2,2,1);
histplot(10,sall(:,1));
xtitle("Variability of the sensitivity index for X1","S1","Frequency");
subplot(2,2,2);
histplot(10,sall(:,2));
xtitle("Variability of the sensitivity index for X2","S2","Frequency");
subplot(2,2,3);
histplot(10,sall(:,3));
xtitle("Variability of the sensitivity index for X3","S3","Frequency");

// See the convergence of the sensitivity indices
n=10;
stacksize("max");
for k = 1 : 100
    tic();
    [s,nbevalf]=nisp_sobolsaFirst(ishigami,nx,myrandgen,n);
    sc(k,:) = s';
    t = toc();
    mprintf("Run #%d, n=%d, t=%.2f (s)\n",k,n,t);
    if ( t > 1 ) then
        break
    end
    n = ceil(1.2*n);   // increase n, keeping it an integer
end
h = scf();
subplot(1,2,1);
plot(1:k,sc(1:k,1),"bx-");
plot(1:k,sc(1:k,2),"ro-");
plot(1:k,sc(1:k,3),"g*-");
mytitle="Convergence of the sensitivity indices";
xtitle(mytitle,"Number of simulations","S");
legend(["S1","S2","S3"]);
subplot(1,2,2);
plot(1:k,abs(sc(1:k,1)-exact.S1),"bx-");
plot(1:k,abs(sc(1:k,2)-exact.S2),"ro-");
plot(1:k,abs(sc(1:k,3)-exact.S3),"g*-");
mytitle="Convergence of the sensitivity indices";
xtitle(mytitle,"Number of simulations","|S-exact|");
legend(["S1","S2","S3"]);
h.children(1).log_flags="lnn";

Authors

Bibliography

Jean-Marc Martinez (jean-marc.martinez@cea.fr), "Analyse de sensibilité globale par décomposition de la variance", GdR Ondes & Mascot Num, Institut Henri Poincaré, 13 January 2011.

Julien Jacques, "Contribution à l'analyse de sensibilité et à l'analyse discriminante généralisée", PhD thesis, 2005.

