Experiments with the Integration Test Problems toolbox.
In this section, we compare the accuracy of Monte-Carlo and Quasi-Monte-Carlo integration. For the Quasi-Monte-Carlo experiment, we use a Sobol sequence. We consider problem #17 (Sobol Product), which becomes harder as the number of dimensions increases, because high-order interactions account for a large part of the variability of the function. We consider here a problem in 10 dimensions.
```scilab
nprob=17;
shortname = intprb_getname(nprob);
longname = intprb_getname(nprob,%t);
[n,p] = intprb_getsetup(nprob);
e = intprb_getexpect(n,p,nprob);
npoints=2^(1:15);
abserrLDS=[];
abserrSRS=[];
for callf = npoints
    integr=intprb_getcrudeld(nprob,n,p,callf);
    abserrLDS($+1)=abs(integr-e);
    integr=intprb_getcrudemc(nprob,n,p,callf);
    abserrSRS($+1)=abs(integr-e);
end
scf();
plot(npoints,abserrLDS,"bo-");
plot(npoints,abserrSRS,"ro-");
plot(npoints,1 ./npoints,"b-");
plot(npoints,1 ./sqrt(npoints),"r-");
a=gca();
a.log_flags="lln";
xtitle("Convergence of approximate integral",..
    "Number of Simulations","Absolute Error");
legend(["Quasi-Monte-Carlo","Monte-Carlo",..
    "$n^{-1}$","$n^{-1/2}$"]);
```
The previous script produces the following graphics.
We see that Monte-Carlo converges at a rate close to $n^{-1/2}$, while Quasi-Monte-Carlo converges at a rate close to $n^{-1}$. For $n=1000$ simulations, for example, the absolute error is close to 0.1 for Monte-Carlo, while it is close to 0.001 for Quasi-Monte-Carlo.
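The same comparison can be reproduced outside of Scilab. The following is a minimal Python sketch using SciPy's `qmc` module; since the intprb toolbox's exact integrand for problem #17 is not reproduced here, it uses as a stand-in Sobol's g-function with all coefficients set to zero, $f(x) = \prod_i |4 x_i - 2|$, whose exact integral over $[0,1]^{10}$ is 1 and in which high-order interactions dominate.

```python
# Monte-Carlo vs Quasi-Monte-Carlo on a 10-dimensional product integrand.
# Stand-in integrand (assumption): Sobol's g-function with a_i = 0,
# f(x) = prod_i |4*x_i - 2|; its exact integral over [0,1]^10 is 1.
import numpy as np
from scipy.stats import qmc

d = 10
exact = 1.0

def f(x):
    # x has shape (n, d); returns the integrand at each of the n points
    return np.prod(np.abs(4.0 * x - 2.0), axis=1)

n = 2**13  # powers of two keep the Sobol sequence well balanced

# Monte-Carlo: simple random sampling
rng = np.random.default_rng(0)
err_mc = abs(f(rng.random((n, d))).mean() - exact)

# Quasi-Monte-Carlo: scrambled Sobol sequence
x = qmc.Sobol(d=d, scramble=True, seed=0).random(n)
err_qmc = abs(f(x).mean() - exact)

print(f"MC error:  {err_mc:.4f}")
print(f"QMC error: {err_qmc:.4f}")
```

With the same budget of function evaluations, the Sobol estimate is typically one to two orders of magnitude more accurate on this integrand, in line with the $n^{-1}$ versus $n^{-1/2}$ rates observed above.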
In this section, we compare the accuracy of various Quasi-Monte-Carlo sequences. We use the Halton, Faure, Sobol and Niederreiter sequences, for a number of dimensions equal to 5, 10, 20 and 40.
```scilab
function benchLD(nprob, n)
    npoints=2^(1:17);
    colmat=["bo-","ro-","go-","ko-"];
    ldsmat=["halton","sobol","faure",..
        "niederreiter"];
    for ilds=1:size(ldsmat,"*")
        abserrLDS=[];
        for callf = npoints
            integr=intprb_getcrudeld(nprob,n,p,..
                callf,ldsmat(ilds));
            abserrLDS($+1)=abs(integr-e);
        end
        plot(npoints,abserrLDS,colmat(ilds));
    end
    plot(npoints,1 ./npoints,"b-");
    a=gca();
    a.log_flags="lln";
    stitle=msprintf("%s - %d dimensions",longname,n);
    xtitle(stitle,..
        "Number of Simulations","Absolute Error");
    legend(["Halton","Sobol","Faure",..
        "Niederreiter","1/n"]);
endfunction

nprob=17;
longname = intprb_getname(nprob,%t);
[n,p]=intprb_getsetup(nprob);
e = intprb_getexpect(n,p,nprob);
scf();
subplot(2,2,1)
benchLD(nprob,5);
subplot(2,2,2)
benchLD(nprob,10);
subplot(2,2,3)
benchLD(nprob,20);
subplot(2,2,4)
benchLD(nprob,40);
```
The previous script produces the following graphics.
We see that all the Quasi-Monte-Carlo sequences have a convergence rate close to $n^{-1}$. However, the ranking of the methods may depend on the number of simulations and on the dimension. For more than 100 simulations, the Halton sequence is always the least accurate method, whatever the number of dimensions. When the number of dimensions is low (from 5 to 10), the Sobol sequence seems to be the most accurate, while for a larger number of dimensions (from 20 to 40), the situation is less clear.
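A partial version of this benchmark can be sketched in Python. SciPy's `qmc` module provides only the Halton and Sobol engines (it has no Faure or Niederreiter generators), and the integrand below is again Sobol's g-function with zero coefficients as a stand-in for the toolbox's problem #17, so the numbers are illustrative only.

```python
# Comparing the two low-discrepancy sequences available in SciPy (Halton, Sobol)
# on a stand-in integrand: Sobol's g-function with a_i = 0 (exact integral = 1).
import numpy as np
from scipy.stats import qmc

d = 10

def f(x):
    return np.prod(np.abs(4.0 * x - 2.0), axis=1)   # exact integral = 1

final_err = {}
for name, sampler in [("halton", qmc.Halton(d=d, scramble=True, seed=1)),
                      ("sobol",  qmc.Sobol(d=d, scramble=True, seed=1))]:
    errs = []
    for n in [2**8, 2**12, 2**16]:
        sampler.reset()               # restart the sequence for each sample size
        errs.append(abs(f(sampler.random(n)).mean() - 1.0))
    final_err[name] = errs[-1]
    print(name, ["%.2e" % e for e in errs])
```

Both sequences are scrambled here (SciPy's default randomization), so the absolute errors are not directly comparable with the unscrambled Scilab runs above, but the roughly $n^{-1}$ decay of the error with the sample size is visible for both engines.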
In the following script, we test several variants of the Halton sequence: the Leaped Halton sequence (Kocis and Whiten), the Scrambled Halton sequence (Kocis and Whiten), and the Reverse Halton sequence (Vandewoestyne and Cools).
```scilab
function benchLD(nprob, n)
    npoints=2^(1:17);
    colmat=["bo-","ro-","go-","ko-"];
    ldsmat=["halton","halton-leaped",..
        "halton-scrambled","reversehalton"];
    for ilds=1:size(ldsmat,"*")
        abserrLDS=[];
        for callf = npoints
            integr=intprb_getcrudeld(nprob,n,p,callf,ldsmat(ilds));
            abserrLDS($+1)=abs(integr-e);
        end
        plot(npoints,abserrLDS,colmat(ilds));
    end
    plot(npoints,1 ./npoints,"b-");
    a=gca();
    a.log_flags="lln";
    stitle=msprintf("%s - %d dimensions",longname,n);
    xtitle(stitle,..
        "Number of Simulations","Absolute Error");
    legend(["Halton","Halton Leaped","Halton Scrambled",..
        "Reverse Halton","1/n"]);
endfunction

nprob=17;
longname = intprb_getname(nprob,%t);
[n,p]=intprb_getsetup(nprob);
e = intprb_getexpect(n,p,nprob);
scf();
subplot(2,2,1)
benchLD(nprob,5);
subplot(2,2,2)
benchLD(nprob,10);
subplot(2,2,3)
benchLD(nprob,20);
subplot(2,2,4)
benchLD(nprob,40);
```
The previous script produces the following graphics.
Although the Reverse Halton sequence is competitive when the number of dimensions is low (s=5), its performance deteriorates as the number of dimensions increases, and it cannot be distinguished from the plain Halton sequence in large dimensions (s=20 and s=40). The best sequences seem to be the Leaped Halton and the Scrambled Halton sequences, with a small advantage for the Leaped Halton, although the difference becomes less clear as the number of dimensions increases.