

Overview

An overview of the Accsum toolbox.

Purpose

The goal of this toolbox is to provide accurate algorithms to compute sums. We consider the sum of a given dataset x, that is, s = x(1)+x(2)+...+x(n).

These algorithms may be required to manage datasets which are ill-conditioned with respect to the sum function, which happens when the data vary widely in magnitude and in sign. Such sums are very sensitive to small changes in the input. In this case, the "sum" function of Scilab is not appropriate and may produce results which have only a few significant digits, or no significant digit at all. Users may consider the condnb module and its condnb_sumcond function to compute the condition number of a particular sum. See http://atoms.scilab.org/toolboxes/condnb for details.
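For reference, the condition number of a sum is the ratio sum(|x(i)|) / |sum(x(i))|; a dataset is ill-conditioned with respect to the sum when this ratio is large. The following minimal sketch computes it directly (the sumcond name is hypothetical; the condnb module provides the equivalent condnb_sumcond function):

// Condition number of the sum of x: a large value
// indicates an ill-conditioned sum.
function c = sumcond ( x )
    c = sum(abs(x)) / abs(sum(x))
endfunction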

The module is mainly based on the book "Accuracy and Stability of Numerical Algorithms" by Nicholas J. Higham.

In order to test the algorithms on practical datasets, we include a dataset created by Yun Helen He and Chris H.Q. Ding. This dataset is provided in the "etaana.dat" file of the "demos" directory.

"The SSH variable is a two-dimensional see surface volume (integrated sea surface area times sea surface height) distributed among multiple processors. At each time step, the global summation of the sea surface volume of each model grid is needed in order to calculate the average sea surface height. The absolute value of the data itself is very large (in the order of 10^10 to 10^15), with different signs, while the result of the global summation is only of order of 1. Running the model in double precision with different number of processors generate very different global summations, ranging from -100 to 100, making the simulation results totally meaningless. [...] The 2D array is dimensioned as ssh(120,64), with a total of 7860 double precision numbers."

Other datasets are provided in this module, based on examples created by Wilkinson, Higham and Priest.

The toolbox is based on macros and compiled source code.

Quick start

The accsum_fdcs function provides a doubly self-compensated summation algorithm. The data must be sorted by decreasing magnitude; to do this, we use the accsum_order function with order=5. The accsum_fdcs function is based on compiled source code, so it is fast even for relatively large datasets.

path = accsum_getpath();
filename = fullfile(path, "demos", "etaana.dat");
x = fscanfMat(filename);    // read the He-Ding dataset
e = 0.357985839247703552;   // expected value of the sum
x = accsum_order(x, 5);     // sort by decreasing magnitude
s = accsum_fdcs(x)          // doubly self-compensated sum
abs(s - e) / e              // relative error
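For reference, the following is a sketch of the doubly compensated summation recurrence as described by Priest and Higham. It assumes the input is already sorted by decreasing magnitude, and is only an illustration of the method, not the compiled implementation used by accsum_fdcs (the dcsum name is hypothetical):

// Doubly compensated summation (Priest).
// Assumes abs(x(1)) >= abs(x(2)) >= ... >= abs(x(n)).
function s = dcsum ( x )
    n = size(x, "*")
    s = x(1)
    c = 0
    for k = 2 : n
        y = c + x(k)       // add the new term to the correction
        u = x(k) - (y - c) // rounding error in forming y
        t = y + s          // add y to the running sum
        v = y - (t - s)    // rounding error in forming t
        z = u + v          // total error of this step
        s = t + z          // fold the error back into the sum
        c = z - (s - t)    // updated correction term
    end
endfunction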

The accsum_fcompsum function uses a (simply) compensated summation algorithm. It does not require an ordered input and needs fewer floating point operations. On the current dataset, however, the simply compensated algorithm does not perform sufficiently well.

x = fscanfMat(filename);    // reuse the same dataset
e = 0.357985839247703552;   // expected value of the sum
s = accsum_fcompsum(x)      // simply compensated sum
abs(s - e) / e              // relative error
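For comparison, here is a sketch of the classical compensated summation recurrence due to Kahan, which corresponds to the simply compensated approach (the compsum name is hypothetical, and whether accsum_fcompsum uses exactly this variant is an assumption):

// Compensated summation (Kahan): the correction c captures
// the rounding error of each addition and is re-injected
// into the next term.
function s = compsum ( x )
    s = 0
    c = 0
    for k = 1 : size(x, "*")
        y = x(k) - c    // apply the correction to the new term
        t = s + y
        c = (t - s) - y // rounding error of s + y
        s = t
    end
endfunction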

Authors

2010 - 2011 - DIGITEO - Michael Baudin

Acknowledgements

Licence

This toolbox is distributed under the CeCILL license.

Bibliography

"Stability and numerical accuracy of algorithms", Nicolas Higham

"Handbook of Floating Point Computations", Muller et al

https://hpcrd.lbl.gov/SCG/ocean/NRS/ECMWF/img14.htm

https://hpcrd.lbl.gov/SCG/ocean/NRS/SCSsum.F

"On properties of floating point arithmetics: numerical stability and the cost of accurate computations", Douglas Priest, 1992

"Using Accurate Arithmetics to Improve Numerical Reproducibility and Stability in Parallel Applications". Yun He and Chris H.Q. Ding. Journal of Supercomputing, Vol.18, Issue 3, 259-277, March 2001. Also Proceedings of International Conference on Supercomputing (ICS'00), May 2000, 225-234.

