
4-Advanced Topics

Advanced information about using MPScilab

This section is organized more or less as a FAQ until proper documentation is written about these subjects.

What are the size and precision limitations?

The minimum precision is 2 bits, independent of the architecture. The maximum precision of an MPS matrix depends on the MPFR and GMP libraries linked with the toolbox. Generally, on a 32 bit system the maximum precision is around the maximum positive value of a 32 bit integral type, i.e. ~2^31. On a 64 bit system the theoretical maximum of ~2^63 bits is far beyond what modern computer systems can store.

Internally the dimensions of an MPS matrix are stored as an integral type, so the theoretical limit is roughly 2^32 * 2^32 elements on a 32 bit system and ~2^63 * 2^63 on a 64 bit system. However, a matrix with 2^32 elements will not fit within the virtual memory limit of a 32 bit system. In practice, the maximum size of a matrix depends on the system's main memory, as it is not currently possible to keep operands on disk. Of course, the maximum matrix size that will fit in memory also depends on the precision of its elements. Null matrices are not supported.
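As a rough illustration, a 1000 x 1000 matrix at 128 bits of precision needs at least 1000 * 1000 * 128 / 8 = 16 MB for the significands alone, before any per-element bookkeeping overhead added by MPFR.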

NOTE: Regardless of the previously discussed limits, Scilab converts integer constants to doubles. Consequently, the size and precision arguments are limited to the integers that can be exactly represented in a double, which limits the precision and size that can be accurately used to 2^53.
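A quick way to observe this limit in plain Scilab (no MPScilab functions involved):

// doubles represent integers exactly only up to 2^53 :
2^53 == 2^53 + 1    // T : the + 1 is lost to rounding
2^52 == 2^52 + 1    // F : both values are still exactly representable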

What are the advantages of the different function call types?

As discussed in the introduction, most mps functions can be used in different ways with respect to handling the result argument. The most intuitive form is the overloaded functions, such as the addition operator (+) or the sin() function. The overloaded functions have the advantage of being simple and intuitive. However, they are slower, and not every mps function has an overloaded equivalent. The main reason the overloaded functions are slower is that they require the creation of a new mps matrix to hold the result at every call. For example, let's take a simple addition:

A = mps_init2( 2, 128 );   // create a 128-bit mps scalar A
A = A + A;                 // a new matrix is allocated to hold the result

In this example a new mps scalar A is created using mps_init2() with a precision of 128 bits. The newly created scalar is then added to itself. Here a new matrix is created to hold the new value of A. The old matrix is no longer visible and will be freed by the garbage collector the next time a new matrix is allocated. This is fine for most calculations, but if it happens in a loop, for example, a new matrix will be created at every iteration.
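To make the cost concrete, here is a minimal sketch of the pattern to watch out for (the loop body is illustrative, not a meaningful computation):

A = mps_init2( 2, 128 );
for i = 1:1000
    A = A + A;    // every iteration allocates a fresh matrix for the result
end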

The second method is to call the mps function with the result operand on the left side of the assignment, as in the next example:

A = mps_init2( 2, 128 );
A = mps_add( A, A );       // still allocates a new matrix for the result

This call format also creates a new mps matrix at each call, but it is somewhat faster than the overloaded case since it bypasses Scilab's operator overloading machinery. If speed is of the utmost importance, there is a third method:

A = mps_init2( 2, 128 );
mps_add( A, A, A );        // the result is written into A's existing storage

In this last example the storage space of A is reused to receive the result, which prevents the implicit creation of a result operand. It does, however, require an existing mps matrix of the correct size. Note that the precision of the result can differ from that of the operands if needed. This calling method is also useful when dealing with very large matrices, as it helps minimize memory usage.
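Applied to the earlier loop sketch, this last form keeps the inner loop free of allocations (built from the same calls shown above):

A = mps_init2( 2, 128 );
for i = 1:1000
    mps_add( A, A, A );    // reuses A's storage; no new matrix per iteration
end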
