This document introduces the CMA-ES optimization toolbox in Scilab.
The CMA-ES is a meta-heuristic optimization method created by Nikolaus Hansen,
initially as a (1,lambda)-ES in 1996 (mu = 1), later extended to a
(mu,lambda)-ES in 1997, and it is based on a Covariance Matrix Adaptation technique.
This direct search method does not require any derivative of the objective function.
The CMA-ES (Covariance Matrix Adaptation Evolution Strategy) is an
algorithm for difficult non-linear non-convex optimization problems in
continuous domain. The CMA-ES is typically applied to unconstrained or
bound-constrained optimization problems, with search space dimensions between
three and a hundred. The method should be applied if derivative-based methods,
e.g. quasi-Newton BFGS or conjugate gradient, (supposedly) fail due to a rugged
search landscape (e.g. discontinuities, sharp bends or ridges, noise, local
optima, outliers). If second-order derivative-based methods are successful,
they are usually faster than the CMA-ES.
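To make the Covariance Matrix Adaptation idea concrete, here is a minimal Python sketch of the core (mu/mu_w, lambda) loop, following Hansen's standard default parameter settings. This is an illustrative sketch only, not this toolbox's Scilab implementation; the function name `cmaes` and its signature are assumptions for the example.

```python
import numpy as np

def cmaes(f, x0, sigma, iters=300, seed=None):
    """Minimal (mu/mu_w, lambda)-CMA-ES sketch (illustrative, not the toolbox code)."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    lam = 4 + int(3 * np.log(n))              # offspring number (lambda)
    mu = lam // 2                             # number of selected parents (mu)
    w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    w /= w.sum()                              # recombination weights
    mueff = 1.0 / np.sum(w**2)                # variance-effective selection mass
    # strategy parameters (Hansen's standard defaults)
    cc = (4 + mueff / n) / (n + 4 + 2 * mueff / n)
    cs = (mueff + 2) / (n + mueff + 5)
    c1 = 2 / ((n + 1.3)**2 + mueff)
    cmu = min(1 - c1, 2 * (mueff - 2 + 1 / mueff) / ((n + 2)**2 + mueff))
    ds = 1 + 2 * max(0.0, np.sqrt((mueff - 1) / (n + 1)) - 1) + cs
    chin = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n**2))  # E||N(0,I)||
    m = np.asarray(x0, dtype=float)           # distribution mean
    C = np.eye(n)                             # covariance matrix
    ps, pc = np.zeros(n), np.zeros(n)         # evolution paths
    for g in range(1, iters + 1):
        d2, B = np.linalg.eigh(C)
        D = np.sqrt(np.maximum(d2, 1e-20))
        # sample lambda candidates: x = m + sigma * B*D*z, z ~ N(0, I)
        z = rng.standard_normal((lam, n))
        y = (z * D) @ B.T
        x = m + sigma * y
        idx = np.argsort([f(xi) for xi in x])[:mu]
        yw = w @ y[idx]                       # weighted recombination step
        m = m + sigma * yw                    # move the mean
        # cumulative step-size adaptation (ps accumulates C^(-1/2) * yw)
        ps = (1 - cs) * ps + np.sqrt(cs * (2 - cs) * mueff) * (B @ ((B.T @ yw) / D))
        hsig = (np.linalg.norm(ps) / np.sqrt(1 - (1 - cs)**(2 * g)) / chin
                < 1.4 + 2 / (n + 1))
        pc = (1 - cc) * pc + hsig * np.sqrt(cc * (2 - cc) * mueff) * yw
        # covariance matrix adaptation: rank-one plus rank-mu update
        C = ((1 - c1 - cmu) * C
             + c1 * (np.outer(pc, pc) + (1 - hsig) * cc * (2 - cc) * C)
             + cmu * (y[idx].T * w) @ y[idx])
        sigma *= np.exp((cs / ds) * (np.linalg.norm(ps) / chin - 1))
    return m
```

For example, minimizing the 4-D sphere function `f(v) = sum(v**2)` from the starting point `[1, 1, 1, 1]` with initial step size 0.3 drives the mean to the optimum at the origin within a few hundred generations.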
This toolbox implements the original CMA-ES algorithm in two ways:
* the functional call (similar to Scilab fminsearch, but not exactly the same calling sequence)
* the object oriented call sequence (as described by Yann Collette)
The functional call additionally supports re-execution and population size management.
See https://www.lri.fr/~hansen/cmaesintro.html for more details.