Returns the optimum of the given problem.
[fopt,xopt] = uncprb_getopt(nprob,n,m)
nprob : a 1 x 1 matrix of floating point integers, the problem number
n : a 1 x 1 matrix of floating point integers, the number of variables, i.e. the size of x
m : a 1 x 1 matrix of floating point integers, the number of functions, i.e. the size of fvec
fopt : a 1 x 1 matrix of doubles, the minimum of the function
xopt : an n x 1 matrix of doubles, the optimum of the function
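A minimal usage sketch, assuming the toolbox follows the More, Garbow and Hillstrom numbering in which problem #1 is Rosenbrock's function with n = 2 and m = 2 (the exact values shown in the comments are taken from the paper and are assumptions about this toolbox's output):

```scilab
// Query the known optimum of problem #1 (Rosenbrock, assuming MGH numbering).
[fopt, xopt] = uncprb_getopt(1, 2, 2);
// Per the paper, fopt should be 0 and xopt should be [1; 1].
disp(fopt)
disp(xopt)
```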
Returns the optimum, according to the paper "Algorithm 566". For each problem, this corresponds to the data in section (d). For some problems, the optimum is known only for particular values of m and n. If fopt or xopt is unknown, an error is generated.
When the paper mentions several local optima, we return the first one given in the paper. When the paper gives only a limited number of significant digits, so do we.
When the paper writes +/-inf, we set +/-%inf.
When the paper gives the function value, but not the optimum, we return xopt=[]. This allows the function to return something, instead of nothing at all.
There are some cases where the optimum is known only for particular values of n or m. This is why n and m are input arguments. If the optimum is not known for the particular values of n or m which are provided, we generate an error.
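Since the function errors out when the optimum is unknown for the requested sizes, callers may want to guard the call. A sketch, where the problem number and sizes are arbitrary placeholders, not values known to fail or succeed:

```scilab
// Hypothetical inputs: adjust to the problem of interest.
nprob = 20;
n = 6;
m = 31;
try
    [fopt, xopt] = uncprb_getopt(nprob, n, m);
    if isempty(xopt) then
        // Only the function value is known for this problem.
        mprintf("fopt = %e, xopt unknown\n", fopt);
    end
catch
    // The optimum is not known for these values of n and m.
    mprintf("Optimum unknown for nprob=%d, n=%d, m=%d\n", nprob, n, m);
end
```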
We emphasize that, when xopt is unknown, there is no way to check that fopt is correct, that is, that it satisfies f(xopt)=fopt and g(xopt)=0.
All default settings provided by getinitf correspond to the parameters used here. This allows checks to be made with the default settings, when possible.
For problem #20 - Watson - the values of the approximate solutions are taken from Brent, "Algorithms for Minimization with Derivatives", Prentice-Hall, 1973, as reported by John Burkardt in "test_opt", March 2000. We then improved the precision with a full-precision optimization using optim: the gradient is smaller at the resulting point.
For problem #3 - Powell Badly Scaled - a manual optimization was performed and the results are reported to full precision. We claim that the gradient is exactly zero at the optimum.