
Unconstrained Optimization Problems Toolbox > uncprb_getgrdfd

uncprb_getgrdfd

Compute the gradient by finite differences.

Calling Sequence

gfd = uncprb_getgrdfd ( n , m , x , nprob )
gfd = uncprb_getgrdfd ( n , m , x , nprob , gstep )
gfd = uncprb_getgrdfd ( n , m , x , nprob , gstep , gorder )

Parameters

n:

the number of variables, i.e. the size of x

m:

the number of functions, i.e. the size of fvec

x:

an n x 1 matrix of doubles, the point where the gradient is computed

nprob:

the problem number

gstep:

the step to use for the finite differences. If gstep=[], the default step is used. The default step depends on the order gorder and on the machine precision %eps.

gorder:

the order of the finite difference formula. If gorder=[], the default order is used.

gfd:

a 1 x n matrix of doubles, the gradient, i.e. gfd(j)=df(x)/dx(j), for j=1, ..., n
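As a rough guide, the default step typically balances truncation error against rounding error. The values below are a sketch of a common heuristic (the one used by Scilab's derivative function), not necessarily the exact defaults of this toolbox:

```scilab
// Plausible default steps per order (assumption: the usual
// truncation vs. rounding trade-off, as in Scilab's derivative).
h2 = %eps^(1/3)   // order 2 step, about 6.1e-6
h4 = %eps^(1/4)   // order 4 step, about 1.2e-4
```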

Description

Uses finite differences to compute the gradient of the objective function. Does not exploit the structure of the problem to compute the differences.
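For reference, the order-2 and order-4 central difference formulas can be sketched as follows (a minimal sketch, assuming uncprb_getobjfcn(n,m,x,nprob) returns the objective function value; the toolbox's actual implementation may differ):

```scilab
// Sketch: central finite differences for the gradient.
// Assumption: uncprb_getobjfcn(n,m,x,nprob) returns f(x).
function gfd = grdfd_sketch ( n , m , x , nprob , h , gorder )
    gfd = zeros(1,n)
    for j = 1 : n
        e = zeros(n,1)
        e(j) = 1
        fp = uncprb_getobjfcn ( n , m , x + h*e , nprob )
        fm = uncprb_getobjfcn ( n , m , x - h*e , nprob )
        if ( gorder == 2 ) then
            // Order 2: truncation error is O(h^2).
            gfd(j) = (fp - fm) / (2*h)
        else
            // Order 4: truncation error is O(h^4).
            fp2 = uncprb_getobjfcn ( n , m , x + 2*h*e , nprob )
            fm2 = uncprb_getobjfcn ( n , m , x - 2*h*e , nprob )
            gfd(j) = (8*(fp - fm) - (fp2 - fm2)) / (12*h)
        end
    end
endfunction
```

Each component of the gradient requires 2 function evaluations at order 2 and 4 evaluations at order 4, so the order-4 formula is twice as expensive but more accurate for a well-chosen step.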

Examples

// Get gfd at x0 for Rosenbrock's test case
nprob = 1
[n,m,x0]=uncprb_getinitf(nprob)
gfd = uncprb_getgrdfd ( n , m , x0 , nprob )
// Compare with exact gradient
g = uncprb_getgrdfcn (n,m,x0,nprob)
norm(g-gfd)/norm(g)
// Set the step
gfd = uncprb_getgrdfd ( n , m , x0 , nprob , 1.e-1 )
// Set the step and the order
gfd = uncprb_getgrdfd ( n , m , x0 , nprob , 1.e-1 , 4 )
// Set the order (use default step)
gfd = uncprb_getgrdfd ( n , m , x0 , nprob , [] , 4 )
// Use default step and order
gfd = uncprb_getgrdfd ( n , m , x0 , nprob , [] , [] )
