ATOMS : Optkelley details

Optkelley

Scilab software for Iterative Methods for Optimization
(2119 downloads for this version - 8854 downloads for all versions)
Details
Version
1.1
A more recent valid version exists: 1.1.3
Authors
C. T. Kelley
Yann Collette
Michael Baudin
Owner Organization
North Carolina State University and Consortium Scilab
Maintainers
Michael Baudin
Allan Cornet
Category
License
Creation Date
June 1, 2010
Source created on
Scilab 5.2.x
Binaries available on
Scilab 5.2.x:
Windows 64-bit Windows 32-bit Linux 64-bit Linux 32-bit MacOSX
Install command
--> atomsInstall("optkelley")
Description
Purpose

These Scilab files are implementations of the algorithms from the book
'Iterative Methods for Optimization', published by SIAM, by C. T. Kelley. The
book, which describes the algorithms, is available from SIAM (service@siam.org).


This toolbox provides the following algorithms:
 * optkelley_bfgswopt: Steepest descent/BFGS with polynomial line search.
 * optkelley_cgtrust: Steihaug Newton-CG trust region algorithm.
 * optkelley_diffhess: Compute a forward difference Hessian.
 * optkelley_dirdero: Finite difference directional derivative.
 * optkelley_gaussn: Damped Gauss-Newton with Armijo rule.
 * optkelley_gradproj: Gradient projection with Armijo rule, simple linesearch.
 * optkelley_hooke: Hooke-Jeeves optimization.
 * optkelley_imfil: Unconstrained implicit filtering.
 * optkelley_levmar: Levenberg-Marquardt.
 * optkelley_mds: Multidirectional search.
 * optkelley_nelder: Nelder-Mead optimizer; no tie-breaking rule other than
Scilab's gsort.
 * optkelley_ntrust: Dogleg trust region.
 * optkelley_polyline: Polynomial line search.
 * optkelley_polymod: Cubic/quadratic polynomial linesearch.
 * optkelley_projbfgs: Projected BFGS with Armijo rule, simple linesearch.
 * optkelley_simpgrad: Simplex gradient.
 * optkelley_steep: Steepest descent with Armijo rule.


The optimization codes have the calling convention
[f,g] = objective(x)
which returns both the objective function value f and the gradient vector g;
g is expected to be a column vector. The Nelder-Mead, Hooke-Jeeves, Implicit
Filtering, and MDS codes do not ask for a gradient.
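As a sketch of this calling convention, the following Scilab snippet defines the Rosenbrock function in the required [f,g] = objective(x) form and passes it to optkelley_steep. The optkelley_steep argument list shown here (initial point, objective, tolerance, iteration limit) is an assumption based on Kelley's original MATLAB steep.m; check the toolbox help pages for the exact signature.

```scilab
// Rosenbrock objective in the toolbox's expected form:
// returns the function value f and the gradient g as a column vector.
function [f, g] = rosenbrock(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2
    g = [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); ..
          200*(x(2) - x(1)^2)]
endfunction

// Hypothetical call, assuming a steep.m-style signature
// (x0, objective, gradient tolerance, maximum iterations).
x0 = [-1.2; 1];
[x, histout] = optkelley_steep(x0, rosenbrock, 1.d-6, 100);
disp(x)  // should approach the minimizer [1; 1]
```

The derivative-free codes (Nelder-Mead, Hooke-Jeeves, implicit filtering, MDS) would take an objective returning only f, with no gradient output.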

Authors

C. T. Kelley, 1999

Yann Collette, 2008

Michael Baudin, DIGITEO, 2010

History

These files have been ported into Scilab by Yann Collette in 2008.

In 2010, Michael Baudin updated the module to Scilab 5.2, formatting the help
pages and fixing bugs. The fixes include, in the optkelley_nelder function,
replacing the call to sort with a call to mtlb_sort, and replacing sum with
mtlb_sum. The functions are now consistently named. Several unit tests and
demos have been added. Error messages now use mprintf, with the name of the
routine first.
            
Files (2)
[102.20 kB]
Source code archive

[182.47 kB]
OS-independent binary for Scilab 5.2.x
Binary version
Automatically generated by the ATOMS compilation chain
