Solves a nonlinearly constrained optimization problem.
x = fmincon(fun,x0)
x = fmincon(fun,x0,A,b)
x = fmincon(fun,x0,A,b,Aeq,beq)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon)
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon ( ... )
fun : a function, the function to minimize. See below for the complete specifications.
x0 : an n x 1 or 1 x n matrix of doubles, where n is the number of variables. The initial guess for the optimization algorithm.
A : a nil x n matrix of doubles, where n is the number of variables and nil is the number of linear inequalities. If A==[] and b==[], it is assumed that there are no linear inequality constraints. If (A==[] & b<>[]), fmincon generates an error (the same happens if (A<>[] & b==[])).
b : a nil x 1 matrix of doubles, where nil is the number of linear inequalities.
Aeq : a nel x n matrix of doubles, where n is the number of variables and nel is the number of linear equalities. If Aeq==[] and beq==[], it is assumed that there are no linear equality constraints. If (Aeq==[] & beq<>[]), fmincon generates an error (the same happens if (Aeq<>[] & beq==[])).
beq : a nel x 1 matrix of doubles, where nel is the number of linear equalities.
lb : an n x 1 or 1 x n matrix of doubles, where n is the number of variables. The lower bound for x. If lb==[], then the lower bound is automatically set to -inf.
ub : an n x 1 or 1 x n matrix of doubles, where n is the number of variables. The upper bound for x. If ub==[], then the upper bound is automatically set to +inf.
nonlcon : a function, the nonlinear constraints. See below for the complete specifications.
options : an optional struct, as provided by optimset.
Searches for the minimum of a constrained optimization problem specified by : find the minimum of f(x) such that c(x)<=0, ceq(x)=0, A*x<=b, Aeq*x=beq and lb<=x<=ub. Currently, fmincon uses ipopt as the actual solver.
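For example, here is a minimal sketch of a linearly constrained problem, where the objective quadfun and the data are made up for the illustration (notice that the linear inequality path is still marked as untested in the TODO list below) :

// Minimize (x(1)-1)^2 + (x(2)-2)^2 such that x(1) + x(2) <= 1
function f=quadfun(x)
    f = (x(1)-1)^2 + (x(2)-2)^2
endfunction
x0 = [0 0];
A = [1 1];
b = 1;
// The constraint is active at the solution :
// analytically, xopt = [0 1] and fopt = 2
[x,fval] = fmincon ( quadfun , x0 , A , b )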
The objective function must have the following header :
f = objfun ( x )
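For example, the following objective computes Rosenbrock's function (this function is only an illustration and is not used in the examples below) :

function f=rosenbrock(x)
    // Rosenbrock's "banana" function
    f = 100*(x(2)-x(1)^2)^2 + (1-x(1))^2
endfunction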
By default, fmincon uses finite differences with order 2 formulas and an optimal step size in order to compute a numerical gradient of the objective function. If we can provide exact gradients, we should do so, since this improves the convergence speed of the optimization algorithm. In order to use exact gradients, we must update the header of the objective function to :
[f,G] = objfungrad ( x )
options = optimset("GradObj","on");
The constraint function must have the following header :
[c, ceq] = confun(x)
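For example, the following sketch enforces the nonlinear inequality x(1)^2 + x(2)^2 <= 2 and the nonlinear equality x(1)*x(2) = 1 (the function name circlecon is made up for the illustration; notice that the nonlinear equality path is still marked as untested in the TODO list below) :

function [c, ceq]=circlecon(x)
    // Nonlinear inequality constraint : x(1)^2 + x(2)^2 - 2 <= 0
    c = x(1)^2 + x(2)^2 - 2
    // Nonlinear equality constraint : x(1)*x(2) - 1 = 0
    ceq = x(1)*x(2) - 1
endfunction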
By default, fmincon uses finite differences with order 2 formulas and an optimal step size in order to compute a numerical gradient of the constraint function. In order to use exact gradients, we must update the header of the constraint function to :
[c,ceq,DC,DCeq] = confungrad(x)
options = optimset("GradConstr","on");
By default, fmincon uses an L-BFGS formula to compute an approximation of the Hessian of the Lagrangian. Notice that this is different from Matlab's fmincon, whose default is the BFGS formula.
TODO : describe the output arguments
TODO : test with A, b
TODO : test with Aeq, beq
TODO : test with ceq
TODO : avoid using global for ipopt_data
TODO : implement Display option
TODO : implement FinDiffType option
TODO : implement MaxFunEvals option
TODO : implement DerivativeCheck option
TODO : implement MaxIter option
TODO : implement OutputFcn option
TODO : implement PlotFcns option
TODO : implement TolFun option
TODO : implement TolCon option
TODO : implement TolX option
TODO : implement Hessian option
// A basic case :
// we provide only the objective function and the nonlinear constraint
// function : we let fmincon compute the gradients by numerical
// derivatives.
function f=objfun(x)
    f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
endfunction
function [c, ceq]=confun(x)
    // Nonlinear inequality constraints
    c = [
        1.5 + x(1)*x(2) - x(1) - x(2)
        -x(1)*x(2) - 10
    ]
    // Nonlinear equality constraints
    ceq = []
endfunction
// The initial guess
x0 = [-1,1];
// The expected solution : only 4 digits are guaranteed
xopt = [-9.547345885974547 1.047408305349257]
fopt = 0.023551460139148
// Run fmincon
[x,fval,exitflag,output,lambda,grad,hessian] = fmincon ( objfun,x0,[],[],[],[],[],[], confun )
// A case where we provide the gradient of the objective
// function and the Jacobian matrix of the constraints.
// The objective function and its gradient
function [f, G]=objfungrad(x)
    [lhs,rhs]=argn()
    f = exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1)
    if ( lhs > 1 ) then
        G = [
            f + exp(x(1)) * (8*x(1) + 4*x(2))
            exp(x(1))*(4*x(1)+4*x(2)+2)
        ]
    end
endfunction
// The nonlinear constraints and the Jacobian matrix of the constraints
function [c, ceq, DC, DCeq]=confungrad(x)
    // Inequality constraints
    c(1) = 1.5 + x(1) * x(2) - x(1) - x(2)
    c(2) = -x(1) * x(2) - 10
    // No nonlinear equality constraints
    ceq = []
    [lhs,rhs]=argn()
    if ( lhs > 2 ) then
        // DC(:,i) = gradient of the i-th constraint
        // DC = [
        //   Dc1/Dx1 Dc2/Dx1
        //   Dc1/Dx2 Dc2/Dx2
        // ]
        DC = [
            x(2)-1, -x(2)
            x(1)-1, -x(1)
        ]
        DCeq = []
    end
endfunction
// Test with both gradient of objective and gradient of constraints
options = optimset("GradObj","on","GradConstr","on");
// The initial guess
x0 = [-1,1];
// The expected solution : only 4 digits are guaranteed
xopt = [-9.547345885974547 1.047408305349257]
fopt = 0.023551460139148
// Run fmincon
[x,fval,exitflag,output] = fmincon(objfungrad,x0,[],[],[],[],[],[], confungrad,options)
// A case where we set the bounds of the optimization.
// By default, the bounds are set to infinity.
function f=objfun(x)
    f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1)
endfunction
function [c, ceq]=confun(x)
    // Nonlinear inequality constraints
    c = [
        1.5 + x(1)*x(2) - x(1) - x(2)
        -x(1)*x(2) - 10
    ]
    // Nonlinear equality constraints
    ceq = []
endfunction
// The initial guess
x0 = [-1,1];
// The expected solution
xopt = [0 1.5]
fopt = 8.5
// Make sure that x(1)>=0 and x(2)>=0
lb = [0,0];
ub = [];
// Run fmincon
[x,fval] = fmincon ( objfun , x0,[],[],[],[],lb,ub,confun)