Significantly nonzero values for any of these, after the solver has terminated, indicate that the corresponding constraint is active. Significance is judged in the first instance by the relative scale of each value compared with the smallest among them.
Byrd R H, Gilbert J Ch and Nocedal J (2000) A trust region method based on interior point techniques for nonlinear programming Mathematical Programming 89 149–185
Byrd R H, Liu G and Nocedal J (1997) On the local behavior of an interior point method for nonlinear programming Numerical Analysis (eds D F Griffiths and D J Higham) Addison–Wesley
Conn A R, Gould N I M, Orban D and Toint Ph L (2000) A primal-dual trust-region algorithm for non-convex nonlinear programming Mathematical Programming 87 (2) 215–249
Gould N I M, Orban D, Sartenaer A and Toint Ph L (2001) Superlinear convergence of primal-dual interior point algorithms for nonlinear programming SIAM Journal on Optimization 11 (4) 974–1002
Hogg J D and Scott J A (2010) An indefinite sparse direct solver for large problems on multicore machines RAL Technical Report RAL-TR-2010-011
Hogg J D and Scott J A (2011) HSL_MA97: a bit-compatible multifrontal code for sparse symmetric systems RAL Technical Report RAL-TR-2011-024
Wächter A and Biegler L T (2006) On the implementation of a primal-dual interior point filter line search algorithm for large-scale nonlinear programming Mathematical Programming 106(1) 25–57
Yamashita H (1998) A globally convergent primal-dual interior-point method for constrained optimization Optimization Methods and Software 10 443–469
There are four sections printed to the primary output with the default settings (level ): a derivative check, a header, an iteration log and a summary. At higher levels more information will be printed, including any internal IPOPT options that have been changed from their default values.
The second line reports on a discrepancy for the partial derivative of the first constraint with respect to the fourth variable. If the indicator v is absent, the discrepancy refers to a component that had not been included in the sparsity structure, in which case the nonzero structure of the derivatives should be corrected. Mistakes in the first derivatives should be corrected before attempting to correct mistakes in the second derivatives.
The third line reports on a discrepancy in a second derivative of the objective function, differentiated with respect to the first variable, twice.
The fourth line reports on a discrepancy in a second derivative of the first constraint, differentiated with respect to the first and third variables.
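To illustrate the kind of comparison behind these messages, the following sketch checks a user-supplied gradient against a forward-difference estimate and reports components whose relative discrepancy exceeds a tolerance. The objective, gradient, step size and tolerance here are hypothetical and chosen only for demonstration; the solver's own checker may use different heuristics.

    import numpy as np

    def objfun(x):
        # Hypothetical objective used only to demonstrate the check.
        return x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2]

    def objgrd(x):
        # User-supplied analytic gradient of objfun (correct here on purpose).
        return np.array([
            x[3] * (2.0 * x[0] + x[1] + x[2]),
            x[0] * x[3],
            x[0] * x[3] + 1.0,
            x[0] * (x[0] + x[1] + x[2]),
        ])

    def check_gradient(x, h=1.0e-7, tol=1.0e-4):
        # Compare each analytic component with a forward difference and
        # report the ones whose relative discrepancy exceeds tol.
        g = objgrd(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += h
            fd = (objfun(xp) - objfun(x)) / h
            rel_err = abs(g[i] - fd) / max(1.0, abs(fd))
            if rel_err > tol:
                print(f"grad_f[{i}] = {g[i]:.8e} ~ {fd:.8e}  (rel. err. {rel_err:.2e})")

    check_gradient(np.array([1.0, 5.0, 5.0, 1.0]))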
If the print level is high enough to include the iteration log, the status of each iteration is condensed to one line. The line shows:
iter
The current iteration count, which includes regular iterations and iterations during the restoration phase. If the algorithm is in the restoration phase, the letter r is appended to the iteration number. Iteration number 0 represents the starting point. This quantity is also available to the monitoring function monit.
objective
The unscaled objective value at the current point (given the current NLP scaling). During the restoration phase, this value remains the unscaled objective value for the original problem. This quantity is also available to the monitoring function monit.
inf_pr
The unscaled constraint violation at the current point (given the current NLP scaling), measured as the infinity-norm (max) of the (unscaled) constraints. During the restoration phase, this value remains the constraint violation of the original problem at the current point. This quantity is also available to the monitoring function monit.
inf_du
The scaled dual infeasibility at the current point (given the current NLP scaling). This quantity measures the infinity-norm (max) of the internal dual infeasibility (Eq. (4a) in the implementation paper Wächter and Biegler (2006)), including inequality constraints reformulated using slack variables and NLP scaling. During the restoration phase, this is the value of the dual infeasibility for the restoration phase problem. This quantity is also available to the monitoring function monit.
lg(mu)
log_10 of the value of the barrier parameter μ. The barrier parameter itself is also available to the monitoring function monit.
||d||
The infinity norm (max) of the primal step (for the original variables x and the internal slack variables s). During the restoration phase, this value includes the values of the additional restoration-phase variables (see Eq. (30) in Wächter and Biegler (2006)). This quantity is also available to the monitoring function monit.
lg(rg)
log_10 of the value of the regularization term for the Hessian of the Lagrangian in the augmented system (see Eq. (26) and Section 3.1 in Wächter and Biegler (2006)). A dash (–) indicates that no regularization was done. The regularization term itself is also available to the monitoring function monit.
alpha_pr
The step size for the primal variables (Eq. (14a) in Wächter and Biegler (2006)). This quantity is also available to the monitoring function monit. The number is usually followed by a character giving additional diagnostic information on the step acceptance criterion:
f    f-type iteration in the filter method without second-order correction
F    f-type iteration in the filter method with second-order correction
h    h-type iteration in the filter method without second-order correction
H    h-type iteration in the filter method with second-order correction
k    penalty value unchanged in merit function method without second-order correction
K    penalty value unchanged in merit function method with second-order correction
n    penalty value updated in merit function method without second-order correction
N    penalty value updated in merit function method with second-order correction
R    restoration phase just started
w    in watchdog procedure
s    step accepted in soft restoration phase
t/T  tiny step accepted without line search
r    some previous iterate restored
ls
The number of backtracking line search steps (does not include second-order correction steps). This quantity is also available to the monitoring function monit.
Note that the step acceptance mechanisms in IPOPT consider the barrier objective function (4), which is usually different from the value reported in the objective column. Similarly, for the purposes of step acceptance, the constraint violation is measured for the internal problem formulation, which includes slack variables for inequality constraints and potentially NLP scaling of the constraint functions. This value, too, is usually different from the value reported in inf_pr. As a consequence, a new iterate might have worse values for both the objective function and the constraint violation as reported in the iteration output, seemingly contradicting the globalization procedure.
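For orientation, the barrier objective referred to in (4) has the standard log-barrier form used by interior point methods; the expression below is written for simple bound constraints only and is an illustration, not necessarily the solver's exact internal formulation:

\[ \varphi_\mu(x) \;=\; f(x) \;-\; \mu \sum_{i:\,l_i>-\infty} \ln(x_i - l_i) \;-\; \mu \sum_{i:\,u_i<\infty} \ln(u_i - x_i). \]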
Note that all these values are also available in the arguments passed to the monitoring function monit.
The output might look as follows:
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 2.6603500E+05 1.55E+02 3.21E+01 -1.0 0.00E+00 - 0.00E+00 0.00E+00 0
1 1.5053889E+05 7.95E+01 1.43E+01 -1.0 1.16E+00 - 4.55E-01 1.00E+00f 1
2 8.9745785E+04 3.91E+01 6.45E+00 -1.0 3.07E+01 - 5.78E-03 1.00E+00f 1
3 3.9878595E+04 1.63E+01 3.47E+00 -1.0 5.19E+00 0.0 2.43E-01 1.00E+00f 1
4 2.7780042E+04 1.08E+01 1.64E+00 -1.0 3.66E+01 - 7.24E-01 8.39E-01f 1
5 2.6194274E+04 1.01E+01 1.49E+00 -1.0 1.07E+01 - 1.00E+00 1.05E-01f 1
6 1.5422960E+04 4.75E+00 6.82E-01 -1.0 1.74E+01 - 1.00E+00 1.00E+00f 1
7 1.1975453E+04 3.14E+00 7.26E-01 -1.0 2.83E+01 - 1.00E+00 5.06E-01f 1
8 8.3508421E+03 1.34E+00 2.04E-01 -1.0 3.96E+01 - 9.27E-01 1.00E+00f 1
9 7.0657495E+03 4.85E-01 9.22E-02 -1.0 5.32E+01 - 1.00E+00 1.00E+00f 1
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
10 6.8359393E+03 1.17E-01 1.28E-01 -1.7 4.69E+01 - 8.21E-01 1.00E+00h 1
11 6.6508917E+03 1.52E-02 1.52E-02 -2.5 1.87E+01 - 1.00E+00 1.00E+00h 1
12 6.4123213E+03 8.77E-03 1.49E-01 -3.8 1.85E+01 - 7.49E-01 1.00E+00f 1
13 6.3157361E+03 4.33E-03 1.90E-03 -3.8 2.07E+01 - 1.00E+00 1.00E+00f 1
14 6.2989280E+03 1.12E-03 4.06E-04 -3.8 1.54E+01 - 1.00E+00 1.00E+00h 1
15 6.2996264E+03 9.90E-05 2.05E-04 -5.7 5.35E+00 - 9.63E-01 1.00E+00h 1
16 6.2998436E+03 0.00E+00 1.86E-07 -5.7 4.55E-01 - 1.00E+00 1.00E+00h 1
17 6.2998424E+03 0.00E+00 6.18E-12 -8.2 2.62E-03 - 1.00E+00 1.00E+00h 1
If , each iteration produces significantly more detailed output, comprising error measures and output from internal operations. The output is reasonably self-explanatory, so it is not described here in detail.
Summary
Once the solver finishes, a detailed summary is produced if the print level is high enough. An example is shown below:
Number of Iterations....: 6
(scaled) (unscaled)
Objective...............: 7.8692659500479623E-01 6.2324586324379867E+00
Dual infeasibility......: 7.9744615766675617E-10 6.3157735687207093E-09
Constraint violation....: 8.3555384833289281E-12 8.3555384833289281E-12
Complementarity.........: 0.0000000000000000E+00 0.0000000000000000E+00
Overall NLP error.......: 7.9744615766675617E-10 6.3157735687207093E-09
Number of objective function evaluations = 7
Number of objective gradient evaluations = 7
Number of equality constraint evaluations = 7
Number of inequality constraint evaluations = 0
Number of equality constraint Jacobian evaluations = 7
Number of inequality constraint Jacobian evaluations = 0
Number of Lagrangian Hessian evaluations = 6
Total CPU secs in IPOPT (w/o function evaluations) = 0.724
Total CPU secs in NLP function evaluations = 0.343
EXIT: Optimal Solution Found.
It starts with the total number of iterations the algorithm went through. Then, five quantities are printed, all evaluated at the termination point: the value of the objective function, the dual infeasibility, the constraint violation, the complementarity and the NLP error.
This is followed by some statistics on the number of calls to user-supplied functions and the CPU time spent in user-supplied functions and in the main algorithm. Lastly, the status at exit is indicated by a short message. Detailed timings of the algorithm are displayed only if Stats Time is set.
9.2
Additional Licensor
Parts of the code for handle_solve_ipopt are distributed according to terms imposed by the Eclipse Public License. Please refer to Library Licensors for further details.
10
Example
This example is based on Problem 73 in Hock and Schittkowski (1981) and involves the minimization of a linear objective function subject to simple bounds on the variables, a nonlinear constraint and linear constraints. The supplied initial point is infeasible. The optimal solution is quoted to five significant figures.
10.1
Example Program
11
Algorithmic Details
handle_solve_ipopt is an implementation of IPOPT (see Wächter and Biegler (2006)) that is fully supported and maintained by NAG. It uses the Harwell packages MA97 or MA86 for the underlying sparse linear algebra factorization: MA86 uses the MC68 approximate minimum degree ordering, while MA97 uses either the MC68 approximate minimum degree ordering or the METIS algorithm. Any issues relating to handle_solve_ipopt should be directed to NAG, who assume all responsibility for the handle_solve_ipopt function and its implementation.
To simplify notation, we describe the method for a problem formulation with equality constraints and simple bounds on the variables. Range constraints can be expressed in this formulation by introducing slack variables (thereby increasing the number of variables) and defining corresponding new equality constraints and bounds, as illustrated below.
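A minimal sketch of this reformulation, using generic names for the quantities involved, is

\[ l_g \le g(x) \le u_g \quad\Longrightarrow\quad g(x) - s = 0, \qquad l_g \le s \le u_g, \]

so each range constraint contributes one additional variable, one equality constraint and one pair of simple bounds.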
The algorithm may be interpreted as a homotopy method applied to the primal-dual equations (6), with the homotopy parameter (the barrier parameter) driven to zero (see, e.g., Byrd et al. (1997) and Gould et al. (2001)). In these equations, a capital letter denotes the diagonal matrix formed from the corresponding lower-case vector, e stands for the vector of all ones of appropriate dimension, and the two sets of multipliers correspond to the Lagrange multipliers for the equality constraints (2) and the bound constraints (3), respectively.
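The system (6)–(8) is not reproduced here; for orientation, in the notation of Wächter and Biegler (2006), and for the simplified formulation with equality constraints c(x) = 0 and bounds x ≥ 0, it takes the standard primal-dual form

\[ \nabla f(x) + \nabla c(x)\,\lambda - z = 0, \qquad c(x) = 0, \qquad XZe - \mu e = 0, \]

where X = diag(x), Z = diag(z), λ denotes the equality-constraint multipliers and z ≥ 0 the bound multipliers. The solver's internal formulation (with general bounds and slack variables) differs in detail.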
Note that equations (6), (7) and (8) with the homotopy parameter set to zero, together with the accompanying feasibility and nonnegativity conditions on the variables and the bound multipliers, are the Karush–Kuhn–Tucker (KKT) conditions for the original problem (1), (2) and (3). These are the first-order optimality conditions for (1), (2) and (3) if constraint qualifications are satisfied (Conn et al. (2000)).
Starting from an initial point supplied in x, handle_solve_ipopt computes an approximate solution to the barrier problem (4) and (5) for a fixed value of the barrier parameter (starting from its default initial value), then decreases the barrier parameter and continues the solution of the next barrier problem from the approximate solution of the previous one.
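The rule by which the barrier parameter is decreased is not stated here; the monotone update described in Wächter and Biegler (2006) (their Eq. (7)) has the form

\[ \mu_{j+1} = \max\Bigl\{ \tfrac{\epsilon_{\mathrm{tol}}}{10},\; \min\bigl\{\kappa_\mu\,\mu_j,\; \mu_j^{\theta_\mu}\bigr\} \Bigr\}, \qquad \kappa_\mu \in (0,1),\ \theta_\mu \in (1,2), \]

which drives the barrier parameter towards zero superlinearly while never letting it fall far below the requested tolerance; the constants actually used by the solver are not documented here.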
A sophisticated overall termination criterion for the algorithm is used to overcome potential difficulties when the Lagrange multipliers become large. This can happen, for example, when the gradients of the active constraints are nearly linearly dependent. The termination criterion is described in detail by Wächter and Biegler (2006); see also Section 11.1 below.
11.1
Stopping Criteria
Using the individual parts of the primal-dual equations (6), (7) and (8), we define the optimality error for the barrier problem as (9), with scaling parameters defined below (not to be confused with the NLP scaling factors described in Section 11.2). Evaluating (9) with the barrier parameter set to zero measures the optimality error for the original problem (1), (2) and (3). The overall algorithm terminates if an approximate solution (including multiplier estimates) satisfying (10) is found, where the threshold in (10) is the user-supplied error tolerance set in the optional parameter Stop Tolerance 1.
Even if the original problem is well scaled, the multipliers might become very large, for example, when the gradients of the active constraints are (nearly) linearly dependent at a solution of (1), (2) and (3). In this case, the algorithm might encounter numerical difficulties satisfying the unscaled primal-dual equations (6), (7) and (8) to a tight tolerance. In order to adapt the termination criteria to handle such circumstances, scaling factors are introduced in (9). In this way, a component of the optimality error is scaled down whenever the average value of the multipliers becomes larger than a fixed threshold. Also note that, in the case where the multipliers diverge, the scaled optimality error can only become small if a Fritz John point for (1), (2) and (3) is approached, or if the primal variables diverge as well. The form of these scaling factors is sketched below.
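For orientation, in the notation of Wächter and Biegler (2006) (again for the simplified formulation with c(x) = 0 and x ≥ 0), the scaled optimality error (9) and the scaling factors take the form

\[ E_\mu(x,\lambda,z) \;=\; \max\left\{ \frac{\|\nabla f(x) + \nabla c(x)\,\lambda - z\|_\infty}{s_d},\; \|c(x)\|_\infty,\; \frac{\|XZe - \mu e\|_\infty}{s_c} \right\}, \]
\[ s_d \;=\; \frac{1}{s_{\max}}\max\left\{ s_{\max},\, \frac{\|\lambda\|_1 + \|z\|_1}{m+n} \right\}, \qquad s_c \;=\; \frac{1}{s_{\max}}\max\left\{ s_{\max},\, \frac{\|z\|_1}{n} \right\}, \]

with a fixed constant s_max (the threshold mentioned above), m the number of equality constraints and n the number of variables; the algorithm stops once this error, evaluated with the barrier parameter set to zero, falls below the requested tolerance. The solver's exact constants may differ.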
11.2
Scaling the NLP
Ideally, the formulated problem should be scaled so that, near the solution, all function gradients (objective and constraints), when nonzero, are of a similar order of magnitude. handle_solve_ipopt will compute automatic NLP scaling factors for the objective and constraint functions (but not the decision variables) and apply them if large imbalances of scale are detected. This rescaling is only computed at the starting point. References to the scaled or unscaled objective or constraints in Section 9.1 and Section 11 should be understood in this context.
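As an illustration of what such automatic factors can look like, IPOPT's standard gradient-based scaling heuristic computes, at the starting point x_0 and with a fixed threshold g_max, factors of the form

\[ \sigma_f = \min\left\{1,\; \frac{g_{\max}}{\|\nabla f(x_0)\|_\infty}\right\}, \qquad \sigma_{g_i} = \min\left\{1,\; \frac{g_{\max}}{\|\nabla g_i(x_0)\|_\infty}\right\}, \]

so a function is scaled down only if its gradient at the starting point is larger than g_max. This is shown for orientation only; the factors actually applied by handle_solve_ipopt may be computed differently.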
12
Optional Parameters
Several optional parameters in handle_solve_ipopt define choices in the problem specification or the algorithm logic. In order to reduce the number of formal arguments of handle_solve_ipopt these optional parameters have associated default values that are appropriate for most problems. Therefore, you need only specify those optional parameters whose values are to be different from their default values.
The remainder of this section can be skipped if you wish to use the default values for all optional parameters.
The optional parameters can be changed by calling handle_opt_set at any time between the initialization of the handle and the call to the solver. Modification of the optional parameters during intermediate monitoring stops is not allowed. Once the solver finishes, the optional parameters can be altered again for the next solve.
If any options are set by the solver (typically those with an automatic default choice), their value can be retrieved by handle_opt_get. If the solver is called again, any such arguments are reset to their default values and the decision is made again. A sketch of this workflow is given below.
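As a minimal sketch, assuming the NAG Library for Python bindings (naginterfaces; adapt the calls to the interface you are actually using), options can be set before the solve and queried afterwards:

    from naginterfaces.library import opt

    handle = opt.handle_init(nvar=4)      # create an empty problem handle
    # ... register the objective, constraints, etc. on the handle here ...

    # Set optional parameters before calling the solver.
    opt.handle_opt_set(handle, 'Print Level = 2')
    opt.handle_opt_set(handle, 'Stop Tolerance 1 = 1.0e-7')

    # ... call the solver (e.g., opt.handle_solve_ipopt) here ...

    # Query the value chosen for an automatically set option.
    print(opt.handle_opt_get(handle, 'Hessian Mode'))

    opt.handle_free(handle)               # destroy the handle when finished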
The following is a list of the optional parameters available. A full description of each optional parameter is provided in Section 12.1.
12.1
Description of the Optional Parameters
For each option, we give a summary line, a description of the optional parameter and details of constraints.
The summary line contains:
- the keywords, where the minimum abbreviation of each keyword is underlined;
- a parameter value, indicating whether the option takes a character, integer or real value;
- the default value, where the symbol is a generic notation for machine precision (see precision).
All options accept the value DEFAULT to return single options to their default states.
Keywords and character values are case and white space insensitive.
Defaults
This special keyword may be used to reset all optional parameters to their default values. Any value given with this keyword will be ignored.
Hessian Mode Default
This parameter specifies whether the Hessian will be user-supplied (in hx) or approximated by handle_solve_ipopt using a limited-memory quasi-Newton L-BFGS method. In the automatic setting, if no Hessian structure has been registered in the problem with a call to handle_set_nlnhess and there are general nonlinear objective or constraint functions, then the Hessian will be approximated. Otherwise, hess will be called if and only if either of handle_set_nlnobj and handle_set_nlnconstr has been used to define the problem. Approximating the Hessian is likely to require more iterations to achieve convergence but will reduce the time spent in user-supplied functions.
Constraint: , or .
Infinite Bound Size Default
This defines the ‘infinite’ bound in the definition of the problem constraints. Any upper bound greater than or equal to this value will be regarded as plus infinity (and similarly any lower bound less than or equal to its negation will be regarded as minus infinity). Note that a modification of this optional parameter does not influence constraints which have already been defined; only the constraints formulated after the change will be affected.
It also serves as a limit for the objective function to be considered unbounded ().
Constraint: .
Monitoring File Default
If , the unit number for the secondary (monitoring) output. If set to , no secondary output is provided. The information output to this unit is controlled by Monitoring Level.
Constraint: .
Monitoring Level Default
This parameter sets the amount of information detail that will be printed by the solver to the secondary output. The meaning of the levels is the same as with Print Level.
Constraint: .
Matrix Ordering Default
If NLP Factorization Method is , this parameter specifies the ordering to be used by the internal sparse linear algebra solver. It affects the number of nonzeros in the factorized matrix and thus influences the cost per iteration.
- A heuristic is used to choose automatically between METIS and AMD orderings.
- Both AMD and METIS orderings are computed at the beginning of the solve and the one with the fewest nonzeros in the factorized matrix is selected.
- An approximate minimum degree (AMD) ordering is used.
- METIS ordering is used.
Constraint: , , or .
NLP Factorization Method Default
This parameter controls whether the Harwell package MA86 or MA97 is used for the sparse linear algebra factorization. To determine which best suits your application, it is recommended to try both.
Constraint: or .
Outer Iteration Limit Default
The maximum number of iterations to be performed by handle_solve_ipopt. Setting the option too low might cause the solver to stop with an iteration-limit error exit before the requested accuracy is reached.
Constraint: .
Print File Default
If , the unit number for the primary output of the solver. If , the primary output is completely turned off independently of other settings. The default value is the advisory message unit number as defined by x04abf (no CPP interface) at the time of the optional parameters initialization, e.g., at the initialization of the handle. The information output to this unit is controlled by Print Level.
Constraint: .
Print Level Default
This parameter defines how detailed information should be printed by the solver to the primary output.
The levels are cumulative; in increasing order of verbosity, the output comprises:
- No output from the solver.
- Additionally, derivative check information, the Header and Summary.
- Additionally, the Iteration log.
- Additionally, details of each iteration with scalar quantities printed (this output is shared by two consecutive levels).
- Additionally, individual components of arrays are printed, resulting in large output.
Constraint: .
Print Options Default
If , a listing of optional parameters will be printed to the primary output.
Constraint: or .
Print Solution Default
If , the final values of the primal variables are printed on the primary and secondary outputs.
If or , in addition to the primal variables, the final values of the dual variables are printed on the primary and secondary outputs.
Constraint: , , or .
Stats Time Default
This parameter allows you to turn on timings of various parts of the algorithm to give a better overview of where most of the time is spent. This might be helpful when comparing different solving approaches.
Constraint: or .
Stop Tolerance 1 Default
This option sets the tolerance in (10), which is used for the optimality and complementarity tests derived from the KKT conditions. See Section 11.1.
Constraint: .
Task Default
This parameter specifies the required direction of the optimization. If , the objective function (if set) is ignored and the algorithm stops as soon as a feasible point is found with respect to the given tolerance. If no objective function is set, Task reverts to automatically.
Constraint: , or .
Time Limit Default
A limit on the number of seconds that the solver can use to solve one problem. If this limit is exceeded during a convergence check, the solver will terminate with a corresponding error message.
Constraint: .
Verify Derivatives Default
This parameter specifies whether the function should perform numerical checks on the consistency of the user-supplied functions. It is recommended that such checks are enabled when first developing the formulation of the problem; however, the derivative check results in a significant increase in the number of function evaluations and so it should not be used in production code.
Constraint: or .