3.2. Definitions

This page discusses key terms and definitions introduced in GTOpt.

3.2.1. Problem Statement

In general, the optimization problem solved by GTOpt can be formulated as:

(1)\[\begin{split}\begin{array}{rcl} \min\limits_x & f^i(x) & i = 1, ..., K \\ \mbox{s. t.} & c^j_L \,\le\, c^j(x)\,\le\, c^j_U & j = 1, ..., M \\ & x^k_L \,\le\, x^k\,\le\, x^k_U & k = 1, ..., N \\ \end{array}\end{split}\]

Particular cases of (1) include:

  • Unconstrained Problem:

    This is the general problem (1) with \(K>0\), \(M=0\), and no box bounds.

  • Box constrained Problem:

    This is an optimization problem (1) without generic constraints (\(M=0\), \(K>0\)).

  • Constrained Problem:

    A special case of (1) with a non-trivial set of generic constraints (\(M > 0\)) and objectives (\(K>0\)).

  • Constraints Satisfaction Problem:

    Problem (1) without objective functions (\(K = 0\)) but with a non-trivial set of generic constraints (\(M > 0\)).

  • Multiobjective Problem:

    Problem (1) with multiple objective functions (\(K > 1\), \(M \ge 0\)).

Note that it is not possible to have \(K=0\) and \(M=0\) simultaneously.
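For illustration, the classification above can be expressed as a small sketch. The helper below is hypothetical and not part of the GTOpt API; it simply maps the counts \(K\) (objectives) and \(M\) (generic constraints) to the problem types listed:

```python
# Hypothetical helper (not the GTOpt API): classify problem (1) by the number
# of objectives K, the number of generic constraints M, and the presence of
# box bounds.
def problem_type(K, M, has_box_bounds):
    if K == 0 and M == 0:
        # K = 0 and M = 0 simultaneously is not a meaningful problem
        raise ValueError("K = 0 and M = 0 simultaneously is not allowed")
    if K == 0:
        return "constraint satisfaction"
    if K > 1:
        return "multiobjective"
    if M > 0:
        return "constrained"
    return "box constrained" if has_box_bounds else "unconstrained"

print(problem_type(1, 0, False))  # unconstrained
```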

3.2.2. Optimal Solution

The definition of optimal solution depends on the problem type. In general, the solution includes one or more designs from the problem’s feasible domain that satisfy certain conditions.

The feasible domain of the optimization problem (1) is the set of all designs \(x\) such that

\[\begin{split}\begin{array}{ll} c^j_L \,\le\, c^j(x)\,\le\, c^j_U & \quad M\,\,\,\mbox{generic constraints} \\ x^k_L \,\le\, x^k\,\le\, x^k_U & \quad N\,\,\,\mbox{box bounds} \end{array}\end{split}\]

The number of constraints can be zero, in which case the feasible domain is an \(N\)-dimensional parallelepiped defined by box bounds. In theory, some bounds or even all of them can be infinite; in practice the maximum absolute values of bounds are always limited, and the limiting values used in GTOpt are about \(10^{38}\).
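A membership test for the feasible domain can be sketched directly from the two conditions above. This is a minimal illustration, not the GTOpt API; the data layout (bound pairs and constraint callables) is an assumption:

```python
# Minimal sketch (not the GTOpt API): check whether a design x belongs to the
# feasible domain defined by box bounds and generic constraints.
def is_feasible(x, x_bounds, constraints, c_bounds):
    # Box bounds: x_bounds is a list of (x_L, x_U) pairs, one per variable.
    for xk, (lo, hi) in zip(x, x_bounds):
        if not (lo <= xk <= hi):
            return False
    # Generic constraints: each c is a callable, c_bounds its (c_L, c_U) pair.
    for c, (lo, hi) in zip(constraints, c_bounds):
        if not (lo <= c(x) <= hi):
            return False
    return True
```

For example, `is_feasible([0.5, 0.5], [(0, 1), (0, 1)], [lambda x: x[0] + x[1]], [(0, 1)])` returns `True`, while moving either component outside its box bound, or pushing the constraint value past its limits, returns `False`.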

For \(M>0\) the problem of finding a design in the feasible domain (the constraint satisfaction problem) becomes non-trivial and its solution may be valuable in many applications.

  • \(K=0\): the solution of (1) is understood as any design that belongs to the feasible domain.

If \(K>0\), the task is to find the best solutions, which bring the objective values to a minimum.

  • \(K=1\): \(f_1\) is better than \(f_2\) if

    \[f_1 < f_2, \,\,\, f_1, f_2 \in \mathbb{R}\]

    The solution is understood as any feasible point where the objective reaches the best value.

  • \(K > 1\): the minimum is understood in the Pareto sense:

    \[\begin{split}\vec{f}_1 < \vec{f}_2 \,\, \leftrightarrow \,\, \begin{cases} \forall i : \,\, f^i_1 \le f^i_2 \\ \exists i_0 : \,\, f^{i_0}_1 \ne f^{i_0}_2 \end{cases}\end{split}\]

    In general, the solution is a manifold.
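The dominance relation above translates directly into a small sketch (a minimal illustration, assuming objective vectors are given as tuples):

```python
# Sketch of the Pareto dominance relation: f1 dominates f2 iff f1 is no worse
# in every objective and differs (hence is strictly better) in at least one.
def dominates(f1, f2):
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a != b for a, b in zip(f1, f2)))
```

For instance, `dominates((1.0, 2.0), (1.0, 3.0))` is `True`, while a pair that trades one objective for another, such as `(1.0, 3.0)` versus `(2.0, 2.0)`, is mutually non-dominated.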

The definitions above consider only the designs that belong to the problem’s feasible domain. However, when GTOpt is used in the context of engineering optimization, such strict results may be inadequate. Indeed, in many real-life problems the constraint limits \(c_L, c_U\) are known only approximately. Designs which are “almost” feasible may show better performance than strictly feasible solutions. Thus in many cases the decision can be to adjust constraints, or simply to accept some solution that shows high performance while violating problem constraints to a certain extent. For this reason it is much better to extend the results with additional promising solutions, regardless of whether they are formally feasible. However, infeasible designs cannot be simply added to the result set: generally, the more a design violates problem constraints, the better the performance it tends to exhibit.

The approach taken in GTOpt is the following: promising designs must either have the best seen performance or minimize constraint violation. The idea goes back to the filtering approach in non-linear programming. In order to give a quantitative definition, let us introduce the feasibility violation measure \(\psi\) of a design \(x\) with respect to some constraint \(c_L \le c(x) \le c_U\):

\[\psi = \max\left\{ \, \frac{c_L-c}{\max(1,|c_L|)}, \frac{c-c_U}{\max(1,|c_U|)} \,\right\}\]

where the normalization by \(\max(1,|c_{L,U}|)\) is needed to get a properly scaled dimensionless value and to remove the apparent singularity at \(c_{L,U}=0\). In other words, positive values of \(\psi(x)\) measure the relative amount of constraint violation, while negative values indicate feasible designs.
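The measure \(\psi\) is a one-line computation; the sketch below assumes finite bounds \(c_L \le c(x) \le c_U\):

```python
# Sketch of the feasibility violation measure psi defined above.
# Positive psi: relative amount of constraint violation; negative: feasible.
def psi(c, c_L, c_U):
    return max((c_L - c) / max(1.0, abs(c_L)),
               (c - c_U) / max(1.0, abs(c_U)))

print(psi(1.5, 0.0, 1.0))  # 0.5: the upper bound is violated by 50%
print(psi(0.5, 0.0, 1.0))  # -0.5: the design is feasible
```

Note how the normalization works for large bounds: with \(0 \le c \le 100\) and \(c = 150\), the measure is \((150-100)/100 = 0.5\) rather than the raw excess of 50.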

Considering this measure, quality of a given design is characterized by:

  • \(K > 0\) performance (objective) values \(\vec{f}\), and
  • \(M \ge 0\) constraint violation measures \(\vec{\psi}\).

Qualitatively, among the evaluated designs we want to select those for which either performance or feasibility measure is the best. Quantitatively this means that we consider Pareto optimal solutions in the extended space of dimensionality \(K+M\):

\[\left\{ \vec{f}, \vec{\psi} \right\}\]

Note that for any meaningful problem \(K+M > 0\), hence this definition is operational in all cases, including unconstrained and constraint satisfaction problems.

The above constitutes the universal definition of an optimal set in GTOpt. Consequently, GTOpt results include two sets, “optimal” and “infeasible”, which are formed according to this definition (see the next section Optimal and Infeasible Result Sets).

The only point left aside so far is that the number of “infeasible” optimal solutions grows rapidly with \(K\) or \(M\): even for problems with a moderate number of objectives and constraints, the number of optimal solutions becomes prohibitively large (sometimes including all evaluated designs). To reduce this number, GTOpt additionally filters the infeasible designs after discovering those that are non-dominated (optimal) in the extended space \(\left\{ \vec{f}, \vec{\psi} \right\}\). First, it is clear that designs which strongly violate some constraint are of no interest, so all designs with \(\psi\) exceeding a certain threshold are removed (this threshold is fixed internally in GTOpt). Second, GTOpt removes infeasible designs that are close enough to each other in terms of performance \(\vec{f}\). This phase is controlled by the GTOpt/OptimalSetRigor option, which can make filtering more or less aggressive (a more aggressive filter means fewer points in the final result). Note that the filtering phase controlled by GTOpt/OptimalSetRigor considers only the infeasible designs that are non-dominated in the \(\left\{ \vec{f}, \vec{\psi} \right\}\) space and do not exceed the fixed \(\psi\) threshold.
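The extended-space selection and the \(\psi\)-threshold removal described above can be sketched as follows. This is a simplified illustration, not the GTOpt implementation: the threshold value and the data layout are assumptions, and the proximity-based filtering controlled by GTOpt/OptimalSetRigor is omitted:

```python
# Simplified sketch (not the GTOpt implementation) of selection in the
# extended {f, psi} space followed by threshold filtering.
def extended_pareto(designs, psi_threshold=1.0):
    # designs: list of (f, psi) pairs, each a tuple of objective values and
    # a tuple of violation measures; psi_threshold is illustrative only.
    def dominates(a, b):
        ea, eb = a[0] + a[1], b[0] + b[1]  # concatenate into the extended space
        return (all(x <= y for x, y in zip(ea, eb))
                and any(x != y for x, y in zip(ea, eb)))
    # Keep designs non-dominated in the extended space...
    kept = [d for d in designs
            if not any(dominates(o, d) for o in designs if o is not d)]
    # ...then drop designs whose worst violation exceeds the threshold.
    return [d for d in kept if max(d[1], default=0.0) <= psi_threshold]
```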

3.2.3. Optimal and Infeasible Result Sets

When solving an optimization problem, GTOpt aims to find a limited set of optimal points: a single optimum for single-objective and constraint satisfaction problems, or a set of Pareto-optimal points for a multi-objective problem. This search requires evaluating a much greater number of points which, in the end, form the evaluated set: all points requested by the solver.

From the evaluated set, GTOpt produces two different result sets.

The optimal set includes only feasible points (those that satisfy all problem constraints). For multi-objective problems it contains all feasible points from the evaluated set that are non-dominated in the Pareto sense, and it is commonly understood as the solution.

If GTOpt/OptimalSetType is set to "Extended", the result also includes an additional infeasible set. This set contains points that violate problem constraints, but only to a certain extent (controlled by GTOpt/OptimalSetRigor). This information is useful when GTOpt seems unable to solve a problem: it can show which constraints should be relaxed to obtain a feasible solution.

If GTOpt/OptimalSetType is "Strict", the infeasible set is present in the result, but contains no points (all point arrays are empty).

Finally, the points from the evaluated set that are dominated by some other evaluated point are completely discarded from the result.
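Putting the two result sets together, the partitioning described in this section can be sketched as follows (a simplified illustration, not the GTOpt API; the (f, psi) tuple layout is an assumption, and the additional GTOpt/OptimalSetRigor filtering is omitted):

```python
# Simplified sketch (not the GTOpt API): split the evaluated set into the
# "optimal" and "infeasible" result sets described above.
def dominates(a, b):
    # Pareto dominance: no worse everywhere, different (hence better) somewhere
    return (all(x <= y for x, y in zip(a, b))
            and any(x != y for x, y in zip(a, b)))

def split_result(evaluated):
    # evaluated: list of (f, psi) pairs of tuples; a design is feasible when
    # all its violation measures are non-positive.
    feasible = [d for d in evaluated if all(v <= 0 for v in d[1])]
    violating = [d for d in evaluated if not all(v <= 0 for v in d[1])]
    # Optimal set: feasible points non-dominated in objective space.
    optimal = [d for d in feasible
               if not any(dominates(o[0], d[0]) for o in feasible if o is not d)]
    # Infeasible set: violating points non-dominated in the extended space.
    infeasible = [d for d in violating
                  if not any(dominates(o[0] + o[1], d[0] + d[1])
                             for o in evaluated if o is not d)]
    return optimal, infeasible
```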

3.2.4. Assumptions on Objectives and Constraints

GTOpt always treats problem (1) as a “black box”: it only requires the ability to evaluate all relevant functions \(f^i\) and \(c^j\) at a particular design space point \(x\). Moreover, GTOpt can handle regions where the behavior of \(f^i\), \(c^j\) is undefined (provided, of course, that the undefined points are not too dense in the design space), although the level of success is unfortunately unpredictable. Nevertheless, GTOpt obviously must assume something about the class of considered functions, and the basic assumption is

(2)\[\begin{split}f^i(x) = (1 + \epsilon_f ) \cdot {\cal F}^i(x) \qquad |\epsilon_f(x)| \ll 1 \nonumber \\ ~ \\ c^j(x) = (1 + \epsilon_c ) \cdot {\cal C}^j(x) \qquad |\epsilon_c(x)| \ll 1 \nonumber\end{split}\]

where \({\cal F}^i\), \({\cal C}^j\) are at least continuous functions and the random noise factors \(\epsilon_f\), \(\epsilon_c\) are relatively small. Moreover, let \(x^*\) denote the set of optimal solutions to the problem (1). Then it is also assumed that the noise factors become irrelevant close to the optimal solution:

(3)\[\lim\limits_{x \to x^*} \epsilon_{f,c} = 0\]

Note that in practice it is hardly possible to estimate the noise magnitude \(|\epsilon|\) or verify (3) a priori. Moreover, the suitability of a particular optimization tool, GTOpt included, is usually judged by a trial-and-error approach. Nevertheless, eqs. (2) and (3) provide a framework in which GTOpt is expected to be appropriate to solve (1).
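To make assumptions (2) and (3) concrete, here is a toy black box satisfying them. All names are hypothetical; the noise model (relative noise proportional to the distance from the optimum \(x^* = 0\)) is merely one way to fulfill (3):

```python
import random

# Toy black box illustrating assumptions (2)-(3): a smooth underlying
# function F(x) = x^2 with minimum at x* = 0, perturbed by a relative noise
# factor eps whose magnitude vanishes as x approaches x*.
def noisy_objective(x, noise_level=0.01):
    F = x * x                                            # smooth part, |F| exact at x* = 0
    eps = noise_level * abs(x) * random.uniform(-1, 1)   # |eps| -> 0 as x -> x*
    return (1 + eps) * F
```

At the optimum the noise factor is exactly zero, so `noisy_objective(0.0)` always returns `0.0`; away from it, the returned value deviates from \(F(x)\) by at most the relative noise level times \(|x|\).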