# Optimizer¶

**Tag:** Exploration

The *Optimizer* block defines and solves an optimization problem.
When configuring this block, you describe the problem by adding variables, objective functions, and constraints and specifying their properties.
The block automatically adds corresponding ports: each variable creates an output port, each objective or constraint creates an input port.
These ports then have to be connected to inputs and outputs of the blocks that evaluate problem functions.

See also

*GTOpt* guide — the guide to the Generic Tool for Optimization (GTOpt), the pSeven Core optimization problem solver used by *Optimizer*.


## Introduction¶

The key to understanding *Optimizer* is that it does not require you to select a specific optimization algorithm to solve a problem.
It uses a variety of methods and implements complex logic that selects the best solving method automatically (see *Solver Architecture* for details).
This process can be tuned with various options (see section *Options* for a full reference), but the primary task when configuring *Optimizer* is to describe the problem as fully and precisely as possible.
Thus *Optimizer* configuration contains the problem description, but not the definitions of problem functions.
Objectives and constraints are evaluated by another block or blocks, so an optimization workflow in pSeven contains at least two blocks: the *Optimizer* and the evaluating block.

The evaluating block can be a *Composite* block containing some computational chain, or you can use several blocks that evaluate problem objectives and constraints.
When running, *Optimizer* outputs values of variables and receives evaluation results, iteratively solving the problem, and finally outputs the solution data.
In particular, this allows creating collaborative optimization workflows, where different *Optimizer* blocks solve subproblems and send their results to an “upper level” *Optimizer* block.

See also

*Optimization Basics*- Introductory optimization tutorial.
*Multi-Objective Optimization Basics*- Optimization tutorial solving a simple multi-objective problem.
*Integrated Component Optimization*- An example of using an external program to evaluate problem functions in an optimization workflow.

## Configuration Dialog¶

The *Configuration* tab in *Optimizer* configuration dialog contains three nested tabs:

- *Problem definition* — general problem definition and main settings.
- *Advanced* — additional settings for block inputs and outputs and problem properties.
- *Robust optimization* — settings specific to robust optimization problems.

### Problem Definition¶

The *Problem definition* tab opens by default when you open the configuration dialog.
It is essential for configuring any problem (specifies problem variables, objectives, and constraints) and also contains some frequently used settings.

Problem variables and functions are added on the *Variables* 1, *Objectives* 2, and *Constraints* 3 panes.
They can have scalar or vector type; definitions of vector variables and functions can be expanded 4 to see component properties.
Note that certain properties shown on these panes can be mapped to input ports.
For example, you can create additional ports that accept initial guesses of variables, values of constraint bounds, or even the dimension of vector variables and functions (see section *General Problem Configuration* for more details).

The option presets selector 5 can be used to quickly apply some typical option settings.
These presets are also described in section *General Problem Configuration*, and options are detailed in section *Options*.

Enabling automatic grouping 6 creates default variable groups, adding sets of ports that separately output values of variables for calculating objectives, constraints, or their gradients.
This feature is useful when objective and constraint functions are evaluated by different blocks.
Variable groups can also be configured manually — see section *Variable Grouping* for more details.

Enabling batch optimization mode 7 allows *Optimizer* to request batches of function evaluations (with batch size limited by *GTOpt/BatchSize*).
In particular, this allows using parallel calculations in optimization if problem functions are evaluated by *Composite* blocks.
Batch configuration is described in more detail in section *Batch Mode*.

Finally, the output configuration buttons 8 allow selecting the data to output as the optimization result.
For details, see sections *Solution Output Configuration* and *Optimization Results*.

### Advanced¶

The *Advanced* tab allows fine configuration of variable outputs and is also used to enable and configure analytical gradients of problem functions.

On the *Grouping* pane 1 you can manually create and edit variable groups.
Note that this pane is disabled if automatic variable grouping is enabled on the *Problem definition* tab.
For more details, see section *Variable Grouping*.

Gradient-based optimization methods by default use numerical differentiation to calculate required gradients.
If the blocks that evaluate functions can also calculate gradient values, *Optimizer* can use these “analytical” gradients instead of numeric differentiation in order to reduce the number of function evaluations.
Analytical gradients can be enabled separately for objectives 2 and constraints 3; additional settings on the *Objectives gradient* and *Constraints gradient* panes allow configuring sparse gradients (see section *Using Analytical Gradients* for details).

See also

Section *Numerical Methods* in the *GTOpt* guide.

### Robust Optimization¶

The *Robust optimization* tab contains specific settings for robust optimization problems.

The *Stochastic variables* pane 1 is used to add random variables to the problem; these variables are used to simulate uncertainties in problem inputs.
Their values are not generated by *Optimizer* but requested from some other block that samples values from a random distribution (so such a block has to be added to the workflow, in addition to the blocks that evaluate problem functions).

The *Chance constraints* pane 2 is used to add chance constraints to a robust optimization problem.
Chance constraints are optional, since by default robust optimization works with expectation constraints and uses constraint definitions added on the *Constraints* pane on the *Problem definition* tab.
The *Chance constraints* pane also allows editing certain properties and creating port mappings for vector dimensions and constraint bound values.

Settings in the *Tolerances* pane 3 duplicate a few options specific to robust optimization that set optimization thresholds:

- Robust constraints tolerance: see *GTOpt/RobustConstraintsTolerance*.
- Robust gradient tolerance: see *GTOpt/RobustGradientTolerance*.
- Robust objectives tolerance: see *GTOpt/RobustObjectiveTolerance*.

Note that robust optimization requires certain changes to the workflow: you will have to add one or more blocks that generate random values (the values of stochastic variables). For more details, see section *Robust Problem Configuration*.

See also

Section *Robust Optimization* in the *GTOpt* guide.

## General Problem Configuration¶

General definition of an optimization problem describes its variables, objective and constraint functions.
All of them are named; adding a variable creates an output port with the same name, while adding a function creates an input port.
When *Optimizer* runs, it outputs the values of variables for which problem functions should be evaluated, waits until function values are received from the evaluating block or blocks, then generates new values of variables.
This iterative process continues until an optimal solution is found.

Additional properties of variables and functions are:

- Variables: bounds, type (continuous or integer), optional initial guess.
- Objectives: linearity type and evaluation cost.
- Constraints: bounds, linearity type, and evaluation cost.
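For comparison only, the same problem elements that *Optimizer* collects through its configuration dialog — variables with bounds and an initial guess, a generic objective, a bounded constraint — can be written down with SciPy’s general-purpose optimizer. This is an illustrative sketch, not pSeven code; all function names and values here are made-up examples.

```python
# Hypothetical example: variable bounds, initial guess, objective and
# constraint definitions expressed with scipy.optimize instead of the
# Optimizer configuration dialog.
from scipy.optimize import minimize

def objective(x):       # generic (nonlinear) objective
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def constraint(x):      # inequality constraint: x0 + x1 - 2 >= 0
    return x[0] + x[1] - 2.0

bounds = [(-5.0, 5.0), (-5.0, 5.0)]   # lower/upper bounds per variable
x0 = [0.0, 0.0]                       # initial guess

result = minimize(objective, x0, bounds=bounds,
                  constraints=[{"type": "ineq", "fun": constraint}])
```

The key difference is that with *Optimizer* the function bodies live in separate evaluating blocks, while the bounds, guesses, and hints stay in the problem description.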

### Variables¶

Variables are added on the *Variables* pane.
Clicking the add button brings up the *Add Variable* dialog.

- Variable name: the name for this variable and the corresponding output port. This is also the base part of the names of additional ports related to this variable — such as the optional ports for variable bounds and the initial guess value.
- Type: changes both the variable type and the type of the corresponding output port — *RealScalar* for scalar variables, *RealVector* for vector ones; *IntScalar* and *IntVector*, respectively, if the variable is integer. For vector variables, the dimension (vector size) can be fixed or mapped to an input port (see below).
- Bounds: the lower and upper bound values. Variable bounds are required to solve the problem, but you can leave them empty in the *Add Variable* dialog and map them to input ports or specify them later.
- Initial guess: an optional initial guess value for the variable; can be used to perform multistart optimization.
- Variable type: continuous or integer (discrete). *Optimizer* supports mixed-integer problems, and solving them does not require special configuration — you only have to specify which variables are integer (the default is continuous).

After adding a variable, you can edit its name, bounds and initial guess values on the *Variables* pane.
Variable bounds, initial guesses and dimensions of vector variables can also be mapped to special input ports.
These additional ports are enabled by double-clicking a cell and ticking the checkbox 1.

Assuming there is a variable named “x1”, you can create the following mappings:

- Lower bound: adds an input port named x1_lb.
- Upper bound: adds an input port named x1_ub.
- Initial guess: adds an input port named x1_ig.
- Size, for vector variables: adds an input port named x1_n.

These ports, when enabled, are required to start the block.

### Objectives¶

Objectives are added on the *Objectives* pane.
Clicking the add button brings up the *Add Objective* dialog.

- Objective name: the name for this objective function and the corresponding input port. This is also the base part of the names of additional ports related to this objective — such as the ports for gradient values.
- Type: changes both the objective type and the type of the corresponding input port, similarly to variable type. For vector objectives, the dimension (vector size) can also be fixed or mapped to an input port.
- Linearity type: if it is known that the function is linear or quadratic, you can hint this to *Optimizer* to improve performance. If the generic type (default) is selected, no assumptions on function behavior are made.
- Evaluation cost type: setting this to “Expensive” enables surrogate based optimization — a special mode that trains and evaluates internal approximation models of the expensive functions to improve solution quality (see section *Surrogate Based Optimization* for more details).

After adding an objective, you can change its name, linearity type and evaluation cost on the *Objectives* pane.
Similarly to vector variables, dimensions of vector objectives can be mapped to input ports, creating additional inputs named f1_n where “f1” is an example name of an objective.
These ports, if enabled, are also required to start the block.

### Constraints¶

Constraint functions are added on the *Constraints* pane.
Clicking the add button brings up the *Add Constraint* dialog.

- Constraint name: the name for this constraint function and the corresponding input port. This is also the base part of the names of additional ports related to this constraint — such as the ports for constraint bounds.
- Type: changes both the constraint type and the type of the corresponding input port, similarly to variable and objective types. For vector constraints, the dimension (vector size) can also be fixed or mapped to an input port.
- Bounds: the lower and upper bound values. At least one of them is required, but you can leave these fields empty in the *Add Constraint* dialog and map the constraint bounds to input ports or specify them later.
- Linearity type: the same setting as for objectives; if the constraint function is known to be linear or quadratic, giving this hint to *Optimizer* improves performance.
- Evaluation cost type: the same setting as for objectives; enables surrogate based optimization with respect to this constraint.

After adding a constraint, you can change its name, bounds, linearity type and evaluation cost on the *Constraints* pane.
Similarly to variables, constraint bounds and dimensions of vector constraints can also be mapped to special input ports.
These additional ports are enabled by double-clicking a cell and ticking the checkbox 1.

Assuming there is a constraint named “c1”, you can create the following mappings:

- Lower bound: adds an input port named c1_lb.
- Upper bound: adds an input port named c1_ub.
- Size, for vector constraints: adds an input port named c1_n.

These ports, when enabled, are also required to start the block.

### Option Presets¶

To simplify configuration in typical usage scenarios, the *Optimizer* block provides several option presets.

Available presets are:

- (Unconfigured): the default placeholder preset for new blocks. It cannot be used to run the block: each time you create a new block, you should select another preset. It is not applied to blocks created in older versions or blocks that already have another preset selected.
- Analytical problem: suitable for test problems and similar tasks where all objectives and constraints are analytical functions (smooth, noiseless, well-defined).
- Smooth problem: suitable for problems where all objectives and constraints are at least continuous functions.
- Noisy problem: should be used for problems where at least one objective or constraint is known (or suspected) to be noisy. See section *Noisy Problems* in the *GTOpt* guide for the definition of “noisy”.
- Heavily noisy problem: for even noisier problems. More specifically, should be used for problems where at least one objective or constraint has large relative noise as defined in section *Noisy vs Meaningless Problems*.
- Expensive problem: the preset to use with surrogate based optimization, that is, for problems that contain computationally expensive objectives or constraints. Note that this preset does not automatically enable surrogate based optimization: you still have to set the evaluation cost for expensive functions (see sections *Objectives* and *Constraints*). However, it does enable global optimization, since surrogate based optimization does not work with purely local methods (see section *Surrogate Based Optimization* for details).
- (Custom): a placeholder selected when you change some preset options after selecting a preset. It simply shows that some of your settings on the *Options* tab override preset values.

The above presets are intended as starting points in configuring the block: your settings on the *Options* tab override preset option values if you manually change an option that is included in the currently selected preset.
Note, however, that selecting a preset can cancel manual changes in option settings: preset settings take priority if you first change an option that is part of a preset and select the preset afterwards.

See also

*Options* — the *Optimizer* option reference.

## Solution Output Configuration¶

For a general definition of a solution to an optimization problem, see the *Optimal Solution* section in the *GTOpt* guide.
*Optimizer* always outputs the optimal solution data and, optionally, can output an additional infeasible solutions data set (see *Optimal and Infeasible Point Sets* for details).
The content and structure of solution output can be changed using the output configuration buttons on the *Problem definition* tab.

For the optimal solution:

- If “Single port” is selected, all solution data is output to the optimal port as a single *RealMatrix*.
- If “Multiple ports” is selected, solution data is output to separate ports named optimal_x, optimal_f, and so on. These ports can be enabled independently.

For the infeasible solution, output is disabled by default (“No ports”); enabling it requires selecting the ports to output under “Multiple ports”. These ports are named infeasible_x, infeasible_f, and so on.

For a detailed description of solution structure and content, see section *Optimization Results*.

## Batch Mode¶

Enabling the batch mode allows *Optimizer* to request multiple function evaluations at once.
This mode is primarily intended for usage with parallelized *Composite* blocks or with *Program* blocks that submit jobs to an HPC cluster.
Enabling the batch mode changes the type of variable, objective and constraint ports to *List* (of the respective scalar or vector type).
Lists of variable values (the output batches) can then be processed in parallel to take advantage of parallel computation capabilities.

Note that after enabling the batch mode you also have to change the batch size limit applied by *GTOpt/BatchSize* (default is 1, so all batches will contain one point only).
Setting a bigger value does not necessarily mean that *Optimizer* will request this number of evaluations every time; *GTOpt/BatchSize* sets only the maximum number of points in a batch.
Also note that when configuring a *Composite* block for parallel evaluations, you are not required to set the parallelization ratio to the same value as the batch size: if the number of points in a batch is greater than the number of parallel threads, the *Composite* will queue them automatically.
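On the evaluating side, a batch is simply a list of points that can be processed concurrently. The sketch below illustrates the idea with a plain Python thread pool; the evaluation function is a hypothetical stand-in for a real computational chain, not pSeven code.

```python
# Minimal sketch of batch evaluation: Optimizer outputs a List of points,
# and the evaluating side processes them in parallel.
from concurrent.futures import ThreadPoolExecutor

def evaluate_point(x):
    # stand-in for an expensive objective evaluation
    return sum(v * v for v in x)

def evaluate_batch(batch, max_workers=4):
    # batch: a list of variable-value vectors, as output in batch mode
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluate_point, batch))

batch = [[0.0, 0.0], [1.0, 2.0], [3.0, 4.0]]
results = evaluate_batch(batch)   # → [0.0, 5.0, 25.0]
```

As noted above, a *Composite* block provides this queuing behavior itself, so no such code is needed inside a workflow; the sketch only shows why batches larger than the thread count are not a problem.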

## Surrogate Based Optimization¶

Surrogate based optimization (SBO) is a special feature of *Optimizer* that allows it to replace certain objectives and constraints with their approximation models, trained and evaluated internally.
To enable SBO, you have to specify the evaluation cost (expensive) for at least one objective or constraint (see *General Problem Configuration*).
SBO is a unique and complex approach; it is described in full detail in the *GTOpt* guide (see section *Surrogate Based Optimization* there).
This section contains only the most essential notes regarding the SBO usage.

- SBO always introduces a significant overhead because internal models are re-trained multiple times, and even a single model training is computationally expensive. If the modeled functions are not really expensive, training models can actually take more time than real evaluations; this effect is commonly mistaken for performance degradation, but in fact it simply means that SBO is pointless for this specific problem. Thus the first question to consider when using SBO is whether the expected gain in performance surpasses the model training overhead.
- There is no on/off switch for SBO in *Optimizer* configuration. It is enabled automatically for all objectives and constraints that have their evaluation cost set to expensive, and using this setting is the only way to enter SBO.
- SBO is not compatible with purely local search methods (a certain degree of globalization is required to train meaningful models). Due to this, when *Optimizer* runs in SBO mode, *GTOpt/GlobalPhaseIntensity* is forced to 0.5 unless it is manually set to a non-default value (the default is 0, which disables global search). Globalization, in turn, increases the number of real evaluations; thus SBO becomes a trade-off between globalization (which increases the number of real evaluations) and the ability to evaluate internal models instead of real objectives and constraints (which decreases it). In some scenarios the forced value of *GTOpt/GlobalPhaseIntensity* is too high, and the increase in the number of evaluations due to globalization can even exceed the “savings” from evaluating internal models. To avoid this, set *GTOpt/GlobalPhaseIntensity* to a lower value on the *Options* tab; a non-default value set manually takes priority over the default behavior imposed by SBO. For more details on selecting the value, see section *Local and Global Methods*.
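The core SBO idea can be shown with a deliberately tiny toy loop: fit a cheap model to the points evaluated so far, minimize the model, and spend a real evaluation only at the model’s candidate point. This is not the GTOpt algorithm — just an illustration of the trade-off discussed above, with a made-up one-dimensional objective.

```python
# Toy illustration of the surrogate based optimization idea: a quadratic
# model is re-trained on all evaluated points, and each iteration spends
# exactly one real evaluation at the model's minimizer.
import numpy as np

def expensive_f(x):                  # stand-in for an expensive objective
    return (x - 0.7) ** 2

xs = np.array([0.0, 0.5, 1.0])       # initial sample (real evaluations)
ys = expensive_f(xs)

for _ in range(3):                   # a few surrogate iterations
    coeffs = np.polyfit(xs, ys, 2)   # train the internal model
    grid = np.linspace(0.0, 1.0, 1001)
    candidate = grid[np.argmin(np.polyval(coeffs, grid))]
    xs = np.append(xs, candidate)    # one real evaluation per iteration
    ys = np.append(ys, expensive_f(candidate))

best = xs[np.argmin(ys)]             # close to the true optimum at 0.7
```

Here model training is trivial, so the overhead concern does not show; in real SBO the `polyfit` step is replaced by expensive approximation model training, which is exactly the overhead the first bullet above warns about.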

## Re-using Problem Data¶

If some problem evaluation data (optimization history, previous solution) is already available, it can be sent to the designs input port to be used in solving.
This data should be a *RealMatrix* containing values of variables, objectives, and constraints (the latter two are optional).
Its structure is similar to the optimization results structure described in section *Optimization Results*.

The initial data can be used by *Optimizer* in the following ways:

- The incoming *RealMatrix* can contain only values of variables. In this case it is interpreted as multiple initial guess values. *Optimizer* will evaluate these points first, then select the best one and use it as the starting point.
- In addition to variable values, the matrix can also contain either objective values, or both objective and constraint values. In general this is the same case as above, but *Optimizer* will skip evaluations where results are already available.
- In surrogate based optimization, this data is used to train initial approximation models for the expensive functions.

The support for initial problem data makes it possible to continue an interrupted optimization or to improve an existing solution.
It can also be used to select the best point from a sample or to check the feasibility of sample points.
In the latter case it is recommended to enable the infeasible solution output (see sections *Solution Output Configuration* and *Optimization Results*), as it allows discovering the points that violate constraints the least.
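Assembling such a matrix is straightforward: points are rows, with variable columns first, then the optional objective (and constraint) columns. The sketch below uses made-up dimensions and values purely to show the layout.

```python
# Sketch of building initial data for the designs input: rows are points,
# variable columns come first, then optional objective values.
import numpy as np

x = np.array([[0.0, 0.0],
              [1.0, 2.0],
              [2.0, 1.0]])           # 3 points, 2 variables
f = np.array([[5.0], [0.0], [2.0]])  # known objective values at those points
designs = np.hstack([x, f])          # shape (3, 3): x columns, then f

# With variables only, rows act as multiple initial guesses; with f (and c)
# columns present, evaluations at these points are skipped.
```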

## Using Analytical Gradients¶

Enabling analytical objective or constraint gradients on the *Advanced* tab creates additional input ports that accept gradient values.
Each objective and constraint gets its own port for gradients.
For example, if there is an objective named “f1”, the d_f1 input is created in addition to f1.
By default, gradients are dense — that is, the gradient is a *RealMatrix* containing all partial derivatives in the function-major order:

\[\begin{pmatrix} \dfrac{\partial f^1}{\partial x^1} & \cdots & \dfrac{\partial f^1}{\partial x^n} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial f^m}{\partial x^1} & \cdots & \dfrac{\partial f^m}{\partial x^n} \end{pmatrix}\]

where \(m\) is the function dimension and \(n\) is the total variable dimension.
Note that since objectives and constraints can be vector functions, the gradient is always a *RealMatrix*; for scalar functions, this matrix will simply contain a single row.
The order of variables is the same as in the list on the *Variables* pane on the *Problem definition* tab; vector variables yield several gradient components, hence \(n\) is not the number of variables but the sum of dimensions of all variables, counting scalar variables as 1-dimensional.
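The function-major layout can be made concrete with a small numerical example: for a hypothetical vector function with \(m = 2\) components of \(n = 3\) variables, the matrix sent to a gradient port has one row per function component, in the same variable order as on the *Variables* pane.

```python
# Dense, function-major gradient layout for f: R^3 -> R^2, computed here
# by central finite differences (the function itself is a made-up example).
import numpy as np

def f(x):                                    # m = 2 components, n = 3 variables
    return np.array([x[0] * x[1], x[1] + x[2] ** 2])

def dense_gradient(f, x, h=1e-6):
    m, n = f(x).size, x.size
    grad = np.zeros((m, n))                  # row i: all derivatives of f_i
    for j in range(n):
        step = np.zeros(n)
        step[j] = h
        grad[:, j] = (f(x + step) - f(x - step)) / (2 * h)
    return grad

J = dense_gradient(f, np.array([1.0, 2.0, 3.0]))
# J ≈ [[2, 1, 0],
#      [0, 1, 6]]
```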

If batch mode is enabled, the type of gradient ports is changed to *List* which should contain *RealMatrix* elements (gradient matrices).

Using the *Objectives gradient* and *Constraints gradient* panes on the *Advanced* tab you can configure sparse gradients.
If a gradient is sparse, it becomes a vector that contains only the non-zero gradient values:

\[\left( \dfrac{\partial f}{\partial x^{i_1}}, \dots, \dfrac{\partial f}{\partial x^{i_k}} \right)\]

where the number of elements is usually less than in the dense matrix (\(k < n\)).
The type of the corresponding gradient port is changed to *RealVector*.
This holds even in the case of \(k = n\) (that is, the dense gradient is disabled, but all variables are selected in the configuration pane).

Similarly to dense gradients, if batch mode is enabled, port type is changed to *List* which in this case should contain *RealVector* elements (sparse gradient vectors).
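Continuing the same illustrative setup, a sparse gradient simply keeps the derivatives with respect to the selected variables as a flat vector:

```python
# Sparse gradient sketch: only derivatives w.r.t. the variables the function
# actually depends on are sent, as a flat vector instead of a full row.
import numpy as np

dense = np.array([[2.0, 0.0, 6.0]])   # full row for a scalar function, n = 3
selected_columns = [0, 2]             # variables selected in the pane
sparse = dense[0, selected_columns]   # k = 2 < n = 3 values
```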

## Variable Grouping¶

By default *Optimizer* outputs values of all variables whenever it requests problem function values.
This does not mean that values of all objectives and constraints are required each time — in fact, most iterations require values of either objectives or constraints but not both.
An unwanted effect is that even if objectives and constraints are evaluated by different blocks, each of these blocks always receives a full set of inputs and starts, although its results may not be required by *Optimizer* at the current iteration.
To avoid such a waste of processing time, you can change the way *Optimizer* outputs values of variables by using variable grouping.

A simple option is to use automatic grouping enabled on the *Problem definition* tab.
When it is on, default outputs for variable values are removed, and each of them is replaced by several independent ports that activate when objectives, constraints, or their gradients are needed.
For example, assuming there is a variable named “x1” (hence the x1 output port), automatic grouping replaces x1 with the following ports:

- x1_f: outputs a value only when the block needs objective evaluations. This port does not appear if the problem contains no objectives (a constraint satisfaction problem).
- x1_c: same as above for constraint evaluations; does not appear if the problem is unconstrained.
- x1_df: outputs a value only when the block needs values of objective gradients. This port appears only if analytical objective gradients are enabled (see *Using Analytical Gradients*).
- x1_dc: same as above for constraints. Similarly, appears only if analytical constraint gradients are enabled.

The finest degree of tuning is available on the *Advanced* tab where you can create variable groups manually (automatic grouping must be disabled to enable the manual one).
Clicking the add button on the *Grouping* pane creates a new empty group.
Each group is named, and the group name is used to generate the names of new ports.
After adding a group, you can select which variables and functions it includes; ports with the group name as a suffix will be added for variables that are included in the group, and these ports will output values of variables only when the block needs to evaluate some function in this group.
For example, if there is a group “G1” including the objective “f1” and variables “x1”, “x2”:

- Additional outputs are x1_G1 and x2_G1.
- The name of objective input port (f1) is not changed.

Note that adding a group manually does not remove the default variable output port (as the automatic grouping does). For example, the ports x1 and x2 in the example above continue to exist after the group is created.

As a rule of thumb, a group definition on the *Grouping* pane can be read as “this objective (constraint, gradient, or several functions) depends on these variables”, and *Optimizer* changes its ports according to this definition.

## Robust Problem Configuration¶

Robust optimization considers the case when uncertainties are present in the problem.
In robust optimization, problem functions can depend on stochastic (random) variables.
This leads to uncertainties in objective and constraint values; to handle them, *Optimizer* considers expected (mean) function values at each evaluated point.
Alternative treatment of constraints is also possible: instead of using expected values, *Optimizer* can estimate the probability of violating a constraint (denoted \(\alpha\)).
For more details and formal definitions, see section *Robust Problem Formulation* (and other sections in *Robust Optimization*).

See also

Section *Examples* — the **Robust Optimization** example project contains several workflows illustrating robust optimization and its relationship with uncertainty quantification methods.

### Stochastic Variables¶

Robust optimization algorithms are enabled automatically if the problem contains stochastic variables.
These variables are added on the *Stochastic variables* pane on the *Robust optimization* tab.

- Variable name: the name for this variable and the base part of names for corresponding input and output ports (see below).
- Type: changes the variable type and types of corresponding ports. For vector variables, dimension can be fixed or mapped to an additional input port.

Adding a stochastic variable (for example, “s1”) creates three ports:

- s1_quantity — outputs the number of random values required at the current iteration.
- s1 (input) — accepts the stochastic variable values from a random generator block. The type of this port is always *List*, and the number of elements in the list must be equal to the value output to s1_quantity. The elements may be scalar or vector, depending on the variable type.
- s1 (output) — outputs stochastic variable values to the evaluating block, taking them from the list received to the s1 input. The port type is either *RealScalar* or *RealVector*, depending on the variable type.

Dimension of a vector variable can be mapped to an input port in the same way as for variables on the *Problem definition* tab (see section *Variables*). This creates an additional input port named s1_n which is required once, when the optimization process starts.

Note that *Optimizer* itself does not generate random variable values but requests them from a random generator block which hence has to be added to the workflow.

For example, to generate values of a scalar stochastic variable, you can use a *Random* block in vector mode.
In this case, s1_quantity has to be connected to its size input, and the value output of the *Random* block has to be connected to s1.
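Conceptually, the random generator block fills the role sketched below: given the count from s1_quantity, it produces a list of samples for the s1 input. The distribution and function name here are illustrative only (a normal distribution is just an example choice).

```python
# What a random generator block supplies: a list of s1_quantity samples
# from the chosen distribution (hypothetical helper, not pSeven code).
import random

def generate_stochastic_values(quantity, mean=0.0, sigma=1.0):
    # returns the List expected on the s1 input, one scalar per element
    return [random.gauss(mean, sigma) for _ in range(quantity)]

s1_quantity = 5   # the count read from the s1_quantity output
s1_values = generate_stochastic_values(s1_quantity)
```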

Finally, you will have to edit the evaluating block, creating an additional input port that will accept random values from the s1 output of *Optimizer*.
Naturally, the evaluating block must actually use this value when calculating outputs — so you will be required to edit its function definitions as well.

Warning

If a function depends on some stochastic variable, it should not be specified as linear or quadratic in general problem configuration (see section *Objectives*) because it leads to incorrect behavior of the robust optimization algorithm. Note that such problem formulation is incorrect by definition: a function depending on a random variable cannot be considered linear or quadratic. *Optimizer* cannot analyze the blocks you created to evaluate problem functions, and thus it cannot tell which of your functions are randomized, so automatic validation of this issue is not possible.

### Constraints¶

Chance constraints in a robust optimization problem are optional: by default, *Optimizer* considers expectation constraints and uses constraint definitions added on the *Problem definition* tab.
Chance constraints are added separately, on the *Chance constraints* pane on the *Robust optimization* tab.

- Constraint name: the name for this chance constraint function and the corresponding input port that accepts evaluation results. This is also the base part of the names of additional ports related to this constraint — such as the ports for constraint bounds.
- Type: changes both the constraint type and the type of the corresponding input port, similarly to the types of variables, objectives, and constraints on the *Problem definition* tab. For vector constraints, the dimension (vector size) can also be fixed or mapped to an input port.
- Bounds: the lower and upper bound values. At least one of them is required, but you can leave these fields empty and map the constraint bounds to input ports or specify them later.
- Linearity type: the same setting as for general objectives and constraints.
- Evaluation cost type: the same setting as for general objectives and constraints; enables surrogate based optimization.
- Constraint alpha: the maximum allowed probability \(\alpha\) of violating the specified constraint bounds. This setting is required: for chance constraints, *Optimizer* considers not the constraint value as is, but the estimated probability of the event that the constraint value is out of bounds; \(\alpha\) can also be understood as the failure probability threshold.

After adding a chance constraint, you can change its name, bounds, \(\alpha\) value, linearity type and evaluation cost on the *Chance constraints* pane.
Also, constraint bounds and dimension of a vector constraint can be mapped to input ports in the same way as for constraints on the *Problem definition* tab, creating additional ports named c_alpha_lb, c_alpha_ub, c_alpha_n (“c_alpha” is an example constraint name).
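The meaning of \(\alpha\) can be illustrated with a Monte Carlo estimate: sample the stochastic variable many times, count how often the constraint value leaves its bounds, and compare the frequency with \(\alpha\). The constraint function and distribution below are made-up examples, not the actual GTOpt estimator.

```python
# Monte Carlo sketch of a chance constraint check: the probability that the
# constraint value leaves its bounds must not exceed alpha.
import random

def constraint(x, s):                 # depends on design x and random s
    return x + s

def violation_probability(x, lower, upper, n_samples=100_000, seed=0):
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_samples):
        c = constraint(x, rng.gauss(0.0, 1.0))
        if c < lower or c > upper:
            violations += 1
    return violations / n_samples

alpha = 0.05
p = violation_probability(x=0.0, lower=-10.0, upper=2.0)
feasible = p <= alpha                 # P(c > 2) ≈ 0.023 for N(0, 1)
```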

## Optimization Results¶

The result data to output can be selected in block configuration on the *Problem Definition* tab (see section *Solution Output Configuration*).
In general, an optimization result can be represented by a table where each row contains the data of a single solution.
For deterministic problems (without stochastic variables), such solution contains values of variables, objectives, constraints, constraint violation measures, and solution feasibility measure (see *Deterministic Problem Solution*).
For robust optimization problems, the solution data also contains accuracy estimates for values of objectives, constraints, violation and feasibility measures (see *Robust Problem Solution*).

### Deterministic Problem Solution¶

General solution structure (applies both to the optimal and infeasible datasets) is the following:

| Solution # | Variables | Objectives | Constraints | Constraint violation | Feasibility measure |
|---|---|---|---|---|---|
| 0 | \(x^1 \dots x^n\) | \(f^1 \dots f^k\) | \(c^1 \dots c^m\) | \(\psi^1 \dots \psi^{m}\) | \(\psi\) |
| 1 | ... | ... | ... | ... | ... |
| ... | ... | ... | ... | ... | ... |
| port suffix | x | f | c | v | psi |

In the single port mode (applies to the optimal dataset only), the whole dataset is output as a single *RealMatrix*.
Columns containing no data are omitted: for example, if the problem is unconstrained, the result dataset contains only values of variables and objectives.

In the separate ports mode, each column in the table above becomes a separate *RealMatrix* which is output to the corresponding port.
If a column contains no data, its port will output an empty array (so it is recommended to disable unneeded ports in this mode).

Solution data is detailed below.

\(x^1 \dots x^n\): optimal values of variables.

\(f^1 \dots f^k\): optimal objective values. Omitted for constraint satisfaction problems since they define no objectives.

\(c^1 \dots c^m\): constraint values at \(x^1 \dots x^n\). Omitted for unconstrained and box-constrained problems.

\(\psi^1 \dots \psi^{m}\): constraint violation measures, calculated separately for each constraint \(c^i\) (with lower and upper bounds \(c^i_L\) and \(c^i_U\)) as

\[\psi^i(x) = \max\left[ \frac{c^i_L - c^i(x)}{\max(1, |c^i_L|)}, \frac{c^i(x) - c^i_U}{\max(1, |c^i_U|)} \right],\]

so positive \(\psi^i(x)\) means that constraint \(c^i\) is violated. \(\psi^i(x)\) may also be understood as a normalized distance from the \(c^i\) constraint bound, measured positive outside the feasibility domain and negative inside. Note that this data is included in both the optimal and infeasible datasets; in the optimal dataset all measure values are non-positive.

Omitted for unconstrained and box-constrained problems.

\(\psi\): generalized solution feasibility measure, \(\psi(x) = \max_i \psi^i(x)\). Omitted for unconstrained and box-constrained problems.

Note that variables (objectives, constraints) in results come in the same order as in the list of variables (objectives, constraints) on the *Problem Definition* tab in block configuration.
The order of constraints also applies to violation measures \(\psi^i\).
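The violation measure formula above can be sketched in plain Python (an illustration of the formula only, not pSeven code; the example constraint bounds and values are made up):

```python
import numpy as np

def violation_measure(c, c_lo=-np.inf, c_up=np.inf):
    """Normalized violation measure psi_i for one constraint value c,
    following psi_i(x) = max[(c_lo - c)/max(1,|c_lo|), (c - c_up)/max(1,|c_up|)]."""
    lo_term = (c_lo - c) / max(1.0, abs(c_lo)) if np.isfinite(c_lo) else -np.inf
    up_term = (c - c_up) / max(1.0, abs(c_up)) if np.isfinite(c_up) else -np.inf
    return max(lo_term, up_term)

# Constraint 0 <= c(x) <= 2, evaluated value c(x) = 3: violated.
psi_1 = violation_measure(3.0, 0.0, 2.0)   # (3 - 2) / max(1, |2|) = 0.5
psi_2 = violation_measure(1.0, 0.0, 2.0)   # inside the bounds, so negative
psi = max(psi_1, psi_2)                    # generalized feasibility measure
print(psi_1, psi_2, psi)
```
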

### Robust Problem Solution¶

In robust optimization problems, objective and constraint functions are generally dependent on random variables (see *Robust Problem Formulation*).
Due to this, their values in result are approximate — as well as the values of constraint violation and solution feasibility measures,
and solution data includes corresponding accuracy estimates.
The order of variables, objectives, and constraints follows the same rules as for deterministic optimization, with one notable exception:
chance constraints (added on the *Robust Optimization* tab, denoted \(c^1_{\alpha} \dots c^q_{\alpha}\) below), if any, are always listed after expectation constraints.
The order of objectives also applies to their accuracy estimates \(\delta f^i\).
The order of constraints (with the same note on chance constraints) also applies to constraint accuracy estimates \(\delta c^i\), violation measures \(\psi^i\), and violation measure accuracy estimates \(\delta\psi^i\).

In the end, general solution structure (both for the optimal and infeasible datasets) is the following:

| Solution # | Variables | Objectives | Constraints | Constraint violation | Objective accuracy estimates | Constraint accuracy estimates | Violation accuracy estimates | Feasibility | Feasibility accuracy |
|---|---|---|---|---|---|---|---|---|---|
| 0 | \(x^1 \dots x^n\) | \(f^1 \dots f^k\) | \(c^1 \dots c^m\), \(c^1_{\alpha} \dots c^q_{\alpha}\) | \(\psi^1 \dots \psi^{m+q}\) | \(\delta f^1 \dots \delta f^k\) | \(\delta c^1 \dots \delta c^{m+q}\) | \(\delta\psi^1 \dots \delta\psi^{m+q}\) | \(\psi\) | \(\delta\psi\) |
| 1 | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| port suffix | x | f | c | v | fe | ce | ve | psi | psie |

Just as in deterministic optimization, in the single port mode the whole dataset is output as a single *RealMatrix*, and columns containing no data are omitted;
in the separate ports mode, each column in the table above becomes a separate *RealMatrix* which is output to the corresponding port (see port suffixes),
and if a column contains no data, its port will output an empty array.

Solution data is detailed below.

\(x^1 \dots x^n\): optimal values of variables.

\(f^1 \dots f^k\): expected (mean) objective values at \(x^1 \dots x^n\). Omitted for constraint satisfaction problems since they define no objectives.

\(c^1 \dots c^m\), \(c^1_{\alpha} \dots c^q_{\alpha}\): constraint values at \(x^1 \dots x^n\), expectation constraints first. \(c^i\) (\(i = 1 \dots m\)) is essentially the mean value of the *i*-th expectation constraint function at \(x^1 \dots x^n\). \(c^i_{\alpha}\) (\(i = 1 \dots q\)) is the estimated probability of an event that the *i*-th chance constraint value is out of the constraint bounds (the probability of violating the constraint). Omitted for unconstrained and box-constrained problems.

\(\psi^1 \dots \psi^{m+q}\): approximate constraint violation measures. Omitted for unconstrained and box-constrained problems.

\(\delta f^1 \dots \delta f^k\): objective accuracy estimates. \(\delta f^i\) is essentially the standard deviation of the mean objective value \(f^i\): \(\delta f^i = \frac{1}{\sqrt{N_s}}\sigma_f\), where \(N_s\) is the size of the \(\xi\)-sample (see *Robust Problem Formulation*) and \(\sigma_f\) is the sample standard deviation. Omitted for constraint satisfaction problems.
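The accuracy estimate \(\delta f^i = \sigma_f / \sqrt{N_s}\) is the ordinary standard error of the sample mean. A minimal sketch (illustrative Python with a made-up sample, not pSeven code):

```python
import math

def mean_accuracy_estimate(sample):
    """delta_f: standard deviation of the sample mean, sigma_f / sqrt(N_s)."""
    n = len(sample)                       # N_s, the xi-sample size
    mean = sum(sample) / n
    # Sample standard deviation (unbiased, ddof = 1).
    sigma = math.sqrt(sum((v - mean) ** 2 for v in sample) / (n - 1))
    return sigma / math.sqrt(n)

values = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(mean_accuracy_estimate(values))
```
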

\(\delta c^1 \dots \delta c^{m+q}\): constraint accuracy estimates. For expectation constraints (\(i = 1 \dots m\)), \(\delta c^i\) is essentially the standard deviation of the mean constraint value \(c^i\) (similarly to objective functions). For chance constraints (\(i = m+1 \dots m+q\)), \(\delta c^i\) is estimated by the jackknife method.

Omitted for unconstrained and box-constrained problems.
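The jackknife method mentioned above estimates the standard error of an estimator from leave-one-out replicates. A minimal generic sketch (illustrative Python; the estimator and sample here are hypothetical, not the actual GTOpt computation):

```python
import math

def jackknife_stderr(sample, estimator):
    """Jackknife standard error of estimator(sample) via leave-one-out replicates."""
    n = len(sample)
    replicates = [estimator(sample[:i] + sample[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    return math.sqrt((n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates))

# Example estimator: probability that a chance constraint value exceeds
# its (made-up) upper bound 1.0.
def violation_probability(values):
    return sum(v > 1.0 for v in values) / len(values)

sample = [0.2, 0.4, 1.3, 0.7, 0.9, 1.1, 0.5, 0.6, 0.8, 0.3]
err = jackknife_stderr(sample, violation_probability)
print(err)
```
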

\(\delta\psi^1 \dots \delta\psi^{m+q}\): accuracy estimates for constraint violation measure values. Omitted for unconstrained and box-constrained problems.

\(\psi\): generalized solution feasibility measure. Omitted for unconstrained and box-constrained problems.

\(\delta\psi\): accuracy estimate for the above \(\psi\) values. Omitted for unconstrained and box-constrained problems.

## Options¶

- Basic options:
  - *GTOpt/ConstraintsSmoothness* — assumed smoothness of constraint functions.
  - *GTOpt/EnsureFeasibility* — always stay within feasible domain.
  - *GTOpt/LogLevel* — minimum log level.
  - *GTOpt/MaximumIterations* — maximum evaluations for constraints and objective functions.
  - *GTOpt/MOPIsGlobal* — multi-objective optimization mode.
  - *GTOpt/ObjectivesSmoothness* — assumed smoothness of objective functions.
  - *GTOpt/TimeLimit* — optimization time limit.
  - *GTOpt/VerboseOutput* — turn on/off trace level logging of optimization process.

- Advanced options:
  - *GTOpt/AbsoluteGradientTolerance* — use absolute magnitude for gradient tolerance.
  - *GTOpt/BatchSize* — maximum point batch size for batch mode.
  - *GTOpt/ConstraintsTolerance* — required relative precision of constraints satisfaction.
  - *GTOpt/CoordinateTolerance* — relative consecutive coordinate change threshold.
  - *GTOpt/DiffScheme* — differentiation scheme order.
  - *GTOpt/DiffType* — strategy for estimating derivatives.
  - *GTOpt/FrontDensity* — approximate number of Pareto optimal solutions to be generated.
  - *GTOpt/GlobalPhaseIntensity* — configure global search (*added in 1.10.5*).
  - *GTOpt/GradientTolerance* — gradient threshold.
  - *GTOpt/MaximumExpensiveIterations* — maximum number of expensive function evaluation calls.
  - *GTOpt/MaxParallel* — maximum number of parallel threads (*added in 5.0 RC 1*).
  - *GTOpt/NumDiffStepSize* — numerical differentiation step.
  - *GTOpt/ObjectiveTolerance* — threshold for relative consecutive change of objective functions.
  - *GTOpt/OptimalSetRigor* — degree of constraints violation allowed for the points in the extended result data set (*added in 3.0 Beta 1*).
  - *GTOpt/OptimalSetType* — types of points to include in the result data set (*added in 3.0 Beta 1*).
  - *GTOpt/RestoreAnalyticResponses* — restore analytic forms of problem objectives and constraints hinted as linear or quadratic (*added in 3.0 Beta 1*).
  - *GTOpt/RobustConstraintsTolerance* — constraints violation relative error threshold for robust optimization.
  - *GTOpt/RobustGradientTolerance* — gradient threshold for robust optimization.
  - *GTOpt/RobustObjectiveTolerance* — objective value relative error threshold for robust optimization.
  - *GTOpt/Techniques* — solving methods to use (*added in 6.9*).

**GTOpt/AbsoluteGradientTolerance**

Use absolute magnitude for gradient tolerance.

Value: Boolean
Default: off

If on, the value of the GTOpt/GradientTolerance option is treated as the required magnitude of the absolute remaining gradient (optimal descent), with no regard to the current values of the objective function(s).

**GTOpt/BatchSize**

Maximum point batch size for batch mode.

Value: integer in range \([1, 16384]\)
Default: 1 (batch mode off)

When set to 2 or more, this option makes the optimizer work in batch mode, in which it can request multiple evaluations at each optimization step. The option value is the maximum number of points per step (the actual number may be less for some steps). The default value 1 means that batch mode is off.
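On the evaluating side, batch mode means the block receives a whole matrix of points per step rather than a single point. A sketch of such a vectorized evaluator (illustrative Python/NumPy with made-up response functions, not the pSeven API):

```python
import numpy as np

def evaluate_batch(points):
    """Vectorized evaluation of problem responses for a batch of points.
    Each row of `points` is one design; each row of the result holds
    the (made-up) objective and constraint values for that design."""
    x = np.atleast_2d(points)
    f = np.sum(x ** 2, axis=1)   # objective, evaluated for every point at once
    c = x[:, 0] + x[:, 1]        # constraint, evaluated for every point at once
    return np.column_stack([f, c])

# With GTOpt/BatchSize = 4, the optimizer may request up to 4 points per step.
batch = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.5], [0.5, 2.0]])
print(evaluate_batch(batch).shape)  # one row of responses per requested point
```
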

**GTOpt/ConstraintsSmoothness**

Set assumed smoothness of constraint functions.

Value: "Smooth", "Noisy", or "Auto"
Default: "Auto"

This option allows the user to specify that all constraint functions are smooth ("Smooth") or that at least one of them is known to be noisy ("Noisy"). In the noisy context, GTOpt assumes that the corresponding noise factor is less than 10%. If left default ("Auto"), GTOpt makes the assumption on smoothness automatically.

- "Smooth": all constraints are considered smooth functions.
- "Noisy": at least one constraint is noisy, with noise level at most 10%.
- "Auto": GTOpt is free to assume whatever seems appropriate.

**GTOpt/ConstraintsTolerance**

Set required relative precision of constraints satisfaction.

Value: floating point number in range \([0.01 \cdot float\_epsilon, 1.0)\)
Default: \(10^{-5}\)

This option sets the required relative precision of constraints satisfaction: each constraint is required to be at most this number times the relevant limit away from its limiting value.

**GTOpt/CoordinateTolerance**

Set relative consecutive coordinate change threshold.

Value: floating point number in range \([l, 1.0)\) (\(l\) depends on gradients type, see GTOpt/DiffType), or 0.0
Default: auto, depends on gradients type

Optimization stops when the \(L_{\infty}\) norm of the relative consecutive coordinate change becomes smaller than the value of this option. If this option is set to 0.0 (which is not always valid), GTOpt does not use the consecutive coordinate change norm as an optimization stop criterion.

The lower bound \(l\) of this option value depends on the GTOpt/DiffType option:

- GTOpt/DiffType set to "Numerical": GTOpt/CoordinateTolerance value range is \([10^4 \cdot double\_epsilon, 1.0)\).
- GTOpt/DiffType set to "Framed": GTOpt/CoordinateTolerance value range is \([0.1 \cdot float\_epsilon, 1.0)\).
- GTOpt/DiffType set to "Auto": same as for "Numerical" or "Framed", depending on context.

In all these cases, the actual value of GTOpt/CoordinateTolerance is forced to be within the valid range.
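The stop criterion can be sketched as follows. This is illustrative Python only; the exact per-coordinate normalization used by GTOpt is an assumption here (the \(\max(1, |x|)\) form mirrors the normalization of the constraint violation measure):

```python
import numpy as np

def coordinate_change_small(x_prev, x_curr, tol):
    """Stop criterion sketch: L_inf norm of the relative consecutive
    coordinate change is below the tolerance."""
    x_prev = np.asarray(x_prev, dtype=float)
    x_curr = np.asarray(x_curr, dtype=float)
    # Assumed normalization; also guards against division by zero.
    rel = np.abs(x_curr - x_prev) / np.maximum(1.0, np.abs(x_prev))
    return np.max(rel) <= tol

print(coordinate_change_small([1.0, 100.0], [1.0005, 100.01], 1e-3))  # stop
print(coordinate_change_small([1.0, 100.0], [1.1, 100.01], 1e-3))     # continue
```
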

**GTOpt/DiffScheme**

Set differentiation scheme order.

Value: "FirstOrder", "SecondOrder", "Adaptive", or "Auto"
Default: "Auto"

This option sets the differentiation scheme order and applies to both numerical and framed gradients. When analytic gradients are available, this option value is ignored.

- "FirstOrder": use first order approximation exclusively, even close to optimality.
- "SecondOrder": always use second order approximation, even far away from the optimal set.
- "Adaptive": adaptively switch between first and second order schemes based on the estimated distance to optimality.
- "Auto": GTOpt is free to choose whatever scheme seems more appropriate.

**GTOpt/DiffType**

Set strategy for estimating derivatives.

Value: "Numerical", "Framed", or "Auto"
Default: "Auto"

This option allows the user to specify the strategy for estimating derivatives. Note that this option is ignored when analytical gradients are available.

- "Numerical": use conventional numerical differentiation if analytic gradients are not available. Most suitable for smooth problems; not to be used in a noisy context.
- "Framed": use framed (simplex-based) gradients if analytic gradients are not available. Most suitable for noisy problems; in a smooth context it leads to (sometimes severe) performance degradation.
- "Auto": GTOpt is free to choose whatever differentiation type seems more appropriate.

**GTOpt/EnsureFeasibility**

Require optimizer to stay within the feasible domain.

Value: Boolean
Default: off

In many real-life cases the optimizer is required to stay strictly within the feasible domain, for instance, when the objective function is difficult or impossible to calculate outside the feasible set. This option allows the user to request strict feasibility of all optimization iterates. By default it is off, since the feasibility requirement frequently causes performance degradation.

**GTOpt/FrontDensity**

Approximate number of Pareto optimal solutions to be generated.

Value: integer in range \((0, 1000)\)
Default: 10

This option sets the approximate number of Pareto optimal solutions to be generated (it makes sense for the global multi-objective optimization mode only). GTOpt will try to generate \((density)^{K-1}\) solutions, where \(K\) is the number of objective functions. For example, with the default density 10 and \(K = 3\) objectives, roughly \(10^2 = 100\) Pareto optimal points are generated. Depending on front complexity, the actual number of Pareto solutions may vary, but it remains of the same order as this option value.

**GTOpt/GlobalPhaseIntensity**

Configures global searching algorithms.

Value: floating point number in range \([0, 1]\) or "Auto"
Default: "Auto"

New in version 1.10.5.

Enables global optimum search and allows tuning the complexity of the applied global methods and SBO. The minimal sensible step is \(0.01\).

This option has a different meaning for expensive and non-expensive optimization problems (see Hint Reference).

For non-expensive problems, "Auto" (default) and zero value disable global search completely. Any positive value (greater than or equal to \(0.01\)) enables global search and also sets the algorithm complexity. Lesser values (close to zero) are suitable for "almost unimodal" problems, in which multi-modality is not severe and the simplest methods should work. Greater values (close to 1) may be used in hard to solve multi-modal cases; such a setting results in using the most complex and time-consuming global algorithms. See section Local and Global Methods for more details.

Changed in version 6.11.

For expensive problems, any positive value (greater than or equal to \(0.01\)) enables the general SBO framework (see Surrogate Based Optimization) that takes into account both the predicted values and the error estimates of the constructed surrogate models. The higher the option value, the more intensive the exploration and the more accurate the optimization of the internal criterion. If "Auto" (default) is set, GTOpt determines a suitable value based on the provided number of expensive iterations (if set, see GTOpt/MaximumExpensiveIterations) and the problem dimensionality. Zero value turns on the Direct SBO technique, which does not consider error estimates of surrogate models (see Direct SBO).

**GTOpt/GradientTolerance**

Set gradient threshold.

Value: floating point number in range \([0.0, 1.0)\)
Default: \(10^{-5}\)

This option sets the \(L_{\infty}\) norm of the remaining gradient (or optimal descent for constrained and multi-objective problems) at which optimization stops. If the option value is 0.0, GTOpt will not use the gradient norm as an optimization stop criterion.

Note that by default the gradient tolerance refers to the relative magnitude of the remaining gradient (optimal descent), which is measured with respect to the current value of the objective function (\(L_{\infty}\) norm of objective functions). GTOpt/AbsoluteGradientTolerance can be used to switch to absolute gradient tolerance.

**GTOpt/LogLevel**

Set minimum log level.

Value: "Debug", "Info", "Warn", "Error", or "Fatal"
Default: "Info"

If this option is set, only messages with a log level greater than or equal to the threshold are written to the log.

**GTOpt/MaximumIterations**

Maximum evaluations for constraints and objective functions.

Value: integer in range \([0, 2^{32} - 2]\)
Default: 0

This option limits the number of evaluations done for constraints and objective functions. If set to 0 (default), there is no limit.

**GTOpt/MaxParallel**

Set the maximum number of parallel threads to use when solving.

Value: positive integer or 0 (auto)
Default: 0 (auto)

New in version 5.0 RC 1.

GTOpt can run in parallel to speed up solving. This option sets the maximum number of threads the solver is allowed to create. The default setting (0) uses the value given by the OMP_NUM_THREADS environment variable, which by default is equal to the number of virtual processors, including hyperthreading CPUs. Other values override OMP_NUM_THREADS.

**GTOpt/MOPIsGlobal**

Set multi-objective optimization mode.

Value: Boolean
Default: on

This option switches multi-objective optimization modes. Global (on, default) means discovery of the whole Pareto frontier. Local (off) implies a search for a single Pareto optimal solution close to the initial point.

**GTOpt/MaximumExpensiveIterations**

The maximum number of expensive function evaluation calls.

Value: integer in range \([3, 2^{31} - 1]\), or 0 (auto)
Default: 0 (auto)

Expensive functions are objectives and constraints marked as expensive using the @GTOpt/EvaluationCostType hint (see Hint Reference). A non-default GTOpt/MaximumExpensiveIterations value specifies the same evaluation budget for each of these functions. If left default, the evaluation budget is determined automatically; the automatic budget is finite but can vary depending on the problem properties.

**GTOpt/NumDiffStepSize**

Numerical differentiation step.

Value: floating point number in range \([float\_epsilon, 1.0]\)
Default: \(float\_epsilon \cdot 10\)

This option sets the step size to use in numerical differentiation.

**GTOpt/ObjectivesSmoothness**

Set assumed smoothness of objective functions.

Value: "Smooth", "Noisy", or "Auto"
Default: "Auto"

This option allows the user to specify that all objective functions are smooth ("Smooth") or that at least one of them is known to be noisy ("Noisy"). In the noisy context, GTOpt assumes that the corresponding noise factor is less than 10%. If left default ("Auto"), GTOpt makes the assumption on smoothness automatically.

- "Smooth": all objectives are considered smooth functions.
- "Noisy": at least one objective function is noisy, with noise level at most 10%.
- "Auto": GTOpt is free to assume whatever seems appropriate.

**GTOpt/ObjectiveTolerance**

Set the threshold for relative consecutive change of objective functions.

Value: floating point number in range \([l, 1.0)\) (\(l\) depends on problem type), or 0.0
Default: \(l\)

Optimization stops when the \(L_{\infty}\) norm of the relative consecutive change of objective functions becomes smaller than the value of this option. If this option is set to 0.0 (which is not always valid), GTOpt does not use this norm as an optimization stop criterion.

If analytical gradients are available, \(l\) is set equal to \(0.1 \cdot float\_epsilon\). When analytical gradients are not available in the problem, \(l\) is silently changed to \(10^4 \cdot double\_epsilon\).

**GTOpt/OptimalSetRigor**

Set the degree of constraints violation allowed for the points in the extended result data set.

Value: floating point number in range \([0.0, 1.0]\)
Default: 0.1

New in version 3.0 Beta 1.

This option controls the degree of constraint and feasibility violation allowed for points included in the infeasible result set when GTOpt/OptimalSetType is set to "Extended". Internally it sets several related thresholds in such a way that lower option values act as a weak filter, while higher values make the filter more aggressive. For example:

- 0.0: no filtering; all optimal points that violate constraints are included in the infeasible set.
- 1.0: no violation allowed; effectively the same as setting GTOpt/OptimalSetType to "Strict" (the infeasible set will be empty).

If GTOpt/OptimalSetType is "Strict", GTOpt/OptimalSetRigor is ignored. For more details, see also sections Optimal Solution and Optimal and Infeasible Point Sets.

**GTOpt/OptimalSetType**

Types of points to include in the result data set.

Value: "Extended", "Strict", or "Auto"
Default: "Auto"

New in version 3.0 Beta 1.

Since version 3.0 Beta 1, an optimization result may include additional points that satisfy optimality criteria but violate problem constraints and feasibility measures to a certain extent.

- "Extended": include additional points as the infeasible set.
- "Strict": include non-violating points only; with this setting, the infeasible set is empty (similar to solver behavior prior to 3.0 Beta 1).
- "Auto": defaults to "Extended".

The degree of constraint violation allowed for infeasible result points is controlled by the GTOpt/OptimalSetRigor option. For more details, see also sections Optimal Solution and Optimal and Infeasible Point Sets.

Note that in robust optimization problems setting GTOpt/OptimalSetType to "Extended" can also affect which points are included in the optimal set, due to some aspects of selecting candidate solutions (see section Approximated Problem Solution for details). That is, comparing the results of solving the same robust optimization problem with GTOpt/OptimalSetType set to "Strict" and "Extended" can show differences. These differences are minor with regard to the overall quality of the result. For more details on how optimal and infeasible results are formed in the robust case, see Robust Problem Optimal Solution.

**GTOpt/RestoreAnalyticResponses**

Restore analytic forms of problem objectives and constraints hinted as linear or quadratic.

Value: Boolean or "Auto"
Default: "Auto"

New in version 3.0 Beta 1.

If on, GTOpt tries to restore analytic forms of the objective and constraint functions for which the linearity type hint is set to "Linear" or "Quadratic" (see Hint Reference). Once done, such functions are evaluated internally by the GTOpt solver without calling the problem. This feature is enabled by default (when the option value is "Auto").

**GTOpt/RobustConstraintsTolerance**

Set constraints violation relative error threshold for robust optimization.

Value: floating point number in range \([0.01 \cdot float\_epsilon, 1.0)\)
Default: 0.025

Robust optimization can stop only when the current estimate of the constraints violation relative error at the proposed optimal point becomes smaller than this option value. See also GTOpt/RobustObjectiveTolerance.

**GTOpt/RobustGradientTolerance**

Set the gradient threshold for robust optimization.

Value: floating point number in range \([0.0, 1.0)\)
Default: 0.0 (disabled)

Within the robust optimization context it might be desirable to check the magnitude of extremality of the proposed solution. This option provides an upper bound on the required relative uncertainty of the estimated extremality measure. Note that explicit verification of solution extremality might be quite expensive (normally the solution is indeed extremal without explicit checks). For this reason the default option value disables extremality testing.

**GTOpt/RobustObjectiveTolerance**

Set objective value relative error threshold for robust optimization.

Value: floating point number in range \([float\_epsilon, 1.0)\)
Default: 0.025

Robust optimization can stop only when the current estimate of the relative error of the objective value at the proposed optimal solution becomes smaller than this option value. See also GTOpt/RobustConstraintsTolerance.

**GTOpt/Techniques**

Solving methods to use.

Value: string representing a list of enabled techniques
Default: "[]" (automatic selection)

New in version 6.9.

This option allows selecting specific optimization methods to enable when solving the problem. The methods are sorted into 3 groups: governing (general problem type), globalization, and local methods. The option value is a string with a list of methods, for example "[MO, PM, MoM]". Note that techniques have certain requirements to the problem's properties, such as the number of objectives, types of constraints, and others (noted below). These requirements are checked by *Optimizer* before solving the problem, and if they are not met, the block stops with an error.

- Governing methods (specify general problem type):
  - SO — single-objective problem. Requires a problem with 1 or no objectives.
  - MO — multi-objective problem. Requires a problem with 2 or more objectives.
  - RDO — robust design optimization. Requires a robust optimization problem (see section Robust Optimization).
  - SBO — surrogate based optimization. Requires a problem with at least one expensive objective or constraint (see section Surrogate Based Optimization).
- Globalization methods (control global search):
  - RL — random linkage. Requires GTOpt/GlobalPhaseIntensity greater than 0 and a problem with no expensive objectives or constraints.
  - PM — plain multistart. Requires a problem with no expensive objectives or constraints.
  - MS — multistart. Requires GTOpt/GlobalPhaseIntensity greater than 0 and a problem with no expensive objectives or constraints.
- Local optimization methods (the list must include only one of these):
  - FD — feasible direction method. Requires a problem with no expensive objectives or constraints.
  - MILP — mixed-integer linear programming. Requires all objectives and constraints to be linear. Requires a problem with enabled analytical gradients.
  - MoM — method of multipliers. Requires a problem with no expensive objectives or constraints, and at least one constraint.
  - NCG — non-linear conjugate gradients. Requires a problem with no expensive objectives and without any constraints.
  - NLS — non-linear simplex. Requires a problem with no expensive objectives and without any constraints.
  - Powell — Powell's conjugate direction method. Requires a problem with no expensive objectives and without any constraints.
  - QP — quadratic programming. Requires all objectives to be linear or quadratic, at least one constraint, and all constraints to be linear.
  - SQP — sequential quadratic programming. Requires a problem with no expensive objectives or constraints, and at least one constraint.
  - SQ2P — sequential quadratic constrained quadratic programming. Requires a problem with no expensive objectives or constraints, and at least one constraint.

Note that you can select only one local method and only one globalization method, but multiple governing methods are allowed if they are compatible with the problem.

**GTOpt/TimeLimit**

Optimization time limit.

Value: integer in range \([0, 2^{32} - 2]\)
Default: 0 (no limit)

This option sets the maximum allowed time to solve a problem, in seconds. Defaults to 0, which means no time limit.

**GTOpt/VerboseOutput**

Turn on/off trace level logging of optimization process.

Value: Boolean
Default: off

New in version 1.6.2.

If on, GTOpt/LogLevel is always "Debug", and logs include additional information. Note that this option produces very large logs.

## Known Issues¶

- Constraint type hints in the *Optimizer* configuration dialog may be displayed incorrectly if multiple hints are set at once. It is recommended to click after each hint change.