Optimization process in pSeven
A simplified scheme describing how an optimization problem is solved in pSeven is given below:
Problem setup
The goal of this step is to obtain the cheapest adequate representation of the models under investigation; this also allows identifying the problem type. Steps to perform:
Performance Measure(s) Setup
Recommendations:
- Consider only relevant model responses in any optimization problem.
- Estimate the uncertainties of both the model and its responses: effects unaccounted for by the model itself (systematic uncertainties) must be much smaller than the expected improvement of the responses.
- Ensure proper scaling of the selected performance measures. Responses that differ by many orders of magnitude are difficult to handle numerically. Although pSeven automatically performs basic problem scaling, there are cases where the automatic procedure might fail.
- Set the number of performance measures you want to consider. Start with a single relevant objective function, adding performance measures if no difficulties have been identified. Model evaluations performed at the simplified single-objective stage can be reused, since pSeven archives performed evaluations.
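To illustrate the scaling concern, the sketch below normalizes each response column of a DoE sample to unit spread using plain NumPy. The helper name and sample values are made up for the example; this is not the pSeven API, only the idea behind basic response scaling.

```python
import numpy as np

def scale_responses(samples, eps=1e-12):
    """Rescale each response column to unit spread.

    `samples` is an (n_points, n_responses) array of raw model responses,
    e.g. collected from a preliminary DoE.  Responses that differ by many
    orders of magnitude are shifted and divided by their own spread so the
    optimizer sees comparable numbers.  (Illustrative helper only.)
    """
    samples = np.asarray(samples, dtype=float)
    low = samples.min(axis=0)
    spread = samples.max(axis=0) - low
    spread = np.where(spread < eps, 1.0, spread)  # avoid dividing by ~0
    return (samples - low) / spread

# Two responses whose magnitudes differ by six orders of magnitude:
raw = np.array([[1e-3, 2e3], [2e-3, 8e3], [3e-3, 5e3]])
scaled = scale_responses(raw)
print(scaled.min(axis=0), scaled.max(axis=0))  # each column now spans [0, 1]
```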
Design Variables Selection
Recommendations:
- Consider only relevant model parameters. Note that pSeven allows you to conduct quick preliminary studies to rank the relevance of various design parameters.
- Ensure proper scaling of design variables. As with the objectives, pSeven automatically performs design space rescaling, but any automated procedure is subject to failure.
- Consider only bounded design variables.
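A minimal sketch of design-variable scaling, assuming known box bounds: each design point is mapped into the unit cube and back. This mirrors what automatic rescaling does conceptually; the bounds and helper names are illustrative, not pSeven API.

```python
import numpy as np

# Hypothetical bounds for three design variables on very different scales.
lower = np.array([0.1, 100.0, 1e-4])
upper = np.array([0.9, 500.0, 9e-4])

def to_unit_cube(x):
    """Map a design point into [0, 1]^n using its box bounds."""
    return (np.asarray(x, dtype=float) - lower) / (upper - lower)

def from_unit_cube(u):
    """Map a normalized point back to the original design space."""
    return lower + np.asarray(u, dtype=float) * (upper - lower)

x = np.array([0.5, 300.0, 5e-4])
u = to_unit_cube(x)
print(u)  # [0.5 0.5 0.5]
```

The round trip is exact, so an optimizer can work entirely in the well-conditioned unit cube.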
Constraints Choice
Recommendations:
- The problem often contains explicit constraints on geometry parameters. These geometry constraints should be kept as simple as possible. If a geometry constraint can be written as a linear combination of design variables, it must be written this way (the optimizer is to be advised accordingly via the respective hints; see the picture at the bottom of the page).
- For generic constraints, the recommendations are similar to those for objective functions. The user needs to ensure:
  - relevance of the constraints to the underlying processes;
  - proper scaling of the constraint functions;
  - a sufficient degree of certainty of the imposed constraints.
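To illustrate writing a geometry constraint as a linear combination of design variables, the sketch below encodes two hypothetical thickness rules as a system A·x ≤ b; the specific rules and coefficients are invented for the example, and the hint mechanism itself belongs to pSeven.

```python
import numpy as np

# Hypothetical geometry rules for two thickness variables x1, x2:
#   x1 + x2 <= 10      (total wall thickness limited to 10 mm)
#   x1 >= 2 * x2       (flange at least twice the web), i.e. -x1 + 2*x2 <= 0
# Written as a single linear system A @ x <= b, which an optimizer can be
# told about explicitly (in pSeven, via the corresponding linearity hints).
A = np.array([[1.0, 1.0],
              [-1.0, 2.0]])
b = np.array([10.0, 0.0])

def is_feasible(x):
    """True if the design point satisfies all linear constraints."""
    return bool(np.all(A @ np.asarray(x, dtype=float) <= b))

print(is_feasible([6.0, 3.0]))  # True: 6+3 <= 10 and -6+6 <= 0
print(is_feasible([8.0, 5.0]))  # False: 8+5 > 10
```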
Uncertain Parameters Definition
Specifying uncertain parameters is challenging, since it requires identifying the uncertainties relevant to the considered model. Optimization tasks with uncertainties are classified as robust optimization problems, which are much more difficult to handle. The general recommendation is to avoid uncertain parameters or to decrease their number. The uncertainties of an optimal design are often negligible, and a simplified problem formulation is suitable. Otherwise, the simplified solution provides a good starting point for robust optimization and for uncertainty quantification (ranking the relevance of the various uncertain parameters). In some cases, this preliminary study allows the number of initial uncertainties to be cut in half.
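One rough way to rank the relevance of uncertain parameters, sketched below: perturb each uncertain parameter alone and compare the response variance it induces (one-at-a-time screening). The toy model, its coefficients and the sample counts are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, u):
    """Toy response: design value x plus two uncertain inputs u[0], u[1].
    The coefficients 3.0 and 0.05 are invented to make u[0] dominant."""
    return x**2 + 3.0 * u[0] + 0.05 * u[1]

def rank_uncertainties(x, sigmas, n=2000):
    """Rank uncertain parameters by the response variance each induces
    when perturbed alone (one-at-a-time Monte Carlo screening)."""
    variances = []
    for i, s in enumerate(sigmas):
        u = np.zeros((n, len(sigmas)))
        u[:, i] = rng.normal(0.0, s, size=n)  # perturb only parameter i
        variances.append(np.var([model(x, row) for row in u]))
    return np.argsort(variances)[::-1]  # most influential first

order = rank_uncertainties(x=1.0, sigmas=[0.1, 0.1])
print(order)  # [0 1]: parameter 0 dominates (coefficient 3.0 vs 0.05)
```

Parameters whose induced variance is negligible are candidates for removal from the robust formulation.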
Types of optimization problems
| Type of problem | Characteristics |
| --- | --- |
| Single-Objective Nonlinear Constrained Problem (NLP) or its unconstrained analog (UNLP) | A single performance measure, generally a nonlinear function of the design variables; constraints may be present (NLP) or absent (UNLP). |
| Multi-Objective Nonlinear Constrained Problem (MOP) | Several conflicting performance measures optimized simultaneously; the solution is a set of trade-off (Pareto-optimal) designs. |
| Robust Optimization problem (RO) | Objectives and/or constraints depend on uncertain parameters, so performance must hold under their variability. |
Preliminary study of Formulated Problem
The next step is to conduct preliminary experiments with the formulated models in order to determine their basic properties. This allows detecting possible technical issues, estimating model robustness, and helping pSeven select the best-suited optimization algorithm. It is possible to assign the most common properties to all responses in pSeven, but in this case a common multipurpose algorithm might be selected automatically, and it may not be the most efficient one. To get the best out of pSeven's capabilities, it is necessary to provide additional information to the software so that the best-suited optimization method can be chosen.
Preliminary study of the model consists of two parts:
- Investigation of model behavior at large scales (the overall design space): mandatory
- Study of small-scale properties of the model: optional
The first stage determines whether optimization of the formulated model is necessary at all: it estimates the optimization potential (the achievable improvement of the performance criteria). The second stage, investigation of small-scale model properties, may be skipped, but at the risk of degraded optimization performance.
I. Investigation of model behavior at large scales
A Design of Experiments (DoE) study is the best tool for examining the large-scale behavior of a model. It is aimed at probing the model responses over the whole design space. pSeven provides a large variety of DoE scenarios. The outcome of the DoE has to be analyzed to answer the following questions:
How stable are the model responses? What is the actual model validity region?
The measure here is the fraction of unsuccessful model evaluations. Model stability can be quantified based on the number of unsuccessful runs: if the fraction is smaller than ~20%, the optimization will most likely be successful; if it is bigger than 50%, optimization is pointless. The solution is to reduce the design space (though this is very context-dependent). Below are some options for reduction:
a. Diminish the box bounds
Imposed box bounds are sometimes too optimistic, and the underlying model becomes invalid within the boxes. Try to shrink the box bounds until either the fraction of unsuccessful runs becomes acceptable or the boxes cannot be reduced further.
b. Introduce additional linear constraints
In large dimensions, the volume of the design space cannot be reduced simply by box bounds alone (most of the design volume is concentrated near the corners of the imposed box). If the model validity region is located near the origin, box bounds will not help to diminish the formally available design space. A possible solution is to introduce additional linear constraints that exclude designs near the corners of the box bounds. It is important to note that most optimization methods strictly respect linear constraints.
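The validity-region checks above can be sketched as follows: a toy model fails (returns NaN) far from the origin, the failure fraction is computed over a uniform DoE, and a corner-cutting constraint on the sum of absolute coordinates removes the corner region where the failures occur. The model, bounds and thresholds are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def model(x):
    """Toy model that is only valid near the origin: it fails (returns
    NaN) when the design point is too far out, mimicking a solver crash."""
    if np.linalg.norm(x) > 1.2:
        return float("nan")
    return float(np.sum(x**2))

def failure_fraction(samples):
    """Fraction of DoE points at which the model evaluation failed."""
    responses = np.array([model(x) for x in samples])
    return float(np.mean(np.isnan(responses)))

# Uniform DoE over the full box [-1, 1]^3 ...
box = rng.uniform(-1.0, 1.0, size=(500, 3))
# ... and the same sample restricted by the corner-cutting constraint
# |x1| + |x2| + |x3| <= 1.5 (expressible as a set of linear constraints).
trimmed = box[np.abs(box).sum(axis=1) <= 1.5]

print(failure_fraction(box), failure_fraction(trimmed))
```

With these numbers the trimmed region contains no failing points at all, while the full box does, which is exactly the effect the corner-cutting constraints are meant to achieve.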
What is the variation range of each model response and how does it correlate with model uncertainty?
The relevant scales to compare are the variation magnitude of the model responses over the whole design space and the expected uncertainty of the corresponding responses. The first quantity results from the DoE; estimating the uncertainty of the responses requires specific knowledge of the model in question. Note that model optimization makes sense only when the first quantity is much larger than the second; otherwise it is impossible to distinguish the effect of optimization from the generic random variability of the responses due to the respective uncertainties.
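A minimal sketch of this comparison: the DoE spread of a response is checked against its expected uncertainty, with an assumed factor of 10 standing in for "much larger" (the threshold and the sample numbers are illustrative assumptions).

```python
import numpy as np

def optimization_makes_sense(responses, uncertainty, ratio=10.0):
    """Compare the DoE spread of a response against its expected
    uncertainty; optimization is meaningful only when the spread is
    much larger (here: at least `ratio` times, an assumed threshold)."""
    spread = float(np.max(responses) - np.min(responses))
    return spread >= ratio * uncertainty

doe_responses = [12.0, 45.0, 33.0, 7.5, 51.0]  # spread = 43.5
print(optimization_makes_sense(doe_responses, uncertainty=1.0))   # True
print(optimization_makes_sense(doe_responses, uncertainty=20.0))  # False
```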
II. Study of small scale properties of the model
The purpose of this investigation is to estimate the noisiness of the model responses. It is recommended to conduct a few runs in which the design parameters change slightly along an arbitrary line in the design space. A small change of the design parameters leads to one of two possibilities:
- the measured responses are indeed close to linear, or
- the measured responses exhibit random abrupt deviations from the linear law.
The first case confirms sufficient smoothness of the responses, thus allowing gradient-based optimization methods. In the second case, the model behavior is noisy, and one should compare the measured magnitude of the random deviations (the noise magnitude) to the large-scale variability of the responses examined previously. Optimization makes sense only when the noise magnitude is much smaller than the expected large-scale variations. If this is not the case, the model uncertainty is to be diminished first.
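The small-scale study can be sketched as follows: the model is probed along a short segment, a straight line is fitted, and the RMS residual serves as the noise-magnitude estimate. The smooth and noisy toy models, the step size and the noise level are all assumptions of the example.

```python
import numpy as np

rng = np.random.default_rng(2)

def noise_magnitude_along_line(model, x0, direction, steps=20, h=1e-3):
    """Probe the model along a short segment x0 + t*direction and
    estimate noise as the RMS residual from a straight-line fit."""
    t = np.arange(steps) * h
    y = np.array([model(x0 + ti * np.asarray(direction)) for ti in t])
    coeffs = np.polyfit(t, y, 1)            # linear fit y ~ a*t + b
    residuals = y - np.polyval(coeffs, t)
    return float(np.sqrt(np.mean(residuals**2)))

def smooth(x):
    """Smooth toy response: locally almost linear over a tiny segment."""
    return float(np.sum(x**2))

def noisy(x):
    """Same response with additive random noise of magnitude ~0.05."""
    return float(np.sum(x**2)) + rng.normal(0.0, 0.05)

x0 = np.array([0.3, 0.4])
d = np.array([1.0, 1.0])
print(noise_magnitude_along_line(smooth, x0, d))  # ~0: gradients usable
print(noise_magnitude_along_line(noisy, x0, d))   # ~0.05: noisy model
```

Comparing the returned estimate with the large-scale response spread from the DoE gives the smooth-versus-noisy verdict described above.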
Conducting Optimization in pSeven
The final step is to conduct the optimization itself. First, the optimization problem setup is performed: objectives, criteria and constraints are defined. Then one should specify the properties of the considered problem:
- Expected noise magnitude of the model responses.
- Type of the relevant response functions: whether they are linear, quadratic or generic functions of the design parameters.
- Expected degree of model multimodality: indicate whether the optimizer should perform a local or a globalized search for the optimal solution. Moreover, depending on the expected multimodality, it is possible to explicitly regulate the globalization degree of the involved optimization algorithms.
- Computational complexity of the underlying model responses: if a problem contains expensive-to-evaluate observables, it is better to explicitly set the maximal number of function evaluations.
Information on the model properties provided this way is used by pSeven to automatically select the most appropriate method to solve the optimization problem.
For more details on the optimization process in pSeven, please watch the dedicated webinar.