11.9. da.p7core.gtopt

Generic Tool for Optimization (GTOpt) module.
>>> from da.p7core import gtopt
Submodules
da.p7core.gtopt.diagnostic | GTOpt optimizer diagnostic record.
Classes
da.p7core.gtopt.ProblemConstrained(*iargs, …) | Simplified problem class for constrained problems.
da.p7core.gtopt.ProblemCSP(*iargs, **ikwargs) | Simplified problem class for constraint satisfaction problems (CSP).
da.p7core.gtopt.ProblemGeneric(*iargs, **ikwargs) | Base optimization problem class.
da.p7core.gtopt.ProblemMeanVariance(*iargs, …) | Simplified problem class for mean variance problems.
da.p7core.gtopt.ProblemUnconstrained(*iargs, …) | Simplified problem class for unconstrained problems.
da.p7core.gtopt.ProblemFitting(*iargs, **ikwargs) | Specialized problem class for fitting problems (see Data Fitting Problem for details).
da.p7core.gtopt.Result(info, status, …) | Optimization result.
da.p7core.gtopt.Solver() | Optimizer interface.
da.p7core.gtopt.ValidationResult(status, details) | Validation result and details.
11.9.1. ProblemConstrained — constrained problem

class da.p7core.gtopt.ProblemConstrained(*iargs, **ikwargs)
Simplified problem class for constrained problems. Inherits from ProblemGeneric. This class does not support the usage of analytical objective and constraint gradients.
To define a constrained optimization problem, create your own problem class, inheriting from ProblemConstrained. This class must implement the following methods:
- prepare_problem() (inherited from ProblemGeneric)
- define_objectives() (or define_objectives_batch())
- define_constraints() (or define_constraints_batch())
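For illustration, a minimal constrained problem might look like the sketch below. The class and all names are hypothetical, and a stand-in base class is substituted when da.p7core is not installed, so the snippet stays self-contained:

```python
try:
    from da.p7core import gtopt
    _Base = gtopt.ProblemConstrained
except ImportError:
    _Base = object  # stand-in so the sketch runs without da.p7core

class SphereWithDisk(_Base):
    def prepare_problem(self):
        # two variables, one objective, one constraint: x1^2 + x2^2 <= 1
        self.add_variable((-2.0, 2.0), 0.5, "x1")
        self.add_variable((-2.0, 2.0), 0.5, "x2")
        self.add_objective("f")
        self.add_constraint((None, 1.0), "c")

    def define_objectives(self, x):
        # single-point evaluation: one value per objective
        return [(x[0] - 1.0)**2 + (x[1] - 1.0)**2]

    def define_constraints(self, x):
        # single-point evaluation: one value per constraint
        return [x[0]**2 + x[1]**2]
```

With the real base class available, such a problem would be passed to Solver().solve() as usual.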
define_constraints(x)
An abstract method to define problem constraints.
Parameters: x (ndarray, 1D) – point to evaluate
Returns: evaluation results
Return type: array-like, 1D
Changed in version 3.0 Release Candidate 1: the x argument is ndarray.
Changed in version 6.24: evaluation results may contain None values to indicate skipped evaluations.
This method does not support the batch mode (it evaluates a single point only). It may be implemented by the user instead of define_constraints_batch() (which uses this method by default).
The shape of x is the same as in define_objectives().
The returned array may contain NaN and None values, which have the following meaning:
- A NaN value of some constraint indicates that evaluation of that constraint failed.
- A None value indicates that evaluation of that constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
define_constraints_batch(x)
Default implementation of the method defining problem constraints. Supports non-batch and batch modes.
Parameters: x (ndarray, 2D) – points batch
Returns: evaluation results
Return type: ndarray, 2D
Changed in version 3.0 Release Candidate 1: the x argument is ndarray; the default implementation also returns ndarray.
Changed in version 6.24: evaluation results may contain None values to indicate skipped evaluations.
This method is used by ProblemConstrained.evaluate() to calculate constraints. The default implementation simply loops over the points batch x, calling define_constraints() for each point. It may be reimplemented by the user to support parallel calculations; such an implementation may return any 2D array-like.
The shape of x is the same as in define_objectives_batch().
The returned array may contain NaN and None values, which have the following meaning:
- A NaN value of some constraint indicates that evaluation of that constraint failed.
- A None value indicates that evaluation of that constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
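As an example of such a reimplementation, a user class might evaluate the whole batch at once with NumPy instead of looping per point. This is only a sketch; the two constraint functions here are arbitrary placeholders:

```python
import numpy as np

try:
    from da.p7core import gtopt
    _Base = gtopt.ProblemConstrained
except ImportError:
    _Base = object  # stand-in so the sketch runs without da.p7core

class VectorizedProblem(_Base):
    def define_constraints_batch(self, x):
        x = np.asarray(x, dtype=float)    # shape (n, m)
        # evaluate both constraints for all n points at once
        c1 = x[:, 0]**2 + x[:, 1]**2
        c2 = x[:, 0] - x[:, 1]
        return np.column_stack([c1, c2])  # any 2D array-like is accepted
```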
define_objectives(x)
An abstract method to define problem objectives.
Parameters: x (ndarray, 1D) – point to evaluate
Returns: evaluation results
Return type: ndarray, 1D
Changed in version 3.0 Release Candidate 1: the x argument is ndarray.
Changed in version 6.24: evaluation results may contain None values to indicate skipped evaluations.
This method does not support the batch mode (it evaluates a single point only). It may be implemented by the user instead of define_objectives_batch() (which uses this method by default).
The shape of x is (1, m) where m is the input dimension (size_x() + size_s()). The first size_x() values are classic variables (see add_variable()), while the following size_s() values are stochastic variables (see set_stochastic()).
The returned array may contain NaN and None values, which have the following meaning:
- A NaN value of some objective indicates that evaluation of that objective failed.
- A None value indicates that evaluation of that objective was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
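To illustrate the NaN and None semantics, a user implementation might look like the following sketch (the objective and its domain are hypothetical, and a stand-in base class keeps the snippet self-contained):

```python
import math

try:
    from da.p7core import gtopt
    _Base = gtopt.ProblemConstrained
except ImportError:
    _Base = object  # stand-in so the sketch runs without da.p7core

class GuardedProblem(_Base):
    def define_objectives(self, x):
        if x[1] > 100.0:
            return [None]          # skip evaluation entirely at this point
        try:
            return [math.sqrt(x[0]) + x[1]]
        except ValueError:
            return [float("nan")]  # evaluation attempted but failed (x[0] < 0)
```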
define_objectives_batch(x)
Default implementation of the method defining problem objectives. Supports non-batch and batch modes.
Parameters: x (ndarray, 2D) – points batch
Returns: evaluation results
Return type: ndarray, 2D
Changed in version 3.0 Release Candidate 1: the x argument is ndarray; the default implementation also returns ndarray.
Changed in version 6.24: evaluation results may contain None values to indicate skipped evaluations.
This method is used by ProblemConstrained.evaluate() to calculate objectives. The default implementation simply loops over the points batch x, calling define_objectives() for each point. It may be reimplemented to support parallel calculations; such an implementation may return any 2D array-like.
The shape of x is (n, m) where n is the number of points to evaluate (at most GTOpt/BatchSize) and m is the input dimension (size_x() + size_s()). For each row, the first size_x() values are classic variables (see add_variable()), while the following size_s() values are stochastic variables (see set_stochastic()).
The returned array may contain NaN and None values, which have the following meaning:
- A NaN value of some objective indicates that evaluation of that objective failed.
- A None value indicates that evaluation of that objective was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
evaluate(queryx, querymask)
Default implementation of the evaluate() method inherited from the base class ProblemGeneric. Should not be reimplemented; use define_objectives() and define_constraints() instead.
11.9.2. ProblemCSP — constraint satisfaction problem

class da.p7core.gtopt.ProblemCSP(*iargs, **ikwargs)
Simplified problem class for constraint satisfaction problems (CSP). Inherits from ProblemConstrained. This class does not support the usage of analytical constraint gradients, and should not add any problem objectives.
To define a constraint satisfaction problem, create your own problem class, inheriting from ProblemCSP. This class must implement the following methods:
- prepare_problem() (inherited from ProblemGeneric)
- define_constraints() (or define_constraints_batch()) (inherited from ProblemConstrained)
Note that this class should not implement evaluate().

define_objectives(x)
Empty definition of objectives. Does nothing. Should not be reimplemented (a CSP must not define any objectives).
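A CSP definition therefore only declares variables and constraints. A minimal sketch (hypothetical names; a stand-in base class is used so it runs without da.p7core):

```python
try:
    from da.p7core import gtopt
    _Base = gtopt.ProblemCSP
except ImportError:
    _Base = object  # stand-in so the sketch runs without da.p7core

class RingFeasibility(_Base):
    def prepare_problem(self):
        # find any point inside a ring: 1 <= x1^2 + x2^2 <= 4
        self.add_variable((-3.0, 3.0))
        self.add_variable((-3.0, 3.0))
        self.add_constraint((1.0, 4.0))

    def define_constraints(self, x):
        return [x[0]**2 + x[1]**2]
```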
11.9.3. ProblemGeneric — base problem

class da.p7core.gtopt.ProblemGeneric(*iargs, **ikwargs)
Base optimization problem class.
To define an optimization problem, create your own problem class, inheriting from ProblemGeneric or its descendants. All problem properties are defined in the prepare_problem() method of the derived class. Inside prepare_problem(), use these inherited methods:
- Basic problem definition: add_variable(), add_objective(), add_constraint()
- Advanced features:
  - enable_objectives_gradient() — use analytical gradients of objective functions
  - enable_constraints_gradient() — use analytical gradients of constraint functions
  - set_history() — configure saving objective and constraint evaluations; note that the memory history is enabled by default, which increases memory consumption
  - set_stochastic() — essential method for robust optimization problems
In all classes derived directly from ProblemGeneric, you also have to implement the evaluate() method that calculates values of objectives and constraints. This method is the only one that supports all optimizer features, but due to its complexity it may be difficult to use (see the code sample). Because of that, the GTOpt module includes a number of simplified problem classes: ProblemConstrained, ProblemCSP, ProblemUnconstrained, ProblemMeanVariance, and ProblemFitting.
A problem object can be converted to a string to obtain a short human-readable problem description, for example:
>>> import da.p7core.gtopt
>>> class MyProblem(da.p7core.gtopt.ProblemGeneric):
...     def prepare_problem(self):
...         self.add_variable((0, 1), 0.5)
...         self.add_variable((0, 2), 0.5)
...         self.add_objective()
...     def evaluate(self, xquery, maskquery):
...         return [[x[0]**2] for x in xquery], [[1] for x in xquery]
>>> problem = MyProblem()
>>> print(problem)
da.p7core GTOpt problem:
Type: MyProblem
Number of variables: 2
Number of objectives: 1
Number of constraints: 0
Analytical objectives gradients: False
Analytical constraints gradients: False
Variables bounds:
x1: 0.000000 1.000000
x2: 0.000000 2.000000
Initial guess: [0.500000,0.500000]
>>> result = da.p7core.gtopt.Solver().solve(problem)
>>> print(result.optimal.x)
[[7.116129095440896e-07, 0.5050600736179194]]
>>> print(result.optimal.f)
[[5.063929330298047e-13]]
add_constraint(bounds, name=None, hints=None)
Add a new problem constraint.
Parameters:
- bounds (array-like) – low and high bounds
- name (str) – the name of the constraint
- hints (dict) – optimization hints
Initializes a new constraint in the problem.
The bounds argument is a tuple of two values: (lower, upper). One of the bounds can be None, meaning that there is no respective bound for the constraint.
The name argument is optional; if you do not provide a name, it is generated automatically. Auto names are "c1", "c2", "c3", and so on, in the order of adding constraints to a problem.
Changed in version 3.0 Release Candidate 1: names of constraints are no longer required to be valid Python identifiers.
The hints argument sets constraint-specific options that may direct the optimizer to use alternative internal algorithms to increase performance (see Hint Reference). It is a dictionary {hint name: value}, for example {"@GTOpt/LinearityType": "Quadratic"}.
If you implement evaluate(), constraints in querymask are indexed after objectives and in the order of adding constraints to a problem. This indexing order is also kept in Result attributes.
Changed in version 3.0 Release Candidate 1: name indexing for constraints is no longer supported.
This method should be called from prepare_problem().
add_objective(name=None, hints=None)
Add a new problem objective.
Parameters:
- name (str) – the name of the objective
- hints (dict) – optimization hints
Initializes a new objective in the problem.
The name argument is optional; if you do not provide a name, it is generated automatically. Auto names are "f1", "f2", "f3", and so on, in the order of adding objectives to a problem.
Changed in version 3.0 Release Candidate 1: names of objectives are no longer required to be valid Python identifiers.
The hints argument sets objective-specific options that may direct the optimizer to use alternative internal algorithms to increase performance (see Hint Reference). It is a dictionary {hint name: value}, for example {"@GTOpt/LinearityType": "Quadratic"}.
If you implement evaluate(), objectives in querymask are indexed in the order of adding them to a problem. This indexing order is also kept in Result attributes.
Changed in version 3.0 Release Candidate 1: name indexing for objectives is no longer supported.
This method should be called from prepare_problem().
add_variable(bounds, initial_guess=None, name=None, hints=None)
Add a new problem variable.
Parameters:
- bounds (tuple(float)) – bounds or levels
- initial_guess (float) – initial guess
- name (str) – the name of the variable
- hints (dict) – additional hints
Changed in version 6.14: added discrete and categorical variables support when using the class with GTDoE only.
Changed in version 6.15: added discrete variables support to GTOpt.
Changed in version 6.29: added stepped variables support.
Changed in version 6.33: added categorical variables support to GTOpt.
Declares a new variable in the problem.
For continuous and integer variables, bounds is a tuple of two float values: (lower, upper). The initial_guess, if specified, must be within bounds.
For discrete, stepped, and categorical variables, bounds is a tuple specifying the allowed values (levels) of the variable. The sorting order of those values does not matter; the list of levels may be unsorted. All values must be float — for example, if your problem includes a categorical variable with string values, you should denote its categories with arbitrary float numbers. The initial_guess, if specified, must be one of the level values specified by bounds.
For continuous variables only, None is valid as the lower or upper bound, meaning that the variable is unbound in the respective direction. Your problem may declare continuous variables that are unbound in one or both directions, given that the problem satisfies the following:
- All responses are computationally cheap, that is, you do not set the @GTOpt/EvaluationCostType hint to "Expensive" for any response.
- There are no integer or discrete variables in the problem.
- There are no stochastic variables in the problem.
In other kinds of problems, each variable requires numeric bounds or levels, and using unbound variables leads to an InvalidProblemError exception when solving.
The name argument is optional; if you do not provide a name, it is generated automatically. Auto names are "x1", "x2", "x3", and so on, in the order of adding variables to a problem.
Changed in version 3.0 Release Candidate 1: names of variables are no longer required to be valid Python identifiers.
The hints parameter can be used to specify the type of the variable — see Hint Reference for details. It is a dictionary {hint name: value}, for example {"@GT/VariableType": "Integer"}.
Variables are always indexed in the order of adding them to a problem. This indexing is kept in the queryx parameter to ProblemGeneric.evaluate(), in the x parameter to the problem definition methods of the simplified problem classes (such as ProblemConstrained.define_objectives(), ProblemConstrained.define_constraints() and the like), and in Result attributes.
Changed in version 3.0 Release Candidate 1: name indexing for variables (as in x["name"] or x.name) is no longer supported.
This method should be called from prepare_problem().
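The bounds and hints conventions can be sketched as follows. This is a hypothetical prepare_problem body; a minimal recording stand-in replaces the real base class so the snippet runs without da.p7core, and the "Discrete" hint value is an assumption (the text only shows "Integer"):

```python
class _Recorder(object):
    # minimal stand-in mimicking add_variable(), used only for this sketch
    def __init__(self):
        self.variables = []
    def add_variable(self, bounds, initial_guess=None, name=None, hints=None):
        self.variables.append((bounds, initial_guess, name, hints))

def prepare_problem(problem):
    # continuous variable, bounded below only
    problem.add_variable((0.0, None), None, "thickness")
    # integer variable, declared via a hint
    problem.add_variable((1.0, 10.0), 5.0, "layers", {"@GT/VariableType": "Integer"})
    # discrete variable: bounds lists the allowed levels
    problem.add_variable((0.8, 1.0, 1.25), 1.0, "ratio", {"@GT/VariableType": "Discrete"})

p = _Recorder()
prepare_problem(p)
```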
clear_history()
Clear history.
New in version 4.0.
Removes all evaluations currently stored in the memory history, but does not disable it. For disabling, see disable_history() or set_history().
constraints_bounds()
Get constraints bounds.
Returns: constraints bounds as a tuple of two iterable objects
Return type: tuple
constraints_gradient()
Get constraint gradient info.
Returns: constraint gradient info
Return type: tuple(bool, bool, tuple, tuple)
This method returns a tuple of four elements: (enabled, sparse, non-zero rows, non-zero columns).
The first Boolean element (enabled) is True if analytical constraint gradients are enabled in the problem. If enabled is False, all other elements should be ignored as meaningless.
The second Boolean (sparse) has a meaning only if enabled is True. The value of sparse is True if the gradients are sparse. If sparse is False (gradients are dense), all other elements in the returned tuple should be ignored as meaningless.
The tuple elements provide the lists of non-zero rows and columns for sparse gradients. Naturally, these lists only have a meaning when both enabled and sparse are True; in all other cases the tuples are empty.
constraints_names()
Get names of constraints.
Returns: name list
Return type: list[str]
designs
Compacted history of problem evaluations.
Type: array-like
New in version 5.1.
Similar to history, but ensures that each evaluated point appears only once by combining all evaluation results available for this point. Can still contain None values (meaning that some function was never evaluated) and NaN (meaning that a function was evaluated but calculation failed). For more details on the array structure and the meaning of None and NaN values, see history.
disable_constraints_gradient()
Disable using analytical constraint gradients.
New in version 2.0 Release Candidate 1.
Disables analytical gradients for constraints and switches back to using numerical differentiation (see enable_constraints_gradient()).
This method should be called from prepare_problem(). It is intended to cancel analytical constraint gradients in a new problem class inherited from a problem with enabled analytical gradients.
disable_history()
Disable saving objective and constraint evaluations completely.
New in version 2.0 Release Candidate 1.
Disables both memory and file history. Objective and constraint evaluation results will no longer be stored in history or the configured history file (see file in set_history()).
Disabling does not clear the current contents of history (see clear_history()).
disable_objectives_gradient()
Disable using analytical objective gradients.
New in version 2.0 Release Candidate 1.
Disables analytical gradients for objectives and switches back to using numerical differentiation (see enable_objectives_gradient()).
This method should be called from prepare_problem(). It is intended to cancel analytical objective gradients in a new problem class inherited from a problem with enabled analytical gradients.
elements_hint(indexElement, nameHint)
Get the current hint value for a problem element.
Parameters:
- indexElement (int) – index of the element in order: variables, objectives, constraints
- nameHint (str) – name of the hint
Returns: hint value
Return type: str or None
This method returns the current value of the hint nameHint for an element of the problem (variable, objective function or constraint) with the given indexElement index, or None if the hint with the given name is not available for the element.
For the list of available hints, see Hint Reference.
enable_constraints_gradient(sparse=None)
Enable using analytical constraint gradients.
Parameters: sparse (array-like) – non-zero rows and columns
By default, the problem automatically uses numerical differentiation to provide constraint gradient values to Solver. Alternatively, you may provide gradients in evaluate() — see its description for more details. Before that, the problem has to be switched to analytical constraint gradients mode by calling enable_constraints_gradient() once upon initialization. This method should be called from prepare_problem(). Note that not all problem classes support analytical gradients.
Gradients may be set sparse using the sparse argument. This is a tuple of two lists of the same length, where the first list contains the indices of non-zero rows in the gradient, and the second list contains the indices of non-zero columns. None (default) means that the constraint gradient is dense.
For example, consider a problem with two variables and two constraints:
\[\begin{split}\begin{array}{cc} (x_1 - 1)^2 &\le 0\\ x_2 &\le 0 \end{array}\end{split}\]
The Jacobian matrix for this problem is
\[\begin{split}\left(\begin{array}{cc} 2x_1 - 2 & 0\\ 0 & 1 \end{array}\right)\end{split}\]
The non-zero elements in the Jacobian are (0, 0) and (1, 1), so the sparse argument should be ([0, 1], [0, 1]). The problem can be defined as follows:

from da.p7core import gtopt

class MyProblem(gtopt.ProblemGeneric):
    def prepare_problem(self):
        self.add_variable((None, None))
        self.add_variable((None, None))
        self.add_constraint((None, 0))
        self.add_constraint((None, 0))
        self.enable_constraints_gradient(([0, 1], [0, 1]))

    def evaluate(self, x_batch, mask_batch):
        c_batch = []  # mask_batch is ignored for brevity
        for x in x_batch:
            c_batch.append([(x[0] - 1)**2, x[1], 2*(x[0] - 1), 1])
        # since all responses were calculated, return a full [1, 1, 1, 1] mask per point
        mask_batch = [[1, 1, 1, 1]] * len(mask_batch)
        return c_batch, mask_batch

There are four elements in the list of evaluations in c_batch.append, while in the case of dense gradients it would be c_batch.append([(x[0] - 1)**2, x[1], 2*(x[0] - 1), 0, 0, 1]).
enable_history(inmemory=True, file_arg=None, header=True)
Enable saving objective and constraint evaluations.
Parameters:
- file_arg (str or file) – write history to file
- header (bool) – add a header to the history file
- inmemory (bool) – store history in memory (on by default)
New in version 1.11.0.
Deprecated since version 4.0: use set_history() instead.
Since version 4.0, replaced by a more convenient set_history() method. See also clear_history() and disable_history().
enable_objectives_gradient(sparse=None)
Enable using analytical objective gradients.
Parameters: sparse (array-like) – non-zero rows and columns
By default, the problem automatically uses numerical differentiation to provide objective gradient values to Solver. Alternatively, you may provide gradients in evaluate() — see its description for more details. Before that, the problem has to be switched to analytical objective gradients mode by calling enable_objectives_gradient() once upon initialization.
Gradients may be set sparse using the sparse argument. This is a tuple of two integer arrays of the same length, where the first array contains the indices of non-zero rows in the objective gradient, and the second array contains the indices of non-zero columns. None (default) means that the objective gradient is dense.
This method should be called from prepare_problem(). Note that not all problem classes support analytical gradients.
For an example of using sparse gradients, see enable_constraints_gradient().
evaluate(queryx, querymask)
Calculates values of objective functions and constraints. This method must be implemented by the user.
Parameters:
- queryx (ndarray, 2D float) – points to evaluate
- querymask (ndarray, 2D bool) – evaluation requests mask
Returns: evaluation results (array-like, 2D) and masks (array-like, 2D, Boolean)
Return type: tuple(array-like, array-like)
Changed in version 3.0 Release Candidate 1: the queryx argument is ndarray.
Changed in version 6.19: it is now possible to skip some evaluations requested by Solver.
Changed in version 6.24: skipped evaluations may be indicated with None response values, regardless of the response flag in the output mask.
When Solver requests values of problem objectives and constraints, it sends the queryx sample to evaluate(). The shape of this array is (n, m) where n is the number of points to evaluate (at most GTOpt/BatchSize) and m is the input dimension (size_x() + size_s()). For each row, the first size_x() values are classic variables (see add_variable()), while the following size_s() values are stochastic variables (see set_stochastic()). evaluate() has to process queryx and return values of objectives and constraints (and gradients, if they are enabled in the problem) according to the querymask.
The querymask contains a mask of responses requested by Solver for each point. It is a 2D ndarray (bool) of shape (n, l) where n is the number of points in queryx (a mask for each point; note that each point may have a different mask), and l is the mask length equal to size_full().
Mask order is [objectives, constraints, objective gradients, constraint gradients]: for example, if three variables, one objective and two constraints were defined in the problem, and all gradients are dense, the mask length is 12 (1 + 2 + 1 \(\cdot\) 3 + 2 \(\cdot\) 3). Masks are used to perform evaluations selectively — that is, if GTOpt requests only one gradient value, there is no need to evaluate all other gradients, as well as objectives and constraints. To take advantage of this feature, the evaluation method should be implemented in such a way that it supports selective evaluation by mask.
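The mask length arithmetic from this example can be checked directly:

```python
# example problem: 3 variables, 1 objective, 2 constraints, dense gradients
n_vars, n_obj, n_con = 3, 1, 2
mask_len = (n_obj             # objective values
            + n_con           # constraint values
            + n_obj * n_vars  # objective gradient entries
            + n_con * n_vars) # constraint gradient entries
print(mask_len)  # 12
```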
An implementation of this method must return both evaluation results and evaluation masks as 2D arrays. Indexing order for both is the same as in the input querymask: [objectives, constraints, objective gradients, constraint gradients], and the array shape is determined by the length of the input batch, the number of objectives and constraints, and the number of gradient values (see size_full() for more details).
The returned evaluation mask informs Solver what responses (objectives, constraints, gradients) were evaluated. For mask flags, use either bool or 0 and 1.
- A response flagged True in the input mask should be evaluated. However, Solver can handle failed evaluations to some extent, so there are several possibilities:
  - Evaluate the response, add its value to results, and flag it True in the output mask. If response evaluation fails, set its value to NaN and flag it True.
  - Skip evaluation, flag the response False, and put any value into results (Solver discards this value; in designs and history, the value is replaced with None).
  - Skip evaluation and set the response value to None. In this case, the response flag in the returned mask is disregarded (you may set it True for simplicity).
- A response flagged False in the input mask is optional. You may choose to:
  - Skip evaluation, flag it False, and put any value into results (Solver discards it; in designs and history, the value will be None).
  - Skip evaluation and set the response value to None. In this case, the response flag in the returned mask is disregarded (you may set it True for simplicity).
  - Evaluate the response, add it to results, and flag it True. Although Solver did not request this value, it may still be useful in optimization. If response evaluation fails, set its value to NaN and flag it True.
Note that skipped (None or flagged False) and failed (NaN but flagged True) evaluations may stop optimization prematurely.
General advice is to evaluate responses selectively, separating those requested frequently from the ones requested rarely, for example:
- Cheap and expensive functions (see Hint Reference).
- Response values and gradient values.
- Generic, linear and quadratic response functions (see Hint Reference).
- In some cases you may prefer to evaluate objectives and constraints separately.
Other separations mostly make sense only if they do not complicate the code.
See the example_gtopt_generic.py code sample for an example implementation of this method.
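Selective evaluation by mask can be sketched as follows. This is not the library's example_gtopt_generic.py; it assumes a layout of one objective and two constraints with no gradients (mask length 3), and uses a stand-in base class so it runs without da.p7core:

```python
try:
    from da.p7core import gtopt
    _Base = gtopt.ProblemGeneric
except ImportError:
    _Base = object  # stand-in so the sketch runs without da.p7core

class SelectiveProblem(_Base):
    def evaluate(self, queryx, querymask):
        results, outmask = [], []
        for x, mask in zip(queryx, querymask):
            row = [None, None, None]      # [f1, c1, c2]; None marks skipped
            if mask[0]:
                row[0] = (x[0] - 1.0)**2  # objective, computed only on request
            if mask[1]:
                row[1] = x[0] + x[1]      # constraint c1
            if mask[2]:
                row[2] = x[0] - x[1]      # constraint c2
            results.append(row)
            # flag exactly what was evaluated
            outmask.append([bool(m) for m in mask])
        return results, outmask
```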
history
Exact history of problem evaluations stored in memory.
Type: array-like
New in version 1.11.0.
Stores values of variables and evaluation results. Each element of the top-level list is one evaluated point. The nested list structure is [variables, objectives, constraints, objective gradients, constraint gradients]. Gradients are added only if analytical gradients are enabled (see enable_objectives_gradient() and enable_constraints_gradient()).
Changed in version 5.1: missing evaluation results are stored as None values, not NaN (float).
Often Solver requests only a partial evaluation of the problem (see the querymask argument to evaluate()). For such points, non-evaluated functions (objectives, constraints, gradients) are noted with None values to distinguish them from a float NaN value. NaN in history specifically indicates that a function was evaluated but calculation failed (for example, the point to evaluate was out of the function's domain).
The history stores all inputs and outputs exactly as they were evaluated, which may be inconvenient in some cases. For example, Solver can request objective and constraint values for the same point on different iterations, and in this case the point will appear in history two or more times. This is useful for tracing the optimization process, but when you want to re-use evaluation data, a more convenient representation can be found in designs.
Note
Memory history is enabled by default, which increases memory consumption. If you implement your own way to save the history of evaluations, always use disable_history(). If there are a lot of evaluations in your problem, consider reconfiguring history to only write it to a file (see set_history()).
initial_guess()
Get the initial guess for all variables.
Returns: initial guess iterable (if present)
Return type: list[float] or None
objectives_gradient()
Get objective gradient info.
Returns: objective gradient info
Return type: tuple(bool, bool, tuple, tuple)
This method returns a tuple of four elements: (enabled, sparse, non-zero rows, non-zero columns).
The first Boolean element (enabled) is True if analytical objective gradients are enabled in the problem. If enabled is False, all other elements should be ignored as meaningless.
The second Boolean (sparse) has a meaning only if enabled is True. The value of sparse is True if the gradients are sparse. If sparse is False (gradients are dense), all other elements in the returned tuple should be ignored as meaningless.
The tuple elements provide the lists of non-zero rows and columns for sparse gradients. Naturally, these lists only have a meaning when both enabled and sparse are True; in all other cases the tuples are empty.
For an example of using sparse gradients, see enable_constraints_gradient().
objectives_names()
Get names of objectives.
Returns: name list
Return type: list[str]
prepare_problem()
The problem initialization method; it has to be implemented by the user. Use the following methods for problem definition:
- Basic problem definition: add_variable(), add_objective(), add_constraint()
- Advanced features:
  - enable_objectives_gradient() — use analytical gradients of objective functions
  - enable_constraints_gradient() — use analytical gradients of constraint functions
  - set_history() — configure saving objective and constraint evaluations; note that the memory history is enabled by default, which increases memory consumption
  - set_stochastic() — essential method for robust optimization problems
See the usage in example_gtopt_generic.py.
set_constraint_bounds(index, bounds)
Set bounds for a constraint.
Parameters:
- index (int) – index of the constraint in the list of problem constraints
- bounds (array-like) – lower and upper bounds
set_constraint_hints(index, hints)
Set hints for a constraint.
Parameters:
- index (int) – index of the constraint in the list of problem constraints
- hints (dict) – hint settings
Resets all hints previously set for the constraint, replacing all existing settings with the new settings from hints. To update hint settings without resetting them, use update_constraint_hints().
-
set_history
(**kwargs)¶ Configure saving objective and constraint evaluations.
Parameters: - add_header (
bool
) – add a header to the history file - file (
str
,file
orNone
) – write history to file - memory (
bool
) – store history in memory
New in version 4.0.
Return values of evaluate() can be saved to memory or to a file on disk. History saving modes are independent: both can be enabled simultaneously, so history is saved in memory while also being written to a file. Default configuration is to save history to memory only.

Note

Default configuration increases memory consumption. If you implement your own way to save the history of evaluations, always use disable_history(). If there are a lot of evaluations in your problem, consider reconfiguring history to only write it to a file.

If memory is True, evaluations are saved to history. If False, updating history is disabled, but it is not cleared. Re-enabling when history is not empty appends to the existing history; if this is not wanted, call clear_history() first.

The file argument can be a path string or a file-like object (enables writing history to a file). Note that the file is opened in append mode. To disable the file history, set file to None. Values in a history file are comma-separated.

If add_header is True, the first line appended to file is a header containing the names of problem variables, objectives, and constraints set by add_variable(), add_objective(), and add_constraint(). The header is enabled by default, and can be disabled by setting add_header to False.
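As an illustration of the file layout described above (not of the library's internal writer), a plain-Python sketch of a comma-separated history record with a header might look like this; the column names are hypothetical:

```python
import csv
import io

# Illustrative mimic of the history file layout: comma-separated
# records appended to a file-like object, with an optional header
# naming the variables, objectives, and constraints (made-up names).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["x1", "x2", "f1", "c1"])  # header row (add_header=True)
writer.writerow([0.5, 0.5, 0.5, 1.0])      # one evaluation record
print(buf.getvalue())
```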
-
set_objective_hints
(index, hints)¶ Set hints for an objective.
Parameters: - index (
int
) – index of the objective in the list of problem objectives - hints (
dict
) – hint settings
Resets all hints previously set for the objective, replacing all existing settings with the new settings from hints. To update hint settings without resetting them, use
update_objective_hints()
.
-
set_stochastic
(distribution)¶ Set stochastic distribution for a robust optimization problem.
Parameters: distribution – a stochastic distribution object Changed in version 6.15: the generator, name, and seed arguments are no longer used.
This method is essential for robust optimization problems. It adds stochastic variables \(\xi_i\) (see section Robust Problem Formulation) and sets the stochastic distribution used in generating random values for these variables.
The distribution is implemented by user, see section Using Stochastic Variables for details. The number of stochastic variables added is equal to the distribution dimension (see
getDimension()
). Stochastic variables are always added and indexed after normal variables. For example, in prepare_problem() you can do something like:

bounds = (0, 1)
add_variable(bounds)      # indexed 0
add_variable(bounds)      # indexed 1
set_stochastic(my_distr)  # assuming the distribution is 2-dimensional,
                          # adds 2 variables indexed (!) 3 and 4
add_variable(bounds)      # indexed 2, despite being called after set_stochastic()
Then, when you process queryx in
evaluate()
, the variables are indexed as noted above. The fact that you call set_stochastic() before the final add_variable() call does not matter.

This method should be called from
prepare_problem()
. See Using Stochastic Variables for a guide.
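A minimal sketch of a user-implemented distribution, assuming only what this page states: the distribution reports its dimension via getDimension(). The class name, constructor, and the sample() method below are hypothetical illustrations, not the actual interface (see Using Stochastic Variables for the real requirements):

```python
import numpy as np

class UniformDistribution(object):
    """Hypothetical uniform distribution sketch (names are assumptions)."""

    def __init__(self, dimension, seed=0):
        self._dimension = dimension
        self._rng = np.random.default_rng(seed)

    def getDimension(self):
        # the number of stochastic variables this distribution adds
        return self._dimension

    def sample(self, size):
        # hypothetical generator: `size` points uniform on [0, 1)
        return self._rng.random((size, self._dimension))

distr = UniformDistribution(2)
print(distr.getDimension())
```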
-
set_variable_bounds
(index, bounds)¶ Set bounds for a variable.
Parameters: - index (
int
) – index of the variable in the list of problem variables - bounds (array-like) – bounds or levels for the variable
Changed in version 6.14: added the support for discrete and categorical variables.
See
add_variable()
for details on how to use bounds for discrete and categorical variables.
-
set_variable_hints
(index, hints)¶ Set hints for a variable.
Parameters: - index (
int
) – index of the variable in the list of problem variables - hints (
dict
) – hint settings
Resets all hints previously set for the variable, replacing all existing settings with the new settings from hints. To update hint settings without resetting them, use
update_variable_hints()
.
-
set_variable_initial_guess
(index, initial_guess)¶ Set initial guess to a given problem variable.
Parameters: - index (
int
) – variable index in the list of problem variables. - initial_guess (
None
,float
) – initial guess for variable
-
size_c
()¶ Get number of constraints in problem.
Returns: number of constraints Return type: int
-
size_f
()¶ Get number of objectives in problem.
Returns: number of objectives Return type: int
-
size_full
()¶ Get full size of evaluated data (including gradients).
Returns: total number of objectives, constraints, gradients, and noise components Return type: int
If gradients are not enabled (see enable_objectives_gradient(), enable_constraints_gradient()), the full size is equal to size_f() + size_c().

If all gradients are enabled and all gradients are dense, full size is (size_f() + size_c()) × (1 + size_x()).

In the case of using sparse gradients, full size depends on the number of non-zero elements in the gradient (see the sparse argument to enable_objectives_gradient() and enable_constraints_gradient()). You can also get the number of gradient values from objectives_gradient() and constraints_gradient(), for example:

enabled, sparse, rows, columns = problem.objectives_gradient()
if sparse:
    size_obj_grad = len(rows)  # the number of objective gradient values
                               # len(rows) and len(columns) are equal
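The size arithmetic above can be written out explicitly. The helper below is purely illustrative (full_size is not part of the API) and assumes a single non-zero count nnz covering all sparse gradients:

```python
def full_size(size_f, size_c, size_x, dense_gradients=False, nnz=None):
    """Total evaluation size per the rules above (illustrative only)."""
    if nnz is not None:
        # sparse gradients: objectives + constraints + non-zero entries
        return size_f + size_c + nnz
    if dense_gradients:
        # all gradients enabled and dense
        return (size_f + size_c) * (1 + size_x)
    # gradients not enabled
    return size_f + size_c

print(full_size(2, 3, 4))                        # 5
print(full_size(2, 3, 4, dense_gradients=True))  # 25
```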
-
size_s
()¶ Get number of stochastic variables in problem.
Returns: number of stochastic variables Return type: int
For adding stochastic variables, see
set_stochastic()
.
-
size_x
()¶ Get number of variables in problem.
Returns: number of variables Return type: int
-
update_constraint_hints
(index, hints)¶ Update hints for a constraint.
Parameters: - index (
int
) – index of the constraint in the list of problem constraints - hints (
dict
) – hint settings
New in version 6.35.
Updates hint settings for the constraint: if hints sets some hint, the new setting replaces the existing one, while hints not listed in hints keep their existing settings. To reset all existing hint settings, use
set_constraint_hints()
.
-
update_objective_hints
(index, hints)¶ Update hints for an objective.
Parameters: - index (
int
) – index of the objective in the list of problem objectives - hints (
dict
) – hint settings
New in version 6.35.
Updates hint settings for the objective: if hints sets some hint, the new setting replaces the existing one, while hints not listed in hints keep their existing settings. To reset all existing hint settings, use
set_objective_hints()
.
-
update_variable_hints
(index, hints)¶ Update hints for a variable.
Parameters: - index (
int
) – index of the variable in the list of problem variables - hints (
dict
) – hint settings
New in version 6.35.
Updates hint settings for the variable: if hints sets some hint, the new setting replaces the existing one, while hints not listed in hints keep their existing settings. To reset all existing hint settings, use
set_variable_hints()
.
-
variables_bounds
(index=None)¶ Get bounds and levels of variables.
Parameters: index ( int
) – index of a categorical or discrete variableReturns: bounds of variables or levels for a variable specified by index Return type: numpy.ndarray
Changed in version 6.14: added the index parameter
If index is
None
, returns a tuple of two lists containing values of the lower and upper bounds for all problem variables. For continuous and integer variables, these values are the same as those specified by the bounds parameter toadd_variable()
. For discrete and categorical variables, the bounds are the minimum and maximum values from the set of their levels specified by the bounds parameter. Note that bounds are generally nonsensical for a categorical variable, since categorical values cannot be compared by magnitude.

If index is
int
, returns a tuple of two values (lower and upper bound) if the variable under this index is continuous or integer, and a tuple containing all level values if this variable is discrete or categorical.
-
variables_names
()¶ Get names of variables.
Returns: name list Return type: list[str]
11.9.4. ProblemMeanVariance
— mean variance problem¶
-
class
da.p7core.gtopt.
ProblemMeanVariance
(*iargs, **ikwargs)¶ Simplified problem class for mean variance problems. Inherits from
ProblemGeneric
.To define a mean variance problem, create your own problem class, inheriting from
ProblemMeanVariance
. This class must implement the following methods:

prepare_problem() inherited from ProblemGeneric
define_objective() (or define_objective_batch())
define_objective_gradient() (or define_objective_gradient_batch())
define_constraints() (or define_constraints_batch())
define_constraints_gradient() (or define_constraints_gradient_batch())
A mean variance problem must also add stochastic variables. See
set_stochastic()
and section Using Stochastic Variables for details.

Note

Mean variance problem defines only one objective, hence the name of the define_objective() method.

To add an objective to a mean variance problem, use set_objective() instead of the inherited add_objective() method. The set_objective() method adds both the objective function and its mean variance. This method should be used only once. Its main purpose is to support optimization hints in mean variance problems (see the hints argument in the method description).

Example:
class MyProblem(da.p7core.gtopt.ProblemMeanVariance):
    def prepare_problem(self):
        self.add_variable((0, 1), 0.5)
        self.add_variable((0, 2), 0.5)
        self.add_constraint()
        self.set_objective()
        self.set_stochastic(distribution)

    def define_objective(self, x):
        # f = x0**2 + x1**2
        return x[0]**2 + x[1]**2

    def define_constraints(self, x):
        # c = x0 + x1
        return [x[0] + x[1]]

    def define_objective_gradient(self, x):
        # return [df/dx0, ..., df/dxn]
        return [2 * x[0], 2 * x[1]]

    def define_constraints_gradient(self, x):
        # return [dc0/dx0, ..., dc0/dxn, ..., dcm/dxn]
        return [1.0, 1.0]

problem = MyProblem()
-
add_objective
(name=None, hints=None)¶ A prohibiting implementation of the
add_objective()
method inherited from ProblemGeneric. Will simply raise an exception if you attempt to use this method; use set_objective()
instead.
-
define_constraints
(x)¶ An abstract method to define mean variance problem constraints.
Parameters: x ( ndarray
, 1D) – point to evaluateReturns: evaluation results Return type: array-like, 1D Changed in version 3.0 Release Candidate 1: the x argument is
ndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_constraints_batch()
, default implementation of which uses this method.The shape of x is the same as in
define_objective()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some constraint indicates that evaluation of a constraint failed.
None
value indicates that evaluation of a constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_constraints_batch
(x)¶ Default implementation of the method defining mean variance problem constraints. Supports non-batch and batch modes.
Parameters: x ( ndarray
, 2D) – points batchReturns: evaluation results Return type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: the x argument is
ndarray
; default implementation also returnsndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
evaluate()
to calculate constraints. Default implementation simply loops over the points batch \(x\), callingdefine_constraints()
for each point. May be reimplemented by user to support parallel calculations. Such implementation may return any 2D array-like.The shape of x is the same as in
define_objectives_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some constraint indicates that evaluation of a constraint failed.
None
value indicates that evaluation of a constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_constraints_gradient
(x)¶ An abstract method to define gradients for mean variance problem constraints.
Parameters: x ( ndarray
, 1D) – point to evaluateReturns: evaluation results Return type: array-like, 1D Changed in version 3.0 Release Candidate 1: the x argument is
ndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_constraints_gradient_batch()
, default implementation of which uses this method.The shape of x is the same as in
define_objectives()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some gradient indicates that evaluation of a gradient failed.
None
value indicates that evaluation of a gradient was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_constraints_gradient_batch
(x)¶ Default implementation of the method defining gradients for mean variance problem constraints. Supports non-batch and batch modes.
Parameters: x ( ndarray
, 2D) – points batchReturns: evaluation results Return type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: the x argument is
ndarray
; default implementation also returnsndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
evaluate()
to calculate gradients for constraints. Default implementation simply loops over the points batch \(x\), callingdefine_constraints_gradient()
for each point. May be reimplemented by user to support parallel calculations. Such implementation may return any 2D array-like.The shape of x is the same as in
define_objectives_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some gradient indicates that evaluation of a gradient failed.
None
value indicates that evaluation of a gradient was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_objective
(x)¶ An abstract method to define mean variance problem objective.
Parameters: x ( ndarray
, 1D) – point to evaluateReturns: evaluation results Return type: array-like, 1D Changed in version 3.0 Release Candidate 1: the x argument is
ndarray
.Changed in version 6.24: objective value may be
None
to indicate skipped evaluation.Defines the problem objective (mean variance problem includes only one objective). This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_objective_batch()
, default implementation of which uses this method.The returned objective value may be NaN or
None
with the following meaning:- NaN value indicates that objective evaluation failed.
None
value indicates that objective evaluation was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_objective_batch
(x)¶ Default implementation of the method defining mean variance problem objective. Supports non-batch and batch modes.
Parameters: x ( ndarray
, 2D) – points batchReturns: evaluation results Return type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: the x argument is
ndarray
; default implementation also returnsndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
evaluate()
to calculate the objective value (mean variance problem includes only one objective). Default implementation simply loops over the points batch \(x\), callingdefine_objective()
for each point. May be reimplemented by user to support parallel calculations. Such implementation may return any 2D array-like.The shape of x is the same as in
define_objectives_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value indicates that objective evaluation failed.
None
value indicates that objective evaluation was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_objective_gradient
(x)¶ An abstract method to define mean variance problem objective gradient.
Parameters: x ( ndarray
, 1D) – point to evaluateReturns: evaluation results Return type: array-like, 1D Changed in version 3.0 Release Candidate 1: the x argument is
ndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.Defines the objective gradient (mean variance problem includes only one objective). This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_objective_gradient_batch()
, default implementation of which uses this method.The shape of x is the same as in
define_objectives_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some gradient indicates that evaluation of a gradient failed.
None
value indicates that evaluation of a gradient was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_objective_gradient_batch
(x)¶ Default implementation of the method defining mean variance problem objective gradient. Supports non-batch and batch modes.
Parameters: x ( ndarray
, 2D) – points batchReturns: evaluation results Return type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: the x argument is
ndarray
; default implementation also returnsndarray
.Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
evaluate()
to calculate the objective gradient (mean variance problem includes only one objective). Default implementation simply loops over the points batch \(x\), callingdefine_objective_gradient()
for each point. May be reimplemented by user to support parallel calculations. Such implementation may return any 2D array-like.The shape of x is the same as in
define_objectives_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some gradient indicates that evaluation of a gradient failed.
None
value indicates that evaluation of a gradient was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
evaluate
(queryx, querymask)¶ Default implementation of the
evaluate()
method inherited from the base classProblemGeneric
. Should not be reimplemented; usedefine_objective()
,define_objective_gradient()
,define_constraints()
,define_constraints_gradient()
, or their batch counterparts.
-
set_objective
(name=None, hints=None)¶ Set mean variance problem objective.
Parameters: - name (
str
) – the name of the objective - hints (
dict
) – optimization hints
Adds an objective function and its mean variance.
Let \(f(x, \xi)\) be the objective function, then its mean is
\[\langle f \rangle (x) ~=~ \int f(x,\xi) \rho (\xi) \, \mathrm{d}\xi,\]

and its mean variance is
\[V = \sqrt{\langle f^2 \rangle ~-~ {\langle f \rangle}^2}\]

To simplify problem implementation,
set_objective()
allows you to simply define \(f(x, \xi)\), and its mean and variance are added automatically. The name argument is optional; if you do not provide a name, the objective is automatically named
"f1"
.The hints argument sets objective-specific options that may direct optimizer to use alternative internal algorithms to increase performance (see Hint Reference). It is a dictionary
{hint name: value}
, for example{"@GTOpt/LinearityType": "Quadratic"}
.This method should be used only once, from
prepare_problem()
.
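The mean and mean variance defined above can be illustrated with a plain Monte Carlo estimate. The objective f(x, ξ) = (x − ξ)² and the uniform distribution of ξ below are assumptions for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, xi):
    # assumed objective, for illustration only
    return (x - xi) ** 2

xi = rng.random(100000)  # samples of the stochastic variable on [0, 1)
vals = f(0.5, xi)
mean = vals.mean()                                       # estimate of <f>(x)
mean_variance = np.sqrt(np.mean(vals ** 2) - mean ** 2)  # estimate of V
print(mean, mean_variance)
```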
11.9.5. ProblemUnconstrained
— unconstrained problem¶
-
class
da.p7core.gtopt.
ProblemUnconstrained
(*iargs, **ikwargs)¶ Simplified problem class for unconstrained problems. Inherits from
ProblemConstrained
.This class does not support the usage of analytical objective gradients, and should not add any problem constraints.
To define an unconstrained optimization problem, create your own problem class, inheriting from
ProblemUnconstrained
. This class must implement the following methods:

prepare_problem() inherited from ProblemGeneric
define_objectives() (or define_objectives_batch()) inherited from ProblemConstrained
Note that this class should not implement
evaluate()
.-
define_constraints
(x)¶ Empty definition of constraints for an unconstrained problem. Does nothing. Should not be reimplemented (an unconstrained problem must not define any constraints).
11.9.6. ProblemFitting
— special class for fitting problem¶
-
class
da.p7core.gtopt.
ProblemFitting
(*iargs, **ikwargs)¶ Specialized problem class for fitting problems (see Data Fitting Problem for details). Inherits from
ProblemGeneric
. A fitting problem requires you to define a model (\(f\)), variables (\(x\)), and the data to fit (\(model_x\), \(model_y\)). The problem statement is to find \(x\) (the parameters of the model) minimizing the RMS:
\[\sqrt{N^{-1} \sum_{i=1}^N w^2_i (f(model_x^i, x) - model_y^i)^2 }\]where \(w_i\) are given weights. Multiple models and constraints on \(x\) are supported.
Note
Stochastic variables are not supported in fitting problems.
Note
History and designs do not contain model evaluations. They contain residuals and parameters, and can be used to depict convergence or to resume optimization within the SBO framework.
To define a fitting problem, create your own problem class, inheriting from
ProblemFitting
. This class must implement the following methods:

prepare_problem() inherited from ProblemGeneric
define_models() (or define_models_batch())
define_constraints() (or define_constraints_batch())
-
add_model_x
(sample, name=None)¶ Set model_x values of fitting data.
Parameters: - sample (
ndarray
, 1D or 2D) – sample in model_x space - name (
str
) – name of model input
Initializes the model_x values of the fitted data (the points at which the model value is known). The length of this sample must match the length of the observables data set with
add_model_y()
. If sample is a 2D array, a multidimensional sample is added: the first dimension is the number of points, the second is the dimensionality.
This method should be called from
prepare_problem()
.
-
add_model_y
(sample, weights=None, name=None, hints=None)¶ Add a new model to fit.
Parameters: - sample (
ndarray
, 1D or 2D) – fitted observables - weights (
ndarray
, 1D or 2D) – aggregation weights - name (
str
) – the name of the model - hints (
dict
) – optimization hints
Initializes a new model in the problem.
The length of sample and weights (if set) must match the length of the design data set with
add_model_x()
. The weights argument is optional; if it is not provided, unit weights are used.
The name argument is optional; if you do not provide a name, it is generated automatically. Auto names are
"f1"
,"f2"
,"f3"
, and so on, in the order of adding models to a problem. The hints argument sets objective-specific options that may direct the optimizer to use alternative internal algorithms to increase performance (see Hint Reference). It is a dictionary
{hint name: value}
, for example{"@GTOpt/EvaluationCostType": "Expensive"}
. Please note that “@GTOpt/LinearityType” can be ignored, and setting a different computational cost type for different models may lead to undesirable results. If sample is a 2D array, a multidimensional model is added: the first dimension is the number of points, the second is the number of models.
This method should be called from
prepare_problem()
.
-
add_objective
(name=None, hints=None)¶ A prohibiting implementation of the
add_objective()
method inherited from ProblemGeneric. Will simply raise an exception if you attempt to use this method; use add_model_y()
instead.
-
define_constraints
(x)¶ An optional method to define problem constraints.
Parameters: x ( ndarray
, 1D) – combination of parameter values to test against constraintsReturns: evaluation results Return type: array-like, 1D Changed in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_constraints_batch()
(which uses this method by default).The shape of parameters is the same as in
define_models()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some constraint indicates that evaluation of a constraint failed.
None
value indicates that evaluation of a constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_constraints_batch
(x)¶ Default implementation of the method defining problem constraints. Supports non-batch and batch modes.
Parameters: x ( ndarray
, 2D) – x batchReturns: evaluation results Return type: ndarray
, 2DChanged in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
ProblemFitting.evaluate()
to calculate constraints. Default implementation simply loops over the points batch, calling define_constraints()
for each point. May be reimplemented by user to support parallel calculations. Such implementation may return any 2D array-like.The shape of x is the same as in
define_models_batch()
.The returned array may contain NaN and
None
values, which have the following meaning:- NaN value of some constraint indicates that evaluation of a constraint failed.
None
value indicates that evaluation of a constraint was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
define_models
(t, x)¶ An abstract method to define fitted models.
Parameters: - t (
ndarray
, 1D) – point from sample to evaluate - x (
ndarray
, 1D) – parameters of models to evaluate
Returns: evaluation results
Return type: ndarray
, 1DChanged in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method does not support the batch mode (evaluates single point only). May be implemented by user instead of
define_models_batch()
(which uses this method by default). The shape of x is (1, m), where m is the input dimension (
size_x()
). The shape of t is (1, n), where n is the sample dimension (
size_model_x()
). The returned array may contain NaN and
None
values, which have the following meaning:- NaN value indicates that evaluation failed.
None
value indicates that evaluation was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
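A hypothetical single-point implementation matching the signature above; the exponential model is an assumption for illustration (in a real problem this would be a method of your ProblemFitting subclass):

```python
import numpy as np

def define_models(t, x):
    # t: one point from the fitted sample; x: the model parameters
    # assumed model: y = x0 * exp(-x1 * t0), a single fitted model
    return np.array([x[0] * np.exp(-x[1] * t[0])])

print(define_models(np.array([0.0]), np.array([2.0, 1.0])))
```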
-
define_models_batch
(t, x)¶ Default implementation of the method defining fit models. Supports non-batch and batch modes.
Parameters: - t (
ndarray
, 2D) – design sample - x (
ndarray
, 2D) – parameters batch
Returns: evaluation results
Return type: ndarray
, 3DChanged in version 6.24: evaluation results may contain
None
values to indicate skipped evaluations.This method is used by
ProblemFitting.evaluate()
to calculate the fitting error. Default implementation simply loops over the points batch x for each t, calling define_models()
for each point. May be reimplemented by user to support parallel calculations. Such implementation must return a 3D array-like.
size_x()
The shape of t is (p, q), where p is the sample length and q is its dimension (
size_model_x()
). The implementation of this method must return values for each design in the design sample (t), for each set of parameters in the batch (x), for each model. The results must be a 3D
ndarray
with the shape (len(x),size_model_x()
,size_f()
). The returned array may contain NaN and
None
values, which have the following meaning:- NaN value indicates that evaluation failed.
None
value indicates that evaluation was skipped.
Note that skipped and failed evaluations may stop optimization prematurely.
-
evaluate
(queryx, querymask)¶ Default implementation of the
evaluate()
method inherited from the base classProblemGeneric
. Should not be reimplemented; usedefine_models()
anddefine_constraints()
.
-
get_sample
()¶ Get fitting data.
Returns: design and observables of fitting data as a pair (designs, observables) Return type: tuple(ndarray, ndarray)
-
model_info
()¶ Provide accumulated information about all added models.
Returns: list of dicts with models properties Return type: list
ofdict
-
model_x_names
()¶ Get names of model variables.
Returns: name list Return type: list[str]
-
set_stochastic
(distribution)¶ A prohibiting implementation of the
set_stochastic()
method inherited from ProblemGeneric. The fitting problem is incompatible with stochastic variables, so this method simply raises an InvalidProblemError
exception when used.
-
size_model_x
()¶ Get dimensionality of points in sample.
Returns: dimensionality Return type: int
11.9.7. Result
— solution¶
-
class
da.p7core.gtopt.
Result
(info, status, problem_ref, optimal_points, converged_points, infeasible_points, diagnostics)¶ Optimization result. An object of this class is only returned by
solve()
and should never be instantiated by user.Changed in version 3.0: removed the converged point set.
-
infeasible
¶ New in version 3.0 Beta 1.
Additional Pareto-optimal points from the evaluated set that somehow violate problem constraints (see Optimal and Infeasible Result Sets for details). Note that if GTOpt/OptimalSetType is
"Strict"
, this attribute contains no data.-
infeasible.
c
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.Constraint values. Array shape is (n,
size_c()
) where n is the number of found points.For robust optimization problems (see Robust Optimization) the interpretation of values in this array depends on constraint type (see Robust Problem Formulation):
- For expectation constraints, the value is the estimate of the expected constraint value.
- For chance constraints, the value is the estimated probability of constraint violation.
Estimation errors are stored in
infeasible.ce
.
-
infeasible.
ce
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.The errors of constraint estimates for stochastic problems. For non-stochastic problems, all errors are considered to be
0.0
.Array shape is the same as of
infeasible.c
.
-
infeasible.
f
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.Objective function values. Array shape is (n,
size_f()
) where n is the number of found points.For stochastic problems (see Robust Optimization) this array contains estimated values of objectives, and estimation errors are stored in
infeasible.fe
.
-
infeasible.
fe
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.The errors of objective function estimates for stochastic problems. For non-stochastic problems, all errors are considered to be
0.0
.Array shape is the same as of
infeasible.f
.
-
infeasible.
psi
¶ Type: ndarray
, 1DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.For non-stochastic problems: point feasibility measures defined as \(\psi(x) = \max_i \psi^i(x)\) (see
infeasible.v
). For stochastic problems: \(\psi^*_{N_s}\) estimates.Array shape is (n, ) where n is the number of points in the result.
-
infeasible.
psie
¶ Type: ndarray
, 1DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.For stochastic problems: \(\psi^*_{N_s}\) estimation errors. For non-stochastic problems this attribute has no meaning and is filled with zeros.
Array shape is the same as of
infeasible.psi
.
-
infeasible.
v
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.Normalized constraint violation/satisfaction values defined as \(\psi^i(x) = \max\left[ \frac{c^i_L - c^i(x)}{\max(1., |c^i_L|)}, \frac{c^i(x) - c^i_U}{\max(1., |c^i_U|)} \right]\). Array shape is the same as of
infeasible.c
.For stochastic problems (see Robust Optimization) this array contains estimated values since it is derived from
infeasible.c
. The errors of violation/satisfaction estimates are stored in infeasible.ve
.
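The normalized violation formula above can be computed directly; this numpy sketch (with made-up constraint values and bounds) is illustrative only:

```python
import numpy as np

def psi(c, c_lo, c_hi):
    # per-constraint psi^i(x) = max[ (c_L - c) / max(1, |c_L|),
    #                                (c - c_U) / max(1, |c_U|) ]
    c, c_lo, c_hi = (np.asarray(a, dtype=float) for a in (c, c_lo, c_hi))
    lower = (c_lo - c) / np.maximum(1.0, np.abs(c_lo))
    upper = (c - c_hi) / np.maximum(1.0, np.abs(c_hi))
    return np.maximum(lower, upper)

v = psi([0.5, 3.0], [0.0, 0.0], [1.0, 2.0])  # one feasible, one violated
feasibility = v.max()  # point measure psi(x) = max_i psi^i(x)
print(v, feasibility)
```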
-
infeasible.
ve
¶ Type: ndarray
, 2DChanged in version 3.0 Release Candidate 1: attribute type is
ndarray
.The errors of constraint violation/satisfaction estimates for stochastic problems. For non-stochastic problems, all errors are considered to be
0.0
.Array shape is the same as of
infeasible.v
.
-
-
info
¶ Type: dict
Human-readable report on the solved problem and optimizer settings used.
-
names
¶ Names of problem variables, objectives and constraints.
-
names.
c
¶ Type: list[str]
The names of constraints set by
add_constraint()
, listed in order of adding constraints to a problem.
-
names.
f
¶ Type: list[str]
The names of objectives set by
add_objective()
, listed in order of adding objectives to a problem.
-
names.
x
¶ Type: list[str]
The names of variables set by
add_variable()
, listed in order of adding variables to a problem.
-
-
optimal
¶ All feasible Pareto-optimal points from the evaluated set (see Optimal and Infeasible Result Sets for details).
-
optimal.
c
¶ Type: ndarray
, 2D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. Constraint values. Array shape is (n,
size_c()
) where n is the number of found points. For robust optimization problems (see Robust Optimization) the interpretation of values in this array depends on constraint type (see Robust Problem Formulation):
- For expectation constraints, the value is the estimate of the expected constraint value.
- For chance constraints, the value is the estimated probability of constraint violation.
Estimation errors are stored in
optimal.ce
.
-
optimal.
ce
¶ Type: ndarray
, 2D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. The errors of constraint estimates for stochastic problems. For non-stochastic problems, all errors are considered to be
0.0
. Array shape is the same as of
optimal.c
.
-
optimal.
f
¶ Type: ndarray
, 2D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. Objective function values. Array shape is (n,
size_f()
) where n is the number of found points. For stochastic problems (see Robust Optimization) this array contains estimated values of objectives, and estimation errors are stored in
optimal.fe
.
-
optimal.
fe
¶ Type: ndarray
, 2D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. The errors of objective function estimates for stochastic problems. For non-stochastic problems, all errors are considered to be
0.0
. Array shape is the same as of
optimal.f
.
-
optimal.
psi
¶ Type: ndarray
, 1D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. For non-stochastic problems: point feasibility measures defined as \(\psi(x) = \max_i \psi^i(x)\) (see
optimal.v
). For stochastic problems: \(\psi^*_{N_s}\) estimates. Array shape is (n, ) where n is the number of points in the result.
-
optimal.
psie
¶ Type: ndarray
, 1D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. For stochastic problems: \(\psi^*_{N_s}\) estimation errors. For non-stochastic problems this attribute has no meaning and is filled with zeros.
Array shape is the same as of
optimal.psi
.
-
optimal.
v
¶ Type: ndarray
, 2D. Changed in version 3.0 Release Candidate 1: attribute type is
ndarray
. Normalized constraint violation/satisfaction values defined as \(\psi^i(x) = \max\left[ \frac{c^i_L - c^i(x)}{\max(1., |c^i_L|)}, \frac{c^i(x) - c^i_U}{\max(1., |c^i_U|)} \right]\). Array shape is the same as of
optimal.c
. For stochastic problems (see Robust Optimization) this array contains estimated values since it is derived from
optimal.c
. The errors of violation/satisfaction estimates are stored in
optimal.ve
.
-
-
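As an illustration of the attribute layout documented above, the following sketch picks the best feasible point from a single-objective result. The helper itself is plain NumPy and is not part of the GTOpt API; `result` stands for a hypothetical gtopt.Result returned by Solver.solve(), and feasibility is taken as \(\psi(x) \le 0\) per the definition of optimal.psi.

```python
import numpy as np

def best_point_index(f, psi):
    """Index of the feasible point with the lowest objective value.

    f   -- 2D array of objective values, shape (n, size_f()); single objective here
    psi -- 1D array of feasibility measures, shape (n,); psi <= 0 means feasible
    """
    feasible = np.flatnonzero(psi <= 0.0)
    if feasible.size == 0:
        raise ValueError("no feasible points in the result")
    # Among the feasible points, take the one with the smallest objective.
    return int(feasible[np.argmin(f[feasible, 0])])

# Usage with a solved single-objective problem (sketch):
#   idx = best_point_index(result.optimal.f, result.optimal.psi)
#   print(result.optimal.f[idx], result.optimal.c[idx])
```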
11.9.8. Solver
— problem solver¶
-
class
da.p7core.gtopt.
Solver
¶ Optimizer interface.
-
license
¶ Optimizer license.
Type: License
General license information interface. See section License Usage for details.
-
options
¶ Optimizer options.
Type: Options
General options interface for the optimizer. See section Options Interface for usage and the GTOpt option reference.
-
set_logger
(logger)¶ Set logger.
Parameters: logger – logger object Returns: None
Used to set up a logger for the optimization process. See section Loggers for details.
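A minimal collecting logger sketch. The exact logger interface is defined in the Loggers section; the assumption here (to be checked against that section) is that the optimizer invokes the logger as a callable receiving a severity level and a message.

```python
class CollectingLogger(object):
    """Store log records instead of printing them.

    Assumes the optimizer calls logger(level, message); see the Loggers
    section for the exact interface required.
    """
    def __init__(self):
        self.records = []

    def __call__(self, level, message):
        # Keep every record for later inspection.
        self.records.append((level, message))

# Usage (assuming `solver` is a gtopt.Solver instance):
#   logger = CollectingLogger()
#   solver.set_logger(logger)
```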
-
set_watcher
(watcher)¶ Set watcher.
Parameters: watcher – watcher object Returns: None
Used to set up a watcher that can interrupt or monitor the optimization process. See section Watchers for details and the Intermediate result example for usage.
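A minimal watcher sketch. The exact callback signature is defined in the Watchers section; the assumption here is that the watcher is a single-argument callable whose False return value asks the optimizer to stop.

```python
class BudgetWatcher(object):
    """Interrupt optimization after a fixed number of watcher calls."""
    def __init__(self, max_calls):
        self.max_calls = max_calls
        self.calls = 0

    def __call__(self, info):
        self.calls += 1
        # Returning False requests a graceful interruption of solve().
        return self.calls <= self.max_calls

# Usage (assuming `solver` is a gtopt.Solver instance):
#   solver.set_watcher(BudgetWatcher(100))
```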
-
solve
(problem, **kwargs)¶ Solve an optimization problem.
Parameters: - problem – optimization problem
- options (
dict
) – solver options; not required, overrides the options set through the options
interface - sample_x (array-like, 1D or 2D) – optional initial sample containing values of variables (added in 2.0 Release Candidate 1)
- sample_f (array-like, 1D or 2D) – optional initial sample of objective function values, requires sample_x (added in 2.0 Release Candidate 1)
- sample_c (array-like, 1D or 2D) – optional initial sample of constraint function values, requires sample_x (added in 2.0 Release Candidate 1)
- compatibility (
bool
) – produces old style results (added in 6.14)
Returns: solution
Return type: gtopt.Result
by default, or p7core.Result
if compatibility is False
Changed in version 2.0 Release Candidate 1: added the initial sample support (sample_x, sample_f, sample_c).
The problem should be an instance of a user problem class inherited from
ProblemGeneric
or one of its descendants (simplified classesProblemConstrained
,ProblemUnconstrained
,ProblemCSP
, orProblemMeanVariance
). See theProblemGeneric
class documentation for details on how to define an optimization problem. Validate your problem with
validate()
before solving to avoid errors caused by an incorrect problem definition. As an alternative to using the
options
interface, solver options may be specified in solve()
as the options argument, which is a dictionary with option names as keys, for example: solver.solve(my_problem, options={"GTOpt/LogLevel": "Debug"})
If options contain an option previously set through the
options
interface, then the value from options overrides the one stored in solver configuration, but does not replace it (that is, the options argument has higher priority, but it works only in the solve()
scope). Since version 2.0 Release Candidate 1, GTOpt supports an initial sample given as the sample_x, sample_f, and sample_c arguments. If only sample_x is specified, it may be considered as an extended initial guess for variables: the solver always evaluates points from sample_x (the initial guess point, if it was specified when adding variables, is also evaluated). If sample_f and/or sample_c is specified in addition to sample_x, the solver does not call
evaluate()
for these points, but takes objective and/or constraint function values from the respective samples. Naturally, specifying either of sample_f or sample_c requires sample_x, and all samples have to be of the same size. 1D samples are supported as a simplified form for the case of 1D input and/or response. Changed in version 6.16.3: invalid values of discrete or integer variables in sample_x now cause
InvalidProblemError
Note that if the problem defines discrete or integer variables, sample_x must contain only valid values of such variables. For a discrete variable, each of its values in sample_x must match one of the level values specified by the bounds argument to
add_variable()
. For integer variables, all their values in sample_x must be integers. Otherwise, solve()
raises anInvalidProblemError
exception.
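The sample-size rules above can be checked up front with a small helper. This is not part of the GTOpt API, just an illustration of the documented constraints: sample_f and sample_c require sample_x, a 1D sample is shorthand for a single column, and all samples must contain the same number of points.

```python
import numpy as np

def check_initial_samples(sample_x, sample_f=None, sample_c=None):
    """Normalize initial samples to 2D and enforce the documented rules."""
    if sample_x is None and (sample_f is not None or sample_c is not None):
        raise ValueError("sample_f/sample_c require sample_x")

    def to_2d(sample):
        if sample is None:
            return None
        a = np.asarray(sample, dtype=float)
        # The simplified 1D form becomes a single-column 2D sample.
        return a.reshape(-1, 1) if a.ndim == 1 else a

    x, f, c = to_2d(sample_x), to_2d(sample_f), to_2d(sample_c)
    for other in (f, c):
        if other is not None and len(other) != len(x):
            raise ValueError("all samples must contain the same number of points")
    return x, f, c

# Usage before calling solver.solve(problem, sample_x=x, sample_f=f) (sketch):
#   x, f, _ = check_initial_samples(x, f)
```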
-
validate
(problem, **kwargs)¶ Validate an optimization problem definition.
Parameters: - problem – optimization problem
- options (
dict
) – solver options - sample_x (array-like, 1D or 2D) – optional initial sample of variables
- sample_f (array-like, 1D or 2D) – optional initial sample of objectives
- sample_c (array-like, 1D or 2D) – optional initial sample of constraints
- compatibility (
bool
) – unused, recognized for compatibility with solve()
Returns: validation outcome
Return type: New in version 6.33.
Validates your problem definition and returns a
ValidationResult
object providing the general validation status (status
, True
if validation passed, False
otherwise) and details, if any (details
). Test your problem before running
solve()
to avoid errors caused by incorrect problem definition. When calling validate()
, pass the same arguments as you would pass to solve()
, including option settings and initial samples. All validate()
parameters have the same meaning as solve()
parameters.
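A typical use is a validate-then-solve guard. The summary helper below is plain Python and works with any object exposing the status and details attributes documented for ValidationResult; the solver calls are only sketched, assuming `solver` and `problem` already exist.

```python
def report_validation(vr):
    """One-line summary of a ValidationResult-like object."""
    outcome = "passed" if vr.status else "FAILED"
    return "validation %s with %d message(s)" % (outcome, len(vr.details))

# Usage (sketch):
#   vr = solver.validate(problem)
#   print(report_validation(vr))
#   for record in vr.details:
#       print(record)          # severity level plus message text
#   if vr.status:
#       result = solver.solve(problem)
```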
-
11.9.9. ValidationResult
— problem validation data¶
-
class
da.p7core.gtopt.
ValidationResult
¶ Validation result and details. An object of this class is only returned by
validate()
and should never be instantiated by user.
-
status
¶ Validation status.
Type: bool
General result of validation: pass (
True
) or fail (False
). For validation details or failure reasons, see
details
.
-
details
¶ Validation messages.
Type: list
of DiagnosticRecord
objects. Contains validation errors, warnings, and other messages as
DiagnosticRecord
objects, which provide message text, severity level, string and integer representations, and equality comparison.
-
11.10. da.p7core.gtopt.diagnostic
¶
Classes
da.p7core.gtopt.diagnostic.DiagnosticRecord (…) |
Diagnostic record definition. |
da.p7core.gtopt.diagnostic.DiagnosticSeverity (id, …) |
Severity levels. |
da.p7core.gtopt.diagnostic.DIAGNOSTIC_ERROR |
|
da.p7core.gtopt.diagnostic.DIAGNOSTIC_WARNING |
|
da.p7core.gtopt.diagnostic.DIAGNOSTIC_HINT |
|
da.p7core.gtopt.diagnostic.DIAGNOSTIC_MISC |
11.10.1. DiagnosticRecord
— message¶
da.p7core.gtopt.diagnostic.DiagnosticRecord (…) |
Diagnostic record definition. |
da.p7core.gtopt.diagnostic.DiagnosticRecord.message |
Diagnostic message. |
da.p7core.gtopt.diagnostic.DiagnosticRecord.severity |
Severity level. |
-
class
da.p7core.gtopt.diagnostic.
DiagnosticRecord
¶ A record containing a diagnostic message and its severity level. Also provides a string representation, which is composed of the severity level string representation and the message text.
-
message
¶ Diagnostic message text.
Type: str
-
severity
¶ Diagnostic severity level.
Type: DiagnosticSeverity
-
11.10.2. DiagnosticSeverity
— level¶
da.p7core.gtopt.diagnostic.DiagnosticSeverity (id, …) |
|
da.p7core.gtopt.diagnostic.DiagnosticSeverity.id |
Severity level ID. |
-
class
da.p7core.gtopt.diagnostic.
DiagnosticSeverity
¶ Diagnostic severity level definition.
Provides integer and string representations of the severity level, and equality/inequality comparisons. This class should never be instantiated by user.
DiagnosticSeverity
is similar to Status
in usage, and the same notes apply (see Status).
-
id
¶ Severity level ID.
Type: int
This numeric ID is the same as the severity level integer representation.
-
11.10.3. Diagnostic Levels¶
-
da.p7core.gtopt.diagnostic.
DIAGNOSTIC_ERROR
¶ Type: DiagnosticSeverity
- ID: 2
- String representation:
"Error"
A problem that could cause run-time errors has been diagnosed.
-
da.p7core.gtopt.diagnostic.
DIAGNOSTIC_WARNING
¶ Type: DiagnosticSeverity
- ID: 1
- String representation:
"Warning"
A problem that could cause run-time warnings has been diagnosed.
-
da.p7core.gtopt.diagnostic.
DIAGNOSTIC_HINT
¶ Type: DiagnosticSeverity
- ID: 0
- String representation:
"Hint"
Additional information. No problems diagnosed.
-
da.p7core.gtopt.diagnostic.
DIAGNOSTIC_MISC
¶ Type: DiagnosticSeverity
- ID: -1
- String representation:
"Misc"
Miscellaneous messages.
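Since DiagnosticSeverity supports equality comparison, validation messages are easy to split by level. The helper below is duck-typed: it only assumes records exposing a comparable severity attribute, as DiagnosticRecord does.

```python
def filter_by_severity(records, severity):
    """Keep only the diagnostic records whose severity equals `severity`."""
    return [r for r in records if r.severity == severity]

# Usage with a validation result (sketch, assuming `vr` is a ValidationResult):
#   from da.p7core.gtopt import diagnostic
#   errors = filter_by_severity(vr.details, diagnostic.DIAGNOSTIC_ERROR)
```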