# Multi-Objective Optimization Basics

This tutorial shows how to solve a very simple multi-objective optimization problem. Its purpose is to provide a basic example of solving optimization problems with conflicting objectives.

Note

This tutorial continues the *Optimization Basics* tutorial and requires the workflow created there.

## Before You Begin

This tutorial requires an existing **Tutorials** project. If you have not created this project yet, see *Tutorial Project* first.

- Open the **Tutorials** project.
- Open **OptimizationBasics** (`OptimizationBasics.p7wf`).
- Select *File ‣ Save Workflow As...* to create a copy of the workflow. Save the workflow as **OptimizationBasicsMO**.
- Verify that you are editing **OptimizationBasicsMO** and continue with the tutorial.

## Task

To illustrate the idea of multi-objective optimization, this tutorial considers the same case as in *Optimization Basics*, but this time there will be no minimum volume requirement. Instead, we want to both minimize the cost and maximize the possible volume, so the task may be given as follows:

A cylindrical tank is constructed using two different materials. The material for the top and bottom costs $10 per square meter, and material for the side costs $8 per square meter. Find the radius \(r\) and height \(h\) of this tank so as to minimize the construction cost and provide maximum tank volume.

The task seems self-contradictory: maximizing volume means increasing \(r\) and/or \(h\), which in turn increases the cost. Such a view, however, is not entirely correct.

For example, consider design \(D_1\): \(r = 3\), \(h = 2\) that costs $867 for a volume of \(18 \pi\) cubic meters. Compare it with the optimal design from *Optimization Basics*, \(D_0\): \(r = 2\), \(h = 5\) that costs $754 for a volume of \(20 \pi\) cubic meters.
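These numbers follow from the surface-area cost model implied by the task statement: the top and bottom are two disks priced at $10 per square meter, and the side is the cylinder wall priced at $8 per square meter. A quick Python check of the figures above (a sketch for illustration, not part of the pSeven workflow):

```python
import math

def cost(r, h):
    # Top and bottom: two disks at $10/m^2; side: lateral surface at $8/m^2.
    return 10 * 2 * math.pi * r**2 + 8 * 2 * math.pi * r * h

def volume(r, h):
    return math.pi * r**2 * h

# D1: r=3, h=2 -> cost ~ $867, volume = 18*pi ~ 56.5 m^3
print(round(cost(3, 2)), round(volume(3, 2), 1))
# D0: r=2, h=5 -> cost ~ $754, volume = 20*pi ~ 62.8 m^3
print(round(cost(2, 5)), round(volume(2, 5), 1))
```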

If you begin with \(D_1\), it is possible to improve: increase volume without increasing cost, decrease cost without decreasing volume, or even decrease cost and increase volume at the same time, as \(D_0\) proves. So \(D_1\) is an inefficient design because there are clearly better alternatives. The corresponding solution is said to be dominated by others; for example, \(D_0\) dominates \(D_1\).

On the contrary, \(D_0\) cannot be improved in such a way. Once you reach \(D_0\), it is impossible to make any of the objectives better without making another one worse. The corresponding solution is called Pareto optimal, or Pareto efficient, and is said to be non-dominated.

The notion of Pareto optimality reflects the nature of multi-objective optimization problems: in a nontrivial case, there does not exist a single solution that simultaneously optimizes every objective. Optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
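The dominance relation described above is easy to state in code. A minimal sketch, using (cost, volume) pairs taken from the example designs in this section:

```python
def dominates(a, b):
    """True if design a dominates design b.

    Designs are (cost, volume) pairs; cost is minimized and volume is
    maximized, so a must be no worse in both objectives and strictly
    better in at least one.
    """
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    strictly_better = a[0] < b[0] or a[1] > b[1]
    return no_worse and strictly_better

D0 = (754, 62.8)   # r = 2,    h = 5
D1 = (867, 56.5)   # r = 3,    h = 2
D2 = (1028, 100)   # r = 2.34, h = 5.84

print(dominates(D0, D1))                      # True: D0 dominates D1
print(dominates(D0, D2), dominates(D2, D0))   # False False: neither dominates
```

Note that `dominates(D0, D2)` and `dominates(D2, D0)` are both false: the two Pareto optimal designs are incomparable, exactly as discussed below.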

Now consider design \(D_2\): \(r = 2.34\), \(h = 5.84\) that costs $1028 for a volume of 100 cubic meters. It is another Pareto optimal solution — that is, you cannot find a solution that improves one objective without worsening the other compared to \(D_2\). If you compare \(D_2\) with \(D_0\), there is no objective reason to prefer one over the other. These designs can be considered “equally good” until you introduce some additional subjective preference — maybe you like \(D_2\) more because the volume is divisible by 10, or \(D_0\) because its dimensions are integer.

So, when given a multi-objective optimization problem, the goal is not to find a single optimal solution, but to obtain a number of efficient designs that are then studied by a human decision maker.

Note that in general (for a nondegenerate multi-objective problem) there exists an infinite number of Pareto optimal solutions. The set of corresponding objective values is often called the Pareto frontier — for example, the Pareto frontier for the problem in this tutorial can be represented by a curve on the \(CV\) plane. In practice, the goal is not to reconstruct the Pareto frontier exactly, but to obtain enough data for human study — that is, to get a well-representative collection of Pareto optimal solutions that gives an idea of what the true Pareto frontier looks like.
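To see how a representative collection of non-dominated points can emerge from evaluated designs, here is a brute-force sketch that filters a random sample of designs down to its non-dominated subset (the function and variable names are illustrative, and the bounds match those set later in this tutorial):

```python
import math
import random

def cost(r, h):
    # Top/bottom at $10/m^2 (two disks), side at $8/m^2.
    return 20 * math.pi * r**2 + 16 * math.pi * r * h

def volume(r, h):
    return math.pi * r**2 * h

def pareto_filter(points):
    """Keep only non-dominated (cost, volume) points.

    Cost is minimized and volume is maximized; a point is dropped if
    some other point is no worse in both objectives and strictly
    better in at least one. Brute force, O(n^2).
    """
    return [
        p for p in points
        if not any(
            q[0] <= p[0] and q[1] >= p[1] and (q[0] < p[0] or q[1] > p[1])
            for q in points
        )
    ]

random.seed(0)
designs = [(random.uniform(0.5, 5), random.uniform(0.5, 10)) for _ in range(500)]
points = [(cost(r, h), volume(r, h)) for r, h in designs]
front = sorted(pareto_filter(points))
print(len(front), "non-dominated points out of", len(points))
```

Along the resulting front, volume strictly increases with cost: this is the trade-off curve that the \(CV\) scatter plot visualizes.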

The collection of Pareto optimal solutions, commonly understood as the result of solving a multi-objective problem, is finite and is best visualized using several scatter plots (for example, on the \(rh\) and \(CV\) planes) or a parallel coordinates plot, as shown in this tutorial. For brevity, the tutorial further refers to the \(CV\) scatter plot as a Pareto frontier plot, although this is not strictly correct.

## Solution

The solution follows the same general steps as in the *Optimization Basics* tutorial, except that now you are editing an existing single-objective optimization workflow. Required changes are more naturally done in the following order:

- Reformulate the problem to replace the constraint with an objective. As a consequence, you will have to add upper bounds to variables, as explained below.
- Update Optimizer configuration.
- Develop a model for optimization. This is already done, but one important point requires editing the model: the volume is to be maximized, while pSeven always seeks to minimize objectives. For that reason, the sign of this objective should be changed in Model (minimizing \(-V\) is the same as maximizing \(V\)).
- Update Model configuration and the links connecting it to Optimizer.
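The sign change mentioned in the steps above can be illustrated with a plain Python sketch of the model outputs (the function name is illustrative; in the actual workflow, the formulas live in the Model block):

```python
import math

def model(r, h):
    """Return both objectives in to-be-minimized form."""
    cost = 20 * math.pi * r**2 + 16 * math.pi * r * h
    neg_volume = -math.pi * r**2 * h  # minus sign: minimizing -V maximizes V
    return cost, neg_volume

c, neg_v = model(2, 5)
print(round(c), round(neg_v, 1))  # ~754 and -62.8
```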

### Problem Definition

The block that defines the optimization problem (Optimizer) already exists in the workflow, but requires changes to its configuration in order to redefine the problem as a multi-objective one.

Begin with removing the existing constraint, then add the new objective.

- Specify the name: `vol`.
- Leave other settings default and click .

The new objective appears in the *Objectives* pane. In the background, Optimizer removes the existing `vol` port, then adds it back, now mapping it to the new objective in the problem definition. At this point, the link from Model.vol to Optimizer.vol is lost (see the *Issues* pane).

You can also see that the current definition does not limit the search space, since the original Optimizer configuration set only lower bounds for \(r\) and \(h\). In practice, the search should be limited for both logical and computational reasons. Set upper bounds for both variables to add this limit.

- In the *Variables* pane, double-click the *Upper bound* field to edit.
- Set some upper bounds for radius and height — for example, `5` and `10` respectively.

You can also use the solution found in *Optimization Basics* as the initial guess in optimization.

- In the *Variables* pane, double-click the *Initial guess* field to edit.
- For radius, guess `2`.
- For height, guess `5`.

Verify the Optimizer configuration.

Click to apply changes and close the dialog.

### Model

As a result of the Optimizer update, two changes are needed in the current workflow: change the sign of the model function that calculates volume, and restore the link from Model.vol to Optimizer.vol.

Open Model configuration to change the formula.

- Add a minus sign to the right part in the volume formula.
- Leave other settings as is and click to apply changes.

Next, restore the link.

- Without removing existing links, drag a link from Model to Optimizer.

Port names match, so the *Autolink* dialog appears.

- Accept the suggested link and click .

This completes the required changes to the workflow.

## Workflow

No changes in workflow structure were needed, so it looks the same as before.

The workflow also retains the port monitoring settings from *Optimization Basics*, so no changes to workflow configuration are required. Optionally, you can remove the *Optimizer.optimal_c* monitor — it becomes useless because the reformulated problem has no constraints except trivial ones (variable bounds).

- Save the workflow, run it, and switch to *Analyze* to view the results.

## Results

The data gathered from the workflow is stored in the project database. The *Optimizer.optimal_x* and *Optimizer.optimal_f* records contain the solution data, while other records contain the history of evaluations.

- In *Analyze*, create a new report.
- Add the records shown above to data series. It is recommended to rename the data series afterwards to make plot configuration easier.

Note

In the *Optimizer.optimal_x* record, variables come in order of the *Variables* list in Optimizer configuration (\(r\), \(h\) in this tutorial). Similarly, in the *Optimizer.optimal_f* record the order of objectives is \(C\), \(V\).
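Assuming the records are exported as flat arrays in that order, unpacking them looks like this (the sample values below are illustrative, not actual workflow output; volume is stored as \(-V\) because of the sign change in Model):

```python
# Hypothetical solution data: optimal_x rows follow the (r, h) order,
# optimal_f rows follow the (C, V) order, with V stored negated.
optimal_x = [(2.0, 5.0), (2.34, 5.84)]
optimal_f = [(754.0, -62.8), (1028.0, -100.0)]

lines = [
    f"r={r}, h={h}: cost=${c:.0f}, volume={-neg_v:.1f} m^3"
    for (r, h), (c, neg_v) in zip(optimal_x, optimal_f)
]
for line in lines:
    print(line)
```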

Sample Pareto frontier plot with added history of evaluations. Note that the volume on the plot is negative because the objective sign was changed in Model to allow minimization.

- First dataset, history (all evaluations): cost as X, volume as Y.
- Second dataset (Pareto points): optimal cost as X, optimal volume as Y.

Sample optimal solutions plot in \(r,h\) with added history of evaluations.

- First dataset, history (evaluated points): radius as X, height as Y.
- Second dataset (optimum points): optimal radius as X, optimal height as Y.

Another useful tool to visualize a multi-objective problem solution is a parallel coordinates plot. Such plots can show multidimensional (\(r,h,C,V\)) data.

This plot supports one dataset only. For faster plotting, you can first select all optimal data in the *Data Series* panel, then click the parallel coordinates plot button on the report toolbar.

Dimension axes can be reordered freely by dragging them on the plot. Also you can select specific ranges to get “slices” of data.
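Under the hood, a parallel coordinates plot draws each design as a polyline across axes rescaled to a common range. A minimal sketch of that per-axis normalization (the sample rows are the \(r, h, C, V\) values of the designs discussed earlier):

```python
def normalize_columns(rows):
    """Rescale each column to [0, 1], as a parallel coordinates plot does."""
    cols = list(zip(*rows))
    lows = [min(c) for c in cols]
    highs = [max(c) for c in cols]
    return [
        tuple(
            (v - lo) / (hi - lo) if hi > lo else 0.5
            for v, lo, hi in zip(row, lows, highs)
        )
        for row in rows
    ]

# Rows: (r, h, C, V) for D0, D2, D1.
data = [(2.0, 5.0, 754.0, 62.8), (2.34, 5.84, 1028.0, 100.0), (3.0, 2.0, 867.0, 56.5)]
for row in normalize_columns(data):
    print([round(v, 2) for v in row])
```

Selecting a range on one axis and keeping only the polylines passing through it is exactly the "slicing" behavior described above.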

Such plots are commonly used when making decisions, that is, finally selecting the design to be accepted. As mentioned in section *Task* above, the practical importance of multi-objective optimization is that it limits the number of designs subject to human analysis, offering only optimal (for example, cost-effective) solutions.

## Conclusion

This tutorial describes the essentials of multi-objective optimization in pSeven. It is recommended to keep the workflow you have created because similar configuration is used in other optimization tutorials.

- Save the workflow from this tutorial to keep an example of a multi-objective optimization problem. The report you created when viewing results is no longer needed, but you can save it too if you wish.

Real optimization tasks usually involve many more model variables and functions, but the general steps to solve them remain the same. Note also that the model to be optimized is often implemented as a *Composite* block that wraps external solvers and other programs; a related example can be found in the advanced *Integrated Component Optimization* tutorial.