Model Evaluation

This tutorial shows how to use an existing approximation model: given a model file, evaluate model responses for values of variables from a test sample and save the results.



Before You Begin

This tutorial requires an existing Tutorials project. If you have not created this project yet, see Tutorial Project first. You will also need the approximation model trained in the Sample-Based Approximation tutorial (model.gtapprox should be found in the Tutorials project directory).

  • Open the Tutorials project.
  • Create a new empty workflow. If you have never created a workflow before, see the Simple Workflow tutorial.
  • Save the workflow as ApproxEvaluation.
  • Switch to Workspace and verify that the workflow file (ApproxEvaluation.p7wf) is added to the project. You can see it in the Project pane.
  • While in Workspace, verify that a file named model.gtapprox exists in the project.
  • Switch to Edit and select the workflow you have just created to continue with the tutorial.


The aim of the tutorial is to use an existing approximation model to predict function values for the values of variables from a test sample (that is, to evaluate model responses).

The test sample can be prepared beforehand (stored in a file) or generated by the workflow itself. This tutorial uses a randomly generated sample, since loading sample files is already described in the Sample-Based Approximation tutorial.

The end result is three samples (two samples of model inputs and one of model responses) which are stored in the project database and can be used to study the model.


Evaluating a pSeven approximation model, which is stored in a file, can be divided into the following general steps:

  1. Configure an Approximation model block to read the file and load the model.
  2. Prepare a test sample — load it from a file or configure a block which will generate it when the workflow runs.
  3. Send the test sample to the Approximation model block to get responses.
  4. Save the results for analysis.
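The four steps above can be sketched outside pSeven as a short script. Note that the model function below is only a hypothetical stand-in for illustration: in the actual tutorial the model is loaded from model.gtapprox by the Approximation model block, which this sketch does not attempt to reproduce.

```python
import csv
import random

# Step 1 stand-in: a placeholder response function. In pSeven, the
# Approximation model block loads the real model from model.gtapprox.
def model(x1, x2):
    return x1 ** 2 + x2  # hypothetical, not the trained model

# Step 2: generate a random test sample within the model's bounding box.
random.seed(0)
sample = [(random.uniform(-4.9614, 9.9396), random.uniform(0.1293, 14.8778))
          for _ in range(1000)]

# Step 3: evaluate model responses for every point of the test sample.
responses = [model(x1, x2) for x1, x2 in sample]

# Step 4: save the results for analysis.
with open("results.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["x1", "x2", "f"])
    for (x1, x2), y in zip(sample, responses):
        writer.writerow([x1, x2, y])
```

In the workflow, the same four steps are carried out by just two connected blocks, with the project database taking the role of the results file.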


Approximation models are loaded into and evaluated by the Approximation model block. In this tutorial, the block loads the model from a file. Alternatively, you can send the model to the Model.Model file port, if it comes from an ApproxBuilder block in the same workflow.

  • Add an Approximation model block. Name it Model.

  • Double-click Model to open its configuration.

  • Click b_browse or the path field in the Approximation model pane to bring up the file selection dialog.

  • In the Select file dialog, select model.gtapprox from the current project directory and click Select.


After selecting the model file you can see general model information in the Model summary pane and model inputs and outputs in the Variables and Responses panes.


The model has two inputs named x1 and x2 and one output named f. Note that these names were read from the training sample file and saved to the model in the Sample-Based Approximation tutorial.

In the Model summary pane, note the Input bounds section. It lists the minimum and maximum values of model inputs (variables) which were found in the training sample. These values specify the bounding box of the model and will be required when you configure the block which generates the input sample.

  • Valid range for the x1 input is (-4.9614, 9.9396).
  • Valid range for the x2 input is (0.1293, 14.8778).

Values of variables which you send to the model for evaluation should be within these ranges. The model can evaluate any input, but it has no information about the input-output dependency outside the bounding box, because the training data contains no points there. As a result, evaluation results for inputs outside the bounding box can be very inaccurate: for such inputs the model tends toward linear extrapolation.
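Because of this, it can be useful to check whether a test point lies inside the model's bounding box before trusting its evaluation. A minimal sketch using the bounds listed above (the function name is illustrative, not part of pSeven):

```python
# Model bounding box from the Model summary pane (min/max per input).
BOUNDS = {
    "x1": (-4.9614, 9.9396),
    "x2": (0.1293, 14.8778),
}

def inside_bounding_box(point):
    """Return True if every input value lies within the model's input bounds."""
    return all(lo <= point[name] <= hi for name, (lo, hi) in BOUNDS.items())

print(inside_bounding_box({"x1": 0.0, "x2": 7.5}))   # True: within training bounds
print(inside_bounding_box({"x1": 12.0, "x2": 7.5}))  # False: extrapolation risk
```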

To save model input and output data for analysis, you have to monitor the Model.x1, Model.x2, and Model.f ports. Monitored data will be stored to the project database when the workflow runs, allowing you to process the results in Analyze.

  • Switch to the Ports tab in the Model block configuration dialog.
  • In the Inputs pane, enable monitoring for the x1 and x2 ports.
  • In the Outputs pane, enable monitoring for the f port.
  • Click OK to save changes and close the configuration dialog.

Test Sample Generation

To generate a random test sample, you can use a Design space exploration block.

  • Add a Design space exploration block. Name it Test Sample.
  • Open Test Sample configuration.

Generating a sample requires you to add sample variables and set their lower and upper bounds.

  • Click the b_blconf_add button in the Variables pane or select Add variables… from the pane’s b_blconf_context menu to bring up the Add variables dialog.

  • Specify names of variables: x1, x2. Leave other settings in this dialog default and click Add.

After adding variables, edit their lower and upper bounds in the Variables pane. Note that the bounds should be set so that the sample's bounding box is the same as, or contained within, the model's bounding box. For simplicity, set the same bounds as shown in the model details in the Model block.

  • For x1: set lower bound to -4.9614 and upper bound to 9.9396.
  • For x2: set lower bound to 0.1293 and upper bound to 14.8778.

Note that when you add variables, the block automatically selects the design space exploration technique to use and the number of sample points to generate (the Number of designs setting). For this tutorial, use the automatically selected technique (Latin hypercube sampling) but increase the number of designs to get enough data for plotting in Analyze.

  • Set Number of designs to 1000.
  • Click OK in the block configuration dialog to save settings and close it.
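Latin hypercube sampling divides each variable's range into as many equal intervals as there are points and places exactly one point in each interval per variable. A small sketch of the idea, using the bounds from this tutorial (this is a simplified illustration, not pSeven's implementation):

```python
import random

def latin_hypercube(bounds, n, seed=0):
    """Generate n points; each variable's range is split into n equal bins
    and every bin receives exactly one point (the Latin hypercube property)."""
    rng = random.Random(seed)
    columns = []
    for lo, hi in bounds:
        step = (hi - lo) / n
        # One random point per bin, then shuffle the bin order per variable.
        col = [lo + (i + rng.random()) * step for i in range(n)]
        rng.shuffle(col)
        columns.append(col)
    return list(zip(*columns))  # n points, one coordinate per variable

# 1000 designs within the model's bounding box, as configured above.
designs = latin_hypercube([(-4.9614, 9.9396), (0.1293, 14.8778)], 1000)
```

Compared to plain uniform random sampling, this stratification spreads points more evenly over each variable's range, which is why it is a good default for generating test samples.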

The Test Sample block will generate the sample in two parts, one for x1 and one for x2. Each part is a RealMatrix which is output to the Test Sample.x1.Optimal designs and Test Sample.x2.Optimal designs ports, respectively. This data has to be sent to Model for evaluation.

  • Link Test Sample.x1.Optimal designs to Model.x1.
  • Link Test Sample.x2.Optimal designs to Model.x2.

Monitoring Settings

The Test Sample block enables monitoring for some of its ports by default. These monitors are not needed in this tutorial, so it is recommended to remove them for convenience. It is also recommended to assign monitoring aliases for the Model ports. These aliases will become the names of the respective records in the project database which store the model's input and output data.

  • Open workflow configuration b_runconf and switch to the Monitoring tab.
  • Select all Test Sample ports and click b_blconf_remove in the toolbar to remove them.
  • Specify monitoring aliases for the Model block ports:
    • Input x1 for Model.x1,
    • Input x2 for Model.x2, and
    • Response for Model.f.
  • Verify your monitoring settings and click OK to save workflow configuration.

Monitored data (model input and response) will be stored to the project database when the workflow runs, allowing you to process the results in Analyze.


The finished workflow is very simple: it contains only two blocks. Test Sample generates a data sample with the specified number of points and bounds of variables. Model loads the model from the model.gtapprox file in the project directory and evaluates model outputs for inputs from the generated sample.


Note that results of this workflow are saved to the project database because Model input and output ports are monitored.

  • Verify that monitoring is enabled for the Model.x1, Model.x2, and Model.f ports.
  • Verify that monitoring is disabled for the Test Sample block ports. This block enables monitoring for some of its ports by default, but currently this data is not needed.
  • Save the workflow and run it.

When the workflow finishes, switch to Analyze to view the results.


The data from the Model.x1, Model.x2, and Model.f ports is saved to the project database as the Input x1, Input x2, and Response records, respectively. Note that the records in Monitoring have the same names as the monitoring aliases set in the workflow configuration.


To visualize the results you can, for example, plot the model’s response surface in 3D.

  • In Analyze, create a new report and open the Data series pane to add report data.
  • Select the Input x1, Input x2 and Response records in the project database and drag them to the Data series pane. This creates new data series containing model input and response data.

Add a 3D response surface plot.

  • Dataset: model inputs as D0 and D1, model response as D2. See 3D Plot for details on using this plot.

Additionally, you can use a 2D plot to show the sample generated by the Test Sample block.

  • Dataset: Input x1 as D0, Input x2 as D1. See 2D Plot for details on using this plot.
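Outside of pSeven's Analyze views, the same two kinds of plots can be reproduced with a general-purpose plotting library. A rough matplotlib sketch using placeholder data in place of the Input x1, Input x2 and Response records (matplotlib and the stand-in response function are assumptions here, not part of pSeven):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no GUI needed
import matplotlib.pyplot as plt

# Placeholder data standing in for the Input x1, Input x2 and Response records.
rng = np.random.default_rng(0)
x1 = rng.uniform(-4.9614, 9.9396, 1000)
x2 = rng.uniform(0.1293, 14.8778, 1000)
f = x1 ** 2 + x2  # hypothetical response, not the trained model

fig = plt.figure(figsize=(10, 4))

# 3D response surface analogue: responses plotted over the two inputs.
ax3d = fig.add_subplot(1, 2, 1, projection="3d")
ax3d.scatter(x1, x2, f, s=2)
ax3d.set_xlabel("x1"); ax3d.set_ylabel("x2"); ax3d.set_zlabel("f")

# 2D plot of the generated sample itself.
ax2d = fig.add_subplot(1, 2, 2)
ax2d.scatter(x1, x2, s=2)
ax2d.set_xlabel("x1"); ax2d.set_ylabel("x2")

fig.savefig("model_plots.png")
```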


For a guide on how to add and configure plots, see also section Plotting in the Results and Reports tutorial.


This tutorial explains the basics of evaluating an approximation model in pSeven in batch mode: the input contains a number of sample points, and all responses are evaluated at once. The Approximation model block, in fact, is not limited to batch calculations and can also process sequential input. For example, you can remove the Test Sample block and uplink Model.x1, Model.x2, and Model.f instead to study the model manually, or use the Model block in an iterative workflow, such as an optimization cycle.
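The difference between the two modes can be sketched as follows, again with a hypothetical stand-in for the model's response (the same evaluation either consumes the whole sample at once or one point per iteration):

```python
# Hypothetical stand-in for the approximation model's response.
def model(x1, x2):
    return x1 ** 2 + x2

points = [(-1.0, 0.5), (0.0, 1.0), (2.0, 3.0)]

# Batch mode: the whole sample goes in, all responses come out at once.
batch_responses = [model(x1, x2) for x1, x2 in points]

# Sequential mode: points arrive one at a time, as in an optimization cycle.
sequential_responses = []
for x1, x2 in points:
    sequential_responses.append(model(x1, x2))

assert batch_responses == sequential_responses  # same model, same answers
```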