15.5. Version Compatibility Issues

15.5.1. pSeven Core v2024.03

Upgrading, downgrading and using multiple versions

This version removes support for the custom side-by-side installation feature first introduced in MACROS 1.7.6. The feature is obsolete: it has been fully replaced by standard Python virtual environments, which enable side-by-side installation of different versions, and by basic pip functions, which handle package upgrades and downgrades. It was also incompatible with pip, the recommended and commonly used Python package installer.

For installation in virtual environments, see venv in Python documentation and installing packages in a virtual environment using pip. For version upgrade and downgrade, see the guidelines in section Version Upgrade and Downgrade and the pip documentation.

Note that in order to install pSeven Core v2024.03 or later, you have to manually remove pSeven Core versions v2024.02 and earlier. If you get an error while installing pSeven Core with pip, see section Installation and Uninstallation in known issues.

15.5.2. pSeven Core 6.47

Compatibility between an initial model and the selected training technique in GTApprox

pSeven Core 6.47 removes an inconsistency in the gtapprox.Builder.build() and gtapprox.Builder.build_smart() method behavior when you use them to update an initial model but select a technique not compatible with that model by setting the GTApprox/Technique option.

In versions 6.46 and below, GTApprox behaved inconsistently when the initial model and the training technique were incompatible: some cases raised an exception, while in others the initial model was silently ignored, or ignored with a warning. In the cases where the initial model was ignored, the update in fact replaced the initial model with a new model trained only on the new data set, so the initial model was effectively lost.

pSeven Core 6.47 makes this behavior consistent to avoid such loss. Since 6.47, every such incompatibility raises an InapplicableTechniqueException, meaning that GTApprox in 6.47 and above raises an exception in certain cases that did not raise one in 6.46 and below. This behavior is intended to block the model update when the model and technique are incompatible.

If you are using the model update feature (initial models, incremental training) in GTApprox, review your code when you update from pSeven Core 6.46 or an earlier version to any newer version (6.47 and above). Any code that passes the initial_model argument to build() or build_smart() must handle the possible InapplicableTechniqueException.
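The required handling pattern can be sketched as follows. This is an illustration only: InapplicableTechniqueException and build() below are local stand-ins for the real da.p7core.gtapprox API, and the set of techniques treated as compatible is made up for the example.

```python
# Illustrative sketch: stand-in names, not the actual da.p7core classes.
class InapplicableTechniqueException(Exception):
    """Stand-in for the exception GTApprox raises since 6.47."""

def build(x, y, initial_model=None, technique=None):
    # Stand-in for gtapprox.Builder.build(): since 6.47, an initial model
    # combined with an incompatible technique raises an exception instead
    # of silently discarding the initial model.
    compatible = technique in (None, "GBRT", "MoA")  # assumed set, for illustration
    if initial_model is not None and not compatible:
        raise InapplicableTechniqueException(technique)
    return {"technique": technique, "updated": initial_model is not None}

try:
    model = build([[0.0]], [[1.0]], initial_model="old", technique="RSM")
except InapplicableTechniqueException:
    # Decide explicitly what to do on incompatibility: here, retrain
    # from scratch on the new data instead of updating.
    model = build([[0.0]], [[1.0]], technique="RSM")

assert model["updated"] is False
```

The key point is that the fallback is now an explicit decision in your code, rather than a silent replacement performed by GTApprox.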

Changes in default GTApprox behavior

Note

The issue described below is fixed in pSeven Core 6.48. If you update from pSeven Core 6.46 or an earlier version, you can avoid this issue altogether by skipping 6.47 and updating directly to 6.48 or a more recent version.

pSeven Core 6.47 contains a compatibility issue: the default GTApprox behavior when training models may change unexpectedly due to the changed default value of the GTApprox/OutputTransformation option.

In 6.46 and earlier versions, the default was "none", meaning no data transformations are applied by default. 6.47 changes the default to "auto" to enable tuning the GTApprox/OutputTransformation option during smart training (build_smart()). Thus by default 6.47 runs statistical tests on the training data and automatically applies a log transformation to any output whose value distribution is statistically similar to an exponential distribution. As a result, models trained with default settings in 6.47 may differ from those trained in 6.46 or earlier. Note that the issue appears in 6.47 only if you do not set the GTApprox/OutputTransformation option, because the change affects the default behavior.
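As a rough illustration of why a log transformation changes what the model sees (a conceptual sketch only, not GTApprox's actual statistical test), an exponentially distributed output is strongly right-skewed, while its logarithm is much closer to symmetric:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=10_000)  # heavily right-skewed output

def skewness(a):
    # Sample skewness: third central moment over variance^(3/2).
    a = np.asarray(a, dtype=float)
    d = a - a.mean()
    return float((d**3).mean() / (d**2).mean() ** 1.5)

raw_skew = skewness(y)          # strongly positive for exponential data
log_skew = skewness(np.log(y))  # much closer to zero after the transform

assert raw_skew > 0
assert abs(log_skew) < abs(raw_skew)
```

A model trained on the transformed output therefore fits a differently shaped target, which is why defaults-only training can produce visibly different models in 6.47.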

To ensure that the update from 6.46 or earlier to 6.47 does not change the behavior, update your code so it sets the GTApprox/OutputTransformation option to "none" (explicitly set the value that was the default in 6.46 and earlier versions).

This issue is fixed since pSeven Core 6.48 in a way compatible with 6.47 as well as with 6.46 and earlier versions. If you update from 6.46 or earlier, it is recommended to skip 6.47 and update directly to 6.48 or a more recent version, thus avoiding the issue. If you have already updated to 6.47 and continue to 6.48 or greater, no changes are needed: if you did not apply the above fix in 6.47, updating to 6.48 or greater restores the default behavior of 6.46 and earlier, and if you did apply the fix, the update keeps the behavior the fix established. In the latter case you may consider reverting the fix, as it is no longer needed after you update from 6.47 to 6.48 or greater.

15.5.3. pSeven Core 6.36

Categorical variables in GTDoE

This release changes the handling of categorical variables in GTDoE, resulting in improved compatibility between the Adaptive Design technique and other techniques, which may be used to obtain an initial sample for Adaptive Design. However, default results of certain techniques in 6.36 may differ from 6.35.

The Adaptive Design technique treats categorical variables as a special case where it is not possible to model the variable-response dependency, which requires running an independent adaptive study for every possible combination of categories (levels of categorical variables). In tasks with categorical variables, an initial sample passed to Adaptive Design should include all combinations of categories, and sample points should be distributed between those combinations as evenly as possible. Naturally, samples generated by other techniques should have similar properties, since those samples may further be used as initial samples in Adaptive Design. However, many techniques disregarded this consideration and handled categorical variables much the same as discrete ones, and certain space-filling techniques did not support categorical variables at all.

To allow using results of various techniques as initial samples in Adaptive Design without issues, pSeven Core 6.36 changes the behavior of those techniques so that they generate all possible combinations of categories, except the Fractional Factorial and Orthogonal Array techniques, where this would contradict their purpose. The new behavior is the default in build_doe(), while generate() keeps the old behavior by default and switches to the new one if you call it with compatibility set to False. To reproduce the 6.35 behavior in build_doe(), you can redefine categorical variables as discrete using the @GT/VariableType hint.
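The requirement that a generated sample cover all combinations of categories can be illustrated in plain Python. The two categorical variables and their levels below are made up for the example:

```python
from itertools import product

# Hypothetical categorical variables with their levels:
material = ["steel", "aluminum"]
coating = ["none", "zinc", "paint"]

# Since 6.36, the affected techniques distribute generated points over
# all combinations of categories, i.e. over the full Cartesian product:
combinations = list(product(material, coating))
assert len(combinations) == len(material) * len(coating)  # 6 combinations
```

An initial sample for Adaptive Design should contain points in each of these 6 combinations, distributed between them as evenly as possible.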

The changes described above do not affect the following techniques, whose handling of categorical variables is the same in 6.35 and 6.36:

  • Full Factorial
  • Fractional Factorial
  • Orthogonal Array
  • Adaptive Design

15.5.4. pSeven Core 6.31

Regression model information for RSM

This release changes handling of categorical variables in GTApprox, which results in improved quality of models with categorical variables but creates a minor compatibility issue in RSM models with categorical variables.

Previously, all RSM models provided regression model information: the "Regression Model" key was always present in model details, except in a single special case, when an output transformation was applied during training (using the GTApprox/OutputTransformation option).

Since 6.31, the "Regression Model" key may also be omitted for RSM models with categorical variables. This case is described in more detail in section Regression Model Information.

This issue is limited to RSM models, and among those it affects only models with categorical variables; RSM models where all variables are continuous are not affected.

15.5.5. pSeven Core 6.17

Parallelization options

Due to a rework of internal parallelization mechanisms, and to keep pSeven Core tools consistent, this release lowers the maximum valid value of the following options to 512 (was 100000):

If your code sets any of these options to a non-default value, you may have to review it to ensure that it is valid with 6.17: values higher than 512 now raise an InvalidOptionValueError exception.

This change does not affect the default behavior of the above options.

Orthogonal Array options

Due to improvements in the Orthogonal Array GTDoE technique, this release adds an auto setting and reintroduces an upper limit for the GTDoE/OrthogonalArray/MultistartIterations option, which sets the number of initial guesses (multistarts) used to generate an orthogonal array. The new limit is 65535 (previously unlimited); values higher than 65535 now raise an InvalidOptionValueError exception. The auto setting is now the default.

If your code sets this option to a non-default value, it is recommended to unset it (use the default auto setting). By default, the Orthogonal Array technique now automatically selects an appropriate number of multistarts according to the number of variables, levels, and the type of array set by the GTDoE/OrthogonalArray/ArrayType option.

15.5.6. pSeven Core 6.16.1

Orthogonal Array DoE technique options

This release removes the GTDoE/OrthogonalArray/MaxIterations option, deprecated since 6.16 because it has no effect in the new version of the Orthogonal Array (OA) technique released in that version. GTDoE no longer recognizes "GTDoE/OrthogonalArray/MaxIterations" as a valid option name and raises an InvalidOptionNameError exception if you try to set it. If you run the OA technique, you may need to update your code to use the GTDoE/OrthogonalArray/MultistartIterations option instead.

For full details on the OA technique improvements and new features available since pSeven Core 6.16, see the technique description in section Orthogonal Array.

15.5.7. pSeven Core 6.16

New version of the Orthogonal Array DoE technique

This release contains a new implementation of the Orthogonal Array (OA) technique in GTDoE. Generally the new technique is compatible with the old version and does not require changes in code. Notable changes are:

For full details on the OA technique improvements and new features, see the technique description in section Orthogonal Array.

The OA technique is also used internally by GTSDA rank() when calculating sensitivity indices with the Taguchi ranking technique (see Taguchi Indices). Due to this, Taguchi ranking results in pSeven Core 6.16 may differ from results obtained in previous pSeven Core versions; however, qualitatively the results are the same.

15.5.8. pSeven Core 6.14

GTApprox changes

This release adds an upper limit on the training sample size in the GP and HDAGP techniques: the maximum supported size is now 4000 points. If the sample size exceeds this limit, these techniques now raise an exception. If you have used GP or HDAGP on large samples, it is recommended to switch to the SGP technique.

A related change is the new upper limit for the GTApprox/SGPNumberOfBasePoints option: the maximum valid value is now 4000. Previously there was no specific limit, so you may need to update your option settings if you were using a value greater than 4000.

This release also introduces certain changes in GTApprox options which do not create compatibility issues but may alter behavior or results in some cases. The following options, which previously had fixed default values, are now tuned automatically by default (they use the newly added "Auto" value):

This means that in version 6.14 and above the effective defaults of these options may differ from previous versions, since default values are now determined automatically during training. In most cases, however, this should improve model quality due to better GTApprox tuning.

15.5.9. pSeven Core 6.9

Computationally expensive optimization problems

In this release, significant changes were made to the computational budget allocation and selection policies in various GTOpt algorithms. As a result, the path taken when solving an expensive optimization problem has changed in most cases, which leads to differences when comparing optimization results with those obtained from previous versions. This difference is not an error and should not raise any issues: in the expensive-problem context the optimal solution can only ever be established approximately, so direct comparison is never reliable.

This change also does not introduce any compatibility issues with scripts prepared for older versions, and it does not require any changes in code. However, you may want to review your solver configuration in tasks involving global optimization, that is, surrogate-based optimization (problems that include expensive functions, see Hint Reference) or problems where a non-default GTOpt/GlobalPhaseIntensity value was used. In both cases a certain adjustment of GTOpt/GlobalPhaseIntensity may be needed; for details on selecting a value, see section Local and Global Methods.

15.5.10. pSeven Core 6.8

GTApprox pre- and post-processing

This release removes the deprecated pre- and post-processing methods (preprocess() and postprocess()) from GTApprox. As a replacement, it is recommended to use the automated smart training feature available since version 6.6. Generally, smart model training is more convenient and provides better results due to numerous internal improvements. Note that smart training also supports additional (manual) tuning through hints, similarly to the pre- and post-processing methods (see section Smart Training for details).

Linear approximation

In previous versions, linear approximation was available both as a separate technique named LR (Linear Regression) and as a setting of the RSM technique. The LR technique actually used RSM internally, and the deprecated name was kept for compatibility purposes only. Version 6.8 finally removes this technique, so "LR" is no longer a valid value of GTApprox/Technique. To create a linear approximation in 6.8, select the RSM technique (set GTApprox/Technique to "RSM") and set GTApprox/RSMType to "Linear". Since these are the settings that previous versions used internally when the LR technique was selected, a linear RSM model is the same as an LR model from previous versions.
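The option migration can be sketched as a simple rewrite of an option dictionary. The option names are the real ones from this section; the helper function itself is illustrative (real code would set these values through the Builder options interface):

```python
def translate_options(options):
    # Migrate a pre-6.8 option dict: replace the removed "LR" technique
    # with its documented RSM equivalent. Sketch only; not a da.p7core API.
    options = dict(options)  # do not mutate the caller's dict
    if options.get("GTApprox/Technique") == "LR":
        options["GTApprox/Technique"] = "RSM"
        options["GTApprox/RSMType"] = "Linear"
    return options

migrated = translate_options({"GTApprox/Technique": "LR"})
assert migrated == {"GTApprox/Technique": "RSM", "GTApprox/RSMType": "Linear"}
```

Scripts that never set GTApprox/Technique to "LR" pass through such a migration unchanged.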

15.5.11. pSeven Core 6.7

Project rename

Since version 6.7, the MACROS project is renamed to pSeven Core. Following the project rename, the main macros module was also renamed to p7core. This change does not break compatibility with previous versions: the old name now works as an alias for p7core, so scripts implemented for older versions run with no issues.

Module imports such as:

from da.macros import gtopt

continue to work correctly. Note that if there are multiple versions installed, the import from da.macros will actually import from the latest version regardless of its name, so in general there is no need to update your scripts with the new module name.

Issues can arise if both names are used within the same scope, for example in the same Python script or in a project containing MACROS-based modules. One possible issue is type incompatibility: objects returned by methods imported from macros have the type da.macros.module.Object, not da.p7core.module.Object. Consequently, these objects raise errors in methods imported from p7core, which expect the p7core type.

Thus using both the macros and p7core names in the same scope should be avoided, as in most cases it leads to unexpected behavior. It is recommended to keep using the macros name in older projects and to use p7core in projects implemented with pSeven Core 6.7 or above.
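The type-identity problem above is general Python behavior, not specific to pSeven Core, and can be demonstrated without the library. Here two distinct module objects each define an identical class, mimicking the macros alias and the p7core name (module and class names are illustrative):

```python
import types

# Build two distinct module objects that define the same class source,
# mimicking the old 'da.macros' alias and the new 'da.p7core' name:
src = "class Result:\n    pass\n"
macros = types.ModuleType("da.macros.gtopt")
p7core = types.ModuleType("da.p7core.gtopt")
exec(src, macros.__dict__)
exec(src, p7core.__dict__)

obj = macros.Result()
assert isinstance(obj, macros.Result)
# Same source code, but a different class identity, so the check fails:
assert not isinstance(obj, p7core.Result)
```

Any API that checks argument types by class identity will therefore reject objects created under the other module name, which is exactly the failure mode described above.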

15.5.12. MACROS 6.6

Robust optimization and linearity type hints

Prior to 6.6, GTOpt effectively ignored the @GTOpt/LinearityType hint for objectives and constraints in robust optimization problems, thus losing the opportunity to speed up evaluations of such functions. Version 6.6 improves the robust optimization algorithms, in particular allowing you to specify linear or quadratic functions in robust optimization problems. The algorithm no longer ignores the @GTOpt/LinearityType hint; instead it assumes that functions hinted as linear or quadratic do not depend on any stochastic variable (which is correct by definition). However, GTOpt has no way to check your problem formulation in this case: it cannot automatically test the methods you implemented to evaluate problem objectives and constraints. This can lead to unexpected behavior if you define a function that does depend on a stochastic variable while specifying it as linear or quadratic using the hint.

Note that such a problem formulation is incorrect and should be avoided in general. In previous MACROS versions it was not an issue only because @GTOpt/LinearityType was simply ignored in robust optimization problems.

It is recommended to recheck the code developed for previous versions of MACROS if:

All other problem types are not affected.

15.5.13. MACROS 6.0 Release Candidate 1

GTIVE

This version finally removes the deprecated Generic Tool for Important Variable Extraction (GTIVE) and the corresponding Python module. Sensitivity analysis methods are available in GTSDA. For more details, see GTIVE to GTSDA Migration Guide.

15.5.14. MACROS 5.0

Randomization options

Prior to 5.0, several options controlled the behavior of randomized algorithms in certain GTApprox techniques: GTApprox/HDAInitialization, GTApprox/HDARandomSeed, and GTApprox/SGPSeedValue. To simplify the option structure, these options were removed and replaced with two common options: GTApprox/Deterministic, which switches between the deterministic and non-deterministic modes, and GTApprox/Seed, which sets the fixed seed used in the deterministic mode (when it is on). Also, GTApprox/Seed has no special value for generating a random seed (as GTApprox/HDARandomSeed and GTApprox/SGPSeedValue had); this function is now controlled entirely by GTApprox/Deterministic.

This change can affect compatibility due to option name changes, but default technique behavior and available controls remain the same.

By default, model training in previous versions was deterministic; this behavior is kept as default (GTApprox/Deterministic is on by default). To change HDA, HDAGP and SGP initialization, you can now do one of the following:

  • Keep GTApprox/Deterministic on but change GTApprox/Seed (new deterministic initialization).
  • Switch GTApprox/Deterministic off. In this case GTApprox/Seed is ignored. This is random initialization, the same as setting GTApprox/HDAInitialization to "Random" or using special GTApprox/HDARandomSeed and GTApprox/SGPSeedValue values in previous versions.
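The split between a deterministic switch and a seed follows the usual random-number-generator pattern, sketched here with the standard library (an illustration of the concept only, not the GTApprox internals; the function name and default seed are made up):

```python
import random

def init_weights(n, deterministic=True, seed=2011):
    # Sketch of the GTApprox/Deterministic + GTApprox/Seed split:
    # deterministic mode uses the fixed seed, so runs are reproducible;
    # non-deterministic mode draws a fresh seed on every call.
    rng = random.Random(seed) if deterministic else random.Random()
    return [rng.gauss(0.0, 1.0) for _ in range(n)]

a = init_weights(3, deterministic=True)
b = init_weights(3, deterministic=True)
assert a == b  # deterministic mode reproduces initialization exactly
```

Changing the seed in deterministic mode gives a different but still reproducible initialization, which corresponds to the first bullet above.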

Similar changes were made to the internal validation (cross-validation) options in GTApprox and GTDF, keeping their names and behavior in line with those described above:

  • GTApprox/IVRandomSeed is renamed to GTApprox/IVSeed and simply sets the seed used if GTApprox/IVDeterministic is on. Default behavior is deterministic, the same as in previous versions.
  • GTApprox/Postprocess/IVRandomSeed hint is renamed to GTApprox/Postprocess/IVSeed. Note that cross validation in post-processing is always deterministic.
  • GTDF/IVRandomSeed is renamed to GTDF/IVSeed and sets the seed used if GTDF/IVDeterministic is on. Default behavior is deterministic, the same as in previous versions.

Also, to help reproduce the results obtained in a non-deterministic mode, the seeds generated for training and cross validation are now stored in model info.

15.5.15. MACROS 3.0

Converged points in GTOpt optimization results

Prior to MACROS 3.0, GTOpt Result contained the converged point set in addition to the optimal and infeasible sets. The converged point set was removed because it is intended for internal purposes and often confused users. In fact, it still exists as the "hidden" attribute Result._converged.

This change can affect compatibility but does not lead to any data loss, as explained below.

The process of forming result data sets described in section Optimal and Infeasible Result Sets in fact has more stages. GTOpt first searches for points that satisfy the Karush-Kuhn-Tucker (KKT) optimality conditions. Such points are called "converged" and can be attributed to the true (unknown) Pareto frontier. After obtaining a certain number of converged points, based on the GTOpt/FrontDensity value, GTOpt "diffuses" from them to find more Pareto-optimal points, as illustrated below.

[Figure: converged points vs. the optimal set (converged_vs_optimal.png)]

The rest of the process goes as described: all evaluated points are collected, dominated points are discarded, and non-dominated points form three sets:

  1. converged set — contains the converged points only,
  2. optimal set — contains all feasible non-dominated points, and
  3. infeasible set — contains all infeasible (up to a given threshold) non-dominated points.

In the end, converged points appear both in the converged set (obviously) and in the optimal set (because all converged points are feasible and non-dominated). Thus the difference between the converged and optimal sets is of little interest to the user: converged points are useful for debugging internal solver algorithms, but not much more.
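The non-dominated filtering stage described above can be sketched for a two-objective minimization problem. The point values are made up for the example; the real GTOpt implementation of course differs:

```python
def dominates(p, q):
    # In minimization, p dominates q if p is no worse in every objective
    # and strictly better in at least one.
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

# Hypothetical evaluated (feasible) points in two-objective space:
points = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]

# Discard dominated points; the non-dominated remainder forms the optimal set:
optimal = [p for p in points if not any(dominates(q, p) for q in points)]
assert (3.0, 4.0) not in optimal  # dominated by (2.0, 3.0)
assert len(optimal) == 3
```

Converged points, being feasible and non-dominated, always survive this filter, which is why they appear in the optimal set as well.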

A confusing fact is that the distribution of converged points over the Pareto frontier can change dramatically as a result of seemingly minor changes in solver options. This does not affect result quality, but it is often misinterpreted as an error by users who find such behavior unexpected, and it finally led to the interface change described above.

15.5.16. MACROS 3.0 Release Candidate 1

NumPy data types

Following the introduction of the NumPy requirement in version 1.9.6, MACROS now uses ndarray as its primary data type. Scripts created for older MACROS versions may require a review if they processed results as a list[list[float]]. This change also requires updating scripts that use name indexing of variables in da.p7core.gtopt.ProblemGeneric.evaluate(), da.p7core.blackbox.Blackbox.evaluate(), and similar methods (name indexing is no longer supported).
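Most list-style indexing works on an ndarray unchanged, but code that requires true nested lists can convert explicitly. The result values below are made up for the example; only the ndarray-to-list conversion pattern is the point:

```python
import numpy as np

# A result that older scripts treated as list[list[float]] is now an ndarray:
result = np.array([[1.0, 2.0], [3.0, 4.0]])

# Nested indexing still works on the ndarray (result[0][1] == 2.0), but
# converting explicitly restores the old type where code depends on it:
as_lists = result.tolist()
assert isinstance(as_lists, list) and isinstance(as_lists[0], list)
assert as_lists[0][1] == 2.0
```

Reviewing for places that call list-only methods such as append() on results is usually enough to complete the migration.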

15.5.17. MACROS 3.0 Beta 2

Status

Prior to version 3.0 Beta 2, MACROS used a plain string as a result status (for example, gtopt.Result.status). Hence, to test the status you would do:

>>> result.status == "Success"  # No longer works

Since tool statuses are now implemented as objects, this comparison no longer works. Instead, the status module should be used for comparisons as follows:

>>> from da.p7core import status
>>> result.status == status.SUCCESS

See section Status for full details.

Deprecated options and methods

This release removes a number of deprecated options and methods which are replaced with equivalent or better ones. Overall functionality is not affected, but scripts created for an older MACROS version may require an update when upgrading to 3.0 Release Candidate 1 or above.

All incompatible changes are listed below.

  • GTDF
    • Removed the deprecated GTDF/InterpolationRequired option; GTDF/ExactFitRequired should be used instead. Option type, values and effect are the same; this removal simply finalizes the option name change (see MACROS 1.8.1 changelog for details).
  • GTDoE
    • Removed the deprecated GTDoE/Adaptive/InterpolationRequired option; GTDoE/Adaptive/ExactFitRequired should be used instead. Option type, values and effect are the same; this removal simply finalizes the option name change (see MACROS 1.8.1 changelog for details).
  • GTOpt

    • Removed deprecated GTOpt/GlobalSearch option. Newer option GTOpt/GlobalPhaseIntensity provides equivalent settings and also allows finer tuning of the global optimization algorithm.

Applications that do not explicitly set GTOpt/GlobalSearch require no update. However, a script that sets GTOpt/GlobalSearch (even if it just sets an explicit default) cannot be updated by simply renaming the option. Use an equivalent GTOpt/GlobalPhaseIntensity value:

In both cases, the update does not change solver behavior compared to the previous version: GTOpt/GlobalSearch in fact always used GTOpt/GlobalPhaseIntensity internally. In addition, you can now make the global search more or less aggressive by specifying other option values (see the GTOpt/GlobalPhaseIntensity description for details).

15.5.18. MACROS 1.11.0

Blackboxes

As described in the Blackbox section, some MACROS techniques require the user to implement a blackbox wrapping calls to an external function. Prior to 1.11.0, the related functionality was split across several similar classes: the base Blackbox class; the InteractiveBlackbox class, which added gradient support and was used in Adaptive DoE and blackbox-based data fusion (build_BB()); and da.p7core.gtdr.Blackbox, which essentially copied the base class. To resolve this, 1.11.0 removed all blackbox classes except the base one, which now serves as the common blackbox class. As a result, scripts using blackbox-based methods require certain changes when updating to 1.11.0:

  • The InteractiveBlackbox class was removed. Use common Blackbox instead.
    • Gradient support is now implemented in the common blackbox. Instead of InteractiveBlackbox.evaluate_with_gradient(), use enable_gradients() and evaluate().
    • InteractiveBlackbox.get_gradient_items_order() is replaced with gradients_order.
    • InteractiveBlackbox.get_step_numerical_gradient() is replaced with numerical_gradient_step.
  • The da.p7core.gtdr.Blackbox class was removed. Use common Blackbox instead.

15.5.19. MACROS 1.10.0

Mixture of Approximators

Version 1.9.0 added Mixture of Approximators (MoA) to MACROS Extras, making it available as a separate xtmoa module. In 1.10.0, this module is removed, and MoA is now directly available from GTApprox as one of the approximation techniques — see GTApprox/Technique and MoA options in the GTApprox Option Reference section.

Thus any scripts importing the xtmoa module will no longer run in 1.10.0 and have to be updated to use MoA as the GTApprox technique. Section Examples contains updated MoA usage examples: Mixture of Approximators Basics and Mixture of Approximators Advanced.

15.5.20. MACROS 1.9.0

GTApprox model smoothing

Prior to MACROS 1.9.0, GTApprox models featured so-called dynamic smoothing: specifying the smoothness parameter in a calc(), grad(), calc_ae() or grad_ae() method call evaluated the smoothed model but did not preserve it. This proved to be an inconvenient and error-prone approach, and MACROS 1.7.1 added a static smoothing method, at the time named ironing() to distinguish it from the dynamic smoothing approach. Since then it has undergone several improvements and finally took the form of the three flexible methods released in MACROS 1.9.0:

The following changes will create compatibility issues between MACROS 1.9.0 and previous versions (1.8.5 and older). Scripts using these capabilities will not run in 1.9.0:

This change also affects models exported using the export_to() and (deprecated) save_to_octave() methods: exported models should now be smoothed before export, and they do not provide dynamic smoothing.

Post-processing hints

Some post-processing hints were renamed and are incompatible with previous versions.

  • GTApprox/Postprocess/ExactFit was renamed to GTApprox/Postprocess/ExactFitRequired.
  • GTApprox/Postprocess/LearningTime was renamed to GTApprox/Postprocess/LearningSpeed.
  • Key names in the dictionary which is the value for the GTApprox/Postprocess/Artifacts hint were shortened:
    • 'GTApprox/Postprocess/LargeGradients' to 'LargeGradients',
    • 'GTApprox/Postprocess/SmallScaleNoise' to 'SmallScaleNoise',
    • 'GTApprox/Postprocess/BadExtrapolation' to 'BadExtrapolation'.

15.5.21. MACROS 1.7.6

Upgrading, downgrading and using multiple versions

Note

This feature is no longer available since pSeven Core v2024.03. If you need to install multiple versions, use standard Python virtual environments (venv).

The MACROS and pSeven Core versions that used the distutils installation method (setup.py) supported a custom way of installing multiple versions side by side in the same Python installation and importing any installed version by specifying its number. This feature effectively became deprecated once Python 3.3 introduced virtual environments, which are now the standard and recommended method of installing multiple pSeven Core versions on the same host. pSeven Core versions published on PyPI, starting with v2024.03, no longer support this custom side-by-side installation feature.

15.5.22. MACROS 1.6.0

Platform compatibility of GTDF models

Due to a bug in da.p7core.gtdf.Model.save(), GTDF models from 1.6.0 saved on 32-bit platforms cannot be loaded on 64-bit platforms, and vice versa. Since the error persists in the saved file, it cannot be fixed by updating MACROS (newer versions are also unable to load GTDF models from 1.6.0 if you switch platforms).

This bug is fixed in version 1.6.1 onwards: models saved from 1.6.1 or higher load normally.