optimagic API#

Optimization#

maximize
optimagic.maximize(fun: FunType | CriterionType | None = None, params: PyTree | None = None, algorithm: AlgorithmType | None = None, *, bounds: Bounds | ScipyBounds | Sequence[tuple[float, float]] | None = None, constraints: ConstraintsType | None = None, fun_kwargs: dict[str, Any] | None = None, algo_options: dict[str, Any] | None = None, jac: JacType | list[JacType] | None = None, jac_kwargs: dict[str, Any] | None = None, fun_and_jac: FunAndJacType | CriterionAndDerivativeType | None = None, fun_and_jac_kwargs: dict[str, Any] | None = None, numdiff_options: NumdiffOptions | NumdiffOptionsDict | None = None, logging: bool | str | Path | LogOptions | dict[str, Any] | None = None, error_handling: ErrorHandling | ErrorHandlingLiteral = ErrorHandling.RAISE, error_penalty: dict[str, float] | None = None, scaling: bool | ScalingOptions | ScalingOptionsDict = False, multistart: bool | MultistartOptions | MultistartOptionsDict = False, collect_history: bool = True, skip_checks: bool = False, x0: PyTree | None = None, method: str | None = None, args: tuple[Any] | None = None, hess: HessType | None = None, hessp: HessType | None = None, callback: CallbackType | None = None, options: dict[str, Any] | None = None, tol: NonNegativeFloat | None = None, criterion: CriterionType | None = None, criterion_kwargs: dict[str, Any] | None = None, derivative: JacType | None = None, derivative_kwargs: dict[str, Any] | None = None, criterion_and_derivative: CriterionAndDerivativeType | None = None, criterion_and_derivative_kwargs: dict[str, Any] | None = None, log_options: dict[str, Any] | None = None, lower_bounds: PyTree | None = None, upper_bounds: PyTree | None = None, soft_lower_bounds: PyTree | None = None, soft_upper_bounds: PyTree | None = None, scaling_options: dict[str, Any] | None = None, multistart_options: dict[str, Any] | None = None) OptimizeResult[source]#

Maximize fun using algorithm subject to constraints.

Parameters
  • fun – The objective function of a scalar, least-squares or likelihood optimization problem. Non-scalar objective functions have to be marked with the mark.likelihood or mark.least_squares decorators. fun maps params and fun_kwargs to an objective value. See How to write objective functions for details and examples.

  • params – The start parameters for the optimization. Params can be numpy arrays, dictionaries, pandas.Series, pandas.DataFrames, NamedTuples, floats, lists, and any nested combination thereof. See How to specify params for details and examples.

  • algorithm – The optimization algorithm to use. Can be a string, subclass of optimagic.Algorithm or an instance of a subclass of optimagic.Algorithm. For guidelines on how to choose an algorithm see Which optimizer to use. For examples of specifying and configuring algorithms see How to specify algorithms and algorithm specific options.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are used for sampling based optimizers but are not enforced during optimization. Each bound type mirrors the structure of params. See How to specify bounds for details and examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

  • constraints – Constraints for the optimization problem. Constraints can be specified as a single optimagic.Constraint object or as a list of Constraint objects. For details and examples check How to specify constraints.

  • fun_kwargs – Additional keyword arguments for the objective function.

  • algo_options – Additional options for the optimization algorithm. algo_options is an alternative to configuring algorithm objects directly. See Optimizers for supported options of each algorithm.

  • jac – The first derivative of fun. Providing a closed form derivative can be a great way to speed up your optimization. The easiest way to get a derivative for your objective function is to use an autodiff framework like JAX. For details and examples see How to speed up your optimization using derivatives.

  • jac_kwargs – Additional keyword arguments for jac.

  • fun_and_jac – A function that returns both the objective value and the derivative. This can be used to exploit synergies in the calculation of the function value and its derivative. For details and examples see How to speed up your optimization using derivatives.

  • fun_and_jac_kwargs – Additional keyword arguments for fun_and_jac.

  • numdiff_options – Options for numerical differentiation. Can be a dictionary or an instance of optimagic.NumdiffOptions.

  • logging – If None, no logging is used. If a str or pathlib.Path is provided, it is interpreted as a path to an sqlite3 file (which typically has the file extension .db). If the file does not exist, it will be created, and the optimization history will be stored in that database. For more customization, provide LogOptions. For details and examples see How to use logging.

  • error_handling – If “raise” or ErrorHandling.RAISE, exceptions that occur during the optimization are raised and the optimization is stopped. If “continue” or ErrorHandling.CONTINUE, exceptions are caught and the function value and its derivative are replaced by penalty values. The penalty values are constructed such that the optimizer is guided back towards the start parameters until a feasible region is reached and then continues the optimization from there. For details see How to handle errors during optimization.

  • error_penalty – A dictionary with the keys “slope” and “constant” that influences the magnitude of the penalty values. For maximization problems both should be negative. For details see How to handle errors during optimization.

  • scaling – If None or False, the parameter space is not rescaled. If True, a heuristic is used to improve the conditioning of the optimization problem. To choose which heuristic is used and to customize the scaling, provide a dictionary or an instance of optimagic.ScalingOptions. For details and examples see How to scale optimization problems.

  • multistart – If None or False, no multistart approach is used. If True, the optimization is restarted from multiple starting points. Note that this requires finite bounds or soft bounds for all parameters. To customize the multistart approach, provide a dictionary or an instance of optimagic.MultistartOptions. For details and examples see How to do multistart optimizations.

  • collect_history – If True, the optimization history is collected and returned in the OptimizeResult. This is required to create criterion_plot or params_plot from an OptimizeResult.

  • skip_checks – If True, some checks are skipped to speed up the optimization. This is only relevant if your objective function is very fast, i.e. runs in a few microseconds.

  • x0 – Alias for params for scipy compatibility.

  • method – Alternative to algorithm for scipy compatibility. With method you can select scipy optimizers via their original scipy name.

  • args – Alternative to fun_kwargs for scipy compatibility.

  • hess – Not yet supported.

  • hessp – Not yet supported.

  • callback – Not yet supported.

  • options – Not yet supported.

  • tol – Not yet supported.

  • criterion – Deprecated. Use fun instead.

  • criterion_kwargs – Deprecated. Use fun_kwargs instead.

  • derivative – Deprecated. Use jac instead.

  • derivative_kwargs – Deprecated. Use jac_kwargs instead.

  • criterion_and_derivative – Deprecated. Use fun_and_jac instead.

  • criterion_and_derivative_kwargs – Deprecated. Use fun_and_jac_kwargs instead.

  • log_options – Deprecated. Use logging instead.

  • lower_bounds – Deprecated. Use bounds instead.

  • upper_bounds – Deprecated. Use bounds instead.

  • soft_lower_bounds – Deprecated. Use bounds instead.

  • soft_upper_bounds – Deprecated. Use bounds instead.

  • scaling_options – Deprecated. Use scaling instead.

  • multistart_options – Deprecated. Use multistart instead.
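
For illustration, a minimal sketch of a maximize call (the concave objective and start parameters are made up for this example):

    import numpy as np
    import optimagic as om

    def fun(params):
        # Illustrative concave objective; its maximum is at params = 0.
        return -np.sum(params**2)

    res = om.maximize(
        fun=fun,
        params=np.array([1.0, 2.0, 3.0]),
        algorithm="scipy_lbfgsb",
    )
    print(res.params)  # close to [0., 0., 0.]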

minimize
optimagic.minimize(fun: FunType | CriterionType | None = None, params: PyTree | None = None, algorithm: AlgorithmType | None = None, *, bounds: Bounds | ScipyBounds | Sequence[tuple[float, float]] | None = None, constraints: ConstraintsType | None = None, fun_kwargs: dict[str, Any] | None = None, algo_options: dict[str, Any] | None = None, jac: JacType | list[JacType] | None = None, jac_kwargs: dict[str, Any] | None = None, fun_and_jac: FunAndJacType | CriterionAndDerivativeType | None = None, fun_and_jac_kwargs: dict[str, Any] | None = None, numdiff_options: NumdiffOptions | NumdiffOptionsDict | None = None, logging: bool | str | Path | LogOptions | dict[str, Any] | None = None, error_handling: ErrorHandling | ErrorHandlingLiteral = ErrorHandling.RAISE, error_penalty: dict[str, float] | None = None, scaling: bool | ScalingOptions | ScalingOptionsDict = False, multistart: bool | MultistartOptions | MultistartOptionsDict = False, collect_history: bool = True, skip_checks: bool = False, x0: PyTree | None = None, method: str | None = None, args: tuple[Any] | None = None, hess: HessType | None = None, hessp: HessType | None = None, callback: CallbackType | None = None, options: dict[str, Any] | None = None, tol: NonNegativeFloat | None = None, criterion: CriterionType | None = None, criterion_kwargs: dict[str, Any] | None = None, derivative: JacType | None = None, derivative_kwargs: dict[str, Any] | None = None, criterion_and_derivative: CriterionAndDerivativeType | None = None, criterion_and_derivative_kwargs: dict[str, Any] | None = None, log_options: dict[str, Any] | None = None, lower_bounds: PyTree | None = None, upper_bounds: PyTree | None = None, soft_lower_bounds: PyTree | None = None, soft_upper_bounds: PyTree | None = None, scaling_options: dict[str, Any] | None = None, multistart_options: dict[str, Any] | None = None) OptimizeResult[source]#

Minimize fun using algorithm subject to constraints.

Parameters
  • fun – The objective function of a scalar, least-squares or likelihood optimization problem. Non-scalar objective functions have to be marked with the mark.likelihood or mark.least_squares decorators. fun maps params and fun_kwargs to an objective value. See How to write objective functions for details and examples.

  • params – The start parameters for the optimization. Params can be numpy arrays, dictionaries, pandas.Series, pandas.DataFrames, NamedTuples, floats, lists, and any nested combination thereof. See How to specify params for details and examples.

  • algorithm – The optimization algorithm to use. Can be a string, subclass of optimagic.Algorithm or an instance of a subclass of optimagic.Algorithm. For guidelines on how to choose an algorithm see Which optimizer to use. For examples of specifying and configuring algorithms see How to specify algorithms and algorithm specific options.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are used for sampling based optimizers but are not enforced during optimization. Each bound type mirrors the structure of params. See How to specify bounds for details and examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

  • constraints – Constraints for the optimization problem. Constraints can be specified as a single optimagic.Constraint object or as a list of Constraint objects. For details and examples check How to specify constraints.

  • fun_kwargs – Additional keyword arguments for the objective function.

  • algo_options – Additional options for the optimization algorithm. algo_options is an alternative to configuring algorithm objects directly. See Optimizers for supported options of each algorithm.

  • jac – The first derivative of fun. Providing a closed form derivative can be a great way to speed up your optimization. The easiest way to get a derivative for your objective function is to use an autodiff framework like JAX. For details and examples see How to speed up your optimization using derivatives.

  • jac_kwargs – Additional keyword arguments for jac.

  • fun_and_jac – A function that returns both the objective value and the derivative. This can be used to exploit synergies in the calculation of the function value and its derivative. For details and examples see How to speed up your optimization using derivatives.

  • fun_and_jac_kwargs – Additional keyword arguments for fun_and_jac.

  • numdiff_options – Options for numerical differentiation. Can be a dictionary or an instance of optimagic.NumdiffOptions.

  • logging – If None, no logging is used. If a str or pathlib.Path is provided, it is interpreted as a path to an sqlite3 file (which typically has the file extension .db). If the file does not exist, it will be created, and the optimization history will be stored in that database. For more customization, provide LogOptions. For details and examples see How to use logging.

  • error_handling – If “raise” or ErrorHandling.RAISE, exceptions that occur during the optimization are raised and the optimization is stopped. If “continue” or ErrorHandling.CONTINUE, exceptions are caught and the function value and its derivative are replaced by penalty values. The penalty values are constructed such that the optimizer is guided back towards the start parameters until a feasible region is reached and then continues the optimization from there. For details see How to handle errors during optimization.

  • error_penalty – A dictionary with the keys “slope” and “constant” that influences the magnitude of the penalty values. For minimization problems both should be positive. For details see How to handle errors during optimization.

  • scaling – If None or False, the parameter space is not rescaled. If True, a heuristic is used to improve the conditioning of the optimization problem. To choose which heuristic is used and to customize the scaling, provide a dictionary or an instance of optimagic.ScalingOptions. For details and examples see How to scale optimization problems.

  • multistart – If None or False, no multistart approach is used. If True, the optimization is restarted from multiple starting points. Note that this requires finite bounds or soft bounds for all parameters. To customize the multistart approach, provide a dictionary or an instance of optimagic.MultistartOptions. For details and examples see How to do multistart optimizations.

  • collect_history – If True, the optimization history is collected and returned in the OptimizeResult. This is required to create criterion_plot or params_plot from an OptimizeResult.

  • skip_checks – If True, some checks are skipped to speed up the optimization. This is only relevant if your objective function is very fast, i.e. runs in a few microseconds.

  • x0 – Alias for params for scipy compatibility.

  • method – Alternative to algorithm for scipy compatibility. With method you can select scipy optimizers via their original scipy name.

  • args – Alternative to fun_kwargs for scipy compatibility.

  • hess – Not yet supported.

  • hessp – Not yet supported.

  • callback – Not yet supported.

  • options – Not yet supported.

  • tol – Not yet supported.

  • criterion – Deprecated. Use fun instead.

  • criterion_kwargs – Deprecated. Use fun_kwargs instead.

  • derivative – Deprecated. Use jac instead.

  • derivative_kwargs – Deprecated. Use jac_kwargs instead.

  • criterion_and_derivative – Deprecated. Use fun_and_jac instead.

  • criterion_and_derivative_kwargs – Deprecated. Use fun_and_jac_kwargs instead.

  • log_options – Deprecated. Use logging instead.

  • lower_bounds – Deprecated. Use bounds instead.

  • upper_bounds – Deprecated. Use bounds instead.

  • soft_lower_bounds – Deprecated. Use bounds instead.

  • soft_upper_bounds – Deprecated. Use bounds instead.

  • scaling_options – Deprecated. Use scaling instead.

  • multistart_options – Deprecated. Use multistart instead.
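
For illustration, a minimal sketch of a minimize call with bounds (the sphere objective is made up for this example):

    import numpy as np
    import optimagic as om

    def sphere(params):
        # Illustrative convex objective; its minimum is at params = 0.
        return np.sum(params**2)

    res = om.minimize(
        fun=sphere,
        params=np.array([1.0, 2.0, 3.0]),
        algorithm="scipy_lbfgsb",
        bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
    )
    print(res.fun)  # close to 0.0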

slice_plot
optimagic.slice_plot(func, params, bounds=None, func_kwargs=None, selector=None, n_cores=1, n_gridpoints=20, plots_per_row=2, param_names=None, share_y=True, expand_yrange=0.02, share_x=False, color='#497ea7', template='simple_white', title=None, return_dict=False, make_subplot_kwargs=None, batch_evaluator='joblib', lower_bounds=None, upper_bounds=None)[source]#

Plot criterion along coordinates at given and random values.

Generates plots for each parameter and optionally combines them into a figure with subplots.

Parameters
  • func (callable) – Criterion function that takes params and returns a scalar, PyTree, or FunctionValue object.

  • params (pytree) – A pytree with parameters.

  • bounds – Lower and upper bounds on the parameters. The bounds are used to create a grid over which slice plots are drawn. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are not used for slice_plots. Each bound type mirrors the structure of params. Check our how-to guide on bounds for examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

  • selector (callable) – Function that takes params and returns a subset of params for which we actually want to generate the plot.

  • n_cores (int) – Number of cores.

  • n_gridpoints (int) – Number of gridpoints on which the criterion function is evaluated. This is the number per plotted line.

  • plots_per_row (int) – Number of plots per row.

  • param_names (dict or NoneType) – Dictionary mapping old parameter names to new ones.

  • share_y (bool) – If True, the individual plots share the scale of the y axis, and plots in one row share the y axis.

  • share_x (bool) – If True, set the same range of x axis for all plots and share the x axis for all plots in one column.

  • expand_yrange (float) – The ratio by which to expand the range of the (shared) y axis, so that the axis is not cropped at exactly the maximum criterion value.

  • color – The line color.

  • template (str) – The template for the figure. Default is “simple_white”.

  • title (str) – The figure title.

  • return_dict (bool) – If True, return a dictionary with individual plots of each parameter; else, combine the individual plots into a figure with subplots.

  • make_subplot_kwargs (dict or NoneType) – Dictionary of keyword arguments used to instantiate plotly Figure with multiple subplots. Is used to define properties such as, for example, the spacing between subplots (governed by ‘horizontal_spacing’ and ‘vertical_spacing’). If None, default arguments defined in the function are used.

  • batch_evaluator (str or callable) – See Batch evaluators.

Returns

Returns either a dictionary with individual slice plots for each parameter or a plotly Figure combining the individual plots.

Return type

out (dict or plotly.Figure)
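
A sketch of a typical call, reusing the illustrative sphere function from the minimize example above:

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    fig = om.slice_plot(
        func=sphere,
        params=np.zeros(3),
        bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
    )
    fig.show()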

criterion_plot
optimagic.criterion_plot(results, names=None, max_evaluations=None, template='simple_white', palette=['rgb(102,194,165)', 'rgb(252,141,98)', 'rgb(141,160,203)', 'rgb(231,138,195)', 'rgb(166,216,84)', 'rgb(255,217,47)', 'rgb(229,196,148)', 'rgb(179,179,179)'], stack_multistart=False, monotone=False, show_exploration=False)[source]#

Plot the criterion history of an optimization.

Parameters
  • results (OptimizeResult, or a list or dict of Union[OptimizeResult, pathlib.Path, str]) – A single optimization result, or a list or dict of optimization results, with collected history. If a dict is passed, the keys are used as names in the legend.

  • names (Union[List[str], str]) – Names corresponding to results or entries in results.

  • max_evaluations (int) – Clip the criterion history after that many entries.

  • template (str) – The template for the figure. Default is “simple_white”.

  • palette (Union[List[str], str]) – The coloring palette for traces. Default is the qualitative Set2 palette.

  • stack_multistart (bool) – Whether to combine multistart histories into a single history. Default is False.

  • monotone (bool) – If True, the criterion plot becomes monotone, i.e. at each iteration only the current best criterion value is displayed. Default is False.

  • show_exploration (bool) – If True, exploration samples of a multistart optimization are visualized. Default is False.

Returns

The figure.

Return type

plotly.graph_objs._figure.Figure
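
A minimal sketch comparing two runs (the objective and algorithm choices are made up for illustration):

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res_a = om.minimize(fun=sphere, params=np.ones(3), algorithm="scipy_neldermead")
    res_b = om.minimize(fun=sphere, params=np.ones(3), algorithm="scipy_lbfgsb")
    fig = om.criterion_plot({"nelder-mead": res_a, "lbfgsb": res_b}, monotone=True)
    fig.show()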

params_plot
optimagic.params_plot(result, selector=None, max_evaluations=None, template='simple_white', show_exploration=False)[source]#

Plot the params history of an optimization.

Parameters
  • result (Union[OptimizeResult, pathlib.Path, str]) – An optimization result with collected history, or a path to an optimization log.

  • selector (callable) – A callable that takes params and returns a subset of params. If provided, only the selected subset of params is plotted.

  • max_evaluations (int) – Clip the criterion history after that many entries.

  • template (str) – The template for the figure. Default is “simple_white”.

  • show_exploration (bool) – If True, exploration samples of a multistart optimization are visualized. Default is False.

Returns

The figure.

Return type

plotly.graph_objs._figure.Figure
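
A sketch, assuming a result over a flat numpy params array (the selector is illustrative):

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res = om.minimize(fun=sphere, params=np.ones(3), algorithm="scipy_lbfgsb")
    fig = om.params_plot(
        res,
        selector=lambda params: params[0:2],  # plot only the first two parameters
        max_evaluations=100,
    )
    fig.show()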

OptimizeResult
class optimagic.OptimizeResult(params: Any, fun: float, start_fun: float, start_params: Any, algorithm: str, direction: str, n_free: int, message: Optional[str] = None, success: Optional[bool] = None, n_fun_evals: Optional[int] = None, n_jac_evals: Optional[int] = None, n_hess_evals: Optional[int] = None, n_iterations: Optional[int] = None, status: Optional[int] = None, jac: Optional[Any] = None, hess: Optional[Any] = None, hess_inv: Optional[Any] = None, max_constraint_violation: Optional[float] = None, history: Optional[optimagic.optimization.history.History] = None, convergence_report: Optional[Dict] = None, multistart_info: Optional[optimagic.optimization.optimize_result.MultistartInfo] = None, algorithm_output: Optional[Dict[str, Any]] = None, logger: Optional[optimagic.logging.logger.LogReader] = None)[source]#

Optimization result object.

Attributes

params#

The optimal parameters.

Type

Any

fun#

The optimal criterion value.

Type

float

start_fun#

The criterion value at the start parameters.

Type

float

start_params#

The start parameters.

Type

Any

algorithm#

The algorithm used for the optimization.

Type

str

direction#

Maximize or minimize.

Type

str

n_free#

Number of free parameters.

Type

int

message#

Message returned by the underlying algorithm.

Type

str | None

success#

Whether the optimization was successful.

Type

bool | None

n_fun_evals#

Number of criterion evaluations.

Type

int | None

n_jac_evals#

Number of derivative evaluations.

Type

int | None

n_iterations#

Number of iterations until termination.

Type

int | None

history#

Optimization history.

Type

optimagic.optimization.history.History | None

convergence_report#

The convergence report.

Type

Optional[Dict]

multistart_info#

Multistart information.

Type

Optional[optimagic.optimization.optimize_result.MultistartInfo]

algorithm_output#

Additional algorithm specific information.

Type

Optional[Dict[str, Any]]

to_pickle(path)[source]#

Save the OptimizeResult object to pickle.

Parameters

path (str, pathlib.Path) – A str or pathlib.Path ending in .pkl or .pickle.

Bounds
class optimagic.Bounds(lower: 'PyTree | None' = None, upper: 'PyTree | None' = None, soft_lower: 'PyTree | None' = None, soft_upper: 'PyTree | None' = None)[source]#
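
For example, with a dict of parameters, each bound type mirrors the structure of params (the params layout is made up for this sketch):

    import numpy as np
    import optimagic as om

    params = {"utility": 1.0, "shares": np.array([0.3, 0.7])}
    bounds = om.Bounds(
        lower={"utility": 0.0, "shares": np.zeros(2)},
        upper={"utility": 10.0, "shares": np.ones(2)},
    )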
Constraints
class optimagic.FixedConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>)[source]#

Constraint that fixes the selected parameters at their starting values.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

Raises

InvalidConstraintError – If the selector is not callable.
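
A sketch of fixing a subset of parameters during a minimize call (the params layout and objective are hypothetical):

    import numpy as np
    import optimagic as om

    def fun(params):
        # Hypothetical objective over a dict of params.
        return params["utility"] ** 2 + np.sum(params["shares"] ** 2)

    res = om.minimize(
        fun=fun,
        params={"utility": 1.0, "shares": np.array([0.3, 0.7])},
        algorithm="scipy_lbfgsb",
        constraints=om.FixedConstraint(selector=lambda params: params["shares"]),
    )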

class optimagic.IncreasingConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>)[source]#

Constraint that ensures the selected parameters are increasing.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

Raises

InvalidConstraintError – If the selector is not callable.

class optimagic.DecreasingConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>)[source]#

Constraint that ensures that the selected parameters are decreasing.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

Raises

InvalidConstraintError – If the selector is not callable.

class optimagic.EqualityConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>)[source]#

Constraint that ensures that the selected parameters are equal.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

Raises

InvalidConstraintError – If the selector is not callable.

class optimagic.ProbabilityConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>)[source]#

Constraint that ensures that the selected parameters are probabilities.

This constraint ensures that each of the selected parameters is positive and that the sum of the selected parameters is 1.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

Raises

InvalidConstraintError – If the selector is not callable.

class optimagic.PairwiseEqualityConstraint(selectors: list[Callable[[Any], Any]])[source]#

Constraint that ensures that groups of selected parameters are equal.

This constraint ensures that the subsets of parameters returned by the different selectors are pairwise equal.

selectors#

A list of functions that take as input the parameters and return the subsets of parameters to be constrained.

Type

list[Callable[[Any], Any]]

Raises

InvalidConstraintError – If one of the selectors is not callable.

class optimagic.FlatCovConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>, *, regularization: float = 0.0)[source]#

Constraint that ensures the selected parameters are a valid covariance matrix.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

regularization#

Helps in guiding the optimization towards finding a positive definite covariance matrix instead of only a positive semi-definite matrix. Larger values correspond to a higher likelihood of positive definiteness. Defaults to 0.

Type

float

Raises

InvalidConstraintError – If the selector is not callable or regularization is not a non-negative float or int.

class optimagic.FlatSDCorrConstraint(selector: typing.Callable[[typing.Any], typing.Any] = <function identity_selector>, *, regularization: float = 0.0)[source]#

Constraint that ensures the selected parameters are a valid correlation matrix.

This constraint interprets the selected parameters as standard deviations followed by the lower triangle of a correlation matrix and ensures that they form a valid combination.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

regularization#

Helps in guiding the optimization towards finding a positive definite covariance matrix instead of only a positive semi-definite matrix. Larger values correspond to a higher likelihood of positive definiteness. Defaults to 0.

Type

float

Raises

InvalidConstraintError – If the selector is not callable or regularization is not a non-negative float or int.

class optimagic.LinearConstraint(selector: Callable[[Any], numpy.typing.ArrayLike | pd.Series[float]] = <function identity_selector>, *, weights: Optional[numpy.typing.ArrayLike | pd.Series[float]] = None, lower_bound: float | int | None = None, upper_bound: float | int | None = None, value: float | int | None = None)[source]#

Constraint that bounds a linear combination of the selected parameters.

This constraint ensures that a linear combination of the selected parameters with the ‘weights’ is either equal to ‘value’, or is bounded by ‘lower_bound’ and ‘upper_bound’.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], numpy.typing.ArrayLike | pd.Series[float]]

weights#

The weights for the linear combination. If a scalar is provided, it is used for all parameters. Otherwise, it must have the same structure as the selected parameters.

Type

Optional[numpy.typing.ArrayLike | pd.Series[float]]

lower_bound#

The lower bound for the linear combination. Defaults to None.

Type

float | int | None

upper_bound#

The upper bound for the linear combination. Defaults to None.

Type

float | int | None

value#

The value to compare the linear combination to. Defaults to None.

Type

float | int | None

Raises

InvalidConstraintError – If the selector is not callable, or if the weights, lower_bound, upper_bound, or value are not valid.
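
For instance, a sketch that forces hypothetical share parameters to sum to one:

    import optimagic as om

    con = om.LinearConstraint(
        selector=lambda params: params["shares"],  # hypothetical selector
        weights=1.0,  # a scalar weight is used for all selected parameters
        value=1.0,  # the weighted combination must equal 1
    )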

class optimagic.NonlinearConstraint(selector: Callable[[Any], Any] = <function identity_selector>, *, func: Optional[Callable[[Any], numpy.typing.ArrayLike | pd.Series[float]]] = None, derivative: Optional[Callable[[Any], Any]] = None, lower_bound: Optional[numpy.typing.ArrayLike | pd.Series[float]] = None, upper_bound: Optional[numpy.typing.ArrayLike | pd.Series[float]] = None, value: Optional[numpy.typing.ArrayLike | pd.Series[float]] = None, tol: float = 1e-05)[source]#

Constraint that bounds a nonlinear function of the selected parameters.

This constraint ensures that a nonlinear function of the selected parameters is either equal to ‘value’, or is bounded by ‘lower_bound’ and ‘upper_bound’.

selector#

A function that takes as input the parameters and returns the subset of parameters to be constrained. By default, all parameters are constrained.

Type

Callable[[Any], Any]

func#

The constraint function which is applied to the selected parameters.

Type

Optional[Callable[[Any], numpy.typing.ArrayLike | pd.Series[float]]]

derivative#

The derivative of the constraint function with respect to the selected parameters. Defaults to None.

Type

Optional[Callable[[Any], Any]]

lower_bound#

The lower bound for the nonlinear function. Can be a scalar or of the same structure as output of the constraint function. Defaults to None.

Type

Optional[numpy.typing.ArrayLike | pd.Series[float]]

upper_bound#

The upper bound for the nonlinear function. Can be a scalar or of the same structure as output of the constraint function. Defaults to None.

Type

Optional[numpy.typing.ArrayLike | pd.Series[float]]

value#

The value to compare the nonlinear function to. Can be a scalar or of the same structure as output of the constraint function. Defaults to None.

Type

Optional[numpy.typing.ArrayLike | pd.Series[float]]

tol#

The tolerance for the constraint function. Defaults to 1e-5 (optimagic.optimization.algo_options.CONSTRAINTS_ABSOLUTE_TOLERANCE).

Type

float

Raises

InvalidConstraintError – If the selector is not callable, or if the func, derivative, lower_bound, upper_bound, or value are not valid.
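
A sketch that bounds the squared Euclidean norm of the selected parameters (the selector and bound are illustrative):

    import numpy as np
    import optimagic as om

    con = om.NonlinearConstraint(
        selector=lambda params: params,
        func=lambda x: np.sum(x**2),  # nonlinear function of the selected params
        upper_bound=1.0,  # the squared norm must stay at most 1
    )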

NumdiffOptions
class optimagic.NumdiffOptions(method: Literal['central', 'forward', 'backward', 'central_cross', 'central_average'] = 'central', step_size: Optional[float] = None, scaling_factor: float = 1, min_steps: Optional[float] = None, n_cores: int = 1, batch_evaluator: Union[Literal['joblib', 'pathos'], Callable] = 'joblib')[source]#

Options for numerical differentiation.

method#

The method to use for numerical differentiation. Can be “central”, “forward”, “backward”, “central_cross”, or “central_average”.

Type

Literal[‘central’, ‘forward’, ‘backward’, ‘central_cross’, ‘central_average’]

step_size#

The step size to use for numerical differentiation. If None, the default step size will be used.

Type

float | None

scaling_factor#

The scaling factor to use for numerical differentiation.

Type

float

min_steps#

The minimum step size to use for numerical differentiation. If None, the default minimum step size will be used.

Type

float | None

n_cores#

The number of cores to use for numerical differentiation.

Type

int

batch_evaluator#

The batch evaluator to use for numerical differentiation. Can be “joblib” or “pathos”, or a custom function.

Type

Union[Literal[‘joblib’, ‘pathos’], Callable]

Raises

InvalidNumdiffError – If the numdiff options cannot be processed, e.g. because they do not have the correct type.
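
A sketch of passing numerical differentiation options to minimize, reusing the illustrative sphere function from above:

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res = om.minimize(
        fun=sphere,
        params=np.ones(3),
        algorithm="scipy_lbfgsb",
        numdiff_options=om.NumdiffOptions(method="forward", n_cores=2),
    )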

MultistartOptions
class optimagic.MultistartOptions(n_samples: Optional[int] = None, stopping_maxopt: Optional[int] = None, sampling_distribution: Literal['uniform', 'triangular'] = 'uniform', sampling_method: Literal['sobol', 'random', 'halton', 'latin_hypercube'] = 'random', sample: Optional[Sequence[Any]] = None, mixing_weight_method: Union[Literal['tiktak', 'linear'], Callable[[int, int, float, float], float]] = 'tiktak', mixing_weight_bounds: tuple[float, float] = (0.1, 0.995), convergence_xtol_rel: Optional[float] = None, convergence_max_discoveries: int = 2, n_cores: int = 1, batch_evaluator: Union[Literal['joblib', 'pathos'], optimagic.typing.BatchEvaluator] = 'joblib', batch_size: Optional[int] = None, seed: Optional[Union[int, numpy.random._generator.Generator]] = None, error_handling: Optional[Literal['raise', 'continue']] = None, share_optimization: Optional[float] = None, convergence_relative_params_tolerance: Optional[float] = None, optimization_error_handling: Optional[Literal['raise', 'continue']] = None, exploration_error_handling: Optional[Literal['raise', 'continue']] = None)[source]#

Multistart options in optimization problems.

n_samples#

The number of points at which the objective function is evaluated during the exploration phase. If None, n_samples is set to 100 times the number of parameters.

Type

int | None

stopping_maxopt#

The maximum number of local optimizations to run. Defaults to 10% of n_samples. This number may not be reached if multistart converges earlier.

Type

int | None

sampling_distribution#

The distribution from which the exploration sample is drawn. Allowed are “uniform” and “triangular”. Defaults to “uniform”.

Type

Literal[‘uniform’, ‘triangular’]

sampling_method#

The method used to draw the exploration sample. Allowed are “sobol”, “random”, “halton”, and “latin_hypercube”. Defaults to “random”.

Type

Literal[‘sobol’, ‘random’, ‘halton’, ‘latin_hypercube’]

sample#

A sequence of PyTrees or None. If None, a sample is drawn from the sampling distribution.

Type

Optional[Sequence[Any]]

mixing_weight_method#

The method used to determine the mixing weight, i.e., how start parameters for local optimizations are calculated. Allowed are “tiktak” and “linear”, or a custom callable. Defaults to “tiktak”.

Type

Union[Literal[‘tiktak’, ‘linear’], Callable[[int, int, float, float], float]]

mixing_weight_bounds#

The lower and upper bounds for the mixing weight. Defaults to (0.1, 0.995).

Type

tuple[float, float]

convergence_max_discoveries#

The maximum number of discoveries for convergence. Determines after how many re-discoveries of the currently best local optimum the multistart algorithm stops. Defaults to 2.

Type

int

convergence_xtol_rel#

The relative tolerance in parameters for convergence. Determines the maximum relative distance two parameter vectors can have to be considered equal. Defaults to 0.01.

Type

float | None

n_cores#

The number of cores to use for parallelization. Defaults to 1.

Type

int

batch_evaluator#

The evaluator to use for batch evaluation. Allowed are “joblib” and “pathos”, or a custom callable.

Type

Union[Literal[‘joblib’, ‘pathos’], optimagic.typing.BatchEvaluator]

batch_size#

The batch size for batch evaluation. Must be None or at least as large as n_cores.

Type

int | None

seed#

The seed for the random number generator.

Type

int | numpy.random._generator.Generator | None

error_handling#

The error handling for exploration and optimization errors. Allowed are “raise” and “continue”.

Type

Optional[Literal[‘raise’, ‘continue’]]

Raises

InvalidMultistartError – If the multistart options cannot be processed, e.g. because they do not have the correct type.
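
A sketch of a multistart optimization with the illustrative sphere function from above; note that multistart requires finite bounds or soft bounds for all parameters:

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res = om.minimize(
        fun=sphere,
        params=np.ones(3),
        algorithm="scipy_lbfgsb",
        bounds=om.Bounds(lower=np.full(3, -5.0), upper=np.full(3, 5.0)),
        multistart=om.MultistartOptions(n_samples=100, seed=0),
    )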

ScalingOptions
class optimagic.ScalingOptions(method: Literal['start_values', 'bounds'] = 'start_values', clipping_value: float = 0.1, magnitude: float = 1.0)[source]#

Scaling options in optimization problems.

method#

The method used for scaling. Can be “start_values” or “bounds”. Default is “start_values”.

Type

Literal[‘start_values’, ‘bounds’]

clipping_value#

The minimum value to which elements are clipped to avoid division by zero. Must be a positive number. Default is 0.1.

Type

float

magnitude#

A factor by which the scaled parameters are multiplied to adjust their magnitude. Must be a positive number. Default is 1.0.

Type

float

Raises

InvalidScalingError – If scaling options cannot be processed, e.g. because they do not have the correct type.
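
A sketch that rescales a badly scaled problem (the start values are made up for illustration):

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res = om.minimize(
        fun=sphere,
        params=np.array([0.1, 1000.0, 1.0]),  # badly scaled start values
        algorithm="scipy_lbfgsb",
        scaling=om.ScalingOptions(method="start_values", clipping_value=0.1),
    )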

LogOptions
class optimagic.SQLiteLogOptions(path: str | pathlib.Path, fast_logging: bool = True, if_database_exists: Union[optimagic.logging.types.ExistenceStrategy, Literal['raise', 'extend', 'replace']] = ExistenceStrategy.RAISE)[source]#

Configuration class for setting up an SQLite database with SQLAlchemy.

This class extends the SQLAlchemyConfig class to configure an SQLite database. It handles the creation of the database engine, manages database files, and applies various optimizations for logging performance.

Parameters
  • path (str | Path) – The file path to the SQLite database.

  • fast_logging (bool) – A boolean that determines if “unsafe” settings are used to speed up write processes to the database. This should only be used for very short running criterion functions where the main purpose of the log is a real-time dashboard, and it would not be catastrophic to get a corrupted database in case of a sudden system shutdown. If one evaluation of the criterion function (and gradient if applicable) takes more than 100 ms, the logging overhead is negligible.

  • if_database_exists (ExistenceStrategy) – Strategy for handling an existing database file. One of “extend”, “replace”, “raise”.

create_engine() sqlalchemy.engine.base.Engine[source]#

Create and return an SQLAlchemy engine.

Returns

An SQLAlchemy Engine object.
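
A sketch of logging an optimization to an SQLite database (the file name and objective are made up):

    import numpy as np
    import optimagic as om

    def sphere(params):
        return np.sum(params**2)  # illustrative objective

    res = om.minimize(
        fun=sphere,
        params=np.ones(3),
        algorithm="scipy_lbfgsb",
        logging=om.SQLiteLogOptions("optimization.db", if_database_exists="replace"),
    )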

History
class optimagic.History[source]#
count_free_params
optimagic.count_free_params(params, constraints=None, bounds=None, lower_bounds=None, upper_bounds=None)[source]#

Count the (free) parameters of an optimization problem.

Parameters
  • params (pytree) – The parameters.

  • constraints (list) – The constraints for the optimization problem. If constraints are provided, only the free parameters are counted.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are used for sampling based optimizers but are not enforced during optimization. Each bound type mirrors the structure of params. Check our how-to guide on bounds for examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

Returns

Number of (free) parameters

Return type

int

check_constraints
optimagic.check_constraints(params, constraints, bounds=None, lower_bounds=None, upper_bounds=None)[source]#

Raise an error if constraints are invalid or not satisfied in params.

Parameters
  • params (pytree) – The parameters.

  • constraints (list) – The constraints for the optimization problem.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are used for sampling based optimizers but are not enforced during optimization. Each bound type mirrors the structure of params. Check our how-to guide on bounds for examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

Raises
  • InvalidParamsError – If constraints are valid but not satisfied.

  • InvalidConstraintError – If constraints are invalid.

Derivatives#

first_derivative
optimagic.first_derivative(func: Callable[[Any], Any], params: Any, *, bounds: Optional[optimagic.parameters.bounds.Bounds] = None, func_kwargs: Optional[dict[str, Any]] = None, method: Literal['central', 'forward', 'backward'] = 'central', step_size: Optional[Union[float, Any]] = None, scaling_factor: Union[float, Any] = 1, min_steps: Optional[Union[float, Any]] = None, f0: Optional[Any] = None, n_cores: int = 1, error_handling: Literal['continue', 'raise', 'raise_strict'] = 'continue', batch_evaluator: Union[Literal['joblib', 'pathos'], Callable] = 'joblib', unpacker: Optional[Callable[[Any], Any]] = None, lower_bounds: Optional[Any] = None, upper_bounds: Optional[Any] = None, base_steps: Optional[Any] = None, key: Optional[str] = None, step_ratio: Optional[float] = None, n_steps: Optional[int] = None, return_info: Optional[bool] = None, return_func_value: Optional[bool] = None) optimagic.differentiation.derivatives.NumdiffResult[source]#

Evaluate first derivative of func at params according to method and step options.

Internally, the function is converted such that it maps from a 1d array to a 1d array. Then the Jacobian of that function is calculated.

The parameters and the function output can be optimagic pytrees; for more details see EP-01: Pytrees. By default the resulting Jacobian will be returned as a block-pytree.

For a detailed description of all options that influence the step size as well as an explanation of how steps are adjusted to bounds in case of a conflict, see generate_steps().

Parameters
  • func – Function of which the derivative is calculated.

  • params – A pytree. See How to specify params.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are not used during numerical differentiation. Each bound type mirrors the structure of params. Check our how-to guide on bounds for examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

  • func_kwargs – Additional keyword arguments for func, optional.

  • method – One of [“central”, “forward”, “backward”], default “central”.

  • step_size – 1d array of the same length as params. step_size * scaling_factor is the absolute value of the first (and possibly only) step used in the finite differences approximation of the derivative. If step_size * scaling_factor conflicts with bounds, the actual steps will be adjusted. If step_size is not provided, it will be determined according to a rule of thumb as long as this does not conflict with min_steps.

  • scaling_factor – Scaling factor which is applied to step_size. If it is a numpy.ndarray, it needs to be as long as params. scaling_factor is useful if you want to increase or decrease the step size relative to the rule-of-thumb or user-provided value, for example to benchmark the effect of the step size. Default 1.

  • min_steps – Minimal possible step sizes that can be chosen to accommodate bounds. Must have same length as params. By default min_steps is equal to step_size, i.e. the step size is not decreased beyond what is optimal according to the rule of thumb.

  • f0 – 1d numpy array with func(x), optional.

  • n_cores – Number of processes used to parallelize the function evaluations. Default 1.

  • error_handling – One of “continue” (catch errors and continue to calculate derivative estimates. In this case, some derivative estimates can be missing but no errors are raised), “raise” (catch errors and continue to calculate derivative estimates at first but raise an error if all evaluations for one parameter failed) and “raise_strict” (raise an error as soon as a function evaluation fails).

  • batch_evaluator (str or callable) – Name of a pre-implemented batch evaluator (currently “joblib” and “pathos”) or a Callable with the same interface as the optimagic batch_evaluators.

  • unpacker – A callable that takes the output of func and returns the part of the output that is needed for the derivative calculation. If None, the output of func is used as is. Default None.

Returns

A numerical differentiation result.

Return type

NumdiffResult
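
For illustration, a minimal sketch (the quadratic function is made up; its gradient at params is 2 * params):

    import numpy as np
    import optimagic as om

    fd = om.first_derivative(func=lambda x: np.sum(x**2), params=np.arange(3.0))
    print(fd.derivative)  # approximately [0., 2., 4.]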

second_derivative
optimagic.second_derivative(func: Callable[[Any], Any], params: Any, *, bounds: Optional[optimagic.parameters.bounds.Bounds] = None, func_kwargs: Optional[dict[str, Any]] = None, method: Literal['forward', 'backward', 'central_average', 'central_cross'] = 'central_cross', step_size: Optional[Union[float, Any]] = None, scaling_factor: Union[float, Any] = 1, min_steps: Optional[Union[float, Any]] = None, f0: Optional[Any] = None, n_cores: int = 1, error_handling: Literal['continue', 'raise', 'raise_strict'] = 'continue', batch_evaluator: Union[Literal['joblib', 'pathos'], Callable] = 'joblib', unpacker: Optional[Callable[[Any], Any]] = None, lower_bounds: Optional[Any] = None, upper_bounds: Optional[Any] = None, base_steps: Optional[Any] = None, step_ratio: Optional[float] = None, n_steps: Optional[int] = None, return_info: Optional[bool] = None, return_func_value: Optional[bool] = None, key: Optional[str] = None) optimagic.differentiation.derivatives.NumdiffResult[source]#

Evaluate second derivative of func at params according to method and step options.

Internally, the function is converted such that it maps from a 1d array to a 1d array. Then the Hessians of that function are calculated. The resulting derivative estimate is always a numpy.ndarray.

The parameters and the function output can be pandas objects (Series or DataFrames with a value column). In that case the output of second_derivative is also a pandas object with appropriate index and columns.

For a detailed description of all options that influence the step size, as well as an explanation of how steps are adjusted to bounds in case of a conflict, see generate_steps().

Parameters
  • func – Function of which the derivative is calculated.

  • params – 1d numpy array or pandas.DataFrame with parameters at which the derivative is calculated. If it is a DataFrame, it can contain the columns “lower_bound” and “upper_bound” for bounds. See How to specify params.

  • bounds – Lower and upper bounds on the parameters. The most general and preferred way to specify bounds is an optimagic.Bounds object that collects lower, upper, soft_lower and soft_upper bounds. The soft bounds are not used during numerical differentiation. Each bound type mirrors the structure of params. Check our how-to guide on bounds for examples. If params is a flat numpy array, you can also provide bounds via any format that is supported by scipy.optimize.minimize.

  • func_kwargs – Additional keyword arguments for func, optional.

  • method – One of {“forward”, “backward”, “central_average”, “central_cross”} These correspond to the finite difference approximations defined in equations [7, x, 8, 9] in Rideout [2009], where (“backward”, x) is not found in Rideout [2009] but is the natural extension of equation 7 to the backward case. Default “central_cross”.

  • step_size – 1d array of the same length as params. step_size * scaling_factor is the absolute value of the first (and possibly only) step used in the finite differences approximation of the derivative. If step_size * scaling_factor conflicts with bounds, the actual steps will be adjusted. If step_size is not provided, it will be determined according to a rule of thumb as long as this does not conflict with min_steps.

  • scaling_factor – Scaling factor which is applied to step_size. If it is a numpy.ndarray, it needs to be as long as params. scaling_factor is useful if you want to increase or decrease the step size relative to the rule-of-thumb or user-provided value, for example to benchmark the effect of the step size. Default 1.

  • min_steps – Minimal possible step sizes that can be chosen to accommodate bounds. Must have same length as params. By default min_steps is equal to step_size, i.e. the step size is not decreased beyond what is optimal according to the rule of thumb.

  • f0 – 1d numpy array with func(x), optional.

  • n_cores – Number of processes used to parallelize the function evaluations. Default 1.

  • error_handling – One of “continue” (catch errors and continue to calculate derivative estimates. In this case, some derivative estimates can be missing but no errors are raised), “raise” (catch errors and continue to calculate derivative estimates at first but raise an error if all evaluations for one parameter failed) and “raise_strict” (raise an error as soon as a function evaluation fails).

  • batch_evaluator – Name of a pre-implemented batch evaluator (currently “joblib” and “pathos”) or a Callable with the same interface as the optimagic batch_evaluators.

  • unpacker – A callable that takes the output of func and returns the part of the output that is needed for the derivative calculation. If None, the output of func is used as is. Default None.

Returns

A numerical differentiation result.

Return type

NumdiffResult
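
Analogously, a minimal sketch for the second derivative of the same illustrative function:

    import numpy as np
    import optimagic as om

    sd = om.second_derivative(func=lambda x: np.sum(x**2), params=np.arange(3.0))
    print(sd.derivative)  # approximately 2 * np.eye(3)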

Benchmarks#

get_benchmark_problems
optimagic.get_benchmark_problems(name, *, additive_noise=False, additive_noise_options=None, multiplicative_noise=False, multiplicative_noise_options=None, scaling=False, scaling_options=None, seed=None, exclude=None)[source]#

Get a dictionary of test problems for a benchmark.

Parameters
  • name (str) – The name of the set of test problems. Currently “more_wild” is the only supported one.

  • additive_noise (bool) – Whether to add additive noise to the problem. Default False.

  • additive_noise_options (dict or None) – Specifies the amount and distribution of the additive noise added to the problem. Has the entries: - distribution (str): One of “normal”, “gumbel”, “uniform”, “logistic”, “laplace”. Default “normal”. - std (float): The standard deviation of the noise. This works for all distributions, even if those distributions are normally not specified via a standard deviation (e.g. uniform). - correlation (float): Number between 0 and 1 that specifies the auto correlation of the noise.

  • multiplicative_noise (bool) – Whether to add multiplicative noise to the problem. Default False.

  • multiplicative_noise_options (dict or None) – Specifies the amount and distribution of the multiplicative noise added to the problem. Has the entries: - distribution (str): One of “normal”, “gumbel”, “uniform”, “logistic”, “laplace”. Default “normal”. - std (float): The standard deviation of the noise. This works for all distributions, even if those distributions are normally not specified via a standard deviation (e.g. uniform). - correlation (float): Number between 0 and 1 that specifies the auto correlation of the noise. - clipping_value (float): A non-negative float. Multiplicative noise becomes zero if the function value is zero. To avoid this, we do not implement multiplicative noise as f_noisy = f * epsilon but as f_noisy = f + (epsilon - 1) * f_clipped, where f_clipped is bounded away from zero from both sides by the clipping value.

  • scaling (bool) – Whether the parameter space of the problem should be rescaled.

  • scaling_options (dict) – Dict containing the keys “min_scale”, and “max_scale”. If scaling is True, the parameters the optimizer sees are the standard parameters multiplied by np.linspace(min_scale, max_scale, len(params)). If min_scale and max_scale have very different orders of magnitude, the problem becomes harder to solve for many optimizers.

  • seed (Union[None, int, numpy.random.Generator]) – If seed is None or an int, numpy.random.default_rng is used, seeded with seed. If seed is already a Generator instance, that instance is used.

  • exclude (str or List) – Problems to exclude.

Returns

Nested dictionary with benchmark problems of the structure:

{“name”: {“inputs”: {…}, “solution”: {…}, “info”: {…}}} where “inputs” are keyword arguments for minimize such as the criterion function and start parameters. “solution” contains the entries “params” and “value” and “info” might contain information about the test problem.

Return type

dict
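For illustration, a minimal call that builds a noisy variant of the “more_wild” problem set; the option values are examples, not recommendations:

    import optimagic as om

    # Additive normal noise with standard deviation 0.1 and a reproducible seed.
    problems = om.get_benchmark_problems(
        "more_wild",
        additive_noise=True,
        additive_noise_options={"distribution": "normal", "std": 0.1},
        seed=0,
    )
    print(len(problems))  # number of test problems in the set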

run_benchmark
optimagic.run_benchmark(problems, optimize_options, *, batch_evaluator='joblib', n_cores=1, error_handling='continue', max_criterion_evaluations=1000, disable_convergence=True)[source]#

Run problems with different optimize options.

Parameters
  • problems (dict) – Nested dictionary with benchmark problems of the structure: {“name”: {“inputs”: {…}, “solution”: {…}, “info”: {…}}} where “inputs” are keyword arguments for minimize such as the criterion function and start parameters. “solution” contains the entries “params” and “value” and “info” might contain information about the test problem.

  • optimize_options (list or dict) – Either a list of algorithms or a nested dictionary that maps a name for optimizer settings (e.g. "lbfgsb_strict_criterion") to a dictionary of keyword arguments for minimize (e.g. {"algorithm": "scipy_lbfgsb", "algo_options": {"convergence.ftol_rel": 1e-12}}). Alternatively, the values can just be an algorithm, which is then benchmarked at default settings.

  • batch_evaluator (str or callable) – See Batch evaluators.

  • n_cores (int) – Number of optimizations that are run in parallel. Note that an optimizer might parallelize in addition to that.

  • error_handling (str) – One of “raise”, “continue”.

  • max_criterion_evaluations (int) – Shortcut to set the maximum number of criterion evaluations instead of passing it in via algo options. In case an optimizer does not support this stopping criterion, we also use it as the maximum number of iterations.

  • disable_convergence (bool) – If True, we set extremely strict convergence criteria by default, such that most optimizers will exploit their full computation budget set by max_criterion_evaluations.

Returns

Nested dictionary with information on the benchmark run. The outer keys are tuples where the first entry is the name of the problem and the second the name of the optimize options. The values are dicts with the entries: “params_history”, “criterion_history”, “time_history” and “solution”.

Return type

dict
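A sketch of a typical benchmark run; the labels “lbfgsb” and “neldermead” are arbitrary names for the optimize options:

    import optimagic as om

    problems = om.get_benchmark_problems("more_wild")

    # Benchmark one optimizer at default settings and one with custom
    # algo_options, keyed by user-chosen labels.
    results = om.run_benchmark(
        problems,
        optimize_options={
            "lbfgsb": "scipy_lbfgsb",
            "neldermead": {"algorithm": "scipy_neldermead"},
        },
        n_cores=2,
        max_criterion_evaluations=500,
    )
    # The keys of results are (problem_name, option_name) tuples.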

profile_plot
optimagic.profile_plot(problems, results, *, runtime_measure='n_evaluations', normalize_runtime=False, stopping_criterion='y', x_precision=0.0001, y_precision=0.0001, template='simple_white')[source]#

Compare optimizers over a problem set.

This plot answers the question: What percentage of problems can each algorithm solve within a certain runtime budget?

The runtime budget is plotted on the x axis and the share of problems each algorithm solved on the y axis.

Thus, algorithms that are very specialized and perform well on some share of problems but cannot solve additional problems even with a larger computational budget will show steep increases followed by flat lines. Algorithms that are robust but slow will have low shares in the beginning but eventually reach very high ones.

Note that failing to converge according to the given stopping_criterion and precisions is scored as needing an infinite computational budget.

For details, see the description of performance and data profiles by Moré and Wild (2009).

Parameters
  • problems (dict) – optimagic benchmarking problems dictionary. Keys are the problem names. Values contain information on the problem, including the solution value.

  • results (dict) – optimagic benchmarking results dictionary. Keys are tuples of the form (problem, algorithm), values are dictionaries of the collected information on the benchmark run, including ‘criterion_history’ and ‘time_history’.

  • runtime_measure (str) – “n_evaluations”, “n_batches” or “walltime”. This is the runtime until the desired convergence was reached by an algorithm. This is what Moré and Wild (2009) call the performance measure.

  • normalize_runtime (bool) – If True, the runtime each algorithm needed for each problem is scaled by the time the fastest algorithm needed. In that case, the resulting plot is what Moré and Wild (2009) called data profiles.

  • stopping_criterion (str) – one of “x_and_y”, “x_or_y”, “x”, “y”. Determines how convergence is judged based on the two precisions.

  • x_precision (float or None) – how close an algorithm must have gotten to the true parameter values (as percent of the Euclidean distance between start and solution parameters) before the criterion for clipping and convergence is fulfilled.

  • y_precision (float or None) – how close an algorithm must have gotten to the true criterion values (as percent of the distance between start and solution criterion value) before the criterion for clipping and convergence is fulfilled.

  • template (str) – The template for the figure. Default is “simple_white”.

Returns

plotly.Figure
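The benchmark output feeds directly into the plot. A hedged sketch, reusing problems and results as in the run_benchmark example above:

    import optimagic as om

    problems = om.get_benchmark_problems("more_wild")
    results = om.run_benchmark(
        problems, optimize_options=["scipy_lbfgsb", "scipy_neldermead"]
    )

    # Share of problems solved (y axis) against the evaluation budget (x axis).
    fig = om.profile_plot(problems, results, runtime_measure="n_evaluations")
    fig.show()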

convergence_plot
optimagic.convergence_plot(problems, results, *, problem_subset=None, algorithm_subset=None, n_cols=2, distance_measure='criterion', monotone=True, normalize_distance=True, runtime_measure='n_evaluations', stopping_criterion='y', x_precision=0.0001, y_precision=0.0001, combine_plots_in_grid=True, template='simple_white', palette=['#636EFA', '#EF553B', '#00CC96', '#AB63FA', '#FFA15A', '#19D3F3', '#FF6692', '#B6E880', '#FF97FF', '#FECB52'])[source]#

Plot convergence of optimizers for a set of problems.

This creates a grid of plots, showing the convergence of the different algorithms on each problem. The faster a line falls, the faster the algorithm improved on the problem. The algorithm converged where its line reaches 0 (if normalize_distance is True) or the horizontal blue line labeled “true solution”.

Each plot shows the runtime_measure on the x axis, which can be walltime, number of evaluations or number of batches. Each algorithm’s convergence is a line in the plot. Convergence is measured by the criterion value at the particular time/evaluation. The convergence can be made monotone (i.e. always taking the best value so far) or normalized such that the distance from the start to the true solution is one.

Parameters
  • problems (dict) – optimagic benchmarking problems dictionary. Keys are the problem names. Values contain information on the problem, including the solution value.

  • results (dict) – optimagic benchmarking results dictionary. Keys are tuples of the form (problem, algorithm), values are dictionaries of the collected information on the benchmark run, including ‘criterion_history’ and ‘time_history’.

  • problem_subset (list, optional) – List of problem names. These must be a subset of the keys of the problems dictionary. If provided the convergence plot is only created for the problems specified in this list.

  • algorithm_subset (list, optional) – List of algorithm names. These must be a subset of the keys of the optimizer_options passed to run_benchmark. If provided only the convergence of the given algorithms are shown.

  • n_cols (int) – number of columns in the grid of plots. The number of rows is determined automatically.

  • distance_measure (str) – One of “criterion”, “parameter_distance”.

  • monotone (bool) – If True the best found criterion value so far is plotted. If False the particular criterion evaluation of that time is used.

  • normalize_distance (bool) – If True the progress is scaled by the total distance between the start value and the optimal value, i.e. 1 means the algorithm is as far from the solution as the start value and 0 means the algorithm has reached the solution value.

  • runtime_measure (str) – “n_evaluations”, “walltime” or “n_batches”.

  • stopping_criterion (str) – “x_and_y”, “x_or_y”, “x”, “y” or None. If None, no clipping is done.

  • x_precision (float or None) – how close an algorithm must have gotten to the true parameter values (as percent of the Euclidean distance between start and solution parameters) before the criterion for clipping and convergence is fulfilled.

  • y_precision (float or None) – how close an algorithm must have gotten to the true criterion values (as percent of the distance between start and solution criterion value) before the criterion for clipping and convergence is fulfilled.

  • combine_plots_in_grid (bool) – decide whether to return one figure containing subplots for each factor pair or a dictionary of individual plots. Default True.

  • template (str) – The template for the figure. Default is “simple_white”.

  • palette – The coloring palette for traces. Default is “qualitative.Plotly”.

Returns

The grid plot or dict of individual plots

Return type

plotly.Figure
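A hedged usage sketch along the same lines; the problem name in problem_subset is hypothetical and must be a key of the problems dictionary:

    import optimagic as om

    problems = om.get_benchmark_problems("more_wild")
    results = om.run_benchmark(
        problems, optimize_options=["scipy_lbfgsb", "scipy_neldermead"]
    )

    # Monotone, normalized convergence for a subset of problems.
    fig = om.convergence_plot(
        problems,
        results,
        problem_subset=["rosenbrock_good_start"],  # hypothetical problem name
        n_cols=1,
        monotone=True,
        normalize_distance=True,
    )
    fig.show()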

Log reading#

OptimizeLogReader
class optimagic.OptimizeLogReader(*args, **kwargs)[source]#
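The signature above is generic. A hedged sketch of typical usage, assuming the reader is constructed from the path of an sqlite log file created via the logging option of minimize/maximize; the method name read_history is taken from the logging how-to and may differ across versions:

    import optimagic as om

    # Assumes an earlier minimize(..., logging="my_log.db") run created the file.
    reader = om.OptimizeLogReader("my_log.db")
    history = reader.read_history()  # assumed method; see the logging how-to
    print(type(history))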

Other#