For those interested in testing their algorithms against a collection of selected use-cases, we created the “otbenchmark” module:
The goal of this project is to provide benchmark classes for OpenTURNS. It provides a framework to create use-cases which are associated with reference values. Such a benchmark problem may be used in order to check that a given algorithm works as expected and to measure its performance in terms of accuracy and speed.
Two categories of benchmark classes are currently provided:
- reliability problems, i.e. estimating the probability that the output of a function is less than a threshold,
- sensitivity problems, i.e. estimating sensitivity indices, for example Sobol’ indices.
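To make the reliability category concrete, here is a minimal self-contained sketch of such a problem: estimating the probability that R - S falls below zero for two Gaussian inputs. The distributions chosen here (R ~ N(4, 1), S ~ N(2, 1)) are illustrative placeholders, not necessarily the parameters used by the otbenchmark problem of the same name.

```python
import math
import random

# Schematic reliability problem: estimate pf = P(R - S < 0)
# with R ~ N(4, 1) and S ~ N(2, 1) (hypothetical parameters).
random.seed(0)
n = 100_000
failures = sum(
    1 for _ in range(n)
    if random.gauss(4.0, 1.0) - random.gauss(2.0, 1.0) < 0.0
)
pf = failures / n

# Exact reference: R - S ~ N(2, sqrt(2)), so pf = Phi(-2 / sqrt(2)) = 0.5 * erfc(1).
pf_reference = 0.5 * math.erfc(1.0)
print(pf, pf_reference, abs(pf - pf_reference))
```

This is exactly the pattern the benchmark classes support: the analytical reference value lets you measure the absolute error of any estimator.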
Many of the reliability problems were adapted from the RPRepo, a repository of reliability benchmark problems.
This module allows you to create a problem, run an algorithm and compare the computed probability with a reference probability:
```python
import otbenchmark as otb
import openturns as ot

problem = otb.RminusSReliability()
event = problem.getEvent()
pfReference = problem.getProbability()  # exact probability

# Create the Monte-Carlo algorithm
algoProb = ot.ProbabilitySimulationAlgorithm(event)
algoProb.setMaximumOuterSampling(1000)
algoProb.setMaximumCoefficientOfVariation(0.01)
algoProb.run()

resultAlgo = algoProb.getResult()
pf = resultAlgo.getProbabilityEstimate()
absoluteError = abs(pf - pfReference)
```
Moreover, we can loop over all problems and run several methods on these problems.
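The benchmark loop can be sketched as follows. The problem names and the `crude_monte_carlo` helper below are hypothetical placeholders standing in for the module's problem list and for whatever estimation methods you want to compare; the actual otbenchmark API may differ.

```python
import random

random.seed(1)

def crude_monte_carlo(sample_failure, n=10_000):
    """Hypothetical estimator: fraction of n draws where failure occurs."""
    return sum(sample_failure() for _ in range(n)) / n

# Each "problem" pairs a failure sampler with its reference probability
# (illustrative values, not the otbenchmark problem definitions).
problems = {
    "RminusS": (lambda: random.gauss(4, 1) - random.gauss(2, 1) < 0, 0.0786),
    "Threshold": (lambda: random.gauss(0, 1) > 2.0, 0.0228),
}

for name, (sampler, pf_reference) in problems.items():
    pf = crude_monte_carlo(sampler)
    print(f"{name}: pf={pf:.4f}, abs. error={abs(pf - pf_reference):.2e}")
```

Running every method on every problem in this way produces a table of errors, which is the basis for comparing accuracy and speed across algorithms.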
To be completely honest, one of the goals of this module is to avoid the classic: “Hey! I have a brand new algorithm which works perfectly.”… on a single use-case that was fine-tuned to get the paper published.
We hope that a pip package will be created within a few weeks. We plan to cover more topics in future releases, such as integration problems (i.e. central tendency), optimization, and metamodeling.
Feel free to request new features or new benchmark problems in this topic. A few suggestions are already identified in the “Issues” section of the repo.