OTbenchmark: a benchmark module for UQ available on PyPI!

Hi all!

This post introduces a new Python module, “OTbenchmark”, and its deployment on the PyPI platform. In the vein of the Black-box Reliability Challenge, OTbenchmark aims to provide automatic tools to evaluate the performance of a large panel of uncertainty quantification algorithms, relying on the probabilistic programming framework offered by OpenTURNS. In other words, this module provides benchmark classes for OpenTURNS. It sets up a framework to create use cases, or problems, associated with reference values. Such a benchmark problem can be used to check that a given algorithm works as expected and to measure its performance in terms of speed and accuracy.

Two categories of benchmark classes are currently provided:

  • reliability estimation problems (i.e., estimating failure probabilities),
  • sensitivity analysis problems (i.e., estimating sensitivity indices such as the Sobol’ indices).
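As an illustration of the second category, the Ishigami function is a classic sensitivity-analysis benchmark whose Sobol’ indices are known in closed form (whether it is one of the module’s four sensitivity problems is not stated here). The sketch below (plain Python, not the otbenchmark API) computes the standard closed-form first-order indices, which are exactly the kind of reference values such problems ship with:

```python
import math

# Ishigami function: Y = sin(X1) + a*sin(X2)^2 + b*X3^4*sin(X1),
# with X1, X2, X3 i.i.d. uniform on [-pi, pi]; a classic sensitivity
# benchmark whose Sobol' indices are known in closed form.
a, b = 7.0, 0.1
pi = math.pi

# Total variance and first-order partial variances (standard results).
var_y = a**2 / 8.0 + b * pi**4 / 5.0 + b**2 * pi**8 / 18.0 + 0.5
v1 = 0.5 * (1.0 + b * pi**4 / 5.0) ** 2
v2 = a**2 / 8.0
v3 = 0.0  # X3 has no first-order effect (only an interaction with X1)

s1, s2, s3 = v1 / var_y, v2 / var_y, v3 / var_y
print(f"S1 = {s1:.4f}, S2 = {s2:.4f}, S3 = {s3:.4f}")
```

For a = 7 and b = 0.1, this yields S1 ≈ 0.3139, S2 ≈ 0.4424 and S3 = 0, the values against which an estimator of the Sobol’ indices can be benchmarked.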

OTbenchmark is composed of 26 reliability problems and 4 sensitivity problems. For all these problems, reference solutions are provided. These solutions are obtained either from a crude Monte Carlo estimation with controlled convergence or, when possible, from an exact resolution (e.g., via algebraic operations on the input distributions). Additionally, OTbenchmark provides several convergence and accuracy metrics to compare the performance of each algorithm. Finally, in order to perform a complete benchmark, a loop can be set up automatically to evaluate a large panel of algorithms over the complete set of examples. Graphical representations are often useful to help the analyst understand the underlying behavior of complex reliability or sensitivity problems. Since many of these problems have more than two input dimensions, such plots raise numerous practical issues; OTbenchmark therefore offers tools to draw multidimensional events, functions and distributions based on cross-cuts.
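To sketch how a reference value lets you check an estimator, here is a minimal, self-contained example (plain Python, not the otbenchmark API): a hypothetical R−S problem with Gaussian margins, where the exact failure probability follows from algebraic operations on the input distributions and is compared against a crude Monte Carlo estimate.

```python
import math
import random

# Hypothetical R-S reliability problem: failure when g = R - S < 0,
# with R ~ N(4, 1) and S ~ N(2, 1) independent.
mu_r, sigma_r = 4.0, 1.0
mu_s, sigma_s = 2.0, 1.0

# Exact solution by algebraic operations on the input distributions:
# R - S ~ N(mu_r - mu_s, sigma_r^2 + sigma_s^2), hence
# P(g < 0) = Phi(-(mu_r - mu_s) / sqrt(sigma_r^2 + sigma_s^2)).
beta = (mu_r - mu_s) / math.sqrt(sigma_r**2 + sigma_s**2)
pf_exact = 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# Crude Monte Carlo estimate of the same failure probability.
random.seed(0)
n = 100_000
failures = sum(
    1 for _ in range(n)
    if random.gauss(mu_r, sigma_r) - random.gauss(mu_s, sigma_s) < 0.0
)
pf_mc = failures / n

print(f"exact pf = {pf_exact:.5f}, MC pf = {pf_mc:.5f}")
```

The gap between the two numbers, together with the Monte Carlo coefficient of variation, is the kind of accuracy metric a benchmark problem makes it easy to report.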

To ensure OTbenchmark’s accuracy, a test-driven software development method was followed (using, among other tools, Git for collaborative development, unit tests and continuous integration at each step of the process). This new module completes the existing panel of modules developed around OpenTURNS. Thus, OTbenchmark is an industrial benchmark platform for uncertainty management algorithms, and can be seen as a versatile tool offering diverse problems with corresponding solutions, robust metrics and graphical representations for high-dimensional problems.

The following table illustrates the main purpose of OTbenchmark by comparing failure probability estimates with the exact results for different problems, using well-known methods available in OpenTURNS.
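The kind of comparison shown in that table can be mimicked in a few lines. The sketch below (plain Python, with hypothetical linear-Gaussian limit states rather than the module’s actual problems) loops a crude Monte Carlo estimator over a small panel of problems whose exact failure probabilities are known, and records the relative error for each:

```python
import math
import random

def phi_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical panel: linear limit states g(X) = beta + X with X ~ N(0, 1),
# so the exact failure probability is P(g < 0) = Phi(-beta).
problems = {"P1": 1.0, "P2": 2.0, "P3": 2.5}

random.seed(1)
n = 200_000
results = {}
for name, beta in problems.items():
    pf_exact = phi_cdf(-beta)
    failures = sum(1 for _ in range(n) if beta + random.gauss(0.0, 1.0) < 0.0)
    pf_mc = failures / n
    results[name] = abs(pf_mc - pf_exact) / pf_exact
    print(f"{name}: exact={pf_exact:.5f}  MC={pf_mc:.5f}  rel. error={results[name]:.3f}")
```

Replacing the inner estimator (e.g., by FORM, SORM or subset sampling) while keeping the outer loop fixed is what turns such a table into an algorithm benchmark.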

Install OTbenchmark from PyPI: [otbenchmark · PyPI]

OpenTURNS modules: [Modules · openturns/openturns Wiki · GitHub]

Upcoming communication at the UNCECOMP 2021 Minisymposium “Software for Uncertainty Quantification” :slight_smile:

Michaël, Vincent, Youssef and Elias


Many thanks for your very valuable work!

I was quite surprised by the poor performance of OT on RP25 & RP31, so I checked the reference values and found that they were wrong (see The reference value is wrong in reliability problem #31 · Issue #71 · mbaudin47/otbenchmark · GitHub and The reference value is wrong in reliability problem #25 · Issue #72 · mbaudin47/otbenchmark · GitHub). I have not checked the other reference values, but some others may have an issue…




Hi Régis,

Thanks for this key piece of information. The exact values are sometimes tricky to compute, and we will investigate these discrepancies on all the cases using your suggestions.