How to distribute a computer simulation model wrapper?

Hi everyone,

I am looking for an efficient way to wrap a computer simulation model in order to perform uncertainty quantification on it. More precisely, my numerical model reads input files containing the input variables and, after execution, creates output directories containing files with the output variables of interest. My goal is first to wrap this process in a PythonFunction, and second to distribute calls to this function. After an overview of the OT documentation and its modules, I used coupling_tools to create a wrapper which achieves my first goal, but I struggle to find an efficient way to distribute it, which implies handling simultaneous writes to the input files and managing the output directories. The otwrapy module seems to provide some complementary tools, but I am looking for examples that combine these two modules, or for extended otwrapy examples.

To illustrate my issue, here is a mock Python script corresponding to my current developments.

import openturns as ot
import openturns.coupling_tools as ct

# This script is an illustration of a wrapper of two chained
# computer simulation models (called code1 and code2).

code1_command = "./code1.exe code1_input.inp"
code2_command = "./code2.exe code2_main.par"

def code1_code2_simulator(x):
    # Modify code1 input file
    code1_in_file = "code1_input.inp"
    code1_intemp_file = "code1_input_template.inp"
    # my_seed: random seed
    # x0: first variable
    # x1: second variable
    code1_replaced_tokens = ["@my_seed", "@x0", "@x1"]
    ct.replace(code1_intemp_file, code1_in_file, code1_replaced_tokens, x[:3])
    # Execute code1
    ct.execute(code1_command)
    print("Code1 done")

    # Modify code2 input files
    code2_in_file = "code2_input.dat"
    code2_intemp_file = "code2_input_template.dat"
    # x2: third variable
    code2_replaced_tokens = ["@x2"]
    ct.replace(code2_intemp_file, code2_in_file, code2_replaced_tokens, [x[3]])

    code2_main_file = "main.par"
    code2_maintemp_file = "main_template.par"
    # nout: output directory index (e.g. 34 for directory /OUT34)
    code2_replaced_tokens = ["@nout"]
    ct.replace(code2_maintemp_file, code2_main_file, code2_replaced_tokens, [x[4]])
    # Execute code2
    ct.execute(code2_command)
    print("Code2 done")
    return None

# My DoE:
x_1 = [1, 12.6, 3, 1.6, 10]
# Wrapper execution
code1_code2_simulator(x_1)
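For reference, the token substitution performed by coupling_tools.replace can be sketched with the standard library alone (replace_tokens is a hypothetical helper name, not part of coupling_tools):

```python
# Minimal standard-library sketch of the token substitution that
# coupling_tools.replace performs: read a template file, substitute
# each token by its value, and write the result to the input file.
def replace_tokens(template_path, output_path, tokens, values):
    with open(template_path) as f:
        content = f.read()
    for token, value in zip(tokens, values):
        content = content.replace(token, str(value))
    with open(output_path, "w") as f:
        f.write(content)
```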

Thanks in advance for any idea or suggestion regarding this topic.


PS: so far I have tried using this OT wrapper doc and the otwrapy doc.


Hi, you’re right, otwrapy gives you everything (or almost everything) you need to parallelize your evaluations and manage output directories. I suggest you have a look at the code of the beam example here: otwrapy/ at master · openturns/otwrapy · GitHub

I usually create it this way: the goal is to write the _exec function for one single evaluation with three parts: create the input files, execute the commands (one or more), and read the outputs from files, either with ot.coupling_tools.get or with other modules if it’s XML, JSON, etc.
What’s important in the _exec function is to use the otwrapy.TempWorkDir context manager that will create a temporary directory and perform the evaluation in it.
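To make the isolation idea concrete, here is a standard-library sketch of the pattern that otwrapy.TempWorkDir implements (temp_work_dir is a hypothetical name; the real class offers more, such as copying auxiliary files into the directory):

```python
import os
import shutil
import tempfile
from contextlib import contextmanager

# Sketch of the TempWorkDir pattern: each evaluation runs in its own
# temporary directory, so simultaneous evaluations never write to the
# same input files or output directories.
@contextmanager
def temp_work_dir(base_dir=None, cleanup=True):
    previous = os.getcwd()
    work_dir = tempfile.mkdtemp(dir=base_dir)
    os.chdir(work_dir)
    try:
        yield work_dir
    finally:
        # Always restore the caller's directory, then optionally clean up.
        os.chdir(previous)
        if cleanup:
            shutil.rmtree(work_dir, ignore_errors=True)
```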

Then to have your parallelized function, you just have to use the otwrapy.Parallelizer class :

model = otw.Parallelizer(Wrapper(tmpdir=tmp, sleep=1), 
                         backend=backend, n_cpus=n_cpus)

You have several available backends : multiprocessing, joblib, ipyparallel and pathos.
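In essence, the multiprocessing backend maps the single-evaluation function over the design of experiments. A stdlib-only sketch (evaluate_one and evaluate_sample are hypothetical names; the real wrapper would run each evaluation inside its own temporary directory):

```python
from multiprocessing import Pool

# Toy stand-in for one isolated simulator evaluation.
def evaluate_one(x):
    return [sum(x)]

# What a multiprocessing-based parallelizer boils down to: distribute
# the single-evaluation function over the sample across n_cpus workers.
def evaluate_sample(sample, n_cpus=2):
    with Pool(processes=n_cpus) as pool:
        return pool.map(evaluate_one, sample)
```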


Hi Antoine, thanks for your quick answer. I went over this example too fast and didn’t fully understand its purpose at first, but with your explanations it is clearer. I am cloning the git repository and will play around with this example.
Enjoy your weekend,

Hi Elias and welcome to this forum!

If your intent is to use the Python layer, otwrapy is the right tool. Notice however that a simpler way to install it is from conda:

conda install -c conda-forge otwrapy

This seems simpler to me than cloning the git repo, and it ensures that the package is correctly installed and matches the OT version you use.

If your intent is to perform the analysis, not necessarily using the Python interface, notice that PERSALYS is the way to go. This is because:

  • the coupling model provides a graphical user interface on top of coupling_tools, plus extra features (post-processing, chains of commands, a cache system, etc.),
  • using it from SALOME at EDF will enable you to use the supercomputers we have at our disposal (e.g. Gaïa).

The last point requires that you have an account on at least one supercomputer: if so, the installation of SALOME on your machine will automatically detect it and configure the necessary settings. Most users should be able to do this in less than 5 minutes, which massively beats the time required to set up the Python scripts (I know: I have done both :slight_smile: ).

Of course, not all algorithms are available on PERSALYS, so that the Python layer is still relevant in many situations (e.g. a custom algorithm).



Hey Michaël,

Thanks a lot for your complementary answer. I just finished the otwrapy/coupling_tools wrapper; it wasn’t the easiest since I chained two different models that each use multiple input files, but locally it works fine. The next step will be to take it to the cluster, but I would rather stick to Python for more flexibility, although the PERSALYS features seem very efficient.


Hi Elias,
Gaëtan Blondet from Phiméca shared a script which shows how a Python function can be made parallel:
hpcuqtraining/TP2_wrapper_Central_dispersion_otwrapy.ipynb at master · mbaudin47/hpcuqtraining · GitHub
Converting the function into a parallel function is as easy as calling the Parallelizer class from otwrapy.

The slides are available at:
hpcuqtraining/Distributing_OpenTURNS_OtWraPy.pdf at master · mbaudin47/hpcuqtraining · GitHub
This content was prepared for the PRACE training.

Hi Michaël,

Thanks for this very nice example. I used the same class in my case and should come back to it next month to start trials on the cluster.

See you soon!