How should I memoize my function in a calibration setting?


I would like to use the OpenTURNS MemoizeFunction class in a calibration context. I have no idea what I am doing wrong, but it doesn’t seem to be working :frowning:

I’ve included a script which is simply the example cited in the official OT documentation, NonLinearLeastSquaresCalibration — OpenTURNS 1.18dev documentation,
with the slight difference that I replaced the SymbolicFunction model with a PythonFunction object that I memoized before feeding it to a ParametricFunction object.

One can see that the memoization is not working by launching the deterministic calibration algorithm twice, for instance.


Hi Sanaa,

Bad luck: the implementation of the calibration algorithms tries to be clever about the evaluation of the model by embedding all of its internal functions into MemoizeFunction objects. Unfortunately, this triggers many copy-on-write operations that make the cache private to those copies: it is lost at the end of the algorithm. As a consequence, if you run the algorithm a second time, you recompute everything.
Another problem is that, by default, a Function computes its gradient and Hessian using finite differences based on a copy of the evaluation. This is the case for the PythonFunction class when you provide only the evaluation. If you embed such a function into a MemoizeFunction, it embeds the evaluation into a MemoizeEvaluation, but the clones stored in the gradient and the Hessian remain unchanged. As a consequence, the evaluations triggered by the gradient and the Hessian are not checked against the cache. I added a counter to your script to count how many times the actual Python function is called in different contexts. If I remove the MemoizeFunction:

ot_function = ot.PythonFunction(4, 1, model_python)

I get 28631 calls to model_python. If I embed this function into a MemoizeFunction the way you did:

ot_function = ot.PythonFunction(4, 1, model_python)
ot_function = ot.MemoizeFunction(ot_function)

I get 26564 calls to model_python. But if I embed it in a more convoluted way:

ot_function = ot.PythonFunction(4, 1, model_python)
ot_function = ot.MemoizeFunction(ot.Function(ot.MemoizeEvaluation(ot_function.getEvaluation())))

I get 12900 calls to model_python. This time, (almost) all the calls to model_python are checked against a unique cache. I checked whether model_python was actually called with different inputs, and for a reason I don’t understand yet, the 10 initial evaluations combining the candidate and the DOE are duplicated.
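The counting approach mentioned above can be sketched without OpenTURNS: wrap the Python model in a small callable that increments a counter on every evaluation. The model body below is a hypothetical stand-in for the real 4-input, 1-output function, not the original script:

```python
class CountingModel:
    """Wraps a model and counts how many times it is actually evaluated."""

    def __init__(self, func):
        self.func = func
        self.n_calls = 0

    def __call__(self, x):
        self.n_calls += 1
        return self.func(x)


def model_python(x):
    # hypothetical stand-in for the real 4-input, 1-output model
    return [x[0] + x[1] + x[2] + x[3]]


counted = CountingModel(model_python)
counted([1.0, 2.0, 3.0, 4.0])
counted([1.0, 2.0, 3.0, 4.0])
print(counted.n_calls)  # 2: without memoization, every call reaches the model
```

Passing `counted` instead of `model_python` to `ot.PythonFunction` makes the counter visible in each of the configurations compared above.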

Thank you Régis for taking the time to consider my problem.

I am not sure I understand all the intricacies of the calibration algorithm and your subsequent answer, but I gather that there is no simple OpenTURNS workaround.

I think I am going to resort to building my own wrapper around my function in order to ensure that the calculation cache is properly used.
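Such a wrapper can be quite small in pure Python: a dict keyed by the input tuple serves repeated calls, and only a cache miss reaches the expensive model. All names here are illustrative, a sketch rather than the actual script:

```python
class CachedModel:
    """Illustrative user-side cache: results are stored per input tuple."""

    def __init__(self, func):
        self.func = func
        self.cache = {}
        self.hits = 0

    def __call__(self, x):
        key = tuple(x)  # lists are unhashable, so key on a tuple
        if key in self.cache:
            self.hits += 1
        else:
            self.cache[key] = self.func(x)
        return self.cache[key]


def expensive_model(x):
    # hypothetical costly model
    return [x[0] ** 2 + x[1]]


cached = CachedModel(expensive_model)
print(cached([1.0, 2.0]))  # [3.0], computed
print(cached([1.0, 2.0]))  # [3.0], served from the cache
print(cached.hits)  # 1
```

Since the dict belongs to the user and not to OpenTURNS, it is not affected by the copy-on-write issue described above.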

In my opinion this issue should be addressed in a future release; I think it would be quite useful.


Hi Sanaa,
To make things clear:

  • if you embed your model into a memoize function the way I showed, you are 100% sure the cache is properly used during the iterations of the algorithm, even when the gradient or the Hessian are computed using finite differences. It remains to fix the case of the initial evaluations, but most of the time they are only a small portion of the total number of evaluations (less than 0.1% in your example), so it is not an actual issue.
  • the only problem is when you run the calibration multiple times over the exact same optimization path, where you could expect a huge speedup, but IMO that is a quite uncommon use case. Why would you run the algorithm twice on the exact same data?
    But you are right, OT should be improved with respect to this aspect. And of course you can implement your own cache mechanism. It would be interesting for us to get feedback on how many evaluations you saved compared to the construction based on MemoizeEvaluation I detailed.
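Since the MemoizeFunction cache is lost at the end of the algorithm, a user-owned cache is one way to keep results across runs. A minimal pure-Python sketch (all names hypothetical) of why such a cache survives a second identical run:

```python
cache = {}
calls = 0


def model(x):
    # stand-in for the expensive model; counts real evaluations
    global calls
    calls += 1
    return [x[0] + x[1]]


def cached_model(x):
    key = tuple(x)
    if key not in cache:
        cache[key] = model(x)
    return cache[key]


def run_once():
    # stand-in for one calibration run over a fixed set of points
    for x in ([0.0, 1.0], [1.0, 2.0], [2.0, 3.0]):
        cached_model(x)


run_once()
run_once()  # the cache outlives the first run, so nothing is recomputed
print(calls)  # 3
```

With a MemoizeFunction created inside the algorithm, the second `run_once()` would instead re-evaluate all three points.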



Hi Regis,
I have no practical use for running the algorithm twice on the same optimization path; I was doing so merely to highlight the fact that some of the previously run calculations, which were supposed to be included in the cache, were re-run during the second call to the calibration algorithm.

I agree that your workaround has helped reduce the number of calls to the main function, thanks.