When we calibrate the parameters of a function, we need to compute the gradient of the function with respect to the parameters (e.g. with `LinearLeastSquaresCalibration`). With a `PythonFunction`, this may generate more function evaluations than required. This does not produce a wrong result, but it limits the speed. The root cause is that `ParametricFunction` implements the parameter gradient by computing the gradient of the full underlying function with respect to all of its inputs, then extracting the components corresponding to the parameters.
In the following script, we create a `ParametricFunction` named `g`. It is based on the full model `f`, which has inputs `a, b, x0, x1, x2`. Then `g` is created by setting `a` and `b` as parameters. Hence the parametric function `g` has (a, b) as parameters and (x0, x1, x2) as inputs.
```
import openturns as ot
def f(x):
    x = ot.Point(x)
    a, b, x0, x1, x2 = x
    print("x=", x)
    y = a + b + x0 + x1 + x2
    return [y]
f_py = ot.PythonFunction(5, 1, f)
indices = [0, 1]
referencePoint = [1.0, 2.0]
g = ot.ParametricFunction(f_py, indices, referencePoint)
x = ot.Point([3.0, 4.0, 5.0])
print("g(x)=", g(x))
```
To evaluate the gradient of `g` with respect to its input x, we use the script:
```
# Compute gradient with respect to inputs x
gradient_x = g.gradient(x)
print("Gradient with respect to x = (x0, x1, x2)=")
print(gradient_x)
```
which prints:
```
x= [1.00001,2,3,4,5]
x= [0.99999,2,3,4,5]
x= [1,2.00001,3,4,5]
x= [1,1.99999,3,4,5]
x= [1,2,3.00001,4,5]
x= [1,2,2.99999,4,5]
x= [1,2,3,4.00001,5]
x= [1,2,3,3.99999,5]
x= [1,2,3,4,5.00001]
x= [1,2,3,4,4.99999]
```
This shows that the underlying full model `f` is used to compute the gradient with respect to all five inputs (using centered finite differences). This is wasteful: only the partial derivatives with respect to x0, x1, and x2 are required, so 10 evaluations of `f` are performed where 6 would suffice.
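The extra cost can be reproduced with a standalone sketch (plain Python, no OpenTURNS; the helper names are illustrative). Centered finite differences on the full 5-input model cost two evaluations per input: 10 for the full gradient versus 6 if only x0, x1, x2 were perturbed.

```python
# Standalone sketch (no OpenTURNS): count the model evaluations needed
# for a centered finite-difference gradient. Names are illustrative.

def make_counting_model():
    counter = {"calls": 0}
    def f(x):
        counter["calls"] += 1
        a, b, x0, x1, x2 = x
        return a + b + x0 + x1 + x2
    return f, counter

def centered_gradient(f, x, indices, h=1.0e-5):
    """Centered finite differences, perturbing only the given indices."""
    grad = []
    for i in indices:
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2.0 * h))
    return grad

point = [1.0, 2.0, 3.0, 4.0, 5.0]

# Current behavior: the full gradient is computed (all 5 components).
f, counter = make_counting_model()
centered_gradient(f, point, range(5))
print("full gradient:", counter["calls"], "evaluations")  # 10

# Only the derivatives with respect to x0, x1, x2 are actually needed.
f, counter = make_counting_model()
centered_gradient(f, point, [2, 3, 4])
print("partial gradient:", counter["calls"], "evaluations")  # 6
```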
When we want to compute the gradient with respect to the parameters, we use:
```
# Compute gradient with respect to parameters
gradient_p = g.parameterGradient(x)
print("Gradient with respect to parameters (a, b)=")
print(gradient_p)
```
This prints the same ten evaluations: the full gradient is computed again, although only the derivatives with respect to a and b are required.
The problem is in:
https://github.com/openturns/openturns/blob/b5797d7e4a71c71faf86df51f26ad0d8d551ad08/lib/src/Base/Func/ParametricGradient.cxx#L62
The code is:
```
const Matrix fullGradient(p_evaluation_->getFunction().gradient(x));
```
A possible implementation to solve the problem would be to provide a way to access a specific component of the gradient, so that only the required partial derivatives are computed.
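As a sketch of that idea (plain Python, hypothetical names, not OpenTURNS code): a gradient routine that accepts the indices of the required components would let `ParametricGradient` request only the columns it needs.

```python
# Hypothetical sketch (not OpenTURNS code) of a component-wise gradient:
# only the requested input indices are perturbed by the centered
# finite-difference scheme.

def gradient_components(model, x, indices, h=1.0e-5):
    """Partial derivatives of model at x, for the selected indices only."""
    derivatives = []
    for i in indices:
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        derivatives.append((model(xp) - model(xm)) / (2.0 * h))
    return derivatives

def full_model(x):
    a, b, x0, x1, x2 = x
    return a + b + x0 + x1 + x2

x = [1.0, 2.0, 3.0, 4.0, 5.0]
# ParametricGradient would request only the parameter indices (a, b):
print(gradient_components(full_model, x, [0, 1]))
```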
Thanks to @sanaaZ for pointing out this problem.