You could use the TensorizedCovarianceModel class for that.
As far as optimization goes, it is best to do it one output at a time: each 1D problem is much easier for the solver than a joint multi-output optimization. After the optimization, you can collect the optimized 1D covariance models and feed them to a TensorizedCovarianceModel.
Implementation
At the start of your script, define an empty list:
list_covmodels = []
Inside your for loop, add the following line after result = algo.getResult():
list_covmodels.append(result.getCovarianceModel())
After the loop, insert the following code:
# Create a TensorizedCovarianceModel from the list of covariance models
tencov = ot.TensorizedCovarianceModel(list_covmodels)
# Get the "marginal" sample corresponding to the outputs
y = sample.getMarginal(range(dim, sample.getDimension()))[0:Test_sample]
# Create basis
basis = ot.LinearBasisFactory(dim).build()
# Create the KrigingAlgorithm
algo = ot.KrigingAlgorithm(x, y, tencov, basis)
# Do NOT optimize the covariance parameters (they are already optimal!)
algo.setOptimizeParameters(False)
# Run the algorithm and get the result as before
algo.run()
result = algo.getResult()
# The kriging metamodel is multidimensional
krigingMetamodel = result.getMetaModel()
# Validate the metamodel as before
X_test = sample.getMarginal(range(0, dim))[Test_sample:]
Y_test = sample.getMarginal(range(dim, sample.getDimension()))[Test_sample:]
val = ot.MetaModelValidation(X_test, Y_test, krigingMetamodel)
Q2 = val.computePredictivityFactor()