Hi,
Thanks for this information!
Indeed, I have no idea what the posterior distributions will look like…
I have 6 uncertain variables to calibrate, and my surrogate model outputs 2 quantities of interest.
This problem is related to the topic I created earlier here:
These are the prior distributions I use:
import openturns as ot

# variable X0
PDistribName = 'Normal'
P_mean = 1.0e5
P_std = 2000.0
PDistParams = [P_mean, P_std]
# variable X1
levDistribName = 'Normal'
lev_mean = 0.27
lev_std = 3.5e-2  # good: 1.5e-2
levDistParams = [lev_mean, lev_std]
# variable X2
mDistribName = 'LogNormal'
m_mu = 0.1
m_sigma = 1.0
mDistParams = [m_mu, m_sigma]
# variable X3
alphaDistribName = 'Uniform'
alpha_min = 0.047
alpha_max = 0.75
alphaDistParams = [alpha_min, alpha_max]
# variable X4
betaDistribName = 'Uniform'
beta_min = 0.7
beta_max = 0.9
betaDistParams = [beta_min, beta_max]
# variable X5
gammaDistribName = 'Uniform'
gamma_min = 0.29
gamma_max = 0.7
gammaDistParams = [gamma_min, gamma_max]
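For completeness, these parameters are assembled into the joint prior along these lines (taking m_mu and m_sigma as the log-scale parameters of ot.LogNormal); this also defines the priorDistrib, initialState and Nd used below:
priorDistrib = ot.ComposedDistribution([
    ot.Normal(P_mean, P_std),          # X0
    ot.Normal(lev_mean, lev_std),      # X1
    ot.LogNormal(m_mu, m_sigma),       # X2
    ot.Uniform(alpha_min, alpha_max),  # X3
    ot.Uniform(beta_min, beta_max),    # X4
    ot.Uniform(gamma_min, gamma_max),  # X5
])
initialState = priorDistrib.getMean()  # chains start from the prior mean
Nd = priorDistrib.getDimension()       # 6 variables, one Gibbs block each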
The random walk Metropolis-Hastings and Gibbs sampler definition I tried is the following (note that I chose proposal distributions having the same mean as the priors):
propStd = 1.0
proposal = [
    ot.Normal(1.0e5, propStd),   # X0
    ot.Normal(0.27, propStd),    # X1
    ot.Normal(0.1, propStd),     # X2
    ot.Normal(0.3985, propStd),  # X3
    ot.Normal(0.8, propStd),     # X4
    ot.Normal(0.495, propStd),   # X5
]
mh_coll = [
    ot.RandomWalkMetropolisHastings(priorDistrib, initialState, proposal[i], [i])
    for i in range(Nd)
]
for mh in mh_coll:
    mh.setLikelihood(conditional, Yexp, linkFunction)
sampler = ot.Gibbs(mh_coll)
sampleSize = int(5e4)
sampler.setBurnIn(int(sampleSize / 5))
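Then I draw the sample and look at the per-block acceptance rates, roughly like this:
sample = sampler.getSample(sampleSize)
# one RandomWalkMetropolisHastings block per calibrated variable
for i, mh in enumerate(mh_coll):
    print("X%d acceptance rate: %.3f" % (i, mh.getAcceptanceRate()))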
Also, to be complete, the linkFunction is a Kriging metamodel mapping the 6 uncertain inputs to the 2 quantities of interest. Of course, the experimental data used for calibration lie within the range of the quantities of interest obtained by propagating these prior distributions.
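Schematically, it is built with one Kriging metamodel per quantity of interest (Xtrain and Ytrain are placeholders for my design of experiments):
metamodels = []
for j in range(2):
    basis = ot.ConstantBasisFactory(6).build()
    cov = ot.SquaredExponential([1.0] * 6)
    algo = ot.KrigingAlgorithm(Xtrain, Ytrain.getMarginal(j), cov, basis)
    algo.run()
    metamodels.append(algo.getResult().getMetaModel())
linkFunction = ot.AggregatedFunction(metamodels)  # 6 inputs -> 2 outputs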
I get really small acceptance rates with this set of parameters…
Thanks again!
Best regards,
Elie