IntegrationStrategy guidelines

Are there any tips on which DOEs to use with IntegrationStrategy?

In the integration example with the cantilever beam from the documentation, GaussProductExperiment works great.

But if I try to use anything else (MC, QMC, LHS), the Q2 is way off.

I tried to play with the parameters a bit more on the Ishigami example, but here GaussProduct is the worst, whereas LHS works OK-ish for Integration (and for LSQ).

t_FunctionalChaos_ishigami.py (4.6 KB)

indexMax : 165
basisDimension : 35
#
LeastSquaresStrategy CleaningStrategy GaussProductExperiment
residuals= [0.00326924]
relativeErrors= [0.000148401]
q2= 0.6742196972690251
#
LeastSquaresStrategy CleaningStrategy MonteCarloExperiment
residuals= [0.00398503]
relativeErrors= [0.000200195]
q2= 0.9996719545183788
#
LeastSquaresStrategy CleaningStrategy LHSExperiment
residuals= [0.00384169]
relativeErrors= [0.00023227]
q2= 0.999748079348371
#
LeastSquaresStrategy CleaningStrategy LowDiscrepancyExperiment
residuals= [0.00405544]
relativeErrors= [0.00023333]
q2= 0.999570535163289
#
IntegrationStrategy CleaningStrategy GaussProductExperiment
residuals= [0.429421]
relativeErrors= [0]
q2= 0.13608543870079193
#
IntegrationStrategy CleaningStrategy MonteCarloExperiment
residuals= [0.458122]
relativeErrors= [0]
q2= 0.3015436285332618
#
IntegrationStrategy CleaningStrategy LHSExperiment
residuals= [0.220629]
relativeErrors= [0]
q2= 0.2585766677022334
#
IntegrationStrategy CleaningStrategy LowDiscrepancyExperiment
residuals= [0.242792]
relativeErrors= [0]
q2= 0.7110390071876536
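
For reference, here is a minimal sketch of this kind of comparison (not the attached script itself; the indexMax/basisDimension values above are assumed to map to the CleaningStrategy arguments, and the exact signatures of CleaningStrategy, FunctionalChaosAlgorithm and MetaModelValidation vary across OpenTURNS releases):

```python
import math
import openturns as ot

# Ishigami function and its input distribution
model = ot.SymbolicFunction(
    ["x1", "x2", "x3"],
    ["sin(x1) + 7.0 * sin(x2)^2 + 0.1 * x3^4 * sin(x1)"],
)
distribution = ot.ComposedDistribution([ot.Uniform(-math.pi, math.pi)] * 3)
dim = distribution.getDimension()

# Orthogonal polynomial basis; CleaningStrategy scans the first 165
# candidate indices (indexMax) and keeps at most 35 terms (basisDimension)
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
     for i in range(dim)]
)
adaptive = ot.CleaningStrategy(basis, 165, 35, 1.0e-4)

# Pick one weighted experiment...
experiment = ot.GaussProductExperiment(distribution, [6] * dim)  # 6^3 = 216 nodes
# experiment = ot.MonteCarloExperiment(distribution, 1000)
# experiment = ot.LHSExperiment(distribution, 1000)
# experiment = ot.LowDiscrepancyExperiment(ot.SobolSequence(dim), distribution, 1024)

# ... and one projection strategy
projection = ot.IntegrationStrategy(experiment)
# projection = ot.LeastSquaresStrategy(experiment)

X, weights = experiment.generateWithWeights()
Y = model(X)
algo = ot.FunctionalChaosAlgorithm(X, weights, Y, distribution, adaptive, projection)
algo.run()
result = algo.getResult()
print("residuals=", result.getResiduals())
print("relativeErrors=", result.getRelativeErrors())

# Q2 on an independent test sample (computePredictivityFactor was replaced by
# computeR2Score, with a new constructor, in the most recent OpenTURNS releases)
X_test = distribution.getSample(10000)
validation = ot.MetaModelValidation(X_test, model(X_test), result.getMetaModel())
print("q2=", validation.computePredictivityFactor())
```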

Never mind: with MC, the sample size must be increased to about 1e6 to get a decent Q2 (versus only 256 nodes with GaussProduct).
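
To put numbers on this: in OpenTURNS the size of a GaussProductExperiment is the product of the marginal sizes, whereas the Monte Carlo size is chosen directly. A small illustration (the Normal marginals here are just placeholders, not the beam's actual distributions):

```python
import openturns as ot

# A 4-dimensional input, as in the cantilever beam example
distribution = ot.ComposedDistribution([ot.Normal()] * 4)

# Tensorized Gauss rule: total size = product of the marginal sizes
gauss = ot.GaussProductExperiment(distribution, [4] * 4)
print(gauss.generate().getSize())  # 4^4 = 256 nodes

# Monte Carlo: the size is set directly and must be much larger here
mc = ot.MonteCarloExperiment(distribution, 1000000)
print(mc.generate().getSize())  # 1000000
```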

Hi Julien,
I wanted to see how the different sampling methods compare in terms of the convergence of the Q^2 coefficient. I ran some experiments in the following notebook. Based on your simulations, I expect the tensorized Gauss rule to reach a given accuracy with a much smaller sample size, but the actual speed of convergence is unknown to me in the cantilever beam case.

So I used four different sampling methods:

  • Gauss product
  • Monte-Carlo sampling
  • Latin hypercube sampling
  • Quasi Monte Carlo with a Sobol’ sequence

Since some of these methods are random, I repeated each experiment 5 times.
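
For what it's worth, a rough sketch of what such a convergence loop can look like (self-contained with the Ishigami function rather than the beam; compute_q2 is a hypothetical helper name, and the MetaModelValidation API differs in the newest releases):

```python
import math
import openturns as ot

# Ishigami test case (the actual study used the cantilever beam)
model = ot.SymbolicFunction(
    ["x1", "x2", "x3"],
    ["sin(x1) + 7.0 * sin(x2)^2 + 0.1 * x3^4 * sin(x1)"],
)
distribution = ot.ComposedDistribution([ot.Uniform(-math.pi, math.pi)] * 3)
dim = distribution.getDimension()
basis = ot.OrthogonalProductPolynomialFactory(
    [ot.StandardDistributionPolynomialFactory(distribution.getMarginal(i))
     for i in range(dim)]
)

def compute_q2(experiment, total_degree=10, n_test=10000):
    """Fit a chaos by integration on the given experiment and return its Q2."""
    basis_size = basis.getEnumerateFunction().getStrataCumulatedCardinal(total_degree)
    adaptive = ot.FixedStrategy(basis, basis_size)
    X, weights = experiment.generateWithWeights()
    algo = ot.FunctionalChaosAlgorithm(
        X, weights, model(X), distribution, adaptive, ot.IntegrationStrategy(experiment)
    )
    algo.run()
    X_test = distribution.getSample(n_test)
    validation = ot.MetaModelValidation(
        X_test, model(X_test), algo.getResult().getMetaModel()
    )
    return validation.computePredictivityFactor()[0]

# Random designs: repeat 5 times and average (the Gauss rule is deterministic,
# so a single run per size is enough for it)
n_repeat = 5
for size in [100, 1000, 10000]:
    q2_list = [compute_q2(ot.MonteCarloExperiment(distribution, size))
               for _ in range(n_repeat)]
    print(size, sum(q2_list) / n_repeat)
```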

This is the result:

[Figure: convergence of the Q^2 coefficient with the sample size, for the four sampling methods]

We see that the tensorized Gauss rule leads to a Q^2 coefficient which is very close to 1 with as few as approximately 500 nodes. The Sobol’ sequence needs approximately 10^4 points to get Q^2 > 0.9. Both LHS and Monte-Carlo need more than 10^5 points. Overall, the ranking is as follows:

\textrm{Gauss} > \textrm{Sobol'} > \textrm{Monte-Carlo} \approx \textrm{LHS}.

We will see how Smolyak's quadrature behaves once it becomes available.

Best regards,

Michaël