Why does the LHS class exist?

Hi!
The LHS class estimates a probability using a Latin Hypercube Design.
On the other hand, the ProbabilitySimulationAlgorithm can use a Latin Hypercube Design to estimate a probability, but can use other WeightedExperiments such as a LowDiscrepancyExperiment.
It seems to me that the LHS class has no purpose: what feature does the LHS class provide that is not covered by ProbabilitySimulationAlgorithm?
Regards,
Michaël

Hello Michaël,

You have a point, it might be misleading for the users only wanting to generate LHDs to have a class dedicated to rare event probability estimation simply named LHS.

I also had a naive question regarding LHS in OpenTURNS: is there a reason why the LHSExperiment class does not accept non-independent distributions? The LowDiscrepancyExperiment does not have this constraint.

Best,
Elias

I agree: this is as misleading as possible. Do you agree that the LHS class is redundant? More precisely, is the algorithm designed specifically for LHS designs of experiments, or would ProbabilitySimulationAlgorithm with an LHSExperiment produce the same algorithm?

I think you are also right about the non-independence restriction of LHSExperiment: a transformation into the independent copula could be applied, as is done in LowDiscrepancyExperiment.

I kindly suggest creating two new bug reports, so that these conclusions can be taken into account.

Are there any other inconsistencies that you found in the reliability section of the library?

Regards,
Michaël

Hi Michaël,

Yes, I totally agree: the LHS class serves the same purpose as ProbabilitySimulationAlgorithm with an LHSExperiment.

Two other questions/remarks:

  • What is the difference between the PostAnalyticalSimulation class and ProbabilitySimulationAlgorithm with an ImportanceSamplingExperiment? If there is a difference, it is not easy to understand.
  • I wish the input-output samples evaluated by any algo.run() of a reliability class could be stored and made accessible. It is very important for the user to understand how the algorithm behaves, and possibly to visualize the failure domain. Currently, the only way I know to do this is by wrapping the limit-state function in a MemoizeFunction, which is not straightforward.

Thanks in advance for the GitHub tickets!
Cheers,
Elias

Hi Elias,

  • The PostAnalyticalSimulation class implements the FORM-IS algorithm, i.e. run FORM, get the design point, and use it to run an importance sampling method. There are two ways to implement this idea: 1) PostAnalyticalImportanceSampling implements it directly; 2) PostAnalyticalControlledImportanceSampling uses extra tricks, but I do not remember exactly which (and the help does not help!).
  • I understand your need for an (input, output) history method for reliability algorithms. Notice that this is not specific to reliability algorithms: it applies to any algorithm. One of the difficulties in implementing it is that the history is specific to each algorithm. For example, in the subset method, each sub-sample has its own meaning, so a list of Sample objects would be required. In the subset example, the code cuts the history into pieces based on prior knowledge of the algorithm. I think that if someone really wants the feature, implementing it in the library will get done. Meanwhile, I would implement a SubsetSimulationWithHistory algorithm with getInputHistory() and getOutputHistory() methods. In the same spirit, the getKrigingResult() method was added in OT 1.17 (according to the ChangeLog) in order to get the kriging surrogate produced at the end of the algorithm. Do you have a specific reliability algorithm in mind for which post-processing the MemoizeFunction is not possible at all?

Regards,

Michaël

Hi Michaël,

Thanks for your answer. PostAnalyticalImportanceSampling seems a bit more straightforward, but FORM still has to be run before it. I'm basing this on a comparison of the two following examples:

https://openturns.github.io/openturns/latest/auto_reliability_sensitivity/reliability/plot_post_analytical_importance_sampling.html#sphx-glr-auto-reliability-sensitivity-reliability-plot-post-analytical-importance-sampling-py

https://openturns.github.io/openturns/latest/user_manual/_generated/openturns.ProbabilitySimulationAlgorithm.html#openturns.ProbabilitySimulationAlgorithm

About the input/output history for reliability analysis, I think it is even more needed here than when fitting a Kriging model, since the reliability algorithm itself determines the points at which the limit-state function is evaluated. For the adaptive sampling methods SubsetSampling and NAIS, we could name the accessor methods differently. E.g., .getInputSubsets() (resp. .getOutputSubsets()) could either return a list of ot.Sample objects or a single ot.Sample with an additional column holding the subset index.

Best,
Elias

Let's deprecate the LHS class.


Hi!
There are two updates on this topic:

Regards,
Michaël


Hi!
The issue "NAIS/CrossEntropy: add samples accessors for each step with selection" was created to provide the same feature for NAIS.
Regards,
Michaël
