There is an interesting methodological discussion going on over at GitHub:
@MichaelBaudin and @regislebrun wrote interesting contributions that can be read more easily below thanks to Discourse’s LaTeX support.
First message by @MichaelBaudin
The “FORM explained” example is very detailed and interesting:
An illustrated example of a FORM probability estimate — OpenTURNS 1.24dev documentation
It can, however, be improved. The formula:

P_f = \Phi(-\beta)

should be avoided, because the FORM probability depends on whether the origin of the standard space is in the failure domain or not. The full explanation is: if h is linear in the standard space, then:

P_f = \begin{cases} \Phi(-\beta) & \text{if the origin of the standard space is in the safe domain,} \\ \Phi(\beta) & \text{if the origin of the standard space is in the failure domain,} \end{cases}
where \Phi is the CDF of the standard Gaussian (which is, I believe, the standard notation for it: Normal distribution - Wikipedia).
(This was, if memory is correct, a bug in earlier versions of OT.) Of course, when the origin of the standard space is in the failure domain, the structure has more than a 50% chance of failure: this kind of structure rarely happens in practice, at least if OpenTURNS is used.
Secondly, the variable name pf should be avoided, because it is not the probability of failure but an estimate of this probability. This example is for training purposes, therefore I guess that it should be impeccable. This is why I suggest fixing the text and equations and using the following code:
isOriginFail = result.getIsStandardPointOriginInFailureSpace()
normal = ot.Normal()
if isOriginFail:
    pf_FORM = normal.computeCDF(beta)
else:
    pf_FORM = normal.computeCDF(-beta)
Second message by @MichaelBaudin
It may use the StandardEvent class, which provides the function in the U-space:

standardEvent = ot.StandardEvent(event)
limitStateFunction = standardEvent.getFunction()
Improving the example would be possible, but I suggest waiting until the generalized draw features of events are created, probably during 2021.
Here is the code for plotting the event in the U-space in the cantilever beam example:
event = ot.ThresholdEvent(Y, ot.Greater(), 20.0)  # Lower threshold to make it fail more often.
alpha = 1.0 - 0.001
standardDistribution = ot.Normal(4)
bounds, marginalProb = standardDistribution.computeMinimumVolumeIntervalWithMarginalProbability(alpha)
standardEvent = ot.StandardEvent(event)
import otbenchmark as otb  # with "pip install otbenchmark"
drawEvent = otb.DrawEvent(standardEvent)
_ = drawEvent.draw(bounds)
It produces a plot of the event in the U-space.
Third message by @MichaelBaudin
The equation:

p_f = (1 - \Phi(\beta_{HL})) \cdot \mathrm{coeff}

should be fixed into:

p_f = \Phi(-\beta_{HL}) \cdot \mathrm{coeff}
The code:
pf = (1.0 - ot.Normal().computeCDF(betaHL)) * coeff
should be written:
pf = ot.Normal().computeCDF(-betaHL) * coeff
because of the loss of accuracy of floating-point numbers near 1: when \Phi(\beta_{HL}) is close to 1, the subtraction 1 - \Phi(\beta_{HL}) cancels most of the significant digits, whereas \Phi(-\beta_{HL}) is computed directly in the tail.
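To see the cancellation concretely, here is a minimal stdlib-only sketch (the function names are mine, not OpenTURNS code): the standard normal CDF written with erf, and its upper tail written with erfc.

```python
from math import erf, erfc, sqrt


def phi(x):
    # Standard normal CDF, Phi(x), via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def phi_tail(beta):
    # Phi(-beta) computed directly with erfc: no subtraction near 1,
    # so the tiny tail probability keeps its relative accuracy.
    return 0.5 * erfc(beta / sqrt(2.0))


beta = 10.0
naive = 1.0 - phi(beta)    # Phi(10) rounds to 1.0 in double precision,
                           # so the naive subtraction returns exactly 0.0
accurate = phi_tail(beta)  # the true tail is tiny but nonzero (~1e-23)
```

The same reasoning is why `ot.Normal().computeCDF(-betaHL)` is preferable to `1.0 - ot.Normal().computeCDF(betaHL)`.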
@regislebrun’s reply
IMO it is time to recall some basics about FORM/SORM approximations, as I see the discussion drifting onto shifting sands.
The actual theorem on the asymptotic approximation of the probability of a rare event of the form F(X)>s is due to Karl Breitung and reads:
- If U is distributed according to a multivariate normal distribution \mathcal{N}(0,1) of dimension n,
- if the domain D=\{F(U)>s\} is a scaled version of a reference domain D_0 at unit distance of the origin (d(O, D_0)=1 and D=\beta D_0),
- if O \notin D_0,
- if there is a unique point U^{0*} \in D_0 closest to the origin O,
- if \kappa^0_i (i=1, \dots, n-1) are the main curvatures of D_0 at U^{0*},
then P(U\in D)=\Phi(-\beta)\prod_{i=1}^{n-1}\frac{1}{\sqrt{1+\kappa^0_i}}(1+o(1)) when \beta \to \infty.
If a finite number of points U^* exist, then one has to sum the contributions over all these points. Four remarks here:
- The sign of the curvature depends on the orientation chosen for the boundary of the domain, so you may read \sqrt{1-\kappa^0_i} in some places.
- The point U^{0*} \in D_0 maps to the point U^*= \beta U^{0*} \in D, which is our well-known design point, and the main curvatures of D at U^* satisfy \kappa_i(\beta)=\kappa^0_i/\beta. The classical presentation of Breitung’s formula is misleading as it suggests that the product tends to zero as \beta grows when the curvatures are positive and is not defined when one of the curvatures is negative.
- The FORM approximation is an approximation of Breitung’s formula where the main curvatures are nullified. It is no longer an asymptotic result, and it becomes very questionable when n is large, as the discarded product may become very small even if all the curvatures are close to zero. Another way to look at the link between FORM and SORM is to see SORM as a geometric correction to FORM, but it is the other way around that is correct: FORM is a degraded SORM.
- When considering a random vector X with a general distribution, a first step is to map X into a vector U=T(X) with a multivariate standard normal distribution, hence the notion of isoprobabilistic transformation (Rosenblatt, Nataf).
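The relation between the two approximations in the remarks above can be sketched numerically as follows (a stdlib-only sketch with names of my choosing, assuming \beta and the scaled curvatures \kappa^0_i of D_0 are given):

```python
from math import erfc, sqrt


def p_form(beta):
    # FORM: P ~ Phi(-beta), using erfc to keep accuracy for large beta.
    return 0.5 * erfc(beta / sqrt(2.0))


def p_sorm_breitung(beta, curvatures0):
    # SORM (Breitung): FORM times the geometric factor
    # prod_i 1/sqrt(1 + kappa0_i), which is independent of beta
    # (kappa0_i = beta * kappa_i(beta) stays fixed as beta grows).
    factor = 1.0
    for kappa0 in curvatures0:
        factor /= sqrt(1.0 + kappa0)
    return p_form(beta) * factor


# With all curvatures nullified, SORM reduces to FORM:
flat = p_sorm_breitung(3.0, [0.0, 0.0, 0.0])
# A boundary curved away from the origin (positive kappa0) lowers the estimate:
curved = p_sorm_breitung(3.0, [0.5, 0.5, 0.5])
```

This makes the multiplicative structure visible: the curvature factor is a fixed geometric correction, not something that vanishes or blows up with \beta.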
During my PhD, I extended this setting to the case where U has a general spherical distribution with a decreasing radial PDF (such as the Student(\nu, 0, 1) distribution): Breitung’s formula is still valid in this case. So, given a random vector X with an arbitrary distribution, one can build T to target any U with such a spherical distribution. Using the Rosenblatt transformation, one always gets a multivariate standard normal distribution, and using the generalized Nataf transformation associated with the standard univariate elliptical distribution whose CDF is E, one gets the associated multivariate standard spherical distribution, provided this transformation is applied to a random vector X with an elliptical copula of the same family as E.
In the case where E=\Phi, the standard normal distribution CDF, the Nataf transformation maps a vector X with a normal copula into a multivariate standard normal distribution and is point-wise equal to the Rosenblatt transformation (another result in my PhD), but with a more efficient algebraic formulation. The generalized Nataf transformation keeps this efficient algebraic formulation, so it should be used when performance is critical, e.g. when building meta-models embedding the transformation (PCE). Now you know the origin of the notation E for the marginal CDF used in the FORM approximation. You have E=\Phi if X has a normal copula, E=F_{\nu} if X has a Student(\nu) copula, and so on. This is why the FORM and SORM approximations should be written:
P_{FORM}=E(-\beta)
P_{SORM}=E(-\beta)\prod_{i=1}^{n-1}\frac{1}{\sqrt{1+\kappa^0_i}}
to express the full generality of the results, to show the geometric multiplicative factor between the two approximations, and to show that this factor is independent of \beta. I am well aware that this presentation is unusual, but it is the best I have found up to now, after all these years working on the subject…