In that case, we would define \widehat{VT}_{i}^{alt} as \frac{1}{N} \sum_{k=1}^N \tilde{G}(\boldsymbol{A}_k)^2 - \hat{V}_{-i}^{alt} (keep in mind that \sum_{k=1}^N \tilde{G}(\boldsymbol{A}_k) is supposed to be null). Therefore
Of course, in OpenTURNS we typically use unbiased variance estimators, so \frac{1}{N} should be replaced by \frac{1}{N-1} in all the formulas. Not that it matters much with the large N involved in Sobol' estimation.
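To see how little the normalization matters at the sample sizes used here, a standalone NumPy sketch (not OpenTURNS code; the array name y is an assumption) comparing the two estimators:

```python
import numpy as np

# Illustration (not OpenTURNS code): for the large N used in Sobol'
# estimation, the biased (1/N) and unbiased (1/(N-1)) variance
# estimators are practically indistinguishable.
rng = np.random.default_rng(0)
N = 10_000
y = rng.normal(size=N)

ss = np.sum((y - y.mean()) ** 2)
var_biased = ss / N          # the 1/N version used in the formulas above
var_unbiased = ss / (N - 1)  # the unbiased version used by OpenTURNS

# The relative difference is exactly 1/N, i.e. 1e-4 here.
print((var_unbiased - var_biased) / var_unbiased)
```

The relative gap between the two estimators is exactly 1/N, so at typical Sobol' sample sizes it is far below the Monte Carlo noise.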
In the code, q is the index of the output marginal. For the sake of simplicity, let us assume that the output is one-dimensional, so q = 0. The input index (denoted by i in the post above) is here denoted by p, i.e. p = i.
Currently, \tilde{G} is centered with respect to the full sample (G(\boldsymbol{A}), G(\boldsymbol{B}), G(\boldsymbol{E}^1), \ldots, G(\boldsymbol{E}^{n_X}))^T:
Case 1: \tilde{G} centered with respect to the sample \boldsymbol{A}
In this case, we assume that the code snippet above is changed in order to center the output sample with respect to \boldsymbol{A}.
That is to say that \tilde{G}(\cdot) = G(\cdot) - \frac{1}{N} \sum_{k=1}^N G(\boldsymbol{A}_k). In this case we would have:
The tip here, as for the first order indices, consists in replacing \left(\frac{1}{N}\sum_{k=1}^N \tilde{G}(\boldsymbol{A}_k)\right)^2 by \frac{1}{N}\sum_{k=1}^N \tilde{G}(\boldsymbol{A}_k)\,\tilde{G}(\boldsymbol{B}_k). Thus we get:
If we look, for example, at the R package sensitivity, this is how the estimator is implemented.
However, in OpenTURNS, we replace the first term \frac{1}{N}\sum_{k=1}^N \tilde{G}(\boldsymbol{A}_k)^2 by \mathrm{var}(y_a) + \mu_a^2. In other words, we should implement the correct estimator, given by the previous equation.
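As a quick sanity check (a standalone NumPy sketch, not the OpenTURNS implementation; the array name y_a is an assumption), the identity \frac{1}{N}\sum_{k=1}^N y_k^2 = \mathrm{var}(y_a) + \mu_a^2 holds exactly only when the variance uses the 1/N normalization:

```python
import numpy as np

# Standalone check (not OpenTURNS code): with the biased 1/N variance,
# mean(y^2) == var(y) + mean(y)^2 exactly; with the unbiased 1/(N-1)
# variance there is a small O(1/N) discrepancy.
rng = np.random.default_rng(0)
y_a = rng.normal(loc=0.3, size=1000)

mean_of_squares = np.mean(y_a ** 2)
biased = y_a.var(ddof=0) + y_a.mean() ** 2    # matches up to rounding
unbiased = y_a.var(ddof=1) + y_a.mean() ** 2  # off by var/N

print(np.isclose(mean_of_squares, biased))    # True
print(np.isclose(mean_of_squares, unbiased))  # False (O(1/N) mismatch)
```

This is why substituting var(y_a) + \mu_a^2 for the mean of squares is only exact under the 1/N convention; with the unbiased estimator the two terms differ by a factor of order 1/N.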