*This question is related to Issue #1615, but not limited to it.*

*Some of the people present at the technical committee meeting on October 13th, 2020 (disclaimer: I am one of them) have argued that some methods in OpenTURNS should be allowed to produce infinity (+INF or -INF) as output, or else take infinity as an input Scalar (which is the OT name for a float).*

*This would represent a change in OpenTURNS coding policy and the discussion could not be finished during the technical committee meeting. The main arguments have been summarized on the linked Github issue, and I reproduce them below. Thanks to @MichaelBaudin for the summary.*

There are two correct strategies to deal with exceptional cases, such as when `computeLogPDF` is applied to a point outside the distribution support:

- generate exceptions,
- generate exceptional floating point numbers such as INFs and NANs.
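
To make the contrast concrete, here is a minimal sketch of the two strategies for a toy uniform log-PDF (the function names are illustrative, not the actual OpenTURNS API):

```python
import math

def log_pdf_uniform(x, a=0.0, b=1.0):
    """Exceptional-value strategy: return -inf outside the support."""
    if a <= x <= b:
        return -math.log(b - a)
    return -math.inf

def log_pdf_uniform_raising(x, a=0.0, b=1.0):
    """Exception strategy: raise outside the support."""
    if not (a <= x <= b):
        raise ValueError(f"{x} is outside the support [{a}, {b}]")
    return -math.log(b - a)

print(log_pdf_uniform(0.5))   # 0.0
print(log_pdf_uniform(2.0))   # -inf
```

With the first strategy, `-inf` flows through subsequent arithmetic; with the second, the caller must handle the error explicitly at the call site.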

There are also wrong strategies:

- A wrong strategy is to produce warnings, because these rather silent messages go unnoticed by most users.
- A wrong strategy is to produce MaxScalar when the result is INF, because using a finite floating point number is very different from using INF. For example, we may use the following code to shrink MaxScalar down to a value very close to 1:

```
x = MaxScalar
for i in range(1024):
    x /= 2.0
```

This is impossible when `x` is INF (halving INF yields INF forever), which is good.
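
The difference can be checked directly in Python, using `sys.float_info.max` as a stand-in for MaxScalar (1024 exact halvings bring it just below 1):

```python
import math
import sys

x = sys.float_info.max      # stand-in for MaxScalar
for _ in range(1024):
    x /= 2.0                # exact halving of a normal double
print(x)                    # a finite value just below 1.0

y = math.inf
for _ in range(1024):
    y /= 2.0                # inf / 2.0 is still inf
print(y)                    # inf
```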

Here are the arguments against INF and NANs:

*An INF or a NAN is not a real number. Hence, this is not what most users expect as the return value of a function. However, because we always perform computations with floating point numbers, nobody should actually expect a real number as the output of a C++ or Python function returning a float. Using them slows down the computation, because libraries perform poorly when dealing with exceptional floating point numbers.* @regislebrun: This assertion requires some factual proof.

*We do not know how OT dependencies (e.g. NLOPT, etc.) react to these exceptional numbers.* @regislebrun: This assertion requires some factual proof, with examples of poor handling of exceptional floating point numbers.
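
Absent such proof, a defensive pattern is to guard the callbacks handed to a dependency, so that exceptional values raise instead of silently propagating into it. A minimal sketch (the wrapper name is hypothetical; nothing here is NLOPT's actual API):

```python
import math

def finite_guard(f):
    """Wrap a callback so that INF/NAN results raise instead of propagating."""
    def wrapped(x):
        value = f(x)
        if not math.isfinite(value):
            raise ValueError(f"callback returned {value} at x={x}")
        return value
    return wrapped

safe = finite_guard(lambda x: x * 1e308 * 10.0)  # overflows to inf for x >= 1.0
try:
    safe(2.0)
except ValueError as exc:
    print("caught:", exc)
```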

The `lowest()` field is provided by `std::numeric_limits`. However, according to its documentation, for floating-point types it is "implementation-dependent; generally, the negative of `max()`". Hence, the code in OT would be:

```
const Scalar SpecFunc::Lowest = -MaxScalar;
```

Then, we would use `Lowest` in place of `-MaxScalar` wherever appropriate.
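
For IEEE 754 doubles, negation only flips the sign bit, so `-max()` is exact and the identity `lowest() == -max()` holds. This can be checked from Python, again using `sys.float_info.max` as a stand-in for MaxScalar:

```python
import sys

max_scalar = sys.float_info.max   # numeric_limits<double>::max()
lowest = -max_scalar              # candidate numeric_limits<double>::lowest()
print(max_scalar)                 # 1.7976931348623157e+308
print(lowest)                     # -1.7976931348623157e+308
print(lowest < -1e308)            # True: below any "ordinary" value
```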

Another option would be to provide a build flag which raises an exception each time an INF is produced.
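
As a runtime analogue of such a build flag, NumPy lets one turn the production of exceptional values into exceptions via `numpy.seterr` (shown purely as an illustration; it only covers NumPy operations, not arbitrary OT code):

```python
import numpy as np

# Raise FloatingPointError instead of quietly producing inf/nan.
np.seterr(divide="raise", over="raise", invalid="raise")
try:
    np.log(np.float64(0.0))   # would quietly yield -inf under default settings
except FloatingPointError as exc:
    print("caught:", exc)
```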

Below is a reference which might be related:

*Here are further arguments against the policy change from @regislebrun:*

We all should definitely read the series of posts here, in particular this one: https://randomascii.wordpress.com/2012/05/20/thats-not-normalthe-performance-of-odd-floats/

As I said during the technical committee, I learned from experience that exceptional floating point numbers carry a huge performance penalty. But that was based on quite old experiments, when x87 was the norm and SSE2 was not yet widespread. You can read in the article above that the performance penalty used to be up to a 900x slowdown. Nowadays, it looks like NaNs and Infs are processed at full speed.
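
This claim can be probed with a crude micro-benchmark. Note that Python interpreter overhead dominates here, so this only sketches the shape of such a test rather than giving a meaningful hardware measurement (the large slowdowns reported for native code mostly concern subnormals):

```python
import math
import timeit

# Time the same arithmetic on a normal value, on inf, and on a subnormal.
reps = {"number": 100_000}
t_normal = timeit.timeit("x * 0.5 + 1.0", globals={"x": 1.5}, **reps)
t_inf = timeit.timeit("x * 0.5 + 1.0", globals={"x": math.inf}, **reps)
t_subn = timeit.timeit("x * 0.5 + 1.0", globals={"x": 5e-324}, **reps)
print(t_normal, t_inf, t_subn)  # three timings, in seconds
```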

Those who are too optimistic about the guarantees provided by a standard regarding e.g. the reproducibility of a computation should read this article, or at least have a look at the final flowchart: https://randomascii.files.wordpress.com/2012/03/image6.png

Remember that the different rounding modes are not taken into account in this flowchart!

Concerning the added value of exceptions, read this, already referenced in the summary. **Note that the author underlines the utility of letting inf sneak into the computation from time to time**, so the use of inf is **not** inherently evil, but its global propagation **is**.
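
An example of inf usefully sneaking into a computation without propagating globally: letting the intermediate `exp` overflow to infinity still yields the correct value of the logistic function (a sketch of my own, not taken from the referenced post):

```python
import math

def logistic(x):
    """1 / (1 + exp(-x)), letting the intermediate exp overflow to inf."""
    try:
        e = math.exp(-x)
    except OverflowError:       # math.exp raises instead of returning inf
        e = math.inf
    return 1.0 / (1.0 + e)     # 1 / (1 + inf) correctly evaluates to 0.0

print(logistic(0.0))      # 0.5
print(logistic(-1000.0))  # 0.0
print(logistic(1000.0))   # 1.0
```

Here inf appears only as a transient intermediate and is immediately absorbed; the final result is an ordinary finite number.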