Relative vs absolute tolerance in unit tests

Hello,

In OT’s unit tests, we frequently have statements that look like this:

assert_almost_equal(a, b, 1e-12, 1e-12)

My understanding of this matter, which is entirely due to a recent training session by @MichaelBaudin, is that relative tolerance (the first one in assert_almost_equal) should generally be preferred to absolute tolerance, except when the expected value is 0 (in which case a relative tolerance makes no sense).
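For example (with hypothetical values, and using numpy.isclose, whose convention is abs(a - b) <= atol + rtol * abs(b) — not necessarily OT’s exact formula), a relative tolerance scales with the magnitude of the reference value, while an absolute one does not:

    import numpy as np

    b = 1.0e6                    # reference value far from zero
    a = b * (1.0 + 1.0e-13)      # relative error ~1e-13, absolute error ~1e-7

    # A relative tolerance of 1e-12 accepts this result...
    print(np.isclose(a, b, rtol=1.0e-12, atol=0.0))   # True
    # ...while an absolute tolerance of 1e-12 alone rejects it,
    # even though a matches b to about 13 significant digits.
    print(np.isclose(a, b, rtol=0.0, atol=1.0e-12))   # False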

What is the reasoning behind having both relative and absolute tolerance?

It is meant to deal with the case where the value is very small but not zero, in order to keep a kind of continuity in how the correctness of a result is evaluated: a purely relative criterion becomes ever harder to satisfy as the reference value approaches zero, so the absolute tolerance takes over in that regime.
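For instance, if the check follows the usual convention abs(a - b) <= atol + rtol * abs(b) (the numpy.isclose convention; I have not verified that OT uses exactly this formula), a fixed absolute error keeps being accepted as the reference value shrinks toward zero, whereas a purely relative check suddenly starts rejecting it:

    def is_close(a, b, rtol, atol):
        # Hypothetical combined check: the relative term rtol * |b|
        # dominates for large |b|, and the absolute term atol takes
        # over as b approaches zero.
        return abs(a - b) <= atol + rtol * abs(b)

    for b in [1.0, 1.0e-6, 1.0e-12, 0.0]:
        a = b + 1.0e-13    # same absolute error of 1e-13 in every case
        combined = is_close(a, b, 1.0e-12, 1.0e-12)
        relative_only = is_close(a, b, 1.0e-12, 0.0)
        print(b, combined, relative_only)
    # combined:      True, True, True, True
    # relative only: True, False, False, False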

To summarize:

  • if b is “far” from zero, use assert_almost_equal(a, b, 1e-12, 0.0)
  • if b is nonzero but “close” (what exactly does this mean?) to zero, use assert_almost_equal(a, b, 1e-12, 1e-12)
  • if b==0, use assert_almost_equal(a, b, 0.0, 1e-12)

Is this correct?

Yes. And what “close to zero” means is context dependent; at some point there is a user with a brain in the loop!
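
To make the summary above concrete, here is a minimal sketch of the three cases with made-up values, assuming openturns.testing.assert_almost_equal takes the same positional (value, reference, relative tolerance, absolute tolerance) arguments as in the snippet at the top of this thread:

    import openturns.testing as ott

    # Case 1: reference far from zero -> relative tolerance only
    a = 42.0 * (1.0 + 1.0e-14)      # tiny relative error
    ott.assert_almost_equal(a, 42.0, 1e-12, 0.0)

    # Case 2: reference nonzero but close to zero -> keep both tolerances
    a = 3.5e-11 + 1.0e-13           # small absolute error on a small value
    ott.assert_almost_equal(a, 3.5e-11, 1e-12, 1e-12)

    # Case 3: reference exactly zero -> absolute tolerance only
    a = 1.0e-13
    ott.assert_almost_equal(a, 0.0, 0.0, 1e-12)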