In C# / .NET, the expression String.Format("{0:R}", 0.1 * 199) yields 19.900000000000002.

Because it's a floating-point number, I obviously never expect to get an exact "19.9" result. However, from my tests, the error always seems to be positive, never negative. That is, my result is always just a tiny bit larger than it should be, never just a tiny bit smaller.

Can I always count on that behavior? Or am I just doing the wrong tests?

(I assume this is a language-agnostic principle, not exclusive to C# / .NET)
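
Here's a minimal program that reproduces what I'm seeing (the printed digits are what I get; they could conceivably differ slightly between .NET versions):

    using System;

    class Repro
    {
        static void Main()
        {
            // 0.1 has no exact binary representation, so the stored value
            // is slightly off and the product carries a small error.
            Console.WriteLine(String.Format("{0:R}", 0.1 * 199)); // 19.900000000000002
            Console.WriteLine(String.Format("{0:R}", 0.1 * 3));   // 0.30000000000000004
        }
    }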

asked Apr 5, 2018 at 8:36
  • No, you can't count on it, and yes, you are doing the wrong tests, or not enough of them. But it might help to simply read a little more about floating point, and rounding in particular. That is much better than a trial-and-error way of learning. Commented Apr 5, 2018 at 15:54
  • Nope Commented Apr 6, 2018 at 5:48

3 Answers


Floating-point arithmetic has multiple rounding behaviors. The most common default behavior is to round the exact mathematical result to the nearest representable value and, in case of a tie, to round to the value with an even low digit.

The other rounding behaviors defined by the IEEE-754 floating-point standard are:

  • Round toward zero.
  • Round up (toward +∞).
  • Round down (toward −∞).
  • Round to the nearest representable value, but, in case of a tie, round away from zero.

As you can see, none of these are "round away from zero," so it is not a standardized rounding behavior and you are unlikely to find it in common hardware or software. However, the Wikipedia page on rounding discusses it and other behaviors.

Although the common default is round-to-nearest ties-to-even, this is not language agnostic. Each programming language and/or each computing platform may choose what rounding behaviors it makes available and which one is the default.
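
A rough way to see the "nearest representable value" part in C# (this assumes Math.BitDecrement, available in .NET Core 3.0 and later; the printed digits are the shortest round-trip forms):

    using System;

    class NearestDemo
    {
        static void Main()
        {
            double product = 0.1 * 199;          // one multiply, rounded once

            // The representable double immediately below the result.
            double below = Math.BitDecrement(product);

            Console.WriteLine(below.ToString("R"));   // 19.9  (the double closest to 19.9)
            Console.WriteLine(product.ToString("R")); // 19.900000000000002

            // The exact product of the stored operands lies between these two
            // doubles and happens to be closer to the upper one, so
            // round-to-nearest picks 19.900000000000002.
        }
    }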

answered Apr 5, 2018 at 15:46


Any single operation produces the floating-point value closest to its exact result, which can land above or below it; try 0.1 + 0.7.
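
A quick sketch of that case, showing the rounded result can land below the exact value (i.e. the error in the question is not always positive):

    using System;

    class BelowExact
    {
        static void Main()
        {
            double sum = 0.1 + 0.7;

            Console.WriteLine(sum.ToString("R")); // 0.7999999999999999
            Console.WriteLine(sum < 0.8);         // True: here the result is a tiny bit too small
        }
    }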

answered Apr 5, 2018 at 8:48


It's not strictly language-agnostic, but in practice it nearly is. In C#, the only rounding mode the runtime supports is roundTiesToEven (the IEEE 754 term). Any other mode requires ugly hacks, such as unsafely switching the processor's rounding mode, or specialized libraries. C# is not alone in this; the same is true for Java, Python, and many others. (That isn't necessarily true for Decimal, which has its own specifics.) In fact, fewer languages explicitly define rounding-mode control (as C and C++ do) than are "unaware" of it.

For other details, I would second the answers by @Ry and @EricPostpischil. The 0.1 + 0.7 example is a good one for how double values are rounded: it shows a case where roundTiesToEven rounds the result downward, toward zero.

Speaking of C#, one should also note that 32-bit x86 C# has its own peculiarities tied to internal FPU processing, which can use wider precision for intermediate values. One example is that 0.1 + 0.2 converted to Single compares unequal to the same expression kept as a Double in the FPU, and the same effect can be reproduced in C.
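
A sketch of the kind of comparison being described (the Single vs. Double mismatch shows up on any platform; the extra wrinkle on 32-bit x86 is that the Double side may be held at wider x87 precision inside the FPU, which is harder to demonstrate portably):

    using System;

    class SingleVsDouble
    {
        static void Main()
        {
            double d = 0.1 + 0.2;           // kept as Double
            float  f = (float)(0.1 + 0.2);  // same expression, narrowed to Single

            Console.WriteLine(d.ToString("R")); // 0.30000000000000004
            Console.WriteLine(f.ToString("R")); // 0.3
            Console.WriteLine(d == f);          // False: f lost bits in the narrowing
        }
    }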

answered Apr 6, 2018 at 15:21
