If you've played Monopoly, you'll know about the Bank Error in Your Favor
card in the Community Chest. Remember this?
A bank error in your favor? Sweet! But what if the bank makes an error in its favor? Surely that's just as possible, right?
I'm here to tell you that if you're doing everyday financial calculations (nothing fancy, just involving money that you care about) using binary floating-point numbers, something might quietly be going wrong. Let's see how binary floating-point numbers can yield bank errors in your favor, or in the bank's.
In a wonderful paper on decimal floating-point numbers, Mike Cowlishaw gives the example of calculating a 5% tax on a $0.70 phone call: the exact answer, 0.735, should round up to $0.74, but computed in binary floating point it comes out as $0.73.
Here's how you can reproduce that in JavaScript:
(1.05 * 0.7).toPrecision(2);
// 0.73
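To see why the rounding goes the wrong way, it helps to print a few more digits of the product. (The digits below are what I expect from IEEE 754 double precision; the essential point is that the value lands just below 0.735.)
(1.05 * 0.7).toPrecision(21);
// 0.734999999999999986677
// Just under 0.735, so rounding to two significant digits gives 0.73 rather than 0.74.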
Some programmers might not be aware of this, but many are. By pointing it out I'm not trying to be a smartypants who knows something you don't. For me, this example illustrates just how easily this sort of error can creep into everyday calculations.
For programmers who are aware of the issue, one typical approach to dealing with it is this: work exclusively in the smallest unit of the currency, so that amounts are always integers. (Some currencies don't have sub-units at all. If that's your problem domain, you can kick back and be glad that you don't need to engage in the following sorts of headaches.) For instance, when working with US dollars or euros, this approach mandates that one never works with dollars and cents, but only with cents. In this setting, dollars exist only as an abstraction on top of cents. As far as possible, calculations never use floats. But if a floating-point number threatens to come up, some form of rounding is used.
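To make that concrete, here's a minimal sketch of the cents-only approach applied to the phone-call example; the variable names and the basis-points encoding of the tax rate are just illustrative choices, not a prescribed convention:
// Work in integer cents, and represent the 5% tax rate as an integer too
// (here in basis points, i.e. hundredths of a percent).
const priceInCents = 70;       // $0.70
const taxInBasisPoints = 500;  // 5%

// 70 * 10500 = 735000; dividing by 10000 gives exactly 73.5, and rounding
// half-up yields the 74 cents we expect.
const totalInCents = Math.round(priceInCents * (10000 + taxInBasisPoints) / 10000);

console.log(totalInCents);                    // 74
console.log((totalInCents / 100).toFixed(2)); // "0.74" (a float appears only at the display edge)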
Another approach is to delegate financial calculations to an external system, such as a relational database, that natively supports proper decimal arithmetic. One difficulty is that even then, a value that flows back into your program, even a value that can be trusted, may become tainted simply by being imported into a language that doesn't properly support decimals. If, for instance, the result of a calculation done in, say, Postgres, is exactly 0.1, and that value flows into your JavaScript program as a number, you may well end up dealing with a contaminated value:
(0.1).toPrecision(25)
// 0.1000000000000000055511151
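One way to reduce the risk (a sketch, not a full solution) is to avoid letting the value become a JavaScript number in the first place: many database drivers can return NUMERIC columns as strings, and a string can be converted straight to integer cents. The helper below is hypothetical and assumes a plain, non-negative amount with at most two decimal places:
// Hypothetical helper: turn a decimal string like "0.10" into integer cents
// without ever parsing it as a binary float.
function centsFromDecimalString(amount) {
  const [whole, fraction = ""] = amount.split(".");
  return BigInt(whole) * 100n + BigInt(fraction.padEnd(2, "0").slice(0, 2));
}

centsFromDecimalString("0.1");   // 10n
centsFromDecimalString("19.99"); // 1999n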
The toPrecision example above, admittedly, requires quite a lot of decimal digits (the first stray digit doesn't appear until the 18th place!) before the ugly reality of the situation rears its head. The reality is that 0.1 does not, and cannot, have an exact representation in binary. The earlier example with the cost of a phone call is there to raise your awareness that one doesn't always need to go that far before the weirdness shows up: there, it surfaces in the second decimal place, the cents.
There are all sorts of examples of this; in fact, it's the rule rather than the exception. A decimal fraction has an exact binary representation only if, written in lowest terms, its denominator is a power of two. Of the numbers 0.1, 0.2, …, 0.9, only 0.5 passes that test.
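You can check this from the console: print each of those nine numbers with extra precision and see which one comes back clean.
// Print 0.1 through 0.9 with 25 significant digits. Only 0.5 (denominator 2,
// a power of two) prints cleanly; the rest reveal the binary approximation
// hiding underneath.
for (let i = 1; i <= 9; i++) {
  console.log((i / 10).toPrecision(25));
}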
Next time you look at a bank statement, or a bill where some tax is calculated, I invite you to ask how that was calculated. Are they using decimals, or floats? Is it correct?
I'm working on the decimal proposal for TC39 to try to work out what it might be like to add proper decimal numbers to JavaScript. There are a few very interesting degrees of freedom in the design space (such as the precise datatype to be used to represent these kinds of numbers), but I'm optimistic that a reasonable path forward exists, that consensus among JS programmers and JS engine implementors can be found, and that it can eventually be implemented. If you're interested in these issues, check out the README in the proposal and get in touch!
Jesse Alama