Because computers are finite machines, they work with a finite range of numbers: those that can be represented straightforwardly as fixed-length sequences of bits, usually 32 or 64 of them. With only 32 or 64 bits, there are only so many numbers you can represent, whether we’re talking about integers or decimals. Integers, at least, can be represented exactly within that finite domain. Decimals are worse off: with only a fixed number of bits to work with, most decimal numbers cannot be represented exactly. This leads to headaches in all sorts of contexts where decimals arise, such as finance, science, engineering, and machine learning.
Why? It has to do with our use of base 10 and the computer’s use of base 2. Math strikes again! The exactness of decimal numbers isn’t an abstruse, edge-casey problem that some mathematician dreamed up to poke fun at programmers who aren’t blessed to work in an infinite domain. Consider a simple example. Fire up your favorite JavaScript engine and evaluate this:
1 + 2 === 3
You should get true. Duh. But take that example and work it with decimals:
0.1 + 0.2 === 0.3
You’ll get false.
How can that be? Is floating-point math broken in JavaScript? Short answer: yes, it is. But if it’s any consolation, it’s not just JavaScript that’s broken in this regard. You’ll get the same result in all sorts of other languages. This isn’t wat. This is the unavoidable burden we programmers bear when dealing with decimal numbers on machines with limited precision.
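To see what’s going on, evaluate the sum by itself:
0.1 + 0.2 // 0.30000000000000004
The result is a hair more than 0.3, because neither 0.1 nor 0.2 can be represented exactly in binary floating point, and the error survives the addition.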
Maybe you’re thinking: OK, but if that’s right, how in the world do decimal numbers get handled at all? Think of all the financial applications out there that must be doing the wrong thing countless times a day.
You’re quite right! One way of getting around oddities like the one above is by always rounding results to a fixed number of digits. Another is by handling decimal numbers as strings (sequences of digits). You would then define operations such as addition, multiplication, and equality by doing elementary school math, digit by digit (or, rather, character by character).
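Here’s a minimal sketch of the string-based approach: adding two non-negative decimal strings digit by digit, the way you would on paper. (The function name and shape are my own illustration, not any particular library’s API; signs, input validation, and the other operations are left out.)
// Add two non-negative decimal strings, e.g. "0.1" and "0.2".
function addDecimalStrings(a, b) {
  // Split each number into integer and fractional parts.
  const [aInt, aFrac = ""] = a.split(".");
  const [bInt, bFrac = ""] = b.split(".");
  // Pad so both numbers have the same number of digits on each side.
  const fracLen = Math.max(aFrac.length, bFrac.length);
  const intLen = Math.max(aInt.length, bInt.length);
  const x = aInt.padStart(intLen, "0") + aFrac.padEnd(fracLen, "0");
  const y = bInt.padStart(intLen, "0") + bFrac.padEnd(fracLen, "0");
  // Elementary school addition: right to left, carrying as we go.
  let carry = 0;
  const digits = [];
  for (let i = x.length - 1; i >= 0; i--) {
    const sum = Number(x[i]) + Number(y[i]) + carry;
    digits.unshift(sum % 10);
    carry = Math.floor(sum / 10);
  }
  if (carry > 0) digits.unshift(carry);
  // Reinsert the decimal point.
  const s = digits.join("");
  const intPart = s.slice(0, s.length - fracLen) || "0";
  const fracPart = s.slice(s.length - fracLen);
  return fracLen > 0 ? `${intPart}.${fracPart}` : intPart;
}

addDecimalStrings("0.1", "0.2") === "0.3" // true
A real implementation would also need subtraction, multiplication, division, normalization of trailing zeros, and so on, which is exactly the sort of thing a library takes care of for you.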
Numbers in JavaScript are specified to be IEEE 754 double-precision floating-point numbers. A consequence of this is that 0.1 + 0.2 will never be 0.3 (in the sense of JavaScript’s === operator). So what can be done?
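You can see the binary approximations hiding behind these literals by asking for more digits than JavaScript normally prints:
(0.1).toFixed(20) // "0.10000000000000000555"
(0.3).toFixed(20) // "0.29999999999999998890"
Neither 0.1 nor 0.3 is exactly representable in base 2, and the sum of the approximations of 0.1 and 0.2 lands on a different double than the approximation of 0.3.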
There’s an npm library out there, decimal.js, that provides support for arbitrary-precision decimals, and there are other libraries with similar or equivalent functionality.
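With decimal.js, for instance, the earlier check comes out the way arithmetic says it should, because values are constructed from strings and never pass through binary floating point:
import Decimal from "decimal.js";

new Decimal("0.1").plus("0.2").equals("0.3") // true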
As you might imagine, the issue under discussion is an old one, and library-based workarounds like these have been around for a long time.
But what about extending the JavaScript language itself, so that 0.1 + 0.2 === 0.3 comes out true? Can we make JavaScript work with decimals correctly, without using a library?
Yes, we can.
It’s worth thinking about a similar issue that also arises from the finiteness of our machines: arbitrarily large integers in JavaScript. Out of the box, JavaScript didn’t support them. Its numbers are 64-bit floats, which can represent integers exactly only up to 2 ** 53 - 1 (Number.MAX_SAFE_INTEGER). Even though that’s a big range, it’s still, of course, limited. BigInt, a proposal to extend JS with arbitrarily large integers, reached Stage 4 in 2019, so it should be available in pretty much every JavaScript engine you can find. Go ahead and fire up Node or open your browser’s inspector and plug in the number of nanoseconds since the Big Bang:
13_787_000_000n // years
* 365n // days
* 24n // hours
* 60n // minutes
* 60n // seconds
* 1000n // milliseconds
* 1000n // microseconds
* 1000n // nanoseconds
(Not a scientician. May not be true. Not intended to be a factual claim.)
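Evaluate that and you get 434_786_832_000_000_000_000_000_000n, a 27-digit number far beyond Number.MAX_SAFE_INTEGER (9_007_199_254_740_991), computed exactly, with no rounding anywhere.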
OK, enough about big integers. What about adding support for arbitrary-precision decimals in JavaScript, or at least high-precision decimals? As we saw above, we don’t even need to rack our brains thinking up complicated scenarios where a ton of digits after the decimal point are needed. Just look at 0.1 + 0.2 = 0.3. That’s pretty low-precision, and it still doesn’t work. Is there anything analogous to BigInt for non-integer decimal numbers? Not as a library; we already discussed that. Can we add it to the language, so that, out of the box, with no third-party library, we can work with decimals?
The answer is yes. Work is proceeding on this matter, but things remain unsettled. The relevant proposal is BigDecimal. I’ll be working on this for a while; I want to get big decimals into JavaScript. There are all sorts of issues to resolve, but they’re definitely resolvable. We have experience with arbitrary-precision arithmetic in other languages. It can be done.
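To give a flavor of where this could go, here’s the kind of thing the proposal is aiming at, using the m suffix for decimal literals that has been discussed (the syntax and semantics are still very much subject to change):
0.1m + 0.2m === 0.3m // true: decimal arithmetic, not binary floating point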
So yes, floating-point math is broken in JavaScript, but help is on the way. You’ll see more from me here as I tackle this interesting problem; stay tuned!
Jesse Alama