Use cases for decimals in JavaScript

Here, I'd like to sketch out some of the use cases that have been discussed for adding decimals to JavaScript.

Just to set up some of the terminology and head off a potential misunderstanding: when we say adding decimals to JavaScript, we are of course referring to something new in the language. In a sense, decimals already exist in JavaScript as numbers, and are expressed as literals in the language like so: 1.234, -42.9876, and so on. Syntactically, one could call those decimals, but it would be more accurate to say that those literals work out to binary floating-point numbers (specifically, 64-bit IEEE 754 doubles, the single number type JavaScript uses regardless of the underlying machine architecture). When we talk about decimals here, we're not talking about binary floating-point numbers as they currently exist in JS. We mean something new.
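To see the binary nature of today's number literals, it's enough to print a few extra digits (a quick illustration in ordinary JavaScript):

```javascript
// The literal 0.1 is stored as the nearest 64-bit binary double,
// which is not exactly one tenth; extra digits expose the approximation.
console.log((0.1).toFixed(20)); // not "0.10000000000000000000"
console.log(0.5 === 1 / 2);     // true: 0.5 is a power of two, so it IS exact
```

Decimal fractions whose denominators are powers of two (0.5, 0.25, 0.125, ...) are the exception; everything else is approximated.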

In fact, there are two distinct data models under discussion: 128-bit decimal floating-point numbers (Decimal128, for short) and arbitrary-precision decimals (BigDecimal, for lack of a better term). In the first case, decimals would retain their floating-point nature but would (1) represent decimal values exactly (up to 34 significant digits), and (2) offer considerably more precision than you get even with 64-bit binary floating-point numbers. For an analogy, think of how going from IPv4 (32-bit addresses) to IPv6 (128-bit addresses) does vastly more than quadruple the addressable part of the Internet. Of course, the total address space afforded by IPv6 is still finite, so one could say that IPv6 still isn't enough, but from where I'm sitting, today, IPv6 offers a lot of breathing room for growth.

Also, just to clarify: there is already support for decimals in various libraries. So when we say adding support, we mean more than just "embedding" one of these libraries into the language. What we mean is extending JavaScript so that decimals are supported out of the box, without any third-party libraries.

Money

A natural use case for decimals that actually work is money. There are two needs here: exactness, and support for high precision (that is, proper support for more than two or four digits after the decimal point).

Computing taxes is another case where, even with simple examples, one quickly gets into trouble. One typically needs to sum a bunch of decimal numbers (the items of an order), multiply the sum by a tax rate (0.05, 0.07, 0.19, you name it), and get a grand total.
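A minimal sketch of the problem in today's JavaScript (the item prices and the 5% rate are made up for illustration):

```javascript
// Sum two line items and apply a 5% tax rate using binary doubles.
const items = [1.10, 2.20];
const subtotal = items.reduce((a, b) => a + b, 0);
console.log(subtotal === 3.30); // false: the sum is 3.3000000000000003
const total = subtotal * 1.05;  // the representation error propagates into the tax

// A common workaround today: do the arithmetic in integer cents.
const cents = items.reduce((a, b) => a + Math.round(b * 100), 0);
console.log(cents); // 330
```

The integer-cents trick works, but it pushes bookkeeping onto the programmer, and it breaks down as soon as sub-cent precision is needed.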

Exactness

We have already seen how 0.1 + 0.2 isn't 0.3. When working with money, this should be a strong clue that we need to do something to get things to work out as we need them to. Somehow it feels wrong to resort to fallbacks like rounding, or working with numbers as digit strings (!) rather than actual numbers. We want to work with numbers as numbers, where less-than and equality work as we expect them to work in arithmetic.
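Concretely, in any JavaScript engine today:

```javascript
// Binary doubles cannot represent 0.1, 0.2, or 0.3 exactly,
// so both arithmetic and comparisons are slightly off.
console.log(0.1 + 0.2);         // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3); // false
console.log(0.1 + 0.2 > 0.3);   // true
```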

These examples show that the issue of exactness is independent of support for high precision. In the examples above, of which countless similar ones can be generated, there is just one digit after the decimal point!

High-precision

In most currencies, there's a base unit (say, a dollar) that is typically (though not always!) divided into subunits (cents). The subunits are often defined in terms of decimals (a cent is, by definition, 1/100 of a dollar). Some calculations are done with 3 or even 4 digits after the decimal point (think: fractions of a cent), but probably not (much) more than that.

But that's "classical" money. In many cryptocurrencies, a single unit is divided into millions of subunits, possibly even billions or trillions (or more!). Here, we actually start brushing up against the bare possibility of even representing such fine divisions of a unit as a number at all. And even if these numbers could be represented exactly as floating-point numbers, we are likely asking for trouble when we do arithmetic with them, even more so than with "low-digit" examples like 0.1, 0.2, and so on.
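To get a feel for the numbers involved: Ethereum, for example, divides one ether into 10^18 wei, which is already past the point where today's JavaScript numbers can count exactly (a quick sketch of the failure):

```javascript
// Doubles represent every integer exactly only up to 2^53 - 1.
const weiPerEther = 1e18; // 10^18 subunits per unit
console.log(Number.MAX_SAFE_INTEGER);         // 9007199254740991, about 9 * 10^15
console.log(weiPerEther + 1 === weiPerEther); // true: adding 1 wei is silently lost
```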

Medicine and science

Think of all the data that gets recorded around the world. In airplanes, in weather stations, in satellites. The list is endless. And surely some of these measurements are done with high-precision decimal numbers. Floating-point numbers might well be good enough, but high-precision (or arbitrary-precision) decimals might be even better. Think of bioinformatics or medical systems.

Unit conversions

Converting between different units is a context where a lot of decimal places occur quite naturally.
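Even a conversion factor as simple as 0.1 drifts when accumulated with binary doubles (a small illustration):

```javascript
// Accumulating tenths of a unit ten times should give exactly 1.
let total = 0;
for (let i = 0; i < 10; i++) total += 0.1;
console.log(total);       // 0.9999999999999999
console.log(total === 1); // false
```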

Handling data from other systems

Imagine ingesting data from a database that works with high-precision or arbitrary-precision decimals. If you import those kinds of numbers into JavaScript as floats, you may well be asking for trouble by accidentally rounding your results into coarser-grained data. In some applications, maybe that rounding doesn't matter much. But if the original DB was set up with high-precision or arbitrary-precision decimals, presumably that level of precision reflects a real need that your JS program can't respect. (Not without reaching for a third-party library to do the work.)
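For instance, suppose a DECIMAL value arrives from the database driver as a string (the value below is made up for illustration):

```javascript
// Coercing a high-precision decimal string to a Number silently
// rounds it to the nearest 64-bit binary double.
const fromDb = "0.1234567890123456789"; // 19 significant digits
const asNumber = Number(fromDb);
console.log(String(asNumber) === fromDb); // false: digits were lost
```

Doubles round-trip at most 17 significant decimal digits, so anything beyond that is silently discarded the moment the string becomes a Number.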

As an example, SQL databases commonly offer DECIMAL (or NUMERIC) column types with a user-specified precision and scale; reading such columns into plain JavaScript numbers silently discards precision that the schema was designed to guarantee.