If you’re interested in decimal arithmetic in computers, you’ve got to check out Mike Cowlishaw’s FAQ on the subject. There’s a ton of insight to be had there. If you like the kind of writing that makes you feel smarter as you read it, this one is worth your time.
For context: Cowlishaw was the editor of the 2008 revision of the IEEE 754 standard, which merged and superseded the 754-1985 and 854-1987 standards. His words thus carry a lot of authority, and it would be quite unwise to ignore Mike in these matters.
If you prefer similar information in article form, take a look at Mike’s Decimal Floating-Point: Algorism for Computers. (Note the delightful use of algorism. Yes, it’s a word.)
The FAQ focuses mainly on floating-point decimal arithmetic, not arbitrary-precision decimal arithmetic (which is what one might immediately think of upon hearing decimal arithmetic). Arbitrary-precision decimal arithmetic is a whole other ball of wax. In that setting, we’re talking about sequences of decimal digits whose length cannot be specified in advance. Proposals such as decimal128 are about a fixed bit width (128 bits), which allows for a lot of precision, but not arbitrary precision.
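The distinction is easy to see with Python’s standard-library `decimal` module, which implements Cowlishaw’s decimal arithmetic. A minimal sketch: the working precision is whatever we ask for (arbitrary precision), whereas a decimal128-style context is capped at 34 significant digits, the fixed precision that 128 bits afford.

```python
from decimal import Decimal, Context, getcontext

# Arbitrary precision: ask for 100 significant digits and you get them.
getcontext().prec = 100
third = Decimal(1) / Decimal(3)
print(third)  # "0." followed by one hundred 3s

# A decimal128-style context: 34 significant digits, exponent range
# per IEEE 754-2008 (Emax = 6144, Emin = -6143).
d128 = Context(prec=34, Emax=6144, Emin=-6143)
print(d128.divide(Decimal(1), Decimal(3)))  # "0." followed by thirty-four 3s
```

However many bits the fixed format spends, the precision it buys is large but bounded; arbitrary-precision arithmetic has no such ceiling.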
One crucial insight I take away from Mike’s FAQ (a real misunderstanding on my part, which is a bit embarrassing to admit) is that decimal128 is not just a 128-bit version of the same old binary floating-point arithmetic we all know (and might find broken). It’s not as though adding more bits meets the demands of those who want high-precision arithmetic. No! Although decimal128 is a fixed-width encoding of 128 bits, the underlying encoding is decimal, not binary. Simply adding bits won’t unbreak busted floating-point arithmetic; some new ideas are needed, and decimal128 is a way forward. It is a relatively new format that addresses the use cases that motivate decimal arithmetic in the first place: business, finance, accounting, and anything else that works with human decimal numbers. What probably led to my confusion is the assumption that the adjective floating-point, regardless of what it modifies, must signal some variation of binary floating-point arithmetic.
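The classic “broken” case makes the point concrete. A quick sketch, again with Python’s `decimal` module standing in for a decimal encoding: 0.1 has no exact binary representation at any width, but it is exact in decimal, so the familiar surprise simply disappears.

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, no matter the width.
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# A decimal encoding represents 0.1 exactly, so the surprise goes away.
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```

The fix here isn’t more bits; it’s the change of radix.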