Decimal arithmetic in Python
As part of the project of exploring how decimal numbers could be added to JavaScript, I'd like to take a step back and look at how other languages support decimals (or not). Many languages do support decimal numbers, and it may be useful to understand the range of options out there for supporting them. For instance, what kind of data model do they use? What are the limits (if there are any)? Does the language include any special syntax for decimals?
Here, I'd like to briefly summarize what Python has done.
Does Python support decimals?
Python supports decimal arithmetic. The functionality is part of the standard library. Decimals aren't available out-of-the-box, in the sense that not every Python program, regardless of what it imports, can start working with decimals; there is no decimal literal syntax in the language. That said, all one needs to do is from decimal import * (or, more hygienically, from decimal import Decimal) and you're ready to rock.
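To make that concrete, here's a minimal sketch of getting started. Constructing from strings is the usual idiom, since constructing from a float literal faithfully reproduces the float's binary rounding error:

```python
from decimal import Decimal

# Construct from strings to get exactly the digits you wrote:
Decimal("0.1")        # Decimal('0.1')

# Constructing from a float exposes its exact binary value:
Decimal(0.1)          # Decimal('0.1000000000000000055511151231257827021181583404541015625')

# Arithmetic on Decimal values is exact decimal arithmetic:
Decimal("19.99") * 3  # Decimal('59.97')
```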
Decimals have been part of the Python standard library for a long time: they were added in version 2.4, released in November 2004. Python does have a process for proposing extensions to the language, called PEP (Python Enhancement Proposal), and extensive discussions took place on the official mailing lists. Python decimals were formalized in PEP 327.
The decimal library provides access to some of the internals of decimal arithmetic, called the context. In the context, one can specify, for instance, the number of significant decimal digits that should be available when operations are carried out. One can also forbid mixing of decimal values with (binary) floating-point numbers; mixing with integers, by contrast, is always permitted.
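Here's a small sketch of tweaking the context: lowering the working precision, then trapping the FloatOperation signal so that floats can no longer silently flow into decimal code:

```python
from decimal import Decimal, getcontext, FloatOperation

ctx = getcontext()
ctx.prec = 6                    # results carry at most six significant digits
print(Decimal(1) / Decimal(3))  # 0.333333

# With the FloatOperation trap set, constructing a Decimal from a float
# (or comparing the two) raises instead of silently converting:
ctx.traps[FloatOperation] = True
Decimal(0.1)                    # raises decimal.FloatOperation
```

(Arithmetic between Decimal and float, such as Decimal("1") + 0.5, is a TypeError regardless of the trap.)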
In general, the Python implementation aims to be an implementation of the General Decimal Arithmetic Specification. In particular, using this data model, it is possible to distinguish the digit strings 1.2 and 1.20, considered as decimal values, as mathematically equal but nonetheless distinct values.
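For instance:

```python
from decimal import Decimal

a, b = Decimal("1.2"), Decimal("1.20")
a == b              # True: the two are mathematically equal ...
str(a), str(b)      # ('1.2', '1.20'): ... yet the trailing zero is preserved
a.compare_total(b)  # Decimal('1'): the total ordering distinguishes them
```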
Aside: How does this compare with Decimal128, one of the contender data models for decimals in JavaScript? Since Python's decimal feature is an implementation of the General Decimal Arithmetic Specification, it works with a sort of generalized IEEE 754 Decimal. No bit width is specified, so Python decimals are not literally the same as Decimal128. However, one can suitably parameterize Python's decimal to get something essentially equivalent to Decimal128 (a sketch follows the list):
- specify the minimum and maximum (adjusted) exponent as -6143 and 6144, respectively (the defaults are -999999 and 999999, respectively)
- specify the precision to 34 (default is 28)
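Concretely, and with the caveat that Python ships no ready-made Decimal128 context (the parameter values below are supplied by hand, as an assumption based on the list above):

```python
import decimal

# An approximation of IEEE 754 Decimal128, assembled by hand;
# clamp=1 mimics the exponent clamping of IEEE interchange formats.
d128 = decimal.Context(prec=34, Emin=-6143, Emax=6144,
                       rounding=decimal.ROUND_HALF_EVEN, clamp=1)

with decimal.localcontext(d128):
    print(decimal.Decimal(1) / decimal.Decimal(3))
    # 0.3333333333333333333333333333333333  (34 significant digits)
```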
API for Python decimals
Here are the supported mathematical functions (a short demo follows the list):
- basic arithmetic: addition, subtraction, multiplication, division
- natural exponentiation and log (e^x, ln(x))
- log base 10
- a^b (two-argument exponentiation; the exponent need not be an integer, though only the three-argument, modular form requires integral arguments)
- step up/down (1.451 → 1.450, 1.452)
- square root
- fused multiply-and-add (a*b + c)
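A quick tour of these operations, with the working precision lowered to 4 so the outputs stay short (and so the step-up/down example above comes out as advertised):

```python
from decimal import Decimal, getcontext

getcontext().prec = 4          # four significant digits, for brevity

Decimal(2).sqrt()              # Decimal('1.414')
Decimal(1).exp()               # Decimal('2.718')
Decimal(10).ln()               # Decimal('2.303')
Decimal(1000).log10()          # Decimal('3')
Decimal(2) ** 10               # Decimal('1024')
Decimal("1.451").next_minus()  # Decimal('1.450'): step down
Decimal("1.451").next_plus()   # Decimal('1.452'): step up
Decimal(2).fma(3, 1)           # Decimal('7'): 2*3 + 1 with a single rounding
```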
As mentioned above, the data model for Python decimals allows for unnormalized values (distinct members of a cohort, such as 1.2 and 1.20), but one can always normalize a value (remove the trailing zeros). (This isn't exactly a mathematical function, since distinct members of a cohort are mathematically equal.)
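For example:

```python
from decimal import Decimal

Decimal("1.200").normalize()   # Decimal('1.2'): trailing zeros removed
Decimal("120.00").normalize()  # Decimal('1.2E+2'): ditto, in exponent form

# Normalizing stays within the cohort, so equality is preserved:
Decimal("1.200") == Decimal("1.200").normalize()  # True
```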
In Python, the Decimal type overloads the basic arithmetic operators. Thus, +, *, **, and so on produce correct decimal results when given decimal arguments. (There is some possibility for something roughly similar in JavaScript, but that discussion has been paused.)
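The overloaded operators are what make the textbook float pitfall go away:

```python
from decimal import Decimal

0.1 + 0.2 == 0.3                                   # False, with binary floats
Decimal("0.1") + Decimal("0.2") == Decimal("0.3")  # True

Decimal("1.5") ** 2                                # Decimal('2.25')
```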
Trigonometric functions are not provided. (These functions belong to the "optional" part of the IEEE 754 specification.)