This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: decimal.py: to_integral() corner cases
Type: behavior Stage:
Components: Library (Lib) Versions: Python 3.3
process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: Nosy List: mark.dickinson, skrah
Priority: normal Keywords:

Created on 2011-02-05 13:06 by skrah, last changed 2022-04-11 14:57 by admin.

Messages (2)
msg127986 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2011-02-05 13:06
Hi,

to_integral() should behave like quantize() for negative exponents:

"Otherwise (the operand has a negative exponent) the result is the
 same as using the quantize operation using the given operand as the
 left-hand-operand, 1E+0 as the right-hand-operand, and the precision
 of the operand as the precision setting. The rounding mode is taken
 from the context, as usual."
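
The quantize-based definition above can be sketched in Python. The helper name to_integral_spec is made up for illustration; Context.quantize, Context.copy and Context.to_integral are real decimal APIs:

```python
from decimal import Context, Decimal

def to_integral_spec(x, ctx):
    # Hypothetical helper (not a real decimal API): round-to-integral as
    # the spec literally defines it for negative exponents, i.e.
    # quantize(x, 1E+0) with the precision of the operand as the
    # precision setting.
    if not x.is_finite() or x.as_tuple().exponent >= 0:
        # Specials and already-integral operands: same as to_integral.
        return ctx.to_integral(x)
    tmp = ctx.copy()
    tmp.prec = len(x.as_tuple().digits)  # digits in the coefficient
    return tmp.quantize(x, Decimal("1E+0"))

c = Context(prec=1, Emin=-1, Emax=1, traps=[])
print(c.to_integral(Decimal("999.9")))        # Python: 1000
print(to_integral_spec(Decimal("999.9"), c))  # per spec: NaN (Emax exceeded)
```

With a wider exponent range (Emax=3) the two definitions agree again, which is exactly the second pair of contexts shown below.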


There are some corner cases where this matters:


>>> from decimal import *
>>> c = Context(prec=1, Emin=-1, Emax=1, traps=[])
>>> d = Context(prec=4, Emin=-1, Emax=1, traps=[])
>>> 
>>> c.to_integral(Decimal("999.9"))
Decimal('1000')
>>> d.quantize(Decimal("999.9"), Decimal("1e0"))
Decimal('NaN')


Indeed, decNumber returns NaN for to_integral(). This is an odd
situation: the result is allowed to exceed the precision of the
context, but not its Emax:


>>> c = Context(prec=3, Emin=-3, Emax=3, traps=[])
>>> d = Context(prec=4, Emin=-3, Emax=3, traps=[])
>>> c.to_integral(Decimal("999.9"))
Decimal('1000')
>>> d.quantize(Decimal("999.9"), Decimal("1e0"))
Decimal('1000')


The specification is on the side of decNumber, but I wonder if this is
an oversight.
msg128106 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2011-02-07 11:02
For the record, I prefer Python's behavior. The quantize() definition
does not work well for arbitrary precision input and leads to situations
like:


Precision: 1
Maxexponent: 1
Minexponent: -1

tointegral  101  ->  101
tointegral  101.0  ->  NaN  Invalid_operation
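
The asymmetry above can be reproduced in Python by applying the quantize-based definition by hand (the temporary context and its prec value follow the spec's wording; Python's own to_integral returns 101 in both cases):

```python
from decimal import Context, Decimal

ctx = Context(prec=1, Emin=-1, Emax=1, traps=[])

# "101" has a non-negative exponent, so to-integral is just a copy:
print(ctx.to_integral(Decimal("101")))                   # 101

# "101.0": the spec prescribes quantize with prec = 4 (the number of
# digits in the operand).  The result 101 has adjusted exponent 2 > Emax,
# so quantize signals Invalid_operation and returns NaN:
tmp = ctx.copy()
tmp.prec = 4
print(tmp.quantize(Decimal("101.0"), Decimal("1E+0")))   # NaN

# Python's to_integral ignores Emax here and returns the exact result:
print(ctx.to_integral(Decimal("101.0")))                 # 101
```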


A comment in tointegral.decTest suggests that the to-integral definition 
was modeled after IEEE 854 and 754r:

-- This set of tests tests the extended specification 'round-to-integral
-- value' operation (from IEEE 854, later modified in 754r).
-- All non-zero results are defined as being those from either copy or
-- quantize, so those are assumed to have been tested.
-- Note that 754r requires that Inexact not be set, and we similarly
-- assume Rounded is not set.


This definition of course works fine as long as the input does not
have more than 'precision' digits and Emax is sufficiently large. I
think that for arbitrary precision input the definition should read:


"Otherwise (the operand has a negative exponent) the result is the
 same as using the quantize operation using the given operand as the
 left-hand-operand and 1E+0 as the right-hand-operand. For the purpose
 of quantizing a temporary context is used with the precision of the
 operand as the precision setting, Emax >= prec and Emin <= -Emax. The
 rounding mode is taken from the context, as usual."
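
The amended definition can be sketched as follows (the helper name to_integral_proposed is hypothetical; the temporary context uses Emax = prec and Emin = -Emax, which satisfies the proposed constraints):

```python
from decimal import Context, Decimal

def to_integral_proposed(x, ctx):
    # Hypothetical sketch of the amended definition: quantize in a
    # temporary context whose precision is the precision of the operand
    # and whose exponent range is wide enough never to overflow.
    if not x.is_finite() or x.as_tuple().exponent >= 0:
        return ctx.to_integral(x)
    prec = len(x.as_tuple().digits)   # precision of the operand
    tmp = Context(prec=prec, Emax=prec, Emin=-prec,
                  rounding=ctx.rounding, traps=[])
    return tmp.quantize(x, Decimal("1E+0"))

c = Context(prec=1, Emin=-1, Emax=1, traps=[])
print(to_integral_proposed(Decimal("101.0"), c))  # 101, no Invalid_operation
print(to_integral_proposed(Decimal("999.9"), c))  # 1000
```

Since rounding a d-digit operand to an integer yields at most d digits, the adjusted exponent of the result is at most prec - 1, so the widened Emax can never be exceeded and the NaN cases above disappear.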
History
Date                 User   Action  Args
2022-04-11 14:57:12  admin  set     github: 55337
2011-02-07 11:02:16  skrah  set     messages: + msg128106
2011-02-05 13:06:48  skrah  create