Message361599
Vedran, as Mark said, the result is defined to have no trailing zeroes. In general the module strives to return results "as if" infinite precision were used internally, not to actually _use_ infinite precision internally ;-) Given the same setup, e.g.,
>>> i * decimal.Decimal(0.5)
Decimal('2.0')
works fine.
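To make the "no trailing zeroes" point concrete, here's a small self-contained demo under the default context (the `i` from the original setup isn't needed for this part):

```python
import decimal

# An exact quotient is returned with no trailing zeroes - the result's
# exponent is the "ideal" exponent, per the spec:
q = decimal.Decimal(4) / decimal.Decimal(2)
print(repr(q))  # Decimal('2'), not Decimal('2.0')

# Multiplication, by contrast, keeps the operands' combined exponent,
# which is why a product like the one above displays as '2.0'.
p = decimal.Decimal(4) * decimal.Decimal('0.5')
print(repr(p))  # Decimal('2.0')
```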
This isn't purely academic. The `decimal` docs, at the end:
"""
Q. Is the CPython implementation fast for large numbers?
A. Yes. ...
However, to realize this performance gain, the context needs to be set for unrounded calculations.
>>> c = getcontext()
>>> c.prec = MAX_PREC
>>> c.Emax = MAX_EMAX
>>> c.Emin = MIN_EMIN
"""
I suggested this approach to someone on Stack Overflow who was trying to compute and write out the result of a multi-hundred-million-digit integer exponentiation. That worked fine, and was enormously faster than using CPython's bigints.
But then I noticed "trivial" calculations - like the one here - blowing up with MemoryError too. That made sense for, e.g., 1/7 (whose decimal expansion never terminates), but not for 1/2.
I haven't looked at the implementation. I assume it's trying to reserve space in advance for a result with MAX_PREC digits.
It's not limited to division; e.g.,
>>> c.sqrt(decimal.Decimal(4))
...
MemoryError
is also surprising.
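A hedged demonstration of the asymmetry (the "reserve MAX_PREC digits up front" explanation is my guess from the behavior, not from reading the source, and whether these calls actually raise MemoryError depends on the platform's address space):

```python
import decimal

c = decimal.Context(prec=decimal.MAX_PREC,
                    Emax=decimal.MAX_EMAX,
                    Emin=decimal.MIN_EMIN)

# Multiplication allocates only what the exact result needs, so it
# succeeds even at MAX_PREC precision.
assert c.multiply(decimal.Decimal(4), decimal.Decimal('0.5')) == 2

# Division and sqrt appear to size their work area for a full
# MAX_PREC-digit result, so even exact cases can die; on a 64-bit
# build both of these typically raise MemoryError.
for compute in (lambda: c.divide(1, 2),
                lambda: c.sqrt(decimal.Decimal(4))):
    try:
        print(compute())
    except MemoryError:
        print("MemoryError")
```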
Perhaps the only thing to be done is to add words to the part of the docs _recommending_ MAX_PREC, warning about some "unintended consequences" of doing so.
Date                | User       | Action | Args
2020-02-07 16:13:10 | tim.peters | set    | recipients: + tim.peters, mark.dickinson, skrah, veky, BTaskaya
2020-02-07 16:13:10 | tim.peters | set    | messageid: <1581091990.88.0.127125707265.issue39576@roundup.psfhosted.org>
2020-02-07 16:13:10 | tim.peters | link   | issue39576 messages
2020-02-07 16:13:10 | tim.peters | create |