
classification
Title: Surprising MemoryError in `decimal` with MAX_PREC
Type: enhancement
Stage: resolved
Components: Library (Lib)
Versions: Python 3.9, Python 3.8, Python 3.7

process
Status: closed
Resolution:
Dependencies:
Superseder:
Assigned To: skrah
Nosy List: BTaskaya, Michael.Felt, mark.dickinson, miss-islington, ned.deily, pablogsal, skrah, tim.peters, veky, vstinner
Priority: normal
Keywords:

Created on 2020-02-07 03:39 by tim.peters, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Pull Requests
URL Status Linked Edit
PR 18581 merged skrah, 2020-02-20 22:23
PR 18584 merged miss-islington, 2020-02-21 00:54
PR 18585 merged miss-islington, 2020-02-21 00:55
PR 18594 merged skrah, 2020-02-21 14:26
PR 18596 merged miss-islington, 2020-02-21 20:28
PR 18597 merged miss-islington, 2020-02-21 20:28
PR 18616 merged skrah, 2020-02-23 13:31
PR 18617 merged miss-islington, 2020-02-23 13:37
PR 18618 merged miss-islington, 2020-02-23 13:37
PR 20743 merged skrah, 2020-06-08 23:10
PR 20744 merged skrah, 2020-06-08 23:12
PR 20745 merged skrah, 2020-06-08 23:24
PR 20746 merged skrah, 2020-06-08 23:27
PR 20747 merged skrah, 2020-06-08 23:37
PR 20748 merged skrah, 2020-06-08 23:39
Messages (62)
msg361536 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-07 03:39
Here under Python 3.8.1 on 64-bit Windows:

>>> import decimal
>>> c = decimal.getcontext()
>>> c.prec = decimal.MAX_PREC
>>> i = decimal.Decimal(4)
>>> i / 2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError

Of course the result is exactly 2.  Which I have enough RAM to hold ;-)

The implicit conversion is irrelevant:

>>> i / decimal.Decimal(2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError

Floor division instead works fine:

>>> i // 2
Decimal('2')
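
For concreteness, a minimal follow-up sketch of the same session (28 is the module's default precision): dropping the precision back to a modest value makes the same division succeed, so the failure is tied to the oversized context precision rather than to the operands.

>>> c.prec = 28        # back to the default precision
>>> i / 2
Decimal('2')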
msg361538 - (view) Author: Vedran Čačić (veky) * Date: 2020-02-07 06:44
> Of course the result is exactly 2.  Which I have enough RAM to hold ;-)

You might think so, but if you write it as 2.00...0 with 

>>> decimal.MAX_PREC
999999999999999999

zeros, I think you're overestimating your RAM capacity. :-P

Now, I don't know what the exact significance of MAX_PREC is, but I guess some hard limits were needed for standards compliance, and Python didn't want to limit you there. Of course, same as with recursionlimit, it might be better to actually have some reasonable _lower_ limit that we know is actually safe.
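
Back-of-the-envelope arithmetic for that claim, assuming the densest packing of 19 decimal digits per 8-byte coefficient word on 64-bit builds (the figure Stefan quotes below in msg361598):

>>> MAX_PREC = 999999999999999999
>>> MAX_PREC // 19 * 8     # bytes for a single full-precision coefficient
421052631578947368
>>> round(_ / 2**50)       # in pebibytes
374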
msg361544 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-02-07 08:50
Agreed that this seems surprising.

@Vedran: Trailing zeros in a Decimal object are significant, so `Decimal("2.0")` and `Decimal("2.00")` are different (equal, but different). The rules about the "ideal exponent" of the result of an arithmetic operation are well-specified. In this case, the spec is clear that the answer should be `Decimal("2")`, and not `Decimal("2.0")` or `Decimal("2.00")`, or ....
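
A quick illustration of the ideal-exponent rule at the default context (my example; for division, the ideal exponent is the dividend's exponent minus the divisor's):

>>> from decimal import Decimal
>>> Decimal('2.0') == Decimal('2.00')    # equal, but distinct representations
True
>>> Decimal(4) / Decimal(2)              # ideal exponent 0 - 0 = 0
Decimal('2')
>>> Decimal('4.00') / Decimal('2.0')     # ideal exponent -2 - (-1) = -1
Decimal('2.0')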
msg361546 - (view) Author: Vedran Čačić (veky) * Date: 2020-02-07 09:05
Yeah, I should have said "represent" instead of "write". :-)
msg361596 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-07 15:19
With _pydecimal the memory use also grows, but very slowly (I didn't have the patience to wait for the MemoryError).

I'm pretty sure decNumber does the same; it's just easier to implement and does not slow down division for small numbers.

libmpdec always favors precisions from roughly 9-34 digits whenever 
there's a potential performance issue.

The only thing I could imagine is to special-case, say, prec > 10000.
msg361597 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-07 15:35
MAX_PREC is chosen so that 5*MAX_PREC does not overflow 32-bit or 64-bit signed integers. This eliminates many overflow checks for the exponent.

Updating the exponent is (perhaps surprisingly) quite performance-sensitive, which is why the 32-bit build does not use a 64-bit exponent.
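
A quick check of that headroom (using the 64-bit MAX_PREC from this report and the 32-bit value of 425000000 that comes up later in msg364578):

>>> 5 * 999999999999999999 < 2**63 - 1    # 64-bit signed arithmetic has room
True
>>> 5 * 425000000 < 2**31 - 1             # so does 32-bit
True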
msg361598 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-07 15:55
The feature would be nice to have; however, if you choose the precision to match the amount of available RAM, things work (I have 8GB here; one coefficient word holds 19 digits in the 64-bit version):

>>> from decimal import *
>>> c = getcontext()
>>> c.prec = 8 * 2**30 // 64 * 19
>>> c.prec
2550136832
>>> i = Decimal(4)
>>> i / 2
Decimal('2')



So I wonder if we really need to do something here.
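
The same sizing rule as a small helper, a sketch only: the 19 digits per 8-byte coefficient word are the 64-bit figures quoted above, and the divisor of 64 simply mirrors the formula in this message (it leaves roughly an 8x margin over the theoretical packing):

>>> def exact_prec_for_ram(ram_bytes, divisor=64):
...     # 19 decimal digits per coefficient word; divisor=64 mirrors the
...     # formula above and keeps headroom for intermediate results
...     return ram_bytes // divisor * 19
...
>>> exact_prec_for_ram(8 * 2**30)
2550136832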
msg361599 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-07 16:13
Vedran, as Mark said, the result is defined to have no trailing zeroes.  In general the module strives to return results "as if" infinite precision were used internally, not to actually _use_ infinite precision internally ;-)  Given the same setup, e.g.,

>>> i * decimal.Decimal(0.5)
Decimal('2.0')

works fine.

This isn't purely academic.  The `decimal` docs, at the end:

"""
Q. Is the CPython implementation fast for large numbers?

A. Yes. ...
However, to realize this performance gain, the context needs to be set for unrounded calculations.

>>> c = getcontext()
>>> c.prec = MAX_PREC
>>> c.Emax = MAX_EMAX
>>> c.Emin = MIN_EMIN
"""

I suggested this approach to someone on Stackoverflow, who was trying to compute and write out the result of a multi-hundred-million-digit integer exponentiation.  Which worked fine, and was enormously faster than using CPython's bigints.

But then I noticed "trivial" calculations - like the one here - blowing up with MemoryError too.  Which made sense for, e.g., 1/7, but not for 1/2.

I haven't looked at the implementation.  I assume it's trying in advance to reserve space for a result with MAX_PREC digits.

It's not limited to division; e.g.,

    >>> c.sqrt(decimal.Decimal(4))
    ...
    MemoryError

is also surprising.

Perhaps the only thing to be done is to add words to the part of the docs _recommending_ MAX_PREC, warning about some "unintended consequences" of doing so.
msg361601 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-07 17:01
> This isn't purely academic.  The `decimal` docs, at the end:

Yes, that is a good point. I think for libmpdec I'll just do a trial divmod if prec > BIGNUM_THRESHOLD.



> Perhaps the only thing to be done is to add words to the part of the docs _recommending_ MAX_PREC, warning about some "unintended consequences" of doing so.

This, too. Probably some formula based on the amount of available RAM would do.
msg361606 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-07 18:35
Formulas based on physical RAM probably work well on Linux, but not so well on Windows:  Windows has no "overcommit".  Whether a virtual memory request succeeds on Windows depends on how much RAM (+ swap space, if any) has already been requested across all processes.  It doesn't matter whether pages have actually been created, merely the total number of pages that _may_ be created.

I never bumped into this issue before, because I never used MAX_PREC before ;-)  When I intended to do exact arithmetic in `decimal`, I did this instead:

- guesstimated the max number of decimal digits a computation would need

- set `prec` to be comfortably - but not massively - larger than that

- enabled the Inexact trap, so if I guessed too low I'd get an exception

Maybe the docs could suggest that instead?
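
A sketch of that workflow (my example, using localcontext so the settings stay scoped; the numbers are arbitrary):

>>> from decimal import Decimal, Inexact, localcontext
>>> with localcontext() as ctx:
...     ctx.prec = 50                 # comfortably above the guessed digit count
...     ctx.traps[Inexact] = True     # a too-low guess raises instead of silently rounding
...     print(Decimal(4) / 2)         # exact, so no trap fires
...     # Decimal(1) / 7 would raise decimal.Inexact here
...
2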
msg362364 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-20 23:06
Fortunately libmpdec raises MemoryError almost instantaneously, so the PR retries 
the affected operations with estimated upper bounds for exact results without 
slowing down the common case.

The docs still need updating because people will still wonder why 1 / Decimal(3)
freezes the Linux machine at MAX_PREC. ;)


The Python implementation cannot use this approach because memory grows slowly.  
I'm not sure though if anyone _would_ run _pydecimal at MAX_PREC.
msg362365 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-20 23:21
BTW, this PR implements the invariant:

"If there exists an exact result at a lower precision, this
result should also be returned at MAX_PREC (without MemoryError)".

So non-integer powers are left out, since _decimal has no notion
of exact non-integer powers yet.
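
A sketch of what the invariant means for the original report, with the libmpdec fix in place (merged for 3.9 and initially backported to 3.7/3.8; the backports were later reverted, see msg371049 onwards). Exact results come back at MAX_PREC, while genuinely inexact results still demand the full precision:

>>> import decimal
>>> c = decimal.getcontext()
>>> c.prec = decimal.MAX_PREC
>>> decimal.Decimal(4) / 2
Decimal('2')
>>> c.sqrt(decimal.Decimal(4))
Decimal('2')
>>> # decimal.Decimal(1) / 3 is inexact and would still try to produce MAX_PREC digits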
msg362373 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 00:52
New changeset 90930e65455f60216f09d175586139242dbba260 by Stefan Krah in branch 'master':
bpo-39576: Prevent memory error for overly optimistic precisions (GH-18581)
https://github.com/python/cpython/commit/90930e65455f60216f09d175586139242dbba260
msg362374 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 01:15
New changeset c6f95543b4832c3f0170179da39bcf99b40a7aa8 by Miss Islington (bot) in branch '3.7':
bpo-39576: Prevent memory error for overly optimistic precisions (GH-18581) (#18585)
https://github.com/python/cpython/commit/c6f95543b4832c3f0170179da39bcf99b40a7aa8
msg362375 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 01:16
New changeset b6271025c640c228505dc9f194362a0c2ab81c61 by Miss Islington (bot) in branch '3.8':
bpo-39576: Prevent memory error for overly optimistic precisions (GH-18581) (#18584)
https://github.com/python/cpython/commit/b6271025c640c228505dc9f194362a0c2ab81c61
msg362376 - (view) Author: Vedran Čačić (veky) * Date: 2020-02-21 01:57
Hm... is "exact result" a technical term that's defined somewhere? Because to me it seems that this

> "If there exists an exact result at a lower precision, this
result should also be returned at MAX_PREC (without MemoryError)".

is a mathematical statement, and surely for Decimal('0.07776') ** Decimal('0.2') _there exists_ an exact result at a lower precision. Maybe this should be reworded as "If there exists ... _and an implementation knows about it_, it should also be returned ..."
msg362389 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 08:48
Vedran, msg362365 is meant to say:

"This PR implements $SOMEWHAT_MATHEMATICAL_SPEC except for inexact power."

Had I put the caveat inside the statement as well, the message would have been:

"This PR implements $SOMEWHAT_MATHEMATICAL_SPEC_EXCEPT_FOR_INEXACT_POWER except for inexact power."


But GNU is not UNIX and all that... ;)
msg362393 - (view) Author: Vedran Čačić (veky) * Date: 2020-02-21 08:58
Well, it all depends on how you intend to read it. To me, the closing quote and "So non-integer powers are left out" after it suggested that the non-integer powers being left out is somehow a consequence of the above paragraph. When in fact I think it should have been "except" or "alas" :-) instead of "so".
msg362394 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 09:11
"So non-integer powers are left out" in isolation would indeed be
wrong, but actual sentence is unambiguously qualified with:

"... since _decimal has no notion of exact non-integer powers yet.",
which clearly states that exact non-integer powers exist and
_decimal does not recognize them (it usually produces exact results
in the respective cases but still sets Inexact).
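
For example (using Vedran's numbers from msg362376): 0.07776 ** 0.2 is exactly 0.6, and _decimal typically returns that value, but it still sets Inexact, which is why power is excluded from the invariant. A sketch at the default context:

>>> from decimal import Decimal, Inexact, getcontext
>>> ctx = getcontext()
>>> ctx.clear_flags()
>>> r = Decimal('0.07776') ** Decimal('0.2')   # mathematically exactly 0.6
>>> ctx.flags[Inexact]                         # set anyway: no notion of exact non-integer powers
True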
msg362408 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 14:55
Updated docs:

https://github.com/python/cpython/pull/18594

The PR uses some of Tim's suggestions while also explaining how to
calculate the amount of memory used in a single large decimal.

Hopefully it isn't too much information.
msg362426 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 20:27
New changeset a025d4ca99fb4c652465368e0b4eb03cf4b316b9 by Stefan Krah in branch 'master':
bpo-39576: docs: set context for decimal arbitrary precision arithmetic (#18594)
https://github.com/python/cpython/commit/a025d4ca99fb4c652465368e0b4eb03cf4b316b9
msg362429 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 20:36
New changeset 00e45877e33d32bb61aa13a2033e3bba370bda4d by Miss Islington (bot) in branch '3.7':
bpo-39576: docs: set context for decimal arbitrary precision arithmetic (GH-18594) (#18596)
https://github.com/python/cpython/commit/00e45877e33d32bb61aa13a2033e3bba370bda4d
msg362430 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 20:36
New changeset d6965ff026f35498e554bc964ef2be8f4d80eb7f by Miss Islington (bot) in branch '3.8':
bpo-39576: docs: set context for decimal arbitrary precision arithmetic (GH-18594) (#18597)
https://github.com/python/cpython/commit/d6965ff026f35498e554bc964ef2be8f4d80eb7f
msg362432 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-21 20:47
libmpdec and the docs are done, the question remains what to do with
decimal.py.

It has the same behavior, but I don't think users are running
decimal.py with very large precisions.


Anyway, unassigning myself in case anyone else wants to work on a patch.
msg362435 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-21 21:38
Thanks, Stefan!  This turned out better than I expected ;-)

I'm closing the report, under the likely assumption that nobody cares enough about obscure inelegancies in the Python version to bother.
msg362508 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-23 13:36
New changeset b76518d43fb82ed9e5d27025d18c90a23d525c90 by Stefan Krah in branch 'master':
bpo-39576: Clarify the word size for the 32-bit build. (#18616)
https://github.com/python/cpython/commit/b76518d43fb82ed9e5d27025d18c90a23d525c90
msg362509 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-23 13:43
New changeset 24c570bbb82a7cb70576c253a73390accfa7ed78 by Miss Islington (bot) in branch '3.7':
bpo-39576: Clarify the word size for the 32-bit build. (GH-18616) (#18617)
https://github.com/python/cpython/commit/24c570bbb82a7cb70576c253a73390accfa7ed78
msg362510 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-02-23 13:44
New changeset c6ecd9c14081a787959e13df33e250102a658154 by Miss Islington (bot) in branch '3.8':
bpo-39576: Clarify the word size for the 32-bit build. (GH-18616) (#18618)
https://github.com/python/cpython/commit/c6ecd9c14081a787959e13df33e250102a658154
msg364360 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-16 21:47
This code has introduced a regression on AIX in Python 3.7.7, as the new "test_maxcontext_exact_arith" test hangs indefinitely or just segfaults.
msg364361 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-16 21:48
This looks like a "new feature/improvement". Why was this code backported to a stable version?
msg364364 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 22:13
> This looks like a "new feature/improvement". Why was this code backported to a stable version?

Thanks for the lecture. This is an esoteric case between bugfix and
feature that only occurs with very large context precisions.

If Bloomberg isn't happy with _decimal, they can stop using it.
I'm not paid by them.

Which AIX buildbot fails? There are many green ones.
msg364368 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 22:36
> This code has introduced a regression in AIX in Python 3.7.7

Also this is a rather bold statement since probably no one has ever
run _decimal on AIX with MAX_PREC.
msg364369 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-16 23:05
> Thanks for the lecture. This is an esoteric case between bugfix and
feature that only occurs with very large context precisions.

Nobody is lecturing anyone. I am just asking why this was backported.

> If Bloomberg isn't happy with _decimal, they can stop using it.
I'm not paid by them.

Stefan, nobody has mentioned Bloomberg in this issue so please, stop.


> Also this is a rather bold statement since probably no one has ever
run _decimal on AIX with MAX_PREC.

Well, I just did and I can confirm that reverting the 3.7 backport fixes the problem.
msg364370 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 23:31
> Well, I just did and I can confirm that reverting the 3.7 backport fixes the problem.

If you are fortunate enough to have access to an AIX system, I guess
you have to find out why POWER6 AIX 3.8 and PPC64 AIX 3.8 apparently
work on https://buildbot.python.org/ but your 3.7 does not.

It seems rather strange to me. Are you compiling --with-system-libmpdec?
msg364371 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 23:38
Hi Michael, in case you have built 3.7.7 on AIX, have you observed
any problems with test_decimal?
msg364372 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-16 23:43
> If you are fortunate enough to have access to an AIX system, I guess
you have to find out why POWER6 AIX 3.8 and PPC64 AIX 3.8 apparently
work on https://buildbot.python.org/ but your 3.7 does not.

I am working on trying to debug where the problem comes from, but meanwhile I wanted to report here that this introduced a difference between Python 3.7.6 and Python 3.7.7.

I have not yet tested Python 3.8, but I suppose it has the same problem.

> Are you compiling --with-system-libmpdec?

No, I am compiling with:

CFLAGS+= -qmaxmem=-1
OBJECT_MODE=64
CONFIGURE_ARGS=--enable-shared --with-system-ffi --without-computed-gotos ac_cv_rshift_extends_sign=no
msg364373 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 23:49
These flags worked for xlc when snakebite was still up:

./configure CC=xlc_r AR="ar -X64" CFLAGS="-q64 -qmaxmem=70000" LDFLAGS="-q64"

-qmaxmem was always finicky, I remember segfaults too (AIX problem,
not libmpdec problem).
msg364374 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-16 23:51
Btw, this is AIX 7.1.0.0 with xlc in case that is relevant.
msg364375 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-16 23:57
BTW, if you are compiling with xlc and there's no xlc buildbot,
perhaps a company-that-shall-not-be-named should provide one.

I mean, it's not okay to complain about a regression and then
mention xlc about 10 mails later.
msg364377 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-17 00:05
> I mean, it's not okay to complain about a regression and then
mention xlc about 10 mails later.

How is this related? Or is it not ok to report a behaviour change in a stable release even if the platform is "best-effort"?

A regression by definition is a behaviour change between two versions. AIX is a "best-effort" platform, not a fully supported one, and I am well aware of that. I have not asked for a revert in any case, precisely because of that (that decision should be up to the release manager anyway); I am just communicating the regression. And doing so is perfectly "ok", even if I am using xlc (which is the native compiler on AIX).
msg364404 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-17 09:45
A focused issue report would include at least the following:

  1) Acknowledge that gcc builds work on the AIX buildbots (a
     fact that has been entirely ignored so far).

  2) State the exact regression: msg364373 (which was also ignored,
     are you using xlc or xlc_r?) says that there have always been
     problems with xlc unless configured in a very specific manner.

     Then obviously we need the exact Python code that worked in
     3.7.6 but not in 3.7.7.


In short, given the flakiness of the xlc toolchain I'm not even
sure if anything can be classified as a regression. Most xlc users
I've talked to have switched to gcc.
msg364405 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-17 10:18
It is also instructive how Granlund views xlc:

https://gmplib.org/list-archives/gmp-bugs/2010-November/002119.html
msg364410 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-17 11:17
> Acknowledge that gcc builds work on the AIX buildbots (a fact that has been entirely ignored so far).

I do acknowledge that. I am not saying that I am sure there is a bug in the code. For all I know at this point, it may be something with xlc, with the OS itself, with the libc version, or something misconfigured on the machine.

The only thing I know is that I could run the test suite without any problem on 3.7.6 and now I cannot on 3.7.7, and because 3.7 is a maintenance branch I thought this was a possible regression, so I reported it.

> are you using xlc or xlc_r?

I am using xlc_r. I am saying xlc because that is the name of the compiler. The _r suffix activates thread-safe compilation (which
I am doing). Apologies for the confusion.

> State the exact regression

The **exact** regression is that I could run the test suite without any crash or freeze on this AIX system
with 3.7.6 and I cannot in 3.7.7. At the very least this is due to the fact that there is a new test that crashes/hangs.
Why this happens I still have no idea as I am currently investigating. Debugging on AIX is not a pleasant task.


> In short, given the flakiness of the xlc toolchain I'm not even
sure if anything can be classified as a regression. Most xlc users
I've talked to have switched to gcc.

I reported the behaviour difference here so we can discuss this as a team. If we decide that this is xlc's fault, or that it is not worth investigating, or that because AIX is not really fully supported we do not want to spend resources on it, I can totally understand that. I thought this was important enough because this change was in 3.7, which is supposed to be stable across minor releases (the same applies to 3.8 if the problem also appears there).
msg364415 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-17 11:58
"The **exact** regression is that I could run the test suite without any crash or freeze on this AIX system
with 3.7.6 and I cannot in 3.7.7. At the very least this is due to the fact that there is a new test that crashes/hangs."


In other words, contrary to your earlier dismissal, you did NOT
run _decimal on AIX with MAX_PREC but just ran the 3.7.6 tests
that do not include any tests with MAX_PREC.
msg364416 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-17 12:07
> In other words, contrary to your earlier dismissal, you did NOT
run _decimal on AIX with MAX_PREC but just ran the 3.7.6 tests
that do not include any tests with MAX_PREC.

I did, and it raises MemoryError:

❯ uname -a
AIX 1 7 powerpc 00CCAD974C00 AIX

❯ python3.7 --version
Python 3.7.6

❯ python3.7 Lib/test/test_decimal.py

...

======================================================================
ERROR: test_maxcontext_exact_arith (__main__.CWhitebox)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "Lib/test/test_decimal.py", line 5506, in test_maxcontext_exact_arith
    self.assertEqual(Decimal(4).sqrt(), 2)
MemoryError
msg364418 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-17 12:12
Just to be clear: I am saying that the *exact* regression is manifested by the new test, because that is the behavioural difference that I experienced and how I found this problem.

> but just ran the 3.7.6 tests that do not include any tests with MAX_PREC.

Independently of other considerations, if the test suite fails in 3.7.7 and succeeds in 3.7.6, that is a regression.
msg364420 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-17 12:29
Sorry, I'm reacting like Granlund now and close. These discussions
lead nowhere.
msg364421 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-17 12:38
After some debugging, I discovered that the new test succeeds if I configure and compile CPython without 'OBJECT_MODE=64' set.
msg364534 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-18 17:19
Since xlc has elementary bugs like

   https://github.com/openssl/openssl/issues/10624

it may be worth checking out if this is an optimizer bug with -q64.
msg364578 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-03-18 23:27
I will try to investigate that, but I think this is because without OBJECT_MODE=64 the value of decimal.MAX_PREC is much lower (425000000 instead of 999999999999999999), so whatever is happening downstream with the higher value does not happen with the lower one.

I will update if I find something useful about what is going on.
msg364602 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-03-19 12:40
The lower MAX_PREC for 32-bit could be the reason.

On the other hand, historically, suncc and xlc have had a lot of
problems with the 64-bit build. The winner is suncc, which seriously
miscompiled libmpdec without a whole litany of flags:

https://bugs.python.org/issue15963#msg170661


This is why I keep insisting on experimenting with obscure flags,
but of course the actual cause may be different.
msg365714 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-04-03 18:04
Has anything emerged xlc-wise? Is there a traceback, and does it work --without-pymalloc?

I have the feeling that (in many OSS projects) the more complex xlc
issues never get resolved after the initial report.

So I'm contemplating doing the same as

   https://devhub.vr.rwth-aachen.de/soehrl/arbor/commit/775fe80796c717da4ed61e1cb7ace27268afc965

and disabling the xlc build for _decimal.
msg365715 - (view) Author: Pablo Galindo Salgado (pablogsal) * (Python committer) Date: 2020-04-03 18:54
> Traceback, does it work --without-pymalloc?

No, it also does not work with `--without-pymalloc`.

> Has anything emerged xlc-wise?

Not for now. I keep investigating and may try to contact IBM about this, but at this stage I am getting more confident that this is a bug in xlc or something going on with the platform or the libc implementation.

I am currently chasing something weird that I saw while investigating, regarding very large malloc() calls and the data segment size on AIX, which produces a similar hang.
msg371049 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:01
I think I'm going to revert this for 3.7 and 3.8 -- not because of xlc (it is almost certainly a compiler or missing flag error), but because coordination with the Linux distributions is a mess, see #40874.

I really want the system libmpdec to be the same as the one in Python.

So there has been one 3.7 and one 3.8 release containing this change in the meantime. However, I highly doubt that anyone has relied on it.

In the worst case, people can compile 3.7/3.8 with the upcoming libmpdec-2.5.0.
msg371051 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:21
New changeset 706de4e5a4b21880c67f6b90e3a2147a258d6fc5 by Stefan Krah in branch '3.8':
[3.8] Revert bpo-39576: Clarify the word size for the 32-bit build. (GH-20743)
https://github.com/python/cpython/commit/706de4e5a4b21880c67f6b90e3a2147a258d6fc5
msg371052 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:22
New changeset c0b79450bc9e93105799528151c48d25af8240a3 by Stefan Krah in branch '3.7':
[3.7] Revert bpo-39576: Clarify the word size for the 32-bit build. (GH-20744)
https://github.com/python/cpython/commit/c0b79450bc9e93105799528151c48d25af8240a3
msg371054 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:33
New changeset 32c1fb07e6f2ded90e5dd24d4b46b7aa7a795d2e by Stefan Krah in branch '3.8':
[3.8] Revert bpo-39576: docs: set context for decimal arbitrary precision arithmetic (GH-20745)
https://github.com/python/cpython/commit/32c1fb07e6f2ded90e5dd24d4b46b7aa7a795d2e
msg371055 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:34
New changeset 9bd891920a5186b7d02281ea9966225efa0ceba1 by Stefan Krah in branch '3.7':
[3.7] Revert bpo-39576: docs: set context for decimal arbitrary precision arithmetic (GH-20746)
https://github.com/python/cpython/commit/9bd891920a5186b7d02281ea9966225efa0ceba1
msg371059 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:55
New changeset 22faf6ad3bcc0ae478a9a3e2d8e35888d88d6ce8 by Stefan Krah in branch '3.7':
[3.7] Revert bpo-39576: Prevent memory error for overly optimistic precisions (GH-20748)
https://github.com/python/cpython/commit/22faf6ad3bcc0ae478a9a3e2d8e35888d88d6ce8
msg371060 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-08 23:57
New changeset 0f5a28f834bdac2da8a04597dc0fc5b71e50da9d by Stefan Krah in branch '3.8':
[3.8] Revert bpo-39576: Prevent memory error for overly optimistic precisions (GH-20747)
https://github.com/python/cpython/commit/0f5a28f834bdac2da8a04597dc0fc5b71e50da9d
msg371069 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-06-09 00:32
The 3.9 change (see #40874) works successfully on all buildbots, except for the 32-bit xlc bot which should use c99_r.

Additionally, it has been tested with the latest gcc/clang/icc/cl.exe, static analyzers and clang-tidy.

It survives brute force allocation failure tests under Valgrind and all sanitizers (ASAN + integrated LSAN, UBSAN, TSAN).


Except for one (standalone LSAN), and that is an LSAN issue:

https://github.com/google/sanitizers/issues/1257


It looks very similar to the xlc issue and was found by using a debugger, which is apparently not possible with the xlc toolchain. :-)
msg375494 - (view) Author: Stefan Krah (skrah) * (Python committer) Date: 2020-08-15 20:44
Thanks to David Edelsohn I have AIX access now. The issue reported
by Pablo is the same as #41540, for a summary see msg375480.

It is a trivial issue: ulimits need to be in place because AIX over-allocates petabytes even when the physical memory is just 16GB.


I have verified that the Python-3.7.7 release, which contains the
feature, behaves exactly like master:


No limits
=========

./configure CC=xlc AR="ar -X64" CFLAGS="-q64 -qmaxmem=70000 -qlanglvl=extc99 -qcpluscmt -qkeyword=inline -qalias=ansi -qthreaded -D_THREAD_SAFE -D__VACPP_MULTI__" LDFLAGS="-L/usr/lib64 -q64"

skrah@gcc119:[/home/skrah/Python-3.7.7]./python -m test -uall test_decimal
0:00:00 Run tests sequentially
0:00:00 [1/1] test_decimal
Killed


There is no segfault; the program receives SIGKILL.



Data limits (-bmaxdata)
=======================

./configure CC=xlc AR="ar -X64" CFLAGS="-q64 -qmaxmem=70000 -qlanglvl=extc99 -qcpluscmt -qkeyword=inline -qalias=ansi -qthreaded -D_THREAD_SAFE -D__VACPP_MULTI__" LDFLAGS="-L/usr/lib64 -q64 -bmaxdata:0x800000000"

skrah@gcc119:[/home/skrah/Python-3.7.7]./python -m test -uall test_decimal
0:00:00 Run tests sequentially
0:00:00 [1/1] test_decimal

== Tests result: SUCCESS ==

1 test OK.

Total duration: 18.0 sec
Tests result: SUCCESS



In summary, since 64-bit AIX users should be familiar with data limits,
I don't consider the situation a _decimal bug at all.