
constant folding opens compiler to quadratic time hashing #74601

Closed
dalke mannequin opened this issue May 20, 2017 · 15 comments
Assignees
Labels
interpreter-core (Objects, Python, Grammar, and Parser dirs) type-bug An unexpected behavior, bug, or error

Comments


dalke mannequin commented May 20, 2017

BPO 30416
Nosy @mdickinson, @pitrou, @benjaminp, @serhiy-storchaka, @DimitrisJim
PRs
  • bpo-30416: Protect the optimizer during constant folding. #4860
  • [3.6] bpo-30416: Protect the optimizer during constant folding. #4865
  • [2.7] bpo-30416: Protect the optimizer during constant folding. (GH-4865) #4896
Files
  • safe-const-folding.diff
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.

    GitHub fields:

    assignee = 'https://github.com/benjaminp'
    closed_at = <Date 2018-07-08.09:29:23.123>
    created_at = <Date 2017-05-20.21:01:37.163>
    labels = ['interpreter-core', 'type-bug']
    title = 'constant folding opens compiler to quadratic time hashing'
    updated_at = <Date 2018-07-08.09:29:31.925>
    user = 'https://bugs.python.org/dalke'

    bugs.python.org fields:

    activity = <Date 2018-07-08.09:29:31.925>
    actor = 'serhiy.storchaka'
    assignee = 'benjamin.peterson'
    closed = True
    closed_date = <Date 2018-07-08.09:29:23.123>
    closer = 'serhiy.storchaka'
    components = ['Interpreter Core']
    creation = <Date 2017-05-20.21:01:37.163>
    creator = 'dalke'
    dependencies = []
    files = ['46887']
    hgrepos = []
    issue_num = 30416
    keywords = ['patch']
    message_count = 15.0
    messages = ['294050', '294051', '294107', '294120', '294182', '294246', '294247', '308386', '308388', '308558', '308622', '314848', '318511', '318512', '321269']
    nosy_count = 6.0
    nosy_names = ['dalke', 'mark.dickinson', 'pitrou', 'benjamin.peterson', 'serhiy.storchaka', 'Jim Fasarakis-Hilliard']
    pr_nums = ['4860', '4865', '4896']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'behavior'
    url = 'https://bugs.python.org/issue30416'
    versions = ['Python 2.7']


    dalke mannequin commented May 20, 2017

    Others have reported issues like bpo-21074 where the peephole compiler generates and discards large strings, and bpo-30293 where it generates multi-MB integers and stores them in the .pyc.

    This is a different issue. The code:

      def tuple20():
          return ((((((((1,)*20,)*20,)*20,)*20,)*20,)*20,)*20,)*20

    takes over four minutes (257 seconds) to compile on my machine. The seemingly larger:

      def tuple30():
          return ((((((((1,)*30,)*30,)*30,)*30,)*30,)*30,)*30,)*30

    takes a small fraction of a second to compile, and is equally fast to run.

    Neither code generates a large data structure; in fact, each needs only about 1 KB.

    A sampling profile of the first case, taken around 30 seconds into the run, shows that nearly all of the CPU time is spent computing the hash of the highly nested tuple20 value, which must visit a very large number of elements even though there are only a small number of unique ones. The call chain is:

    Py_Main -> PyRun_SimpleFileExFlags -> PyAST_CompileObject -> compiler_body -> compiler_function -> compiler_make_closure -> compiler_add_o -> PyDict_GetItem and then into the tuple hash code.

    It appears that the peephole optimizer converts the highly nested tuple20 into a constant value. The compiler then creates the code object with its co_consts. Specifically, compiler_make_closure uses a dictionary to ensure that the elements in co_consts are unique and mapped to the integer index used by LOAD_CONST.

    It takes about 115 seconds to compute hash(tuple20). I believe the hash is computed twice, once to check whether the object is present and a second time to insert it. I suspect most of the other 26 seconds went to computing the intermediate constants in the tuple.
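
    Outside the compiler, the cost is easy to reproduce by building the same kind of shared, nested tuple at runtime and timing hash() on it. A minimal sketch (the nest() helper and the chosen depths are illustrative, not from the report; depth 8 matches tuple20 and takes minutes):

      import time

      def nest(depth, width=20):
          # Build a tuple `depth` levels deep in which each level repeats the
          # previous one `width` times; the levels share objects, so the whole
          # structure needs only about 1 KB of memory.
          value = (1,) * width
          for _ in range(depth - 1):
              value = (value,) * width
          return value

      # hash() must visit every leaf, so the cost grows as width**depth even
      # though only a handful of distinct objects exist.
      for depth in range(4, 8):
          nested = nest(depth)
          start = time.perf_counter()
          hash(nested)
          print(depth, time.perf_counter() - start)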

    Based on the previous issues I highlighted in my first paragraph, I believe this report will be filed under "Doctor, doctor, it hurts when I do this"/"Then don't do it." I see no easy fix, and cannot think of how it would come about in real-world use.

    I point it out because, in reading the various issues related to the peephole optimizer, there is a subset of people who propose a look-before-you-leap technical solution: avoid an optimization when the estimated result is too large. While that does help, it does not avoid all of the negatives of the peephole optimizer, or of any AST-based optimizer with similar capabilities. I suspect even most core developers aren't aware of this specific negative.

    @dalke dalke mannequin added 3.7 (EOL) end of life interpreter-core (Objects, Python, Grammar, and Parser dirs) labels May 20, 2017
    @serhiy-storchaka
    Member

    Nice example.

    The only fix I see is caching the hash in a tuple. This could even help in more cases, when tuples are used as dict keys. But this affects too much code, and can even break code that mutates a tuple with refcount 1.
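
    As a rough pure-Python illustration of the idea (the CachedHashTuple class below is mine; the actual proposal is to cache the hash inside the C tuple object), a subclass that memoizes its hash avoids repeating the traversal on the second lookup:

      class CachedHashTuple(tuple):
          # Remember the hash after the first computation. Only this outermost
          # tuple benefits; the real change would cache the hash at every level.
          def __hash__(self):
              try:
                  return self._cached_hash
              except AttributeError:
                  self._cached_hash = h = super().__hash__()
                  return h

    The per-instance __dict__ this subclass needs is a crude stand-in for the extra memory every tuple would have to carry, which is the trade-off raised in the next comment.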


    pitrou commented May 21, 2017

    Caching a tuple's hash value is a nice idea but it would increase the memory consumption of all tuples, which would probably hurt much more in the average case... Unless of course we find a way to cache the hash value in a separate memory area that's reserved on demand.


    dalke mannequin commented May 22, 2017

    A complex solution is to stop constant folding when there are more than a few levels of tuples. I suspect there aren't that many cases where there are more than 5 levels of tuples and where constant creation can't simply be assigned and used as a module variable.

    This solution would become even more complex should constant propagation be supported.

    Another option is to check the value about to be added to co_consts. If it is a container, then check if it would require more than a few levels of hash calls. If so, then simply add it without ensuring uniqueness.

    This could be implemented because the compiler only needs to know how to carry out that check for the handful of supported container types.
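
    A sketch of that check in Python (hash_depth_ok and the limit of 5 are illustrative; the real check would live in the C compiler and only needs to handle the container types that can appear in co_consts):

      def hash_depth_ok(obj, limit=5):
          # True if hashing `obj` recurses through at most `limit` levels of
          # containers; short-circuits as soon as one branch is too deep.
          if limit < 0:
              return False
          if isinstance(obj, (tuple, frozenset)):
              return all(hash_depth_ok(item, limit - 1) for item in obj)
          return True  # ints, strings, floats, etc. hash in one step

      # A value failing the check would be appended to co_consts without the
      # dictionary lookup, skipping the expensive hash entirely.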

    @serhiy-storchaka
    Member

    The proposed patch makes constant folding safer by checking the arguments before doing an expensive calculation that could create a large object (multiplication, power, and left shift). It fixes the examples in this issue, bpo-21074, and bpo-30293. The limit for repetition is increased from 20 to 256. There are no limits for addition, concatenation, and the like, since it is hard to create really large objects with those operations.
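
    The shape of the guard, sketched in Python (the actual check is in the C peephole optimizer; the limit and the way the result size is estimated are simplified here):

      MAX_ITEMS = 256  # repetition limit mentioned above

      def try_fold_repeat(seq, n):
          # Fold `seq * n` at compile time only if the result stays small.
          # Returning None means "leave the multiplication in the bytecode".
          if not isinstance(n, int) or n < 0 or len(seq) * n > MAX_ITEMS:
              return None
          return seq * n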

    @serhiy-storchaka serhiy-storchaka added the type-bug An unexpected behavior, bug, or error label May 22, 2017
    @mdickinson
    Member

    After testing on Python 2, I was surprised to discover that this issue was introduced to the 2.x series quite recently: 2.7.11 doesn't have the issue, while 2.7.13 does.

    The culprit appears to be this commit: 67edf73, introduced as part of bpo-27942.


    pitrou commented May 23, 2017

    Yet another proof that performance improvements should *not* be committed to bugfix branches.

    Please, can someone learn a lesson?

    @serhiy-storchaka
    Member

    New changeset 2e3f570 by Serhiy Storchaka in branch 'master':
    bpo-30416: Protect the optimizer during constant folding. (GH-4860)
    2e3f570

    @serhiy-storchaka
    Member

    New changeset b580f4f by Serhiy Storchaka in branch '3.6':
    [3.6] bpo-30416: Protect the optimizer during constant folding. (GH-4865)
    b580f4f

    @serhiy-storchaka
    Member

    PR 4896 fixes this issue in 2.7, but I'm not sure that I want to merge it. The code in 2.7 is more complex because of two integer types.

    @serhiy-storchaka
    Member

    What do you suggest to do with 2.7? Revert the changes that introduced the regression, merge the backported fix, or keep all as is?

    @benjaminp
    Contributor

    We should revert the breaking 2.7 changes.

    On Sun, Mar 25, 2018, at 13:59, Gregory P. Smith wrote:

    Change by Gregory P. Smith <greg@krypto.org>:

    ----------
    assignee: -> benjamin.peterson


    @serhiy-storchaka
    Member

    Reverting bpo-27942 (67edf73) doesn't solve this issue in 2.7. The previous revision has this issue, as well as 2.7.12 and 2.7.11. Even 2.6.9 and 2.5.6 have this issue.

    We have to either merge PR 4896 or decide that this issue is "won't fix" in 2.7.


    pitrou commented Jun 2, 2018

    I vote for "won't fix". 2.7 has lived long enough with this issue, AFAIU. And it won't be triggered by regular code.

    @serhiy-storchaka
    Member

    Okay, this issue is fixed in 3.6.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022