
Author danielfleischman
Recipients Dennis Sweeney, danielfleischman
Date 2021-07-03.04:03:27
Message-id <CA+vWjfgSsXSct=nbV8_KWmfES28FArE4PV5whbX159-Mp-C1+Q@mail.gmail.com>
In-reply-to <1625281842.38.0.559397890339.issue44555@roundup.psfhosted.org>
Content
Thank you for your message!

I'm not particularly looking for an implementation, I was just surprised by
this behavior.

It can get even worse. Consider this example:

```
# build a large dict (stand-in for large_dict())
d = {i: i for i in range(1_000_000)}

# remove all but one element; as explained above, this loop runs in
# quadratic time
while len(d) > 1:
    del d[next(iter(d))]

# iterating over d now takes O(N), where N is the *original* size,
# even though d has only one item left:

# the following prints one key, but takes O(N)
for key in d:
    print(key)
```

I think this example is better, since a person would expect iterating over
a singleton dictionary to be really fast, but it can be made as slow as
one wants. A "print_keys()" function would reasonably be expected to run in
time proportional to the current size of the dictionary, but it doesn't.
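To make the asymmetry concrete, here is a small timing sketch (stdlib only; the size `N`, the helper `shrink_to_one`, and the repetition count are my choices, and absolute numbers will vary by machine and CPython version). It compares iteration after the same delete-all-but-one pattern on a plain dict and on a `collections.OrderedDict`, which tracks live keys in its own linked list:

```python
import collections
import timeit

def shrink_to_one(d):
    """Delete all but one item, one key at a time."""
    while len(d) > 1:
        del d[next(iter(d))]
    return d

N = 100_000
plain = shrink_to_one({i: i for i in range(N)})
ordered = shrink_to_one(collections.OrderedDict((i, i) for i in range(N)))

# Both mappings now hold exactly one item...
print(len(plain), len(ordered))

# ...but iterating the plain dict still scans the old, mostly-empty
# entry table, while the OrderedDict walks its (one-node) linked list.
t_plain = timeit.timeit(lambda: list(plain), number=1000)
t_ordered = timeit.timeit(lambda: list(ordered), number=1000)
print(f"plain dict:  {t_plain:.4f}s")
print(f"OrderedDict: {t_ordered:.4f}s")
```

On CPython the plain-dict timing grows with the original size N, while the OrderedDict timing stays roughly flat regardless of N.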

Am I right to think that this is a performance bug?

On Fri, Jul 2, 2021, 8:10 PM Dennis Sweeney <report@bugs.python.org> wrote:

>
> Dennis Sweeney <sweeney.dennis650@gmail.com> added the comment:
>
> For what it's worth, using collections.OrderedDict gives the constant-time
> behavior you're looking for.
>
> ----------
> nosy: +Dennis Sweeney
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <https://bugs.python.org/issue44555>
> _______________________________________
>
History
Date                 User              Action  Args
2021-07-03 04:03:28  danielfleischman  set     recipients: + danielfleischman, Dennis Sweeney
2021-07-03 04:03:28  danielfleischman  link    issue44555 messages
2021-07-03 04:03:27  danielfleischman  create