Message397007
Thank you for your reply. I didn't know that this was the expected behavior (found it at https://wiki.python.org/moin/TimeComplexity):
"For these operations, the worst case n is the maximum size the container ever achieved, rather than just the current size. For example, if N objects are added to a dictionary, then N-1 are deleted, the dictionary will still be sized for N objects (at least) until another insertion is made."
Thank you for pointing it out!
I still disagree with this design, since it's not how a person using a hash map expects it to behave. That said, this link clearly shows that it's *not* a bug, and that we simply have different opinions about the best trade-off between space, code complexity, and speed, especially for something as ubiquitous as a dictionary.
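The sizing behavior quoted above is easy to observe from Python with `sys.getsizeof`, which reflects the allocated keys table (a quick sketch; exact byte counts vary by CPython version):

```python
import sys

d = {i: i for i in range(10_000)}
size_full = sys.getsizeof(d)

# Delete all but one entry: CPython does not shrink the keys table
# on deletion, so the reported size stays the same.
for i in range(9_999):
    del d[i]
size_after_deletes = sys.getsizeof(d)

print(size_full, size_after_deletes)
```

On CPython both printed sizes are equal, even though the dictionary now holds a single entry.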
Just one final attempt to gain support for my idea before I close the issue. When I proposed a linked list, I didn't mean an "intro to programming" linked list with pointers and a new malloc for every node. I meant simply adding two fields to PyDictKeyEntry holding the indices of the next and previous valid entries. There would be a tiny memory overhead, and we would get a data structure that behaves how one would reasonably expect it to (compare with the hash tables found in the standard libraries of other languages; it's usually unexpected for an operation on a data structure to take time proportional to its historical maximum size).
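To make the proposal concrete, here is a hypothetical Python sketch (illustrative only, not CPython code; the class and field names are mine) of an append-only entry array where each slot stores the indices of the previous and next live entries, so iteration visits only live slots:

```python
class LinkedEntries:
    """Entry array with an intrusive doubly-linked list of live entries,
    mimicking the proposed two extra index fields on PyDictKeyEntry."""

    def __init__(self):
        self.entries = []   # each slot: [key, value, prev, next], or None if deleted
        self.head = -1      # index of the first live entry
        self.tail = -1      # index of the last live entry
        self.index = {}     # key -> slot (stands in for the hash table lookup)

    def insert(self, key, value):
        # Sketch only: assumes keys are new (no overwrite of existing keys).
        slot = len(self.entries)
        self.entries.append([key, value, self.tail, -1])
        if self.tail != -1:
            self.entries[self.tail][3] = slot
        else:
            self.head = slot
        self.tail = slot
        self.index[key] = slot

    def delete(self, key):
        slot = self.index.pop(key)
        _, _, prev, nxt = self.entries[slot]
        # Unlink the slot; its neighbors now point past it.
        if prev != -1:
            self.entries[prev][3] = nxt
        else:
            self.head = nxt
        if nxt != -1:
            self.entries[nxt][2] = prev
        else:
            self.tail = prev
        self.entries[slot] = None  # slot stays allocated but is skipped

    def __iter__(self):
        # O(number of live entries), regardless of how many were deleted.
        slot = self.head
        while slot != -1:
            key, value, _, nxt = self.entries[slot]
            yield key, value
            slot = nxt
```

With this layout, iterating a dictionary that once held N entries but now holds one would touch a single slot instead of scanning all N, at the cost of two indices per entry.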
I will close this issue in two days if there are no further replies. Thank you for the discussion!
History
Date                | User             | Action | Args
2021-07-05 16:37:53 | danielfleischman | set    | recipients: + danielfleischman, rhettinger, methane, serhiy.storchaka, Dennis Sweeney
2021-07-05 16:37:53 | danielfleischman | set    | messageid: <1625503073.61.0.967861568932.issue44555@roundup.psfhosted.org>
2021-07-05 16:37:53 | danielfleischman | link   | issue44555 messages
2021-07-05 16:37:53 | danielfleischman | create |