classification
Title: collections.deque.count performance enhancement
Type: performance
Stage: resolved
Components: Extension Modules
Versions: Python 3.10

process
Status: closed
Resolution: wont fix
Dependencies:
Superseder:
Assigned To: corona10
Nosy List: corona10, rhettinger
Priority: normal
Keywords: patch

Created on 2020-07-20 14:46 by corona10, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Files
bench_deque_count.py: uploaded by corona10, 2020-07-20 14:46

Pull Requests
PR 21566 (closed): corona10, 2020-07-20 14:49
Messages (4)
msg374010 - (view) Author: Dong-hee Na (corona10) * (Python committer) Date: 2020-07-20 14:46
Same situation as: https://bugs.python.org/issue39425

Mean +- std dev: [master_count] 946 ns +- 14 ns -> [ac_count] 427 ns +- 7 ns: 2.22x faster (-55%)
msg374011 - (view) Author: Dong-hee Na (corona10) * (Python committer) Date: 2020-07-20 14:46
Benchmark file
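The uploaded bench_deque_count.py is not reproduced here; a minimal pyperf sketch of this kind of measurement might look like the following (the deque of identical objects is an assumption, chosen to match the "all objects identical" case discussed in the next message):

  import pyperf

  # Time deque.count() on a deque whose elements are all the same object,
  # so every comparison can take the identity fast path.
  runner = pyperf.Runner()
  runner.timeit(
      name="deque.count",
      stmt="d.count(x)",
      setup="from collections import deque; x = 12345; d = deque([x] * 1000)",
  )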
msg374037 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2020-07-20 23:49
I would rather not do this.   It optimizes for the uncommon case where all the objects are identical.  The common case is slightly worse off because the identity test is performed twice, once before the call to Py_RichCompareBool() and again inside it.  Also, the PR adds a little clutter which obscures the business logic.  
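
A minimal Python sketch of the kind of fast path being discussed (the helper name is hypothetical; the real change is in deque's C implementation): check identity first, then fall back to equality. Py_RichCompareBool already performs the same identity shortcut internally, which is why non-identical items end up paying for the test twice.

  from collections import deque

  def count_with_identity_check(dq, needle):
      # Cheap identity test first, then the general equality test.
      # Py_RichCompareBool does the same identity shortcut itself, so in
      # the C code this check runs twice for every non-identical item.
      n = 0
      for item in dq:
          if item is needle or item == needle:
              n += 1
      return n

  count_with_identity_check(deque([1, 2, 1, 3]), 1)  # -> 2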

Another thought: micro-benchmarks of the identity tests require some extra care because they are super sensitive to branch prediction failures (see https://stackoverflow.com/questions/11227809).  A more realistic dataset would be:

  import random

  x = 12345
  data = [x] * 100 + list(range(500))  # 100 identical hits mixed with 500 misses
  random.shuffle(data)
  data.count(x)
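
Counting the same shuffled data through a deque, as this issue targets, can be timed with a rough timeit sketch (values carried over from the snippet above; this is not the uploaded benchmark):

  import random
  import timeit
  from collections import deque

  x = 12345
  data = [x] * 100 + list(range(500))
  random.shuffle(data)
  d = deque(data)
  # A mix of identical, merely equal, and unequal items avoids the
  # always-taken branch that an all-identical dataset would produce.
  print(timeit.timeit("d.count(x)", globals=globals(), number=10_000))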
msg374048 - (view) Author: Dong-hee Na (corona10) * (Python committer) Date: 2020-07-21 01:59
> I would rather not do this.

I am not saying this change must be applied ;)
I found this while converting the deque methods to use
Argument Clinic (I will ping you later ;)


The change in https://bugs.python.org/issue39425 was applied because PyObject_RichCompareBool requires reference counting, which caused a performance regression.

Now I remember why we applied that micro-optimization.

So my conclusion is: don't apply this change.
We don't have to apply it, since PyObject_RichCompareBool does not cause a performance regression here.

Thank you for your comment, Raymond.
History
Date                 User        Action  Args
2022-04-11 14:59:33  admin       set     github: 85519
2020-07-21 02:00:29  corona10    set     status: open -> closed; resolution: wont fix; stage: patch review -> resolved
2020-07-21 01:59:51  corona10    set     messages: + msg374048
2020-07-20 23:49:40  rhettinger  set     messages: + msg374037
2020-07-20 15:13:32  xtreak      set     nosy: + rhettinger
2020-07-20 14:49:08  corona10    set     keywords: + patch; stage: patch review; pull_requests: + pull_request20710
2020-07-20 14:46:56  corona10    set     files: + bench_deque_count.py; messages: + msg374011
2020-07-20 14:46:30  corona10    create