Author: lemburg
Recipients: brett.cannon, lemburg, pitrou, r.david.murray, vstinner, yselivanov
Date: 2016-04-19.11:25:31
Message-id: <5716159C.2010100@egenix.com>
In-reply-to: <CAMpsgwb7MMr=P0+fUCt9yhKF4SgGLjZXJhW_O-Sa19KZ4nsmCg@mail.gmail.com>
Content
On 19.04.2016 13:11, STINNER Victor wrote:
> 
> STINNER Victor added the comment:
> 
>> Could you perhaps check what's causing these slowdowns ?
> 
> It's obvious, no? My patch causes the slowdown.

Well, yes, of course :-) I meant whether there's anything you
can do about those slowdowns.

> On a timeit microbenchmark, I don't see such slowdown. That's also why
> I suspect that pybench is unstable.
> 
> python3.6 -m timeit '{}' says 105 ns with and without the patch.
> 
> python3.6 -m timeit 'd={}; d[1]=1; d[2]=2; d[3]=3; d[4]=4; d[5]=5;
> d[6]=6; d[7]=7; d[8]=8; d[9]=9; d[10]=10' says 838 ns with and without
> the patch.
> 
> I have to "cheat": I run timeit enough times until I see the "minimum".

Those operations are too fast for timeit. The overhead associated
with looping is much larger than the time it takes to run the
operation itself. That's why in pybench I put the operations into
blocks of repeated statements. The interpreter then doesn't spend
time on branching when going from one statement execution to the
next (inside those blocks) and you get closer to the real runtime
of the operation you're trying to measure.
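For illustration, here is a rough sketch of that effect (this is not
pybench itself; the statement, the block size of 20 and the repeat/number
counts are arbitrary choices for the example). Unrolling the statement
into a block amortizes the loop overhead over many executions, and taking
the minimum of several runs filters out some of the noise mentioned above:

    import timeit

    # One dict display per loop iteration: the for-loop bookkeeping is a
    # large fraction of what gets measured.
    single = timeit.repeat(stmt="{}", number=1000000, repeat=5)

    # pybench-style block: the same statement repeated 20 times inside the
    # loop body, so the loop overhead is spread over 20 executions.
    block = "\n".join(["{}"] * 20)
    unrolled = timeit.repeat(stmt=block, number=50000, repeat=5)

    # Take the minimum of the repeated runs to reduce the influence of
    # system noise.
    print("per-op time, single statement :", min(single) / 1000000)
    print("per-op time, 20x unrolled block:", min(unrolled) / (50000 * 20))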