Author scoder
Recipients Sergey.Kirpichev, asmeurer, mark.dickinson, rhettinger, scoder, terry.reedy, tim.peters
Date 2021-05-17.11:59:21
Message-id <1621252761.58.0.951790706843.issue43420@roundup.psfhosted.org>
In-reply-to
Content
Just FYI, I applied the same changes to the quicktions [1] module, a Cython-compiled (and optimised) version of fractions.Fraction.

[1] https://github.com/scoder/quicktions/
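
For reference, the change being benchmarked replaces the single gcd of the full products in multiplication with two smaller gcds on the cross terms (Knuth's algorithm). That explains the shape of the results below: small components pay a fixed overhead for the extra gcd calls, while large components benefit from much smaller intermediate products. A minimal sketch of the technique on plain int pairs already in lowest terms (my own illustration, not the actual quicktions code, which works on its internal representation):

from math import gcd

def mul(na, da, nb, db):
    # na/da * nb/db, inputs assumed normalised (gcd(na, da) == gcd(nb, db) == 1)
    g1 = gcd(na, db)
    if g1 > 1:
        na //= g1
        db //= g1
    g2 = gcd(nb, da)
    if g2 > 1:
        nb //= g2
        da //= g2
    # the reduced factors are already coprime, so no final normalisation is needed
    return na * nb, db * da

assert mul(10, 3, 6, 5) == (4, 1)   # 10/3 * 6/5 == 4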

The performance loss for small values is much higher there: almost 2x for the example given (compared to the 10-20% seen for CPython):

CPython fractions.Fraction (for reference):
$ python3.8 -m timeit -r11 -s 'from fractions import Fraction as F' -s 'a=F(10,3)' -s 'b=F(6, 5)' 'a * b'
100000 loops, best of 11: 1.94 usec per loop

Original:
$ python3.8 -m timeit -r11 -s 'from quicktions import Fraction as F' -s 'a=F(10,3)' -s 'b=F(6, 5)' 'a * b'
1000000 loops, best of 11: 214 nsec per loop

Patched:
$ python3.8 -m timeit -r11 -s 'from quicktions import Fraction as F' -s 'a=F(10,3)' -s 'b=F(6, 5)' 'a * b'
500000 loops, best of 11: 391 nsec per loop


For the larger-values example, on the other hand, the gain is tremendous:

CPython fractions.Fraction (for reference):
$ python3.8 -m timeit -r11 -s 'from fractions import Fraction as F' -s 'import random' -s 'n = [random.randint(1, 1000000) for _ in range(1000)]' -s 'd = [random.randint(1, 1000000) for _ in range(1000)]' -s 'a=list(map(lambda x: F(*x), zip(n, d)))' 'sum(a)'
2 loops, best of 11: 150 msec per loop

Original:
$ python3.8 -m timeit -r11 -s 'from quicktions import Fraction as F' -s 'import random' -s 'n = [random.randint(1, 1000000) for _ in range(1000)]' -s 'd = [random.randint(1, 1000000) for _ in range(1000)]' -s 'a=list(map(lambda x: F(*x), zip(n, d)))' 'sum(a)'
2 loops, best of 11: 135 msec per loop

Patched:
$ python3.8 -m timeit -r11 -s 'from quicktions import Fraction as F' -s 'import random' -s 'n = [random.randint(1, 1000000) for _ in range(1000)]' -s 'd = [random.randint(1, 1000000) for _ in range(1000)]' -s 'a=list(map(lambda x: F(*x), zip(n, d)))' 'sum(a)'
50 loops, best of 11: 9.65 msec per loop
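
The sum() benchmark above exercises addition rather than multiplication. There the speed-up comes from the corresponding addition algorithm, which uses a gcd of the two denominators to keep intermediates small. Again a sketch of the technique only, assuming the quicktions patch mirrors the fractions.Fraction change from this issue:

from math import gcd

def add(na, da, nb, db):
    # na/da + nb/db, inputs assumed normalised
    g = gcd(da, db)
    if g == 1:
        # coprime denominators: the naive formula is already in lowest terms
        return na * db + nb * da, da * db
    s = da // g
    t = na * (db // g) + nb * s
    # any remaining common factor of t and the denominator must divide g
    g2 = gcd(t, g)
    if g2 == 1:
        return t, s * db
    return t // g2, s * (db // g2)

assert add(1, 6, 1, 4) == (5, 12)   # 1/6 + 1/4 == 5/12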

I'll have to see if the slowdown can be mitigated somehow.

Interestingly enough, the telco benchmark seems to benefit slightly from this change:

Original:
$ python3.8 benchmark/telco_fractions.py -n 200   # avg
0.0952927148342

Patched:
$ python3.8 benchmark/telco_fractions.py -n 200   # avg
0.0914235627651