Author tim.peters
Recipients Sergey.Kirpichev, asmeurer, mark.dickinson, rhettinger, tim.peters
Date 2021-03-09.05:11:08
I agree with everyone ;-) That is, my _experience_ matches Mark's: as a more-or-less "numeric expert", I use Fraction in cases where it's already fast enough. Python isn't a CAS, and, e.g., in pure Python I'm not doing things like computing or composing power series with rational coefficients (but routinely do such stuff in a CAS). It's usually just combinatorial probabilities in relatively simple settings, and small-scale computations where float precision would be fine except I don't want to bother doing error analysis first to ensure things like catastrophic cancellation can't occur.

On the other hand, the proposed changes are bog standard optimizations for implementations of rationals, and can pay off very handsomely at relatively modest argument sizes.
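For multiplication, the bog-standard trick is cross-cancellation (as in Knuth's TAOCP, section 4.5.1): reduce across the diagonal gcds before multiplying, so the big products are built from smaller factors and no final reduction pass is needed. A sketch, assuming both inputs are already in lowest terms (`mul` is an illustrative helper here, not the actual proposed patch):

```python
from math import gcd

def mul(na, da, nb, db):
    """Return (numerator, denominator) of na/da * nb/db,
    where both inputs are in lowest terms.

    Cross-cancel gcd(na, db) and gcd(nb, da) first; the result
    is then automatically in lowest terms, and the expensive
    multiplications involve smaller operands.
    """
    g1 = gcd(na, db)
    g2 = gcd(nb, da)
    return (na // g1) * (nb // g2), (da // g2) * (db // g1)
```

The payoff grows with argument size: for small components the extra gcd calls cost more than they save, which is the slowdown being debated.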

So I'm +0. I don't really mind the slowdown for small arguments because, as above, I just don't use Fraction for extensive computation. But the other side of that is that I won't profit much from optimizations either, and while the optimizations for * and / are close to self-evident, those for + and - are much subtler. Code obscurity imposes ongoing costs of its own.
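For contrast, here is why + and - are subtler. The standard scheme (again per Knuth 4.5.1) exploits that, for operands in lowest terms, the final gcd needed divides g = gcd(da, db), so only small gcds are ever computed. A sketch, assuming lowest-terms inputs and not claiming to be the exact proposed code:

```python
from math import gcd

def add(na, da, nb, db):
    """Return (numerator, denominator) of na/da + nb/db,
    where both inputs are in lowest terms.

    Instead of one gcd of full-sized products, compute two gcds
    of small values: g = gcd(da, db), then g2 = gcd(t, g), which
    suffices because any common factor of t and the combined
    denominator must divide g.
    """
    g = gcd(da, db)
    if g == 1:
        # Coprime denominators: result is already in lowest terms.
        return na * db + nb * da, da * db
    s = da // g
    t = na * (db // g) + nb * s
    g2 = gcd(t, g)
    if g2 == 1:
        return t, s * db
    return t // g2, s * (db // g2)
```

That reasoning about which gcds can be skipped is exactly the kind of non-obvious invariant that makes the code harder to grasp at a glance.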

WRT which, I added Python's Karatsuba implementation and regret doing so. I don't want to take it out now (it's already in ;-) ), but it added quite a pile of delicate code to what _was_ a much easier module to grasp. People who need fast multiplication are still far better off using gmpy2 anyway (or fiddling Decimal precision - Stefan Krah implemented "advanced" multiplication schemes for that module).
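For readers who haven't seen it, the core Karatsuba idea is small even though CPython's real implementation (in C, over 30-bit digit arrays) is not. A toy integer version, purely illustrative and unrelated to the actual longobject.c code:

```python
def karatsuba(x, y, cutoff=64):
    """Multiply nonnegative ints with three half-size products
    instead of four: if x = xh*2^n + xl and y = yh*2^n + yl, then
    x*y = a*2^(2n) + (c - a - b)*2^n + b, where a = xh*yh,
    b = xl*yl, and c = (xh + xl)*(yh + yl).
    """
    if x < (1 << cutoff) or y < (1 << cutoff):
        return x * y  # small operands: plain multiplication wins
    n = max(x.bit_length(), y.bit_length()) // 2
    mask = (1 << n) - 1
    xh, xl = x >> n, x & mask
    yh, yl = y >> n, y & mask
    a = karatsuba(xh, yh, cutoff)
    b = karatsuba(xl, yl, cutoff)
    c = karatsuba(xh + xl, yh + yl, cutoff)
    return (a << (2 * n)) + ((c - a - b) << n) + b
```

The delicacy in a production version comes from everything around this core: digit representations, unbalanced operands, memory management, and cutoff tuning.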
Linked to issue 43420