
Author tim.peters
Recipients Jeffrey.Kintscher, mark.dickinson, pablogsal, rhettinger, tim.peters, veky
Date 2020-08-10.02:11:01
Or, like I did, they succumbed to an untested "seemingly plausible" illusion ;-)

I generated 1,000 random vectors (in [0.0, 10.0)) of length 100, and for each generated 10,000 permutations.  So that's 10 million 100-element products overall.  The convert-to-decimal method was 100% insensitive to permutations, generating the same product (default decimal prec result rounded to float) for each of the 10,000 permutations all 1,000 times.
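The decimal method described above can be sketched as follows. The message used the default decimal context; the sketch below instead raises the precision high enough that every intermediate product is exact, which guarantees order-independence outright rather than just observing it empirically (the helper name and seed are mine, not from the message):

```python
import random
from decimal import Decimal, localcontext

def decimal_prod(xs, prec=10000):
    """Multiply in Decimal, rounding back to float only at the end.

    Each float converts to Decimal exactly, and with a generous
    precision every intermediate product is exact too, so the final
    rounded float cannot depend on the order of the operands.
    """
    with localcontext() as ctx:
        ctx.prec = prec
        p = Decimal(1)
        for x in xs:
            p *= Decimal(x)
        return float(p)

random.seed(12345)  # arbitrary seed; any vector shows the same behavior
v = [random.uniform(0.0, 10.0) for _ in range(100)]
results = set()
for _ in range(20):
    random.shuffle(v)
    results.add(decimal_prod(v))
print(len(results))  # prints 1: exact arithmetic can't depend on order
```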

The distributions of errors for the left-to-right and pairing products were truly indistinguishable.  They ranged from -20 to +20 ulp (taking the decimal result as being correct).  When I plotted them on the same graph, I thought I had made an error, because I couldn't see _any_ difference on a 32-inch monitor!  I only saw a single curve.  At each ulp the counts almost always rounded to the same pixel on the monitor, so the second curve, plotted on top, almost utterly overwrote the first.
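A minimal sketch of the two methods being compared, measuring each result's error in ulps of a high-precision decimal reference (the helper names and seed are mine; `math.ulp` needs Python 3.9+):

```python
import math
import random
from decimal import Decimal, localcontext

def pairing_prod(xs):
    """Tree ('pairing') product: multiply the two halves, then combine."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairing_prod(xs[:mid]) * pairing_prod(xs[mid:])

def ulp_error(x, ref):
    """Signed error of x, in units of the last place of ref."""
    return (x - ref) / math.ulp(ref)

random.seed(8)  # arbitrary seed
v = [random.uniform(0.0, 10.0) for _ in range(100)]

with localcontext() as ctx:
    ctx.prec = 50  # far more precision than a float carries
    d = Decimal(1)
    for x in v:
        d *= Decimal(x)
    ref = float(d)  # the "correct" answer, rounded once at the end

ltr = math.prod(v)       # plain left-to-right float product
pair = pairing_prod(v)   # pairing product
print(ulp_error(ltr, ref), ulp_error(pair, ref))  # both errors are small, a handful of ulps
```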

As a sanity check, on the same vectors using the same driver code I compared sum() to a pairing sum. Pairing sum was dramatically better, with a much tighter error distribution and a much higher peak at the center ("no error"). That's what I erroneously expected to see for products too - although, in hindsight, I can't imagine why ;-)
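That sanity check can be sketched the same way, using fractions.Fraction as an exact reference (helper names and trial counts are mine; the message's experiment was much larger, but the aggregate contrast it describes shows up at this scale too):

```python
import random
from fractions import Fraction

def pairing_sum(xs):
    """Tree ('pairing') sum: worst-case error grows like O(log n), not O(n)."""
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    return pairing_sum(xs[:mid]) + pairing_sum(xs[mid:])

random.seed(20)  # arbitrary seed
total_ltr = total_pair = 0.0
for _ in range(300):
    v = [random.uniform(0.0, 10.0) for _ in range(100)]
    exact = float(sum(map(Fraction, v)))   # exact sum, correctly rounded to float
    total_ltr += abs(sum(v) - exact)       # builtin sum() adds left to right
    total_pair += abs(pairing_sum(v) - exact)

print(total_pair < total_ltr)  # pairing accumulates visibly less total error
```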
Linked issue: issue41458