
Author rhettinger
Recipients mark.dickinson, rhettinger, serhiy.storchaka, skrah, steven.daprano, tim.peters
Date 2018-03-19.07:04:43
[Uncle Timmy]
> I doubt `fsum()` would add much value here:  all the addends have the 
> same sign, so cancellation is impossible

fsum() may be overkill for this problem.  I mentioned it because the math module already has the requisite code and because it improved accuracy with high-dimensional data in machine-learning examples I've encountered:

    >>> from math import fsum, sqrt
    >>> from decimal import Decimal, getcontext

    >>> n = 1000
    >>> sum([0.1] * n)
    >>> fsum([0.1] * n)

    >>> sqrt(sum([0.1] * n) / n)
    >>> sqrt(fsum([0.1] * n) / n)

    # fsum() version exactly matches the decimal crosscheck
    >>> getcontext().prec = 40
    >>> (sum([Decimal(0.1)] * n) / n).sqrt()

If we care about those little differences (about 80 ulp in this example), a single-rounding dot product seems like a better way to go.
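For illustration only (this is not the C implementation under discussion), "single rounding" can be sketched in pure Python by doing the arithmetic exactly with fractions.Fraction and converting to float once at the end; the names dot_single_rounding and dot_naive below are hypothetical:

```python
from fractions import Fraction

def dot_single_rounding(xs, ys):
    """Dot product with exactly one rounding, usable as a reference.

    Every float converts to a Fraction without error, so the products
    and their sum are computed in exact rational arithmetic; the only
    rounding happens in the final conversion back to float.
    """
    exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(exact)

def dot_naive(xs, ys):
    """Naive left-to-right dot product: rounds after every multiply and add."""
    total = 0.0
    for x, y in zip(xs, ys):
        total += x * y
    return total

xs = [0.1] * 1000
print(dot_naive(xs, xs))            # accumulates per-operation rounding error
print(dot_single_rounding(xs, xs))  # rounded once, from the exact value
```

This is far too slow for real use, but it gives a correctly rounded answer to compare fast implementations against.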
Linked to issue33089.