
Author rhettinger
Recipients mark.dickinson, oscarbenjamin, rhettinger
Date 2013-09-30.05:52:05
Conceptually, I support the idea of having some way to avoid information loss by returning uncollapsed partial sums.  As you say, it would provide an essential primitive for parallel fsums and for cumulative subtotals.

If something along those lines were to be accepted, it would likely take one of two forms:
    fsum(iterable, detail=True) --> list_of_floats
    fsum_detailed(iterable) --> list_of_floats

Note, there is no need for a "carry" argument because the previous result can simply be prepended to the new data (for a running total) or combined in a final summation at the end (for parallel computation):

    detail = [0.0]
    for group in groups:
        detail = math.fsum_detailed(itertools.chain(detail, group))
        print('Running total', math.fsum(detail))
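For concreteness, here is a pure-Python sketch of how the uncollapsed partial sums could be produced, using the Shewchuk error-free transformation that fsum itself is built on (this is essentially the well-known msum recipe; the name partial_sums is hypothetical, standing in for the proposed fsum_detailed):

```python
import math
from itertools import chain

def partial_sums(iterable):
    """Return a list of non-overlapping floats whose exact mathematical
    sum equals the exact sum of the inputs (Shewchuk's algorithm)."""
    partials = []                  # partial sums, in increasing magnitude
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y             # rounded sum
            lo = y - (hi - x)      # exact rounding error of hi
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return partials

# No information is lost: the tiny addend survives in the detail list,
# even though a plain float addition would absorb it.
p = partial_sums([1.0, 1e-16])     # -> [1e-16, 1.0]
```

Because the detail list carries the full value, a previous result can be chained in front of new data exactly as in the running-total loop, with no separate "carry" argument: `partial_sums(chain(p, new_group))`.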

On the negative side, I think the use case is too exotic; few users would care about or learn to use this function.  Also, it usually isn't wise to expose the internals (we may want a completely different implementation someday).  Lastly, I think there is some virtue in keeping the API simple; otherwise, math.fsum() may ward off some of its potential users, resulting in even lower adoption than we have now.

With respect to the concerns about speed, I'm curious whether you've run your msum() edits under PyPy or Cython.  I would expect both to give excellent performance.
Linked to: issue19086