Issue19086

Created on **2013-09-25 10:46** by **oscarbenjamin**, last changed **2013-09-30 20:16** by **oscarbenjamin**. This issue is now **closed**.

Messages (8)

msg198381 - Author: Oscar Benjamin (oscarbenjamin) - Date: 2013-09-25 10:46

I would like to be able to use fsum incrementally, but it is not currently possible. With sum() you can do:

```python
subtotal = sum(nums)
subtotal = sum(othernums, subtotal)
```

This wouldn't work for fsum() because the returned float is not the same as the state maintained internally, which is a list of floats. I propose instead that fsum() could take an optional second argument which is a list of floats, and in this case return the updated list of floats. I have modified Raymond Hettinger's msum() recipe to do what I want:

```python
def fsum(iterable, carry=None):
    "Full precision summation using multiple floats for intermediate values"
    partials = list(carry) if carry else []
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    if carry is None:
        return sum(partials, 0.0)
    else:
        return partials
```

Here's an interactive session showing how you might use it:

```python
>>> from fsum import fsum
>>> fsum([1e20, 1, -1e20])
1.0
>>> fsum([1e20, 1, -1e20], [])
[1.0, 0.0]
>>> fsum([1e20, 1], [])
[1.0, 1e+20]
>>> carry = fsum([1e20, 1], [])
>>> fsum([-1e20], carry)
[1.0, 0.0]
>>> nums = [7, 1e100, -7, -1e100, -9e-20, 8e-20] * 10
>>> subtotal = []
>>> for n in nums:
...     subtotal = fsum([n], subtotal)
...
>>> subtotal
[-1.0000000000000007e-19]
>>> fsum(subtotal)
-1.0000000000000007e-19
```

My motivation for this is that I want to be able to write an efficient sum function that is accurate for a mix of ints and floats, while being as fast as possible in the case that I have a list of only floats (or only ints). What I have so far looks a bit like:

```python
from itertools import groupby

def sum(numbers):
    exact_total = 0
    float_total = 0.0
    for T, nums in groupby(numbers, type):
        if T is int:
            exact_total = sum(nums, exact_total)
        elif T is float:
            # This doesn't really work:
            float_total += fsum(float_total)
        # ...
```
However fsum is only exact if it adds all the numbers in a single pass. The above would give order-dependent results for a mixed list of ints and floats, e.g. `[1e20, -1e20, 1, -1, 1.0, -1.0]` vs `[1e20, 1.0, -1e20, 1, -1, -1.0]`. Although fsum is internally very accurate, it always discards information on output. Even if I build a list of all the floats and fsum those at the end, the result can still be inaccurate if the exact_total cannot be exactly coerced to float before being passed to fsum.

If I could get fsum to return the exact float expansion then I could use that in a number of different ways. Given the partials list I can combine all of the different subtotals with Fraction arithmetic and coerce to float only at the very end. If I could also get fsum to accept a float expansion on input then I could use it incrementally, and there would be no need to build a list of all the floats.

I am prepared to write a patch for this if the idea is deemed acceptable.
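To illustrate the exactness claim above: a non-overlapping two-float expansion preserves information that math.fsum rounds away, which Fraction arithmetic can verify. The expansion here is worked by hand; no current stdlib call returns it.

```python
import math
from fractions import Fraction

nums = [1e20, 1.0]          # 1.0 is far below one ulp of 1e20
partials = [1.0, 1e20]      # non-overlapping expansion with the same exact sum

# math.fsum rounds the exact sum to a single float, losing the 1.0 ...
assert math.fsum(nums) == 1e20
# ... while the expansion preserves it exactly:
assert sum(map(Fraction, partials)) == sum(map(Fraction, nums))
```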

msg198497 - Author: Raymond Hettinger (rhettinger) - Date: 2013-09-27 18:40

FWIW, the reason for __builtin__.sum() having a start argument was so that it could change the base datatype: sum(iterable, start=Fraction(0)).

[Oscar Benjamin]
> Although fsum is internally very accurate
> it always discards information on output.

A "start" argument won't help you, because you will discard information on input. A sequence like [1E100, 0.1, -1E100, 0.1] wouldn't work when split into subtotal=fsum([1E100, 0.1]) and fsum([-1E100, 0.1], start=subtotal).

> My motivation for this is that I want to be able to write
> an efficient sum function that is accurate for a mix of ints
> and floats

FWIW, fsum() already works well with integers as long as they don't exceed 53 bits of precision. For such exotic use cases, the decimal module would be a better alternative. Now that the decimal module has a C implementation, there is no reason not to use it for high precision applications.

It is possible to extend the current fsum() implementation to separately track integers and combine them with the float result at the end. That said, the probability that any user (other than yourself) will need this can only be expressed as a denormal number ;-)

I'm not rejecting the request; rather, I'm questioning the need for it.
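Raymond's split example can be reproduced with the existing math.fsum to show exactly where the information goes missing:

```python
import math

nums = [1e100, 0.1, -1e100, 0.1]
one_pass = math.fsum(nums)                   # single pass keeps full precision
subtotal = math.fsum(nums[:2])               # rounds to 1e100; the 0.1 is lost
two_pass = math.fsum([subtotal] + nums[2:])  # only one 0.1 survives

assert one_pass == 0.2
assert two_pass == 0.1
```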

msg198520 - Author: Oscar Benjamin (oscarbenjamin) - Date: 2013-09-28 14:46

Thanks for responding Raymond.

Raymond Hettinger wrote:
> A "start" argument won't help you, because you will discard
> information on input. A sequence like [1E100, 0.1, -1E100, 0.1]
> wouldn't work when split into subtotal=fsum([1E100, 0.1]) and
> fsum([-1E100, 0.1], start=subtotal).

I'm not sure if you've fully understood the proposal. The algorithm underlying fsum is (I think) called distillation. It compresses a list of floats into a shorter list of non-overlapping floats having the same exact sum. Where fsum deviates from this idea is that it doesn't return that list of floats; rather, it collapses it to a single rounded float. My proposal is that there be a way to tell fsum to return the list of floats whose sum is exact and not rounded. Specifically, the subtotal that is returned and fed back into fsum would be a list of floats, and no information would be discarded. So fsum(numbers, []) would return a list of floats, and that list can be passed back in again.

> > My motivation for this is that I want to be able to write
> > an efficient sum function that is accurate for a mix of ints
> > and floats
>
> FWIW, fsum() already works well with integers as long as they
> don't exceed 53 bits of precision.

I was simplifying the use case somewhat. I would also like this sum function to work with fractions and decimals, neither of which coerces exactly to float (and potentially I want some non-stdlib types also).

> For such exotic use cases, the decimal module would be a better
> alternative. Now that the decimal module has a C implementation,
> there is no reason not to use it for high precision applications.

It is possible to coerce to Decimal and exactly sum a list of ints, floats and decimals (by trapping inexact and increasing the context precision).
I have tried this under CPython 3.3 and it was 20-50x slower than fsum, depending on how it manages the arithmetic context and whether it can be used safely with a generator that also manipulates the context (the safe version that doesn't build a list is 50x slower). It is also not possible to use Decimal exactly with Fractions.

I believe that there are other use cases for making fsum usable incrementally: it would then also support accurate summation in incremental, parallel and distributed computation. Unfortunately fsum itself already isn't used as much as it should be, and I agree that probably all the use cases for this extension would be relatively obscure.
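The Decimal approach described above (trap Inexact, retry with more precision until every addition is exact) can be sketched as follows; `exact_decimal_sum` is a hypothetical helper name, not code from this issue:

```python
from decimal import Decimal, Inexact, localcontext

def exact_decimal_sum(numbers):
    # Decimal(int) and Decimal(float) are exact conversions, so the only
    # rounding risk is in the additions; trap Inexact and retry with
    # double the precision until the addition succeeds exactly.
    with localcontext() as ctx:
        ctx.traps[Inexact] = True
        total = Decimal(0)
        for x in numbers:
            d = Decimal(x)
            while True:
                try:
                    total += d   # if this raises, total is left unchanged
                    break
                except Inexact:
                    ctx.prec *= 2
        return total

assert exact_decimal_sum([1e100, 1, -1e100]) == 1
```

Note the cost: each Inexact trap restarts the addition, and the precision grows with the spread of the data, which is consistent with the 20-50x slowdown reported above.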

msg198677 - Author: Raymond Hettinger (rhettinger) - Date: 2013-09-30 05:52

Conceptually, I support the idea of having some way to avoid information loss by returning uncollapsed partial sums. As you say, it would provide an essential primitive for parallel fsums and for cumulative subtotals.

If something along those lines were to be accepted, it would likely take one of two forms:

```python
fsum(iterable, detail=True)   # --> list_of_floats
fsum_detailed(iterable)       # --> list_of_floats
```

Note, there is no need for a "carry" argument because the previous result can simply be prepended to the new data (for a running total) or combined in a final summation at the end (for parallel computation):

```python
detail = [0.0]
for group in groups:
    detail = math.fsum_detailed(itertools.chain(detail, group))
    print('Running total', math.fsum(detail))
```

On the negative side, I think the use case is too exotic and no one will care about or learn to use this function. Also, it usually isn't wise to expose the internals (we may want a completely different implementation someday). Lastly, I think there is some virtue in keeping the API simple; otherwise, math.fsum() may ward off some of its potential users, resulting in even lower adoption than we have now.

With respect to the concerns about speed, I'm curious whether you've run your msum() edits under pypy or cython. I would expect both to give excellent performance.
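A sketch of what `fsum_detailed` could look like (the name is the hypothetical API from this message; the inner loop is the msum recipe posted earlier in the issue), together with the parallel-combination use:

```python
import itertools
import math

def fsum_detailed(iterable):
    # same distillation loop as the msum recipe, but the
    # non-overlapping partials are returned uncollapsed
    partials = []
    for x in iterable:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return partials or [0.0]

# two halves summed independently, then combined exactly at the end
left = fsum_detailed([1e20, 1.0])     # the 1.0 survives as a partial
right = fsum_detailed([-1e20, 1.0])
assert math.fsum(itertools.chain(left, right)) == 2.0
```

Summing the same four numbers through the rounded subtotals would give 0.0 instead of 2.0, which is the whole point of keeping the detail.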

msg198687 - Author: Oscar Benjamin (oscarbenjamin) - Date: 2013-09-30 10:27

I should be clearer about my intentions. I'm hoping to create an efficient and accurate sum() function for the new stdlib statistics module:
http://bugs.python.org/issue18606
http://www.python.org/dev/peps/pep-0450/

The sum() function currently proposed can be seen in this patch:
http://bugs.python.org/file31680/statistics_combined.patch

It uses Fractions to compute an exact sum and separately tries to keep track of type coercion to convert the end result. It is accurate, and returns a result of the appropriate type for any mix of ints, Fractions, Decimals and floats, with the caveat that mixing Fractions and Decimals is disallowed. I believe the most common use cases will be for ints/floats, and for these cases it is 100x slower than sum/fsum.

The discussion here:
http://bugs.python.org/issue18606#msg195630
resulted in the sum function that you can see in the above patch. Following that I've had off-list discussions with the author of the module about improving the speed of the sum function (which is a bottleneck for half of the functions exposed by the module). He is very much interested in speed optimisations but doesn't want to compromise on accuracy.

My own interest is that I would like an efficient and accurate (for all types) sum function *somewhere* in the stdlib. The addition of the statistics module with its new sum() function is a good opportunity to achieve this. If this were purely for myself I wouldn't bother this much with speed optimisation, since I've probably already spent more of my own time thinking about this function than I ever would running it! (If I did want to specifically speed this up in my own work I would, as you say, use cython.)

If fsum() were modified in the way that I describe, then it would be usable as a primitive for the statistics.sum() function and also for parallel etc. computation.
I agree that the carry argument is unnecessary and that the main point is just the exactness of the return value: it seems a shame that fsum could so easily give me an exact result but doesn't.

As for exposing the internals of fsum, is it not the case that a *non-overlapping* set of floats having a given exact sum is a uniquely defined set? For msum I believe it is only necessary to strip out zeros in order to normalise the output, so that it is uniquely defined by the true value of the sum in question (the list is already in ascending order once the zeros are removed).

Uniqueness: suppose a many-digit float is given by something like (+-)abcdefghijkl * 2 ** x and we want to break it into non-overlapping 2-digit floats of the form ab * 2 ** x. Since the first digit of each 2-digit float must be non-zero (consider denormals as a separate case), the largest-magnitude float is uniquely determined. Subtract that from the many-digit total and then the next largest float is uniquely determined, and so on. There can be at most one denormal number in the result, and it is also uniquely determined once the larger numbers are extracted. So the only way that the partials list can be non-unique is by the inclusion of zeros (unless I've missed something :).
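The uniqueness argument can be spot-checked numerically: distilling the same values in two different orders gives identical partials once zeros are stripped (the loop is again the msum recipe from earlier in this issue):

```python
def normalized_partials(nums):
    # Shewchuk-style distillation, then strip zeros to normalise
    partials = []
    for x in nums:
        i = 0
        for y in partials:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                partials[i] = lo
                i += 1
            x = hi
        partials[i:] = [x]
    return [p for p in partials if p]

a = normalized_partials([1e20, 1, -1e20, 1.0, -1.0, 1e-30])
b = normalized_partials([1e-30, 1.0, 1e20, -1.0, -1e20, 1])
assert a == b == [1e-30, 1.0]
```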

msg198715 - Author: Tim Peters (tim.peters) - Date: 2013-09-30 18:33

The possible use cases are so varied & fuzzy it seems better to use an object for this, rather than funk-ify `fsum`. Say, class Summer. Then methods could be invented as needed, with whatever state is required belonging to Summer objects rather than passed around in funky calling-sequence/return conventions. Like:

```python
s = Summer()   # new object
s.add(3.1)     # add one number to running sum
s.sum()        # returns sum so far as float: 3.1
s.update(iterable_returning_numbers)  # add a bunch of numbers to sum
s.combine(another_Summer_object)      # add another Summer's state
s.sum_as_decimal(precision=None)      # sum so far as Decimal
s.sum_as_fraction()                   # exact sum so far as Fraction
```

Blah blah blah ;-)
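A minimal sketch of such a Summer class, keeping Shewchuk-style partials as internal state; the method names follow Tim's hypothetical API (sum_as_decimal is omitted, and sum() here naively adds the partials, which is not always correctly rounded):

```python
from fractions import Fraction

class Summer:
    def __init__(self):
        self._partials = []   # non-overlapping floats, ascending magnitude

    def add(self, x):
        # the msum distillation step for a single value
        ps = self._partials
        i = 0
        for y in ps:
            if abs(x) < abs(y):
                x, y = y, x
            hi = x + y
            lo = y - (hi - x)
            if lo:
                ps[i] = lo
                i += 1
            x = hi
        ps[i:] = [x]

    def update(self, iterable):
        for x in iterable:
            self.add(x)

    def combine(self, other):
        # exact: the other Summer's partials are just more numbers to distil
        self.update(other._partials)

    def sum(self):
        return sum(self._partials, 0.0)

    def sum_as_fraction(self):
        # exact, since Fraction(float) is an exact conversion
        return sum(map(Fraction, self._partials), Fraction(0))

s = Summer()
s.update([1e20, 1, -1e20])
assert s.sum() == 1.0 and s.sum_as_fraction() == 1
```

combine() makes the parallel use case direct: each worker keeps its own Summer, and merging preserves exactness because no partial is ever rounded away.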

msg198725 - Author: Raymond Hettinger (rhettinger) - Date: 2013-09-30 19:55

Let's close this one. As Uncle Timmy says, it would be a mistake to "funkify" the signature of math.fsum. If something like a Summer() class is born, it should start its life outside the standard library, gain a following, and let both the API and implementation mature.

I'm sympathetic about the loss of information in the final step of math.fsum(), but I don't think the standard library is the place for this to be born.

msg198729 - Author: Oscar Benjamin (oscarbenjamin) - Date: 2013-09-30 20:16

Fair enough. Thanks again for taking the time to look at this.

History

Date | User | Action | Args
---|---|---|---
2013-09-30 20:16:22 | oscarbenjamin | set | messages: + msg198729
2013-09-30 19:55:36 | rhettinger | set | status: open -> closed; resolution: rejected; messages: + msg198725
2013-09-30 18:33:58 | tim.peters | set | nosy: + tim.peters; messages: + msg198715
2013-09-30 10:27:13 | oscarbenjamin | set | messages: + msg198687
2013-09-30 05:52:05 | rhettinger | set | messages: + msg198677
2013-09-28 14:46:27 | oscarbenjamin | set | messages: + msg198520
2013-09-27 18:40:39 | rhettinger | set | messages: + msg198497
2013-09-27 18:23:58 | rhettinger | set | priority: normal -> low; assignee: rhettinger
2013-09-25 10:46:10 | oscarbenjamin | create |