Issue 20499
Created on 2014-02-03 12:46 by oscarbenjamin, last changed 2015-10-20 02:09 by steven.daprano.
Files

File name | Uploaded | Description
---|---|---
statistics_with_exact_ratios.py | wolma, 2014-02-07 11:32 | full re-implementation of the statistics module
statistics_with_exact_ratios.patch | wolma, 2014-02-07 15:08 |
statistics_with_exact_ratios_latest.py | wolma, 2014-02-07 15:10 | v2 of the re-implementation
decimalsum.py | oscarbenjamin, 2014-02-07 15:43 |
Messages (10)
msg210121 - Author: Oscar Benjamin (oscarbenjamin) | Date: 2014-02-03 12:46

The mean/variance functions in the statistics module don't quite round correctly. The reason is that although exact rational arithmetic is used internally in the _sum function, it is not used throughout the module. In particular, _sum should be changed to return an exact result, and exact arithmetic should be used right up to the point of returning to the user, at which point a single rounding/coercion gives the user their answer in the appropriate type, correctly rounded once.

Using exact arithmetic everywhere also makes it possible to replace all of the variance* functions with single-pass algorithms based on the computational formula for variance, which should be more efficient as well. For example, the following implements pvariance so that it returns a perfectly rounded result for float input and output in a single pass:

```python
from fractions import Fraction

def pvariance(data):
    sx = 0
    sx2 = 0
    for n, x in enumerate(map(Fraction, data), 1):
        sx += x
        sx2 += x ** 2
    Ex = sx / n
    Ex2 = sx2 / n
    var = Ex2 - Ex ** 2
    return float(var)
```

Comparing the above with the statistics module:

```
>>> pvariance([0, 0, 1])
0.2222222222222222
>>> statistics.pvariance([0, 0, 1])
0.22222222222222224
```

The true answer is:

```
>>> from fractions import Fraction as F
>>> float(statistics.pvariance([F(0), F(0), F(1)]))
0.2222222222222222
```

The logic in the _sum function for computing exact integer ratios and coercing back to the output type could be moved into utility functions so that it does not need to be duplicated.

Some examples of rounding issues:

```
>>> from statistics import variance, mean
>>> from decimal import Decimal as D, getcontext
>>> from fractions import Fraction as F
```

Variance with ints or floats returns a float, but the float is not quite the nearest possible float:

```
>>> variance([0, 0, 2])
1.3333333333333335
>>> float(variance([F(0), F(0), F(2)]))  # true result rounded once
1.3333333333333333
```

Another example with Decimal:

```
>>> getcontext().prec = 5
>>> getcontext()
Context(prec=5, rounding=ROUND_HALF_EVEN, Emin=-999999999, Emax=999999999, capitals=1, clamp=0, flags=[Rounded, Inexact], traps=[DivisionByZero, Overflow, InvalidOperation])
>>> variance([D(0), D(0), D(2)] * 2)  # rounded down instead of up
Decimal('1.0666')
>>> r = variance([F(0), F(0), F(2)] * 2)
>>> D(r.numerator) / r.denominator  # correctly rounded
Decimal('1.0667')
```

The mean function may also not be correctly rounded:

```
>>> getcontext().prec = 2
>>> r = mean((F('1.2'), F('1.3'), F('1.55')))
>>> r
Fraction(27, 20)
>>> D(r.numerator) / r.denominator  # correctly rounded
Decimal('1.4')
>>> mean([D('1.2'), D('1.3'), D('1.55')])
Decimal('1.3')
```
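The round-once coercion Oscar describes can be made concrete with a small helper. This is a minimal sketch, not code from the module; the name _coerce_from_fraction and its signature are hypothetical:

```python
from decimal import Decimal
from fractions import Fraction

def _coerce_from_fraction(value, T):
    """Coerce an exact Fraction result to type T, rounding only once."""
    if T is float:
        return float(value)  # int/int true division is correctly rounded
    if T is Decimal:
        # a single Decimal division, rounded once under the caller's context
        return Decimal(value.numerator) / Decimal(value.denominator)
    return T(value)  # Fraction itself and Fraction-accepting types
```

The pvariance example above could then end with `return _coerce_from_fraction(var, float)`, with the target type chosen from the input data.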
msg210452 - Author: Wolfgang Maier (wolma) | Date: 2014-02-07 11:32

I have written a patch for this issue (I'm uploading the complete new code for everyone to try; importing it into Python 3.3 works fine. A diff with additional tests against Oscar's examples will follow soon).

Just as Oscar suggested, this new version performs all calculations using exact rational arithmetic and rounds/coerces only before returning the final result to the user. Its precision is thus limited only by that of the input data sequence. It passes Oscar's examples 1-3, as you can easily test yourself. It also gives the correct answer in the fourth example, mean([D('1.2'), D('1.3'), D('1.55')]), although on my system the original statistics module gets this one right already.

The implementation I chose is a bit different from Oscar's suggestion. Essentially, it introduces a dedicated module-private class, _ExactRatio, to represent numbers as exact ratios; instances of it get passed between the different functions in the module. This class borrows many of its algorithms from fractions.Fraction, but has some specialized methods for central tasks in the statistics module, making it much more efficient in this context than fractions.Fraction. The class is currently really minimal, but can easily be extended if necessary.

In my implementation this new class is used throughout the module whenever calculations with, or conversions to, exact ratios have to be performed, which allowed me to preserve almost all of the original code and to factor the changes out into the class.

As for performance, the gain imagined by Oscar is not always realized, even though the variance functions now use a single pass over the data. Specifically, in the case of floats the overhead of having to convert everything to exact ratios first eats up all the savings. In the case of fractional input, there is a dramatic performance boost, though. I compiled a small table comparing (roughly) average performance of the two versions with various input data types. Take this with a grain of salt, because the differences can vary quite a bit depending on the exact data:

data type | performance gain (+) / loss (-) over original module
---|---
float | -10 %
short Decimal | +10 %
long Decimal | -25 %
Fraction | +80 % (!!)
MyFloat | +25 %

With Decimal input, the cost of conversion to exact ratios depends on the number of digits in the Decimals, so with short Decimals the savings from the single-pass algorithm are larger than the conversion costs, but this reverses for very long Decimals. MyFloat is a minimal class inheriting from float and overriding just its arithmetic methods to return MyFloat instances again.

The performance gain with Fraction input comes from two changes: the single-pass algorithm and an optimization in _sum (with Fraction, more than with any other type, the dictionary built by _sum can grow quite large, and the optimization is in the conversion of the dictionary elements to exact ratios). This is why the extent of this gain can sometimes be significantly higher than the 80 % listed in the table. Try this, for example:

```python
from statistics import variance as v
from statistics_with_exact_ratios import variance as v2
from fractions import Fraction

data = [Fraction(1, x) for x in range(1, 2000)]

print('calculating variance using original statistics module ...')
print(float(v(data)))
print('now using exact ratio calculations ...')
print(float(v2(data)))
```

I invite everybody to test my implementation, which is very unlikely to be free of bugs at this stage.
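For illustration, an exact-ratio class along the lines Wolfgang describes might look like the sketch below. This is not the _ExactRatio class from the attached patch; it only shows one plausible source of the speed advantage, namely skipping the gcd reduction that fractions.Fraction performs on every operation:

```python
class _ExactRatio:
    """Sketch of an exact numerator/denominator pair (hypothetical)."""

    __slots__ = ('num', 'den')

    def __init__(self, num, den=1):
        self.num = num
        self.den = den

    def __add__(self, other):
        # a/b + c/d == (a*d + c*b) / (b*d), left unreduced
        return _ExactRatio(self.num * other.den + other.num * self.den,
                           self.den * other.den)

    def __truediv__(self, other):
        # (a/b) / (c/d) == (a*d) / (b*c), left unreduced
        return _ExactRatio(self.num * other.den, self.den * other.num)

    def __float__(self):
        return self.num / self.den  # int/int division rounds exactly once
```

Skipping reduction keeps each operation cheap but lets the integers grow, so a real implementation would reduce occasionally or normalize at the end.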
msg210456 - Author: Stefan Krah (skrah) | Date: 2014-02-07 12:04

We can add a fast Decimal.as_integer_ratio() in C. That said, why is the sum of Decimals not done in decimal arithmetic with a very high context precision? It would be exact, and with usual exponents in the range [-384, 383] it should be very fast:

```
>>> from decimal import Decimal as D, getcontext
>>> c = getcontext()
>>> c.prec = 50
>>> sum([D("1e50"), D(1), D("-1e50")] * 1000)
Decimal('0E+1')
>>>
>>> c.prec = 51
>>> sum([D("1e50"), D(1), D("-1e50")] * 1000)
Decimal('1000')
```
msg210459 - Author: Wolfgang Maier (wolma) | Date: 2014-02-07 12:44

> We can add a fast Decimal.as_integer_ratio() in C.

That would be a terrific thing to have! Currently, Decimals perform really poorly with the statistics functions, and this is entirely due to this bottleneck.

With regard to calculating the sum of Decimals in decimal arithmetic, the problem is how you'd detect that the input is all Decimals (or contains enough Decimals to justify a switch in the algorithm). The current statistics module accepts input data consisting of mixed types, e.g., you could have Decimal mixed with ints or floats. Much of the complexity of the module stems from this fact. You can also read about some of the complications under this issue: http://bugs.python.org/issue20481
msg210479 - Author: Wolfgang Maier (wolma) | Date: 2014-02-07 15:08

Ok, I finally managed to get the test suite right for this, so here is the patch diff for everything. I made a few final changes to the module itself (mostly to please the test suite), so I've uploaded the complete code again as well.
msg210481 - Author: Oscar Benjamin (oscarbenjamin) | Date: 2014-02-07 15:43

A fast Decimal.as_integer_ratio() would be useful in any case.

If you're going to use decimals, though, then you can trap inexact and keep increasing the precision until it becomes exact. The problem is with rationals that cannot be expressed in a finite number of decimal digits; these need to be handled separately. I've attached decimalsum.py, which shows how to compute an exact sum of any mix of int, float and Decimal, but not Fraction.

When I looked at this before, having special cases for everything from int to float to Decimal to Fraction made the code really complicated. The common cases are int and float, and for these cases sum() and fsum() are much faster. However, you then also need code that checks everything in the iterable. One option is to do something like:

```python
import math
import itertools
from decimal import Decimal

from decimalsum import decimalsum

def _sum(numbers):
    subtotals = []
    for T, nums in itertools.groupby(numbers, type):
        if T is int:
            subtotals.append(sum(nums))
        elif T is float:
            subtotals.append(math.fsum(nums))
        elif T is Decimal:
            subtotals.append(decimalsum(nums))
        else:
            raise NotImplementedError
    return decimalsum(subtotals)
```

The main problem here is that fsum rounds every time it returns, meaning that this sum is order-dependent if there is a mix of floats and other types (see issue19086, where I asked for a way to change that).

Also, having separate code blocks to manage all the different types internally in, e.g., the less trivial variance calculations is tedious.
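Oscar's trap-inexact-and-retry idea for exact Decimal arithmetic can be sketched as follows; the helper name is hypothetical and this is not code from decimalsum.py:

```python
from decimal import Inexact, localcontext

def exact_decimal_add(a, b):
    """Add two Decimals exactly, retrying with more precision as needed."""
    with localcontext() as ctx:
        ctx.traps[Inexact] = True  # raise instead of rounding silently
        while True:
            try:
                return a + b
            except Inexact:
                ctx.prec *= 2  # not enough digits; double and retry
```

The loop terminates for any two finite Decimals, because their exact sum has a finite number of digits.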
msg210491 - Author: Wolfgang Maier (wolma) | Date: 2014-02-07 16:46

In principle, your approach using itertools.groupby, the existing _sum, your decimalsum, and my _ExactRatio class could be combined. You could call decimalsum for Decimal and its subclasses, and the current _sum for everything else. _sum would return an _ExactRatio instance now anyway; just modify decimalsum accordingly and you are done.

The problem I see is that it would cause a slowdown in many cases where no Decimals or just a few are involved (think of mixes of ints and floats as a realistic scenario, and consider also that you would have to do an isinstance check to catch subclasses of Decimal).
msg210495 - Author: Stefan Krah (skrah) | Date: 2014-02-07 17:00

Oscar Benjamin <report@bugs.python.org> wrote:
> If you're going to use decimals though then you can trap inexact and
> keep increasing the precision until it becomes exact.

For sums that is not necessary. Create a context with MAX_EMAX, MIN_EMIN and MAX_PREC, and mpd_add() -- the underlying libmpdec function -- will only use as many digits as necessary. Of course, calculating 1/3 in MAX_PREC would be catastrophic.

> The problem is with rationals that cannot be expressed in a finite
> number of decimal digits - these need to be handled separately. I've
> attached decimalsum.py that shows how to compute an exact sum of any
> mix of int, float and Decimal, but not Fraction.

Yes, I was thinking of a "don't do that" approach. Do people have mixed Fractions and Decimals in their data?
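A sketch of Stefan's suggestion: with the precision and exponent range maxed out, each addition of finite Decimals is exact, so no trapping or retrying is needed. The function name is illustrative:

```python
from decimal import Context, Decimal, MAX_EMAX, MAX_PREC, MIN_EMIN

def exact_decimal_sum(decimals):
    """Sum finite Decimals exactly under a maximal context."""
    ctx = Context(prec=MAX_PREC, Emax=MAX_EMAX, Emin=MIN_EMIN)
    total = Decimal(0)
    for d in decimals:
        total = ctx.add(total, d)  # only as many digits as needed are used
    return total

print(exact_decimal_sum([Decimal('1e50'), Decimal(1), Decimal('-1e50')] * 1000))
# Decimal('1000')
```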
msg210700 - Author: Wolfgang Maier (wolma) | Date: 2014-02-08 22:59

I worked out a slightly speedier version of decimal_to_ratio today (Stefan: that's when I duplicated your bug report):

```python
from decimal import Context

def _decimal_to_ratio(x):
    _, digits, exp = x.as_tuple()
    if exp in _ExactRatio.decimal_infinite:  # INF, NAN, sNAN
        assert not x.is_finite()
        raise ValueError
    if exp < 0:
        exp = -exp
        return int(x.scaleb(exp, Context(prec=len(digits)))), 10**exp
    return int(x), 1
```

This makes the variance functions in my re-implementation about 20-30 % faster for Decimal. It's not a big breakthrough, but still.
msg210702 - Author: Wolfgang Maier (wolma) | Date: 2014-02-08 23:09

Oops,

```python
if exp in _ExactRatio.decimal_infinite:  # INF, NAN, sNAN
```

should read

```python
if exp in ('F', 'n', 'N'):  # INF, NAN, sNAN
```

of course. What I pasted comes from a micro-optimization I tried, but which doesn't give any benefit.
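Putting the last two messages together, a corrected, self-contained version of the helper would read as below. This is a sketch: the _ExactRatio integration from the patch is omitted, and the assert on non-finite input is folded into the ValueError:

```python
from decimal import Context, Decimal

def _decimal_to_ratio(x):
    """Return an exact (numerator, denominator) pair for a finite Decimal."""
    _, digits, exp = x.as_tuple()
    if exp in ('F', 'n', 'N'):  # INF, NAN, sNAN
        raise ValueError('non-finite Decimal has no integer ratio')
    if exp < 0:
        exp = -exp
        # scale the coefficient up to an integer; prec=len(digits) always
        # suffices, so the scaleb conversion stays exact
        return int(x.scaleb(exp, Context(prec=len(digits)))), 10**exp
    return int(x), 1

print(_decimal_to_ratio(Decimal('1.55')))  # (155, 100)
print(_decimal_to_ratio(Decimal('2E3')))   # (2000, 1)
```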
History

Date | User | Action | Args
---|---|---|---
2015-10-20 02:09:39 | steven.daprano | set | assignee: steven.daprano
2014-10-14 15:23:09 | skrah | set | nosy: - skrah
2014-02-08 23:09:21 | wolma | set | messages: + msg210702
2014-02-08 22:59:51 | wolma | set | messages: + msg210700
2014-02-07 17:00:37 | skrah | set | messages: + msg210495
2014-02-07 16:46:52 | wolma | set | messages: + msg210491
2014-02-07 15:43:39 | oscarbenjamin | set | files: + decimalsum.py; messages: + msg210481
2014-02-07 15:10:32 | wolma | set | files: + statistics_with_exact_ratios_latest.py
2014-02-07 15:08:23 | wolma | set | files: + statistics_with_exact_ratios.patch; keywords: + patch; messages: + msg210479
2014-02-07 12:44:58 | wolma | set | messages: + msg210459
2014-02-07 12:04:48 | skrah | set | nosy: + skrah; messages: + msg210456
2014-02-07 11:32:26 | wolma | set | files: + statistics_with_exact_ratios.py; type: enhancement; messages: + msg210452
2014-02-06 21:36:20 | oscarbenjamin | set | nosy: + wolma
2014-02-03 12:46:19 | oscarbenjamin | create |