Author mark.dickinson
Recipients cool-RR, mark.dickinson, rhettinger, steven.daprano
Date 2016-06-09.08:23:34
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <>
Choice of algorithm is a bit tricky here. There are a couple of obvious algorithms that work mathematically but result in significant accuracy loss in an IEEE 754 floating-point implementation: one is `exp(mean(map(log, my_numbers)))`, where the log calls can introduce significant loss of information, and the other is `prod(x**(1./len(my_numbers)) for x in my_numbers)`, where the `**(1./n)` operation similarly discards information. A better algorithm numerically is `prod(my_numbers)**(1./len(my_numbers))`, but that's likely to overflow quickly for large datasets (and/or datasets containing large values).
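To make the trade-off concrete, here is a small demonstration (using `math.prod`, available since Python 3.8; the dataset is my own illustration, not from the issue): the raw product overflows to infinity even for a modest dataset, while the log-based route stays in range at the cost of a rounding error in every `log` call.

```python
import math

data = [1e100] * 10  # geometric mean is exactly 1e100

# prod(data) ** (1./n): the raw product overflows to inf,
# so the final root is useless.
naive = math.prod(data)
print(naive)  # inf

# exp(mean(map(log, ...))) stays in range here, but each log call
# rounds, and those rounding errors are amplified by the final exp.
log_route = math.exp(math.fsum(map(math.log, data)) / len(data))
print(log_route)  # close to 1e100, but not reliably exact in general
```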

I'd suggest something along the lines of `prod(my_numbers)**(1./len(my_numbers))`, but keeping track of the exponent of the product separately and renormalizing where necessary to avoid overflow.
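A minimal sketch of that idea (the function name and structure are mine, not a stdlib API): after each multiplication, `math.frexp` strips the exponent off the running product, so the mantissa stays in [0.5, 1) and the accumulated product can never overflow or underflow; the exponent is carried separately as an integer and folded back in at the end.

```python
import math

def geometric_mean(xs):
    # Keep the product as mantissa * 2**exponent, renormalizing with
    # frexp after every step so the float part stays in [0.5, 1).
    mantissa = 1.0
    exponent = 0
    for x in xs:
        mantissa *= x
        mantissa, e = math.frexp(mantissa)
        exponent += e
    n = len(xs)
    # Recombine: (mantissa * 2**exponent) ** (1/n)
    return mantissa ** (1.0 / n) * 2.0 ** (exponent / n)

print(geometric_mean([2.0, 8.0]))    # ~4.0
print(geometric_mean([1e308] * 10))  # ~1e308; the naive product would overflow
```

Note the final `2.0 ** (exponent / n)` handles the fractional part of the exponent, so the result is well-defined even when `exponent` is not a multiple of `n`.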

There are also algorithms for computing a product with improved accuracy, along the same lines as the algorithm used in fsum. See, e.g., the paper "Accurate Floating-Point Product and Exponentiation" by Stef Graillat. [1] (I didn't know about this paper: I just saw a reference to it in a StackOverflow comment [2], which reminded me of this issue.)
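For reference, the core building block in that line of work is an error-free transformation of a single multiplication. A hedged sketch (using Dekker-style splitting rather than the paper's exact presentation; names are mine) of a compensated product in the spirit of Graillat's CompProd:

```python
_SPLITTER = 134217729.0  # 2**27 + 1, Dekker's splitting constant for doubles

def two_product(a, b):
    # Error-free transformation: returns (p, e) with p = fl(a*b) and
    # p + e == a*b exactly (barring overflow/underflow).
    p = a * b
    ca = _SPLITTER * a
    a_hi = ca - (ca - a)
    a_lo = a - a_hi
    cb = _SPLITTER * b
    b_hi = cb - (cb - b)
    b_lo = b - b_hi
    e = ((a_hi * b_hi - p) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo
    return p, e

def comp_prod(xs):
    # Carry the rounding error of each multiplication along in a second
    # float and fold it back into the result at the end.
    p = 1.0
    err = 0.0
    for x in xs:
        p, e = two_product(p, x)
        err = err * x + e
    return p + err
```

This only addresses accuracy of the product itself; it would still need the exponent-tracking above to avoid overflow on large datasets.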

Date User Action Args
2016-06-09 08:23:34 mark.dickinson set recipients: + mark.dickinson, rhettinger, steven.daprano, cool-RR
2016-06-09 08:23:34 mark.dickinson set messageid: <>
2016-06-09 08:23:34 mark.dickinson link issue27181 messages
2016-06-09 08:23:34 mark.dickinson create