This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

classification
Title: Add multi-dimensional Euclidean distance function to the math module
Type: enhancement Stage: resolved
Components: Library (Lib) Versions: Python 3.8
process
Status: closed Resolution: fixed
Dependencies: Superseder:
Assigned To: Nosy List: mark.dickinson, miss-islington, rhettinger, scoder, serhiy.storchaka, skrah, steven.daprano, tim.peters
Priority: normal Keywords: patch

Created on 2018-03-16 18:50 by rhettinger, last changed 2022-04-11 14:58 by admin. This issue is now closed.

Pull Requests
URL Status Linked Edit
PR 8474 merged rhettinger, 2018-07-26 05:57
PR 8561 merged rhettinger, 2018-07-30 06:51
PR 11896 merged rhettinger, 2019-02-16 18:55
Messages (26)
msg313967 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-03-16 18:50
A need for a distance-between-two-points function arises frequently enough to warrant consideration for inclusion in the math module.   It shows up throughout mathematics -- everywhere from simple homework problems for kids to machine learning and computer vision.

In the latter cases, the function is called frequently and would benefit from a fast C implementation that includes good error checking and is algorithmically smart about numerical issues such as overflow and loss-of-precision.

A simple implementation would be something like this:

    from math import sqrt

    def dist(p, q):
        'Multi-dimensional Euclidean distance'
        # XXX needs error checking:  len(p) == len(q)
        return sqrt(sum((x0 - x1) ** 2 for x0, x1 in zip(p, q)))

The implementation could also include value-added features such as hypot()-style scaling to mitigate overflow during the squaring step:

    from math import fsum, sqrt

    def dist2(p, q):
        # https://en.wikipedia.org/wiki/Hypot#Implementation
        diffs = [x0 - x1 for x0, x1 in zip(p, q)]
        scale = max(diffs, key=abs, default=0.0)
        if scale == 0.0:
            return 0.0      # identical points (or empty inputs)
        return abs(scale) * sqrt(fsum((d / scale) ** 2 for d in diffs))
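
(A usage sketch, not part of the original message: the scaling keeps a case like this finite, where naive squaring of the differences would overflow.)

    p, q = (1e200, 1e200), (0.0, 0.0)
    print(dist2(p, q))   # roughly 1.414e+200; (1e200) ** 2 on its own raises OverflowError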
msg313979 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-03-16 23:48
This was requested once before, but rejected. I would like to see that decision reconsidered.

https://bugs.python.org/issue1880

Some months ago, there was a very brief discussion started by Lawrence D’Oliveiro about this on the comp.lang.python newsgroup. Unfortunately, neither his posts nor any replies to them seem to have been archived on the Python-List mailing list. His original post is on Google Groups:

https://groups.google.com/forum/#!topic/comp.lang.python/GmrGx9QQINQ

but I see no replies there. There is one reply on comp.lang.python, from Stefan Ram, but that doesn't appear on either the mailing list or Google Groups.
msg313983 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-03-17 00:27
The obvious work-around of calling hypot multiple times is not only tedious, but it loses precision. For example, the body diagonal through a 1x1x1 cube should be √3 exactly:

py> from math import hypot, sqrt
py> hypot(hypot(1, 1), 1) == sqrt(3)
False


I know of at least five languages or toolkits which support this feature, or something close to it: JavaScript, Julia, MATLAB, the GNU Scientific Library, and C++.

JavaScript and Julia support arbitrary numbers of arguments:

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/hypot

https://github.com/JuliaLang/julia/pull/15649

Using Firefox's JavaScript console, I get the correct body diagonal for the cube:

>> Math.hypot(1, 1, 1) == Math.sqrt(3)
true


MATLAB's hypot() function only takes two arguments, but norm(vector) returns the Euclidean 2-norm of the vector, i.e. the equivalent of hypot() with multiple arguments.

https://au.mathworks.com/help/matlab/ref/norm.html

The GNU Scientific Library and C++ support a three-argument form of hypot:

http://git.savannah.gnu.org/cgit/gsl.git/commit/?id=e1711c84f7ba5c2b22d023dcb7e10810233fff27

http://en.cppreference.com/w/cpp/numeric/math/hypot
http://open-std.org/JTC1/SC22/WG21/docs/papers/2015/p0030r1.pdf


So +1 on math.hypot supporting arbitrary number of arguments.
msg313984 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2018-03-17 00:36
Ah wait, I appear to have misunderstood Raymond's request. Sorry Raymond!

(I've been spending too much time teaching Pythagoras' theorem and my mind was primed to go directly from Euclidean distance to hypotenuse.)

Notwithstanding my misunderstanding, if hypot supported an arbitrary number of arguments, then the implementation of distance could simply defer to hypot:

def distance(p, q):
    # TODO error checking that p and q have the same number of items
    # (assumes a hypot() that accepts an arbitrary number of arguments)
    return hypot(*(x - y for x, y in zip(p, q)))


giving results like this:

py> distance((11, 21), (14, 17))
5.0


In either case, I agree with Raymond that this would be a useful feature.
msg314074 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2018-03-19 01:51
I'd be +1 on generalizing math.hypot to accept an arbitrary number of arguments.  It's the natural building block for computing distance, but the reverse is strained.  Both are useful.

Here's scaling code translated from the Fortran implementation of SNRM2 in LAPACK.  It only looks at each element once, so would work almost as-is for an argument that's an arbitrary iterable (just change the formal argument from `*xs` to `xs`):

    # http://www.netlib.org/lapack/explore-html/d7/df1/snrm2_8f_source.html
    def hypot(*xs):
        import math
        scale = 0.0
        sumsq = 1.0
        for x in xs:
            if x:
                absx = abs(x)
                if scale < absx:
                    sumsq = 1.0 + sumsq * (scale / absx)**2
                    scale = absx
                else:
                    sumsq += (absx / scale)**2
        return scale * math.sqrt(sumsq)

I believe it does the right thing with infinities by magic (returning math.inf).  It returns 0.0 if no arguments are passed, which makes sense.  For one argument, it's a convoluted way to return its absolute value.  The "if x:" test is necessary to prevent division by 0.0 if the first argument is 0.0.  It would need another blob of code to give up (and return math.nan) if one of the arguments is a NaN (there's no guessing what various C compilers would do without special-casing NaN).
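
(A quick check of that sketch, added for illustration and not part of the original message:)

    import math

    print(hypot(1.0, 1.0, 1.0) == math.sqrt(3))  # True -- the cube-diagonal case above
    print(hypot(1e200, 1e200))                   # roughly 1.414e+200, no spurious overflow
    print(hypot())                               # 0.0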

I doubt `fsum()` would add much value here:  all the addends have the same sign, so cancellation is impossible.

To get really gonzo, a single-rounding dot product would be the best building block (the vector norm is the square root of the dot product of a vector with itself).  `fsum()` is a special case of that (the dot product of a vector with a same-length vector of all ones).
msg314078 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2018-03-19 06:46
With this implementation

>>> hypot(*range(15), 3)
32.00000000000001

The exact result is 32: the squares sum to 0**2 + 1**2 + ... + 14**2 + 3**2 = 1015 + 9 = 1024, and sqrt(1024) is exactly 32.
msg314079 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-03-19 07:04
[Uncle Timmy]
> I doubt `fsum()` would add much value here:  all the addends have the 
> same sign, so cancellation is impossible

fsum() may be overkill for this problem.  I mentioned it because the math module already had the requisite code and because it improved accuracy with high dimensional data in machine learning examples I've encountered:

    >>> from math import fsum, sqrt

    >>> n = 1000
    >>> sum([0.1] * n)
    99.9999999999986
    >>> fsum([0.1] * n)
    100.0

    >>> sqrt(sum([0.1] * n) / n)
    0.3162277660168357
    >>> sqrt(fsum([0.1] * n) / n)
    0.31622776601683794

    # fsum() version exactly matches the decimal crosscheck
    >>> from decimal import Decimal, getcontext
    >>> getcontext().prec = 40
    >>> (sum([Decimal(0.1)] * n) / n).sqrt()
    Decimal('0.3162277660168379419769730258850242641698')

If we care about those little differences (about 80 ulp in this example), the single-rounding dot product seems like a better way to go.
msg314082 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-03-19 08:25
+1 for a single-rounded dot product. If we're allowed to assume IEEE 754, it's straightforward to code up something that's not too inefficient and gives correctly rounded results for "normal" cases, using a combination of Veltkamp splitting, Dekker multiplication, and fsum. The difficulties come in if you want to maintain correct rounding in cases where any of the partial products overflows or (especially awkwardly) underflows.

Also, if we can figure out how to do a correctly-rounded dot product, that gives us math.fma as a special case... (a*b + c = dot([a, c], [b, 1.0])). (Of course, that's a bit backwards, since usually you'd see fma as a primitive and *use* it in computing a dot product.)
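
(For readers unfamiliar with those building blocks, here is a rough sketch of the approach Mark describes -- Veltkamp splitting and Dekker multiplication turn each partial product into an exact sum of two doubles, and fsum() then rounds the whole accumulation once. It assumes IEEE 754 doubles and that no partial product overflows or underflows, which are exactly the hard cases noted above; the helper names are illustrative, not from any library.)

    import math

    def _split(a):
        # Veltkamp split: represent a double exactly as hi + lo,
        # where hi and lo each fit in about half the precision
        c = 134217729.0 * a          # 2**27 + 1
        hi = c - (c - a)
        return hi, a - hi

    def _two_product(a, b):
        # Dekker multiplication: p + e == a*b exactly (barring over/underflow)
        p = a * b
        ahi, alo = _split(a)
        bhi, blo = _split(b)
        e = ((ahi * bhi - p) + ahi * blo + alo * bhi) + alo * blo
        return p, e

    def dot(xs, ys):
        # Dot product with a single rounding for "normal" inputs: every
        # partial product is kept exactly as a (p, e) pair, and fsum()
        # adds all the pieces with one rounding at the end.
        terms = []
        for x, y in zip(xs, ys):
            p, e = _two_product(x, y)
            terms.append(p)
            terms.append(e)
        return math.fsum(terms)

    # and the special case mentioned above: fma(a, b, c) == dot([a, c], [b, 1.0])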
msg314099 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2018-03-19 16:07
Some notes on the hypot() code I pasted in:  first, it has to special case infinities too - it works fine if there's only one of 'em, but returns a NaN if there's more than one (it ends up computing inf/inf, and the resulting NaN propagates).

Second, it's not clear to me what the result "should be" if there's at least one infinity _and_ at least one NaN.  At the start, "anything with a NaN input returns a NaN" was the rule everything followed.  Later an exception to that was made for NaN**0 == 1, under the theory that it didn't really matter if the computation of the base screwed up, because anything whatsoever to the 0 power is 1 -- viewing NaN as meaning "we have no idea what the true value is", it simply doesn't matter what the true value is in this context.  By the same logic, if there's an infinite argument to hypot(), it doesn't matter what any other argument is - the result is +inf regardless.  So that requires some investigation.  Offhand I'm inclined to return NaN anyway.

Finally, if there is a robust single-rounding dot product, of course scaling can be skipped (and so eliminate another source of small errors).
msg314102 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-03-19 16:41
> By the same logic, if there's an infinite argument to hypot(), it doesn't matter what any other argument is - the result is +inf regardless.

Yep, that's what IEEE 754-2008 says for the two-argument case, so I think that's the logic that should be followed in the many-argument case: if any of the inputs is an infinity, the output should be infinity.

From section 9.2.1 of IEEE 754:

> For the hypot function, hypot(±0, ±0) is +0, hypot(± , qNaN) is + , and
> hypot(qNaN, ± ) is + .
msg314103 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-03-19 16:43
Okay, that cut and paste didn't work so well. Just imagine an infinity symbol in those missing spaces. Trying again:

> For the hypot function, hypot(±0, ±0) is +0, hypot(±∞, qNaN) is +∞, and
> hypot(qNaN, ±∞) is +∞.
msg314113 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2018-03-19 19:12
Mark, thanks!  I'm happy with that resolution:  if any argument is infinite, return +inf; else if any argument is a NaN, return a NaN; else do something useful ;-)

Serhiy, yes, the scaling that prevents catastrophic overflow/underflow due to naively squaring unscaled values can introduce small errors of its own.  A single-rounding dot product could avoid that, leaving two sources of single-rounding errors (the dot product, and the square root).

Raymond, yes, fsum() on its own can reduce errors.  Note that scaling on its own can also reduce errors (in particular, when the arguments are all the same, they're each scaled to exactly 1.0):

>>> import math
>>> n = 1000
>>> math.sqrt(sum([0.1 ** 2] * n))
3.1622776601683524
>>> math.sqrt(math.fsum([0.1 ** 2] * n))
3.1622776601683795
>>> hypot(*([0.1] * n)) # using the code above
3.1622776601683795
>>> math.sqrt(n) * 0.1
3.1622776601683795
msg314129 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2018-03-20 01:49
Mark, how about writing a clever single-rounding dot product that merely _detects_ when it encounters troublesome cases?  If so, it can fall back to a (presumably) much slower method.  For example, like this for the latter:

    def srdp(xs, ys):
        "Single rounding dot product."
        import decimal
        from decimal import Decimal, Inexact
        # XXX check that len(xs) == len(ys)
        with decimal.localcontext(decimal.ExtendedContext) as ctx:
            ctx.traps[Inexact] = True
            total = Decimal(0)
            for x, y in zip(map(Decimal, xs), map(Decimal, ys)):
                while True:
                    try:
                        total += x * y
                        break
                    except Inexact:
                        ctx.prec += 1
            return float(total)

So it just converts everything to Decimal; relies on decimal following all the IEEE rules for special cases; retries the arithmetic, boosting precision, until decimal gets an exact result (which it eventually will since we're only doing + and *); and relies on float() to get everything about final rounding, overflow, and denorms right.  If that doesn't work, I'd say it's a bug in the decimal module ;-)

I'd bet a dollar that, in real life, falling back to this would almost never happen, outside of test cases.
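
(A usage note added for illustration: with the srdp() sketch above, the building-block relationship mentioned earlier -- the vector norm is the square root of the dot product of a vector with itself -- becomes:)

    from math import sqrt

    def norm(xs):
        # vector norm = square root of the dot product of the vector with itself
        return sqrt(srdp(xs, xs))

    def dist(p, q):
        # Euclidean distance between points p and q of equal length
        return norm([x - y for x, y in zip(p, q)])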
msg320380 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-06-24 18:48
Would it be reasonable for me to get started with a "mostly good enough" version using scaling and Kahan summation?

from operator import sub
from math import sqrt, fabs

def kahan_summation(seq):
    # https://en.wikipedia.org/wiki/Kahan_summation_algorithm#The_algorithm
    csum = 0
    err = 0
    for x in seq:
        x -= err
        nsum = csum + x
        err = (nsum - csum) - x
        csum = nsum
    return csum

def hypot(*sides):
    scale = max(map(fabs, sides), default=0.0)
    if scale == 0.0:
        return 0.0      # all sides are zero (or no sides were given)
    return scale * sqrt(kahan_summation((s / scale)**2 for s in sides))

def dist(p, q):
    return hypot(*map(sub, p, q))

assert all(hypot(*([1]*d)) == sqrt(d) for d in range(1, 10000))
print(dist(p=(11, 4, 10), q=(9, 10, 13.5)))
msg320389 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2018-06-24 22:13
Raymond, I'd say scaling is vital (to prevent spurious infinities), but complications beyond that are questionable, slowing things down for an improvement in accuracy that may be of no actual benefit.

Note that your original range of uses -- "simple homework problems for kids to machine learning and computer vision" -- doesn't include cases where good-to-the-last-bit accuracy is important, but at least in machine learning and computer vision apps primitives may be called an enormous number of times - "speed matters" to them.

Perhaps add an optional "summer" argument defaulting to __builtins__.sum?  Then the user who wants to pay more for tighter error bounds can pass in whatever they like, from a raw Kahan summer, through one of its improvements, to math.fsum.  There just isn't a "one size fits all" answer.
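
(A minimal sketch of what that optional argument might look like -- the keyword name, the defaults, and the zero-scale guard are assumptions here, not anything decided in the thread:)

from math import fabs, fsum, sqrt

def hypot(*sides, summer=sum):
    # the caller picks the summation strategy: builtins.sum by default,
    # math.fsum (or a Kahan summer) when tighter error bounds are wanted
    scale = max(map(fabs, sides), default=0.0)
    if scale == 0.0:
        return 0.0
    return scale * sqrt(summer((s / scale) ** 2 for s in sides))

print(hypot(3.0, 4.0))                      # 5.0
print(hypot(*([0.1] * 1000), summer=fsum))  # ~3.16227766..., cf. the examples above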
msg322400 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2018-07-26 06:20
I think we should guarantee some properties:

1. The result of hypot(x, y, z, ...) shouldn't depend on the order of arguments.

2. It should be monotonic for all arguments: hypot(..., x, ...) <= hypot(..., y, ...) for abs(x) < abs(y).

3. hypot(..., inf, ...) = inf

4. hypot(..., nan, ...) = nan (if no infinities)

5. hypot(x, x, x, x) == 2*abs(x)
msg322401 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2018-07-26 06:25
6. hypot(x) == abs(x)

7. hypot(x, y) should return the same result as in previous Python version, e.g. should call C lib implementation.

8. hypot(x, y, 0, <all zeros>...) == hypot(x, y)
msg322433 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-07-26 15:02
I don't want to make any guarantees beyond what the doc patch says.

Commutativity guarantees are difficult to deliver without exact summation. A switchover to C's hypot() creates an external dependency such that we can't make any more guarantees than were given to us (which is nothing).  Also, algorithmic switchovers risk impairing monotonicity.  The Inf/NaN rules already existed before this patch and I believe that Mark elected to not document them on a function by function basis.  Another reason not to make guarantees is that this is a "mostly good-enough patch" and I expect that either Mark or Tim will someday improve it with a "single-rounding dot product".  My aim is to make it good before any effort is given to making it perfect.  In the meantime, it makes little sense to guarantee implementation details.
msg322434 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2018-07-26 15:11
Commutativity guarantees can be delivered by sorting arguments before summation. In any case, sorting would be needed for more accurate summation (adding from smaller to larger).
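
(For concreteness -- a sketch of what sorting before summation would look like, added for illustration and not something adopted in the patch:)

from math import fabs, sqrt

def hypot_sorted(*sides):
    # scale, then add the squared terms from smallest to largest, so the
    # result cannot depend on the order of the arguments
    scale = max(map(fabs, sides), default=0.0)
    if scale == 0.0:
        return 0.0
    return scale * sqrt(sum(sorted((s / scale) ** 2 for s in sides)))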
msg322454 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-07-27 01:12
> Commutativity guarantees can be delivered by sorting arguments before summation.

No thanks -- that's too expensive for such a small payoff.
msg322564 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-07-28 14:48
New changeset c6dabe37e3c4d449562182b044184d1756bea037 by Raymond Hettinger in branch 'master':
bpo-33089: Multidimensional math.hypot() (GH-8474)
https://github.com/python/cpython/commit/c6dabe37e3c4d449562182b044184d1756bea037
msg322588 - (view) Author: Stefan Behnel (scoder) * (Python committer) Date: 2018-07-28 20:27
>> Commutativity guarantees can be delivered by sorting arguments before summation.

> No thanks -- that's too expensive for such a small payoff.

Since I don't really see people using this on vectors with hundreds of dimensions, let me still suggest using insertion sort. It's simple and short to implement in the current code (right in the first loop) and should be plenty fast for any reasonable usage of this function.
msg322651 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-07-30 04:23
I really don't want to introduce something slower than O(n) behavior here, particularly because some of my use cases have a large n and because this will be called many times.  Tim has already opined that even using a variant of Kahan summation would increase the cost unacceptably.  IMO, sorting is even worse and would be a foolish thing to do.
msg322743 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2018-07-31 07:45
New changeset 9c18b1ae527346bc178250ad1ca07bffdacde5dd by Raymond Hettinger in branch 'master':
bpo-33089: Add math.dist() for computing the Euclidean distance between two points (GH-8561)
https://github.com/python/cpython/commit/9c18b1ae527346bc178250ad1ca07bffdacde5dd
msg322814 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2018-07-31 20:11
For the record, let me register my objection here. I'm not convinced by adding math.dist, and am surprised that this was committed. Almost all the discussion in this issue was about adding multi-argument hypot, which seems like an obviously useful building block. dist seems like a specialised function, and I don't think it belongs in the math module.
msg335710 - (view) Author: miss-islington (miss-islington) Date: 2019-02-16 19:00
New changeset 3ff5962d2e9af2c35d09d39465397c6fa6e9965c by Miss Islington (bot) (Raymond Hettinger) in branch 'master':
bpo-33089: Add math.dist() and math.hypot() to Whatsnew (GH-11896)
https://github.com/python/cpython/commit/3ff5962d2e9af2c35d09d39465397c6fa6e9965c
History
Date User Action Args
2022-04-11 14:58:58  admin  set  github: 77270
2019-02-16 19:00:48  miss-islington  set  nosy: + miss-islington; messages: + msg335710
2019-02-16 18:55:21  rhettinger  set  pull_requests: + pull_request11923
2018-07-31 20:11:00  mark.dickinson  set  messages: + msg322814
2018-07-31 07:46:27  rhettinger  set  status: open -> closed; resolution: fixed; stage: patch review -> resolved
2018-07-31 07:45:57  rhettinger  set  messages: + msg322743
2018-07-30 06:51:28  rhettinger  set  pull_requests: + pull_request8075
2018-07-30 04:23:56  rhettinger  set  messages: + msg322651
2018-07-28 20:27:19  scoder  set  nosy: + scoder; messages: + msg322588
2018-07-28 14:48:07  rhettinger  set  messages: + msg322564
2018-07-27 01:12:56  rhettinger  set  messages: + msg322454
2018-07-27 01:12:27  rhettinger  set  messages: - msg322453
2018-07-27 01:11:45  rhettinger  set  messages: + msg322453
2018-07-26 15:11:55  serhiy.storchaka  set  messages: + msg322434
2018-07-26 15:02:12  rhettinger  set  messages: + msg322433
2018-07-26 06:25:50  serhiy.storchaka  set  messages: + msg322401
2018-07-26 06:20:28  serhiy.storchaka  set  messages: + msg322400
2018-07-26 05:57:25  rhettinger  set  keywords: + patch; stage: patch review; pull_requests: + pull_request7997
2018-06-24 22:13:25  tim.peters  set  messages: + msg320389
2018-06-24 18:48:02  rhettinger  set  messages: + msg320380
2018-03-20 01:49:38  tim.peters  set  messages: + msg314129
2018-03-19 19:12:06  tim.peters  set  messages: + msg314113
2018-03-19 16:43:34  mark.dickinson  set  messages: + msg314103
2018-03-19 16:41:56  mark.dickinson  set  messages: + msg314102
2018-03-19 16:07:10  tim.peters  set  messages: + msg314099
2018-03-19 08:25:09  mark.dickinson  set  messages: + msg314082
2018-03-19 07:04:43  rhettinger  set  messages: + msg314079
2018-03-19 06:46:41  serhiy.storchaka  set  nosy: + serhiy.storchaka; messages: + msg314078
2018-03-19 01:51:51  tim.peters  set  messages: + msg314074
2018-03-17 00:36:38  steven.daprano  set  messages: + msg313984
2018-03-17 00:27:53  steven.daprano  set  messages: + msg313983
2018-03-16 23:48:54  steven.daprano  set  messages: + msg313979
2018-03-16 18:50:58  rhettinger  create