msg335862
Author: Raymond Hettinger (rhettinger)
Date: 2019-02-18 20:10
Having gcd() in the math module has been nice. Here is another number theory basic that I've needed every now and then:
    def multinv(modulus, value):
        '''Multiplicative inverse in a given modulus

            >>> multinv(191, 138)
            18
            >>> 18 * 138 % 191
            1
            >>> multinv(191, 38)
            186
            >>> 186 * 38 % 191
            1
            >>> multinv(120, 23)
            47
            >>> 47 * 23 % 120
            1

        '''
        # https://en.wikipedia.org/wiki/Modular_multiplicative_inverse
        # http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
        x, lastx = 0, 1
        a, b = modulus, value
        while b:
            a, q, b = b, a // b, a % b
            x, lastx = lastx - q * x, x
        result = (1 - lastx * modulus) // value
        if result < 0:
            result += modulus
        assert 0 <= result < modulus and value * result % modulus == 1
        return result

msg335888
Author: Tim Peters (tim.peters)
Date: 2019-02-19 05:34
- Some form of this would be most welcome!

- If it's spelled this way, put the modulus argument last? "Everyone expects" the modulus to come last, whether in code:

      x = (a+b) % m
      x = a*b % m
      x = pow(a, b, m)

  or in math:

      a^(k*(p-1)) = (a^(p-1))^k = 1^k = 1 (mod p)

- Years ago Guido signed off on spelling this

      pow(value, -1, modulus)

  which currently raises an exception. Presumably

      pow(value, -n, modulus)

  for int n > 1 would mean the same as pow(pow(value, -1, modulus), n, modulus), if it were accepted at all. I'd be happy to stop with -1.

- An alternative could be to supply egcd(a, b) returning (g, x, y) such that

      a*x + b*y == g == gcd(a, b)

  But I'm not sure anyone would use that _except_ to compute modular inverse. So probably not.
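[Editorial note: the egcd(a, b) described above can be sketched in a few lines of pure Python. This is a hedged illustration of the proposed signature, not an implementation from the thread; only the name egcd and the (g, x, y) contract come from the message above.]

```python
def egcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    x0, x1, y0, y1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)          # one Euclidean step
        a, b = b, r
        x0, x1 = x1, x0 - q * x1     # keep a_orig*x0 + b_orig*y0 == a
        y0, y1 = y1, y0 - q * y1
    return a, x0, y0

g, x, y = egcd(38, 191)
# g == 1, so x reduced mod 191 is the modular inverse of 38
```

When g == 1, `x % modulus` is exactly the modular inverse, which is indeed the main use the message anticipates.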

msg335894
Author: Raymond Hettinger (rhettinger)
Date: 2019-02-19 07:23
> If it's spelled this way, put the modulus argument last?
Yes, that makes sense.
> Years ago Guido signed off on spelling this
>
> pow(value, -1, modulus)
+1 ;)

msg335921
Author: Mark Dickinson (mark.dickinson)
Date: 2019-02-19 10:35
+1 for the pow(value, -1, modulus) spelling. It should raise `ValueError` if `value` and `modulus` are not relatively prime.

It would feel odd to me _not_ to extend this to `pow(value, n, modulus)` for all negative `n`, again valid only if `value` is relatively prime to `modulus`.

msg335922
Author: Mark Dickinson (mark.dickinson)
Date: 2019-02-19 10:41
Here's an example of some code in the standard library that would have benefited from the availability of `pow(x, n, m)` for arbitrary negative n: https://github.com/python/cpython/blob/e7a4bb554edb72fc6619d23241d59162d06f249a/Lib/_pydecimal.py#L957-L960

    if self._exp >= 0:
        exp_hash = pow(10, self._exp, _PyHASH_MODULUS)
    else:
        exp_hash = pow(_PyHASH_10INV, -self._exp, _PyHASH_MODULUS)

where:

    _PyHASH_10INV = pow(10, _PyHASH_MODULUS - 2, _PyHASH_MODULUS)

With the proposed addition, that just becomes `pow(10, self._exp, _PyHASH_MODULUS)`, and the `_PyHASH_10INV` constant isn't needed any more.

msg335952
Author: Berry Schoenmakers (lschoe)
Date: 2019-02-19 14:39
Agreed, extending pow(value, n, modulus) to negative n would be a great addition!
To have modinv(value, modulus) next to that also makes a lot of sense to me, as this would avoid lots of confusion among users who are not so experienced with modular arithmetic. I know from working with generations of students and programmers how easy it is to make mistakes here (including lots of mistakes that I made myself ;)

One would implement pow() for negative n, anyway, by first computing the modular inverse and then raising it to the power -n. So, to expose the modinv() function to the outside world won't cost much effort.

Modular powers, in particular, are often very confusing. Like for a prime modulus p, all of pow(a, -1, p), pow(a, p-2, p), pow(a, -p, p) are equal to each other, but a common mistake is to take pow(a, p-1, p) instead. For a composite modulus things get much trickier still, as the exponent is then reduced in terms of the Euler phi function.

And, even if you are not confused by these things, it's still a bit subtle that you have to use pow(a, -1, p) instead of pow(a, p-2, p) to let the modular inverse be computed efficiently. With modinv() available separately, one would expect and get an efficient implementation with minimal overhead (e.g., not implemented via a complete extended gcd).
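[Editorial note: the identities in this message can be checked directly on a Python that accepts negative exponents; a small illustration with an arbitrary prime.]

```python
p = 191   # a prime modulus
a = 38

# These spellings all yield the modular inverse of a:
assert pow(a, -1, p) == pow(a, p - 2, p) == pow(a, -p, p)

# The common trap: exponent p - 1 gives 1 (Fermat's little theorem), not the inverse.
assert pow(a, p - 1, p) == 1
```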

msg335956
Author: Mark Dickinson (mark.dickinson)
Date: 2019-02-19 15:32
> it's still a bit subtle that you have to use pow(a, -1, p) instead of pow(a, p-2, p) to let the modular inverse be computed efficiently

That's not 100% clear: the binary powering algorithm used to compute `pow(a, p-2, p)` is fairly efficient; the extended gcd algorithm used to compute the inverse may or may not end up being comparable. I certainly wouldn't be surprised to see `pow(a, p-2, p)` beat a pure Python xgcd for computing the inverse.

msg335970
Author: Berry Schoenmakers (lschoe)
Date: 2019-02-19 16:32
> ... to see `pow(a, p-2, p)` beat a pure Python xgcd for computing the inverse.

OK, I'm indeed assuming that modinv() is implemented efficiently, in CPython, like pow() is. Then, it should be considerably faster, maybe like this:

    >>> timeit.timeit("gmpy2.invert(1023,p)", "import gmpy2; p=2**61-1")
    0.18928535383349754
    >>> timeit.timeit("gmpy2.invert(1023,p)", "import gmpy2; p=2**127-1")
    0.290736872836419
    >>> timeit.timeit("gmpy2.invert(1023,p)", "import gmpy2; p=2**521-1")
    0.33174844290715555
    >>> timeit.timeit("gmpy2.powmod(1023,p-2,p)", "import gmpy2; p=2**61-1")
    0.8771009990597349
    >>> timeit.timeit("gmpy2.powmod(1023,p-2,p)", "import gmpy2; p=2**127-1")
    3.460449585430979
    >>> timeit.timeit("gmpy2.powmod(1023,p-2,p)", "import gmpy2; p=2**521-1")
    84.38728888797652
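[Editorial note: since the feature landed in Python 3.8, the same comparison can be run against the built-ins themselves. A rough sketch; absolute timings are machine-dependent, and the point is only the shape of the comparison.]

```python
import timeit

p = 2**521 - 1  # a Mersenne prime, matching the largest modulus above
env = {"p": p}
t_inv = timeit.timeit("pow(1023, -1, p)", globals=env, number=1000)
t_fermat = timeit.timeit("pow(1023, p - 2, p)", globals=env, number=1000)
print(f"pow(a, -1, p):  {t_inv:.4f}s for 1000 calls")
print(f"pow(a, p-2, p): {t_fermat:.4f}s for 1000 calls")
```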

msg335976
Author: Mark Dickinson (mark.dickinson)
Date: 2019-02-19 16:49
> Then, it should be considerably faster
Why would you expect that? Both algorithms involve a number of (bigint) operations that's proportional to log(p), so it's going to be down to the constants involved and the running times of the individual operations. Is there a clear reason for your expectation that the xgcd-based algorithm should be faster?
Remember that Python has a subquadratic multiplication (via Karatsuba), but its division algorithm has quadratic running time.

msg335987
Author: Berry Schoenmakers (lschoe)
Date: 2019-02-19 17:24
> Is there a clear reason for your expectation that the xgcd-based algorithm should be faster?

Yeah, good question. Maybe I'm assuming too much, like assuming that it should be faster ;) It may depend a lot on the constants indeed, but ultimately the xgcd style should prevail.

So the pow-based algorithm needs to do log(p) full-size muls, plus log(p) modular reductions. Karatsuba helps a bit to speed up the muls, but as far as I know it only kicks in for quite sizeable inputs. I forgot how Python is dealing with the modular reductions, but presumably that's done without divisions.

The xgcd-based algorithm needs to do a division per iteration, but the numbers are getting smaller over the course of the algorithm. And, the worst-case behavior occurs for things involving Fibonacci numbers only. In any case, the overall bit complexity is quadratic, even if division is quadratic. There may be a few expensive divisions along the way, but these also reduce the numbers a lot in size, which leads to good amortized complexity for each iteration.

msg335994
Author: Raymond Hettinger (rhettinger)
Date: 2019-02-19 18:27
> +1 for the pow(value, -1, modulus) spelling. It should raise
> `ValueError` if `value` and `modulus` are not relatively prime.
> It would feel odd to me _not_ to extend this to
> `pow(value, n, modulus)` for all negative `n`, again
> valid only if `value` is relatively prime to `modulus`.
I'll work up a PR using the simplest implementation. Once that's in with tests and docs, it's fair game for someone to propose algorithmic optimizations.

msg336012
Author: Tim Peters (tim.peters)
Date: 2019-02-19 19:59
Raymond, I doubt we can do better (faster) than straightforward egcd without heroic effort anyway. We can't even know whether a modular inverse exists without checking whether the gcd is 1, and egcd builds on what we have to do for the latter anyway. Even if we did know in advance that a modular inverse exists, using modular exponentiation to find it requires knowing the totient of the modulus, and computing the totient is believed to be no easier than factoring.
The only "optimization" I'd be inclined to _try_ for Python's use is an extended binary gcd algorithm (which requires no bigint multiplies or divides, the latter of which is especially sluggish in Python):
https://www.ucl.ac.uk/~ucahcjm/combopt/ext_gcd_python_programs.pdf
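[Editorial note: an illustrative sketch of the right-shift ("binary") inversion idea mentioned above, using only shifts, parity tests, and subtractions. This is not the linked paper's code verbatim; it assumes an odd modulus m with gcd(a, m) == 1.]

```python
def binary_modinv(a, m):
    """Inverse of a mod m for odd m, assuming gcd(a, m) == 1 (sketch)."""
    u, v = a % m, m
    x1, x2 = 1, 0                  # invariants: x1*a == u, x2*a == v (mod m)
    while u != 1 and v != 1:
        while u % 2 == 0:          # halve u; adjust x1 to keep the invariant
            u //= 2
            x1 = x1 // 2 if x1 % 2 == 0 else (x1 + m) // 2
        while v % 2 == 0:          # halve v; adjust x2 likewise
            v //= 2
            x2 = x2 // 2 if x2 % 2 == 0 else (x2 + m) // 2
        if u >= v:                 # subtract the smaller from the larger
            u, x1 = u - v, x1 - x2
        else:
            v, x2 = v - u, x2 - x1
    return x1 % m if u == 1 else x2 % m
```

No bigint multiplies or divides occur, which is the attraction for Python noted above; whether that wins in pure Python is a separate question (see msg336130 below).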
For the rest:
- I'd also prefer that negative exponents other than -1 be supported. It's just that -1 by itself gets 95% of the value.

- It's fine by me if `pow(a, -1, m)` is THE way to spell modular inverse. Adding a distinct `modinv()` function too strikes me as redundant clutter, but not damaging enough to be worth whining about. So -0 on that.

msg336028
Author: Raymond Hettinger (rhettinger)
Date: 2019-02-20 00:02
Changing the title to reflect a focus on building out pow() instead of a function in the math module.

msg336130
Author: Berry Schoenmakers (lschoe)
Date: 2019-02-20 17:55
In pure Python this seems to be the better option to compute inverses:

    def modinv(a, m):  # assuming m > 0
        b = m
        s, s1 = 1, 0
        while b:
            a, (q, b) = b, divmod(a, b)
            s, s1 = s1, s - q * s1
        if a != 1:
            raise ValueError('inverse does not exist')
        return s if s >= 0 else s + m

Binary xgcd algorithms coded in pure Python run much slower.
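[Editorial note: a quick check of this routine against the doctest values from msg335862, restated here so the snippet runs standalone.]

```python
def modinv(a, m):  # assuming m > 0
    b = m
    s, s1 = 1, 0
    while b:
        a, (q, b) = b, divmod(a, b)
        s, s1 = s1, s - q * s1
    if a != 1:
        raise ValueError('inverse does not exist')
    return s if s >= 0 else s + m

assert modinv(138, 191) == 18
assert modinv(23, 120) == 47
try:
    modinv(4, 120)            # gcd(4, 120) == 4, so no inverse exists
except ValueError:
    pass
```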

msg344162
Author: Mark Dickinson (mark.dickinson)
Date: 2019-06-01 09:45
I think GH-13266 is ready to go, but I'd appreciate a second pair of eyes on it if anyone has time.

msg344260
Author: Mark Dickinson (mark.dickinson)
Date: 2019-06-02 09:24
New changeset c52996785a45d4693857ea219e040777a14584f8 by Mark Dickinson in branch 'master':
bpo-36027: Extend three-argument pow to negative second argument (GH-13266)
https://github.com/python/cpython/commit/c52996785a45d4693857ea219e040777a14584f8
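[Editorial note: with that changeset merged (released in Python 3.8), the behavior discussed throughout the thread works as proposed; a small demonstration using values from the original doctests.]

```python
assert pow(138, -1, 191) == 18                         # modular inverse
assert pow(138, -2, 191) == pow(pow(138, -1, 191), 2, 191)
try:
    pow(4, -1, 120)                                    # gcd(4, 120) != 1
except ValueError:
    pass                                               # no inverse exists
```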

msg344261
Author: Mark Dickinson (mark.dickinson)
Date: 2019-06-02 09:25
Done. Closing.

msg344271
Author: Serhiy Storchaka (serhiy.storchaka)
Date: 2019-06-02 11:01
PR 13266 introduced a compiler warning.
Objects/longobject.c: In function ‘long_invmod’:
Objects/longobject.c:4246:25: warning: passing argument 2 of ‘long_compare’ from incompatible pointer type [-Wincompatible-pointer-types]
if (long_compare(a, _PyLong_One)) {
^~~~~~~~~~~
Objects/longobject.c:3057:1: note: expected ‘PyLongObject * {aka struct _longobject *}’ but argument is of type ‘PyObject * {aka struct _object *}’
long_compare(PyLongObject *a, PyLongObject *b)
^~~~~~~~~~~~

msg344330
Author: Petr Viktorin (petr.viktorin)
Date: 2019-06-02 22:34
I will fix the compiler warning along with another one that I just introduced.

msg344334
Author: Petr Viktorin (petr.viktorin)
Date: 2019-06-02 23:08
New changeset e584cbff1ea78e700cf9943d50467e3b58301ccc by Petr Viktorin in branch 'master':
bpo-36027 bpo-36974: Fix "incompatible pointer type" compiler warnings (GH-13758)
https://github.com/python/cpython/commit/e584cbff1ea78e700cf9943d50467e3b58301ccc

msg344345
Author: Petr Viktorin (petr.viktorin)
Date: 2019-06-03 00:28
New changeset 1e375c6269e9de4f3d05d4aa6d6d74e00f522d63 by Petr Viktorin in branch 'master':
bpo-36027: Really fix "incompatible pointer type" compiler warning (GH-13761)
https://github.com/python/cpython/commit/1e375c6269e9de4f3d05d4aa6d6d74e00f522d63

msg344454
Author: Mark Dickinson (mark.dickinson)
Date: 2019-06-03 17:27
@Petr: Thanks for the quick fix!

msg362505
Author: Mark Dickinson (mark.dickinson)
Date: 2020-02-23 12:16
For tracker historians: see also #457066

msg368821
Author: paul rubin (phr)
Date: 2020-05-14 10:15
https://bugs.python.org/issue457066 The old is new again ;).
