
classification
Title: [RFE] Add math.lcm() function: Least Common Multiple
Type: enhancement
Stage: resolved
Components: Library (Lib)
Versions: Python 3.9

process
Status: closed
Resolution: fixed
Priority: normal
Keywords: patch
Nosy List: Ananthakrishnan, lemburg, mark.dickinson, rhettinger, serhiy.storchaka, steven.daprano, stutzbach, tim.peters, veky, vstinner

Created on 2020-01-28 13:52 by Ananthakrishnan, last changed 2022-04-11 14:59 by admin. This issue is now closed.

Pull Requests
PR 18328: closed (opened by Ananthakrishnan, 2020-02-04 13:38)
PR 18547: merged (opened by Ananthakrishnan, 2020-02-18 12:29)
Messages (31)
msg360875 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-28 13:52
Can we add lcm and gcd functions that work as follows:

lcm(4,6) # returns 12

gcd(4,6) # returns 2
msg360876 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-01-28 14:06
There is math.gcd():
https://docs.python.org/dev/library/math.html#math.gcd

You can use numpy.lcm():
https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.lcm.html

Is it common to need lcm()? Do you have examples of applications which need lcm()? It's trivial to implement lcm():
https://stackoverflow.com/questions/51716916/built-in-module-to-calculate-least-common-multiple
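
(For illustration, a quick check of both; the numpy call assumes NumPy >= 1.15, where numpy.lcm was added:)

    import math
    import numpy

    print(math.gcd(4, 6))   # 2
    print(numpy.lcm(4, 6))  # 12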

I suggest rejecting this feature request, since I have never needed lcm() in 10 years of Python, whereas I use gcd() from time to time.
msg360877 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2020-01-28 14:26
Uses for lcm are common enough that it is provided by Excel and by C++'s Boost libraries. You can use it for working out problems like:

- if event A happens every 14 days, and event B happens every 6 days, then A and B will occur together every lcm(14, 6) days.


By the way, the "trivial" implementation given in the Stackoverflow link has a bug: if both arguments are zero, it raises instead of returning zero.
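
(For reference, the "trivial" implementation in question looks like this sketch; gcd(0, 0) == 0 is what makes it blow up:)

    from math import gcd

    def lcm(a, b):
        return abs(a * b) // gcd(a, b)   # ZeroDivisionError for lcm(0, 0)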

I wish that gcd took an arbitrary number of arguments; I often need to find the gcd of three or more numbers, and this is a pain:

    gcd(a, gcd(b, gcd(c, gcd(d, e))))

when I could just say gcd(a, b, c, d, e) and have it work. Likewise for lcm. (For that matter, the gcd of a single number a is just a.)
msg360879 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2020-01-28 15:06
from functools import reduce

reduce(gcd, [a, b, c, d, e])
msg360880 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-28 15:12
I created this issue as I came across the following question:

There are n students in class A and m students in class B. Each class divides into teams for a competition. What is the biggest possible team size such that each team has the same number of members?

We can solve this type of problem easily if we have lcm() in the math library, and there are lots of real-life applications where we have to use lcm.
msg360888 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-28 18:34
Should I proceed with opening a pull request to add an 'lcm' function to Python's math module?
msg360895 - (view) Author: Vedran Čačić (veky) * Date: 2020-01-28 18:47
I must say that the problem (with two classes divided into teams) seems to me to be exactly one that can be solved with gcd, and lcm itself is mostly useless for it.
msg360898 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-28 19:00
Some problems that need an lcm function:

1: Find the least number which, when divided by 'a', 'b', 'c' and 'd', leaves remainder 'e' in each case.

2: Person A exercises every 'n' days and person B every 'm' days. A and B both exercised today. How many days will it be until they exercise together again?

3: The LCM is important when adding fractions which have different denominators (see the sketch below).
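
(A sketch of that third use, with a hand-rolled two-argument lcm; positive-integer inputs keep the abs/zero handling out of the way:)

    from math import gcd

    def lcm(a, b):
        return a * b // gcd(a, b)   # fine for positive integers

    d = lcm(6, 4)          # 12, the least common denominator for 1/6 + 1/4
    num = d // 6 + d // 4  # 2 + 3 == 5, so 1/6 + 1/4 == 5/12
    print(num, "/", d)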



We have to use the lcm function when:

1) an event is or will be repeating over and over;
2) we need to purchase or get multiple items in order to have enough;
3) we need to figure out when something will happen again at the same time.

All of these show that an lcm function should be included in the math library.

So can I proceed with adding a pull request to add an lcm function to Python's math module?
msg360900 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2020-01-28 19:46
-1 Given that we have gcd(), I don't see any value in adding *lcm()* as well.

Once you have gcd(), getting the least common multiple is trivial.

Also, it is rare to ever need an lcm() function.  I don't think I've ever seen it in real code.
msg360934 - (view) Author: Vedran Čačić (veky) * Date: 2020-01-29 02:39
I agree with Raymond that it's really seldom needed. However, I'd like to point out that the "trivial" implementation might not be so trivial after all: as Steven said, it mishandles the (0, 0) case. And even Tim fell into that trap, so it can't be said it's easily avoided. I agree that this case doesn't really appear in "real world" tasks, but that's not really an excuse: imagine a factorial that didn't work for 0.

Also, a bit more often used case: seeing the code for lcm of 2 arguments, people might (and do; I've seen it) generalize to 3 or more arguments in a way that seems logical and is often correct, but not always (a*b*c//gcd(a,b,c)).
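
(A concrete counterexample, as a sketch: for (a, b, c) = (2, 2, 3) the naive formula gives 12, but the true lcm is 6.)

    from functools import reduce
    from math import gcd

    a, b, c = 2, 2, 3
    naive = a * b * c // gcd(gcd(a, b), c)                        # 12 -- wrong
    correct = reduce(lambda x, y: x * y // gcd(x, y), (a, b, c))  # 6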

And one more tidbit: the usual formula for lcm doesn't really work for the case of fraction inputs. I know that currently math.gcd doesn't handle fractions, but it could. It's imaginable that that feature will some day be added (just like pow recently started supporting modular inverses), and then suddenly lcm implementations will silently give the wrong result for fractions.

A smart man;-) once said that the main criterion for inclusion in stdlib is that the function is often needed by users, _and_ is often implemented wrong. I think lcm doesn't really satisfy the first criterion, but it does satisfy the second.
msg360941 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-29 08:19
I agree with Vedran Čačić. The modules are not made for one person; they exist for the ease of coding. There are so many functions which I don't use but which are used by other people. We use functions to make coding easy, and if an lcm function is added many people will find it useful.
msg360989 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-01-29 22:23
+0 from me.

Another use is computing the Carmichael function for composite numbers (like an RSA encryption modulus, in which context the Carmichael function is routinely used).

But only +0 instead of +1 because it's so easy to build from gcd().

I don't agree it's tricky at all.  While lcm(0, 0) undoubtedly should return 0 in a general-purpose library function, in my own code I've never supplied that.  Because in every application I've ever had for it, I would rather get an exception if I ever passed two zeroes - that would always have been a mistake.
msg361021 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-01-30 08:16
+0 from me as well; agreed with everything that Tim said (except that I've never had a need for the Carmichael function; my RSA implementations do the inefficient thing based on (p-1)(q-1)).

This is somewhat reminiscent of comb and perm: lcm is often taught as a natural counterpart to gcd, so despite the fact that it's less fundamental and has less utility, people often expect to see the two together.

@Ananthakrishnan: do you want to put together a PR? I'll commit to reviewing it if you do.
msg361029 - (view) Author: Ananthakrishnan (Ananthakrishnan) * Date: 2020-01-30 10:04
Yes, I want to put together a PR.
msg361033 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-01-30 10:31
Great. For clarity, here's a Python function giving the behaviour I'd expect from a 2-arg lcm:

    from math import gcd

    def lcm(a, b):
        if a == 0:
            # gcd(0, 0) == 0, so guard the division; lcm(0, b) == 0 anyway.
            return 0
        # Divide before multiplying to keep the intermediate result small.
        return abs(a // gcd(a, b) * b)
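
(For the arbitrary-argument behaviour Steven asked for earlier, a sketch that folds the two-argument version over the inputs; starting the fold from 1 makes lcm() of no arguments return the empty-product identity:)

    from functools import reduce
    from math import gcd

    def lcm(*args):
        def lcm2(a, b):
            return 0 if a == 0 and b == 0 else abs(a // gcd(a, b) * b)
        return reduce(lcm2, args, 1)   # lcm() == 1, lcm(a) == abs(a)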
msg361035 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2020-01-30 10:41
If a < b, what is better,

    a // gcd(a, b) * b

or

    b // gcd(a, b) * a

? Or is there no difference?
msg361037 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-01-30 11:04
I'd guess a // gcd(a, b) * b would be faster, on the basis that division is slower than multiplication. But I doubt it's worth worrying about for this implementation, given that the gcd call is likely to be the bottleneck as a and b get large.
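
(One way to check, as a sketch; the operand sizes here are arbitrary:)

    import timeit

    setup = "from math import gcd; a, b = 2**4000 + 1, 2**8000 + 7"
    print(timeit.timeit("a // gcd(a, b) * b", setup=setup, number=1000))
    print(timeit.timeit("b // gcd(a, b) * a", setup=setup, number=1000))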
msg361080 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-01-30 22:15
And I in turn agree with everything Mark said ;-)  But I'll add that while the analogy to comb/perm is a good one, I think the case for lcm is stronger than for perm.  Not only is lcm more common in real life (although, no, not at all common overall!), perm was added at the same time as prod, and perm is essentially a special case of prod.
msg361084 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2020-01-31 00:05
Perhaps add is_prime() as well.  Conceptually, it is in the same family of functions.  Unlike lcm(), it is one that I would use, and it would improve Python's value as a teaching tool.
msg361086 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-01-31 00:51
Raymond, there was a thread a while back about adding an `imath` module, to package the number-theoretic functions that frequently come up.  _All_ these things should really go into that instead, IMO.  `math` started as a thin wrapper around C's libm, and has been losing its once-exclusive focus on functions for working with Python floats.  I think that focus was valuable.

In that older thread, I suggested a `probable_prime(n)` predicate function, and posted my pure-Python Miller-Rabin implementation.  Simple (as such things go), but I wouldn't aim to compete with (say) gmp.

I don't think `is_prime(n)` is suitable for Python.  Proving that a large integer absolutely is prime is either quite slow or quite complicated.  In practice, even professionals in critical applications are happy to settle for probabilistic assurances, because an arbitrarily tiny probability of error can be obtained very much faster than a deterministic proof.

Anyway ;-) , ya, I like the idea, but I'd sure like it to go into a module where it obviously belongs.  Also a function, e.g., to generate primes, and ...
msg361098 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-01-31 10:44
I dislike the idea of adding an is_prime() function in Python since users will likely stress Python with huge numbers and then complain that Python is too slow. A correct is_prime() is very slow for large numbers, and a statistical approach is not 100% correct... Or we may have to add both approaches. But then you have to pick an algorithm which would fit "most use cases".

I concur with Tim, it would be better to put such opinionated code on PyPI ;-)

--

I'm -0 on adding math.lcm(). For example, I failed to find any request to add such a function in the python-ideas archives. This issue seems to be the first user request.

I'm not opposed to add it. Some people seem to like the idea of the completeness of the stdlib (gcd & lcm go together). So if someone wants to add it, go ahead and propose a PR :-)
msg361101 - (view) Author: Vedran Čačić (veky) * Date: 2020-01-31 12:15
is_prime that's always correct is probably not the right thing to go into math. Besides, now that we have isqrt, it's just

    n>1 and n&1 and all(n%d for d in range(3,isqrt(n)+1,2))

-- yes, it's damn slow, but so is everything else you want to be absolutely correct. :-]

is_probable_prime is another matter, but there is an enormous amount of bikeshedding about the API of that one.
msg361102 - (view) Author: Vedran Čačić (veky) * Date: 2020-01-31 12:17
And yeah, I managed to leave out 2. Speaking about "often implemented wrong"... :-))
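
(A corrected version of the one-liner, as a sketch; the function name is illustrative:)

    from math import isqrt

    def is_prime(n):
        return n == 2 or (n > 2 and n % 2 == 1 and
                          all(n % d for d in range(3, isqrt(n) + 1, 2)))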
msg361210 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-02 04:58
I'd have to hear back from Raymond more on what he had in mind - I may well have been reading far too much into the specific name he suggested.

Don't much care about API, etc - pick something reasonable and go with it.  I'm not overly ;-) concerned with being "newbie friendly".  If someone is in a context where they need to use probabilistic solutions, there is no substitute for learning something non-trivial about them.  The usual API for a Miller-Rabin tester supports passing in the number of bases to try, and then it's as clear as anything of this kind _can_ be that the probability of getting back True when the argument is actually composite is no higher than 1 over 4 to the power of the number of bases tried.  Which is also the way they'll find it explained in every reference.  It's doing nobody a real favor to make up our own explanations for a novel UI ;-)

BTW, purely by coincidence, I faced a small puzzle today, as part of a larger problem:

Given that 25 is congruent to 55 mod 10, and also mod 15, what's the largest modulus for which we can be certain the congruence still holds?  IOW, given

    x = y (mod A), and
    x = y (mod B)

what's the largest C such that we can be certain

    x = y (mod C)

too?  And the answer is C = lcm(A, B) (which is 30 in the example).
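
(A quick check of the example, as a sketch; math.lcm() is the function this issue went on to add in 3.9:)

    from math import lcm  # Python 3.9+

    x, y = 55, 25
    assert (x - y) % 10 == 0 and (x - y) % 15 == 0  # congruent mod 10 and mod 15
    assert (x - y) % lcm(10, 15) == 0               # hence congruent mod 30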
msg361220 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2020-02-02 07:30
On Fri, Jan 31, 2020 at 12:15:37PM +0000, Vedran Čačić wrote:
> is_prime that's always correct is probably not the right thing to go into math. Besides, now we have isqrt, it's just
> 
>     n>1 and n&1 and all(n%d for d in range(3,isqrt(n)+1,2))
> 
> -- yes, it's damn slow, but so is everything else you want to be absolutely correct. :-]

Lots of things are fast and still absolutely correct.

And I think that you probably are underestimating just how slow trial 
division is for testing primes. I can imagine you've probably tested it 
on "large numbers" like a million trillion (1e18) and thought that's 
acceptably fast. Try it on numbers like this:

P = int('29674495668685510550154174642905332730771991'
        '79985304335099507553127683875317177019959423'
        '8596428121188033664754218345562493168782883')
Q = P*(313*(P-1)+1)*(353*(P-1)+1)

Q is a 397 digit Carmichael Number. Its smallest factor is P, which has 
131 digits. If your computer is fast enough to perform a thousand 
trillion (1e15) divisions per second, your trial division will take more 
than 3e107 years to complete. That's 10 million trillion trillion 
trillion trillion trillion trillion trillion trillion trillion times 
longer than the universe has existed.

In a practical sense, your algorithm is never going to terminate. The 
sun will burn out and die first.

Upgrading your CPU is not going to help.

My old and slow PC can prove that Q is a composite number, using pure 
Python, in less than six seconds. And I'm sure that a better programmer 
than me would be able to shave off some of that time.

> is_probable_prime is another matter, but there is an enormous amount 
> of bikeshedding about the API of that one.

What is there to bikeshed about the API?

The API should be a function that takes a single integer, and returns 
False if that number is certainly composite[1], otherwise True.

I will argue that any other details -- including whether it is 
probabilistic or not -- are *quality of implementation* issues and not 
part of the API.

For the same reason, the name should be just "is_prime" (or "isprime") 
and not disturb naive users by calling it "probably prime".

Experts will read the documentation and/or the source code, and everyone 
else can happily ignore the microscopic[2] chance of a false positive 
with really huge numbers. (Sometimes ignorance really is bliss.)

We can argue on technical grounds just what the implementation should 
be, but that's not part of the API, and the implementation could change:

- Miller-Rabin, or Baillie–PSW, or AKS, or something else;
- whether probabilistic or deterministic;
- what error rate (if any) we are willing to accept;

etc.

If we choose the fast, simple Miller-Rabin test, with just 30 random 
iterations the error rate is less than approximately one in a trillion 
trillion. If we tested a million numbers every second, it would take over 
18,000 years (on average) to find one false positive.

If that isn't good enough, increase the number of iterations to 50, in 
which case you would expect one false positive every 20,000 trillion 
years.

In comparison, it is estimated that cosmic rays cause memory bit 
flips as often as one error per 4GB of RAM per day. This probabilistic 
algorithm is more reliable than your deterministic computer.

https://blogs.oracle.com/linux/attack-of-the-cosmic-rays-v2

https://www.johndcook.com/blog/2019/05/20/cosmic-rays-flipping-bits/

I don't have much time for worries about Miller-Rabin being 
"probabilistic". When I was young and naive I worried about it a lot, 
and maybe that was a legitimate worry for a naive Miller-Rabin 
implementation that did, say, five iterations (probability of a false 
positive about 0.1%).

But with 30 or 50 rounds, I am confident that nobody will ever 
experience such a false positive, not if they spend their entire 
lifetime doing nothing but calling `is_prime`.

Let's be real: if the banks are willing to trust the transfer of 
billions of dollars to probabilistic primality testing, why are we 
worrying about this?

(By the way: for smallish numbers, under 2**64, no more than twelve 
rounds of Miller-Rabin are sufficient to deterministically decide 
whether a number is prime or not. Likewise for Baillie–PSW, which is 
also completely deterministic for numbers up to 2**64.)

[1] For ease of discussion, we'll count zero and one as composite.

[2] More like nanoscopic or picoscopic.
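
(To make the six-second claim concrete, a sketch — not Steven's actual code: a single strong-pseudoprime round of Miller-Rabin to a random base exposes Q as composite with probability at least 3/4, in well under a second.)

    from random import randrange

    P = int('29674495668685510550154174642905332730771991'
            '79985304335099507553127683875317177019959423'
            '8596428121188033664754218345562493168782883')
    Q = P * (313*(P-1) + 1) * (353*(P-1) + 1)

    # Write Q - 1 as d * 2**s with d odd.
    d, s = Q - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1

    a = randrange(2, Q - 1)
    x = pow(a, d, Q)
    if x not in (1, Q - 1) and all(pow(x, 2**r, Q) != Q - 1 for r in range(1, s)):
        print("composite")   # a is a Miller-Rabin witness for Q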
msg361221 - (view) Author: Vedran Čačić (veky) * Date: 2020-02-02 07:52
Tim: Considering that congruence is _defined_ as x = y (mod m) :<=> m | y - x, it's really not so surprising. :-)

Steven: It seems that we completely agree about inclusion of is_probabilistic_prime in stdlib. And we agree that it should be called isprime (or is_prime if Raymond names it;). About the bikeshedding, see Tim's comment. :-P

About absolutely correct algorithms: first, what exactly is your claim? If it's "I can write an algorithm in pure Python that can, for every 397-digit number, determine with mathematical certainty whether it is prime, in under 6 seconds", I'd very much like to see that algorithm. (I guess it must be publicly available, since we're speaking about inclusion in Python stdlib.) I don't have enough expertise in number theory to be certain it doesn't exist, but I very much doubt it.
msg361251 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-02 18:26
[Steven]
> ... Try it on numbers like this:
> ...
> Q = P*(313*(P-1)+1)*(353*(P-1)+1)
>
> Q is a 397 digit Carmichael Number. Its smallest factor is P,
> which has 131 digits.
> ...
> My old and slow PC can prove that Q is a composite number,
> using pure Python, in less than six seconds.
>
> And I'm sure that a better programmer than me would be
> able to shave off some of that time.

The pure-Python Miller-Rabin code I posted in the aforementioned thread is typically at least 100 times faster than that.  But it's not deterministic.  Because it tries (pseudo)random bases, it all depends on how many it needs to try on each run before finding a witness that Q is composite.  It usually (at least 75% of runs) finds one on the first try.

BTW, it doesn't much matter that this is "pure Python" - for ints of this size the bulk of the time regardless is spent in CPython's C-coded bigint arithmetic.  I expect that your code must be doing more than _just_ Miller-Rabin, and in the Q case is paying through the nose for "optimizations" that all fail before getting to Miller-Rabin.

About the API, I can't agree to the one you propose.  Different applications have different appropriate tradeoffs between degree of certainty and time consumed - "one size fits all" doesn't fly here.

    def probably_prime(n, maxsteps=20)

supplies a _default_ instead.  I don't want an API that's useful _only_ to people who don't know what they're doing ;-)
 

> (By the way: for smallish numbers, under 2**64, no more than
> twelve rounds of Miller-Rabin are sufficient to
> deterministically decide whether a number is prime or not.

But only if you use a fixed set of 12 specific bases for which the claim has been proved.  "Real" Miller-Rabin picks bases at random, relying only on properties that have been proved independent of the argument size.

[Vedran]
> Tim: Considering that congruence is _defined_ as
> x=y(mod m) :<=> m|y-x, it's
> really not so surprising. :-)

Didn't say it was ;-)  Was just noting the odd coincidence that I just happened to stumble into a real use for lcm(), not previously mentioned in this report, while doing something else.
msg361262 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2020-02-03 01:28
> [Steven]
> > ... Try it on numbers like this:
> > ...
> > Q = P*(313*(P-1)+1)*(353*(P-1)+1)
> >
> > Q is a 397 digit Carmichael Number. Its smallest factor is P,
> > which has 131 digits.

[Tim]
> The pure-Python Miller-Rabin code I posted in the aforementioned 
> thread is typically at least 100 times faster than that.

This is exactly the sort of argument about quality of implementation 
which isn't, or shouldn't be, part of the argument about the API, IMO. 
Just as the choice of Timsort over Quicksort or Bubblesort *wink* isn't 
part of the list.sort() API, let alone the implementation choices in 
Timsort such as MIN_GALLOP.

I'm happy to have a discussion about implementation, here or off-list, 
I'm sure I will learn a lot. But briefly, the Q I quoted above was 
carefully designed (not by me, I hasten to add!) to be a pseudoprime to 
the first 307 prime bases, so it's something of a worst-case scenario 
for my version.

> BTW, it doesn't much matter that this is "pure Python" - for ints of 
> this size the bulk of the time regardless is spent in CPython's 
> C-coded bigint arithmetic.

A fair point, thank you.

> About the API, I can't agree to the one you propose.  Different 
> applications have different appropriate tradeoffs between degree of 
> certainty and time consumed - "one size fits all" doesn't fly here.
> 
>     def probably_prime(n, maxsteps=20)
> 
> supplies a _default_ instead.  I don't want an API that's useful 
> _only_ to people who don't know what they're doing ;-)

It's not just for people who don't know what they're doing. It is for 
people who don't want to sweat the small details; they just want an 
answer True or False and are prepared to trust that the function author 
knows what they are doing.

If someone cares about the small details like how many bases to 
try, they might also care about details like:

- which specific bases are used;
- whether to use Miller-Rabin at all;
- how many trial divisions to do first;
- which specific primality test to use;

etc. What if the implementation shifts away from Miller-Rabin to (say) 
Baillie-PSW? Then your maxsteps parameter becomes meaningless. Do we 
deprecate it or leave it there in case the implementation changes again 
to something that has a concept of number of steps?

I think that if someone cares sufficiently about the details, then they 
probably won't be satisfied with a single isprime function, but may want 
is_fermat_prime, is_miller_rabin_prime, is_lucas_prime etc.

> > (By the way: for smallish numbers, under 2**64, no more than
> > twelve rounds of Miller-Rabin are sufficient to
> > deterministically decide whether a number is prime or not.
> 
> But only if you use a fixed set of 12 specific bases for which the 
> claim has been proved.

Correct. The first twelve primes make up such a minimal set.

http://oeis.org/A014233
msg361263 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2020-02-03 03:10
[Tim]
>> The pure-Python Miller-Rabin code I posted in the
>> aforementioned thread is typically at least 100
>> times faster than that.

[Steven]
> This is exactly the sort of argument about quality of
> implementation which isn't, or shouldn't be, part of
> the argument about the API, IMO.

I wasn't at all talking about API at that point.  I was backing the argument _you_ were making, that trial division is a hopelessly inefficient implementation, compared to what's possible with probabilistic methods, regardless of API.  You were in fact underselling that argument, because it's straightforward to get an order or two of magnitude faster than you demonstrated.


> the Q I quoted above was carefully designed (not by me, I hasten
> to add!)

I know the paper it came from.

> to be a pseudoprime to the first 307 prime bases, so it's
> something of a worst-case scenario for my version.

Which is why I have no problem picturing how this "should be" approached:  the original Miller-Rabin (which uses random bases, not succumbing to the "premature optimization" catastrophe magnet) has no problem with Q (or with anything else!), and hasn't really been improved on for general-purpose use since it was published.  It's a darned-near perfect mix of simplicity, brevity, clarity, robustness, generality, and speed.  "Good enough" by a long shot on all counts.

>>    def probably_prime(n, maxsteps=20)
>>
>> supplies a _default_ instead.  I don't want an API that's useful
>> _only_ to people who don't know what they're doing ;-)

> It's not just for people who don't know what they're doing. It
> is for people who don't want to sweat the small details,

Then they can accept the default.  In what conceivable sense is that a burden?

> they just want an answer True or False and are prepared
> to trust that the function author knows what they are
> doing.

But the function author can't possibly know what the _user_ needs this for.  In some apps degree of certainty is far more important than speed, while in other apps it's the opposite.  Be realistic here?  Your argument here makes sense for always-right functions, but you're not willing to accept one of those for this specific purpose (& neither am I).

> If someone cares about the small details like how
> many bases to try,

It's not a "small detail" where it matters:  it is THE way to trade off computation time against confidence in the results.  It's a _fundamental_ aspect of how Miller-Rabin works.

> they might also care about details like:
>
> - which specific bases are used;
> - whether to use Miller-Rabin at all;
> - how many trial divisions to do first;
> - which specific primality test to use;

Then they should go use a special-purpose library ;-)  Letting them fiddle the single most important parameter isn't down in the weeds, it's a top-level control knob.

My answers to all the above:

- bases are picked via randrange(2, n-1)
- yes, because no better general-purpose algorithm is known
- none - although I'll allow that there may be a
  speed advantage in some apps if a gcd is done with a
  relatively small primorial first (more efficient than
  one-at-a-time trial divisions by small primes)
- Miller-Rabin

> What if the implementation shifts away from Miller-Rabin
> to (say) Baillie-PSW?

It can't ;-)  I would absolutely document that Miller-Rabin _is_ the algorithm being used, with the random-bases implementation choice.  Then the curious can search the web for a mountain of good information about it.

> Then your maxsteps parameter becomes meaningless. Do we
> deprecate it or leave it there in case the implementation
> changes again to something that has a concept of number
> of steps?

All moot, given that I have no interest in hiding the specific algorithm in use.  YAGNI.

> I think that if someone cares sufficiently about the details,
> then they probably won't be satisfied with a single isprime
> function, but may want is_fermat_prime, is_miller_rabin_prime,
> is_lucas_prime etc.

Again, go to a special-purpose library if that's what they want.  And, again, I don't agree with your characterization of the Miller-Rabin maxsteps parameter as a "detail".  It's a fundamental aspect of what the algorithm does.  Which casual users can certainly ignore, but at their own risk.

> ...
> Correct. The first twelve primes make up such a minimal set.

And if you don't care about picking the fixed bases from a prefix of the primes, you only need 7 bases for a 100% reliable test through 2**64:  2, 325, 9375, 28178, 450775, 9780504, and 1795265022.  Or so this table claims:

https://miller-rabin.appspot.com/

But I don't care here.  Using a fixed set of bases is begging for embarrassment (for every fixed finite set of bases, there are an infinite number of composites they'll _claim_ are prime).  There are no systemic failure modes when using random bases.
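
(Pulling together the design decisions from this exchange, a minimal sketch of a random-base Miller-Rabin test behind the probably_prime(n, maxsteps=20) signature proposed above; the small-n and even-n handling is illustrative:)

    from random import randrange

    def probably_prime(n, maxsteps=20):
        # If n is composite, each random-base round wrongly passes with
        # probability at most 1/4, so a wrong True answer has probability
        # at most 4**-maxsteps.
        if n < 4:
            return n in (2, 3)
        if n % 2 == 0:
            return False
        # Write n - 1 as d * 2**s with d odd.
        d, s = n - 1, 0
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(maxsteps):
            a = randrange(2, n - 1)   # random base, as described above
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False          # a witnesses that n is composite
        return True                   # probably prime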
msg361270 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2020-02-03 08:29
If someone wants to continue the discussion on is_prime(), I suggest opening a separate issue.
msg362287 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2020-02-19 18:21
New changeset f2ee21d858bc03dd801b97afe60ee2ea289e2fe9 by ananthan-123 in branch 'master':
bpo-39479:Add math.lcm() function: Least Common Multiple (#18547)
https://github.com/python/cpython/commit/f2ee21d858bc03dd801b97afe60ee2ea289e2fe9