I'm not convinced, although I agree relaxing k <= n is less damaging than relaxing k >= 0.
Python isn't aimed at mathematicians (although some 3rd-party packages certainly are, and they're free to define things however they like). We have to trade off convenience for experts in edge cases against the likelihood that an ordinary user is making a mistake.
For example, that's why, even though Python supports complex numbers, math.sqrt(-1) raises ValueError. For _most_ users, trying that was probably an error in their logic that led to the argument being negative. Experts can use cmath.sqrt() instead.
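The split is easy to see directly (a small illustrative snippet, nothing beyond the stdlib):

```python
import math
import cmath

# math.sqrt() guards the common case: a negative argument is almost
# always a logic error, so it raises instead of silently going complex.
try:
    math.sqrt(-1)
except ValueError as exc:
    print("math.sqrt(-1) raised:", exc)

# Experts who actually want complex results opt in via cmath.
print(cmath.sqrt(-1))  # 1j
```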
Ordinary users think comb(n, k) is the value of n!//(k!*(n-k)!), and as far as they're concerned factorial is only defined for non-negative integers. 0 <= k <= n follows from that. (Indeed, the current docs for `comb()` _define_ the result via the factorial expression.)
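That mental model translates straight into code. Here's a sketch of a strict comb() along those lines (my own illustrative function, not the stdlib's):

```python
from math import factorial

def comb(n, k):
    """n choose k, defined as in the docs: n! // (k! * (n-k)!).

    Restricting to integers with 0 <= k <= n falls straight out of
    factorial being defined only for non-negative integers.
    """
    if not (0 <= k <= n):
        raise ValueError("comb() requires 0 <= k <= n")
    return factorial(n) // (factorial(k) * factorial(n - k))

print(comb(5, 2))  # 10
```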
And ordinary users think of the sum of the first n integers as "number of terms times the average", or memorize directly that the answer is n*(n+1)/2. That works fine for n=0. Only an expert thinks of it as `comb(n+1, 2)`.
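For what it's worth, the identity itself is easy to check against the stdlib's math.comb (available in 3.8+):

```python
from math import comb  # Python 3.8+

# The expert's comb(n+1, 2) agrees with the schoolbook n*(n+1)//2.
for n in range(1, 20):
    assert n * (n + 1) // 2 == comb(n + 1, 2)
print("identity holds for n in 1..19")
```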
So I would still rather enforce 0 <= k <= n at the start, and relax it later only if there's significant demand. In going on three decades of using Python and having written my own `comb()` at least a dozen times, I've never found that constraint limiting, and enforcing it _has_ caught errors in my code.