Classification
Title: Add gamma function, error functions and other C99 math.h functions to math module
Type: enhancement
Stage: commit review
Components: Library (Lib)
Versions: Python 3.2, Python 2.7

Process
Status: closed
Resolution:
Dependencies:
Superseder:
Assigned To: mark.dickinson
Nosy List: mark.dickinson, ned.deily, nirinA, rhettinger, steven.daprano, stutzbach, terry.reedy, tim.peters
Priority: normal
Keywords: needs review, patch

Created on 2008-07-15 17:37 by terry.reedy, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name Uploaded Description
math_private.h nirinA, 2008-07-21 13:16 header for erf, erfc, lgamma and tgamma
pymath.c.diff nirinA, 2008-07-25 00:31
mathmodule.c.diff nirinA, 2008-07-25 00:31
pymath.h.diff nirinA, 2008-07-25 00:32
test_math.py.diff nirinA, 2008-07-25 00:32
math.rst.diff nirinA, 2008-07-25 00:33
mathmodule.diff nirinA, 2008-07-25 18:32 patch for erf, erfc, lgamma and gamma
pymath.c.diff nirinA, 2008-07-29 20:55 erf and gamma patches without math_private.h
gamma.patch mark.dickinson, 2009-09-05 19:27
gamma3.patch mark.dickinson, 2009-09-18 20:43 Implementation of math.gamma
gamma4.patch mark.dickinson, 2009-09-21 10:45
gamma5.patch mark.dickinson, 2009-09-21 17:56
lgamma1.patch mark.dickinson, 2009-09-30 18:35
gamma-failures.txt ned.deily, 2009-10-17 05:25
pymath.h.diff nirinA, 2009-12-08 06:40 Euler constant
mathmodule.c.diff nirinA, 2009-12-08 06:41 expm1
erf.py nirinA, 2009-12-08 06:42 error function
mathmodule.c.diff nirinA, 2009-12-08 06:53
erf.patch mark.dickinson, 2009-12-13 12:53
erf_c.patch mark.dickinson, 2009-12-13 14:37 Add erf and erfc functions to math module.
expm1.patch mark.dickinson, 2009-12-16 12:43
Messages (54)
msg69698 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2008-07-15 17:37
From the Py3000 list: these functions are in the C99 standard math library,
but we need fallback code for platforms that don't yet implement them.
Dickinson: open and assign to me
Stutzbach: I suggest using the versions from newlib's libm.  They contain
extensive comments explaining the math and have a generous license,
e.g.,:
http://sourceware.org/cgi-bin/cvsweb.cgi/~checkout~/src/newlib/libm/math/s_erf.c?rev=1.1.1.1&cvsroot=src
msg70106 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-21 13:10
here is an attempt to make erf, erfc, lgamma and tgamma
accessible from the math module.
erf and erfc are crude translations of the code pointed
out by Daniel Stutzbach.
lgamma is also taken from sourceware.org, but doesn't
use the reentrant call for the sign.

i tested only on gcc-4.3.1/linux-2.6.26.
i'll write a test suite soon.

some results:

Python 3.0b2 (r30b2:65080, Jul 21 2008, 13:13:13) 
[GCC 4.3.1] on linux2

>>> print(math.tgamma(0.5))
1.77245385091
>>> print(math.sqrt(math.pi))
1.77245385091
>>> print(math.tgamma(1.5))
0.886226925453
>>> print(math.sqrt(math.pi)/2)
0.886226925453
>>> print(math.sqrt(math.pi)*3/4)
1.32934038818
>>> print(math.tgamma(2.5))
1.32934038818
>>> for i in range(14):
	print(i, math.lgamma(i), math.tgamma(i))
	
0 inf inf
1 0.0 1.0
2 0.0 1.0
3 0.69314718056 2.0
4 1.79175946923 6.0
5 3.17805383035 24.0
6 4.78749174278 120.0
7 6.57925121201 720.0
8 8.52516136107 5040.0
9 10.6046029027 40320.0
10 12.8018274801 362880.0
11 15.1044125731 3628800.0
12 17.5023078459 39916800.0
13 19.9872144957 479001600.0

>>> for i in range(-14,0):
	print(i/5, math.lgamma(i/5), math.tgamma(i/5))
	
-2.8 0.129801291662 -1.13860211111
-2.6 -0.118011632805 -0.888685714647
-2.4 0.102583615968 -1.10802994703
-2.2 0.790718673676 -2.20498051842
-2.0 inf inf
-1.8 1.15942070884 3.18808591111
-1.6 0.837499812222 2.31058285808
-1.4 0.978052353322 2.65927187288
-1.2 1.57917603404 4.85095714052
-1.0 inf -inf
-0.8 1.74720737374 -5.73855464
-0.6 1.30750344147 -3.69693257293
-0.4 1.31452458994 -3.72298062203
-0.2 1.76149759083 -5.82114856863

>>> math.erf(INF)
1.0
>>> math.erf(-INF)
-1.0
>>> math.erfc(INF)
0.0
>>> math.erfc(-INF)
2.0
>>> math.erf(0.6174)
0.61741074841139409
msg70108 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-21 13:16
and this is the header.
it is stolen from glibc.
msg70120 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-07-21 21:24
Thanks for the patch!  I probably won't get to this properly until after 
2.6 final, but it won't get lost.  It seems like there's pretty good 
support for adding these functions.

By the way, there's already a setup in Python 2.6 (and 3.0) for substitutes 
for missing math functions:  take a look at the file Python/pymath.c, 
which provides inverse hyperbolic functions for those platforms that need 
them, as well as the bits in configure.in that check for these functions.  
This is really the right place for the tgamma/lgamma/erf/erfc code.

So a full patch for this should touch at least Python/pymath.c, 
Modules/mathmodule.c, configure.in, Lib/test/test_math.py, and 
Doc/Library/math.rst.  If you have time to fill in any of these pieces, it 
would be greatly appreciated.
msg70125 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2008-07-21 22:34
Since we're not in a hurry for Py2.7 and 3.1, I would like to see this 
kicked around a bit on the newsgroup and in numpy forums (personally, I 
would also post a pure python equivalent to the ASPN cookbook for 
further commentary).

There are several different approximations to choose from.  Each of 
them has its own implications for speed and accuracy.  IIRC, the one 
I used in test.test_random.gamma() was accompanied by a proof that its 
maximum error never exceeded a certain amount.  I think there were some 
formulas that made guarantees only over a certain interval and others 
that had nice properties in the first and second derivatives (ones that 
don't have those properties can throw Newtonian solvers wildly off the 
mark). 

Let's let the community use its collective wisdom to select the best 
approach and not immediately commit ourselves to the one in this patch.

At one point, Tim was reluctant to add any of these functions because 
it is non-trivial to do well and because it would open a can of worms 
about why python gets a different answer (possibly not as close to the 
true value) than some other favorite tool (matlab, excel, numpy, an 
engineering calculator, etc.).

FWIW, I changed the 2.6 version of test_random.gamma() to take 
advantage of msum() to increase its accuracy when some of the summands 
have nearly cancelling values.
msg70128 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2008-07-21 23:20
On Mon, Jul 21, 2008 at 5:34 PM, Raymond Hettinger
<report@bugs.python.org> wrote:
> There are several different approximations to choose from.  Each of
> them has their own implications for speed and accuracy.

FWIW, the current patch dynamically selects one of a few different
approximation methods based on the input value.

+1 on kicking it around and comparing the output with that from other
mathematics tools.
msg70131 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2008-07-22 01:12
It would be nice if we knew the error bounds for each of the 
approximation methods. Do we know how the coefficients were generated?

When switching from one method to another, it might be nice to have a 
range where the results slowly blend from one method to another:
  if x < a: return f1(x)
  if x > b: return f2(x)
  t = (x - a) / (b - a)
  return (1-t)*f1(x) + t*f2(x)
This minimizes discontinuities in the first and second derivatives.

The lowword/highword macros look to be tightly tied to the internal 
processor representation for floats.  It would be more portable and 
maintainable to replace that bit twiddling with something based on frexp().
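For concreteness, a tiny sketch of what "something based on frexp" could look like (purely illustrative; the helper name is made up):

    import math

    def binary_exponent(x):
        # Portable alternative to reading the IEEE-754 "high word":
        # math.frexp(x) returns (m, e) with x == m * 2**e and 0.5 <= abs(m) < 1,
        # so e can be used to pick an approximation interval without any
        # bit twiddling.
        m, e = math.frexp(x)
        return e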
msg70232 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-25 00:33
Mark Dickinson :
> So a full patch for this should touch at least Python/pymath.c, 
> Modules/mathmodule.c, configure.in, Lib/test/test_math.py, and 
> Doc/Library/math.rst.
here are patches for Python/pymath.c, Modules/mathmodule.c,
Lib/test/test_math.py, Doc/Library/math.rst and also Include/pymath.h.
i haven't touched configure.in, as i have erf, erfc, lgamma and tgamma
on my platform, so the patches can be tested.
the doc may need better wording and test_math.py some additional tests.

Raymond Hettinger:
> It would be nice if we knew the error bounds for each of the 
> approximation methods. Do we know how the coefficients were generated?
it would be really great if we were able to regenerate all these
coefficients. this may be done by following the comments in
the original code, but it's not that easy and i'm not sure one
would obtain the same results.

> The lowword/highword macros look to be tightly tied to the internal 
> processor representation for floats.  It would be more portable and 
> maintainable to replace that bit twiddling with something based on frexp
it seems possible to avoid the use of these macros.
i'll try to find time to dig up these possibilities.
msg70233 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2008-07-25 02:05
Why isn't tgamma() simply named gamma()?  The t prefix does nothing for 
me except raise questions.
msg70241 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2008-07-25 04:29
I think he just carried the names over from C, where:

tgamma() is the "true gamma function"
lgamma() is the log of the gamma function

and gamma() might be tgamma() or lgamma() depending on which C library
you use (sigh).

I'm +1 on making gamma() be the true gamma function and not carrying
over this brain-damage to Python.
msg70262 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-07-25 17:46
> I'm +1 on making gamma() be the true gamma function and not carrying
> over this brain-damage to Python.

+1 from me, too.
msg70266 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-25 18:32
ouch!
i was asleep, at 3AM, when i edited the issue
and didn't really know what i was doing.
i just saw that i removed, instead of edited, the
mathmodule.diff and didn't check afterwards.
so here it is again, with the url for the original
code added.

if you want to test the erf, erfc, lgamma and gamma
please use this mathmodule.diff and the math_private.h header

the other diffs are incomplete and don't work, unless you have
the erf and gamma functions implemented on your platform.
i'll try to work on these files this weekend and
will rename tgamma to gamma.
msg70267 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2008-07-25 18:36
Can you also implement blending of approximations:  (1-t)*f1(x) + t*f2(x)
msg70270 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2008-07-25 19:12
On Fri, Jul 25, 2008 at 1:37 PM, Raymond Hettinger
<report@bugs.python.org>wrote:

> Raymond Hettinger <rhettinger@users.sourceforge.net> added the comment:
>
> Can you also implement blending of approximations:  (1-t)*f1(x) + t*f2(x)
>

Is this necessary?  Are the approximations significantly different near the
transition points?

(Haven't had a chance to download the patch and try it myself as I'm
getting on a plane in the morning--sorry)
msg70361 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-28 17:04
i wrote a pure Python equivalent of the patch and will
post it to the ASPN cookbook as suggested. i'm wondering
if i have to upload the code here too, or whether just
the link will suffice?

Raymond Hettinger:
> Can you also implement blending of approximations:
> (1-t)*f1(x) + t*f2(x)

i do this with the Python code for now, and my question is:
how to choose the values for the borders?

with erf, for example, one has typically 5 domains
of approximation and one can write something like:

    def erf(x):
        f = float(x)
        if f == inf:
            return 1.0
        elif f == -inf:
            return -1.0
        elif f != f:        # NaN: a NaN never compares equal to itself
            return nan
        else:
            if abs(x) < 0.84375:
                return erf1(x)
            elif 0.84375 <= abs(x) < 1.25:
                return erf2(x)
            elif 1.25 <= abs(x) < 2.857142:
                return erf3(x)
            elif 2.857142 <= abs(x) < 6:
                return erf4(x)
            else:           # abs(x) >= 6
                return erf5(x)

implementing the blending of approximations,
one may write:

    def blend(x, a, b, f1, f2):
        if x < a:
            return f1(x)
        if x > b:
            return f2(x)
        return (((b-x)*f1(x))-((a-x)*f2(x)))/(b-a)

and the definition becomes:

    def erf(x):
        f = float(x)
        if abs(x) < 0.83:
            return erf1(x)
        elif 0.83 <= abs(x) < 0.85:
            return blend(x, 0.83, 0.85, erf1, erf2)
        elif 0.85 <= abs(x) < 1.24:
            return erf2(x)
        elif 1.24 <= abs(x) < 1.26:
            return blend(x, 1.24, 1.26, erf2, erf3)
        elif 1.26 <= abs(x) < 2.84:
            return erf3(x)
        elif 2.84 <= abs(x) < 2.86:
            return blend(x, 2.84, 2.86, erf3, erf4)
        elif 2.86 <= abs(x) < 5.99:
            return erf4(x)
        elif 5.99 <= abs(x) < 6.01:
            return blend(x, 5.99, 6.01, erf4, erf5)
        elif abs(x) >= 6.01:
            return erf5(x)

but the choice of these values is somewhat arbitrary.
or i completely misunderstood what you mean by "blending of approximations"!

for the erf functions, blending of approximations, as i understand it,
may be implemented easily, as the functions are monotonic.
for the gammas, this will be a bit more complicated, especially for
negative values, where there are extrema at half integers and
discontinuities at integers.
on the other hand, i'm wondering if these coefficients were chosen
with the smoothing of discontinuities already taken into account.
msg70412 - (view) Author: nirinA raseliarison (nirinA) Date: 2008-07-29 20:55
pure Python codes are posted to ASPN:

http://code.activestate.com/recipes/576391/
http://code.activestate.com/recipes/576393/

i also rewrote the patch so that it doesn't
use the high_word/low_word macros anymore.
msg77075 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-05 21:43
There have been requests to add other C99 libm functions to Python:
expm1 is requested in issue 3501, and log2 came up in the issue 3724 
discussion.  I'm absorbing those issues into this one.

With these two, and the gamma and error functions, we're pretty close to 
implementing all the functions in C99's math.h.  Seems to me that we might 
as well aim for completeness and implement the whole lot.
msg92294 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-05 19:27
Finally got around to looking at this properly.

Here's a first attempt at a patch to add gamma, lgamma, erf and erfc to 
the math module.  It uses the pymath.c.diff code from nirinA, and adds 
docs, tests, and the extra infrastructure needed for the build system.

The code has gone in a new file called Modules/math_cextra.c.  The old 
approach of sticking things in pymath.c really isn't right, since that 
ends up putting the extra code into the core Python executable instead 
of the math module.

There are currently many test failures;  some of these are undoubtedly 
due to the tests being too strict, but some of them show problems with 
the code for these functions.  It should be possible to have erf and 
erfc produce results accurate to within a few ulps (<50, say) over the 
entire extended real line;  getting gamma and lgamma accurate for 
positive inputs should also be perfectly possible.  Negative arguments 
seem to be more difficult to get right.

To test the code on non-Windows, build with something like:

CC='gcc -DTEST_GAMMA' ./configure && make

the '-DTEST_GAMMA' just forces the build to use nirinA's code;  without 
this, the math module will use the libm functions if they're available.

This is just work-in-progress;  apart from the test failures, there's 
still a lot of polishing to do before this can go in.

The patch is against the trunk.
msg92844 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-18 20:43
Here's a more careful patch for just the gamma function.  It's a fairly 
direct implementation of Lanczos' formula, with the choice of parameters 
(N=13, g=6.024680040776729583740234375) taken from the Boost library.  In 
testing of random values and known difficult values, it's consistently 
giving errors of < 5ulps across its domain.  This is considerably better 
than the error from the exp(lgamma(x)) method, since the exp call alone 
can easily introduce errors of hundreds of ulps for larger values of x.

I'd appreciate any feedback, especially if anyone has the chance to test 
on Windows:  I'd like to know whether I got the build modifications right.

This patch *doesn't* use the system gamma function, even when available.  
At least on OS X 10.5.8, the supplied gamma function has pretty terrible 
accuracy, and some severe numerical problems near negative integers.  I 
don't imagine Linux is much better.

Once this is working, I'll concentrate on lgamma.  Then erf and erfc.
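For readers who want to experiment with the idea, here is a minimal pure-Python sketch of a Lanczos-style gamma.  It is not the code from gamma3.patch: it uses the widely published g=7, nine-coefficient parameter set rather than the Boost-derived N=13 choice described above, and the coefficient digits below should be treated as an assumption rather than as authoritative values:

    import math

    # Commonly circulated Lanczos parameters (g = 7, nine coefficients);
    # these digits are an assumption, not the N=13 set used in gamma3.patch.
    _LANCZOS_G = 7
    _LANCZOS_P = [
        0.99999999999980993, 676.5203681218851, -1259.1392167224028,
        771.32342877765313, -176.61502916214059, 12.507343278686905,
        -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
    ]

    def lanczos_gamma(x):
        # Sketch of the Lanczos approximation; the poles at non-positive
        # integers are not handled sensibly here.
        if x < 0.5:
            # Reflection formula: gamma(x) * gamma(1 - x) = pi / sin(pi * x)
            return math.pi / (math.sin(math.pi * x) * lanczos_gamma(1.0 - x))
        x -= 1.0
        s = _LANCZOS_P[0]
        for i in range(1, len(_LANCZOS_P)):
            s += _LANCZOS_P[i] / (x + i)
        t = x + _LANCZOS_G + 0.5
        return math.sqrt(2.0 * math.pi) * t ** (x + 0.5) * math.exp(-t) * s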
msg92928 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-21 10:45
New patch for gamma, with some tweaks:

 - return exact values for integral arguments: gamma(1) through gamma(23)
 - apply a cheap correction to improve accuracy of exp and pow
   computations
 - use a different form of the reflection formula:

     gamma(x) = -pi/sinpi(x)/x/gamma(x)

   (the usual reflection formula has accuracy problems for
    x close to a power of 2;  e.g., x in (-64,-63) or x
    in (-128, -127))

 - avoid duplication formula for large negative arguments
 - add a few extra tests

On my machine, testing with approx. 10**7 random samples, this version 
achieves an accuracy of <= 10 ulps across the domain (comparing with 
correctly-rounded results generated by MPFR).  Limiting the test to 
arguments in the range (-256.0, 1/256.0] + [1/256.0, 256.0) (with each 
float in that range equally likely), the error in ulps from 10**6 samples 
has mean -0.104 and standard deviation 1.230.

I plan to check this in in a week or two.  Feedback welcome!  It would be 
especially useful to know whether this patch compiles correctly on 
Windows.
msg92935 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2009-09-21 12:58
I'll test your patch on Windows.  Are you working against the trunk or
the py3k branch?
msg92936 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-21 13:04
Thanks!  The patch is against the trunk.  (It doesn't quite apply cleanly 
to py3k, but the changes needed to make it do so should be minimal.)

Hmm. Rereading my previous comment, I seem to have a blindness for 
negative signs:

    gamma(x) = -pi/sinpi(x)/x/gamma(x)

should have been

    gamma(-x) = -pi/sinpi(x)/x/gamma(x)

and

    (-256.0, 1/256.0] + [1/256.0, 256.0)

should have been

    (-256.0, -1/256.0] + [1/256.0, 256.0)
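As a quick numerical sanity check of the corrected identity, one can compare both sides using today's math.gamma (a throwaway snippet, not part of any patch; math.sin(math.pi * x) stands in for the patch's sinpi):

    import math

    def reflection_check(x):
        # Corrected identity: gamma(-x) == -pi / (sin(pi*x) * x * gamma(x))
        # for non-integral x > 0.
        lhs = math.gamma(-x)
        rhs = -math.pi / (math.sin(math.pi * x) * x * math.gamma(x))
        return lhs, rhs

    # For moderate non-integral x, e.g. reflection_check(6.3), the two values
    # should agree essentially to double precision; the sin(pi*x) evaluation
    # costs at most a few ulps.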
msg92939 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2009-09-21 13:47
I'm only set up to test the official Windows build setup under PCbuild. 
I'm not testing the legacy build setups under PC/VC6, PC/VS7.1, or PC/VS8.0.

The patch against the trunk failed for PC/VC6/pythoncore.dsp.  I don't
need that to build, though.

Your patch built fine, with one warning about a loss of precision on the
following line in sinpi():

    n = round(2.0*y);

I recommend explicitly casting the result of round() to int, to get rid
of the warning and because explicit is better than implicit. :)

I ran the following tests, all of which passed:

PCbuild/python.exe Lib/test/regrtest.py -v test_math

I appreciate your work on this patch.  Let me know if there's anything
else I can do.
msg92944 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-21 14:30
Many thanks, Daniel.

> The patch against the trunk failed for PC/VC6/pythoncore.dsp.  I don't
> need that to build, though.

I've no idea why that would happen.  A line-ending issue, perhaps?  If 
it doesn't stop me committing the change, then perhaps it's not a 
problem.

> Your patch built fine, with one warning about a loss of precision on
> the following line in sinpi():
>
>    n = round(2.0*y);
>
>I recommend explicitly casting the result of round() to int, to get
>rid of the warning and because explicit is better than implicit. :)

Done.  Thanks!
msg92948 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-21 17:56
gamma5.patch.  very minor changes from the last patch:

 - add (int) cast suggested by Daniel Stutzbach
 - Misc/NEWS entry
 - ..versionadded in docs
 - minor corrections to some comments
msg93228 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-28 19:24
Gamma function added in r75117 (trunk), r75119 (py3k).
I'm working on lgamma.
msg93370 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-09-30 18:35
A first attempt for lgamma, using Lanczos' formula.

In general, this gives very good accuracy.  As the tests show, there's one 
big problem area, namely when gamma(x) is close to +-1, so that lgamma(x) 
is close to 0.  This happens for x ~ 1.0,  x ~ 2.0, and various negative 
values of x (mathematically, lgamma(x) hits zero twice in each interval 
(n-1, n) for n <= -2).  In that case the accuracy is still good in 
absolute terms (around 1e-15 absolute error), but terrible in ulp terms.

I think it's reasonable to allow an absolute error of a few times the 
machine epsilon in lgamma:  for many uses, the lgamma result will be
exponentiated at some point (often after adding or subtracting other 
lgamma results), and at that point this absolute error just becomes a 
small relative error.
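A one-line justification of that reasoning: if lgamma returns L + d instead of the true value L, then exp(L + d) = exp(L)*exp(d) ≈ exp(L)*(1 + d) for small d, so an absolute error of d in lgamma shows up as a relative error of roughly d after exponentiation.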
msg93373 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2009-09-30 19:36
I agree with your reasoning.

Any chance of also getting the incomplete beta and gamma functions,
needed to compute the CDF of many commonly used probability
distributions? (Beta, Logarithmic, Poisson, Chi-Square, Gamma, etc.)
msg94165 - (view) Author: Ned Deily (ned.deily) * (Python committer) Date: 2009-10-17 05:25
FYI, approximately 20 of the gamma test cases fail on PPC Macs.  Attached 
are snippets from regrtest runs with trunk and with py3k, both on a G4 ppc 
(32-bit) running OS X 10.5.  Identical failures for trunk (did not try 
py3k) were observed on a G3 ppc with OS X 10.4.
msg94166 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2009-10-17 05:29
FYI, mysterious numeric differences on PPC are often due to the C
compiler generating code that uses the "fused multiply-add" HW instruction.
In which case, find a way to turn that off :-)
msg94168 - (view) Author: Ned Deily (ned.deily) * (Python committer) Date: 2009-10-17 06:31
For the record, the OS X installer build scripts (running on 10.5)
result in these gcc options:

gcc-4.0 -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-strict-aliasing -fno-common -dynamic -DNDEBUG -g -O3 -I/tmp/_py/libraries/usr/local/include -I. -IInclude -I/usr/local/include -I/private/tmp/_t/Include -I/private/tmp/_py/_bld/python -c /private/tmp/_t/Modules/mathmodule.c -o build/temp.macosx-10.3-fat-3.2/private/tmp/_t/Modules/mathmodule.o

and:

$ gcc-4.0 --version
powerpc-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1 (Apple Inc. build 5493)
msg94169 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-10-17 06:59
Hmm.  It looks as though the gamma computation itself is fine;  it's the 
accuracy check that's going wrong.

I'm suspecting an endianness issue and/or a struct module bug.

Ned, do you still get these failures with a direct single-architecture 
build on PPC?  Or just for the universal build?
msg94170 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2009-10-17 07:06
Adding

-mno-fused-madd

would be worth trying.  It usually fixes PPC bugs ;-)
msg94171 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-10-17 07:13
It was a stupid Mark bug.  I'd missed out a '<' from the struct.unpack 
call that was translating floats into integers, for the purposes of 
computing ulps difference.  Fixed in r75454 (trunk), r75455 (py3k).

Thanks for reporting this, Ned.
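For context, the accuracy check converts each float to an integer so that errors can be measured in ulps; here is a sketch of that kind of helper with the explicit '<' byte-order code that was missing (along the lines of the helper in Lib/test/test_math.py, though not necessarily identical to it):

    import struct

    def to_ulps(x):
        # Map a finite float to an integer so that adjacent floats map to
        # adjacent integers; the difference of two such integers is the error
        # in ulps.  The '<' forces little-endian packing on every platform,
        # which is what the missing character broke on big-endian PPC.
        n = struct.unpack('<q', struct.pack('<d', x))[0]
        if n < 0:
            n = ~(n + 2**63)
        return n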
msg94172 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-10-17 07:38
BTW, the gamma computation shouldn't be at all fragile, unless I've messed 
up somewhere:  the Lanczos sum is evaluated as a rational function in 
which both numerator and denominator have only positive coefficients, so 
there's little danger of nasty cancellations boosting the relative error.
I'd be surprised if use of fma instructions made more than 1 or 2 ulps 
difference to the result, but I'll take a look.
msg94175 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2009-10-17 12:28
Mark, you needn't bother:  you found the smoking gun already!  From your
description, I agree it would be very surprising if FMA made a
significant difference in the absence of catastrophic cancellation.
msg94224 - (view) Author: Ned Deily (ned.deily) * (Python committer) Date: 2009-10-18 20:04
Verified that r75454 makes the gamma test errors go away on a PPC Mac.
msg96108 - (view) Author: nirinA raseliarison (nirinA) Date: 2009-12-08 06:40
having the gamma function, one certainly needs the other Euler constant:
C = 0.5772....

the C constant is taken from GSL
msg96109 - (view) Author: nirinA raseliarison (nirinA) Date: 2009-12-08 06:41
for expm1, we use the Taylor expansion near zero, but
we have to specify what counts as a small value of x. here this means
abs(x) < math.log(2.0).
one can also use abs(x) < C
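For illustration, a minimal pure-Python sketch of that small-x Taylor approach (the function name and stopping rule here are illustrative, not the attached patch):

    import math

    def expm1_small(x):
        # exp(x) - 1 = x + x**2/2! + x**3/3! + ...
        # Intended only for small x, e.g. abs(x) < math.log(2.0) as above;
        # for larger x, exp(x) - 1.0 is accurate enough on its own.
        term = x
        total = 0.0
        k = 1
        while total + term != total:   # stop once a term no longer changes the sum
            total += term
            k += 1
            term *= x / k
        return total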
msg96110 - (view) Author: nirinA raseliarison (nirinA) Date: 2009-12-08 06:42
finally, for the error function, with the same kind of idea as with expm1,
here is a pure python definition

these patches are against r27a1:76674

if all this makes sense, i'll add checks for NaN and infinity, a test suite...
msg96111 - (view) Author: nirinA raseliarison (nirinA) Date: 2009-12-08 06:53
maybe there was some error when i sent this the first time:
i cannot see the file, so i'm resending mathmodule.c.diff.
sorry if you get it twice
msg96266 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-11 17:26
nirinA:  thanks for prodding this issue.  Yes, it is still alive (just). 
:)

About adding the Euler constant:  I agree this should be added.  Do you 
have time to expand your patch to include docs and a test or two?

For expm1, I was planning to use the libm function when it exists (which 
I expect it will on most platforms), and just use a quick hack based on 
the identity

  expm1(x) = sinh(x/2)*exp(x/2)

for platforms where it's missing.

I'll look at the erf and erfc implementations.

Daniel:  re incomplete beta and gamma functions, I'd prefer to limit 
this particular issue to the C99 math functions.  It's difficult to know 
where to draw the line, but it seems to me that these are really the 
domain of numpy/scipy.  And they involve multiple input arguments, which 
makes them *darned hard* to write and test!  Anyway, if you'd like to 
see those added to Python's math module, maybe this could be brought up 
on python-dev to see how much support there is.
msg96267 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-11 17:26
> expm1(x) = sinh(x/2)*exp(x/2)

Should be 2*sinh(x/2)*exp(x/2).
msg96268 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-11 17:32
lgamma patch committed to trunk (with looser tests) in r76755.
msg96332 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-13 12:53
nirinA: thanks for the erf patch.  It's fine as far as it goes;  the main 
technical issue is that the series expansion only converges at a 
sensible rate for smallish x;  for larger x something more is needed.

Here's a prototype patch for the erf and erfc functions that uses a power 
series for small x (not quite the same as the series nirinA used;  by 
pulling out a factor of exp(-x*x) you can make all coefficients in the 
remaining series positive, which is good for the numerics), and a continued 
fraction expansion for larger x.   The erf and erfc functions are 
currently written in Python.  I'll do some more testing (and possibly 
tweaking) of this patch and then translate the Python to C.
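To make the small-x idea concrete, here is a pure-Python sketch of the positive-coefficient series obtained by factoring out exp(-x*x) (the helper name and term count are illustrative; this is not the code in erf.patch):

    import math

    def erf_series(x, nterms=25):
        # erf(x) = 2/sqrt(pi) * exp(-x*x) * sum_{k>=0} (2*x*x)**k * x / (1*3*5*...*(2k+1))
        # Every term is positive, which keeps the summation well behaved, but
        # the series is only efficient for smallish x; larger x is handled by
        # the continued fraction for erfc mentioned above.
        x2 = x * x
        term = x
        total = 0.0
        for k in range(nterms):
            total += term
            term *= 2.0 * x2 / (2 * k + 3)
        return 2.0 / math.sqrt(math.pi) * math.exp(-x2) * total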
msg96333 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-13 14:37
Here's the C version.
msg96485 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-16 12:43
Here's a patch to add expm1.  Rather than putting the code in pymath.c, 
which is the wrong place (see issue #7518), I've added a new file 
Modules/_math.c for this;  log1p, atanh, etc. should also eventually be 
moved from pymath.c into this file.

I'm fairly confident that the maths and numerics are okay, but less 
confident about my changes to the build structure;  if anyone has a chance 
to test this patch, especially on Windows, I'd welcome feedback.  I've 
tested it on OS X, and will test on Linux.
msg96497 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-16 20:40
I committed the patch for expm1 in r76861 (trunk) and r76863 (py3k), with 
one small change:  instead of using 2 * exp(x/2) * sinh(x/2), expm1 is now 
computed using a method due to Kahan that involves only exp and log.  This 
seems a little safer, and guards against problems on platforms with a 
naive implementation of sinh.  (I *hope* no such platforms exist, but 
suspect that's wishful thinking. :-)

Thanks to Eric Smith for testing the build system changes on Windows.
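The Kahan method referred to is usually written along the following lines (a sketch of the general trick, not necessarily character-for-character what was committed):

    import math

    def expm1_kahan(x):
        # Compute exp(x) - 1 via Kahan's identity expm1(x) = (u - 1) * x / log(u)
        # with u = exp(x); this sidesteps the cancellation in exp(x) - 1.0 for
        # x close to zero, using only exp and log.
        u = math.exp(x)
        if u == 1.0:
            return x            # x so small that exp(x) rounds to 1.0
        if u - 1.0 == -1.0:
            return -1.0         # x so negative that exp(x) underflows to 0.0
        return (u - 1.0) * x / math.log(u)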
msg96599 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-19 11:23
Added erf and erfc in r76879 (trunk), r76880 (py3k).
msg96600 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-12-19 11:24
Oops.  That's r76878 (trunk) and r76879 (py3k).
msg97127 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2010-01-02 10:55
The last two functions to consider adding are exp2 and log2.  Does anyone 
care about these?
msg97131 - (view) Author: Daniel Stutzbach (stutzbach) (Python committer) Date: 2010-01-02 14:48
Any time I've ever needed log2(x), log(x)/log(2) was sufficient.

In Python, exp2(x) can be spelled 2.0**x.  What would exp2(x) gain us?
msg97133 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2010-01-02 14:58
> In Python, exp2(x) can be spelled 2.0**x.  What would exp2(x) gain us?

Not much, I suspect.  :)

I'd expect (but am mostly guessing) exp2(x) to have better accuracy than 
pow(2.0, x) for some math libraries;  I'd further guess that it's somewhat 
more likely to give exact results for (small) integral x.

Similarly for log2:  log2(n) should be a touch more accurate than 
log(n)/log(2), and the time you're most likely to notice the difference is 
when n is an exact power of 2.

But we've already got the 'bit_length' method for integers, which fills 
some of the potential uses for log2.  So unless there's a feeling that 
these functions are needed, I'd rather leave them out.
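A tiny illustration of the exactness point (the first result is platform dependent, hence the hedged comment):

    import math

    n = 2 ** 49
    print(math.log(n) / math.log(2))   # may not be exactly 49.0 on every libm
    print(n.bit_length() - 1)          # exactly 49, no floating point involved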
msg97160 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2010-01-03 12:42
Will close this unless there's an outcry of support for exp2 and log2.
nirinA, if you're still interested in adding the Euler constant, please open another issue.
History
Date | User | Action | Args
2022-04-11 14:56:36 | admin | set | github: 47616
2010-01-09 18:57:05 | mark.dickinson | set | status: pending -> closed
2010-01-03 12:42:46 | mark.dickinson | set | status: open -> pending; messages: + msg97160
2010-01-02 14:58:57 | mark.dickinson | set | messages: + msg97133
2010-01-02 14:48:15 | stutzbach | set | messages: + msg97131
2010-01-02 10:55:08 | mark.dickinson | set | messages: + msg97127
2009-12-19 11:24:28 | mark.dickinson | set | messages: + msg96600
2009-12-19 11:23:27 | mark.dickinson | set | messages: + msg96599
2009-12-16 20:40:35 | mark.dickinson | set | messages: + msg96497
2009-12-16 12:43:55 | mark.dickinson | set | files: + expm1.patch; messages: + msg96485
2009-12-13 14:37:45 | mark.dickinson | set | files: + erf_c.patch; messages: + msg96333
2009-12-13 12:53:46 | mark.dickinson | set | files: + erf.patch; messages: + msg96332
2009-12-11 17:32:24 | mark.dickinson | set | messages: + msg96268
2009-12-11 17:26:58 | mark.dickinson | set | messages: + msg96267
2009-12-11 17:26:06 | mark.dickinson | set | messages: + msg96266
2009-12-08 06:53:01 | nirinA | set | files: + mathmodule.c.diff; messages: + msg96111
2009-12-08 06:42:00 | nirinA | set | files: + erf.py; messages: + msg96110
2009-12-08 06:41:20 | nirinA | set | files: + mathmodule.c.diff; messages: + msg96109
2009-12-08 06:40:36 | nirinA | set | files: + pymath.h.diff; messages: + msg96108
2009-10-18 20:04:21 | ned.deily | set | messages: + msg94224
2009-10-17 12:28:10 | tim.peters | set | messages: + msg94175
2009-10-17 07:38:47 | mark.dickinson | set | messages: + msg94172
2009-10-17 07:13:22 | mark.dickinson | set | messages: + msg94171
2009-10-17 07:06:59 | tim.peters | set | messages: + msg94170
2009-10-17 06:59:56 | mark.dickinson | set | messages: + msg94169
2009-10-17 06:31:51 | ned.deily | set | messages: + msg94168
2009-10-17 05:29:12 | tim.peters | set | nosy: + tim.peters; messages: + msg94166
2009-10-17 05:25:59 | ned.deily | set | files: + gamma-failures.txt; nosy: + ned.deily; messages: + msg94165
2009-09-30 19:36:35 | stutzbach | set | messages: + msg93373
2009-09-30 18:35:30 | mark.dickinson | set | files: + lgamma1.patch; messages: + msg93370
2009-09-29 06:34:04 | steven.daprano | set | nosy: + steven.daprano
2009-09-28 19:24:48 | mark.dickinson | set | messages: + msg93228
2009-09-21 17:56:30 | mark.dickinson | set | files: + gamma5.patch; messages: + msg92948
2009-09-21 14:30:54 | mark.dickinson | set | messages: + msg92944
2009-09-21 13:47:33 | stutzbach | set | messages: + msg92939
2009-09-21 13:04:31 | mark.dickinson | set | messages: + msg92936
2009-09-21 12:58:47 | stutzbach | set | messages: + msg92935
2009-09-21 10:45:23 | mark.dickinson | set | keywords: + needs review; files: + gamma4.patch; messages: + msg92928; stage: commit review
2009-09-18 20:43:53 | mark.dickinson | set | files: + gamma3.patch; messages: + msg92844
2009-09-05 19:27:15 | mark.dickinson | set | files: + gamma.patch; messages: + msg92294; versions: + Python 3.2, - Python 3.1
2009-05-17 19:30:41 | mark.dickinson | set | files: - unnamed
2008-12-05 21:44:58 | mark.dickinson | link | issue3501 superseder
2008-12-05 21:43:20 | mark.dickinson | set | messages: + msg77075; title: Add gamma and error functions to math module -> Add gamma function, error functions and other C99 math.h functions to math module
2008-07-29 20:55:35 | nirinA | set | files: + pymath.c.diff; messages: + msg70412
2008-07-28 17:04:19 | nirinA | set | messages: + msg70361
2008-07-25 19:12:27 | stutzbach | set | files: + unnamed; messages: + msg70270
2008-07-25 18:36:28 | rhettinger | set | messages: + msg70267
2008-07-25 18:32:21 | nirinA | set | files: + mathmodule.diff; messages: + msg70266
2008-07-25 17:46:01 | mark.dickinson | set | messages: + msg70262
2008-07-25 04:29:40 | stutzbach | set | messages: + msg70241
2008-07-25 02:05:47 | rhettinger | set | messages: + msg70233
2008-07-25 00:33:28 | nirinA | set | files: + math.rst.diff; messages: + msg70232
2008-07-25 00:32:46 | nirinA | set | files: + test_math.py.diff
2008-07-25 00:32:26 | nirinA | set | files: + pymath.h.diff
2008-07-25 00:31:54 | nirinA | set | files: + mathmodule.c.diff
2008-07-25 00:31:19 | nirinA | set | files: + pymath.c.diff
2008-07-25 00:30:31 | nirinA | set | files: - mathmodule.diff
2008-07-22 01:12:41 | rhettinger | set | messages: + msg70131
2008-07-21 23:20:44 | stutzbach | set | messages: + msg70128
2008-07-21 22:34:43 | rhettinger | set | nosy: + rhettinger; messages: + msg70125
2008-07-21 21:24:08 | mark.dickinson | set | priority: normal; messages: + msg70120
2008-07-21 13:16:04 | nirinA | set | files: + math_private.h; messages: + msg70108
2008-07-21 13:10:34 | nirinA | set | files: + mathmodule.diff; nosy: + nirinA; messages: + msg70106; keywords: + patch
2008-07-15 17:42:26 | stutzbach | set | nosy: + stutzbach
2008-07-15 17:37:53 | terry.reedy | create