
classification
Title: Make conversions from long to float correctly rounded.
Type: enhancement
Stage: resolved
Components: Interpreter Core
Versions: Python 3.1, Python 2.7

process
Status: closed
Resolution: accepted
Dependencies:
Superseder:
Assigned To: mark.dickinson
Nosy List: belopolsky, drj, ggenellina, mark.dickinson, vstinner
Priority: normal
Keywords: patch

Created on 2008-06-21 21:11 by mark.dickinson, last changed 2022-04-11 14:56 by admin. This issue is now closed.

Files
File name              Uploaded
long_as_double3.patch  mark.dickinson, 2008-12-14 12:53
long_as_double4.patch  mark.dickinson, 2009-04-02 17:15
Messages (24)
msg68545 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-06-21 21:11
If n is a Python long, then one might expect float(n) to return the 
closest float to n.  Currently it doesn't do this.  For example (with 
Python 2.6, on OS X 10.5.2/Intel):

>>> n = 295147905179352891391L

The closest float to n is equal to n+1.  But float(n) returns the 
further of the two floats bracketing n, equal to n-65535:

>>> float(n)
2.9514790517935283e+20
>>> long(float(n))
295147905179352825856L
>>> n - long(float(n))
65535L

It's fairly straightforward to fix PyLong_AsDouble to return the closest 
double to a given long integer n (using the round-half-to-even rule in 
the case of a tie).  The attached patch does this.

Having a correctly rounded float(n) can be useful for testing other 
floating-point routines that are supposed to be correctly rounded.
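To make "the closest float" concrete: the two doubles bracketing n sit
2**16 apart at this magnitude, and can be exhibited directly from their
bit patterns.  A small sketch (assuming IEEE 754 doubles; next_float_up
is our helper, not a stdlib function):

import struct

def next_float_up(x):
    # Next representable double above a positive finite x, obtained by
    # reinterpreting its IEEE 754 bit pattern as an integer and adding 1.
    bits = struct.unpack('<q', struct.pack('<d', x))[0]
    return struct.unpack('<d', struct.pack('<q', bits + 1))[0]

n = 295147905179352891391
lo = float(2**68)                  # exact: a power of two
hi = next_float_up(lo)             # doubles are 65536 apart up here
assert (int(lo) - n, int(hi) - n) == (-65535, 1)   # hi is closest to n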
msg71997 - (view) Author: David Jones (drj) * Date: 2008-08-26 21:10
I agree, longs should be correctly rounded when coerced to floats.

There is an ugly (but amusing) workaround while people wait for this 
patch:  Go via a string:

int(float(repr(295147905179352891391)[:-1]))

Though I assume this relies on the platform's strtod working correctly.  
Which it does for me.
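For what it's worth, str() of a long has no trailing 'L', so the same
trick can be spelled without the slicing (still assuming a correctly
rounded platform strtod):

int(float(str(295147905179352891391)))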
msg75510 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-11-04 23:17
You may use "if (nbits == (size_t)-1 && PyErr_Occurred())" to check 
_PyLong_NumBits() error (overflow). Well, "if (numbits > DBL_MAX_EXP)" 
should already catch overflow, but I prefer explicit test to check the 
error case.

Anyway, interresting patch! Python3 vanilla:
>>> n = 295147905179352891391; int(float(n)) - n
-65535

Python3 + your patch:
>>> int(float(n)) - n
1
msg75568 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2008-11-06 17:30
Mark,

I noticed that you replaced a call to _PyLong_AsScaledDouble with your 
round to nearest algorithm.  I wonder if _PyLong_AsScaledDouble itself 
would benefit from your change.  Currently it is used in PyLong_AsDouble 
and long_true_divide.  I would think that long_true_divide would be more 
accurate if longs were rounded to the nearest float.

I also wonder whether round to nearest float can be implemented without 
floating-point arithmetic.  I would think round towards zero should be 
a simple matter of extracting an appropriate number of bits from the 
long, and round to nearest would require at most a long addition (see 
the sketch after this message).

I believe _PyLong_AsScaledDouble is written the way it is to support 
non-IEEE floating formats, but I am not sure that your algorithm would 
always return the nearest float on an arbitrary non-IEEE platform.

Maybe it would be worthwhile to provide a simple IEEE specific code with   
well specified semantics for both PyLong_AsDouble and long_true_divide, 
but fall back to the current code on non-IEEE platforms.
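A pure-Python sketch of that suggestion (our illustration, not code
from the patch; int.bit_length() needs Python 2.7/3.1, and the final
math.ldexp is exact because m always fits in 53 bits):

import math

def to_nearest_float(n, mant_dig=53):
    # Nearest float to a positive int n, ties to even, using integer
    # arithmetic for everything except the final (exact) scaling.
    e = max(n.bit_length() - mant_dig, 0)
    m, rest = n >> e, n & ((1 << e) - 1)
    if e:
        half = 1 << (e - 1)             # half an ulp of the result
        if rest > half or (rest == half and m & 1):
            m += 1                      # the "long addition"; if m grows
                                        # to 2**mant_dig it is still exact
    return math.ldexp(m, e)

assert to_nearest_float(295147905179352891391) == 2**68 + 65536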
msg77411 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-12-09 13:13
float(295147905179352891391L) gives different results on Python 2.5 and 
on the trunk (2.7a0):
- 2.9514790517935289e+20   # Python 2.5.1
- 2.9514790517935283e+20   # 2.7a0

whereas the code is the same!?
msg77412 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-12-09 13:33
Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> reduce(lambda x,y: x*32768.0 + y, [256, 0, 0, 1, 32767])
2.9514790517935283e+20
>>> float(295147905179352891391L)
2.9514790517935289e+20

Python 2.7a0 (trunk:67679M, Dec  9 2008, 14:29:12)
>>> reduce(lambda x,y: x*32768.0 + y, [256, 0, 0, 1, 32767])
2.9514790517935283e+20
>>> float(295147905179352891391L)
2.9514790517935283e+20

Python 3.1a0 (py3k:67652M, Dec  9 2008, 13:08:19)
>>> float(295147905179352891391)
2.9514790517935283e+20
>>> digits=[256, 0, 0, 1, 32767]; x=0
>>> for d in digits:
...  x*=32768.0
...  x+= d
...
>>> x
2.9514790517935283e+20

All results are the same, except float(295147905179352891391L) in 
Python 2.5!? Python 2.5 rounds correctly:

Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> x=295147905179352891391L
>>> long(float(long(x))) - x
1L
msg77418 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-12-09 14:59
Ok, I understand why different builds of the same code give different 
results: compiler flags!  Python 2.5.1 is my Ubuntu version (presumably 
compiled with -O3) whereas Python 2.7 and 3.1a0 were compiled by me 
with -O0.

Results with Python 2.5.1:
 - with -O0, float(295147905179352891391L) gives 
2.9514790517935283e+20
 - with -O1, float(295147905179352891391L) gives 
2.9514790517935289e+20

I'm unable to isolate the exact compiler flag which changes the 
result. I tried all options listed in the gcc doc for the -O1 option:
http://gcc.gnu.org/onlinedocs/gcc-4.1.2/gcc/Optimize-Options.html#Optimize-Options
msg77421 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 15:09
Victor, what does

>>> 1e16 + 2.9999

give on your Ubuntu 2.5 machine?
(Humor me. :) )
msg77427 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-12-09 15:38
About -O0 vs -O1, I think I now understand, from reading the 
assembler.

pseudocode of the -O0 version:
  while (....)
  {
     load x from the stack
     x = x * ... + ...
     write x to the stack
  }

pseudocode of the -O1 version:
  while (....)
  {
     x = x * ... + ...
  }

Intel uses 80-bit floats internally, but loads and stores use 64-bit 
floats, so each store loses the least significant bits.

Hey, floating point numbers are funny :-)

---

Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> 1e16 + 2.999
10000000000000002.0
>>> 1e16 + 2.9999
10000000000000004.0

Same result with python 2.7/3.1.
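Why this probe is diagnostic (a reasoning sketch, assuming IEEE 754
doubles and the x87's 64-bit extended significand; near 1e16,
consecutive doubles are 2 apart and extended values are 1/1024 apart):

  exact sum        10000000000000002.9999...
  one rounding     10000000000000002.0  (nearest double: true doubles/SSE2)
  x87 extended     10000000000000003.0  (2.9999 * 1024 rounds to 3072)
  then to double   10000000000000004.0  (an exact tie, rounded to even)

So the ...004.0 above is the signature of x87 double rounding.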
msg77429 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2008-12-09 15:52
An interesting document: "Request for Comments: Rounding in PHP"
http://wiki.php.net/rfc/rounding
msg77430 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 16:02
> Intel uses 80-bit floats internally, but loads and stores use 64-bit 
> floats, so each store loses the least significant bits.

Exactly.  If your Intel machine is Pentium 4 or newer, you can get
around this by using the SSE2 extensions, which work with 64-bit doubles
throughout.  I don't remember offhand precisely which gcc flags you need
for this.

> Hey, floating point numbers are funny :-)

Yup.
msg77431 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2008-12-09 16:12
On Tue, Dec 9, 2008 at 11:02 AM, Mark Dickinson <report@bugs.python.org> wrote:
...
> If your Intel machine is Pentium 4 or newer, you can get
> around this by using the SSE2 extensions, which work with 64-bit doubles
> throughout.  I don't remember offhand precisely which gcc flags you need
> for this.

The flags you may be looking for are -msse2 -mfpmath=sse
msg77432 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 16:43
[Alexander]
> The flags you may be looking for are -msse2 -mfpmath=sse

Thanks, Alexander!

[Alexander again, from an earlier post...]
> I noticed that you replaced a call to _PyLong_AsScaledDouble with your 
> round to nearest algorithm.  I wonder if _PyLong_AsScaledDouble itself 
> would benefit from your change.  Currently it is used in PyLong_AsDouble 
> and long_true_divide.  I would think that long_true_divide would be more 
> accurate if longs were rounded to the nearest float.

You read my mind!  I've got another issue open about making long 
division round correctly, somewhere.  And indeed I'd like to make 
_PyLong_AsScaledDouble do correct rounding.  (I'd also like to make it 
return the exponent in bits, rather than digits, so that mathmodule.c 
doesn't have to know about the long int representation, but that's 
another story...)

> I believe _PyLong_AsScaledDouble is written the way it is to support 
> non-IEEE floating formats, but I am not sure that your algorithm would 
> always return the nearest float on an arbitrary non-IEEE platform.

Well, I had other possible formats in mind when I wrote the code, and I 
hope it's good whenever FLT_RADIX is 2.  If you can think of explicit 
cases where it's not going to work, please let me know.  My belief that 
it's safe rests on two facts: (1) it entirely avoids IEEE 754 oddities 
like negative zero, denormals and NaNs, and (2) all the floating-point 
operations actually performed should have exact results, so rounding 
doesn't come into play anywhere.

When FLT_RADIX is some other power of 2 (FLT_RADIX=16 is the only 
example I know of that exists in the wild) the code probably doesn't 
produce correctly rounded results in all cases, but it shouldn't do 
anything catastrophic either---I think the error still shouldn't be more 
than 1 ulp in this case.  When FLT_RADIX is not a power of 2 then so much 
else is going to be broken anyway that it's not worth worrying about.
msg77435 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 16:51
[Alexander]
> I also wonder whether round to nearest float can be implemented without
> floating-point arithmetic.  I would think round towards zero should be
> a simple matter of extracting an appropriate number of bits from the
> long, and round to nearest would require at most a long addition.

The idea's attractive.  The problem is finding an integer type that's 
guaranteed to have enough bits to store the mantissa for the float 
(probably plus one or two bits more for comfort);  for IEEE 754 this 
means a 64-bit integer type, and there aren't any of those in C89.

(One could use two 32-bit integer variables, but that's going to get 
messy.)
msg77436 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2008-12-09 17:31
..
> The idea's attractive.  The problem is finding an integer type that's
> guaranteed to have enough bits to store the mantissa for the float
> (probably plus one or two bits more for comfort);  for IEEE 754 this
> means a 64-bit integer type, and there aren't any of those in C89.

But Python already has an arbitrary-precision integer type, so why not
use it?  Performance may suffer, but optimization can be considered
later, possibly starting with the most popular platforms.
msg77437 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 17:39
As you say, performance would suffer.

What would using Python's integer type solve, that isn't already solved by  
the patch?

I know the code isn't terribly readable;  I'll add some comments 
explaining clearly what's going on.
msg77438 - (view) Author: Alexander Belopolsky (belopolsky) * (Python committer) Date: 2008-12-09 18:27
On Tue, Dec 9, 2008 at 12:39 PM, Mark Dickinson <report@bugs.python.org> wrote:
..
> What would using Python's integer type solve, that isn't already solved by
> the patch?
>

Speaking for myself, it would alleviate the irrational fear of
anything involving the FPU. :-)

Seriously, it is not obvious that your algorithm is correct and does
not depend on x being stored in an extended precision register.
Floating point experts are in short supply, so it may take a while for
your patch to be thoroughly reviewed by one of them.  If you could
produce a formal proof of correctness of your algorithm, that would be
more helpful than code comments.

On the other hand, an implementation that uses only integer arithmetic 
plus bit shifts is likely to be much easier to understand.  I don't 
think the performance will suffer much.  We can even start with a pure 
Python prototype using struct.pack or ctypes to produce the double 
result (see the sketch below).
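Such a prototype might look like this (our sketch, not code from the
patch; int_to_double is a hypothetical name, and it assumes IEEE 754
doubles, a positive n that doesn't overflow, and the same
round-half-to-even step sketched earlier in the thread):

import struct

def int_to_double(n):
    # Nearest double to a positive int n, built entirely with integer
    # arithmetic; struct.pack produces the result, so no FPU is involved.
    e = max(n.bit_length() - 53, 0)
    m, rest = n >> e, n & ((1 << e) - 1)
    if e:
        half = 1 << (e - 1)
        if rest > half or (rest == half and m & 1):
            m += 1                      # round half to even
    if m.bit_length() > 53:             # rounding carried: m == 2**53
        m >>= 1
        e += 1
    shift = 53 - m.bit_length()         # normalise m to exactly 53 bits
    m <<= shift
    e -= shift
    bits = ((e + 52 + 1023) << 52) | (m & ((1 << 52) - 1))
    return struct.unpack('<d', struct.pack('<q', bits))[0]

assert int_to_double(295147905179352891391) == 2**68 + 65536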
msg77441 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 19:18
By the way, the algorithm here is essentially the same as the one I 
implemented for the float.fromhex method, except that float.fromhex is 
more complicated: it may also have to deal with signed zeros or 
subnormals.

So any mathematical defects that you find in this patch probably indicate a defect in 
float.fromhex too.

In fact, the code *does* do integer arithmetic, except that one of the integers happens 
to be stored as a double.  If you look at the code you'll find that at every stage, the 
floating-point variable "x" has an exact nonnegative integer value between 0 and 
2**DBL_MANT_DIG.  All such values are exactly representable as a double, and all the 
arithmetic operations involved are exact.  This holds right up to the ldexp call at the 
end.  So the arithmetic with x is essentially integer arithmetic.

I accept the code needs extra documentation;  I was planning to put the equivalent 
Python code into the comments to make things clearer.
msg77451 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 21:02
> floating-point variable "x" has an exact nonnegative integer value 
> between 0 and 2**DBL_MANT_DIG. 

Hmm.  On closer inspection that's not quite true.  After the line

x = x * PyLong_BASE + (dig & (PyLong_BASE - pmask));

x has a value of the form n * pmask, where pmask is a power of 2 and
n is in the range [0, 2**DBL_MANT_DIG).  It's still exactly represented, 
provided that FLT_RADIX is 2.  (It's the multiplications by powers of 2 
that get hairy when FLT_RADIX is 16, since they *can* lose information.)
msg77473 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-09 23:48
Thanks for your comments, Alexander.

Here's a rewritten version of the patch that's better commented and 
somewhat less convoluted;  I think it should be easier to verify the 
correctness of this one.
msg77556 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-10 17:06
Minor cleanup of long_as_double2.patch.
msg77798 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2008-12-14 12:53
Updated patch;  cleanup of comments and slight refactoring of code.

Int->float conversions are even a speck faster than the current code, for 
small inputs.  (On my machine, on a Friday night, during a full moon.  
Your results may differ.  :))

Also, retargeting this for 2.7 and 3.1.
msg85225 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-04-02 17:15
Updated patch;  applies cleanly to current trunk.  No significant changes.

Note that there's now a new reason to apply this patch: it ensures that 
the result of a long->float conversion is independent of whether we're 
using 30-bit digits or 15-bit digits for longs.

One problem:  if long->float conversions are correctly rounded, then
int->float conversions should be correctly rounded as well.  (And ideally, 
we should have float(int(n)) == float(long(n)) for any integer n.)

This problem only affects 64-bit machines:  on 32-bit machines, all 
ints (C longs) are exactly representable as floats, and the C99 
standard specifies that in that case the conversion must be exact.
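One way to test that property on a 64-bit Python 2 build (a sketch;
the range is chosen so that most values need rounding yet still fit
in a C long):

import random

for _ in range(100000):
    n = random.randrange(2**53, 2**63)          # usually needs rounding
    assert float(int(n)) == float(long(n)), n   # int vs. long conversion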
msg86206 - (view) Author: Mark Dickinson (mark.dickinson) * (Python committer) Date: 2009-04-20 21:50
(Slightly updated version of) patch applied in r71772 (trunk),
r71773 (py3k).
History
Date                 User            Action  Args
2022-04-11 14:56:35  admin           set     github: 47416
2009-04-20 21:50:43  mark.dickinson  set     status: open -> closed; resolution: accepted; messages: + msg86206; stage: patch review -> resolved
2009-04-02 17:16:02  mark.dickinson  set     files: + long_as_double4.patch; messages: + msg85225
2009-02-04 12:24:55  mark.dickinson  set     priority: normal; assignee: mark.dickinson
2008-12-14 12:53:19  mark.dickinson  set     files: - long_as_double2.patch
2008-12-14 12:53:15  mark.dickinson  set     files: - long_as_double.patch
2008-12-14 12:53:09  mark.dickinson  set     files: + long_as_double3.patch; messages: + msg77798; versions: + Python 3.1, Python 2.7, - Python 2.6
2008-12-10 17:06:32  mark.dickinson  set     files: - long_as_double2.patch
2008-12-10 17:06:23  mark.dickinson  set     files: + long_as_double2.patch; messages: + msg77556
2008-12-09 23:48:29  mark.dickinson  set     files: + long_as_double2.patch; messages: + msg77473
2008-12-09 21:02:36  mark.dickinson  set     messages: + msg77451
2008-12-09 19:18:28  mark.dickinson  set     messages: + msg77441
2008-12-09 18:27:18  belopolsky      set     messages: + msg77438
2008-12-09 17:39:12  mark.dickinson  set     messages: + msg77437
2008-12-09 17:38:14  ggenellina      set     nosy: + ggenellina
2008-12-09 17:31:23  belopolsky      set     messages: + msg77436
2008-12-09 16:51:56  mark.dickinson  set     messages: + msg77435
2008-12-09 16:43:24  mark.dickinson  set     messages: + msg77432
2008-12-09 16:12:04  belopolsky      set     messages: + msg77431
2008-12-09 16:02:26  mark.dickinson  set     messages: + msg77430
2008-12-09 15:52:57  vstinner        set     messages: + msg77429
2008-12-09 15:38:01  vstinner        set     messages: + msg77427
2008-12-09 15:09:56  mark.dickinson  set     messages: + msg77421
2008-12-09 14:59:58  vstinner        set     messages: + msg77418
2008-12-09 13:33:41  vstinner        set     messages: + msg77412
2008-12-09 13:13:03  vstinner        set     messages: + msg77411
2008-11-06 17:30:34  belopolsky      set     nosy: + belopolsky; messages: + msg75568
2008-11-04 23:17:58  vstinner        set     messages: + msg75510; stage: patch review
2008-11-04 23:07:09  vstinner        set     nosy: + vstinner
2008-08-26 21:10:32  drj             set     nosy: + drj; messages: + msg71997
2008-06-21 21:11:59  mark.dickinson  create