Issue 39484: [time] document that integer nanoseconds may be more precise than float

This issue tracker **has been migrated to GitHub**,
and is currently **read-only**.

For more information,
see the GitHub FAQs in Python's Developer Guide.

Created on **2020-01-29 13:02** by **vxgmichel**, last changed **2022-04-11 14:59** by **admin**.

Files

File name | Uploaded | Description
---|---|---
Comparing_division_errors_over_10_us.png | vxgmichel, 2020-02-03 12:21 |
comparing_errors.py | vxgmichel, 2020-02-03 12:22 |
Comparing_conversions_over_5_us.png | vxgmichel, 2020-02-03 18:40 |
comparing_conversions.py | vxgmichel, 2020-02-03 18:43 |

Messages (31)

msg360958 - Author: Vincent Michel (vxgmichel) | Date: 2020-01-29 13:02

On Windows, the timestamps produced by time.time() often end up being equal because of the 15 ms resolution:

```
>>> time.time(), time.time()
(1580301469.6875124, 1580301469.6875124)
```

The problem I noticed is that a value produced by time_ns() might end up being higher than a value produced by time(), even though time_ns() was called before:

```
>>> a, b = time.time_ns(), time.time()
>>> a, b
(1580301619906185300, 1580301619.9061852)
>>> a / 10**9 <= b
False
```

This break in causality can lead to very obscure bugs, since timestamps are often compared to one another. Note that those timestamps can also come from non-Python sources, e.g. a C program using `GetSystemTimeAsFileTime`.

This problem seems to be related to the conversion in `_PyTime_AsSecondsDouble`:
https://github.com/python/cpython/blob/f1c19031fd5f4cf6faad539e30796b42954527db/Python/pytime.c#L460-L461

```
# Float produced by `time.time()`
>>> b.hex()
'0x1.78c5f4cf9fef0p+30'

# Basically what `_PyTime_AsSecondsDouble` does:
>>> (float(a) / 10**9).hex()
'0x1.78c5f4cf9fef0p+30'

# What I would expect from `time.time()`
>>> (a / 10**9).hex()
'0x1.78c5f4cf9fef1p+30'
```

However, I don't know if this would be enough to fix all causality issues since, as Tim Peters noted in another thread:

> Just noting for the record that a C double (time.time() result) isn't quite enough to hold a full-precision Windows time regardless (https://bugs.python.org/issue19738#msg204112)
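The misordering can be reproduced deterministically from the figures in this report (a sketch; `a` is the reported time_ns() value):

```python
a = 1580301619906185300   # nanoseconds, as returned by time.time_ns()
b = float(a) / 10**9      # effectively what time.time() computes from the same clock read

# int/int true division is correctly rounded and lands one ulp above b,
# so the "earlier" reading compares as strictly later:
causality_holds = (a / 10**9 <= b)
```
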

msg360980 - Author: Vincent Michel (vxgmichel) | Date: 2020-01-29 18:26

I thought about it a bit more, and I realized there is no way to recover the time in hundreds of nanoseconds from the float produced by `time.time()` (since the Windows time currently takes 54 bits and will take 55 bits in 2028). That means `time()` and `time_ns()` cannot be compared by converting time() to nanoseconds, but it might still make sense to compare them by converting time_ns() to seconds (which is apparently broken at the moment).

If that makes sense, a possible roadmap to tackle this problem would be:

- fix `_PyTime_AsSecondsDouble` so that `time.time_ns() / 10**9 == time.time()`
- add a warning in the documentation that one should be careful when comparing the timestamps produced by `time()` and `time_ns()` (in particular, `time()` should not be converted to nanoseconds)

msg361139 - Author: STINNER Victor (vstinner) | Date: 2020-02-01 00:59

```
>>> a / 10**9 <= b
False
```

Try to use `a/1e9 <= b`.

The C code to get the system clock is the same for time.time() and time.time_ns(). It's only the conversion of the result which is different:

```
static PyObject *
time_time(PyObject *self, PyObject *unused)
{
    _PyTime_t t = _PyTime_GetSystemClock();
    return _PyFloat_FromPyTime(t);
}

static PyObject *
time_time_ns(PyObject *self, PyObject *unused)
{
    _PyTime_t t = _PyTime_GetSystemClock();
    return _PyTime_AsNanosecondsObject(t);
}
```

where _PyTime_t is int64_t: a 64-bit signed integer. The conversions:

```
static PyObject*
_PyFloat_FromPyTime(_PyTime_t t)
{
    double d = _PyTime_AsSecondsDouble(t);
    return PyFloat_FromDouble(d);
}

double
_PyTime_AsSecondsDouble(_PyTime_t t)
{
    /* volatile avoids optimization changing how numbers are rounded */
    volatile double d;
    if (t % SEC_TO_NS == 0) {
        _PyTime_t secs;
        /* Divide using integers to avoid rounding issues on the integer part.
           1e-9 cannot be stored exactly in IEEE 64-bit. */
        secs = t / SEC_TO_NS;
        d = (double)secs;
    }
    else {
        d = (double)t;
        d /= 1e9;
    }
    return d;
}

PyObject *
_PyTime_AsNanosecondsObject(_PyTime_t t)
{
    Py_BUILD_ASSERT(sizeof(long long) >= sizeof(_PyTime_t));
    return PyLong_FromLongLong((long long)t);
}
```

In short, time.time() = float(time.time_ns()) / 1e9.

The problem can be reproduced in Python:

```
>>> a = 1580301619906185300
>>> b = a / 1e9
>>> a / 10**9 <= b
False
```

I added time.time_ns() because we lose precision with such "large numbers" if you care about nanosecond resolution. float has a precision around 238 nanoseconds here:

```
>>> import math; ulp = math.ulp(b)
>>> ulp
2.384185791015625e-07
>>> "%.0f +- %.0f" % (b*1e9, ulp*1e9)
'1580301619906185216 +- 238'
```

int/int and int/float don't give the same result:

```
>>> a / 10**9
1580301619.9061854
>>> a / 1e9
1580301619.9061852
```

I'm not sure which one is "correct".

To understand the issue, you can use the new math.nextafter() function to get the next floating point value towards -inf:

```
>>> math.nextafter(a / 10**9, -math.inf)
1580301619.9061852
>>> math.nextafter(a / 1e9, -math.inf)
1580301619.906185
```

Handling floating point numbers is hard. Why not use only integers? :-)
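The one-ulp gap between the two divisions can be checked directly (a sketch using the same value):

```python
import math

a = 1580301619906185300
ulp = math.ulp(a / 1e9)   # spacing between adjacent doubles at this magnitude
gap_ns = ulp * 1e9        # the same spacing expressed in nanoseconds

# int/int and int/float land on adjacent doubles:
adjacent = math.nextafter(a / 10**9, -math.inf) == a / 1e9
```
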

msg361140 - Author: STINNER Victor (vstinner) | Date: 2020-02-01 01:01

See also bpo-39277, which is a similar issue. I proposed a fix which uses nextafter(): https://github.com/python/cpython/pull/17933

msg361141 - Author: STINNER Victor (vstinner) | Date: 2020-02-01 01:10

Another way to understand the problem: the nanoseconds (int) => seconds (float) => nanoseconds (int) roundtrip loses precision.

```
>>> a = 1580301619906185300
>>> a/1e9*1e9
1.5803016199061852e+18
>>> b = int(a/1e9*1e9)
>>> b
1580301619906185216
>>> a - b
84
```

The best would be to add a round parameter to _PyTime_AsSecondsDouble(), but I'm not sure how to implement it. The following rounding mode is used to read a clock:

```
/* Round towards minus infinity (-inf).
   For example, used to read a clock. */
_PyTime_ROUND_FLOOR=0,
```

_PyTime_ROUND_FLOOR is used in the time.clock_settime(), time.gmtime(), time.localtime() and time.ctime() functions to round input arguments. time.time(), time.monotonic() and time.perf_counter() convert _PyTime_t to float using _PyTime_AsSecondsDouble() (which currently has no round parameter) for their output.

See also my rejected PEP 410 ;-)

One way to solve this issue is to document how to compare time.time() and time.time_ns() timestamps in a reliable way.
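The roundtrip loss quoted above is easy to verify (sketch, same value):

```python
a = 1580301619906185300
b = int(a / 1e9 * 1e9)   # nanoseconds -> float seconds -> nanoseconds
loss = a - b             # precision lost by the roundtrip
```
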

msg361142 - Author: STINNER Victor (vstinner) | Date: 2020-02-01 01:14

By the way, I wrote an article about the history of how Python rounds time: https://vstinner.github.io/pytime.html

I also wrote two articles about nanoseconds in Python:

* https://vstinner.github.io/python37-pep-564-nanoseconds.html
* https://vstinner.github.io/python37-perf-counter-nanoseconds.html

Oh, and I forgot the main one: PEP 564 :-) "Example 2: compare times with different resolution" sounds like this issue: https://www.python.org/dev/peps/pep-0564/#example-2-compare-times-with-different-resolution

msg361143 - Author: Larry Hastings (larry) | Date: 2020-02-01 01:17

I don't think this is fixable, because it's not exactly a bug. The problem is we're running out of bits. In converting the time around, we've lost some precision. So the times that come out of time.time() and time.time_ns() should not be considered directly comparable.

Both functions, time.time() and time.time_ns(), call the same underlying function to get the current time. That function is `_PyTime_GetSystemClock()`; it returns nanoseconds since the 1970 epoch, stored in an int64. Each function then simply converts that time into its return format and returns that. In the case of time.time_ns(), it loses no precision whatsoever. In the case of time.time(), it (usually) converts to double and divides by 1e9, which implicitly floor-rounds.

Back-of-the-envelope math here: an IEEE double has 53 bits of resolution for the mantissa, not counting the leading 1. The current time in seconds since the 1970 epoch uses about 29 of those 53 bits. That leaves 24 bits for the fractional second. But you'd need 30 bits to render all one billion fractional values. We're six bits short.

Unless anybody has an amazing suggestion about how to ameliorate this situation, I think we should close this as wontfix.
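Larry's bit budget gets refined later in the thread (msg361650 through msg361654); the corrected counts can be checked with integer arithmetic (a sketch):

```python
import math

secs = 1580301619                     # whole seconds since the epoch, early 2020
int_bits = secs.bit_length()          # bits consumed by the integer part
frac_bits = 53 - int_bits             # bits left in the 53-bit significand
needed = math.ceil(math.log2(10**9))  # bits needed for a billion fractional values
shortfall = needed - frac_bits
```
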

msg361144 - Author: Larry Hastings (larry) | Date: 2020-02-01 01:18

(Oh, wow, Victor, you wrote all that while I was writing my reply. ;-)

msg361286 - Author: Vincent Michel (vxgmichel) | Date: 2020-02-03 12:21

Thanks for your answers, that was very informative!

> >>> a/10**9
> 1580301619.9061854
> >>> a/1e9
> 1580301619.9061852
>
> I'm not sure which one is "correct".

Originally, I thought `a/10**9` was more precise because I ran into the following case while working with hundreds of nanoseconds (because Windows):

```
r = 1580301619906185900
print("Ref   :", r)
print("10**7 :", int(r // 100 / 10**7 * 10**7) * 100)
print("1e7   :", int(r // 100 / 1e7 * 10**7) * 100)
print("10**9 :", int(r / 10**9 * 10**9))
print("1e9   :", int(r / 1e9 * 10**9))
```

Output:

```
Ref   : 1580301619906185900
10**7 : 1580301619906185800
1e7   : 1580301619906186200
10**9 : 1580301619906185984
1e9   : 1580301619906185984
```

I decided to plot the conversion errors for different division methods over a short period of time. It turns out that:

- `/1e9` is equally or more precise than `/10**9` when working with nanoseconds
- `/10**7` is equally or more precise than `/1e7` when working with hundreds of nanoseconds

This result really surprised me; I have no idea what the reason behind it is. See the plots and code attached for more information. In any case, this means there is no reason to change the division in `_PyTime_AsSecondsDouble`, so closing this issue as wontfix sounds fine :)

As a side note, the only place I could find something similar mentioned in the docs is the `os.stat_result.st_ctime_ns` documentation: https://docs.python.org/3.8/library/os.html#os.stat_result.st_ctime_ns

> Similarly, although st_atime_ns, st_mtime_ns, and st_ctime_ns are always expressed in nanoseconds, many systems do not provide nanosecond precision. On systems that do provide nanosecond precision, the floating-point object used to store st_atime, st_mtime, and st_ctime cannot preserve all of it, and as such will be slightly inexact. If you need the exact timestamps you should always use st_atime_ns, st_mtime_ns, and st_ctime_ns.

Maybe this kind of limitation should also be mentioned in the documentation of `time.time_ns()`?

msg361288 - Author: STINNER Victor (vstinner) | Date: 2020-02-03 12:54

Yeah, time.time(), time.monotonic() and time.perf_counter() could benefit from a note suggesting to use time.time_ns(), time.monotonic_ns() or time.perf_counter_ns() for better precision.

msg361292 - Author: Serhiy Storchaka (serhiy.storchaka) | Date: 2020-02-03 13:37

The problem is that there is a double rounding in

```
time = float(time_ns) / 1e9
```

1. When converting time_ns to float.
2. When dividing it by 1e9.

The formula `time = time_ns / 10**9` may be more accurate.

msg361296 - Author: Vincent Michel (vxgmichel) | Date: 2020-02-03 14:19

> The problem is that there is a double rounding in [...]

Actually `float(x) / 1e9` and `x / 1e9` seem to produce the same results:

```
import time
import itertools

now = time.time_ns()
for x in itertools.count(now):
    assert float(x) / 1e9 == x / 1e9
```

> The formula `time = time_ns / 10**9` may be more accurate.

Well, that seems to not be the case; see the plots and the corresponding code. I might have made a mistake though, please let me know if I got something wrong :)

msg361300 - Author: Larry Hastings (larry) | Date: 2020-02-03 14:28

> The problem is that there is a double rounding in
> time = float(time_ns) / 1e9
> 1. When converting time_ns to float.
> 2. When dividing it by 1e9.

I'm pretty sure that in Python 3, if you say `c = a / b` and a and b are both "single-digit" integers, it first converts them both into doubles and then performs the divide. See long_true_divide() in Objects/longobject.c, starting (currently) at line 3938.

msg361301 - Author: Serhiy Storchaka (serhiy.storchaka) | Date: 2020-02-03 14:36

But they are not single-digit integers. Moreover, int(float(a)) != a.
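Serhiy's point that the conversion to float is already lossy, sketched with the timestamp from this issue:

```python
a = 1580301619906185300   # a 61-bit integer: too wide for a 53-bit significand
f = float(a)              # rounds to the nearest representable double
roundtrip = int(f)        # does not recover the original integer
```
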

msg361304 - Author: STINNER Victor (vstinner) | Date: 2020-02-03 14:47

I'm not sure which kind of problem you are trying to solve here. time.time() does lose precision because it uses the float type. Comparing time.time() and time.time_ns() is tricky because of that. If you care about nanosecond precision, avoid float whenever possible and only store time as an integer.

I'm not sure how to compare a time.time() float with time.time_ns(). Maybe math.isclose() can help.

I don't think that Python is wrong here; time.time() and time.time_ns() work as expected, and I don't think that the time.time() result can be magically more accurate: 1580301619906185300 nanoseconds (int) cannot be stored exactly as a floating point number of seconds.

I suggest to only document that time.time() is less accurate than time.time_ns().
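One way to follow the math.isclose() suggestion, as a sketch; the tolerance here is an assumption (well above one float ulp at this magnitude, far below the clock resolution):

```python
import math

a = 1580301619906185300   # time.time_ns()-style value from this issue
b = 1580301619.9061852    # the float time.time() produced for the same reading

naive_ok = a / 10**9 <= b                         # exact comparison misorders the two
close = math.isclose(a / 10**9, b, rel_tol=1e-12) # tolerant comparison agrees they match
```
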

msg361306 - Author: Serhiy Storchaka (serhiy.storchaka) | Date: 2020-02-03 16:04

```
>>> from fractions import Fraction as F
>>> 1580301619906185300/10**9
1580301619.9061854
>>> 1580301619906185300/1e9
1580301619.9061852
>>> float(F(1580301619906185300/10**9) * 10**9 - 1580301619906185300)
88.5650634765625
>>> float(F(1580301619906185300/1e9) * 10**9 - 1580301619906185300)
-149.853515625
```

1580301619906185300/10**9 is more accurate than 1580301619906185300/1e9.

msg361308 - Author: STINNER Victor (vstinner) | Date: 2020-02-03 16:35

I compare nanoseconds (int):

```
>>> t = 1580301619906185300

# int/int: int.__truediv__(int)
>>> abs(t - int(t/10**9 * 1e9))
172

# int/float: float.__rtruediv__(int)
>>> abs(t - int(t/1e9 * 1e9))
84

# float/int: float.__truediv__(int)
>>> abs(t - int(float(t)/10**9 * 1e9))
84

# float/float: float.__truediv__(float)
>>> abs(t - int(float(t)/1e9 * 1e9))
84
```

=> int/int is less accurate than float/float for t=1580301619906185300

You compare seconds (float/Fraction):

```
>>> from fractions import Fraction as F
>>> t = 1580301619906185300

# int / int
>>> float(F(t/10**9) * 10**9 - t)
88.5650634765625

# int / float
>>> float(F(t/1e9) * 10**9 - t)
-149.853515625
```

=> here int/int looks more accurate than int/float

And we get different conclusions :-)

msg361309 - Author: Vincent Michel (vxgmichel) | Date: 2020-02-03 16:35

@serhiy.storchaka

> 1580301619906185300/10**9 is more accurate than 1580301619906185300/1e9.

I don't know exactly what `F` represents in your example, but here is what I get:

```
>>> r = 1580301619906185300
>>> int(r / 10**9 * 10**9) - r
172
>>> int(r / 1e9 * 10**9) - r
-84
```

@vstinner

> I suggest to only document that time.time() is less accurate than time.time_ns().

Sounds good!

msg361314 - Author: Mark Dickinson (mark.dickinson) | Date: 2020-02-03 17:51

> int/int is less accurate than float/float for t=1580301619906185300

No, int/int is more accurate here. If a and b are ints, `a / b` is always correctly rounded on an IEEE 754 system, while `float(a) / float(b)` will not necessarily give a correctly rounded result. So for an integer a, `a / 10**9` will _always_ be at least as accurate as `a / 1e9`.
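Mark's correctly-rounded claim can be checked against the exact quotient with fractions (a sketch): no neighboring double, and no other division method, gets closer.

```python
import math
from fractions import Fraction

a = 1580301619906185300
exact = Fraction(a, 10**9)   # the exact rational number of seconds
q = a / 10**9                # int/int: correctly rounded
err = abs(Fraction(q) - exact)

# errors of the two adjacent doubles, and of the int/float division:
err_down = abs(Fraction(math.nextafter(q, -math.inf)) - exact)
err_up = abs(Fraction(math.nextafter(q, math.inf)) - exact)
err_float = abs(Fraction(a / 1e9) - exact)
```
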

msg361316 - Author: Mark Dickinson (mark.dickinson) | Date: 2020-02-03 17:57

To be clear: the following is flawed as an accuracy test, because the *multiplication* by 1e9 introduces additional error.

```
# int/int: int.__truediv__(int)
>>> abs(t - int(t/10**9 * 1e9))
172
```

Try this instead, which uses the fractions module to get the exact error. (The error is converted to a float before printing, for convenience, to show the approximate size of the errors.)

```
>>> from fractions import Fraction as F
>>> exact = F(t, 10**9)
>>> int_int = t / 10**9
>>> float_float = t / 1e9
>>> int_int_error = F(int_int) - exact
>>> float_float_error = F(float_float) - exact
>>> print(float(int_int_error))
8.85650634765625e-08
>>> print(float(float_float_error))
-1.49853515625e-07
```

msg361319 - Author: Vincent Michel (vxgmichel) | Date: 2020-02-03 18:40

@mark.dickinson

> To be clear: the following is flawed as an accuracy test, because the *multiplication* by 1e9 introduces additional error.

Interesting, I completely missed that! But did you notice that the full conversion might still perform better when using only floats?

```
>>> from fractions import Fraction as F
>>> r = 1580301619906185300
>>> abs(int(r / 1e9 * 1e9) - r)
84
>>> abs(round(F(r / 10**9) * 10**9) - r)
89
```

I wanted to figure out how often that happens, so I updated my plotting; you can find the code and plot attached. Notice how both methods seem to perform equally well (the difference of the absolute errors seems to average to zero). I have no idea why that happens though.

msg361351 - Author: STINNER Victor (vstinner) | Date: 2020-02-04 15:19

> No, int/int is more accurate here.

Should the _PyFloat_FromPyTime() implementation be modified to reuse long_true_divide()?

msg361650 - Author: Larry Hastings (larry) | Date: 2020-02-09 13:57

P.S. For what it's worth: I re-checked my math and, as usual, I goofed. It takes *30* bits to store the non-fractional seconds part of the current time in a double, leaving 23 bits for the fractional part, so we're *7* bits short.

msg361651 - Author: Mark Dickinson (mark.dickinson) | Date: 2020-02-09 14:42

[Larry]
> It takes *30* bits to store the non-fractional seconds part of the current time in a double

I make it 31. :-)

```
>>> from datetime import datetime
>>> time_since_epoch = datetime.now() - datetime(1970, 1, 1)
>>> int(time_since_epoch.total_seconds()).bit_length()
31
```

msg361652 - Author: Larry Hastings (larry) | Date: 2020-02-09 14:55

Yes, but you get the first 1 bit for free. So it actually only uses 30 bits of storage inside the double. This is apparently called the "leading bit convention": https://en.wikipedia.org/wiki/IEEE_754#Representation_and_encoding_in_memory

msg361653 - Author: Mark Dickinson (mark.dickinson) | Date: 2020-02-09 15:17

> Yes, but you get the first 1 bit for free.

Not really. :-) That's a detail of how floating-point numbers happen to be stored; it's not really relevant here. It doesn't affect the fact that IEEE 754 binary64 floats have 53 bits of *precision*, so using 31 for the integer part leaves only 22 for the fractional part, so we're 8 bits short, not 7. (If you really want, you can subtract 30 from 52 instead of 31 from 53, but it's just a more complicated way of doing the same calculation, and doesn't change the result.)

msg361654 - Author: Mark Dickinson (mark.dickinson) | Date: 2020-02-09 15:28

Here's another way to see it. Let's get the Unix timestamp for right now:

```
>>> from datetime import datetime
>>> epoch = datetime(1970, 1, 1)
>>> now = (datetime.now() - epoch).total_seconds()
>>> now
1581261916.910558
```

Now let's figure out the resolution, by taking the next float up from that value and subtracting:

```
>>> from math import nextafter, inf
>>> nextafter(now, inf) - now
2.384185791015625e-07
```

That's 2**-22, or a little less than a quarter of a microsecond. We're out from our desired resolution (1 ns) by a factor of ~239, so it's going to take another 8 bits (factor of 256) to get us there.
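The final step of that argument, as a sketch with Mark's timestamp:

```python
import math

now = 1581261916.910558
res = math.nextafter(now, math.inf) - now   # float resolution at this magnitude
factor = res / 1e-9                         # how far we are from 1 ns resolution
bits_short = math.ceil(math.log2(factor))   # extra significand bits needed
```
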

msg361665 - Author: Larry Hastings (larry) | Date: 2020-02-10 01:54

Aha! The crucial distinction is that IEEE 754 doubles have 52 bits of storage for the mantissa, but folks (e.g. Wikipedia, Mark Dickinson) describe this as "53 bits of precision" because that's easier than saying "52 bits, but you don't have to store the leading 1 bit".

To round the bases: the actual physical storage of a double is 1 sign bit + 52 mantissa bits + 11 exponent bits = 64 bits. The current time in seconds is 31 bits, but we get the leading 1 for free, so it only takes up 30 bits of the mantissa. Therefore we only have 22 bits of precision left for the fractional second, and we're 8 bits short of being able to represent every billionth of a second. We can represent approximately 0.4% of all distinct billionths of a second, which is just sliiightly more than 1/256 (0.39%).

Just to totally prove it to myself, I wrote a brute-force Python program. It starts with 1581261916, then for i in range(one_billion) it adds i / one_billion to that number. It then checks to see if that result is different from the previous result. It detected 4194304 cases where the result was different, which is exactly 2**22. QED.

p.s. I knew in my heart that I would never *actually* correct Mark Dickinson on something regarding floating point numbers
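The billion-iteration brute force can be shortcut (a sketch): the number of distinct doubles in one second starting at t is just one second divided by the float spacing at t.

```python
import math

t = 1581261916.0
distinct = round(1.0 / math.ulp(t))   # distinct doubles in [t, t + 1)
```
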

msg361666 - Author: Eryk Sun (eryksun) | Date: 2020-02-10 04:50

A binary float has the form `(-1)**sign * (1 + frac) * 2**exp`, where sign is 0 or 1, frac is a rational value in the range [0, 1), and exp is a signed integer (but stored in non-negative, biased form). The smallest value of frac is epsilon, and the smallest increment for a given power of two is thus `epsilon * 2**exp`.

To get exp for a given value:

```
log2(abs(value)) == log2((1 + frac) * 2**exp)
                 == log2(1 + frac) + log2(2**exp)
                 == log2(1 + frac) + exp
```

Thus `exp == log2(abs(value)) - log2(1 + frac)`. We know log2(1 + frac) is in the range [0, 1), so exp is the floor of the log2 result. For a binary64, epsilon is 2**-52, but we can leave it up to the floating point implementation by using sys.float_info:

```
>>> exp = math.floor(math.log2(time.time()))
>>> sys.float_info.epsilon * 2**exp
2.384185791015625e-07
```

Anyway, it's better to leave it to the experts:

```
>>> t = time.time()
>>> math.nextafter(t, math.inf) - t
2.384185791015625e-07
```
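The epsilon-based formula and math.nextafter agree, as a quick check (a sketch using a fixed timestamp instead of time.time(), so the result is deterministic):

```python
import math
import sys

t = 1581261916.910558
exp = math.floor(math.log2(t))             # floor of log2 gives the binary exponent
spacing = sys.float_info.epsilon * 2**exp  # eryksun's formula for the float spacing
via_nextafter = math.nextafter(t, math.inf) - t
```
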

msg361668 - Author: Larry Hastings (larry) | Date: 2020-02-10 06:47

> Anyway, it's better to leave it to the experts:

I'm not sure what you're suggesting here. I shouldn't try to understand how floating-point numbers are stored?

msg361669 - Author: Eryk Sun (eryksun) | Date: 2020-02-10 07:34

> I'm not sure what you're suggesting here. I shouldn't try to understand
> how floating-point numbers are stored?

No, that's the furthest thought from my mind. I meant only that I would not recommend using one's own understanding of floating-point numbers instead of something like math.nextafter. Even if I correctly understand the general case, there are probably corner cases that I'm not aware of.

History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:59:25 | admin | set | github: 83665
2021-03-21 21:08:52 | eryksun | set | nosy: + docs@python; title: time_ns() and time() cannot be compared on windows -> [time] document that integer nanoseconds may be more precise than float; assignee: docs@python; versions: + Python 3.9, Python 3.10, - Python 3.7; components: + Documentation, Extension Modules, - Library (Lib); type: behavior -> enhancement
2020-02-10 07:34:39 | eryksun | set | messages: + msg361669
2020-02-10 06:47:14 | larry | set | messages: + msg361668
2020-02-10 04:50:42 | eryksun | set | nosy: + eryksun; messages: + msg361666
2020-02-10 01:54:48 | larry | set | messages: + msg361665
2020-02-09 15:28:09 | mark.dickinson | set | messages: + msg361654
2020-02-09 15:17:18 | mark.dickinson | set | messages: + msg361653
2020-02-09 14:55:02 | larry | set | messages: + msg361652
2020-02-09 14:42:49 | mark.dickinson | set | messages: + msg361651
2020-02-09 13:57:10 | larry | set | messages: + msg361650
2020-02-04 15:19:11 | vstinner | set | messages: + msg361351
2020-02-03 18:43:25 | vxgmichel | set | files: + comparing_conversions.py
2020-02-03 18:40:47 | vxgmichel | set | files: + Comparing_conversions_over_5_us.png; messages: + msg361319
2020-02-03 17:57:54 | mark.dickinson | set | messages: + msg361316
2020-02-03 17:51:23 | mark.dickinson | set | messages: + msg361314
2020-02-03 16:35:22 | vxgmichel | set | messages: + msg361309
2020-02-03 16:35:11 | vstinner | set | messages: + msg361308
2020-02-03 16:04:09 | serhiy.storchaka | set | messages: + msg361306
2020-02-03 14:47:56 | vstinner | set | messages: + msg361304
2020-02-03 14:36:56 | serhiy.storchaka | set | messages: + msg361301
2020-02-03 14:28:55 | larry | set | messages: + msg361300
2020-02-03 14:19:38 | vxgmichel | set | messages: + msg361296
2020-02-03 13:37:38 | serhiy.storchaka | set | nosy: + lemburg, rhettinger, mark.dickinson, stutzbach; messages: + msg361292
2020-02-03 12:54:58 | vstinner | set | messages: + msg361288
2020-02-03 12:22:21 | vxgmichel | set | files: + comparing_errors.py
2020-02-03 12:21:32 | vxgmichel | set | files: + Comparing_division_errors_over_10_us.png; messages: + msg361286
2020-02-01 01:18:55 | larry | set | messages: + msg361144
2020-02-01 01:17:49 | larry | set | messages: + msg361143
2020-02-01 01:14:37 | vstinner | set | messages: + msg361142
2020-02-01 01:10:45 | vstinner | set | messages: + msg361141
2020-02-01 01:01:18 | vstinner | set | messages: + msg361140
2020-02-01 00:59:08 | vstinner | set | messages: + msg361139
2020-01-31 22:13:40 | serhiy.storchaka | set | nosy: + serhiy.storchaka
2020-01-31 20:45:42 | rhettinger | set | nosy: + larry
2020-01-31 20:30:57 | terry.reedy | set | nosy: + vstinner
2020-01-29 18:26:46 | vxgmichel | set | messages: + msg360980
2020-01-29 13:02:45 | vxgmichel | create |