classification
Title: test_touch_common failure under Windows
Type: behavior
Stage: resolved
Components: Library (Lib), Tests
Versions: Python 3.4

process
Status: closed
Resolution: fixed
Dependencies:
Superseder:
Assigned To:
Nosy List: BreamoreBoy, akuchling, brian.curtin, jkloth, loewis, pitrou, python-dev, steve.dower, tim.golden, tim.peters
Priority: normal
Keywords:

Created on 2013-11-22 17:42 by pitrou, last changed 2015-04-13 16:47 by akuchling. This issue is now closed.

Messages (43)
msg203817 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 17:42
Path.touch() doesn't seem to change the mtime under Windows, which leads to the following failure:

======================================================================
FAIL: test_touch_common (test.test_pathlib.PathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_pathlib.py", line 1391, in test_touch_common
    self.assertGreaterEqual(p.stat().st_mtime, old_mtime)
AssertionError: 1385140662.458926 not greater than or equal to 1385140662.4589267

======================================================================
FAIL: test_touch_common (test.test_pathlib.WindowsPathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_pathlib.py", line 1391, in test_touch_common
    self.assertGreaterEqual(p.stat().st_mtime, old_mtime)
AssertionError: 1385140663.098527 not greater than or equal to 1385140663.098528

Can anyone enlighten me about the semantics of st_mtime on Windows?
msg203832 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 18:43
Now that I think about it, I didn't encounter the failure when testing in my own Windows VM... huh.
msg203849 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 19:47
And indeed, the test seems to pass on another Windows buildbot:

http://buildbot.python.org/all/builders/x86%20Windows%20Server%202008%20%5BSB%5D%203.x

(note: my VM is under Windows 7)
msg203860 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-22 20:28
I don't quite understand the test: why is it dating back the file by 10s, but then not using the dated-back time stamp for anything?
msg203866 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 20:32
> I don't quite understand the test: why is it dating back the file by
> 10s, but then not using the dated-back time stamp for anything?

Well, it does, unless I'm misunderstanding your question:

        self.assertGreaterEqual(p.stat().st_mtime, old_mtime)

This checks that the mtime has indeed been changed to something newer.
msg203871 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-22 20:55
IIUC, the sequence of events is this:
1. touch
2. read old_mtime
3. date back 10s
4. touch
5. read mtime

So the time stamp that is set in step 3 is never read, correct? So there is no test that it is newer than the 10s-old stamp, but only newer than the recent stamp (step 2)?
msg203877 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 21:03
> IIUC, the sequence of events is this:
> 1. touch
> 2. read old_mtime
> 3. date back 10s
> 4. touch
> 5. read mtime
> 
> So the time stamp that is set in step 3 is never read, correct? So
> there is no test that it is newer than the 10s-old stamp, but only
> newer than the recent stamp (step 2)?

Indeed, the test is that step 4 overrides the timestamp set in step 3
with something that represents "now"; and the heuristic for that is that
the mtime in step 5 is at least as fresh as the mtime in step 2 (the
old_mtime).

So step 3 serves to make sure that the test isn't being fooled by a
coarse timestamp granularity. Another way of doing the same thing (but
more costly) would be to call time.sleep(several seconds).
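The sequence under discussion can be sketched as follows; this is a simplified stand-in for test_touch_common, not the test's actual code (the temporary path and the 10-second offset are assumptions for illustration):

```python
import os
import pathlib
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d) / "f"
    p.touch()                              # step 1: create the file
    old_mtime = p.stat().st_mtime          # step 2: record "now"
    t = time.time() - 10
    os.utime(str(p), (t, t))               # step 3: date the file back 10 s
    p.touch()                              # step 4: touch again
    new_mtime = p.stat().st_mtime          # step 5: read the mtime back
    assert new_mtime >= old_mtime          # must be at least as fresh as step 2
```

The 10-second offset in step 3 is what defeats coarse timestamp granularity without a sleep: any filesystem rounding is far smaller than 10 s, so step 4 should still land the mtime at or after the step-2 value.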
msg203882 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-22 21:17
I think I found the problem. In one run, the current time (as reported by time.time()) was

1385154213.291315

On the first touch call (in step 1), the file is not there, and gets its current time internally from the system (not sure which part exactly assigns the time stamp). The resulting nanosecond/dwLowDateTime was

291315800 1303049222

Then, the utime call in step 4 asked to set this to 

291315078 1303049214

When reading the timestamp back, I get

291315000 1303049214

So in analysis/interpretation
A) the time.time() value is apparently 722ns earlier than the system time (plus NTFS needs to round to the next multiple of 100ns)
B) when setting the time, the requested nanosecond value isn't exactly representable, so it is rounded down by 78ns.
C) as a consequence, the newer timestamp ends up 800ns before the old one.
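The 78ns round-down in B can be seen directly from the float involved (the value is taken from the run above):

```python
from decimal import Decimal

# The dated-back stamp is handled as a Python float; the nearest double to
# ...291315 actually carries the nanoseconds ...291315078, which the 100ns
# FILETIME grid then truncates to ...291315000.
t = 1385154213.291315
print(Decimal(t))   # shows the exact binary value, 1385154213.29131507...
```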
msg203889 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 21:34
> I think I found the problem. In one run, the current time (as reported
> by time.time()) was
> 
> 1385154213.291315
> 
> On the first touch call (in step 1), the file is not there, and gets
> its current time internally from the system (not sure which part
> exactly assigns the time stamp). The resulting
> nanosecond/dwLowDateTime was
> 
> 291315800 1303049222
> 
> Then, the utime call in step 4 asked to set this to 

You mean step 3, right? ("date back 10s")

> 291315078 1303049214
> 
> When reading the timestamp back, I get
> 
> 291315000 1303049214

Ok, but... the problem is that touch() in step 5 should bump back the
timestamp to 1303049222 plus some nanoseconds. Not leave it at
1303049214. The test is written so that the nanoseconds should be
irrelevant.

(note also how far 1385154213 is from 1303049222, but the test is
careful to avoid that)
msg203900 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 22:06
To recap a bit, here is a very simplified view of the test:

A. path is opened for writing (and then closed)
B. its st_mtime is recorded in old_mtime
C. path is opened again for writing (and closed)
D. assert `path's current mtime` >= old_mtime

Whatever the details of Windows filesystem timestamps, it should be a no-brainer that the assertion passes :-) But it fails by some nanoseconds on some Windows buildbots (not all):

  File "C:\buildbot.python.org\3.x.kloth-win64\build\lib\test\test_pathlib.py", line 1391, in test_touch_common
    self.assertGreaterEqual(p.stat().st_mtime, old_mtime)
AssertionError: 1385156382.902938 not greater than or equal to 1385156382.9029381

  File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows7\build\lib\test\test_pathlib.py", line 1391, in test_touch_common
    self.assertGreaterEqual(p.stat().st_mtime, old_mtime)
AssertionError: 1385150397.383464 not greater than or equal to 1385150397.3834648

  File "D:\cygwin\home\db3l\buildarea\3.x.bolen-windows\build\lib\test\test_pathlib.py", line 1391, in test_touch_common
    self.assertGreaterEqual(p.stat().st_mtime, old_mtime)
AssertionError: 1385157186.106778 not greater than or equal to 1385157186.1067784
msg203903 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-22 22:20
New changeset 4101bfaa76fe by Antoine Pitrou in branch 'default':
Try to debug issue #19715
http://hg.python.org/cpython/rev/4101bfaa76fe
msg203907 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-22 22:56
FYI, the test fails on my box (32-bit Windows Vista) about 1 time in 3.  Here's the latest failure:

AssertionError: 1385160333.612968 not greater than or equal to 1385160333.6129684

And here's another:

AssertionError: 1385160530.348423 not greater than or equal to 1385160530.3484235

I can't dig into it more now.  Possibilities:

1. The clock isn't always monotonic.
2. MS uses an insane (non-monotonic) rounding algorithm (to move to a 100ns boundary).
3. Python uses an insane (non-monotonic) scheme to convert the time to a float.
msg203909 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-22 23:01
> 3. Python uses an insane (non-monotonic) scheme to convert the time
> to a float.

Yes, I was at least hoping to clear that out. The code is there:
http://hg.python.org/cpython/file/4101bfaa76fe/Modules/posixmodule.c#l1459

Other than that, I suppose I can hack the test if the discrepancy is never more than 1 µs :-/...
msg203910 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2013-11-22 23:02
I don't have any extra insight into this. The documented resolution for mtime on NTFS is 100ns (2s on FAT32), so without delaying by at least that long you're not going to see an official change. The noise is probably from floating-point conversions (in _PyTime_ObjectToDenominator, at a guess).

Maybe you want the test to be a bit more generous with the bounds:

1. touch
2. get original_mtime
3. utime(10s ago)
4. verify p.stat().st_mtime < original_mtime
5. store old_mtime
6. touch
7. verify p.stat().st_mtime > old_mtime
OR
7. verify original_mtime - 0.001 < p.stat().st_mtime < original_mtime + 0.001
msg203911 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2013-11-22 23:04
> 7. verify original_mtime - 0.001 < p.stat().st_mtime < original_mtime + 0.001

Actually, don't check the upper-bound here... that's a bad idea :)
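Steve's proposed sequence, minus the retracted upper bound, might look like this (a sketch; the names and the 10-second offset are assumptions):

```python
import os
import pathlib
import tempfile
import time

with tempfile.TemporaryDirectory() as d:
    p = pathlib.Path(d) / "f"
    p.touch()                               # 1. create the file
    original_mtime = p.stat().st_mtime      # 2. record the fresh mtime
    t = time.time() - 10
    os.utime(str(p), (t, t))                # 3. date it back 10 s
    old_mtime = p.stat().st_mtime           # 4-5. check and store the stamp
    assert old_mtime < original_mtime       #      the date-back took effect
    p.touch()                               # 6. touch again
    final_mtime = p.stat().st_mtime
    assert final_mtime > old_mtime          # 7. well clear of the 10 s gap
```

The advantage over the original test is that step 7 compares against the dated-back stamp, so the assertion has a ~10 s margin instead of a sub-microsecond one.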
msg203913 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-22 23:11
Here's a failure with the patch:

    self.assertGreaterEqual(st.st_mtime_ns, old_mtime_ns)
AssertionError: 1385161652120374900 not greater than or equal to
1385161652120375500

And another:

AssertionError: 1385161754170484000 not greater than or equal to
1385161754170484500

And one more:

AssertionError: 1385161795681499000 not greater than or equal to
1385161795681499500

I'll stare at the code later - have to leave now.  Please fix it
before I get back ;-)

On Fri, Nov 22, 2013 at 5:04 PM, Steve Dower <report@bugs.python.org> wrote:
>
> Steve Dower added the comment:
>
>> 7. verify original_mtime - 0.001 < p.stat().st_mtime < original_mtime + 0.001
>
> Actually, don't check the upper-bound here... that's a bad idea :)
msg203914 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-22 23:18
New changeset 716e41100553 by Victor Stinner in branch 'default':
Issue #19715: Ensure that consecutive calls to monotonic() are monotonic
http://hg.python.org/cpython/rev/716e41100553
msg203928 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 00:03
>> Then, the utime call in step 4 asked to set this to 
> You mean step 3, right? ("date back 10s")

No, I really meant step 4 (touch). The second touch call *really* steps backwards. I don't find that very surprising, since it is actually the first of the two calls to pass an explicit time (the initial touch lets the system assign the stamp).

I can readily believe that Windows uses a different rounding algorithm when creating a fresh file than Python uses when converting a float to a FILETIME.

> (note also how far 1385154213 is from 1303049222, but the test is
> careful to avoid that)

These are really entirely different values. 1385154213 is the number of seconds since 1970. 1303049222 is the dwLowDateTime of that timestamp, i.e. the lower 32-bit in units of 100 ns since 1601.

This test *does* fail because of a few nanoseconds - I don't understand why you claim that it cannot.
msg203929 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 00:27
> No, I really meant step 4 (touch). The second touch call *really* 
> steps backwards. I don't find that very surprising, since it is
> actually the first call that passes an actual time.

Ok, now I see what you mean. Indeed, we are calling utime() on the file when it already exists, which may explain the discrepancy.

I guess we can go for a simple fix, then...
msg203930 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-23 00:33
New changeset 602062d2a008 by Antoine Pitrou in branch 'default':
Try to fix issue #19715 (timestamp rounding inconsistencies under Windows?)
http://hg.python.org/cpython/rev/602062d2a008
msg203932 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 00:42
Looking at the ReactOS sources, it appears that Windows doesn't do any rounding, so it's all Python's fault. Windows keeps its system time (in KUSER_SHARED_DATA) in essentially a FILETIME representation (i.e. units of 100ns since 1601), and never needs to convert it. So CreateFile just uses the current system time as-is when filling the LastWriteTime.

I don't think it's enough for Python to convert these monotonically. The error would already occur if Python cannot convert bijectively, i.e. if getting the system time into a float and back won't yield the same FILETIME. It might just be that a 64-bit float value (with a 53-bit mantissa) is too imprecise for that operation. A FILETIME currently has 56 significant bits.
msg203934 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 00:44
Looking at one of the failure instances:

>>> math.frexp(1385150397.383464)[0] * 2**53
5809741852347053.0
>>> math.frexp(1385150397.3834648)[0] * 2**53
5809741852347056.0

It doesn't seem that lack of FP precision would be sufficient to explain the problem, although it's true that it is quite close.
msg203935 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 00:45
OTOH, a time.time() in units of 100ns needs 53.6 bits, so it should "almost" fit.
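The bit count is easy to confirm, using a timestamp of the same era as those above:

```python
import math

# A 2013-era Unix time expressed in 100ns units needs ~53.6 bits -- just
# past the edge of a double's 53-bit mantissa.
t_100ns = 13851542132913158          # 1385154213.2913158 s in 100ns units
bits = math.log2(t_100ns)
print(bits)                          # ~53.6
```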
msg203938 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-23 00:55
New changeset d647a4a8505e by Antoine Pitrou in branch 'default':
Trying other strategy for #19715: use utime(..., None)
http://hg.python.org/cpython/rev/d647a4a8505e
msg203940 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 01:08
I thought that using utime(..., None) would be better, but it's actually much worse: it calls GetSystemTime, which only has millisecond precision, while time.time() calls GetSystemTimeAsFileTime which has better precision.

(this is probably a bug in utime())
msg203941 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-23 01:11
New changeset a273f99159e0 by Antoine Pitrou in branch 'default':
Revert utime(..., None) strategy (it has too poor resolution under Windows) and restore the previous test workaround
http://hg.python.org/cpython/rev/a273f99159e0
msg203964 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 03:52
One "obvious" conversion error is this code from 

http://hg.python.org/cpython/file/4101bfaa76fe/Python/pytime.c#l35

microseconds = large.QuadPart / 10 - 11644473600000000;

This discards the 100ns part of the current time, rounding it down to the previous microsecond. As a result, the second time stamp can be up to a microsecond earlier than the previous one.
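The truncation can be shown with integer arithmetic (the tick count is an assumed example value; the constant is the 1601-to-1970 epoch offset in microseconds):

```python
EPOCH_DELTA_US = 11644473600000000    # seconds from 1601 to 1970, in µs

# A FILETIME-style tick count (100ns units since 1601) corresponding to
# 1385154213.2913158 s since 1970:
quad = 130296278132913158
microseconds = quad // 10 - EPOCH_DELTA_US
print(microseconds)   # 1385154213291315 -- the trailing 800 ns are gone
```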
msg203965 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-23 04:34
[MvL]
> One "obvious" conversion error is this code from 
>
> http://hg.python.org/cpython/file/4101bfaa76fe/Python/pytime.c#l35
>
> microseconds = large.QuadPart / 10 - 11644473600000000;
>
> This discard the 100ns part of the current time, rounding it
> down to the previous microsecond. As a result, the second
> time stamp can be up to a microsecond earlier than the previous
> one.

I don't think so, on two counts.  First, if *both* timestamps went through this code, it's monotonically non-decreasing.  That is, if ts1 and ts2 are two full-precision (100ns resolution) timestamps, and ts1 < ts2, then calling the transformation above `T` we'll always have T(ts1) <= T(ts2).  T(ts1) can be equal to T(ts2) even if ts1 < ts2, but T(ts1) > T(ts2) can't happen (because C's unsigned integer division truncates).

Second, the version of the test code that compared .st_mtime_ns members showed timestamps with 100-ns resolution.  Like:

    self.assertGreaterEqual(c_mtime_ns, old_mtime_ns)
AssertionError: 1385179768290263800 not greater than or equal to 1385179768290264500

Note the trailing "800" and "500" in those.  So it's not possible that those timestamps came from a _PyTime_timeval (which the code you showed is constructing) - that struct records no info about nanoseconds.

There's another clue I don't know what to make of: I've seen dozens of these failures now, and I believe that in every case the original timestamp (the one that should be older) _always_ ended with "500" (in st_mtime_ns format).  That's true in the example I posted just above, and also true of the 3 examples I posted in msg203913.

LOL!  So much for that clue - while I was typing I got this failure:

AssertionError: 1385181174851605800 not greater than or equal to 1385181174851606000

So the older timestamp in failing cases always ends with "500" or "000" ;-)
msg203967 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2013-11-23 05:09
The "000" or "500" still smells of floating point rounding to me. Windows appears to round-trip the values provided perfectly:

import ctypes
def do_test(t):
    h = ctypes.windll.kernel32.CreateFileW("test.txt", 0xC0000000, 7, None, 2, 0, 0)
    assert h != -1
    try:
        mt1 = ctypes.c_uint64(t)
        assert ctypes.windll.kernel32.SetFileTime(h, None, None, ctypes.byref(mt1)) != 0
        mt2 = ctypes.c_uint64()
        assert ctypes.windll.kernel32.GetFileTime(h, None, None, ctypes.byref(mt2)) != 0
        assert mt1.value == mt2.value
        print(mt2.value)
    finally:
        assert ctypes.windll.kernel32.CloseHandle(h) != 0

>>> do_test(123)
123
>>> do_test(999999999999999999)
999999999999999999

Now, I'm not going to make any claims about GetSystemTime's accuracy, and there could well be floating point conversions within there that cause the rounding issue, but I'm more inclined to think that one of Python's conversions is at fault.

Of course, an easy way around this would be to sleep for >100ns before touching the file again.
msg203968 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-23 06:16
Steve, I'm afraid sleeping 100ns wouldn't be enough. The more data I collect, the more bizarre this gets :-(

Across 300 runs I recorded the difference, in nanoseconds, between the "old" and "new" timestamps.  A negative difference is a test failure.  I was very surprised (for one thing) to see how few *distinct* values there were:

    -600:   5
    -500:   9
    -200:   1
    -100:   8
       0:  13
     100:   1
  975800:   7
  975900:  58
  976000:  69
  976100:   7
  976300:   9
  976400:  48
  976500:  52
  976600:   6
 1952400:   1
 1952500:   1
 1952800:   1
 1952900:   3
 1953000:   1
          ---
          300

So the vast bulk of the differences were close to a millisecond (1e6 nanoseconds), and a handful at the tail close to 2 milliseconds.  Anyone know the precision of NTFS file creation time?  I know the time structure is _capable_ of 100ns resolution, but the numbers above strongly suggest the precision is a lot closer to a millisecond.

Anyway, on the failure end, the biggest difference seen was 600 nanoseconds.  A 100ns sleep wouldn't cover that ;-)
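An aside not raised in the thread: the dominant gap clusters near 976 µs, which is suspiciously close to 1/1024 of a second, the timer-interrupt period Windows commonly uses when a 1 ms multimedia timer is active. This is an observation about the numbers, not something the thread confirms:

```python
# 1/1024 s expressed in nanoseconds falls inside the observed
# 975800-976600 ns cluster of mtime differences.
tick_ns = 10**9 // 1024
print(tick_ns)   # 976562
```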
msg203973 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-23 07:06
I have a different theory about this.  As explained all over the place, on FAT file creation times are good to 10 milliseconds but modification times only good to 2 seconds.  But I can't find one credible word about what the various precisions are for NTFS.  For example,

http://msdn.microsoft.com/en-us/library/windows/desktop/ms724290(v=vs.85).aspx

only gives details about FAT.

Anyway, suppose internal system time is 4.5123 seconds.  On FAT we create a file, and it's recorded as 4.51 seconds.  We immediately modify it, and that's recorded as 4 seconds.  Oops!  The file was modified before it was created :-)

If NTFS similarly records creation times with greater resolution than modification times, then obviously something similar could happen.

Could that explain the test failures?  Possibly.  The file doesn't exist at first, so it's plausible that the initial "modification time" retrieved is really the file creation time.  And that the later modification time really is a modification time.
msg203976 - (view) Author: Steve Dower (steve.dower) * (Python committer) Date: 2013-11-23 07:39
Or as Martin suggested earlier, time.time() could be returning different values from what the system uses when it creates the file (which I assume is GetSystemTimeAsFileTime/SetFileTime or the kernel-mode equivalent).

I only looked briefly at the touch() implementation, but I believe if the file doesn't exist it just creates it and doesn't try to set the time?

So in the first touch(), the system will set the time. For the second touch(), the file exists and so Python calls utime(... time.time()).
msg203980 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 09:11
Tim,

> First, if *both* timestamps went through this code, it's monotonically 
> non-decreasing.   ts1 < ts2 [...] but T(ts1) > T(ts2) can't happen

It's as Steve says (and I tried to say before): What *can* happen is that ts1 > T(ts2). ts1 is used by Windows itself as the file stamp when the file is created, as the first .touch() call doesn't set the timestamp (it doesn't call os.utime()). Only ts2 goes through pygettimeofday().

For my original example, we have ts1 = ts2 = 1385154213.291315800, T(ts2) = 1385154213.291315076. Converting this back to a FILETIME gives 1385154213.291315000.

>  Anyone know the precision of NTFS file creation time?

See for example 

http://doxygen.reactos.org/d8/d55/drivers_2filesystems_2ext2_2src_2write_8c_source.html#l00525

It's only ReactOS, and only the ext2 driver, but I believe the code should be similar for NTFS. The driver does KeQuerySystemTime, and then uses that to set the LastWriteTime. For ext2, there might be truncation, but for NTFS, the value is likely written as-is. So the question is really in which granularity the system time advances. I believe this is still a global variable, updated by the timer interrupt.
msg203981 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-23 09:53
Martin, I don't see how:

> What *can* happen is that ts1 > T(ts2)

_in this test_.  As shown in many failure examples *both* nanosecond timestamps had non-zero nanoseconds.  Like:

AssertionError: 1385161652120374900 not greater than or equal to
1385161652120375500

Anything coming from T() would have "000" at the end, not "900" or "500".  T() only retains microseconds.  Or do you disagree with that?

In any case, we're comparing apples to oranges here somehow ;-) If I add:

        p.touch()

right before:

        st = p.stat()
        old_mtime = st.st_mtime
        old_mtime_ns = st.st_mtime_ns

then we're certainly comparing a modification (as opposed to creation) time to a modification time, and I get no more errors.  The nanosecond-difference distribution across 300 runs changed to the obviously bi-modal:

       0: 193
  975800:  12
  975900:   7
  976000:   5
  976100:  29
  976800:  10
  977000:  35
  977100:   9
          ---
          300

I suggest too that's a better way to fix the test than the current

            delta = 1e-6 if os.name == 'nt' else 0

dance.  There's no justification for 1e-6 beyond "well, it seemed to work", and I have shown failure cases with a delta as large as 0.6e-6.
msg203985 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 10:43
On Sat, 2013-11-23 at 09:53 +0000, Tim Peters wrote:
> I suggest too that's a better way to fix the test than the current
> 
>             delta = 1e-6 if os.name == 'nt' else 0
> 
> dance.

Except that it would test something else (i.e. it wouldn't test the
st_mtime value when the file is created).
msg203989 - (view) Author: Martin v. Löwis (loewis) * (Python committer) Date: 2013-11-23 11:42
> AssertionError: 1385161652120374900 not greater than or equal to
> 1385161652120375500
> Anything coming from T() would have "000" at the end, not "900" or "500". 

But "900" *is* "000" :-) 

A. t1=t2=1385161652120375500
B. pygettimeofday truncates this to 1385161652.120375
C. time.time() converts this to float, yielding
   0x1.4a3f8ed07b439p+30 i.e.
   (0.6450161580556887, 31)
   1385161652.120375 (really .1203749566283776)
D. _PyTime_ObjectToDenominator converts this to 
   1385161652.120374917
E. time_t_to_FILE_TIME converts this to 
   1385161652.120374900
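Martin's A-E chain can be replayed in pure Python (a sketch of the conversions, not the actual C code):

```python
t_ns = 1385161652120375500              # A: the FILETIME stamp, as ns since 1970
t_us = t_ns // 1000                     # B: pygettimeofday truncates to µs
f = t_us / 10**6                        # C: time.time() rounds to a double
sec = int(f)                            # D: split back into seconds...
ns = int((f - sec) * 10**9)             #    ...and (truncated) nanoseconds
back_ns = sec * 10**9 + (ns // 100) * 100   # E: FILETIME keeps 100ns units
print(back_ns)   # 1385161652120374900 -- 600 ns earlier than we started
```

This reproduces exactly the 600 ns regression seen in the first failure of msg203913.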
msg204003 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 13:15
Note that issue #19727 would possibly allow for an easy fix.
msg204029 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-11-23 14:26
New changeset b5e9d61f6987 by Antoine Pitrou in branch 'default':
Issue #19715: try the utime(..., None) approach again, now that it should be more precise under Windows
http://hg.python.org/cpython/rev/b5e9d61f6987
msg204060 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-23 17:22
Antoine, FYI, with the current code test_pathlib passed 500 times in a row on my box.  Success :-)
msg204062 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2013-11-23 17:24
> Antoine, FYI, with the current code test_pathlib passed 500 times in a 
> row on my box.  Success :-)

This has been a tedious one :-)
I'm gonna open a separate issue for the precision loss in pytime.c, then.
msg204148 - (view) Author: Tim Peters (tim.peters) * (Python committer) Date: 2013-11-24 01:33
[MvL]
> A. t1=t2=1385161652120375500
> B. pygettimeofday truncates this to 1385161652.120375
> C. time.time() converts this to float, yielding
>   0x1.4a3f8ed07b439p+30 i.e.
>   (0.6450161580556887, 31)
>   1385161652.120375 (really .1203749566283776)
> D. _PyTime_ObjectToDenominator converts this to
>   1385161652.120374917
> E. time_t_to_FILE_TIME converts this to
>   1385161652.120374900

Got it.  It's not surprising we can't round-trip, but it's annoying we can't always round-trip even if there are no nanoseconds to lose at the start :-(  The only part here that rounds is step C - everything else truncates.

For example, start with 1385161652120374000.  B is exact then, returning seconds 1385161652 and usecs 120374.

C doesn't do _much_ damage, returning

1385161652.120374 == 0x1.4a3f8ed07b435p+30

D. breaks that into
    1385161652.0
and
    0.12037396430969238
yielding seconds 1385161652 and numerator 120373964.  The last part is a little truncated, but the major loss comes in E, which chops off the final "64" - we end up changing

    1385161652120374000 into
    1385161652120373900

There's a reason C's time_t is almost always implemented as an integer type ;-)  I expect we'd be better off if we represented times internally as 64-bit ints, used double for the output of time.time(), and refused to accept doubles for any purpose - LOL ;-)
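(A historical footnote from the editor, not part of the thread: Tim's wish largely came true. PEP 564 later added integer nanosecond clocks such as time.time_ns() in Python 3.7, which sidestep the double round-trip entirely.)

```python
import time

# time.time_ns() (Python 3.7+, PEP 564) returns an int count of nanoseconds
# since the epoch -- no float mantissa to run out of.
t = time.time_ns()
print(t)
```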
msg222639 - (view) Author: Mark Lawrence (BreamoreBoy) * Date: 2014-07-09 21:12
#19727 is fixed so do we need issues for "the precision loss in pytime.c" from msg204062 and the analysis in msg204148 ?
msg240654 - (view) Author: A.M. Kuchling (akuchling) * (Python committer) Date: 2015-04-13 16:47
It looks like Antoine filed the precision-loss error as #19738 and that ticket is now closed.  I conclude there's nothing left to do on this ticket, so I'll close it.
History
Date                 User         Action  Args
2015-04-13 16:47:45  akuchling    set     status: open -> closed
                                          nosy: + akuchling
                                          messages: + msg240654
2014-07-09 21:12:05  BreamoreBoy  set     nosy: + BreamoreBoy
                                          messages: + msg222639
2013-11-24 01:33:30  tim.peters   set     status: pending -> open
                                          messages: + msg204148
2013-11-23 17:24:53  pitrou       set     status: open -> pending
                                          resolution: fixed
                                          messages: + msg204062
                                          stage: needs patch -> resolved
2013-11-23 17:22:29  tim.peters   set     messages: + msg204060
2013-11-23 14:26:05  python-dev   set     messages: + msg204029
2013-11-23 13:15:45  pitrou       set     messages: + msg204003
2013-11-23 11:42:30  loewis       set     messages: + msg203989
2013-11-23 10:43:40  pitrou       set     messages: + msg203985
2013-11-23 09:53:44  tim.peters   set     messages: + msg203981
2013-11-23 09:11:43  loewis       set     messages: + msg203980
2013-11-23 07:39:50  steve.dower  set     messages: + msg203976
2013-11-23 07:06:56  tim.peters   set     messages: + msg203973
2013-11-23 06:16:04  tim.peters   set     messages: + msg203968
2013-11-23 05:09:49  steve.dower  set     messages: + msg203967
2013-11-23 04:34:18  tim.peters   set     messages: + msg203965
2013-11-23 03:52:20  loewis       set     messages: + msg203964
2013-11-23 01:11:09  python-dev   set     messages: + msg203941
2013-11-23 01:08:52  pitrou       set     messages: + msg203940
2013-11-23 00:55:02  python-dev   set     messages: + msg203938
2013-11-23 00:45:56  loewis       set     messages: + msg203935
2013-11-23 00:44:50  pitrou       set     messages: + msg203934
2013-11-23 00:42:20  loewis       set     messages: + msg203932
2013-11-23 00:33:04  python-dev   set     messages: + msg203930
2013-11-23 00:27:55  pitrou       set     messages: + msg203929
2013-11-23 00:03:07  loewis       set     messages: + msg203928
2013-11-22 23:18:13  python-dev   set     messages: + msg203914
2013-11-22 23:11:28  tim.peters   set     messages: + msg203913
2013-11-22 23:04:33  steve.dower  set     messages: + msg203911
2013-11-22 23:03:00  steve.dower  set     messages: + msg203910
2013-11-22 23:01:01  pitrou       set     messages: + msg203909
2013-11-22 22:56:42  tim.peters   set     nosy: + tim.peters
                                          messages: + msg203907
2013-11-22 22:20:15  python-dev   set     nosy: + python-dev
                                          messages: + msg203903
2013-11-22 22:06:45  pitrou       set     nosy: + jkloth, steve.dower
                                          messages: + msg203900
2013-11-22 21:34:31  pitrou       set     messages: + msg203889
2013-11-22 21:17:18  loewis       set     messages: + msg203882
2013-11-22 21:03:55  pitrou       set     messages: + msg203877
2013-11-22 20:56:00  loewis       set     messages: + msg203871
2013-11-22 20:32:46  pitrou       set     messages: + msg203866
2013-11-22 20:28:26  loewis       set     nosy: + loewis
                                          messages: + msg203860
2013-11-22 19:47:49  pitrou       set     messages: + msg203849
2013-11-22 18:43:20  pitrou       set     nosy: + tim.golden, brian.curtin
                                          messages: + msg203832
2013-11-22 17:42:48  pitrou       create