
Author prjsf
Date 2001-12-28.22:12:59
Content

Most software will trust localtime() and gmtime() to
interpret the clock.  It's only a problem here because
you're testing (and thus doubting) the Python bindings to
those functions, and rather than checking them against some
other use of the C functions, you check them against a
precomputed string, which amounts to doubting the C
functions as well.  C's localtime and gmtime give correct
results on my system, because I'm using a time zone
designed for a clock that counts all seconds.  Don't doubt
the C functions; they aren't what you're trying to test.
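
For illustration, here is a rough sketch of the kind of
discrepancy involved.  The epoch value and the 22-second
offset (the leap-second count as of late 2001) are my
assumptions, not taken from the test:

    import time

    t = 1009577579   # illustrative value near this message's date
    print(time.asctime(time.gmtime(t)))

    # On a clock that does not count leap seconds (POSIX-style):
    #     Fri Dec 28 22:12:59 2001
    # On a clock that counts all seconds (e.g. an Olson "right/"
    # zone, where gmtime() subtracts the accumulated leap seconds):
    #     Fri Dec 28 22:12:37 2001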

I already explained the point of keeping the clock this way
(the TAI scale): it simplifies interval calculations (the
difference t1-t0 actually tells you how many seconds passed
between those times), and it makes no clock values
ambiguous.  The NTP scale (counting only non-leap seconds)
is a horrible bit of backward compatibility.  By depending
on it, you punish those with well-configured systems to
reward those with badly-configured systems.
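
As a concrete illustration of the interval point, take two
readings one real second apart, straddling the leap second
inserted at the end of 1998-12-31.  The epoch values below
are my own reconstruction, not anything from the report:

    # Clock that counts all seconds: the leap second gets its own
    # epoch value, so subtraction gives the real elapsed time.
    t0 = 915148821   # 1998-12-31 23:59:60 UTC (the leap second)
    t1 = 915148822   # 1999-01-01 00:00:00 UTC
    print(t1 - t0)   # 1

    # POSIX/NTP-style clock: the leap second has no epoch value of
    # its own (the counter typically repeats or freezes), so the
    # same two instants can come out as the same number.
    u0 = 915148800   # still 1998-12-31 23:59:60, value repeated
    u1 = 915148800   # 1999-01-01 00:00:00 UTC
    print(u1 - u0)   # 0 -- one real second has gone missing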

I mentioned an easy fix, which is to compare the computed
string to each of the values seen above, and to pass the
test if it matches either of them.  This will still fail for
people who use even more exotic setups, but I don't know of
any such.  Is there anything wrong with this?  I think the
benefit easily outweighs the cost.
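
A minimal sketch of what that fix could look like, assuming
the two acceptable strings are the leap-second-ignoring and
leap-second-counting renderings of the same instant (the
values and the surrounding test structure are my guesses,
not the actual test code):

    import time

    def check_asctime():
        t = 1009577579   # illustrative epoch value
        computed = time.asctime(time.gmtime(t))
        acceptable = (
            'Fri Dec 28 22:12:59 2001',  # clock ignoring leap seconds
            'Fri Dec 28 22:12:37 2001',  # clock counting all seconds
        )
        assert computed in acceptable, computed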
History
Date                 User   Action  Args
2007-08-23 13:58:25  admin  link    issue497162 messages
2007-08-23 13:58:25  admin  create