Author pitrou
Recipients asvetlov, brian.curtin, flox, gps, jnoller, jyasskin, pitrou, torsten
Date 2010-03-21.01:36:45
Message-id <1269135416.44.0.592957294634.issue7316@psf.upfronthosting.co.za>
Content
Here is a new patch fixing most of your comments.

A couple of answers:

> I believe we can support arbitrary values here, subject to floating
> point rounding errors, by calling lock-with-timeout in a loop. I'm not
> sure whether that's a good idea, but it fits better with python's
> arbitrary-precision ints.

I'm a bit wary of this, because we can't test it properly.
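
(For illustration only, the looping approach would look roughly like the sketch below; acquire_timed(), current_time() and MAX_CHUNK are invented names standing in for a bounded acquire primitive, a clock and the largest representable wait, none of them taken from the patch.)

    /* Illustration of the suggestion above, not part of the patch:
     * honour an arbitrarily large timeout by waiting in chunks that
     * the low-level API can represent. */
    static int
    acquire_with_huge_timeout(PyThread_type_lock lock, double timeout)
    {
        double deadline = current_time() + timeout;
        for (;;) {
            double remaining = deadline - current_time();
            if (remaining <= 0)
                return 0;                    /* timed out */
            if (remaining > MAX_CHUNK)
                remaining = MAX_CHUNK;       /* clamp to a representable wait */
            if (acquire_timed(lock, remaining))
                return 1;                    /* lock acquired */
        }
    }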

> -        task_handler.join(1e100)
> +        task_handler.join()
> 
> Why is this change here?  (Mostly curiosity)

Because 1e100 would raise OverflowError :)
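
(For context: the seconds value ends up converted to a bounded microseconds count, and 1e100 seconds is far beyond what that type can hold. A standalone toy illustration, with TIMEOUT_MAX_US standing in for the real constant in pythread.h:)

    #include <stdio.h>

    /* TIMEOUT_MAX_US is only a stand-in for the real limit. */
    #define TIMEOUT_MAX_US (9223372036854775807LL / 1000)

    int main(void)
    {
        double seconds = 1e100;
        double microseconds = seconds * 1e6;
        if (microseconds >= (double)TIMEOUT_MAX_US)
            printf("too large: OverflowError\n");
        return 0;
    }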

> +               if (timeout > PY_TIMEOUT_MAX) {
> 
> I believe it's possible for this comparison to return false, but for
> the conversion to PY_TIMEOUT_T to still overflow:

Ok, I've replaced it with the following, which should be safe:

    if (timeout >= (double) PY_TIMEOUT_MAX) [...]
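
(To spell the reasoning out, with invented names rather than the patch's: (double) PY_TIMEOUT_MAX may round *up* to a value slightly above PY_TIMEOUT_MAX. With the old `>` test, a timeout exactly equal to that rounded constant slipped through and then overflowed the integer cast. With `>=` it is rejected, and any double strictly below the rounded constant is a full ULP lower, while the rounding moved the constant up by at most half a ULP, so such values still fit and the cast is safe.)

    /* Minimal sketch, not the patch's actual code. */
    static int
    convert_timeout(double timeout_us, PY_TIMEOUT_T *result)
    {
        if (timeout_us >= (double)PY_TIMEOUT_MAX)
            return -1;                       /* caller raises OverflowError */
        *result = (PY_TIMEOUT_T)timeout_us;  /* guaranteed to fit now */
        return 0;
    }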

> +               milliseconds = (microseconds + 999) / 1000;
> 
> Can (microseconds+999) overflow?

Indeed it can (I sincerely hoped that nobody would care...).
I've replaced it with a construct that avoids the overflow.
Please note that behaviour is undefined when microseconds exceeds the max timeout, though (this is the low-level C API).
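
(For reference, the standard way to get the same round-up without the overflow-prone addition is to divide first and then adjust for a remainder; whether the patch uses exactly this form is an assumption:)

    /* Sketch only: same result as (microseconds + 999) / 1000 for in-range
     * values, but the addition that could overflow is gone. */
    milliseconds = microseconds / 1000;
    if (microseconds % 1000 > 0)
        milliseconds++;          /* round up when there is a remainder */
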
History
Date                 User    Action  Args
2010-03-21 01:36:57  pitrou  set     recipients: + pitrou, jyasskin, gps, jnoller, brian.curtin, asvetlov, flox, torsten
2010-03-21 01:36:56  pitrou  set     messageid: <1269135416.44.0.592957294634.issue7316@psf.upfronthosting.co.za>
2010-03-21 01:36:54  pitrou  link    issue7316 messages
2010-03-21 01:36:53  pitrou  create