Message196956
Oh, I'm not opposed, I'm just complaining ;-)
It would be much nicer to have an approach that worked for all thread users, not just threading.Thread users. For example, a user can easily (well, plausibly) get into the same kinds of troubles here by calling _thread.start_new_thread() directly, then waiting for their threads "to end" before letting the program finish - they have no idea either when their tstates are actually destroyed.
A high-probability way to "appear to fix" this for everyone could change Py_EndInterpreter's
    if (tstate != interp->tstate_head || tstate->next != NULL)
        Py_FatalError("Py_EndInterpreter: not the last thread");
to something like
    int count = 0;
    while (tstate != interp->tstate_head || tstate->next != NULL) {
        ++count;
        if (count > SOME_MAGIC_VALUE)
            Py_FatalError("Py_EndInterpreter: not the last thread");
        sleep(SOME_SHORT_TIME);
    }
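The bounded-retry idea in that C sketch can be illustrated in Python (names like `max_tries` and `delay` are illustrative, standing in for SOME_MAGIC_VALUE and SOME_SHORT_TIME): poll the condition repeatedly, and only give up after a capped number of attempts instead of failing on the first check.

```python
import time

def wait_until(predicate, max_tries=100, delay=0.01):
    # Poll `predicate` until it holds, sleeping briefly between
    # attempts; give up (return the final result) after max_tries
    # tries rather than treating the first failure as fatal.
    for _ in range(max_tries):
        if predicate():
            return True
        time.sleep(delay)
    return predicate()
```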
In the meantime ;-), you should change this part of the new .join() code:
    if endtime is not None:
        waittime = endtime - _time()
        if not lock.acquire(timeout=waittime):
            return
The problem here is that we have no idea how much time may have elapsed before computing the new `waittime`. So the new `waittime` _may_ be negative, in which case we've already timed out - but passing a negative `waittime` to acquire() means "wait as long as it takes to acquire the lock". So this block should return if waittime < 0.
Date                | User       | Action | Args
2013-09-04 21:09:25 | tim.peters | set    | recipients: + tim.peters, jcea, csernazs, ncoghlan, pitrou, grahamd, neologix, python-dev, Tamas.K
2013-09-04 21:09:25 | tim.peters | set    | messageid: <1378328965.4.0.848447681959.issue18808@psf.upfronthosting.co.za>
2013-09-04 21:09:25 | tim.peters | link   | issue18808 messages
2013-09-04 21:09:24 | tim.peters | create |