This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python's Developer Guide.

Author petr.viktorin
Recipients petr.viktorin, vinay.sajip
Date 2016-09-01.15:15:59
There are two "barrier"-like abstractions in QueueListener._monitor in Lib/logging/.

First, _monitor has two loops, which is already a hint that something is not right.

Second, it has two ways to exit the loop, each of which also exits the thread:
1) The _stop threading.Event is "set"
2) The _sentinel object is added to the queue

The problem is that, according to the documentation, the correct way to avoid losing records is to call the stop method; but stop merely sets the _stop event and then adds the _sentinel object to the queue.

When the loop notices that _stop is set, it exits and enters a second version of the loop, again trying to see the _sentinel object, but this time with non-blocking reads.
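Roughly, the two signals and the two loops look like the following self-contained sketch. This is a simplified stand-in for the stdlib's QueueListener, not its exact code (handler bookkeeping, task_done calls, and the worker thread itself are omitted, and the class name ListenerSketch is mine):

```python
import queue
import threading

class ListenerSketch:
    _sentinel = None  # the stdlib uses None as its sentinel record

    def __init__(self):
        self.queue = queue.Queue()
        self._stop = threading.Event()
        self.handled = []

    def handle(self, record):
        self.handled.append(record)

    def stop(self):
        # Two separate signals, which the monitor thread can observe
        # in either order relative to pending records:
        self._stop.set()
        self.queue.put_nowait(self._sentinel)

    def _monitor(self):
        # Loop 1: blocking reads until _stop is set or the sentinel arrives.
        while not self._stop.is_set():
            record = self.queue.get()          # blocks
            if record is self._sentinel:
                return
            self.handle(record)
        # Loop 2: _stop was seen first; drain with NON-blocking reads.
        # If a pending put() is not yet visible to get(block=False)
        # (e.g. with a queue whose puts complete asynchronously),
        # records -- or even the sentinel -- can be missed here.
        while True:
            try:
                record = self.queue.get(block=False)
            except queue.Empty:
                return
            if record is self._sentinel:
                return
            self.handle(record)
```

With a plain queue.Queue and a single thread this drains cleanly; the danger is precisely the window where _stop is observed before the already-issued puts become visible to the non-blocking reads of the second loop.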

The test case shows the problem. It also hints at the race condition: running the test case under "taskset 1" (i.e. pinned to a single CPU) works, so to reproduce the issue, run the test in a multiprocessor environment.

The proper solution would be a proper locking mechanism. Failing that, the _stop event should not be used at all, and the listener should rely only on seeing the _sentinel object; this is what the DeterministicQueueListener class in the test case does.
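A sentinel-only monitor loop, in the spirit of the test case's DeterministicQueueListener, could look like the sketch below. The class body is an illustrative reconstruction under that idea, not the actual test code:

```python
import queue
import threading

class SentinelOnlyListener:
    """Drains the queue with blocking reads and exits only on the
    sentinel, so no event/queue ordering race can drop records."""
    _sentinel = None

    def __init__(self):
        self.queue = queue.Queue()
        self.handled = []

    def handle(self, record):
        self.handled.append(record)

    def start(self):
        self._thread = threading.Thread(target=self._monitor, daemon=True)
        self._thread.start()

    def _monitor(self):
        # One loop, one exit condition: the sentinel itself.
        while True:
            record = self.queue.get()  # block until something arrives
            if record is self._sentinel:
                break
            self.handle(record)

    def stop(self):
        # The only signal needed is the sentinel; FIFO ordering
        # guarantees every record enqueued before it is handled first.
        self.queue.put(self._sentinel)
        self._thread.join()
```

Because the queue is FIFO, the sentinel enqueued by stop() necessarily comes after every record already put by the same producer, so stop() cannot skip pending records.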

(Reported by Paulo Andrade at )
Date User Action Args
2016-09-01 15:15:59  petr.viktorin  set recipients: + petr.viktorin, vinay.sajip
2016-09-01 15:15:59  petr.viktorin  set messageid: <>
2016-09-01 15:15:59  petr.viktorin  link issue27930 messages
2016-09-01 15:15:59  petr.viktorin  create