msg162062 - (view) |
Author: Johan Aires Rastén (JohanAR) |
Date: 2012-06-01 08:43 |
Python 2.7.3 (default, Apr 20 2012, 22:39:59)
[GCC 4.6.3] on linux2
If a signal handler calls Queue.PriorityQueue.put and a second signal is received while one is already being processed, neither of the calls to put will terminate.
Highly dependent on timing so it might be difficult to reproduce.
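The underlying mechanism can be demonstrated without any signal timing at all: queue.Queue serializes put() with a plain threading.Lock, and a second acquire from the same thread simply blocks. A minimal sketch (the timeout stands in for the handler's put() that would otherwise hang forever):

```python
import threading

lock = threading.Lock()  # queue.Queue guards put()/get() with a lock like this
lock.acquire()           # the interrupted put() already holds it

# A put() issued by a signal handler running in the same thread would block
# right here; a timeout lets us observe the failure instead of deadlocking.
reacquired = lock.acquire(timeout=0.1)
print(reacquired)  # False: a plain Lock cannot be re-entered
```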
|
msg162065 - (view) |
Author: Johan Aires Rastén (JohanAR) |
Date: 2012-06-01 09:44 |
Start queue_deadlock.py in one terminal and note the PID it prints.
Compile queue_sendint.c in another terminal and execute it with previous PID as only argument.
If the bug is triggered, nothing will print in the python terminal window when you press Enter. To terminate the application you can press Ctrl-\
|
msg162069 - (view) |
Author: Richard Oudkerk (sbt) * |
Date: 2012-06-01 12:02 |
I don't think there is anything special about PriorityQueue.
There is a similar issue concerning the use of the Python implementation of RLock in signal handlers -- see http://bugs.python.org/issue13697.
Maybe the signal handler should temporarily mask or ignore SIGINT while it runs.
|
msg162076 - (view) |
Author: Johan Aires Rastén (JohanAR) |
Date: 2012-06-01 14:43 |
I did read some more on the subject and it seems like using locks with interrupts is in general a very difficult problem and not limited to Python.
I don't know if this is considered common knowledge among programmers, or if it would be useful to add at least a warning to the signal documentation.
|
msg211436 - (view) |
Author: Itamar Turner-Trauring (itamarst) |
Date: 2014-02-17 18:50 |
This is not specifically a signal issue; it can happen with garbage collection as well if you have a Queue.put that runs in __del__ or a weakref callback function.
This can happen in real code. In my case, a thread that reads log messages from a Queue and writes them to disk. The thread putting log messages into the Queue can deadlock if GC happens to cause a log message to be written right after Queue.put() acquired the lock.
(see https://github.com/itamarst/crochet/issues/25).
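The finalizer variant can be sketched as follows. To keep the example observable, it probes the queue's internal `mutex` attribute (real, though undocumented) instead of actually calling put(), which would deadlock:

```python
import queue

q = queue.Queue()
held_during_del = []

class LoggingObject:
    def __del__(self):
        # A real finalizer would call q.put(log_message) here; if this same
        # thread is already inside q.put(), that call can never return.
        acquired = q.mutex.acquire(blocking=False)
        held_during_del.append(not acquired)
        if acquired:
            q.mutex.release()

with q.mutex:        # stand-in for "this thread is inside q.put()"
    LoggingObject()  # CPython refcounting runs __del__ immediately

print(held_during_del)  # [True]: the lock was already held
```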
|
msg275181 - (view) |
Author: mike bayer (zzzeek) * |
Date: 2016-09-08 21:59 |
SQLAlchemy suffered from this issue long ago as we use a Queue for connections, which can be collected via weakref callback and sent back to put(), which we observed can occur via gc. For many years (like since 2007 or so) we've packaged a complete copy of Queue.Queue with an RLock inside of it to work around it.
I'm just noticing this issue because I'm writing a new connection pool implementation that tries to deal with this issue a little differently (not using Queue.Queue though).
|
msg275377 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2016-09-09 18:38 |
Guido opposes having us go down the path of using an RLock and altering the queue module to cope with reentrancy.
In the case of mixing signals with threads, we all agree with Johan Aires Rastén that "using locks with interrupts is in general a very difficult problem and not limited to Python", nor is it even limited to the queue module.
I believe that the usual technique for mixing signals and threads is to have the signal handler do the minimal work necessary to record the event without interacting with the rest of the program. Later, another thread can act on the recorded signal but can do so with normal use of locks so that code invariants are not violated. This design makes reasoning about the program more tractable.
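The technique Raymond describes can be sketched like this (a hedged, Unix-only illustration; `raise_signal` stands in for an external sender, and in a real program the drain loop would live in a dedicated thread):

```python
import signal

pending = []  # a plain list: append is atomic, no lock to re-enter

def handler(signum, frame):
    # Minimal work only: record the event, touch no locks or queues.
    pending.append(signum)

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)  # deliver a signal to ourselves

# Later, ordinary (interruptible) code drains the record and may freely
# use locks, Queue.put(), etc., without violating any invariants.
events = [s == signal.SIGUSR1 for s in pending]
print(events)  # [True]
```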
|
msg275404 - (view) |
Author: Roundup Robot (python-dev) |
Date: 2016-09-09 19:31 |
New changeset 137806ca59ce by Gregory P. Smith in branch '3.5':
Add a note about queue not being safe for use from signal handlers.
https://hg.python.org/cpython/rev/137806ca59ce
New changeset 4e15e7618715 by Gregory P. Smith in branch 'default':
Add a note about queue not being safe for use from signal handlers.
https://hg.python.org/cpython/rev/4e15e7618715
|
msg275421 - (view) |
Author: mike bayer (zzzeek) * |
Date: 2016-09-09 20:16 |
Yep, that's what I'm doing in my approach. As a longer-term thing, though, I noticed it's very hard to find documentation on exactly when GC might run. E.g. would it ever run if I did something innocuous, like "self.thread_id = None"? (Probably not.) It just wasn't easy to get that answer.
|
msg275485 - (view) |
Author: Roundup Robot (python-dev) |
Date: 2016-09-09 22:59 |
New changeset 8c00cbbd3ff9 by Raymond Hettinger in branch '3.5':
Issue 14976: Note that the queue module is not designed to protect against reentrancy
https://hg.python.org/cpython/rev/8c00cbbd3ff9
|
msg291764 - (view) |
Author: Itamar Turner-Trauring (itamarst) |
Date: 2017-04-16 20:44 |
This bug was closed on the basis that signals + threads don't interact well, which is a good point.
Unfortunately this bug can happen in cases that have nothing to do with signals. If you look at the title and some of the comments it also happens as a result of garbage collection: in `__del__` methods and weakref callbacks.
Specifically, garbage collection can happen on any bytecode boundary and cause reentrancy problems with Queue.
The attached file is an attempt to demonstrate this: it runs and runs until GC happens and then deadlocks for me (on Python 3.6). I.e. it prints GC! and after that no further output is printed and no CPU usage reported.
Please re-open this bug: if you don't want to fix signal case that's fine, but the GC case is still an issue.
|
msg300421 - (view) |
Author: Nick Coghlan (ncoghlan) * |
Date: 2017-08-17 13:52 |
Itamar wrote up a post describing the GC variant of this problem in more detail: https://codewithoutrules.com/2017/08/16/concurrency-python/
In particular, he highlighted a particularly nasty action-at-a-distance variant of the deadlock where:
1. Someone registers a logging.handlers.QueueHandler instance with the logging system
2. One or more types in the application or libraries it uses call logging functions in a __del__ method or a weakref callback
3. A GC cycle triggers while a log message is already being processed and hence the thread already holds the queue's put() lock
4. Things deadlock because the put() operation isn't re-entrant
As far as I can see, there's no application level way of resolving that short of "Only register logging.handlers.QueueHandler with a logger you completely control and hence can ensure is never used in a __del__ method or weakref callback", which doesn't feel like a reasonable restriction to place on the safe use of a standard library logging handler.
|
msg300422 - (view) |
Author: Yury Selivanov (yselivanov) * |
Date: 2017-08-17 13:54 |
An idea from the blog post: if we rewrite queue in C it will use the GIL as a lock which will fix this particular bug. I can make a patch.
|
msg300455 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-17 20:01 |
Using any kind of potentially-blocking synchronization primitive from __del__ or a weakref callback is indeed a bug waiting to happen. I agree non-trivial cases can be hard to debug, especially when people don't expect that kind of cause.
It would be ok to submit a patch solving this issue using C code IMHO. Note the maxsize argument complicates things even though most uses of Queue don't use maxsize.
Of course, the general issue isn't only about Queue: other synchronization primitives are widely used. See https://github.com/tornadoweb/tornado/pull/1876 for an example.
|
msg300478 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 09:06 |
Here is a pure Python PoC patch that allows unbounded Queue and LifoQueue to have reentrant put().
|
msg300489 - (view) |
Author: Serhiy Storchaka (serhiy.storchaka) * |
Date: 2017-08-18 11:08 |
Are all these changes necessary for this issue, or are some of them just an optimization? The patch changes the semantics of the public attribute unfinished_tasks. It seems that the old unfinished_tasks equals the new unfinished_tasks + _qsize().
|
msg300491 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 11:35 |
`unfinished_tasks` is not a public attribute AFAIK (it's not documented).
The change is necessary: you cannot increment unfinished_tasks in reentrant put(), since incrementing in pure Python is not atomic. So the incrementation is moved to get(), which probably cannot be made reentrant at all.
If keeping the visible semantics of the `unfinished_tasks` attribute is important, we could make it a property that computes the desired value.
|
msg300499 - (view) |
Author: mike bayer (zzzeek) * |
Date: 2017-08-18 14:25 |
> Here is a pure Python PoC patch that allows unbounded Queue and LifoQueue to have reentrant put().
per http://bugs.python.org/msg275377 guido does not want an RLock here.
|
msg300504 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 15:24 |
I'll believe it when Guido chimes in and actually says so himself. Otherwise, I am skeptical Guido cares a lot about whether the internal implementation of Queue uses a Lock, a RLock or whatever else.
|
msg300505 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 15:29 |
Guido, apparently you are opposed to the Queue implementation using a RLock instead of a Lock, according to Raymond in https://bugs.python.org/issue14976#msg275377. I find this a bit surprising, so could you confirm it yourself?
|
msg300509 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 15:34 |
Also, to elaborate a bit, I don't think we should aim to make Queue fully reentrant, as that would be extremely involved. I think we can settle on the simpler goal of making put() reentrant for unbounded LIFO and FIFO queues, which is what most people care about (and which is incidentally what the posted patch claims to do).
|
msg300510 - (view) |
Author: Yury Selivanov (yselivanov) * |
Date: 2017-08-18 15:37 |
> Here is a pure Python PoC patch that allows unbounded Queue and LifoQueue to have reentrant put().
Is it guaranteed that the GC will happen in the same thread that is holding the lock? IOW will RLock help with all GC/__del__ deadlocking scenarios?
|
msg300511 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-18 15:51 |
Le 18/08/2017 à 17:37, Yury Selivanov a écrit :
>
> Is it guaranteed that the GC will happen in the same thread that is holding the lock?
By design, if it's called from a different thread, Queue will cope fine:
__del__ implicitly calls RLock.acquire which, if the RLock is already
taken, releases the GIL and waits until the RLock is released by the
other thread.
The contentious case is when a Queue method call is interrupted by some asynchronous event (signal handler, GC callback) that calls another Queue method in the same thread.
Note, in my patch, the RLock isn't even used for its recursive properties, but only because it allows querying whether it's already taken by the current thread (using a private method that's already used by threading.Condition, and which could reasonably be made public IMHO).
(On that topic, the pure Python implementation of RLock doesn't guarantee proper reentrancy against asynchronous events itself -- see https://bugs.python.org/issue13697 -- but fortunately we use a C implementation by default, which is pretty much ok.)
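The trick described above, using the RLock only to detect same-thread reentrancy, looks roughly like this (class and method names are illustrative, not the patch's actual code; `_is_owned()` is the private RLock method mentioned):

```python
import threading
from collections import deque

class ReentrantPutQueue:
    def __init__(self):
        self._queue = deque()
        self._lock = threading.RLock()

    def put(self, item):
        if self._lock._is_owned():   # private, but already used by Condition
            # Re-entered from a signal handler / GC callback in the same
            # thread: deque.append is atomic in CPython, so this is safe.
            self._queue.append(item)
            return
        with self._lock:
            self._queue.append(item)

q = ReentrantPutQueue()
q.put(1)
with q._lock:        # simulate being interrupted in the middle of a put()
    q.put(2)         # takes the reentrant branch instead of deadlocking
print(list(q._queue))  # [1, 2]
```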
|
msg300532 - (view) |
Author: Guido van Rossum (gvanrossum) * |
Date: 2017-08-18 21:26 |
Given the date from that comment I assume that I told Raymond this during the 2016 core sprint. I can't recall that conversation but I am still pretty worried about using an RLock. (What if someone slightly more insane decides to call get() from inside a GC callback or signal handler?)
However I do think we have to do something here. It's also helpful that all mutable state except for unfinished_tasks is just a deque or list, and the _get()/_put() operations for these are atomic. (I betcha heappop() is too when implemented in C, but not when implemented in Python.)
I can't say I understand all of Antoine's patch, but it's probably okay to do it this way; however I would rather see if we can add _is_owned() to Lock, assuming it can be implemented using any of the threading/locking libraries we still support (I presume that's basically posix and Windows).
IIUC the end result would be a Queue whose put() works from signal handlers, GC callbacks and __del__, as long as it's unbounded, right? And when it *is* bounded, it will give a decent message if the queue is full and the lock is already taken, right? Antoine, can you confirm?
|
msg300567 - (view) |
Author: Nick Coghlan (ncoghlan) * |
Date: 2017-08-19 06:46 |
+1 for treating Queue.put() specifically as the case to be handled, as that's the mechanism that can be used to *avoid* running complex operations directly in __del__ methods and weakref callbacks.
For testing purposes, the current deadlock can be reliably reproduced with sys.settrace:
```
>>> import sys
>>> import queue
>>> the_queue=queue.Queue()
>>> counter = 0
>>> def bad_trace(*args):
... global counter
... counter += 1
... print(counter)
... the_queue.put(counter)
... return bad_trace
...
>>> sys.settrace(bad_trace)
>>> the_queue.put(None)
1
2
3
4
5
6
7
[and here we have a deadlock]
```
|
msg300574 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-19 08:53 |
Le 18/08/2017 à 23:26, Guido van Rossum a écrit :
>
> IIUC the end result would be a Queue whose put() works from signal handlers, GC callbacks and __del__, as long as it's unbounded, right?
Yes.
> And when it *is* bounded, it will give a decent message if the queue is full and the lock is already taken, right?
Currently it gives a decent message on any bounded queue, even if not full. That may be improved; I'd have to investigate how.
|
msg300575 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-19 08:54 |
Oh and:
Le 18/08/2017 à 23:26, Guido van Rossum a écrit :
>
> I can't say I understand all of Antoine's patch, but it's probably okay to do it this way; however I would rather see if we can add _is_owned() to Lock, assuming it can be implemented using any of the threading/locking libraries we still support (I presume that's basically posix and Windows).
Regular Locks don't have the notion of an owning thread so, while we
could add it, that would be a bizarre API.
We can also detect reentrancy in get() and raise an error.
|
msg300577 - (view) |
Author: Nick Coghlan (ncoghlan) * |
Date: 2017-08-19 10:09 |
Would it be feasible to change the behaviour of non-reentrant locks such that:
1. They *do* keep track of the owning thread
2. Trying to acquire them again when the current thread already has them locked raises RuntimeError instead of deadlocking the way it does now?
Then they could sensibly expose the same "_is_owned()" API as RLock, while still disallowing reentrancy by default.
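A sketch of that proposed behaviour as a wrapper class (a hypothetical API for illustration; the real change would live in the lock implementation itself, and this version's owner check is not itself race-free across threads):

```python
import threading

class OwnedLock:
    """Non-reentrant lock that remembers its owner and fails fast on
    same-thread re-acquisition instead of blocking forever."""
    def __init__(self):
        self._lock = threading.Lock()
        self._owner = None

    def acquire(self):
        if self._owner == threading.get_ident():
            raise RuntimeError("cannot re-acquire lock held by this thread")
        self._lock.acquire()
        self._owner = threading.get_ident()

    def release(self):
        self._owner = None
        self._lock.release()

refused = False
lock = OwnedLock()
lock.acquire()
try:
    lock.acquire()      # would deadlock with a plain Lock
except RuntimeError:
    refused = True
lock.release()
print(refused)  # True
```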
|
msg300581 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-19 12:05 |
Le 19/08/2017 à 12:09, Nick Coghlan a écrit :
>
> Would it be feasible to change the behaviour of non-reentrant locks such that:
>
> 1. They *do* keep track of the owning thread
Yes.
> 2. Trying to acquire them again when the current thread already has them locked raises RuntimeError instead of deadlocking the way it does now?
No. It's not a deadlock, since you can release a Lock from another thread.
|
msg300723 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-08-22 22:05 |
After experimenting a bit more with this approach, I now realize that the case where a get() is waiting and gets interrupted by a put() call is not handled properly: there is no obvious way for the get() call to realize (when the interruption finishes) that the queue is now non-empty and can be popped from.
So perhaps we need C code after all.
|
msg300825 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2017-08-25 05:07 |
[Antoine Pitrou]
> So perhaps we need C code after all.
This matches my experience with functools.lru_cache() where I used an RLock() to handle reentrancy. That by itself was insufficient. I also had to make otherwise unnecessary variable assignments to hang onto object references to avoid a decref triggering arbitrary Python code from reentering before the links were all in a consistent state. Further, I had to create a key wrapper to make sure a potentially reentrant __hash__() call wouldn't be made before the state was fully updated. Even then, a potentially reentrant __eq__() call couldn't be avoided, so I had to re-order the operations to make sure this was the last call after the other state updates. This defended against all normal code, but all these measures still could not defend against signals or a GC invocation of __del__, either of which can happen at any time.
On the plus side, we now have a C version of functools.lru_cache() that is protected somewhat by the GIL. On the minus side, it was hard to get right. Even with the pure Python code as a model, the person who wrote the C code didn't fully think through all sources of reentrancy and wrote buggy code that shipped in 3.5 and 3.6 (resulting in normal code triggering hard-to-reproduce reentrancy bugs). The lesson here is that while the C code can be written correctly, it isn't easy to do and it is hard to notice when it is incorrect.
One other thought: Given that __del__() can be invoked at almost any time and can potentially call any other piece of Python code, we should consider turning every lock into an rlock. Also, there should be some guidance on __del__() advising considerable restraint on what gets called. The world is likely full of pure Python code that can't defend itself against arbitrary re-entrancy.
|
msg301170 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-09-02 18:25 |
Just a random thought: if there was a SimpleQueue class with very basic functionality (only FIFO, only get(), put() and empty(), no size limit, no task management), it would be easier to make it reentrant using C.
(FTR, multiprocessing also has a light-weight SimpleQueue)
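In pure Python the idea might look like the sketch below (illustrative only, not the eventual C implementation): put() performs just an atomic deque.append plus a semaphore release, so it has no lock of its own to re-enter. Note that the pure-Python Semaphore is itself lock-based internally, which is part of why the real work would move to C:

```python
import threading
from collections import deque

class SimpleQueueSketch:
    def __init__(self):
        self._queue = deque()
        self._count = threading.Semaphore(0)  # counts available items

    def put(self, item):
        self._queue.append(item)   # atomic in CPython: nothing to re-enter
        self._count.release()

    def get(self):
        self._count.acquire()      # blocks until a put() has happened
        return self._queue.popleft()

    def empty(self):
        return len(self._queue) == 0

q = SimpleQueueSketch()
q.put("a")
q.put("b")
first, second = q.get(), q.get()
print(first, second, q.empty())  # a b True
```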
|
msg301173 - (view) |
Author: Guido van Rossum (gvanrossum) * |
Date: 2017-09-02 20:01 |
Agreed; the Queue class has a bunch of rarely used functionality rolled
in...
Why was task management ever added?
|
msg301175 - (view) |
Author: Tim Peters (tim.peters) * |
Date: 2017-09-02 21:43 |
[Guido]
> Why was task management ever added?
Raymond published a "joinable" queue class as a recipe here:
http://code.activestate.com/recipes/475160-taskqueue/
and later folded it into the standard Python queue. So the usual answer applies: "it was easy to add and sometimes useful, so it seemed like a good idea at the time" ;-)
|
msg301176 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2017-09-02 22:38 |
[Guido]
> Why was task management ever added?
See http://bugs.python.org/issue1455676
Problem being solved: How can a requestor of a service get notified when that service is complete given that the work is being done by a daemon thread that never returns.
|
msg301207 - (view) |
Author: Guido van Rossum (gvanrossum) * |
Date: 2017-09-04 04:18 |
Oh well. While it is undoubtedly useful I wish we had had more experience and factored the API differently. Ditto for the maxsize=N feature.
So, while it's not too late, perhaps we should indeed follow Antoine's advice and implement a different queue that has fewer features but is guaranteed to be usable by signal handlers and GC callbacks (including __del__). The nice part here is that a queue is mostly a wrapper around a deque anyways, and deque itself is reentrant. (At least one would hope so -- Antoine's patch to Queue assumes this too, and I can't think of a reason why deque would need to release the GIL.)
|
msg301299 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2017-09-05 06:19 |
> I can't think of a reason why deque would need to release the GIL.
Yes, deque.append(item) and deque.popleft() are atomic. Likewise list.append() and list.pop() are atomic. The heappush() and heappop() operations aren't as fortunate since they call __lt__() on arbitrary Python objects.
> perhaps we should indeed follow Antoine's advice and implement
> a different queue that has fewer features but is guaranteed
> to be usable by signal handlers and GC callbacks
> (including __del__).
Is the idea to use a regular non-reentrant lock but write the whole thing in C to avoid running any pure python instructions (any of which could allow a signal handler to run)?
If so, it seems like the only feature that needs to be dropped is subclassing. The maxsize feature and unfinished-tasks tracking could still be supported. Also, I suppose the guarantees would have to be marked as CPython implementation details, leaving the other implementations to fend for themselves.
Or were you thinking about a pure python simplified queue class? If so, an RLock() will likely be required (the act of assigning deque.popleft's return value to a pure python variable can allow a pending signal to be handled before the lock could be released).
One other thought: Would it make sense for get() and put() to add gc.disable() and gc.enable() calls whenever GC is already enabled? That would eliminate a source of reentrancy.
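The heappush() reentrancy window mentioned above can be seen directly: heappush() calls __lt__ on the stored objects, and any pure-Python __lt__ is a point where a signal handler or GC callback can run (class and names below are illustrative):

```python
import heapq

comparisons = []

class Prioritized:
    def __init__(self, priority):
        self.priority = priority

    def __lt__(self, other):
        # Arbitrary Python executes here, in the middle of heappush():
        # a signal handler or __del__ could re-enter the queue right now.
        comparisons.append((self.priority, other.priority))
        return self.priority < other.priority

heap = []
heapq.heappush(heap, Prioritized(2))
heapq.heappush(heap, Prioritized(1))  # triggers at least one __lt__ call
print(len(comparisons) >= 1)  # True
```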
|
msg301321 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-09-05 15:50 |
> Would it make sense get() and put() to add gc.disable() and gc.enable() whenever GC is already enabled?
That doesn't sound very nice if some thread is waiting on a get() for a very long time (which is reasonable if you have a thread pool that's only used sporadically, for example). Also, using gc.disable() and gc.enable() in multi-threaded programs is a bit delicate.
|
msg301322 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-09-05 15:52 |
I don't mind someone reimplementing a full-fledged Queue in C. As for me, I am currently implementing a SimpleQueue in C that's reentrant and has only the most basic functionality.
|
msg301378 - (view) |
Author: Gregory P. Smith (gregory.p.smith) * |
Date: 2017-09-05 21:21 |
FYI - here is an appropriately licensed (WTFPL) very simple lock-free queue implemented in C: https://github.com/darkautism/lfqueue
|
msg301379 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-09-05 21:25 |
Le 05/09/2017 à 23:21, Gregory P. Smith a écrit :
>
> FYI - here is an appropriately licensed (WTFPL) very simple lock-free queue implemented in C
Looks like it uses busy-waiting inside its get() equivalent.
Also we would need a portable version of the atomic instructions used
there (e.g. __sync_bool_compare_and_swap) :-)
|
msg301380 - (view) |
Author: Gregory P. Smith (gregory.p.smith) * |
Date: 2017-09-05 21:43 |
Yeah, there are others, such as https://www.liblfds.org/, that seem better from that standpoint (gcc and Microsoft compiler support -- I'm sure clang is fine as well; anything gcc can do, they can do). Ensuring they're supported across build target platforms (all the hardware technically has the ability) is the fun part if we're going to go that route.
|
msg301383 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2017-09-05 22:10 |
One tangential note about a potential full-fledged Queue in C: the pure Python implementation is fair towards consumers, as it uses a threading.Condition which is itself fair. Achieving the same thing in C may significantly complicate the implementation. Raymond would have to decide whether it's an important property to keep.
The SimpleQueue class, being a separate API, is not tied by this constraint.
|
msg301386 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2017-09-05 22:19 |
To handle the logging.handlers.QueueHandler case, the SimpleQueue needs a put_nowait() method (see line 1959 of Lib/logging/handlers.py).
[Mike Bayer]
> I noticed it's very hard to find documentation on exactly when
> gc might run.
The answer I got from the other coredevs is that GC can trigger whenever a new GCable object is allocated (pretty much any container or any pure python object).
|
msg301409 - (view) |
Author: Raymond Hettinger (rhettinger) * |
Date: 2017-09-06 00:04 |
Just for the record, Guido explained his aversion to RLocks. Roughly: 1) RLocks are slower and more complex 2) It is difficult to correctly write code that can survive reentrancy, so it is easy to fool yourself into believing you've written correct code 3) Getting a deadlock with a regular lock is a better outcome than having the invariants subtly violated 4) Deadlocks have a more clear-cut failure mode and are easier to debug.
Guido also explained that he favored some sort of minimal queue class even if it might not handle all possible gc/weakref/signal/__del__ induced issues. Essentially, he believes there is some value in harm reduction by minimizing the likelihood of a failure. The will help sysadmins who already have a practice of doing monitoring and occasional restarts as a way of mitigating transient issues.
|
msg310014 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-01-15 20:05 |
Hi all,
The PR has been ready for quite some time now. Raymond posted some review comments back in September, which I addressed by making the requested changes.
If someone wants to add their comments, now is the time. Otherwise I plan to merge the PR sometime soon, so that it gets in before 3.7 beta.
|
msg310025 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-01-15 23:27 |
New changeset 94e1696d04c65e19ea52e5c8664079c9d9aa0e54 by Antoine Pitrou in branch 'master':
bpo-14976: Reentrant simple queue (#3346)
https://github.com/python/cpython/commit/94e1696d04c65e19ea52e5c8664079c9d9aa0e54
|
msg310026 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-01-15 23:27 |
Ok, there have been enough reviews in the last four months :-) This is now merged.
|
msg310109 - (view) |
Author: Gregory P. Smith (gregory.p.smith) * |
Date: 2018-01-16 20:00 |
Catalin has been examining code... switching concurrent.futures.thread to use SimpleQueue instead of Queue is probably a good idea as the queues in there get used from weakref callbacks.
|
msg310111 - (view) |
Author: Antoine Pitrou (pitrou) * |
Date: 2018-01-16 20:00 |
Could you open a new issue for it?
|
msg310125 - (view) |
Author: Gregory P. Smith (gregory.p.smith) * |
Date: 2018-01-17 01:04 |
https://bugs.python.org/issue32576 filed for that
|