Message 115125
> - A worker removes a job from the queue and is killed before
> sending an ACK.
Yeah, this may be a problem. I was thinking we could make sure the task is acked before the child process shuts down. kill -9 is then still not safe, but do we really want to guarantee that in multiprocessing? In Celery we're safe anyway because we use AMQP's ack transaction. The same could be said if there's a problem with the queue, though. Maybe use ack timeouts? We already know how many worker processes are free.
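Not part of the current pool code, just a rough sketch of the ack-timeout idea; TaskTracker, ACK_TIMEOUT and the job ids are made-up names for illustration:

import time

ACK_TIMEOUT = 5.0  # seconds to wait for an ack before re-dispatching

class TaskTracker:
    """Parent-side bookkeeping of dispatched-but-unacked tasks."""

    def __init__(self):
        self._pending = {}  # job id -> dispatch timestamp

    def dispatched(self, job):
        self._pending[job] = time.time()

    def acked(self, job):
        self._pending.pop(job, None)

    def timed_out(self):
        # Jobs whose ack never arrived in time; the parent would
        # put these back on the task queue.
        now = time.time()
        return [job for job, t in self._pending.items()
                if now - t > ACK_TIMEOUT]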
> A worker removes a job from the queue, sends an ACK, and then is
> killed. Due to bad luck with the scheduler, the parent cleans the
> worker before the parent has recorded the worker pid.
I guess we need to keep consuming from the result queue until it's empty before cleaning up the worker.
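Something along these lines, i.e. drain whatever is already sitting in the result queue before reaping the dead worker. This sketch uses a plain multiprocessing.Queue, and handle_result is just a placeholder for whatever the result handler does:

import multiprocessing
import queue

def drain_results(result_queue, handle_result, timeout=0.1):
    """Consume every result currently sitting in result_queue.

    A short timeout is used instead of get_nowait() because items
    put by a worker may still be in flight through the feeder thread.
    """
    while True:
        try:
            result = result_queue.get(timeout=timeout)
        except queue.Empty:
            return
        handle_result(result)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    q.put(('job-1', 42))
    drain_results(q, print)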
> You're now reading from self._cache in one thread but writing it in
> another.
Yeah, I'm not sure whether SimpleQueue already protects this with a lock. We should probably add one around the cache if it doesn't.
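If it turns out we do need our own locking, I was thinking of something like this: a made-up ResultCache wrapper, just to show taking the same lock on both the reading and the writing thread:

import threading

class ResultCache:
    """Pending-job dict guarded by a lock so the result-handler
    thread and the submitting thread don't race on it."""

    def __init__(self):
        self._cache = {}
        self._lock = threading.Lock()

    def add(self, job, entry):
        with self._lock:
            self._cache[job] = entry

    def pop(self, job, default=None):
        with self._lock:
            return self._cache.pop(job, default)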
> What happens if a worker sends a result and then is killed?
In the middle of sending? If not, I don't think this matters.