msg91889 - (view) |
Author: Carlos (carlosdf) |
Date: 2009-08-23 17:46 |
It's not possible to modify a dict inside a dict using a manager from
multiprocessing.
Ex:
from multiprocessing import Process, Manager

def f(d):
    d['1'] = '1'
    d['2']['1'] = 'Try To Write'

if __name__ == '__main__':
    manager = Manager()
    d = manager.dict()
    d['2'] = manager.dict()
    print d

    p = Process(target=f, args=(d,))
    p.start()
    p.join()

    print d
    d['2'] = 5
    print d
The output under Windows 7 (32-bit) / Python 2.6.2 (32-bit) is:
{'2': {}}
{'1': '1', '2': {}}
{'1': '1', '2': 5}
The output is the same if you change "d['2'] = manager.dict()" to
"d['2'] = dict()"
|
msg93948 - (view) |
Author: Terrence Cole (terrence) |
Date: 2009-10-13 23:28 |
I get the same results on:
Python 2.6.2 (r262:71600, Sep 14 2009, 18:47:57)
[GCC 4.3.2] on linux2
I think this is the same issue I was seeing yesterday. You can exercise
the issue and cause an exception with just 6 lines:
##### CODE #####
from multiprocessing import Manager
manager = Manager()
ns_proxy = manager.Namespace()
evt_proxy = manager.Event()
ns_proxy.my_event_proxy = evt_proxy
print ns_proxy.my_event_proxy
##### TRACEBACK #####
Traceback (most recent call last):
  File "test_nsproxy.py", line 39, in <module>
    print ns_proxy.my_event_proxy
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 989, in __getattr__
    return callmethod('__getattribute__', (key,))
  File "/usr/lib64/python2.6/multiprocessing/managers.py", line 740, in _callmethod
    raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------
Unserializable message: ('#RETURN', <threading._Event object at 0x1494790>)
---------------------------------------------------------------------
Storing a proxy into a proxied object and then accessing the proxy
returns a copy of the object itself and not the stored proxy. Thus,
updates to the nested dict are local and do not update the real object,
and proxies to unpicklable objects raise an exception when accessed.
|
msg93951 - (view) |
Author: Terrence Cole (terrence) |
Date: 2009-10-14 00:33 |
When a manager receives a message, it unpickles the arguments; unpickling a proxy invokes RebuildProxy (the reconstructor that BaseProxy.__reduce__ registers). If we are in the manager, RebuildProxy returns the actual referent; otherwise it returns a new proxy. If we naively disable this unwrapping of proxies back into their referents inside the manager, as in the attached svn diff, it solves the problem that Carlos and I are seeing. Surprisingly, after applying this change, the full multiprocessing regression test suite still passes. I'm sure this change must have some broader impact, but I'm not sure what. I would appreciate it if someone more knowledgeable could comment.
|
msg93957 - (view) |
Author: Jesse Noller (jnoller) *  |
Date: 2009-10-14 01:40 |
Nothing jumps out to me off the top of my head - I can take a closer look
at this after my pycon planning duties finish up in a few weeks. I agree
this is unintended behavior. I'll need to audit the tests to make sure
that A> This is being tested, and B> Those tests are not disabled.
When we included multiprocessing, some tests were deemed too unstable at the time, and we disabled them. This was unfortunate, and I haven't been able to circle back and spend the time needed to refactor the test suite.
|
msg93961 - (view) |
Author: Terrence Cole (terrence) |
Date: 2009-10-14 02:16 |
The tests for the SyncManager are being automagically generated at
import time -- I was not quite able to follow that well enough to know
exactly what is getting tested, or if they are even enabled. It did not
appear to contain any recursion, however.
|
msg93962 - (view) |
Author: Jesse Noller (jnoller) *  |
Date: 2009-10-14 02:17 |
Yeah, the auto-generation is too clever and needs to be pulled out
entirely.
|
msg98529 - (view) |
Author: Kaushik Ghose (kghose) |
Date: 2010-01-29 19:17 |
Even with the patch, I cannot resolve this problem; I can reproduce it with the patched version using the following code. My system is:
Python 2.6.4 (r264:75821M, Oct 27 2009, 19:48:32)
IPython 0.10
Platform is Mac OS X (10.5.8) Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009
import multiprocessing as mp

def f(d):
    d['f'] = {}
    d['f']['msg'] = 'I am here'

manager = mp.Manager()
d = manager.dict()
p = mp.Process(target=f, args=(d,))
p.start()
p.join()
print d

d = {}
f(d)
print d
Output:
{'f': {}}
{'f': {'msg': 'I am here'}}
|
msg98548 - (view) |
Author: Terrence Cole (terrence) |
Date: 2010-01-30 04:06 |
Kaushik, in your example, d is a dict proxy, so assignment to d['f'] correctly ferries the assignment (a new normal dict) to the d['f'] in the original process. The new dict, however, is not a dict proxy, it's just a dict, so assignment of d['f']['msg'] goes nowhere. All hope is not lost, however, because the Manager can be forked to new processes. The slightly modified example below shows how this works:
from multiprocessing import Process, Manager

def f(m, d):
    d['f'] = m.dict()
    d['f']['msg'] = 'I am here'

m = Manager()
d = m.dict()
p = Process(target=f, args=(m, d))
p.start()
p.join()

print d
# {'f': <DictProxy object, typeid 'dict' at 0x7f1517902810>}
print d['f']
# {'msg': 'I am here'}
With the attached patch, the above works as shown, without, it gives the same output as your original example.
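On Python 3.6 and later, where nested proxies were eventually fixed, this pattern works without any patch. A sketch in Python 3 syntax, as a variant that creates the nested proxy in the parent rather than passing the manager into the child:

```python
from multiprocessing import Process, Manager

def f(d):
    # On versions where nesting works, d['f'] comes back as a DictProxy,
    # so this write is ferried to the manager process.
    d['f']['msg'] = 'I am here'

if __name__ == '__main__':
    with Manager() as m:
        d = m.dict()
        d['f'] = m.dict()  # nest a proxy, not a plain dict
        p = Process(target=f, args=(d,))
        p.start()
        p.join()
        print(dict(d['f']))  # {'msg': 'I am here'} on 3.6+
```

Creating the nested dict in the parent sidesteps the question of whether the Manager object itself can be shipped to the child, which depends on the start method in use.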
|
msg133655 - (view) |
Author: Darío Suárez Gracia (dariosg) |
Date: 2011-04-13 08:59 |
Hello,
Trying to share a dictionary of dictionaries of lists with a manager, I get the same problem with the patch applied, on Python 2.7 (r27:82500, Nov 24 2010, 18:24:29) [GCC 4.1.2 20080704 (Red Hat 4.1.2-48)] on linux2.
The shared variable is `results`; what I'm trying to do is parse multiple files simultaneously.
The quality of the code is not very good because I'm a newbie Python programmer.
Best regards,
Darío
|
msg213534 - (view) |
Author: Richard Fothergill (Richard.Fothergill) |
Date: 2014-03-14 11:20 |
I'm getting these results on both:
Python 3.2.3 (default, Apr 10 2013, 06:11:55)
[GCC 4.6.3] on linux2
and
Python 2.7.3 (default, Apr 10 2013, 06:20:15)
[GCC 4.6.3] on linux2
The symptoms are exactly as Terrence described.
Nesting proxied containers is supposed to be a supported use case! From the documentation: http://docs.python.org/2/library/multiprocessing.html#proxy-objects
>>> a = manager.list()
>>> b = manager.list()
>>> a.append(b) # referent of a now contains referent of b
>>> print a, b
[[]] []
>>> b.append('hello')
>>> print a, b
[['hello']] ['hello']
The documented code works as expected, but:
>>> a[0].append('world') # Appends to b?
>>> print a, b
[['hello']] ['hello']
I've attached my reproduction as a script.
|
msg255795 - (view) |
Author: Justin Patrin (Justin Patrin) |
Date: 2015-12-02 23:27 |
I'm still running into these issues with Python 2.7.10. I'm trying to find a way to share dynamically allocated sub-dictionaries through multiprocessing as well as dynamically allocated RLock and Value instances. I can use the manager to create them but when I put them in a managed dict the various issues related in this ticket happen.
|
msg257226 - (view) |
Author: Davin Potts (davin) *  |
Date: 2015-12-30 03:54 |
Two core issues are compounding one another here:
1. An un-pythonic, inconsistent behavior currently exists with how managed lists and dicts return different types of values.
2. Confusion comes from reading what is currently in the docs regarding the expected behavior of nested managed objects (e.g. managed dict containing other managed dicts).
As Terrence described, it is RebuildProxy where the decision is made to not return a proxy object but a new local instance (copy) of the managed object from the Server.

Unfortunately, there are use cases where Terrence's proposed modification won't work, such as a managed list that contains a reference to itself or, more generally, a managed list/dict that contains a reference to another managed list/dict, when an attempt is made to delete the outer managed list/dict before the inner.

The reference counting implementation in multiprocessing.managers.Server obtains a lock before decrementing reference counts and deleting any objects whose count has dropped to zero. In fact, when an object's ref count drops to zero, it deletes the object synchronously and won't release the lock until it's done. If that object contains a reference to another proxy object (managed by the same Manager and Server), it follows a code path that leads it to wait forever for that same lock to be released before it can decref that managed object.
I agree with Jesse's earlier assessment that the current behavior (returning a copy of the managed object and not a proxy) is unintended and has unintended consequences. There are hints in Richard's (sbt's) code that also suggest this is the case. Merely better documenting the current behavior does nothing to address the lack of or at least limited utility suggested in the comments here or the extra complications described in issue20854. As such, I believe this is behavior that should be addressed in 2.7 as well as 3.x.
My proposed patch makes the following changes:
1. Changes RebuildProxy to always return a proxy object (just like Terrence).
2. Changes Server's decref() to asynchronously delete objects after their ref counts drop to 0.
3. Updates the documentation to clarify the expected behavior and clean up the terminology to hopefully minimize potential for confusion or misinterpretation.
4. Adds tests to validate this expected behavior and verify no lock contention.
Concerned about performance, I've attempted applying the #2 change without the others and put it through stress tests on a 4.0GHz Core i7-4790K in a iMac-Retina5K-late2014 OS X system and discovered no degradation in execution speed or memory overhead. If anything with #2 applied it was slightly faster but the differences are too small to be regarded as anything more significant than noise.
In separate tests, applying the #1 and #2 changes together has no noteworthy impact when stress testing with non-nested managed objects but when stress testing the use of nested managed objects does result in a slowdown in execution speed corresponding to the number of nested managed objects and the requisite additional communication surrounding them.
These proposed changes enable the following code to execute and terminate cleanly:
import multiprocessing
m = multiprocessing.Manager()
a = m.list()
b = m.list([4, 5])
a.append(b)
print(str(a))
print(str(b))
print(repr(a[0]))
a[0].append(6)
print(str(b))
To produce the following output:
[<ListProxy object, typeid 'list' at 0x110b99260>]
[4, 5]
<ListProxy object, typeid 'list' at 0x110b7a538>
[4, 5, 6]
Justin: I've tested the RLock values I've gotten back from my managed lists too -- just didn't have that in the example above.
Patches to be attached shortly (after cleaning up a bit).
|
msg274625 - (view) |
Author: Davin Potts (davin) *  |
Date: 2016-09-06 20:53 |
Attaching patch for default (3.6) branch which implements what was previously described and discussed, updates the documentation to explain this updated behavior, and includes new tests.
@yselivanov: Can you think of any edge cases that should be handled but we're missing?
|
msg274874 - (view) |
Author: Davin Potts (davin) *  |
Date: 2016-09-07 19:54 |
Updating previously supplied patch for 3.6 to the right format.
|
msg274911 - (view) |
Author: Davin Potts (davin) *  |
Date: 2016-09-07 23:34 |
Attaching updated patch to reflect Yury's suggested changes from review.
|
msg274920 - (view) |
Author: Roundup Robot (python-dev)  |
Date: 2016-09-07 23:53 |
New changeset 39e7307f9aee by Davin Potts in branch 'default':
Fixes issue #6766: Updated multiprocessing Proxy Objects to support nesting
https://hg.python.org/cpython/rev/39e7307f9aee
|
msg274937 - (view) |
Author: Davin Potts (davin) *  |
Date: 2016-09-08 00:51 |
Fixed in upcoming 3.6.
|
msg309914 - (view) |
Author: Johannes (John_81) |
Date: 2018-01-14 07:51 |
Hi all, I'm trying to use multiprocessing with a 3D list. From the documentation I expected it to work. As I only found this report a bit later, I had opened a separate bug report here: https://bugs.python.org/issue32538. Am I doing something wrong, or is it still not working in 3.6.3?
|
msg311724 - (view) |
Author: Snidhi Sofpro (Snidhi) |
Date: 2018-02-06 15:03 |
Hi team,
Looks like this issue remains per code below:
import multiprocessing, sys, time, traceback

if __name__ == '__main__':
    print(sys.version)
    mpd = multiprocessing.Manager().dict()
    mpd['prcss'] = {'q': 'queue_1', 'ctlg': 'ctlg_1'}

    # update 1 - doesn't work!
    mpd['prcss'].update({'name': 'concfun_1'})
    print('Result of failed update 1:', mpd['prcss'])

    # update 2 - doesn't work!
    mpd['prcss']['name'] = 'concfun_1'
    print('Result of failed update 2:', mpd['prcss'])

    # update 3 - works!
    mpd_prcss = mpd['prcss']
    mpd_prcss['name'] = 'concfun_1'
    mpd['prcss'] = mpd_prcss
    print('Result of successful update 3:', mpd['prcss'])
### --- output ###
3.6.1 (v3.6.1:69c0db5, Mar 21 2017, 17:54:52) [MSC v.1900 32 bit (Intel)]
Result of failed update 1: {'q': 'queue_1', 'ctlg': 'ctlg_1'}
Result of failed update 2: {'q': 'queue_1', 'ctlg': 'ctlg_1'}
Result of successful update 3: {'q': 'queue_1', 'ctlg': 'ctlg_1', 'name': 'concfun_1'}
|
msg336850 - (view) |
Author: Dusan Gligoric (dusan76) |
Date: 2019-02-28 16:44 |
Hey folks,
This is still an issue with 3.7.2
===============================================
# Python 3.7.2 (default, Jan 10 2019, 23:51:51)
# [GCC 8.2.1 20181127] on linux
from multiprocessing import Manager
manager = Manager()
d = manager.dict({})
d["test"] = {"a": 123}
# update fails
d["test"]["a"] = 321
# add fails
d["test"]["b"] = 321
print(d)
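On 3.6 and later, the failure in snippets like this is arguably expected rather than a regression: the nested value is a plain dict, which is serialized by value into the manager, so later in-place writes to the retrieved copy are lost. Nesting a managed dict instead propagates the updates. A sketch, with illustrative keys and values:

```python
from multiprocessing import Manager

if __name__ == '__main__':
    with Manager() as manager:
        d = manager.dict()
        d['test'] = manager.dict({'a': 123})  # nest a proxy, not a plain dict
        d['test']['a'] = 321  # update reaches the manager process
        d['test']['b'] = 654  # so does adding a new key
        print(dict(d['test']))  # {'a': 321, 'b': 654}
```

Each access to `d['test']` hands back a DictProxy connected to the same referent, which is what the fix in changeset 39e7307f9aee enabled.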
|
Date | User | Action | Args
2022-04-11 14:56:52 | admin | set | github: 51015
2019-02-28 16:44:16 | dusan76 | set | nosy: + dusan76; messages: + msg336850; versions: + Python 3.7, - Python 3.6
2018-02-06 15:03:07 | Snidhi | set | nosy: + Snidhi; messages: + msg311724; versions: - Python 2.7, Python 3.4, Python 3.5
2018-01-14 07:51:40 | John_81 | set | nosy: + John_81; messages: + msg309914
2016-09-08 13:13:59 | berker.peksag | set | status: open -> closed
2016-09-08 00:51:40 | davin | set | resolution: fixed; messages: + msg274937; stage: patch review -> resolved
2016-09-07 23:53:28 | python-dev | set | nosy: + python-dev; messages: + msg274920
2016-09-07 23:34:39 | davin | set | files: + issue_6766_py36.nogit.yuryfeedback.patch; messages: + msg274911
2016-09-07 19:54:51 | davin | set | files: + issue_6766_py36.nogit.patch; messages: + msg274874
2016-09-06 20:53:48 | davin | set | files: + issue_6766_py36.patch; nosy: + yselivanov; messages: + msg274625; stage: needs patch -> patch review
2015-12-30 03:58:15 | davin | set | assignee: jnoller -> davin
2015-12-30 03:54:19 | davin | set | messages: + msg257226; versions: + Python 3.4, Python 3.5, Python 3.6, - Python 3.2, Python 3.3
2015-12-02 23:27:00 | Justin Patrin | set | nosy: + Justin Patrin; messages: + msg255795
2015-06-15 10:27:29 | Waldemar.Parzonka | set | nosy: + Waldemar.Parzonka
2015-04-02 06:14:38 | ned.deily | set | nosy: + sbt, davin
2015-04-01 18:25:06 | dan.oreilly | set | nosy: + dan.oreilly; versions: + Python 3.3
2014-03-14 11:20:44 | Richard.Fothergill | set | files: + nesting.py; versions: + Python 3.2; nosy: + Richard.Fothergill; messages: + msg213534
2011-11-29 06:06:45 | ezio.melotti | set | stage: needs patch; versions: + Python 2.7, - Python 2.6
2011-04-13 08:59:34 | dariosg | set | files: + test_dict_dict_arrays.py; nosy: + dariosg; messages: + msg133655
2010-01-30 04:06:20 | terrence | set | messages: + msg98548
2010-01-29 19:17:23 | kghose | set | nosy: + kghose; messages: + msg98529
2009-10-14 02:17:27 | jnoller | set | messages: + msg93962
2009-10-14 02:16:05 | terrence | set | messages: + msg93961
2009-10-14 01:40:04 | jnoller | set | messages: + msg93957
2009-10-14 00:33:03 | terrence | set | files: + mp_proxy_hack.diff; keywords: + patch; messages: + msg93951
2009-10-13 23:28:23 | terrence | set | nosy: + terrence; messages: + msg93948
2009-08-23 18:59:19 | r.david.murray | set | assignee: jnoller; nosy: + jnoller
2009-08-23 17:46:28 | carlosdf | create |