
Author davin
Recipients Justin Patrin, Richard.Fothergill, Waldemar.Parzonka, carlosdf, dan.oreilly, dariosg, davin, jnoller, kghose, sbt, terrence
Date 2015-12-30.03:54:16
Message-id <1451447659.67.0.993875275794.issue6766@psf.upfronthosting.co.za>
In-reply-to
Content
Two core issues are compounding one another here:
1. Managed lists and dicts currently behave in an un-pythonic, inconsistent way in the types of values they return.
2. The current docs invite confusion about the expected behavior of nested managed objects (e.g. a managed dict containing other managed dicts).
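To make one facet of the confusion concrete, here is a small sketch (this illustrates current, documented copy behavior for plain values stored in managed containers, not the proxy bug itself):

```python
# A plain (unmanaged) mutable stored in a managed container is pickled
# into the Server, and each access returns a fresh local copy, so
# in-place mutations of that copy are silently lost.
from multiprocessing import Manager

def demo():
    with Manager() as m:
        d = m.dict()
        d["x"] = [1]         # a plain list is copied into the server
        d["x"].append(2)     # mutates a temporary local copy only
        return d["x"]        # fetches a fresh copy: still [1]

if __name__ == "__main__":
    print(demo())
```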

As Terrence described, RebuildProxy is where the decision is made to return not a proxy object but a new local instance (a copy) of the managed object from the Server.  Unfortunately, there are use cases where Terrence's proposed modification won't work, such as a managed list that contains a reference to itself, or, more generally, a managed list/dict that contains a reference to another managed list/dict, when an attempt is made to delete the outer managed list/dict before the inner one.  The reference-counting implementation in multiprocessing.managers.Server obtains a lock before decrementing reference counts and deleting any objects whose count has dropped to zero.  In fact, when an object's ref count drops to zero, the Server deletes the object synchronously and does not release the lock until it is done.  If that object contains a reference to another proxy object (managed by the same Manager and Server), it follows a code path that waits forever for that same lock to be released before it can decref that managed object.
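A toy single-process model of that deadlock (illustrative names only, not the actual multiprocessing.managers.Server code; a non-blocking acquire is used so the demo can report the deadlock rather than hang):

```python
import threading

class ToyServer:
    # Toy model of a server that deletes objects while holding its lock.
    def __init__(self):
        self.mutex = threading.Lock()   # non-reentrant, like Server's lock
        self.objs = {}
        self.refcount = {}
        self.deadlocks = []             # idents we failed to decref

    def create(self, ident, obj):
        self.objs[ident] = obj
        self.refcount[ident] = 1

    def decref(self, ident):
        # Non-blocking acquire so the demo *reports* the deadlock;
        # the pattern described above blocks here forever instead.
        if not self.mutex.acquire(blocking=False):
            self.deadlocks.append(ident)
            return
        try:
            self.refcount[ident] -= 1
            if self.refcount[ident] == 0:
                # Synchronous deletion while still holding the lock: if
                # the dying object holds a proxy whose finalizer calls
                # decref() again, the nested call needs this same lock.
                del self.objs[ident]
        finally:
            self.mutex.release()

class ToyProxy:
    # Stand-in for a proxy whose finalizer decrefs its referent.
    def __init__(self, server, ident):
        self.server, self.ident = server, ident
    def __del__(self):
        self.server.decref(self.ident)

server = ToyServer()
server.create("inner", [4, 5])
server.create("outer", [ToyProxy(server, "inner")])
server.decref("outer")      # outer dies -> inner's decref hits the held lock
print(server.deadlocks)     # ['inner']
```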

I agree with Jesse's earlier assessment that the current behavior (returning a copy of the managed object, not a proxy) is unintended and has unintended consequences.  There are hints in Richard's (sbt's) code that also suggest this is the case.  Merely documenting the current behavior better does nothing to address the lack of (or at least limited) utility noted in the comments here, or the extra complications described in issue20854.  As such, I believe this behavior should be addressed in 2.7 as well as 3.x.

My proposed patch makes the following changes:
1. Changes RebuildProxy to always return a proxy object (as Terrence proposed).
2. Changes Server's decref() to delete objects asynchronously after their ref counts drop to 0.
3. Updates the documentation to clarify the expected behavior and clean up the terminology to hopefully minimize potential for confusion or misinterpretation.
4. Adds tests to validate this expected behavior and verify no lock contention.
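The idea behind change #2 can be sketched as follows (toy code, not the actual patch: deletions are queued while the lock is held and performed only after it is released, so a finalizer that calls decref() again can acquire the lock normally):

```python
import queue
import threading

class ToyServer:
    # Toy sketch of deferring deletion until the lock is released.
    def __init__(self):
        self.mutex = threading.Lock()
        self.objs = {}
        self.refcount = {}
        self._doomed = queue.Queue()    # idents awaiting deletion

    def create(self, ident, obj):
        self.objs[ident] = obj
        self.refcount[ident] = 1

    def decref(self, ident):
        with self.mutex:
            self.refcount[ident] -= 1
            if self.refcount[ident] == 0:
                # Don't delete while holding the lock; just queue it.
                self._doomed.put(ident)
        self._reap()

    def _reap(self):
        # Deletion happens outside the lock, so finalizers that call
        # decref() recursively can acquire the lock normally.
        while True:
            try:
                ident = self._doomed.get_nowait()
            except queue.Empty:
                return
            del self.objs[ident]

class ToyProxy:
    # Stand-in for a proxy whose finalizer decrefs its referent.
    def __init__(self, server, ident):
        self.server, self.ident = server, ident
    def __del__(self):
        self.server.decref(self.ident)

server = ToyServer()
server.create("inner", [4, 5])
server.create("outer", [ToyProxy(server, "inner")])
server.decref("outer")      # both outer and inner are reclaimed cleanly
print(sorted(server.objs))  # []
```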

Concerned about performance, I tried applying the #2 change without the others and put it through stress tests on a 4.0GHz Core i7-4790K in an iMac-Retina5K-late2014 OS X system, and discovered no degradation in execution speed or memory overhead.  If anything, with #2 applied it was slightly faster, but the differences are too small to be regarded as anything more than noise.
In separate tests, applying the #1 and #2 changes together had no noteworthy impact when stress testing with non-nested managed objects, but stress testing the use of nested managed objects did show a slowdown in execution speed corresponding to the number of nested managed objects and the requisite additional communication surrounding them.



These proposed changes enable the following code to execute and terminate cleanly:
import multiprocessing
m = multiprocessing.Manager()
a = m.list()
b = m.list([4, 5])
a.append(b)
print(str(a))
print(str(b))
print(repr(a[0]))
a[0].append(6)
print(str(b))


To produce the following output:
[<ListProxy object, typeid 'list' at 0x110b99260>]
[4, 5]
<ListProxy object, typeid 'list' at 0x110b7a538>
[4, 5, 6]


Justin: I've tested the RLock values I've gotten back from my managed lists too -- just didn't have that in the example above.


Patches to be attached shortly (after cleaning up a bit).
History
Date User Action Args
2015-12-30 03:54:19  davin  set  recipients: + davin, jnoller, carlosdf, terrence, kghose, sbt, dariosg, Richard.Fothergill, dan.oreilly, Waldemar.Parzonka, Justin Patrin
2015-12-30 03:54:19  davin  set  messageid: <1451447659.67.0.993875275794.issue6766@psf.upfronthosting.co.za>
2015-12-30 03:54:19  davin  link  issue6766 messages
2015-12-30 03:54:16  davin  create