Issue20854
This issue tracker has been migrated to GitHub, and is currently read-only. For more information, see the GitHub FAQs in the Python Developer's Guide.
Created on 2014-03-05 16:03 by allista, last changed 2022-04-11 14:57 by admin. This issue is now closed.
Messages (4)
msg212763 - Author: Allis Tauri (allista) | Date: 2014-03-05 16:03
1. I have a tree-like recursive class MyClass. Its method 'get_child(i)' returns an instance of that same class.

2. I register this class with BaseManager as follows:

       class MyManager(BaseManager): pass
       MyManager.register('MyClass', MyClass,
                          method_to_typeid={'get_child': 'MyClass'})

3. When I call the 'get_child' method of the AutoProxy[MyClass] object, an exception is raised in the '__init__' method of MyClass: it is called with a single argument, which is the instance of MyClass returned by 'get_child'. This happens in the following code of the multiprocessing.managers.Server.create method:

       def create(self, c, typeid, *args, **kwds):
           ...
           if callable is None:
               assert len(args) == 1 and not kwds
               obj = args[0]
           else:
               obj = callable(*args, **kwds)   # <- this line raises the exception

This means that ANY method declared (via method_to_typeid) to return a proxy for a registered typeid for which a callable is provided will fail, unless the callable is capable of handling such unexpected arguments.
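Below is a minimal sketch of the setup described above (Python 2, matching the rest of the thread). The no-argument __init__ and the exact error are illustrative assumptions; the point is that Server.create() re-invokes the registered callable MyClass with the value returned by get_child as its argument:

    from multiprocessing.managers import BaseManager

    class MyClass(object):
        def __init__(self):                       # takes no extra arguments
            self._children = {}
        def get_child(self, i):
            # returns another instance of the same class
            return self._children.setdefault(i, type(self)())

    class MyManager(BaseManager):
        pass

    # get_child is declared to return the typeid 'MyClass', which was
    # registered *with* a callable (the class itself)
    MyManager.register('MyClass', MyClass,
                       method_to_typeid={'get_child': 'MyClass'})

    if __name__ == '__main__':
        m = MyManager()
        m.start()
        try:
            a = m.MyClass()
            a.get_child(1)   # fails: the server calls MyClass(<child instance>),
                             # and the error comes back wrapped in a RemoteError
        finally:
            m.shutdown()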
msg212934 - Author: Richard Oudkerk (sbt) | Date: 2014-03-08 16:10
I am not sure method_to_typeid and create_method were really intended to be public -- they are only used by Pool proxies.

You can maybe work around the problem by registering a second typeid without specifying callable. That can be used in method_to_typeid:

    import multiprocessing.managers

    class MyClass(object):
        def __init__(self):
            self._children = {}
        def get_child(self, i):
            return self._children.setdefault(i, type(self)())
        def __repr__(self):
            return '<MyClass %r>' % self._children

    class MyManager(multiprocessing.managers.BaseManager):
        pass

    MyManager.register('MyClass', MyClass,
                       method_to_typeid={'get_child': '_MyClass'})
    MyManager.register('_MyClass',
                       method_to_typeid={'get_child': '_MyClass'},
                       create_method=False)

    if __name__ == '__main__':
        m = MyManager()
        m.start()
        try:
            a = m.MyClass()
            b = a.get_child(1)
            c = b.get_child(2)
            d = c.get_child(3)
            print a   # <MyClass {1: <MyClass {2: <MyClass {3: <MyClass {}>}>}>}>
        finally:
            m.shutdown()
msg212959 - Author: Allis Tauri (allista) | Date: 2014-03-09 09:24
Thanks for the suggestion. method_to_typeid and create_method are documented features, so I don't see why not. It does the trick in a cleaner way than my workaround: a metaclass for MyClass that just checks the arguments before creating a new instance. It just seems somewhat counterintuitive to me.

Another issue that arises in my case: when I try to pass a proxy of MyClass to a subprocess, it loses its _manager during pickling and thus the ability to create proxies for the children returned by get_child. This is solved by reimplementing the (not working: http://bugs.python.org/issue5862) __reduce__ method of BaseManager in MyManager and creating a corresponding custom proxy for MyClass with its __reduce__ method also reimplemented.

So the working solution for the situation is either

1.1)

    class ReturnProxy(type):
        def __call__(cls, *args, **kwargs):
            # if the "constructor" is handed an existing instance, just return it
            if not kwargs and args and isinstance(args[0], cls):
                return args[0]
            return super(ReturnProxy, cls).__call__(*args, **kwargs)

    class MyClass(object):
        __metaclass__ = ReturnProxy
        ### class body ###

or

1.2) Your solution with the second typeid registration,

combined with

2)

    from multiprocessing.managers import BaseProxy

    class AutoProxyMeta(type):
        '''Metaclass that replicates multiprocessing.managers.MakeProxyType
        functionality, but allows proxy classes that use it to be picklable.'''
        def __new__(cls, name, bases, attrs):
            dic = {}
            for meth in attrs.get('_exposed_', ()):
                exec '''def %s(self, *args, **kwds): return self._callmethod(%r, args, kwds)''' % (meth, meth) in dic
            dic.update(attrs)
            return super(AutoProxyMeta, cls).__new__(cls, name, bases, dic)

    class MyClassProxy(BaseProxy):
        __metaclass__ = AutoProxyMeta
        _exposed_ = ('get_child',)
        _method_to_typeid_ = dict(get_child='MyClass')
        # or: _method_to_typeid_ = dict(get_child='_MyClass')

        def __reduce__(self):
            # keep the manager reference so the unpickled proxy can still
            # create proxies for returned children
            _unpickle, (cls, token, serializer, kwds) = BaseProxy.__reduce__(self)
            kwds['manager'] = self._manager
            return _unpickle, (cls, token, serializer, kwds)

    class MyClassManager(UManager):
        def __reduce__(self):
            return (RebuildMyClassManager,
                    (self._address, None, self._serializer))

    MyClassManager.register('MyClass', MyClass, MyClassProxy)
    # optionally:
    MyClassManager.register('_MyClass', None, MyClassProxy, create_method=False)

    def RebuildMyClassManager(address, authkey, serializer):
        mgr = MyClassManager(address, authkey, serializer)
        mgr.connect()
        return mgr
msg301651 - Author: Davin Potts (davin) | Date: 2017-09-07 23:54
It appears that the multiple workarounds proposed by the OP (@allista) address the original request and that there is no bug or unintended behavior arising from multiprocessing itself. Combined with the lack of activity in this discussion, I'm inclined to believe that the workarounds have satisfied the OP and this issue should be closed.
History

Date | User | Action | Args
---|---|---|---
2022-04-11 14:57:59 | admin | set | github: 65053
2022-01-04 23:20:09 | iritkatriel | set | status: pending -> closed; stage: resolved
2017-09-07 23:54:51 | davin | set | status: open -> pending; nosy: + davin; messages: + msg301651; type: behavior
2014-03-09 09:24:15 | allista | set | messages: + msg212959
2014-03-08 16:10:32 | sbt | set | messages: + msg212934
2014-03-05 20:39:54 | ned.deily | set | nosy: + sbt
2014-03-05 16:03:30 | allista | create |