Faster total_ordering #67321
Comments
The proposed patch makes the comparison methods generated by the total_ordering decorator up to 20% faster. Benchmark results:

```
a < b   2.46    2.45
```

It also adds a few additional attributes to the generated methods.
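For context, a minimal sketch of what the decorator does (the `Version` class here is my own illustration, not from the patch):

```python
from functools import total_ordering

@total_ordering
class Version:
    """Only __eq__ and __lt__ are written by hand."""
    def __init__(self, num):
        self.num = num
    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.num == other.num
    def __lt__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return self.num < other.num

# The decorator supplies __le__, __gt__ and __ge__ automatically.
print(Version(1) <= Version(2))   # True
print(Version(2) > Version(1))    # True
# The generated methods carry introspection attributes such as __name__.
print(Version.__gt__.__name__)    # '__gt__'
```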
This looks like a nice, relatively simple improvement in both speed and introspection support, so +1 from me. Something I've wondered since we changed total_ordering to handle NotImplemented correctly is whether it would be worth exposing more of the *components* of rich boolean comparison operations through the operator module. Currently it isn't possible to access the individual steps, which is why handling NotImplemented incurred such a large performance hit relative to the previous implementation that didn't worry about it (but could also hit RecursionError if the underlying comparison operation returned NotImplemented).
I tweaked Serhiy's benchmark script to also include the timing relative to spelling out all four ordered comparison methods. For an unpatched debug build of trunk, I get the following:

```
$ ../py3k/python total_ordering_relative_bench.py
a < b   1.643   1.605   x0.98
b < a   1.611   1.611   x1.00
a >= b  1.599   3.539   x2.21
b >= a  1.607   3.579   x2.23
a <= b  1.600   3.677   x2.30
b <= a  1.601   5.599   x3.50
a > b   1.600   3.624   x2.26
b > a   1.612   6.465   x4.01
```

With Serhiy's change applied I get:

```
$ ../py3k/python total_ordering_relative_bench.py
a < b   1.599   1.602   x1.00
b < a   1.607   1.609   x1.00
a >= b  1.602   2.802   x1.75
b >= a  1.605   2.804   x1.75
a <= b  1.737   2.842   x1.64
b <= a  1.607   4.835   x3.01
a > b   1.667   2.821   x1.69
b > a   1.597   5.557   x3.48
```
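The benchmark script itself isn't reproduced in this thread; a rough reconstruction under timeit (the class names, iteration counts, and output format are my own assumptions) might look like:

```python
import timeit
from functools import total_ordering

@total_ordering
class Derived:
    # Only __eq__ and __lt__ defined; the rest come from the decorator.
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __lt__(self, other):
        return self.x < other.x

class Explicit:
    # All four ordered comparisons spelled out by hand (the baseline).
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __lt__(self, other):
        return self.x < other.x
    def __le__(self, other):
        return self.x <= other.x
    def __gt__(self, other):
        return self.x > other.x
    def __ge__(self, other):
        return self.x >= other.x

for expr in ("a < b", "b < a", "a >= b", "b >= a",
             "a <= b", "b <= a", "a > b", "b > a"):
    base = min(timeit.repeat(expr, number=50_000, repeat=3,
                             globals={"a": Explicit(1), "b": Explicit(2)}))
    derived = min(timeit.repeat(expr, number=50_000, repeat=3,
                                globals={"a": Derived(1), "b": Derived(2)}))
    # Third column: slowdown of the derived methods vs the explicit ones.
    print(f"{expr:7} {base:.3f} {derived:.3f} x{derived / base:.2f}")
```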
A side effect of improving introspection is that the generated unbound methods are now pickleable. The updated patch contains a test for this.
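On interpreters that include this fix, the pickle round-trip can be sketched like so (the `Point` class is a made-up example; pickle resolves the generated method by its module and qualified name):

```python
import pickle
from functools import total_ordering

@total_ordering
class Point:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __lt__(self, other):
        return self.x < other.x

# The generated (unbound) method round-trips through pickle,
# because pickle can locate it again by name.
restored = pickle.loads(pickle.dumps(Point.__ge__))
assert restored is Point.__ge__
assert restored(Point(2), Point(1))
```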
New changeset 4e85df8b3ea6 by Serhiy Storchaka in branch 'default':
Serhiy, this is a really nice idea. By removing the additional layer of indirection, the code is more intelligible, it runs faster, and the tracebacks make more sense when the user's root comparison raises an exception.

Since there are only twelve functions involved, I don't think the function templates give us much of a payoff. Instead, it would be better to just precompute the twelve functions rather than have five templates. I've attached a patch relative to Python 3.4. Ideally, I would like this backported to 3.4 to fix the regression in performance and intelligibility.

One further possible change is to localize the NotImplemented global variable. This would reduce the overhead of the NotImplemented check to almost nothing and almost completely restore the performance of earlier versions.
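"Localizing" NotImplemented presumably means binding the global as a default argument, so each call does a fast local load instead of a global lookup. A sketch for one of the twelve precomputed functions (the naming and docstring are illustrative):

```python
def _gt_from_lt(self, other, NotImplemented=NotImplemented):
    'Return a > b.  Computed by @total_ordering from (not a < b) and (a != b).'
    # NotImplemented is now a local name, bound once at function
    # definition time rather than looked up globally on every call.
    op_result = self.__lt__(other)
    if op_result is NotImplemented:
        return op_result
    return not op_result and self != other

# Hypothetical toy class to exercise the derived method:
class C:
    def __init__(self, x):
        self.x = x
    def __eq__(self, other):
        return self.x == other.x
    def __lt__(self, other):
        return self.x < other.x

C.__gt__ = _gt_from_lt
print(C(2) > C(1))   # True
```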
While I like the readability of Raymond's version, I think the main pay-off we're getting from the template-based version is that each decorator invocation creates *new* function objects. That creation of new function objects is what allows Serhiy's patch to set __module__ and __qualname__ for each method implementation based on the class being defined. The two approaches could be combined by moving the explicit definitions into factory functions that always create new function objects and set their introspection attributes appropriately. For example (untested code):

```python
def _fix_introspection(module, cls_qualname):
    def update_metadata(f):
        f.__qualname__ = "%s.%s" % (cls_qualname, f.__name__)
        f.__module__ = module
        return f
    return update_metadata

def _derive_from_lt(module, cls_qualname):
    _NotImplemented = NotImplemented
    @_fix_introspection(module, cls_qualname)
    def __gt__(self, other):
        op_result = self.__lt__(other)
        if op_result is _NotImplemented:
            return _NotImplemented
        return not op_result and self != other
    @_fix_introspection(module, cls_qualname)
    def __le__(self, other):
        op_result = self.__lt__(other)
        return op_result or self == other
    @_fix_introspection(module, cls_qualname)
    def __ge__(self, other):
        op_result = self.__lt__(other)
        if op_result is _NotImplemented:
            return _NotImplemented
        return not op_result
    return __lt__, __gt__, __ge__
```
Oops, at least one error in my example: the return tuple should be "__gt__, __le__, __ge__".
I don't see "creating new functions" as an advantage. ABCs don't do this, nor does any other subclassing. It seems like an attempt to create a false illusion about where the code resides. This feels like feature creep with no real advantage that anyone cares about. In the non-templating version, the code is simple and it is clear where it came from (i.e. a code inspector can find it). IMO, the factory functions just make it harder to grok this code. Please resist the urge to invent new magic and stick with the simplest Python that gets the job done.
The convert mapping is redundant; the function name can be calculated from the root operation and the opname. __name__ and __doc__ can be assigned at import time.
The metadata adjustments in Serhiy's patch had me thinking in terms of functools.wraps, but that rationale doesn't actually apply here (since the functions are only used in one place, and have a distinctive name rather than a generic one). I'd also missed that the conversion to standalone module-level functions inherently provides the same pickle compatibility benefit that Serhiy's patch did. Accordingly, I agree that my suggested factory functions would make the code more complex for no benefit.
New changeset 09b0da38ce8d by Raymond Hettinger in branch '3.4':
Maybe implement your idea about a local NotImplemented? And it would be good to