TODO list for when PEP 573 "Module State Access from C Extension Methods" is implemented #84318
Comments
In bpo-1635741, many C extension modules are being converted to PEP 489 multiphase initialization and/or modified to use a module state. Problem: the module state cannot be accessed in some functions, so some of these changes had to work around the fact that PEP 573 "Module State Access from C Extension Methods" is not implemented yet. This issue tracks C extension modules that should be revisited once PEP 573 is implemented:
Another minor issue: `assert(PyScanner_Check(self));` and `assert(PyEncoder_Check(self));` were removed by commit 33f15a1 when the _json module got a module state. scanner_traverse(), scanner_clear(), encoder_traverse() and encoder_clear() cannot get the module state of a module using PEP 489 multiphase initialization. There is a similar issue in scanner_call() and encoder_call(). I'm not sure that PEP 573 provides a solution for this exact issue, since the intent of the assertion is to check the type of the argument. PEP 573 gives access to the module state through types, but here we are not sure that the object's type is the expected type... :-) Again, it's a minor issue and can be ignored; that's why I accepted to merge the commit. Technically, I don't see how we could get the wrong type in practice.
I consider that everything that could be done has already been done, so I'm closing the issue. Thanks to Hai and Dong-hee, who helped ;-)
Sadly, the PEP 573 implementation is not complete enough to be used in _functools: see PR 20055.
Fixed in bpo-40566 by: commit 77c6146
The tp_clear slot cannot get the defining type (PEP 573 may be extended later to allow that). Using PyType_GetModule(Py_TYPE(self)) to access the types stored in the module state, and then comparing Py_TYPE(self) to those types, looks badly broken :-) If we get the wrong type, PyType_GetModule(Py_TYPE(self)) returns the wrong module, and so we cannot get the _json state from that wrong module... It's a chicken-and-egg issue :-) I don't think it's worth attempting to add these assertions back; it's very unlikely to get the wrong types in practice.
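The circularity is easier to see in a sketch of what the restored assertion would have to look like (a hypothetical fragment; the helper and field names are illustrative, not the actual _json code):

```c
/* Hypothetical sketch of the chicken-and-egg problem: if `self` has the
 * wrong type, PyType_GetModule() already consulted that wrong type, so
 * the "check" ends up validating self against information derived from
 * self -- it can never fail. */
static int
scanner_clear(PyScannerObject *self)
{
    PyObject *module = PyType_GetModule(Py_TYPE(self)); /* wrong type => wrong module */
    _jsonmodulestate *state = get_json_state(module);   /* wrong module => wrong state */
    assert(Py_IS_TYPE(self, state->PyScannerType));     /* trivially true */
    /* ... clear the instance fields ... */
    return 0;
}
```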
It should be possible to get them back using _PyType_GetModuleByDef.
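Unlike PyType_GetModule(), _PyType_GetModuleByDef() walks the type's MRO looking for the class whose module was created from a given module definition, so it does not blindly trust Py_TYPE(self). A hedged sketch of how an assertion could be restored with it (state accessor and field names are illustrative):

```c
/* Sketch: _PyType_GetModuleByDef(type, def) searches type->tp_mro for a
 * class defined by the module created from `def`.  If `self` had an
 * unrelated type, the lookup fails instead of silently returning the
 * wrong module, so the subsequent type check is meaningful again. */
static int
scanner_traverse(PyScannerObject *self, visitproc visit, void *arg)
{
    PyObject *module = _PyType_GetModuleByDef(Py_TYPE(self), &_jsonmodule);
    assert(module != NULL);
    _jsonmodulestate *state = get_json_state(module);
    assert(Py_IS_TYPE(self, state->PyScannerType));
    Py_VISIT(self->memo);
    /* ... visit the remaining fields ... */
    return 0;
}
```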
Oh nice, in this case, I reopen my issue :-)
The following commit introduces a reference leak: commit dd39123
Lib/test/test_functools.py | 3 +-
Leaks: test_interpreters leaked [2038, 2034, 2038] references, sum=6110
I wrote PR 24006 to fix the leak.
For the record, this one has been fixed by: commit 77c6146
State lookups and indirections were added to the most performance-sensitive part of the code:

```c
state = get_functools_state_by_type(Py_TYPE(self));
if (state == NULL) {
    return NULL;
}
key = lru_cache_make_key(state, args, kwds, self->typed);
```

I think it should be removed for Python 3.10. Right now, it is of zero value to users (actually, negative value).
Here are some benchmarks of using lru_cache in 3.9 and 3.10 (PGO/LTO/CPU isolation):

```
❯ ./python -m pyperf timeit "from functools import lru_cache; f = lru_cache(lambda: 42)" "f()" --compare-to ../3.9/python
Mean +- std dev: [/home/pablogsal/github/3.9/python] 2.60 us +- 0.05 us -> [/home/pablogsal/github/cpython/python] 2.74 us +- 0.06 us: 1.06x slower
```

Given that lru_cache is normally used to seek speed, this is a bit unfortunate :(
As Release Manager, I am not very happy with a 6% reduction in speed, especially in these functions. Can someone take a look and see if we can get the performance closer to Python 3.9?
Serhiy, do you think we could substitute the "self" instance of the functools._lru_cache_wrapper object for the kwd_mark? It already compares on identity, so it should be just as fast as the instance of object(). AFAICT, kwd_mark doesn't have to be unique across instances. To avoid creating a GC-only circular reference, we could skip the INCREF, leaving the object weakly referencing itself.

In bounded_lru_cache_wrapper(), the other state reference is in creating a new lru_list_elem. There are several things we can do about this:

1) Move the expensive get_functools_state_by_type() inside the code block that creates new links, so that it won't affect a full cache.
2) Create a per-instance variable that references the per-type lru_list_elem. This has the downside of making each lru_cache instance slightly larger.
3) Prepopulate the lru_cache with maxsize links. This simplifies the code and makes it slightly faster, but it wastes space for use cases that never actually fill up the cache, and it slows down start-up.

Ideally, these PRs should be reverted for Python 3.10. AFAICT, no user of functools will benefit in any way from these changes. Presumably, that is why there are no tests. It is all cost and no benefit. BTW, the 6% timing is likely understated because the loop is executed entirely in L1 cache or the CPU's recent write cache, rather than real-world conditions where there is fierce competition for the limited 32 KB L1 data cache. Just looking at the total number of sequentially dependent memory accesses, I expect that the actual cost is more than a third.
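Combining the kwd_mark idea with option 1 can be sketched roughly as follows (a hypothetical fragment; `cache_get`, `insert_new_link`, and the field names are invented for illustration and do not match the actual _functools diff):

```c
/* Sketch: `self` serves as the identity-compared sentinel, so
 * lru_cache_make_key() no longer needs the module state, and
 * get_functools_state_by_type() is only called on the cold path
 * that allocates a fresh lru_list_elem. */
static PyObject *
bounded_lru_cache_wrapper(lru_cache_object *self, PyObject *args, PyObject *kwds)
{
    PyObject *key, *result;

    key = lru_cache_make_key((PyObject *)self, args, kwds, self->typed);
    if (key == NULL) {
        return NULL;
    }
    result = cache_get(self, key);          /* hot path: no state lookup */
    if (result != NULL) {
        Py_DECREF(key);
        return result;
    }
    result = PyObject_Call(self->func, args, kwds);
    if (result != NULL && self->cache_len < self->maxsize) {
        _functools_state *state =
            get_functools_state_by_type(Py_TYPE(self));  /* cold path only */
        if (state == NULL || insert_new_link(state, self, key, result) < 0) {
            Py_CLEAR(result);
        }
    }
    Py_DECREF(key);
    return result;
}
```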
FWIW, I've only looked at the lru_cache() code. Someone should go through all the PyModule_GetState() calls to see whether they are on a critical path. AFAICT, a huge number of these changes were applied without regard to whether they occurred on an existing fast path. This is a considerable number of clock cycles for a type lookup that used to be almost cost-free. Whether the impact is high or low depends greatly on whether the surrounding code was doing a lot of work or very little; in thin functions, the incremental costs will be higher.
You misused pyperf timeit: both statements are run at each iteration of the benchmark. I re-ran the benchmark on Linux with PGO+LTO and CPU isolation:

Mean +- std dev: [py39] 37.5 ns +- 1.0 ns -> [master] 43.2 ns +- 0.7 ns: 1.15x slower

I understand that adding get_functools_state_by_type() has a cost of +5.7 ns on the performance of functions decorated with @lru_cache. I used the command: ./configure --enable-optimizations --with-lto
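The pitfall Victor points out, setup code being measured together with the statement, can be reproduced with the stdlib `timeit` module: pass the setup separately so that only `f()` is timed (a minimal sketch; absolute numbers will vary by machine):

```python
import timeit

# Wrong: the lru_cache construction is re-executed and timed together
# with the call on every one of the 10,000 repetitions.
wrong = timeit.timeit(
    "from functools import lru_cache; f = lru_cache(lambda: 42)\nf()",
    number=10_000,
)

# Right: the setup runs once, outside the timed region; only f() is timed.
right = timeit.timeit(
    "f()",
    setup="from functools import lru_cache; f = lru_cache(lambda: 42)",
    number=10_000,
)

# The wrapper construction dominates, so the first measurement is much larger.
print(wrong > right)
```

pyperf's `timeit` command has the same split: setup goes in `-s`/`--setup`, and only the positional statement is measured.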
I wrote PR 25393 to micro-optimize _PyType_GetModuleByDef(). It reduces the overhead from +5.7 ns to +3.4 ns (-2.3 ns). Well, I didn't check in depth whether _PyType_GetModuleByDef() is the only change to functools.lru_cache() between Python 3.9 and master.

Master vs Python 3.9: Mean +- std dev: [py39] 37.5 ns +- 1.0 ns -> [master] 43.2 ns +- 0.7 ns: 1.15x slower
PR 25393 vs Python 3.9: Mean +- std dev: [py39] 37.5 ns +- 1.0 ns -> [inline] 40.9 ns +- 1.0 ns: 1.09x slower
PR 25393 vs master: Mean +- std dev: [master] 43.2 ns +- 0.7 ns -> [inline] 40.9 ns +- 1.0 ns: 1.05x faster
Raymond:
This should be as easy as adding a _functools_state pointer to struct lru_cache_object, initializing that pointer in lru_cache_new(), and then fetching it from self in infinite_lru_cache_wrapper() and bounded_lru_cache_wrapper(), no? Diff attached.
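A hedged sketch of that idea (the field and helper names are guesses based on this discussion, not the attached diff; note that a cached state pointer assumes the defining module outlives the wrapper instance):

```c
/* Cache the module state pointer on the instance at creation time, so
 * the wrappers never pay for a per-call type/MRO lookup. */
typedef struct lru_cache_object {
    PyObject_HEAD
    /* ... existing fields ... */
    _functools_state *state;    /* set once in lru_cache_new() */
} lru_cache_object;

static PyObject *
lru_cache_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    lru_cache_object *self = (lru_cache_object *)type->tp_alloc(type, 0);
    if (self == NULL) {
        return NULL;
    }
    /* One lookup here instead of one per call. */
    self->state = get_functools_state_by_type(type);
    if (self->state == NULL) {
        Py_DECREF(self);
        return NULL;
    }
    /* ... initialize the remaining fields ... */
    return (PyObject *)self;
}

static PyObject *
infinite_lru_cache_wrapper(lru_cache_object *self, PyObject *args, PyObject *kwds)
{
    _functools_state *state = self->state;  /* no lookup on the hot path */
    PyObject *key = lru_cache_make_key(state, args, kwds, self->typed);
    /* ... */
}
```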
New microbenchmark on the functools.lru_cache(lambda: 42) function using my 3 optimizations on _PyType_GetModuleByDef():
$ python3 -m pyperf compare_to py39.json master.json
Mean +- std dev: [py39] 38.8 ns +- 0.5 ns -> [master] 39.2 ns +- 0.9 ns: 1.01x slower

The _PyType_GetModuleByDef() overhead in _functools is now about +0.4 ns, which is better than my previous measurement before the optimization: +5.7 ns (37.5 ns +- 1.0 ns -> 43.2 ns +- 0.7 ns). These timings are really tiny; it's really hard to get reliable numbers even with CPU isolation. For example, two measurements on Python 3.9:
I guess that for timings under 100 ns, the PGO build is no longer reliable enough. Moreover, the std dev is around 1 ns out of 40 ns.
msg391004 benchmark comparing master with Raymond's optimization (commit 139c232) against Python 3.9: Mean +- std dev: [py39] 38.8 ns +- 0.5 ns -> [obj_cache] 40.3 ns +- 0.1 ns: 1.04x slower. For me, these numbers make no sense :-D Getting kwd_mark from the instance (Raymond's new code) *must* be faster than calling _PyType_GetModuleByDef() on the instance. As I wrote in my previous comment, my bet is that PGO builds are not reproducible, so we should not pay too much attention to differences of a few nanoseconds.
IMO we have spent enough time micro-optimizing _functools and _PyType_GetModuleByDef(). I don't think it's worth spending more time investigating differences of +- 1 ns on ~40 ns between benchmarks. I'm closing the issue. Thanks Raymond for fixing the slowdown ;-)
Victor, thanks for the work to mitigate the costs of _PyType_GetModuleByDef(). It now has much lower overhead. |