classification
Title: re._compile_typed's lru_cache causes significant degradation of the mako_v2 bench
Type: performance
Stage: resolved
Components: Library (Lib)
Versions: Python 3.2, Python 3.3, Python 3.4
process
Status: closed
Resolution: fixed
Dependencies:
Superseder:
Assigned To: serhiy.storchaka
Nosy List: asvetlov, barry, bdkearns, brett.cannon, ezio.melotti, jcea, ncoghlan, pitrou, pjenvey, python-dev, rhettinger, sbt, serhiy.storchaka, terry.reedy, zzzeek
Priority: high
Keywords: 3.2regression, 3.3regression, patch

Created on 2012-11-02 18:23 by pjenvey, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Files
File name               Uploaded
re_compile_cache.patch  serhiy.storchaka, 2012-11-04 10:19
issue16389.diff         ezio.melotti, 2012-11-05 16:51
Messages (27)
msg174550 - (view) Author: Philip Jenvey (pjenvey) * (Python committer) Date: 2012-11-02 18:23
#9396 replaced a few caches in the stdlib w/ lru_cache; this made the mako_v2 benchmark on Python 3 almost 3x slower than on 2.7.

The benchmark results are good now that Mako was changed to cache the compiled regex itself, but the problem still stands that lru_cache seems to hurt the performance of inline regexes compared to 2.7. The fix for Mako did not affect the 2.7 benchmark numbers.

See more info here:

http://mail.python.org/pipermail/python-dev/2012-November/122521.html
msg174559 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-11-02 18:54
lru_cache() seems to use a complicated make_key() function, which is invoked on each cache hit. The LRU logic is probably on the slow side too, compared to a hand-coded logic which would favour lookup cost over insertion / eviction cost.
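The hit-path cost being described can be seen in a minimal sketch of the general shape of such a decorator (illustrative only, not the actual functools implementation; the real one also takes a lock and tracks statistics):

    from collections import OrderedDict

    def tiny_lru_cache(maxsize=128):
        # Minimal sketch of an LRU-caching decorator, NOT the stdlib code.
        # The key tuple is built on every call, hit or miss, and every hit
        # also pays for the move-to-most-recently-used bookkeeping.
        def decorating_function(user_function):
            cache = OrderedDict()

            def wrapper(*args, **kwds):
                key = (args, tuple(sorted(kwds.items())))  # per-call cost
                try:
                    result = cache.pop(key)  # hit: remove the entry...
                    cache[key] = result      # ...and reinsert it as newest
                    return result
                except KeyError:
                    pass
                result = user_function(*args, **kwds)
                cache[key] = result
                if len(cache) > maxsize:
                    cache.popitem(last=False)  # evict least recently used
                return result

            return wrapper
        return decorating_function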
msg174563 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2012-11-02 20:03
Would be interesting to know what speed difference would occur if the statistics gathering was optional and turned off.

As for _make_key(), I wonder if ``(args, tuple(sorted(kwd.items())))`` as a key would be any faster as a tuple's hash is derived from its contents and not the tuple itself (if I remember correctly). You could even special case when len(kwds) == 0 to skip the sorted(kwd.items()) overhead if it is worth it performance-wise.
msg174566 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2012-11-02 20:11
Ditching the statistics only sped up regex_compile by 2%.
msg174567 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-11-02 20:17
> Ditching the statistics only sped up regex_compile by 2%.

Does explicit compiling even go through the cache?
Regardless, the issue here is with performance of cache hits, not cache
misses. By construction, you cache something which is costly to compute,
so the overhead of a cache miss won't be very noticeable.
msg174570 - (view) Author: Brett Cannon (brett.cannon) * (Python committer) Date: 2012-11-02 20:36
re.compile() calls _compile() which has the lru_cache decorator, so it will trigger it. But you make a good point, Antoine, that it's the hit overhead here that we care about; as long as misses don't get worse, the calculation of what is to be cached should overwhelm anything the LRU does.

With a simplified _make_key() I can get regex_compile w/ cache clearing turned off to be 1.28x faster by making it be::

    if not typed:
        if len(kwds) == 0:
            return args, ()
        else:
            return args, tuple(sorted(kwds.items()))
    else:
        if len(kwds) == 0:
            return (tuple((type(arg), arg) for arg in args), ())
        else:
            return (tuple((type(arg), arg) for arg in args),
                    tuple((type(v), (k, v)) for k, v in kwds.items()))

That might not be the fastest way to handle keyword arguments (since regex_compile w/ caching and with the len(kwds) trick left out becomes 1.13x slower), but at least for the common case of positional arguments it seems faster, and the code is easier to read IMO.
msg174598 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2012-11-03 01:44
Did you try moving the existing single-argument fast path to before the main if statement in _make_key? That is:

    if not kwds and len(args) == 1:
        key = args[0]
        key_type = type(key)
        if key_type in fasttypes:
            if typed:
                return key, key_type
            return key

Such a special case is already present, but it's *after* a lot of the other processing *and* it doesn't fire when typed==True.

So instead of the simple 2-tuple creation above, you do the relatively wasteful:

    args + tuple(type(v) for v in args)
msg174599 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2012-11-03 02:29
> re.compile() calls _compile() which has the lru_cache decorator so it
> will trigger it.

What's the point of using the lru_cache for compiled regexes?
Unless I'm missing something, re.compile() should just return the compiled regex without going through lru_cache and needlessly wasting time and cache slots.
msg174626 - (view) Author: mike bayer (zzzeek) * Date: 2012-11-03 14:31
in response to ezio, I poked around the source here, since I've never been sure if re.compile() cached its result or not. It seems to be the case in 2.7 and 3.2 also: 2.7 uses a local caching scheme and 3.2 uses functools.lru_cache, yet we don't see as much of a slowdown with 3.2.

so it seems like the caching behavior has precedent here, but I would revert re.py's caching scheme to the one used in 2.7 if functools.lru_cache can't be sped up very significantly. ideally lru_cache would be native.

also, does python include any kind of benchmarking unit tests? over in SQLA we have an approach that fails if the call counts of various functions, as measured by cProfile, fall outside of a known range. it's caught many issues like these for me.
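For reference, the 2.7-style scheme mentioned above is essentially a plain dict that is wiped when it grows too large; reconstructed from memory (not the verbatim 2.7 source), it looks roughly like:

    import sre_compile

    _cache = {}
    _MAXCACHE = 100

    def _compile(pattern, flags):
        # Plain dict keyed on (type, pattern, flags); a hit is a single
        # dict lookup with no LRU bookkeeping.  When the cache fills up,
        # it is simply cleared wholesale.
        key = (type(pattern), pattern, flags)
        try:
            return _cache[key]
        except KeyError:
            pass
        p = sre_compile.compile(pattern, flags)
        if len(_cache) >= _MAXCACHE:
            _cache.clear()
        _cache[key] = p
        return p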
msg174630 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2012-11-03 15:08
Now that Brett has a substantial portion of the benchmark suite running on Py3k, we should see a bit more progress on the PyPy-inspired speed.python.org project (which should make it much easier to catch this kind of regression before it hits a production release).

In this case, as I noted in my earlier comment, I think the 3.3 changes to make_key broke an important single-argument fast path that the re module was previously relying on, thus the major degradation in performance on a cache hit. I haven't looked into setting up the benchmark suite on my own machine though, so we won't know for sure until either I get around to doing that, or someone with it already set up tries the change I suggested above.
msg174638 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-03 15:43
This is not only a 3.3 regression, it is also a 3.2 regression. 3.1, 3.2, and 3.3 have different caching implementations.

Microbenchmark:
$ ./python -m timeit -s "import re"  "re.match('', '')"

Results:
3.1: 2.61 usec per loop
3.2: 5.77 usec per loop
3.3: 11.8 usec per loop
msg174772 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-04 10:19
Here is a patch which reverts to the 3.1 implementation (and adds some optimization).

Microbenchmark:
$ ./python -m timeit -s "import re"  "re._compile('', 0)"

Results:
3.1: 1.45 usec per loop
3.2: 4.45 usec per loop
3.3: 9.91 usec per loop
3.4patched: 0.89 usec per loop
msg174918 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2012-11-05 16:51
Attached a proof of concept that removes the caching for re.compile, as suggested in msg174599.
msg174924 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-05 17:45
Ezio, I agree with you, but I think this should be a separate issue.
msg174927 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2012-11-05 18:30
I think the lru_cache should be kept if possible (i.e. I'm -0.5 on your patch).  If this results in a slowdown (as the mako_v2 benchmark indicates), then there are two options:
  1) optimize lru_cache;
  2) avoid using it for regular expressions compiled with re.compile.
Both changes should improve performance.
This issue is about the re module, so I think it's OK to cover any changes to the re module here, and improvements to lru_cache (e.g. a faster make_key) should be moved to a new issue.
If this is not enough to restore performance, we might decide to go back to a custom cache instead of using lru_cache.
msg174993 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-11-06 16:46
> I think the lru_cache should be kept if possible (i.e. I'm -0.5 on your patch).

This patch is only meant to show the upper bound that should be aimed for.  I tried to optimize lru_cache(), but gained only 15%.  I'm afraid that serious optimization is impossible without rewriting lru_cache() in C.

>   2) avoid using it for regular expressions compiled with re.compile;

I do not see how it can significantly affect performance.
msg175037 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2012-11-06 23:59
Maybe lru_cache() should have a key argument so you can specify a specialized key function.  So you might have

    def _compile_key(args, kwds, typed):
        return args

    @functools.lru_cache(maxsize=500, key=_compile_key)
    def _compile(pattern, flags):
        ...
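A stand-in for such a key-aware cache decorator might look like the following sketch (hypothetical: functools.lru_cache has no key parameter, and the clear-when-full eviction here is only to keep the example short):

    import functools

    def cache_with_key(maxsize=500, *, key):
        # Hypothetical decorator in the spirit of the suggestion above.
        def decorating_function(user_function):
            cache = {}

            @functools.wraps(user_function)
            def wrapper(*args, **kwds):
                k = key(args, kwds, False)  # same shape as _compile_key(args, kwds, typed)
                try:
                    return cache[k]
                except KeyError:
                    result = user_function(*args, **kwds)
                    if len(cache) >= maxsize:
                        cache.clear()
                    cache[k] = result
                    return result

            return wrapper
        return decorating_function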
msg181601 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2013-02-07 13:09
> Maybe lru_cache() should have a key argument so you can specify a specialized key function.

It would be interesting to look at the microbenchmarking results.
msg182192 - (view) Author: Terry J. Reedy (terry.reedy) * (Python committer) Date: 2013-02-15 23:47
Since switching from a simple custom cache to the generalized lru cache caused a major slowdown, I think the change should be reverted. A dict plus either occasional clearing or a circular queue with a first-in, first-out discipline is quite sufficient. There is no need for the extra complexity of a least-recently-used discipline.
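A size-limited dict with the first-in, first-out discipline described here can be sketched in a few lines (illustrative only):

    from collections import deque

    class FIFOCache:
        # Minimal sketch of a size-limited FIFO cache: no reordering on
        # hits, so a lookup is a single dict access.
        def __init__(self, maxsize=500):
            self._data = {}
            self._order = deque()  # insertion order, oldest on the left
            self._maxsize = maxsize

        def get(self, key, default=None):
            return self._data.get(key, default)

        def put(self, key, value):
            if key not in self._data:
                if len(self._data) >= self._maxsize:
                    del self._data[self._order.popleft()]
                self._order.append(key)
            self._data[key] = value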
msg182199 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2013-02-16 00:00
For 3.4 #14373 might solve the issue.
msg183856 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2013-03-09 23:35
A few thoughts:

* The LRU cache was originally intended for I/O-bound calls, not for tight, frequently invoked, computationally bound calls like re.compile.

* The Py3.3 version of lru_cache() favors size optimizations (i.e. it uses only one dictionary instead of the two used by OrderedDict, and keyword arguments are flattened into a single list instead of a nested structure).  Also, the 3.3 version ensures that __hash__ is not called more than once for a given key (this change helps objects that have a slow hash function, and it helps solve a reentrancy problem with recursive cached function calls).  The cost of these changes is that _make_key is slower than it was before.

* I had hoped to get in a C version of _make_key before Py3.3 went out but I didn't have time.  Going forward, the lru_cache() will likely have a C-implementation that is blindingly fast.  

* For the re module, it might make sense to return to custom logic in the re module that implements size-limited caching without the overhead of 1) LRU logic, 2) general-purpose argument handling, 3) reentrancy or locking logic, and 4) statistics tracking.
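The hash-once behavior mentioned in the second point is achieved with a small list subclass that precomputes its hash; simplified from the 3.3 functools source, it looks like:

    class _HashedSeq(list):
        # The key tuple is hashed exactly once, up front, so a slow or
        # reentrant __hash__ on an argument runs one time per key instead
        # of on every dict probe.
        __slots__ = ('hashvalue',)

        def __init__(self, tup, hash=hash):
            self[:] = tup
            self.hashvalue = hash(tup)

        def __hash__(self):
            return self.hashvalue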
msg183857 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2013-03-10 00:04
Until the lru_cache can be sped up significantly, I recommend just accepting Serhiy's patch to go back to 3.2 logic in the regex module.

In the meantime, I'll continue to work on improving the speed of _make_key().
msg183913 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2013-03-11 02:21
Raymond's plan sounds good to me.

We may also want to tweak the 3.3 lru_cache docs to note the trade-offs involved in using it. Perhaps something like:

"As a general purpose cache, lru_cache needs to be quite pessimistic in deriving non-conflicting keys from the supplied arguments. When caching the results of CPU-bound calculations, the cost of deriving non-conflicting keys may need be assessed against the typical cost of the underlying calculation."

Which does give me a thought - perhaps lru_cache in 3.4 could accept a "key" argument that is called as "key(*args, **kwds)" to derive the cache key? (that would be a separate issue, of course)
msg183917 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2013-03-11 03:13
Serhiy, please go ahead and apply your patch.  Be sure to restore the re cache tests that existed in Py3.2 as well.

Thank you.
msg183967 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2013-03-11 15:40
Raymond, actually my patch reverts to the 3.1 logic; lru_cache has been used since 3.2.

There are no additional re cache tests in 3.2 or 3.1.
msg183968 - (view) Author: Richard Oudkerk (sbt) * (Python committer) Date: 2013-03-11 15:48
> Which does give me a thought - perhaps lru_cache in 3.4 could accept a 
> "key" argument that is called as "key(*args, **kwds)" to derive the cache 
> key? (that would be a separate issue, of course)

Agreed.  I suggested the same in an earlier post.
msg184353 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2013-03-16 21:01
New changeset 6951d7b8d3ad by Serhiy Storchaka in branch '3.2':
Issue #16389: Fixed an issue number in previous commit.
http://hg.python.org/cpython/rev/6951d7b8d3ad

New changeset 7b737011d822 by Serhiy Storchaka in branch '3.3':
Issue #16389: Fixed an issue number in previous commit.
http://hg.python.org/cpython/rev/7b737011d822

New changeset 6898e1afc216 by Serhiy Storchaka in branch 'default':
Issue #16389: Fixed an issue number in previous commit.
http://hg.python.org/cpython/rev/6898e1afc216
History
Date User Action Args
2022-04-11 14:57:38  admin  set  github: 60593
2013-04-06 05:29:33  serhiy.storchaka  set  status: open -> closed; resolution: fixed; stage: test needed -> resolved
2013-03-16 21:01:51  python-dev  set  nosy: + python-dev; messages: + msg184353
2013-03-11 15:48:23  sbt  set  messages: + msg183968
2013-03-11 15:40:03  serhiy.storchaka  set  messages: + msg183967
2013-03-11 03:13:09  rhettinger  set  priority: normal -> high; assignee: rhettinger -> serhiy.storchaka; messages: + msg183917; stage: needs patch -> test needed
2013-03-11 02:21:38  ncoghlan  set  messages: + msg183913
2013-03-10 00:04:33  rhettinger  set  messages: + msg183857
2013-03-09 23:35:19  rhettinger  set  assignee: rhettinger; messages: + msg183856
2013-03-09 05:34:14  bdkearns  set  nosy: + bdkearns
2013-02-16 00:00:00  ezio.melotti  set  messages: + msg182199
2013-02-15 23:47:25  terry.reedy  set  nosy: + terry.reedy; messages: + msg182192
2013-02-07 13:09:13  serhiy.storchaka  set  messages: + msg181601
2013-02-07 12:31:29  asvetlov  set  nosy: + asvetlov
2012-11-06 23:59:32  sbt  set  nosy: + sbt; messages: + msg175037
2012-11-06 16:46:02  serhiy.storchaka  set  messages: + msg174993
2012-11-06 16:00:02  jcea  set  nosy: + jcea
2012-11-05 18:30:05  ezio.melotti  set  messages: + msg174927
2012-11-05 17:45:16  serhiy.storchaka  set  messages: + msg174924
2012-11-05 16:51:55  ezio.melotti  set  files: + issue16389.diff; messages: + msg174918
2012-11-04 10:19:32  serhiy.storchaka  set  files: + re_compile_cache.patch; keywords: + patch; messages: + msg174772
2012-11-03 15:43:19  serhiy.storchaka  set  keywords: + 3.2regression; nosy: + serhiy.storchaka; messages: + msg174638
2012-11-03 15:08:09  ncoghlan  set  messages: + msg174630
2012-11-03 14:31:41  zzzeek  set  messages: + msg174626
2012-11-03 02:29:30  ezio.melotti  set  messages: + msg174599
2012-11-03 01:44:33  ncoghlan  set  messages: + msg174598
2012-11-03 01:30:57  ncoghlan  link  issue16390 superseder
2012-11-03 00:48:15  barry  set  nosy: + barry
2012-11-02 20:36:35  brett.cannon  set  messages: + msg174570
2012-11-02 20:17:25  pitrou  set  messages: + msg174567
2012-11-02 20:11:34  brett.cannon  set  messages: + msg174566
2012-11-02 20:03:28  brett.cannon  set  messages: + msg174563
2012-11-02 18:54:51  pitrou  set  nosy: + pitrou; messages: + msg174559
2012-11-02 18:32:21  pitrou  set  stage: needs patch; versions: + Python 3.4
2012-11-02 18:29:04  ezio.melotti  set  keywords: + 3.3regression; nosy: + ezio.melotti
2012-11-02 18:23:35  pjenvey  create