
Author brett.cannon
Recipients brett.cannon, ezio.melotti, ncoghlan, pitrou, pjenvey, rhettinger, zzzeek
Date 2012-11-02.20:36:34
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1351888595.19.0.777633981148.issue16389@psf.upfronthosting.co.za>
In-reply-to
Content
re.compile() calls _compile(), which has the lru_cache decorator, so it will trigger it. But you make a good point, Antoine, that it's the hit overhead here that we care about: as long as misses don't get worse, the cost of computing the value to be cached should overwhelm anything the LRU does.
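As a sanity check on that hit path, a toy function wrapped the same way shows lru_cache counting hits versus misses (compile_pattern here is a hypothetical stand-in, not the real re._compile; 500 matches re's historical cache size):

```python
import functools

@functools.lru_cache(maxsize=500)
def compile_pattern(pattern, flags=0):
    # Stand-in for re._compile: pretend the expensive work happens here.
    return (pattern, flags)

compile_pattern("a+b")   # miss: key is built and the value is computed
compile_pattern("a+b")   # hit: only the key-building overhead is paid
info = compile_pattern.cache_info()
# info.hits == 1, info.misses == 1
```

On the second call only _make_key runs, which is why its speed dominates the cached-compile benchmark.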

With a simplified _make_key() I can get regex_compile (with cache clearing turned off) to be 1.28x faster by making it::

    def _make_key(args, kwds, typed):
        # Positional-only calls take the cheapest path; keyword
        # arguments are sorted so call order does not change the key.
        if not typed:
            if len(kwds) == 0:
                return args, ()
            else:
                return args, tuple(sorted(kwds.items()))
        else:
            if len(kwds) == 0:
                return (tuple((type(arg), arg) for arg in args), ())
            else:
                return (tuple((type(arg), arg) for arg in args),
                        tuple((type(v), (k, v)) for k, v in kwds.items()))

That might not be the fastest way to handle keyword arguments (regex_compile with caching, but with the len(kwds) trick left out, becomes 1.13x slower), but at least for the common case of positional-only arguments it seems faster, and the code is easier to read IMO.
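To see what the two modes actually produce, here is a self-contained copy of the simplified key builder (make_key is a hypothetical stand-in name for the private helper). It also shows why the typed branch wraps each argument with its type: 1 and 1.0 compare and hash equal, so untyped they share a cache slot, while typed they do not:

```python
def make_key(args, kwds, typed=False):
    # Self-contained copy of the simplified key builder sketched above;
    # the real helper is functools' private _make_key.
    if not typed:
        if len(kwds) == 0:
            return args, ()
        return args, tuple(sorted(kwds.items()))
    if len(kwds) == 0:
        return tuple((type(arg), arg) for arg in args), ()
    return (tuple((type(arg), arg) for arg in args),
            tuple((type(v), (k, v)) for k, v in kwds.items()))

# Untyped: 1 == 1.0 and they hash alike, so the keys collide.
assert make_key((1,), {}) == make_key((1.0,), {})
# Typed: the (type, value) pairs differ, so each gets its own slot.
assert make_key((1,), {}, typed=True) != make_key((1.0,), {}, typed=True)
```

The len(kwds) == 0 shortcut is what makes the positional-only path cheap: no sort, no generator over keyword items, just a two-tuple around args.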
History
Date User Action Args
2012-11-02 20:36:35  brett.cannon  set  recipients: + brett.cannon, rhettinger, ncoghlan, pitrou, pjenvey, ezio.melotti, zzzeek
2012-11-02 20:36:35  brett.cannon  set  messageid: <1351888595.19.0.777633981148.issue16389@psf.upfronthosting.co.za>
2012-11-02 20:36:35  brett.cannon  link  issue16389 messages
2012-11-02 20:36:34  brett.cannon  create