More compact range iterator #89189

Closed
serhiy-storchaka opened this issue Aug 27, 2021 · 27 comments
Labels
3.11 only security fixes interpreter-core (Objects, Python, Grammar, and Parser dirs) pending The issue will be closed if no feedback is provided performance Performance or resource usage

Comments

@serhiy-storchaka
Member

BPO 45026
Nosy @gvanrossum, @rhettinger, @ambv, @serhiy-storchaka, @sweeneyde
PRs
  • gh-89189: More compact range iterator #27986
  • bpo-45026: More compact range iterator (alt) #28176
  • Files
  • three-way-comparison.log
  • Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = None
    closed_at = None
    created_at = <Date 2021-08-27.03:03:33.269>
    labels = ['interpreter-core', '3.11', 'performance']
    title = 'More compact range iterator'
    updated_at = <Date 2021-09-27.14:46:46.491>
    user = 'https://github.com/serhiy-storchaka'

    bugs.python.org fields:

    activity = <Date 2021-09-27.14:46:46.491>
    actor = 'lukasz.langa'
    assignee = 'none'
    closed = False
    closed_date = None
    closer = None
    components = ['Interpreter Core']
    creation = <Date 2021-08-27.03:03:33.269>
    creator = 'serhiy.storchaka'
    dependencies = []
    files = ['50306']
    hgrepos = []
    issue_num = 45026
    keywords = ['patch']
    message_count = 22.0
    messages = ['400390', '400392', '400396', '400399', '400528', '401085', '401411', '401419', '402139', '402335', '402347', '402349', '402452', '402477', '402534', '402543', '402555', '402639', '402649', '402708', '402712', '402724']
    nosy_count = 5.0
    nosy_names = ['gvanrossum', 'rhettinger', 'lukasz.langa', 'serhiy.storchaka', 'Dennis Sweeney']
    pr_nums = ['27986', '28176']
    priority = 'normal'
    resolution = None
    stage = 'patch review'
    status = 'open'
    superseder = None
    type = 'resource usage'
    url = 'https://bugs.python.org/issue45026'
    versions = ['Python 3.11']

    @serhiy-storchaka
    Member Author

    The proposed PR provides a more compact implementation of the range iterator. It consumes less memory and produces smaller pickles. It is presumably also faster, because it performs simpler arithmetic on each iteration (no multiplications).

    @serhiy-storchaka serhiy-storchaka added 3.11 only security fixes interpreter-core (Objects, Python, Grammar, and Parser dirs) performance Performance or resource usage labels Aug 27, 2021
    @serhiy-storchaka
    Member Author

    Currently the range iterator contains four integers: index, start, step and len. On each iteration, index is increased until it reaches len, and the result is calculated as start + index*step.

    In the proposed PR the range iterator contains only three integers: index was removed. Instead, len counts the number of iterations that remain, and start is increased by step on every iteration. Less memory, simpler calculation.

    The pickle no longer contains index, but __setstate__ is kept for compatibility.

    The only incompatible change is that calling __setstate__ repeatedly will have a different effect. Currently, calling __setstate__ with the same values has no effect; with this PR it will advance the iterator. But Python only calls __setstate__ once, on a just-created object, when unpickling or copying.
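As a rough pure-Python model (not the actual C implementation), the two schemes compare like this; iterate_current and iterate_proposed are illustrative names, not functions from the patch:

```python
def iterate_current(start, step, length):
    """Current scheme: four fields (index, start, step, len);
    each value is produced with a multiplication."""
    index = 0
    while index < length:
        yield start + index * step
        index += 1

def iterate_proposed(start, step, length):
    """Proposed scheme: three fields; start is advanced in place
    and len counts the iterations that remain."""
    while length > 0:
        yield start
        start += step
        length -= 1

assert list(iterate_current(0, 3, 4)) == [0, 3, 6, 9]
assert list(iterate_proposed(0, 3, 4)) == [0, 3, 6, 9]
assert list(iterate_proposed(10, -2, 5)) == [10, 8, 6, 4, 2]
```

Both produce the same sequence; the proposed scheme merely trades the per-item multiplication for an in-place addition.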

    @sweeneyde
    Member

    Is it worth removing the len field as well and lazily using get_len_of_range() as needed?

    Then the hot function can look something like:

    static PyObject *
    rangeiter_next(rangeiterobject *r)
    {
        long result = r->start;
        if (result < r->stop) {
            r->start += r->step;
            return PyLong_FromLong(result);
        }
        return NULL;
    }

    @serhiy-storchaka
    Member Author

    step can be negative, so the condition needs to be more complex: ((r->step > 0) ? (result < r->stop) : (result > r->stop)). And it would look much more complex for longrangeiterobject.
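A pure-Python sketch of the direction-aware termination test (it is the sign of step, not stop, that selects the comparison); stop_iteration is an illustrative helper, not CPython code:

```python
def stop_iteration(current, stop, step):
    # With a positive step we are done once current >= stop;
    # with a negative step, once current <= stop.
    return current >= stop if step > 0 else current <= stop

assert stop_iteration(10, 10, 3)       # reached stop counting up
assert not stop_iteration(9, 10, 3)
assert stop_iteration(0, 0, -3)        # reached stop counting down
assert not stop_iteration(1, 0, -3)
```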

    @serhiy-storchaka
    Member Author

    Microbenchmarks show some speed up:

    Iterating large range:

    $ ./python -m timeit -s 'r = range(0, 10**20, 3**35)' 'list(r)'
    Before: 2000 loops, best of 5: 199 usec per loop
    After:  2000 loops, best of 5: 113 usec per loop

    Unpickling:

    $ ./python -m timeit -s 'from pickle import dumps, loads; p = dumps([iter(range(i)) for i in range(1000)])' 'loads(p)'
    Before: 500 loops, best of 5: 476 usec per loop
    After:  500 loops, best of 5: 363 usec per loop

    I did not observe any difference in iterating small ranges or in pickling.

    Smaller size in memory, smaller pickles.

    @serhiy-storchaka
    Member Author

    PR 28176 implements the idea proposed by Dennis in msg400396. It keeps the current and stop values instead of the initial start, index and initial length. I do not have data yet, but it is expected that its iteration may be faster (for large integers), while __length_hint__ should be slower.

    @ambv
    Contributor

    ambv commented Sep 8, 2021

    I like Dennis' idea and Serhiy's implementation in #72363. It's a bit of a larger change compared to #72173 but I think it's worth it: I expect iteration speed is more important than len() speed for range objects.

    @serhiy-storchaka
    Member Author

    I have not benchmarked PR 28176 yet and do not know whether it has advantages over PR 27986, or how large they are. A slower __length_hint__ can make list(range(...)) slower for small ranges, but I do not know how small.

    @serhiy-storchaka
    Member Author

    Iterating large integers:

    $ ./python -m pyperf timeit -s 'r = range(0, 10**20, 3**35)' 'for i in r: pass'

    baseline: Mean +- std dev: 223 us +- 10 us
    PR 27986: Mean +- std dev: 128 us +- 4 us
    PR 28176: Mean +- std dev: 99.0 us +- 3.7 us

    $ ./python -m pyperf timeit -s 'r = range(0, 10**20, 3**35)' 'list(r)'
    baseline: Mean +- std dev: 191 us +- 13 us
    PR 27986: Mean +- std dev: 107 us +- 7 us
    PR 28176: Mean +- std dev: 91.3 us +- 2.4 us

    Unpickling:

    $ ./python -m pyperf timeit -s 'from pickle import dumps, loads; p = dumps([iter(range(i)) for i in range(1000)])' 'loads(p)'
    baseline: Mean +- std dev: 535 us +- 29 us
    PR 27986: Mean +- std dev: 420 us +- 15 us
    PR 28176: Mean +- std dev: 418 us +- 17 us
    
    $ ./python -m pyperf timeit -s 'from pickle import dumps, loads; p = dumps([iter(range(i*10**10)) for i in range(1000)])' 'loads(p)'
    baseline: Mean +- std dev: 652 us +- 37 us
    PR 27986: Mean +- std dev: 530 us +- 43 us
    PR 28176: Mean +- std dev: 523 us +- 17 us

    It seems PR 28176 is slightly faster than PR 27986 when iterating over long integers.

    @ambv
    Contributor

    ambv commented Sep 21, 2021

    Looks like #72363 is faster across the board. One missing benchmark here is comparing len() speed, as this is where the results will be reversed. It would be interesting to see to what extent.

    @serhiy-storchaka
    Member Author

    length_hint(), not len(). Its cost is included in the microbenchmark for list(), where it is followed by iterating 2000 items.

    Calling operator.length_hint() in Python:

    $ ./python -m pyperf timeit -s 'it = iter(range(1000)); from operator import length_hint' 'length_hint(it)'
    baseline: Mean +- std dev: 109 ns +- 6 ns
    PR 27986: Mean +- std dev: 109 ns +- 5 ns
    PR 28176: Mean +- std dev: 115 ns +- 5 ns
    
    $ ./python -m pyperf timeit -s 'it = iter(range(0, 10**20, 3**35)); from operator import length_hint' 'length_hint(it)'
    baseline: Mean +- std dev: 114 ns +- 6 ns
    PR 27986: Mean +- std dev: 95.6 ns +- 4.3 ns
    PR 28176: Mean +- std dev: 285 ns +- 13 ns

    Indirect call from C (it includes overhead for calling list() and iter() in Python):

    $ ./python -m pyperf timeit -s 'r = range(10**20, 10**20+1, 3**35)' 'list(iter(r))'
    baseline: Mean +- std dev: 331 ns +- 16 ns
    PR 27986: Mean +- std dev: 300 ns +- 16 ns
    PR 28176: Mean +- std dev: 391 ns +- 18 ns

    With a few experiments I found that PR 28176 is faster than PR 27986 for list(iter(range(...))) if the range is larger than 40-100 items.
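For context, here is a pure-Python sketch of the length computation that get_len_of_range() performs in C; the stop-based iterator of PR 28176 has to redo this division on every __length_hint__ call, which is presumably where the extra time in the 285 ns result above goes (len_of_range is an illustrative name):

```python
def len_of_range(start, stop, step):
    # Number of items in range(start, stop, step), step != 0;
    # a division is needed, unlike simply reading a stored len field.
    if step > 0:
        lo, hi = start, stop
    else:
        lo, hi = stop, start
        step = -step
    if lo >= hi:
        return 0
    return (hi - lo - 1) // step + 1

assert len_of_range(0, 10, 3) == len(range(0, 10, 3)) == 4
assert len_of_range(10, 0, -3) == len(range(10, 0, -3)) == 4
assert len_of_range(5, 5, 1) == 0
```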

    @gvanrossum
    Member

    May the best PR win. :-)

    @ambv
    Contributor

    ambv commented Sep 22, 2021

    Since len timings for ranges of 100 items are negligible anyway, I personally still favor #72363 which is clearly faster during iteration.

    @sweeneyde
    Member

    I benchmarked #72173 and #72363 on "for i in range(10000): pass" and found that #72173 was faster for this (likely most common) case of relatively small integers.

    Mean +- std dev: [main] 204 us +- 5 us -> [GH-27986] 194 us +- 4 us: 1.05x faster
    Mean +- std dev: [main] 204 us +- 5 us -> [GH-28176] 223 us +- 6 us: 1.09x slower

    It's possible to have different implementations for small/large integers, but IMO it's probably best to keep consistency and go with #72173.

    @ambv
    Contributor

    ambv commented Sep 23, 2021

    Good point benchmarking small iterations too, Dennis. I missed that.

    Agreed then, #72173 looks like a winner.

    @serhiy-storchaka
    Member Author

    I do not see any difference in iterating small integers:

    $ ./python -m pyperf timeit -s 'r = range(10000)' 'for i in r: pass'
    PR 27986: Mean +- std dev: 174 us +- 9 us
    PR 28176: Mean +- std dev: 172 us +- 10 us

    @ambv
    Contributor

    ambv commented Sep 24, 2021

    Serhiy, right: looks like the difference stems from recreating the range objects, not from iteration. But Dennis' observation still stands: using `for i in range(...):` is a very popular idiom. If that becomes slower than the status quo, we will be making existing Python programs slower as well.

    @serhiy-storchaka
    Member Author

    There is nothing in the code that could explain a measurable difference in creating the range objects or the range iterators. And indeed, the difference is within the standard deviation, so it is effectively non-existent.

    @sweeneyde
    Member

    I did more benchmarks on my Windows laptop, and it seems the difference goes away after using PGO.

    The benchmarking program:

    #################################

    from pyperf import Runner

    runner = Runner()

    for n in [10, 100, 1000, 10_000, 100_000]:
        runner.timeit(f"for i in range({n}): pass",
                      stmt=f"for i in range({n}): pass")

        runner.timeit(f"for i in it_{n}: pass",
                      setup=f"it = iter(range({n}))",
                      stmt="for i in it: pass")

        runner.timeit(f"deque(it_{n})",
                      setup=(f"from collections import deque; "
                             f"it = iter(range({n}))"),
                      stmt="deque(it, maxlen=0)")

        runner.timeit(f"list(iter(range({n})))",
                      stmt=f"list(iter(range({n})))")
    

    ###################################

    The results (without PGO):

    PS C:\Users\sween\Source\Repos\cpython2\cpython> .\python.bat -m pyperf compare_to .\20e3149c175a24466c7d1c352f8ff2c11effc489.json .\cffa90a8b0057d7e7456571045f2fb7b9ceb426f.json -G
    Running Release|x64 interpreter...
    Slower (15):

    • list(iter(range(100))): 741 ns +- 13 ns -> 836 ns +- 11 ns: 1.13x slower
    • for i in range(100000): pass: 2.05 ms +- 0.05 ms -> 2.26 ms +- 0.06 ms: 1.10x slower
    • list(iter(range(1000))): 12.2 us +- 0.1 us -> 13.2 us +- 0.2 us: 1.08x slower
    • for i in range(10000): pass: 203 us +- 4 us -> 219 us +- 4 us: 1.08x slower
    • for i in range(100): pass: 1.18 us +- 0.02 us -> 1.27 us +- 0.03 us: 1.08x slower
    • for i in range(1000): pass: 18.1 us +- 0.3 us -> 19.5 us +- 0.3 us: 1.07x slower
    • list(iter(range(10000))): 145 us +- 7 us -> 152 us +- 2 us: 1.05x slower
    • list(iter(range(100000))): 1.98 ms +- 0.06 ms -> 2.06 ms +- 0.05 ms: 1.04x slower
    • for i in range(10): pass: 265 ns +- 9 ns -> 272 ns +- 8 ns: 1.03x slower
    • deque(it_1000): 324 ns +- 4 ns -> 332 ns +- 8 ns: 1.02x slower
    • deque(it_100000): 327 ns +- 5 ns -> 333 ns +- 7 ns: 1.02x slower
    • list(iter(range(10))): 357 ns +- 7 ns -> 363 ns +- 3 ns: 1.02x slower
    • deque(it_10): 325 ns +- 5 ns -> 330 ns +- 5 ns: 1.01x slower
    • deque(it_100): 325 ns +- 6 ns -> 329 ns +- 4 ns: 1.01x slower
    • deque(it_10000): 326 ns +- 7 ns -> 330 ns +- 4 ns: 1.01x slower

    Faster (2):

    • for i in it_10: pass: 26.0 ns +- 1.4 ns -> 25.3 ns +- 0.3 ns: 1.03x faster
    • for i in it_1000: pass: 25.7 ns +- 0.7 ns -> 25.3 ns +- 0.4 ns: 1.02x faster

    Benchmark hidden because not significant (3): for i in it_100: pass, for i in it_10000: pass, for i in it_100000: pass

    Geometric mean: 1.03x slower

    ###################################

    The results (with PGO):

    PS C:\Users\sween\Source\Repos\cpython2\cpython> .\python.bat -m pyperf compare_to .\PGO-20e3149c175a24466c7d1c352f8ff2c11effc489.json .\PGO-cffa90a8b0057d7e7456571045f2fb7b9ceb426f.json -G
    Running PGUpdate|x64 interpreter...
    Slower (7):

    • for i in it_100: pass: 20.3 ns +- 0.5 ns -> 21.3 ns +- 0.7 ns: 1.05x slower
    • for i in it_10000: pass: 20.4 ns +- 0.6 ns -> 21.4 ns +- 0.8 ns: 1.05x slower
    • for i in it_100000: pass: 20.5 ns +- 0.5 ns -> 21.4 ns +- 0.6 ns: 1.05x slower
    • for i in it_1000: pass: 20.3 ns +- 0.5 ns -> 21.2 ns +- 0.5 ns: 1.05x slower
    • for i in it_10: pass: 20.3 ns +- 0.5 ns -> 21.1 ns +- 0.5 ns: 1.04x slower
    • for i in range(10): pass: 214 ns +- 3 ns -> 219 ns +- 11 ns: 1.03x slower
    • deque(it_100000): 288 ns +- 5 ns -> 291 ns +- 12 ns: 1.01x slower

    Faster (7):

    • list(iter(range(10000))): 112 us +- 3 us -> 96.4 us +- 3.4 us: 1.16x faster
    • list(iter(range(1000))): 9.69 us +- 0.15 us -> 8.39 us +- 0.26 us: 1.15x faster
    • list(iter(range(100000))): 1.65 ms +- 0.04 ms -> 1.48 ms +- 0.04 ms: 1.11x faster
    • list(iter(range(100))): 663 ns +- 11 ns -> 623 ns +- 12 ns: 1.06x faster
    • for i in range(1000): pass: 14.6 us +- 0.5 us -> 14.2 us +- 0.3 us: 1.03x faster
    • for i in range(10000): pass: 162 us +- 2 us -> 159 us +- 3 us: 1.02x faster
    • for i in range(100000): pass: 1.64 ms +- 0.03 ms -> 1.62 ms +- 0.04 ms: 1.01x faster

    Benchmark hidden because not significant (6): deque(it_10), list(iter(range(10))), for i in range(100): pass, deque(it_100), deque(it_1000), deque(it_10000)

    Geometric mean: 1.01x faster

    @ambv
    Contributor

    ambv commented Sep 27, 2021

    Dennis, run your benchmarks with --rigorous to avoid "Benchmark hidden because not significant".

    I note that the second and third benchmarks aren't useful as written because the iterators are exhausted after the first repetition. I could see this in my results; note how the values don't rise with the iterator size:

    for i in it_10: pass: Mean +- std dev: 25.0 ns +- 0.3 ns
    for i in it_100: pass: Mean +- std dev: 25.1 ns +- 0.5 ns
    for i in it_1000: pass: Mean +- std dev: 25.0 ns +- 0.3 ns
    for i in it_10000: pass: Mean +- std dev: 25.0 ns +- 0.3 ns
    for i in it_100000: pass: Mean +- std dev: 25.6 ns +- 0.5 ns

    deque(it_10): Mean +- std dev: 334 ns +- 8 ns
    deque(it_100): Mean +- std dev: 338 ns +- 9 ns
    deque(it_1000): Mean +- std dev: 335 ns +- 9 ns
    deque(it_10000): Mean +- std dev: 336 ns +- 10 ns
    deque(it_100000): Mean +- std dev: 338 ns +- 11 ns

    When I modified those to recreate the iterator on every run, the story was much different.
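The pitfall is easy to demonstrate in plain Python: a pre-built iterator yields its items only once, so every timed repetition after the first runs over an already exhausted iterator and measures an empty loop:

```python
it = iter(range(1000))

first = sum(1 for _ in it)   # first pass consumes all 1000 items
second = sum(1 for _ in it)  # iterator is already exhausted

assert first == 1000
assert second == 0
```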

    @ambv
    Contributor

    ambv commented Sep 27, 2021

    Benchmarks for PGO builds on macOS 10.15 Catalina, Intel MBP 2018.

    Like in Dennis' case, 20e3149 is #72173 and cffa90a is #72363. The difference is that it_ benchmarks create the iterator on each execution. In this case the explicit iterator versions of the for-loop are indistinguishable from the ones using range() directly.

    ################
    ❯ python -m pyperf compare_to /tmp/20e3149c175a24466c7d1c352f8ff2c11effc489-2.json /tmp/cffa90a8b0057d7e7456571045f2fb7b9ceb426f-2.json -G
    Slower (11):

    • deque(it_100): 886 ns +- 22 ns -> 944 ns +- 12 ns: 1.07x slower
    • list(iter(range(100))): 856 ns +- 17 ns -> 882 ns +- 17 ns: 1.03x slower
    • for i in range(100000): pass: 2.20 ms +- 0.02 ms -> 2.26 ms +- 0.03 ms: 1.02x slower
    • for i in range(10000): pass: 219 us +- 1 us -> 223 us +- 5 us: 1.02x slower
    • for i in it_10000: pass: 219 us +- 1 us -> 223 us +- 5 us: 1.02x slower
    • for i in it_100000: pass: 2.20 ms +- 0.03 ms -> 2.24 ms +- 0.04 ms: 1.02x slower
    • for i in it_1000: pass: 20.1 us +- 0.1 us -> 20.4 us +- 0.4 us: 1.02x slower
    • for i in range(1000): pass: 20.2 us +- 0.4 us -> 20.5 us +- 0.3 us: 1.02x slower
    • for i in range(100): pass: 1.50 us +- 0.03 us -> 1.52 us +- 0.03 us: 1.01x slower
    • list(iter(range(10))): 317 ns +- 9 ns -> 320 ns +- 6 ns: 1.01x slower
    • for i in it_100: pass: 1.53 us +- 0.01 us -> 1.54 us +- 0.02 us: 1.01x slower

    Faster (8):

    • list(iter(range(100000))): 2.25 ms +- 0.05 ms -> 2.12 ms +- 0.03 ms: 1.06x faster
    • deque(it_10000): 145 us +- 2 us -> 142 us +- 1 us: 1.03x faster
    • list(iter(range(1000))): 12.6 us +- 0.2 us -> 12.3 us +- 0.1 us: 1.02x faster
    • deque(it_100000): 1.47 ms +- 0.01 ms -> 1.45 ms +- 0.02 ms: 1.02x faster
    • for i in it_10: pass: 309 ns +- 6 ns -> 304 ns +- 3 ns: 1.02x faster
    • list(iter(range(10000))): 147 us +- 2 us -> 145 us +- 2 us: 1.01x faster
    • deque(it_10): 544 ns +- 19 ns -> 537 ns +- 10 ns: 1.01x faster
    • deque(it_1000): 12.6 us +- 0.2 us -> 12.5 us +- 0.2 us: 1.01x faster

    Benchmark hidden because not significant (1): for i in range(10): pass

    Geometric mean: 1.00x slower
    ################

    The results look like a wash here. Let me compare both to main.

    @ambv
    Contributor

    ambv commented Sep 27, 2021

    Well, this is kind of disappointing on my end. I attach the full result of a three-way comparison between main at the time of Serhiy's last merge (3f8b23f) with #72173 (20e3149) and #72363 (cffa90a). The gist is this:

    Geometric mean
    ==============

    20e3149-2: 1.01x slower
    cffa90a-2: 1.01x slower

    At least on my MacBook Pro (all PGO builds), it looks like the status quo is on average faster than either of the candidate PRs.

    @ezio-melotti ezio-melotti transferred this issue from another repository Apr 10, 2022
    @iritkatriel
    Member

    Given the benchmark results, are we abandoning this?

    @iritkatriel iritkatriel added the pending The issue will be closed if no feedback is provided label Aug 18, 2022
    @iritkatriel iritkatriel closed this as not planned Won't fix, can't repro, duplicate, stale Sep 1, 2022
    @serhiy-storchaka
    Member Author

    The difference of 1% is not significant. You can get a larger difference from run to run with the same binary. But for large integers the difference is ~2x. And the difference in memory use and pickle size is not insignificant.

    The original code has changed since then; in particular, there is now a special path in the eval loop for range iteration, so all benchmarks should be rerun.

    It sounds funny, but the small difference in iterating small ranges in #27986 disappeared after applying a simple manual optimization:

             long result = r->start;
    -        r->start += r->step;
    +        r->start = result + r->step;
             r->len--;
             return PyLong_FromLong(result);

    I thought the compiler would be smart enough to do this on its own.

    @serhiy-storchaka
    Member Author

    #27986 is now on par with the baseline when iterating a range of small integers. #28176 is slightly slower.

    $ ./python -m pyperf timeit -s 'r = range(10000)' 'for i in r: pass'
    Baseline: Mean +- std dev: 59.4 us +- 4.2 us
    #27986:   Mean +- std dev: 58.8 us +- 3.4 us
    #28176:   Mean +- std dev: 64.7 us +- 4.2 us
    

    But #28176 is the fastest when iterating a range of large integers.

    $ ./python -m pyperf timeit -s 'r = range(0, 10**20, 3**35)' 'for i in r: pass'
    Baseline: Mean +- std dev: 185 us +- 8 us
    #27986:   Mean +- std dev: 106 us +- 6 us
    #28176:   Mean +- std dev: 78.6 us +- 4.4 us
    

    @gvanrossum
    Member

    I would vote for the PR that is faster when the values are small -- performance for large ranges is not a priority (they occur too rarely, and everything else is slower with them as well).

    @serhiy-storchaka
    Member Author

    I agree. Thank you for review.

    carljm added a commit to carljm/cpython that referenced this issue Dec 1, 2022
    * main: (112 commits)
      pythongh-89189: More compact range iterator (pythonGH-27986)
      ...