## Background
All `hashlib` computations, as well as `binascii.crc32` and `zlib.crc32`, release the GIL around their computational core. But they either use a hard-coded length check to decide when to do so, or always do it.
That already accomplishes the larger good of releasing the GIL on big computations. But it _probably_ just slows down smaller ones, where releasing the GIL adds more overhead than a context switch to another thread could meaningfully return in forward progress.
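To make that concrete, here is a minimal, pure-Python sketch of the effect. It assumes today's behavior where `hashlib` `update()` calls only drop the GIL at or above a hard-coded size (HASHLIB_GIL_MINSIZE, currently 2048 in `Modules/hashlib.h`); the chunk sizes and byte totals are arbitrary illustration values, and measured numbers will vary by machine.

```python
# Illustration only: threads making update() calls below the GIL release
# threshold serialize on the GIL, while larger updates can overlap.
import hashlib
import threading
import time

TOTAL_BYTES = 64 * 2**20  # 64 MiB of input per measurement

def hash_chunks(chunk: bytes, count: int) -> None:
    h = hashlib.sha256()
    for _ in range(count):
        h.update(chunk)  # the GIL is only dropped for "big enough" chunks
    h.hexdigest()

def elapsed(chunk_size: int, threads: int) -> float:
    count = TOTAL_BYTES // chunk_size // threads
    chunk = b"\x00" * chunk_size
    workers = [threading.Thread(target=hash_chunks, args=(chunk, count))
               for _ in range(threads)]
    start = time.perf_counter()
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return time.perf_counter() - start

if __name__ == "__main__":
    for size in (64, 1024, 4096, 65536):  # straddles the 2048-byte threshold
        print(f"chunk={size:>6}: 1 thread {elapsed(size, 1):.2f}s, "
              f"4 threads {elapsed(size, 4):.2f}s")
```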
## Desire 1
Determine whether a threshold should exist at all (should we just always release the GIL for these?) and, if so, allow it to be tuned on a per-algorithm basis.
This comes at the same time as other enhancements like bpo-47102 and its Windows and macOS cousins, which could shift us towards using OS kernel APIs for a subset of algorithms where available. Those APIs may effectively "always release" the GIL when they are built on virtual IO calls, as bpo-47102's is.
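To make the shape of that knob concrete, here is a pure-Python model of the decision, generalized from today's single constant to a per-algorithm table. Nothing below exists in CPython; the table contents, the default, and the helper name are all hypothetical.

```python
# Hypothetical model only: CPython today uses one hard-coded constant
# (HASHLIB_GIL_MINSIZE) for every digest; this sketches what a tunable,
# per-algorithm threshold could look like at the logic level.

# Bytes at or above which an update() would drop the GIL for that algorithm.
# 0 means "always release", which is one possible answer to the question of
# whether a threshold should exist at all.
GIL_RELEASE_MINSIZE = {
    "md5": 2048,
    "sha1": 2048,
    "sha256": 2048,
    "blake2b": 0,
}
DEFAULT_MINSIZE = 2048  # fallback for algorithms without a tuned value

def should_release_gil(algorithm: str, data_len: int) -> bool:
    """Decide whether an update() of data_len bytes should drop the GIL."""
    return data_len >= GIL_RELEASE_MINSIZE.get(algorithm, DEFAULT_MINSIZE)
```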
## Desire 2
When multiple implementations of an algorithm are available, allow the user to configure data-length thresholds that determine which one is used, without meaningfully slowing most things down by adding such logic. Different implementations have different setup and finalization costs, which can make them better suited to large data than to tiny inputs.
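As a sketch of this, here is a length-based dispatcher in Python. `zlib.crc32` is real; the accelerated implementation and the 256 KiB crossover value are stand-ins invented for illustration, and that crossover is exactly the kind of number a tuning tool would need to discover per platform.

```python
# Sketch only: dispatch between two crc32 implementations by input length.
# zlib.crc32 is real; `fast_impl` stands in for a hypothetical accelerated
# implementation with higher setup/finalization cost. The 256 KiB crossover
# is invented for illustration.
import zlib
from typing import Callable, Optional

Crc32Impl = Callable[[bytes, int], int]

def make_crc32(fast_impl: Optional[Crc32Impl],
               crossover: int = 256 * 1024) -> Crc32Impl:
    """Build a crc32 callable that picks an implementation by data length."""
    def crc32(data: bytes, value: int = 0) -> int:
        if fast_impl is not None and len(data) >= crossover:
            # Large inputs amortize the fast path's setup cost.
            return fast_impl(data, value)
        # Small inputs: the plain implementation's lower latency wins.
        return zlib.crc32(data, value)
    return crc32

# With no accelerated implementation available, behavior is unchanged.
crc32 = make_crc32(fast_impl=None)
assert crc32(b"hello, world") == zlib.crc32(b"hello, world")
```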
---
I'm marking this low priority as it veers towards over-optimization. :) A good first step would be creating benchmarks and a tool to live in Tools/ that people could run on their target platform to get a suggestion for which thresholds work best for their needs.
Related inspiring work: OSes often benchmark several algorithm implementations up front to pick a "best" one for a given platform (for example, see what the Linux kernel does for hashes and RAID algorithms). This extends that idea and acknowledges latency as important, not exclusively throughput.
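A rough sketch of the kind of sweep such a Tools/ script could run is below. Everything about it is illustrative: the algorithm list, sizes, and output are made up, and a pure-Python harness can only report per-size latency and throughput; finding the actual GIL-release crossover would additionally need multi-threaded runs and, ideally, the tunable threshold described in Desire 1.

```python
# Rough sketch of a tuning benchmark: for each algorithm, time single-threaded
# update() calls across a sweep of input sizes and report the per-size cost.
# A real tool would also measure multi-threaded scaling and, where relevant,
# competing implementations of the same algorithm.
import hashlib
import timeit

SIZES = [64, 256, 1024, 2048, 4096, 16384, 65536]
ALGORITHMS = ["md5", "sha1", "sha256", "blake2b"]

def ns_per_call(algorithm: str, size: int, repeats: int = 5) -> float:
    """Best-of-N nanoseconds for a single update() of `size` bytes."""
    data = b"\xa5" * size
    h = hashlib.new(algorithm)
    timer = timeit.Timer(lambda: h.update(data))
    number = max(1, 1_000_000 // max(size, 1))
    best = min(timer.repeat(repeat=repeats, number=number))
    return best / number * 1e9

def main() -> None:
    for algorithm in ALGORITHMS:
        print(algorithm)
        for size in SIZES:
            cost = ns_per_call(algorithm, size)
            print(f"  {size:>7} bytes: {cost:>10.0f} ns/update "
                  f"({size / cost * 1e9 / 2**20:.0f} MiB/s)")

if __name__ == "__main__":
    main()
```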