
Freeze all modules imported during startup. #89183

Closed
ericsnowcurrently opened this issue Aug 26, 2021 · 96 comments
Assignees
Labels
3.11 only security fixes interpreter-core (Objects, Python, Grammar, and Parser dirs) type-feature A feature request or enhancement

Comments

@ericsnowcurrently
Member

BPO 45020
Nosy @malemburg, @gvanrossum, @warsaw, @brettcannon, @nascheme, @rhettinger, @terryjreedy, @gpshead, @ronaldoussoren, @ncoghlan, @vstinner, @larryhastings, @tiran, @methane, @markshannon, @ericsnowcurrently, @indygreg, @lysnikolaou, @pablogsal, @miss-islington, @brandtbucher, @isidentical, @shihai1991, @FFY00, @softsolsolutions
PRs
  • bpo-45020: Freeze the modules imported during startup. #28107
  • bpo-45020: Add -X frozen_modules=[on|off] to explicitly control use of frozen modules. #28320
  • bpo-45020: Freeze some of the modules imported during startup. #28335
  • bpo-45020: Don't test IDLE with frozen module. #28344
  • [3.10] bpo-45020: Don't test IDLE with frozen module. (GH-28344) #28345
  • [3.9] bpo-45020: Don't test IDLE with frozen module. (GH-28344) #28346
  • bpo-45020: Drop the frozen .h files from the repo. #28375
  • bpo-45020: Revert "Drop the frozen .h files from the repo. (gh-28375)" #28380
  • bpo-45020: Drop the frozen .h files from the repo. #28392
  • bpo-45020: Freeze os, site, and codecs. #28398
  • bpo-45020: Fix build out of source tree #28410
  • bpo-45020: Fix some corner cases for frozen module generation. #28538
  • bpo-45020: Make "make all" output less noisy. #28554
  • bpo-45020: Use a common recipe for freezing modules. #28583
  • bpo-45020: Default to using frozen modules only on non-debug builds. #28590
  • bpo-45020: Do not freeze <pkg>/__init__.py twice. #28635
  • bpo-45020: Add more test cases for frozen modules. #28664
  • Also install __phello__, in addition to __hello__. #28665
  • bpo-45020: Identify which frozen modules are actually aliases. #28655
  • bpo-45020: Default to using frozen modules unless running from source tree. #28940
  • bpo-45020: Add tests for the -X "frozen_modules" option. #28997
  • bpo-45020: Fix strict-prototypes warning (GH-29755) #29755
Dependencies
  • bpo-45186: Marshal output isn't completely deterministic.
  • bpo-45188: De-couple the Windows builds from freezing modules.

Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.


    GitHub fields:

    assignee = 'https://github.com/ericsnowcurrently'
    closed_at = <Date 2021-10-28.19:52:39.815>
    created_at = <Date 2021-08-26.19:23:58.261>
    labels = ['interpreter-core', 'type-feature', '3.11']
    title = 'Freeze all modules imported during startup.'
    updated_at = <Date 2021-11-24.19:01:47.807>
    user = 'https://github.com/ericsnowcurrently'

    bugs.python.org fields:

    activity = <Date 2021-11-24.19:01:47.807>
    actor = 'christian.heimes'
    assignee = 'eric.snow'
    closed = True
    closed_date = <Date 2021-10-28.19:52:39.815>
    closer = 'eric.snow'
    components = ['Parser']
    creation = <Date 2021-08-26.19:23:58.261>
    creator = 'eric.snow'
    dependencies = ['45186', '45188']
    files = []
    hgrepos = []
    issue_num = 45020
    keywords = ['patch']
    message_count = 96.0
    messages = ['400370', '400371', '400372', '400373', '400374', '400381', '400386', '400394', '400421', '400422', '400427', '400447', '400449', '400450', '400454', '400455', '400459', '400460', '400461', '400469', '400507', '400508', '400510', '400629', '400636', '400638', '400639', '400660', '400664', '400665', '400666', '400667', '400675', '400766', '400769', '400808', '400855', '400856', '401027', '401040', '401734', '401740', '401805', '401811', '401813', '401814', '401848', '401865', '401870', '401905', '401911', '401917', '401918', '401919', '401921', '401954', '401966', '401988', '402017', '402020', '402070', '402080', '402100', '402101', '402103', '402113', '402116', '402118', '402119', '402151', '402356', '402463', '402464', '402476', '402587', '402609', '402629', '402633', '402634', '402898', '402993', '403024', '403025', '403255', '403256', '403323', '403324', '404111', '404164', '404257', '404344', '405002', '405203', '405244', '406143', '406948']
    nosy_count = 25.0
    nosy_names = ['lemburg', 'gvanrossum', 'barry', 'brett.cannon', 'nascheme', 'rhettinger', 'terry.reedy', 'gregory.p.smith', 'ronaldoussoren', 'ncoghlan', 'vstinner', 'larry', 'christian.heimes', 'christian.heimes', 'christian.heimes', 'methane', 'Mark.Shannon', 'eric.snow', 'indygreg', 'lys.nikolaou', 'pablogsal', 'miss-islington', 'brandtbucher', 'BTaskaya', 'shihai1991', 'FFY00', 'santhu_reddy12']
    pr_nums = ['28107', '28320', '28335', '28344', '28345', '28346', '28375', '28380', '28392', '28398', '28410', '28538', '28554', '28583', '28590', '28635', '28664', '28665', '28655', '28940', '28997', '29755', '29755', '29755']
    priority = 'normal'
    resolution = 'fixed'
    stage = 'resolved'
    status = 'closed'
    superseder = None
    type = 'enhancement'
    url = 'https://bugs.python.org/issue45020'
    versions = ['Python 3.11']

    @ericsnowcurrently
    Member Author

    Currently we freeze the 3 main import-related modules into the python binary (along with one test module). This allows us to bootstrap the import machinery from Python modules. It also means we get better performance importing those modules.

    If we freeze modules that are likely to be used during execution then we get even better startup times. I'll be putting up a PR that does so, freezing all the modules that are imported during startup. This could also be done for any stdlib modules that are commonly imported.

    (also see bpo-45019 and faster-cpython/ideas#82)
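    For illustration, "freezing" a module amounts to compiling it and embedding the marshalled code object in the binary as static C data; the module name and source below are made up:

```python
import marshal

# Hypothetical module body; a freeze tool compiles it and emits the
# marshalled bytes as a static C array linked into the python binary.
source = "GREETING = 'Hello world!'"
code = compile(source, "<frozen demo>", "exec")
blob = marshal.dumps(code)
print(len(blob), "bytes of frozen data")

# Importing a frozen module is then essentially unmarshal + exec,
# with no filesystem access at all.
ns = {}
exec(marshal.loads(blob), ns)
print(ns["GREETING"])
```

    This is a sketch of the mechanism, not the actual freeze tooling.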

    @ericsnowcurrently ericsnowcurrently added type-feature A feature request or enhancement build The build process and cross-build 3.11 only security fixes labels Aug 26, 2021
    @ericsnowcurrently ericsnowcurrently self-assigned this Aug 26, 2021
    @ericsnowcurrently
    Member Author

    FYI, with my branch I'm getting a 15% improvement to startup for "./python -c pass".

    @ericsnowcurrently
    Member Author

    I'm aware of two potentially problematic consequences to this change:

    • making changes to those source files for the modules will not be reflected during execution until after "make" is run
    • tricks to inject hooks ASAP (e.g. coverage.py swaps the encodings module) may lose their entry point

    For the former, I'm not sure there's a way around it. We may consider the inconvenience worth it in order to get the performance benefits.

    For the latter, the obvious solution is to introduce a startup hook (e.g. on the CLI) like we've talked about doing for years. (I wasn't able to find previous discussions on that topic after a quick search.)
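    For context on the coverage.py trick: it shadows a module that is imported very early by placing a stand-in earlier on sys.path. A rough sketch with a hypothetical module name (a frozen module defeats this, since FrozenImporter runs before the path-based finder on sys.meta_path):

```python
import os
import sys
import tempfile

# Write a shadow module into a temp dir and put that dir first on
# sys.path; whoever imports "hookdemo" now runs our hook code first.
# coverage.py does the equivalent with the real "encodings" package.
shadow_dir = tempfile.mkdtemp()
with open(os.path.join(shadow_dir, "hookdemo.py"), "w") as f:
    f.write("HOOK_RAN = True\n")

sys.path.insert(0, shadow_dir)
import hookdemo  # resolved from shadow_dir

print(hookdemo.HOOK_RAN)
```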

    @malemburg
    Member

    Not sure whether you are aware, but the PyRun project I'm maintaining
    does that and goes beyond this by freezing almost the complete stdlib
    and statically linking most C extensions into a single binary:

    https://www.egenix.com/products/python/PyRun/

    Startup is indeed better, but not as much as you might think.
    You do save stat calls and can share resources across processes.

    The big time consumer is turning marshal'ed code objects back
    into Python objects, though. If that could be made faster by
    e.g. using a more efficient storage format such as one which is
    platform dependent, it'd be a much bigger win than the freezing
    approach.
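    The unmarshalling cost is easy to measure in isolation; a rough micro-benchmark (module size and loop count are arbitrary):

```python
import marshal
import time

# A module-sized code object: 2000 simple assignments.
src = "\n".join(f"x{i} = {i}" for i in range(2000))
code = compile(src, "<demo>", "exec")
blob = marshal.dumps(code)

start = time.perf_counter()
for _ in range(200):
    marshal.loads(blob)  # the work done for every cached import
elapsed = time.perf_counter() - start
print(f"200 x marshal.loads of {len(blob)} bytes: {elapsed:.4f}s")
```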

    @ericsnowcurrently
    Member Author

    The big time consumer is turning marshal'ed code objects back
    into Python objects, though. If that could be made faster by
    e.g. using a more efficient storage format such as one which is
    platform dependent, it'd be a much bigger win than the freezing
    approach.

    That's something Guido has been exploring. :)

    See: faster-cpython/ideas#84 (and others)

    @ericsnowcurrently
    Member Author

    For the latter, the obvious solution is to introduce a startup hook

    I'm not sure why I said "obvious". Sorry about that.

    @gvanrossum
    Member

    I noticed nedbat un-nosied himself. Probably he didn't realize you were
    calling him out because it's possible this would affect coverage.py?

    @gvanrossum
    Member

    The big time consumer is turning marshal'ed code objects back
    into Python objects, though. If that could be made faster by
    e.g. using a more efficient storage format such as one which is
    platform dependent, it'd be a much bigger win than the freezing
    approach.

    I've explored a couple of different approaches here (see the issue Eric linked to and a few adjacent ones) and this is a tricky issue. Marshal seems to be pretty darn efficient as a storage format, because it's very compact compared to the Python objects it creates. My final (?) proposal is creating static data structures embedded in the code that just *are* Python objects. Unfortunately on Windows the C compiler balks at this -- the C++ compiler handles it just fine, but it's not clear that we are able to statically link C++ object files into Python without depending on a lot of other C++ infrastructure. (In GCC and Clang this is apparently a language extension.)

    @ericsnowcurrently
    Member Author

@gvanrossum, @markshannon, do you recall the other issue where folks objected to that other patch, due to local changes to source files not being reflected?

Also, one thought that comes to mind is that we could ignore the frozen modules when in a dev environment (and opt in to using the frozen modules via an environment variable).

    @markshannon
    Member

    I don't recall, but...

    You can't modify any builtin modules. Freezing modules effectively makes them builtin from a user's perspective. There are plenty of modules that can't be modified:

    >>> sys.builtin_module_names
    ('_abc', '_ast', '_codecs', '_collections', '_functools', '_imp', '_io', '_locale', '_operator', '_signal', '_sre', '_stat', '_string', '_symtable', '_thread', '_tokenize', '_tracemalloc', '_warnings', '_weakref', 'atexit', 'builtins', 'errno', 'faulthandler', 'gc', 'itertools', 'marshal', 'posix', 'pwd', 'sys', 'time', 'xxsubtype')

    I don't see why adding a few more modules to that list would be a problem.

Was the objection to freezing *all* modules, not just the core ones?

    @gvanrossum
    Member

    We should ask Neil S. for the issue where Larry introduced this. That might
    have some discussion.

But if I had to guess, it's confusing that you can see *Python* source that
you can't edit (or rather, where editing doesn't get reflected in the next
Python run, unless you also recompile).

I know that occasionally in a debug session I add a print statement to a
stdlib module.

    --Guido (mobile)

    @ericsnowcurrently
    Member Author

    Neil, do you recall the story here?

    @gvanrossum
    Member

    The plot thickens. By searching my extensive GMail archives for Jeethu Rao I found an email from Sept. 14 to python-dev by Larry Hastings titled "Store startup modules as C structures for 20%+ startup speed improvement?"

    It references an issue and a PR:

    https://bugs.python.org/issue34690
    https://github.com/python/cpython/pull/9320
    

    Here's a link to the python-dev thread:

    https://mail.python.org/pipermail/python-dev/2018-September/155188.html
    

    There's a lot of discussion there. I'll try to dig through it.

    @gvanrossum
    Member

Adding Larry in case he remembers more color. (Larry: the key question here is whether some version of this (like the one I've been working on, or a simpler one that Eric has prepared) is viable, given that any time someone works on one of the frozen or deep-frozen stdlib modules, they will have to run make (with the default target) to rebuild the Python binary with the deep-frozen files.)

    (Honestly if I were working on any of those modules, I'd just comment out some lines from Eric's freeze_modules.py script and do one rebuild until I was totally satisfied with my work. Either way it's a suboptimal experience for people contributing to those modules. But we stand to gain a ~20% startup time improvement.)

    PS. The top comment links to Eric's work.

    @larryhastings
    Contributor

    Since nobody's said so in so many words (so far in this thread anyway): the prototype from Jeethu Rao in 2018 was a different technology than what Eric is doing. The "Programs/_freeze_importlib.c" Eric's playing with essentially inlines a .pyc file as C static data. The Jeethu Rao approach is more advanced: instead of serializing the objects, it stores the objects from the .pyc file as pre-initialized C static objects. So it saves the un-marshalling step, and therefore should be faster. To import the module you still need to execute the module body code object though--that seems unavoidable.

    The python-dev thread covers nearly everything I remember about this. The one thing I guess I never mentioned is that building and working with the prototype was frightful; it had both Python code and C code, and it was fragile and hard to get working. My hunch at the time was that it shouldn't be so fragile; it should be possible to write the converter in Python: read in .pyc file, generate .c file. It might have to make assumptions about the internal structure of the CPython objects it instantiates as C static data, but since we'd ship the tool with CPython this should be only a minor maintenance issue.

    In experimenting with the prototype, I observed that simply calling stat() to ensure the frozen .py file hadn't changed on disk lost us about half the performance win from this approach. I'm not much of a systems programmer, but I wonder if there are (system-proprietary?) library calls one could make to get the stat info for all files in a single directory all at once that might be faster overall. (Of course, caching this information at startup might make for a crappy experience for people who edit Lib/*.py files while the interpreter is running.)
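    One such call already exists in the stdlib: os.scandir() (PEP 471) reads a directory and caches per-entry metadata from the directory listing itself, avoiding a separate stat() per file on most platforms:

```python
import os

# Scan the stdlib directory once; DirEntry objects carry cached
# metadata, so is_file()/is_dir() usually need no extra syscall
# (and on Windows, stat() results come from the scan too).
stdlib_dir = os.path.dirname(os.__file__)
entries = list(os.scandir(stdlib_dir))
print(len(entries), "entries")
for entry in entries[:3]:
    print(entry.name, entry.is_file())
```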

    One more observation about the prototype: it doesn't know how to deal with any mutable types. marshal.c can deal with list, dict, and set. Does this matter? ISTM the tree of objects under a code object will never have a reference to one of these mutable objects, so it's probably already fine.

    Not sure what else I can tell you. It gave us a measurable improvement in startup time, but it seemed fragile, and it was annoying to work with/on, so after hacking on it for a week (at the 2018 core dev sprint in Redmond WA) I put it aside and moved on to other projects.

    @larryhastings
    Contributor

    There should be a boolean flag that enables/disables cached copies of .py files from Lib/. You should be able to turn it off with either an environment variable or a command-line option, and when it's off it skips all the internal cached stuff and uses the normal .py / .pyc machinery.
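    This switch is what the PRs above ended up adding as `-X frozen_modules=[on|off]` (see #28320 and #28940 in the PR list). A quick way to observe it, assuming a 3.11+ build (older interpreters simply store the unknown -X option without error):

```python
import subprocess
import sys

# With frozen modules off, os is loaded from its .py file, so its
# spec's origin is a real filesystem path rather than "frozen".
out = subprocess.check_output(
    [sys.executable, "-X", "frozen_modules=off",
     "-c", "import os; print(os.__spec__.origin)"],
    text=True,
).strip()
print(out)
```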

    With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

    As for changes to the build process: the most analogous thing we have is probably Argument Clinic. For what it's worth, Clinic hasn't been very well integrated into the CPython build process. There's a pseudotarget that runs it for you in the Makefile, but it's only ever run manually, and I'm not sure there's *any* build automation for Windows developers. AFAIK it hasn't really been a problem. But then I'm not sure this is a very good analogy--the workflow for making Clinic changes is very different from people hacking on Lib/*.py.

    It might be sensible to add a mechanism that checks whether or not the pre-cached modules are current. Store a hash for each cached module and check that they all match. This could then be part of the release process, run from a GitHub hook, etc.

    @gvanrossum
    Member

    Since nobody's said so in so many words (so far in this thread anyway): the prototype from Jeethu Rao in 2018 was a different technology than what Eric is doing. The "Programs/_freeze_importlib.c" Eric's playing with essentially inlines a .pyc file as C static data. The Jeethu Rao approach is more advanced: instead of serializing the objects, it stores the objects from the .pyc file as pre-initialized C static objects. So it saves the un-marshalling step, and therefore should be faster. To import the module you still need to execute the module body code object though--that seems unavoidable.

    Yes, I know. We're discussing two separate ideas -- Eric's approach, which is doing the same we're doing for importlib for more stdlib modules; and "my" approach, dubbed "deep-freeze", which is similar to Jeethu's approach (details in faster-cpython/ideas#84).

    What the two approaches have in common is that they require rebuilding the python binary whenever you edit any of the changed modules. I heard somewhere (I'm sorry, I honestly don't recall who said it first, possibly Eric himself) that Jeethu's approach was rejected because of that.

FWIW in my attempts to time this, it looks like the perf benefits of Eric's approach are close to those of deep-freezing. And deep-freezing causes much more bloat of the source code and of the resulting binary. (At runtime the extra binary size is made up for by matching heap savings, but to some people binary size is important too.)

    The python-dev thread covers nearly everything I remember about this. The one thing I guess I never mentioned is that building and working with the prototype was frightful; it had both Python code and C code, and it was fragile and hard to get working. My hunch at the time was that it shouldn't be so fragile; it should be possible to write the converter in Python: read in .pyc file, generate .c file. It might have to make assumptions about the internal structure of the CPython objects it instantiates as C static data, but since we'd ship the tool with CPython this should be only a minor maintenance issue.

    Deep-freezing doesn't seem frightful to work with, to me at least. :-) Maybe the foundational work by Eric (e.g. generating sections of Makefile.pre.in) has helped.

    I don't understand entirely why Jeethu's prototype had part written in C. I never ran it so I don't know what the generated code looked like, but I have a feeling that for objects that don't reference other objects, it would generate a byte array containing the exact contents of the object structure (which it would get from constructing the object in memory and copying the bytes) which was then put together with the object header (containing the refcount and type) and cast to (PyObject *).

    In contrast, for deep-freeze I just reverse engineered what the structures look like and wrote a Python script to generate C code for an initialized instance of those structures. You can look at some examples here: https://github.com/gvanrossum/cpython/blob/codegen/Python/codegen__collections_abc.c . It's verbose but the C compiler handles it just fine (C compilers have evolved to handle *very* large generated programs).

    In experimenting with the prototype, I observed that simply calling stat() to ensure the frozen .py file hadn't changed on disk lost us about half the performance win from this approach. I'm not much of a systems programmer, but I wonder if there are (system-proprietary?) library calls one could make to get the stat info for all files in a single directory all at once that might be faster overall. (Of course, caching this information at startup might make for a crappy experience for people who edit Lib/*.py files while the interpreter is running.)

    I think the only solution here was hinted at in the python-dev thread from 2018: have a command-line flag to turn it on or off (e.g. -X deepfreeze=1/0) and have a policy for what the default for that flag should be (e.g. on by default in production builds, off by default in developer builds -- anything that doesn't use --enable-optimizations).

    One more observation about the prototype: it doesn't know how to deal with any mutable types. marshal.c can deal with list, dict, and set. Does this matter? ISTM the tree of objects under a code object will never have a reference to one of these mutable objects, so it's probably already fine.

Correct, marshal supports things that you will never see in a code object. Perhaps the reason is that when marshal was invented, it wasn't so clear that code objects should be immutable -- that realization came later, when Greg Stein proposed making them ROM-able. That didn't work out, but the notion that code objects should be strictly immutable (to the python user, at least) was born and is now ingrained.

    Not sure what else I can tell you. It gave us a measurable improvement in startup time, but it seemed fragile, and it was annoying to work with/on, so after hacking on it for a week (at the 2018 core dev sprint in Redmond WA) I put it aside and moved on to other projects.

    I'm not so quick to give up. I do believe I have seen similar startup time improvements. But Eric's version (i.e. this issue) is nearly as good, and the binary bloat is much less -- marshal is way more compact than in-memory objects.

    (Second message)

    There should be a boolean flag that enables/disables cached copies of .py files from Lib/. You should be able to turn it off with either an environment variable or a command-line option, and when it's off it skips all the internal cached stuff and uses the normal .py / .pyc machinery.

    Yeah.

    With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

*All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive. In fact, Eric's approach freezes everything in the encodings package, which turns out to be a lot of files and a lot of code (lots of simple data tables expressed in code), and I found that for basic startup time, it's best not to deep-freeze the encodings module except for __init__.py, aliases.py and utf_8.py.

    As for changes to the build process: the most analogous thing we have is probably Argument Clinic. For what it's worth, Clinic hasn't been very well integrated into the CPython build process. There's a pseudotarget that runs it for you in the Makefile, but it's only ever run manually, and I'm not sure there's *any* build automation for Windows developers. AFAIK it hasn't really been a problem. But then I'm not sure this is a very good analogy--the workflow for making Clinic changes is very different from people hacking on Lib/*.py.

    I think we've got reasonably good automation for both Eric's approach and the deep-freeze approach -- all you need to do is run "make" when you've edited one of the (deep-)frozen modules.

    It might be sensible to add a mechanism that checks whether or not the pre-cached modules are current. Store a hash for each cached module and check that they all match. This could then be part of the release process, run from a GitHub hook, etc.

    I think the automation that Eric developed is already good enough. (He even generates Windows project files.) See #27980 .

    @larryhastings
    Contributor

    What the two approaches have in common is that they require rebuilding the python binary whenever you edit any of the changed modules. I heard somewhere (I'm sorry, I honestly don't recall who said it first, possibly Eric himself) that Jeethu's approach was rejected because of that.

    My dim recollection was that Jeethu's approach wasn't explicitly rejected, more that the community was more "conflicted" than "strongly interested", so I lost interest, and nobody else followed up.

    I don't understand entirely why Jeethu's prototype had part written in C.

    My theory: it's easier to serialize C objects from C. It's maybe even slightly helpful? But it made building a pain. And yeah it just doesn't seem necessary. The code generator will be tied to the C representation no matter how you do it, so you might as well write it in a nice high-level language.

    I never ran it so I don't know what the generated code looked like, [...]

    You can see an example of Jeethu's serialized objects here:

    https://raw.githubusercontent.com/python/cpython/267c93d61db9292921229fafd895b5ff9740b759/Python/frozenmodules.c

    Yours is generally more readable because you're using the new named structure initializers syntax. Though Jeethu's code is using some symbolic constants (e.g. PyUnicode_1BYTE_KIND) where you're just printing the actual value.

    > With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

*All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive.

I did say "all the .py files automatically read in at startup". In current trunk, there are 32 modules in sys.modules at startup (when run non-interactively), and by my count 13 of those are written in Python.

    If we go with Eric's approach, that means we'd turn those .pyc files into static data. My quick experiment suggests that'd be less than 300k. On my 64-bit Linux system, a default build of current trunk (configure && make -j) yields a 23mb python executable, and a 44mb libpython3.11.a. If I build without -g, they are 4.3mb and 7mb respectively. So this speedup would add another 2.5% to the size of a stripped build.
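    Those counts are easy to re-check on any build; a fresh non-interactive interpreter can report what it imported at startup (numbers vary by version and platform):

```python
import subprocess
import sys

# Ask a clean child interpreter which modules exist right after startup.
out = subprocess.check_output(
    [sys.executable, "-c",
     "import sys; print(' '.join(sorted(sys.modules)))"],
    text=True,
)
mods = out.split()
print(len(mods), "modules in sys.modules at startup")
```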

    If even that 300k was a concern, the marshal approach would also permit us to compile all the deep-frozen modules into a separate shared library and unload it after we're done.

    I don't know what the runtime impact of "deep-freeze" is, but it seems like it'd be pretty minimal. You're essentially storing these objects in C static data instead of the heap, which should be about the same. Maybe it'd mean the code objects for the module bodies would stick around longer than they otherwise would? But that doesn't seem like it'd add that much overhead.

    It's interesting to think about applying these techniques to the entire standard library, but as you suggest that would probably be wasteful. On the other hand: if we made a viable tool that could consume some arbitrary set of .py files and produce a C file, and said C file could then be compiled into a shared library, end users could enjoy this speedup over the subset of the standard library their program used, and perhaps even their own source tree(s).

    @nascheme
    Member

    [Larry]

    The one thing I guess I never mentioned is that building and working with the
    prototype was frightful; it had both Python code and C code, and it was
    fragile and hard to get working.

    I took Larry's PR and did a fair amount of cleanup on it to make the build less
    painful and fragile. My branch is fairly easy to re-build. The major downsides remaining
    are that you couldn't update .py files and have them used (static ones
    take priority) and the generated C code is quite large.

    I didn't make any attempt to work on the serializer, other than to make it work
    with an alpha version of Python 3.10.

    https://github.com/nascheme/cpython/tree/static_frozen

    It was good enough to pass nearly(?) all tests and I did some profiling. It helped reduce startup time quite a bit.

    @malemburg
    Member

    On 28.08.2021 06:06, Guido van Rossum wrote:

    > With that in place, it'd be great to pre-cache all the .py files automatically read in at startup.

*All* the .py files? I think the binary bloat caused by deep-freezing the entire stdlib would be excessive. In fact, Eric's approach freezes everything in the encodings package, which turns out to be a lot of files and a lot of code (lots of simple data tables expressed in code), and I found that for basic startup time, it's best not to deep-freeze the encodings module except for __init__.py, aliases.py and utf_8.py.

    Eric's approach, as I understand it, is pretty much what PyRun does.
    It freezes almost the entire stdlib. The main aim was to save space
    and create a Python runtime with very few files for easy installation and
    shipment of products written in Python.

    For Python 3.8 (I haven't ported it to more recent Python versions yet),
    the uncompressed stripped binary is 15MB. UPX compressed, it's only 5MB:

    -rwxr-xr-x 1 lemburg lemburg 15M May 19 15:26 pyrun3.8
    -rwxr-xr-x 1 lemburg lemburg 32M Aug 26 2020 pyrun3.8-debug
    -rwxr-xr-x 1 lemburg lemburg 5.0M May 19 15:26 pyrun3.8-upx

    There's no bloat, since you don't need the .py/.pyc files for the stdlib
    anymore. In fact, you save quite a bit of disk space compared to a
    full Python installation and additionally benefit from the memory
    mapping the OS does for sharing access to the marshal'ed byte code
    between processes.

    That said, some things don't work with such an approach, e.g.
    a few packages include additional data files which they expect to
    find on disk. Since those are not available anymore, they fail.

    For PyRun I have patched some of those packages to include the
    data in form of Python modules instead, so that it gets frozen
    as well, e.g. the Python grammar files.

    Whether this is a good approach for Python in general is a different
    question, though. PyRun is created on top of the existing released
    Python distribution, so it doesn't optimize for being able to
    work with the frozen code. In fact, early versions did not
    even have a REPL, since the main point was to run a
    single released app.

    @indygreg
    Mannequin

    indygreg mannequin commented Aug 28, 2021

When I investigated freezing the standard library for PyOxidizer, I ran into a rash of problems. The frozen importer doesn't behave like PathFinder. It doesn't (didn't?) set some common module-level attributes that are documented by the importer "specification" to be set, and this failed a handful of tests and led to runtime issues or breakage in 3rd party packages (such as random packages looking for a __file__ on a common stdlib module).

Also, when I last looked at the CPython source, the frozen importer performed a linear scan of its C array, calling strcmp() on each entry until it found what it was looking for. So adding hundreds of modules could introduce enough overhead to justify a more efficient lookup algorithm. (PyOxidizer uses Rust's HashMap to index modules by name.)

    I fully support more aggressive usage of frozen modules in the standard library to speed up interpreter startup. However, if you want to ship this as enabled by default, from my experience with PyOxidizer, I highly recommend:

    @indygreg

    indygreg mannequin commented Aug 28, 2021

    Oh, PyOxidizer also ran into more general issues with the frozen importer in that it broke various importlib APIs. e.g. because the frozen importer only supports bytecode, you can't use .__loader__.get_source() to obtain the source of a module. This makes tracebacks more opaque and breaks legitimate API consumers relying on these importlib interfaces.
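    The asymmetry is easy to observe from Python (this assumes a normal
    CPython where json is loaded from disk rather than frozen):

```python
import importlib.machinery
import importlib.util

# FrozenImporter has no source to hand back: get_source() is defined,
# but documented to always return None.
frozen_src = importlib.machinery.FrozenImporter.get_source("_frozen_importlib")

# A module found on disk through PathFinder can return its source text
# via the same API.
disk_loader = importlib.util.find_spec("json").loader
disk_src = disk_loader.get_source("json")

print(frozen_src)             # None
print(disk_src is not None)   # True
```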

    The fundamental limitations with the frozen importer are why I implemented my own meta path importer (implemented in pure Rust), which is more fully featured, like the PathFinder importer that most people rely on today. That importer is available on PyPI (https://pypi.org/project/oxidized-importer/) and has its own API to facilitate PyOxidizer-like functionality (https://pyoxidizer.readthedocs.io/en/stable/oxidized_importer.html) if anyone wants to experiment with it.
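    For readers unfamiliar with the interface such importers implement,
    here is a deliberately tiny in-memory meta path importer (a toy, far
    simpler than oxidized_importer; all names invented) that, unlike the
    frozen importer, can serve get_source():

```python
import importlib.abc
import importlib.util
import sys

class InMemoryImporter(importlib.abc.MetaPathFinder, importlib.abc.Loader):
    """Toy meta path importer serving modules from a dict of source text."""

    def __init__(self, sources):
        self._sources = sources  # {module_name: source_string}

    def find_spec(self, fullname, path=None, target=None):
        if fullname not in self._sources:
            return None
        return importlib.util.spec_from_loader(fullname, self)

    def create_module(self, spec):
        return None  # fall back to default module creation

    def exec_module(self, module):
        exec(self._sources[module.__name__], module.__dict__)

    def get_source(self, fullname):
        # Unlike the frozen importer, source stays available for
        # tracebacks, inspect.getsource(), etc.
        return self._sources[fullname]

importer = InMemoryImporter({"toymod": "ANSWER = 42\n"})
sys.meta_path.insert(0, importer)

import toymod
print(toymod.ANSWER)  # 42
```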

    @softsolsolutions softsolsolutions mannequin added the "3.9 only security fixes" label and removed the "build" and "3.11 only security fixes" labels on Oct 1, 2021
    @gvanrossum
    Member

    @santhu_reddy12, why did you assign this to the Parser category? IMO this issue is clearly in the Build category. (We haven't met, I assume you have triage permissions?)

    @gvanrossum
    Member

    And it's most definitely 3.11, not 3.9. (Did you mean to change a different issue?)

    @gvanrossum gvanrossum added the "3.11 only security fixes" label and removed the "3.9 only security fixes" label on Oct 1, 2021
    @ericsnowcurrently
    Member Author

    New changeset 08285d5 by Eric Snow in branch 'main':
    bpo-45020: Identify which frozen modules are actually aliases. (gh-28655)
    08285d5

    @gvanrossum
    Member

    Whoa. os.path is not always an alias for posixpath, is it?

    @larryhastings
    Contributor

    Nope. On Windows, os.path is "ntpath".
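    This is straightforward to check at runtime: os binds os.path to
    whichever platform path module it selected at import time.

```python
import os
import posixpath

# os.path is posixpath on Unix-like systems and ntpath on Windows,
# so it is only an alias for posixpath on POSIX platforms.
print(os.path.__name__)
print((os.path is posixpath) == (os.name == "posix"))  # True on both platforms
```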

    @ericsnowcurrently
    Member Author

    On Tue, Oct 5, 2021 at 11:31 AM Guido van Rossum <report@bugs.python.org> wrote:

    Whoa. os.path is not always an alias for posixpath, is it?

    Steve brought this to my attention a couple weeks ago. Bottom line:
    the frozen module entry is only there for checks, not for actual
    import, but should probably be removed regardless. See
    https://bugs.python.org/issue45272.

    @ericsnowcurrently
    Member Author

    New changeset b9cdd0f by Eric Snow in branch 'main':
    bpo-45020: Default to using frozen modules unless running from source tree. (gh-28940)
    b9cdd0f

    @gpshead
    Member

    gpshead commented Oct 18, 2021

    could changes related to this be the cause of https://bugs.python.org/issue45506 ?

    Out-of-tree builds on main usually cannot pass key tests today; they often hang or blow up with strange exceptions.

    @gvanrossum
    Member

    Is #73126 only for UNIX?

    I built on Windows with default options (PCbuild\build.bat) and it looks like the frozen modules are used by default even though I am running in the source directory. (I put a printf() call in unmarshal_frozen_code().)

    I also put a printf() in is_dev_env() and found that it returns 0 on this check:

        /* If dirname() is the same for both then it is a dev build. */
        if (len != _Py_find_basename(stdlib)) {
            return 0;
        }

    I assume that's because the binary (in my case at least) is at PCbuild\amd64\python.exe which is not the same as my current directory (which is the repo root).
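    The real C code compares basename offsets inside a single buffer;
    the sketch below (Python instead of C, invented names) only captures
    the gist of the "same dirname" heuristic and why it reports false
    for the Windows layout, where python.exe lives under PCbuild\amd64
    rather than next to the stdlib in the checkout root.

```python
import ntpath
import posixpath

def same_dirname(pathmod, path_a, path_b):
    # Rough model of the check traced above: treat the layout as an
    # in-tree (dev) build only when both paths share the same dirname().
    return pathmod.dirname(path_a) == pathmod.dirname(path_b)

# Unix in-tree build: the binary sits next to Lib/ in the checkout.
print(same_dirname(posixpath, "/repo/python", "/repo/Lib"))  # True

# Windows build: the binary is two directories deeper, so the
# dirnames differ and the heuristic says "not a dev build".
print(same_dirname(ntpath, r"C:\repo\PCbuild\amd64\python.exe",
                   r"C:\repo\Lib"))  # False
```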

    @ericsnowcurrently
    Member Author

    On Mon, Oct 18, 2021 at 7:14 PM Guido van Rossum <report@bugs.python.org> wrote:

    Is #73126 only for UNIX?

    It wasn't meant to be. :(

    I built on Windows with default options (PCbuild\build.bat) and it looks like the frozen modules are used by default even though I am running in the source directory. (I put a printf() call in unmarshal_frozen_code().)

    I'll look into this.

    @ericsnowcurrently
    Member Author

    New changeset 6afb285 by Eric Snow in branch 'main':
    bpo-45020: Add tests for the -X "frozen_modules" option. (gh-28997)
    6afb285
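    For reference, the flag's presence can be observed through
    sys._xoptions (any modern CPython records unrecognized -X options
    there; only 3.11+ actually acts on frozen_modules):

```python
import subprocess
import sys

# Launch a child interpreter with the option and read it back out of
# sys._xoptions in the child.
out = subprocess.run(
    [sys.executable, "-X", "frozen_modules=off", "-c",
     "import sys; print(sys._xoptions.get('frozen_modules'))"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(out)  # off
```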

    @ericsnowcurrently
    Member Author

    On Mon, Oct 18, 2021 at 7:14 PM Guido van Rossum <report@bugs.python.org> wrote:

    Is #73126 only for UNIX?
    I built on Windows with default options (PCbuild\build.bat) and it looks like the frozen modules are used by default even though I am running in the source directory. (I put a printf() call in unmarshal_frozen_code().)

    FYI, I opened https://bugs.python.org/issue45651 for sorting this out.

    @ericsnowcurrently
    Member Author

    I consider this done. There is some lingering follow-up work, for which I've created a number of issues:

    @gvanrossum
    Member

    New changeset 1cbaa50 by Guido van Rossum in branch 'main':
    bpo-45696: Deep-freeze selected modules (GH-29118)
    1cbaa50

    @tiran
    Member

    tiran commented Nov 24, 2021

    New changeset 5c4b19e by Christian Heimes in branch 'main':
    bpo-45020: Fix strict-prototypes warning (GH-29755)
    5c4b19e
