Author methane
Recipients brett.cannon, dino.viehland, eric.snow, methane, serhiy.storchaka, skrah
Date 2019-06-04.00:30:59
On Tue, Jun 4, 2019 at 8:45 AM Dino Viehland <> wrote:
> The 20MB of savings is actually the amount of byte code that exists in the IG code base.

Wait, are you expecting a 100% saving on co_code objects — does co_code
take zero bytes?  You said later that "it just takes a byte code w/ 16
opcodes before it breaks even", so I believe you understand there is
overhead.
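For context, the raw bytecode size under discussion can be measured directly. A minimal sketch (the helper name and the sample source are mine, not from this thread) that sums `len(co_code)` over a module's code object and everything nested in it:

```python
import types

def co_code_bytes(code: types.CodeType) -> int:
    # Recursively sum len(co_code) over a code object and all code
    # objects nested in its constants (functions, classes, comprehensions).
    return len(code.co_code) + sum(
        co_code_bytes(c) for c in code.co_consts if isinstance(c, types.CodeType)
    )

# Hypothetical module source to measure.
src = "def f(x):\n    return x * 2\n\ndef g(y):\n    return f(y) + 1\n"
module_code = compile(src, "<example>", "exec")
print(co_code_bytes(module_code), "bytes of raw bytecode")
```

Running this over every module an application imports would give the real co_code footprint, against which any per-code-object overhead has to be weighed.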

>  I was just measuring the web site code, and not the other various Python code in the process (e.g. no std lib code, no 3rd party libraries, etc...).  The IG code base is pretty monolithic and starting up the site requires about half of the code to get imported.  So I think the 20MB per process is a pretty realistic number.

How many MB does your application take?  Saving 20MB out of 200MB is
attractive; 20MB out of 20GB is not.  The absolute number is not what
matters here.  You should use the ratio.

> It's certainly true that the byte code isn't the #1 source of memory here (the code objects themselves are pretty big), but in the serialized state it ends up representing 25% of the serialized data.

"25% of the serialized data" is misleading information.
You are trying to reduce memory usage, not serialized data size.
You should use ratio of total RAM usage, not serialized data size.
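That ratio can be measured in-process. A minimal sketch, assuming Linux (where `ru_maxrss` is in kilobytes; on macOS it is in bytes) and using the 20MB figure from this thread purely as a hypothetical input:

```python
import resource

def rss_bytes() -> int:
    # Peak resident set size of this process.
    # ru_maxrss is kilobytes on Linux (bytes on macOS); Linux assumed here.
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss * 1024

# Hypothetical: the 20 MB saving claimed in this thread.
saved = 20 * 1024 * 1024
total = rss_bytes()
ratio = saved / total
print(f"claimed saving is {ratio:.1%} of {total / 2**20:.1f} MiB peak RSS")
```

Measured after the application has fully started, this gives the ratio that actually matters, rather than a share of the serialized data.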

> I can't make any promises about open sourcing the import system, but I can certainly look into that as well.

Open sourcing is not necessary.  Try it yourself and share the real
numbers instead of an estimate.
How much RSS does Python use, and how much of it can this system
actually reduce?

Inada Naoki  <>
Date User Action Args
2019-06-04 00:31:00  methane  set recipients: + methane, brett.cannon, dino.viehland, skrah, eric.snow, serhiy.storchaka
2019-06-04 00:31:00  methane  link issue36839 messages
2019-06-04 00:30:59  methane  create