
Author serhiy.storchaka
Recipients serhiy.storchaka, vstinner
Date 2017-10-04.19:22:19
SpamBayes Score -1.0
Marked as misclassified Yes
Message-id <1507144939.6.0.213398074469.issue31695@psf.upfronthosting.co.za>
In-reply-to
Content
I think that the process of running bigmem tests should be improved.

1. Every bigmem test should run in a separate process, so that its effect on memory fragmentation doesn't affect other tests.

2. Only one bigmem test should run at a time. Otherwise several parallel bigmem tests can exhaust RAM and lead to swapping. In multiprocessing mode, the process running a bigmem test should signal the other processes, wait until they have finished their current tests and paused, run the bigmem test, and then signal the other processes that they can continue.

3. A hard limit on addressed memory should be set for every bigmem test, according to its declared requirements. It is better for one test to crash and report the failure than to hang the machine in swapping.

4. All other tests should be run with the limit set to 2 GiB (or 1 GiB?). This will help find memory-consuming tests that should be converted to bigmem tests.

5. The maxrss of the process running a bigmem test should be reported in verbose mode after the test finishes.
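For point 1, a minimal sketch of the per-test isolation (the helper name is mine, not regrtest's): run each test target in a fresh interpreter, so any heap fragmentation it causes dies with the child process.

```python
# Hypothetical helper: run a single test in its own interpreter so its
# memory fragmentation cannot affect tests run afterwards in the parent.
import subprocess
import sys

def run_test_isolated(test_name):
    """Run one unittest target in a child process; return (passed, output)."""
    proc = subprocess.run(
        [sys.executable, "-m", "unittest", test_name],
        capture_output=True,
        text=True,
    )
    return proc.returncode == 0, proc.stdout + proc.stderr
```

The parent only inspects the child's exit code and captured output, so nothing the test allocates survives it.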
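The pause protocol in point 2 could be sketched with a shared condition variable (all names here are mine; this is not existing regrtest code): regular workers check in and out around each test, and a bigmem worker raises a flag, waits until no regular test is mid-flight, runs alone, then lets the others resume.

```python
# Sketch of the "pause, run bigmem alone, resume" coordination protocol.
import multiprocessing as mp

class TestGate:
    def __init__(self):
        self.cond = mp.Condition()
        self.running = mp.Value('i', 0)        # regular tests currently executing
        self.bigmem_pending = mp.Value('i', 0) # bigmem tests waiting to run

    def start_regular(self):
        with self.cond:
            while self.bigmem_pending.value:   # pause while a bigmem test wants RAM
                self.cond.wait()
            self.running.value += 1

    def finish_regular(self):
        with self.cond:
            self.running.value -= 1
            self.cond.notify_all()

    def run_bigmem(self, test):
        with self.cond:
            self.bigmem_pending.value += 1     # signal others to pause after current test
            while self.running.value:          # wait until they have all paused
                self.cond.wait()
        try:
            test()                             # runs with no other test competing for RAM
        finally:
            with self.cond:
                self.bigmem_pending.value -= 1
                self.cond.notify_all()         # let regular workers continue
```

Each worker process would wrap every regular test in `start_regular()`/`finish_regular()`, and route bigmem tests through `run_bigmem()`.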
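The hard limit in point 3 maps naturally onto `RLIMIT_AS` (Unix-only); a sketch, assuming the child process applies the limit before the test body runs, with an arbitrary headroom I chose for illustration:

```python
# Sketch (Unix-only): cap the address space at the test's declared memory
# requirement plus headroom, so an over-allocating test gets a MemoryError
# (reported as a failure) instead of driving the machine into swap.
import resource

def apply_bigmem_limit(declared_bytes, headroom=256 * 1024 * 1024):
    # Call in the child process, before the bigmem test starts.
    limit = declared_bytes + headroom
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
```

Point 4 is then the same call with a fixed 2 GiB (or 1 GiB) limit applied to every non-bigmem test process.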
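For point 5, once the bigmem test runs in a child process, the parent can read the peak RSS of its (already waited-on) children from `getrusage`; note `ru_maxrss` is in KiB on Linux but bytes on macOS. A sketch:

```python
# Sketch: report peak RSS of waited-on child processes, normalized to MiB.
import resource
import sys

def children_peak_rss_mib():
    peak = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
    if sys.platform == "darwin":
        peak //= 1024          # macOS reports bytes; normalize to KiB
    return peak // 1024        # KiB -> MiB
```

In verbose mode the runner would print this value right after the bigmem child exits, e.g. `print(f"peak RSS: {children_peak_rss_mib()} MiB")`.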
History
Date                 User              Action  Args
2017-10-04 19:22:19  serhiy.storchaka  set     recipients: + serhiy.storchaka, vstinner
2017-10-04 19:22:19  serhiy.storchaka  set     messageid: <1507144939.6.0.213398074469.issue31695@psf.upfronthosting.co.za>
2017-10-04 19:22:19  serhiy.storchaka  link    issue31695 messages
2017-10-04 19:22:19  serhiy.storchaka  create