
Author jtaylor
Recipients davin, jtaylor, neologix
Date 2015-02-27.17:40:44
Content
Certainly, for anything that needs fine-grained control over affinity, psutil is the best choice, but I'm not arguing for implementing full process control in Python. I only want Python to provide the number of cores one can actually work on, so programs can make the best use of the available resources.

If you do a GitHub code search for cpu_count in Python files, you find about 18,000 uses; randomly sampling a few, every single one used it to determine the number of CPUs for starting worker jobs to get the best performance. Every one of these will oversubscribe a host that restricts the CPUs a process can use. This is an issue especially for the increasingly popular use of containers instead of full virtual machines.
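
To illustrate the mismatch, here is a minimal sketch (Linux-only, since os.sched_getaffinity is not available everywhere); run it under e.g. `taskset -c 0,1` and cpu_count() will still report every CPU in the machine:

    import multiprocessing
    import os

    # Number of CPUs in the system -- what nearly all of those 18,000 uses query:
    print("cpu_count():        ", multiprocessing.cpu_count())

    # Number of CPUs this process is actually allowed to run on (Linux, 3.3+):
    print("usable via affinity:", len(os.sched_getaffinity(0)))

Starting cpu_count() workers in that situation oversubscribes the two cores the process was actually given.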

As a documentation update, I would like a note saying that this number is the number of (online) CPUs in the system and may not be the number of CPUs the process can actually use. Maybe with a pointer to len(psutil.Process().cpu_affinity()) as a reference for how to obtain that number.
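
Something along those lines, assuming a reasonably recent psutil (2.x renamed the method; if I remember right, older releases spelled it get_cpu_affinity()):

    import psutil

    # CPUs the current process is allowed to run on, as reported by psutil:
    usable = len(psutil.Process().cpu_affinity())
    print("CPUs available to this process:", usable)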

There would be no dependence on coreutils; I only mentioned it because you can look up there which OS API you need to use to get the number (e.g. sched_getaffinity). It is trivial API use and should not be a licensing issue; one could also look at the code in psutil, which most likely looks very similar.
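
To show how trivial the API use is, a rough ctypes sketch for Linux/glibc (the 128-byte buffer mirrors a default cpu_set_t; that size is an assumption, not queried from the system):

    import ctypes

    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # glibc's cpu_set_t is 1024 bits (128 bytes) by default.
    mask = (ctypes.c_ulong * 16)()

    # pid 0 means the calling process.
    if libc.sched_getaffinity(0, ctypes.sizeof(mask), ctypes.byref(mask)) != 0:
        raise OSError(ctypes.get_errno(), "sched_getaffinity failed")

    # Each set bit is one CPU the process may run on.
    print("usable CPUs:", sum(bin(word).count("1") for word in mask))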