Message236805
Certainly, for anything that needs fine-grained control over affinity psutil is the best choice, but I'm not arguing for implementing full process control in Python. I only want Python to provide the number of cores a process can actually work on, so programs can make the best use of the available resources.
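For illustration, a minimal sketch of reading the affinity-limited CPU count with psutil (this assumes psutil >= 2.0, where the call is spelled `cpu_affinity()`; older releases used `get_affinity()`):

```python
import psutil

# The list of CPUs the current process is allowed to run on; its length
# is the number of usable cores, which can be smaller than the number
# of CPUs present in the machine (e.g. inside a container or cpuset).
usable = len(psutil.Process().cpu_affinity())
print(usable)
```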
If you search Python files on GitHub for cpu_count you find about 18,000 uses; randomly sampling a few, every single one used it to determine the number of CPUs for starting worker jobs to get the best performance. Every one of these will oversubscribe a host that restricts the CPUs a process can use. This is an issue especially for the increasingly popular use of containers instead of full virtual machines.
As a documentation update I would like a note saying that this number is the number of (online) CPUs in the system and may not be the number of CPUs the process can actually use. Maybe with a link to len(psutil.Process.get_affinity()) as a reference on how to obtain that number.
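As a sketch of the distinction such a note would document: on Linux the stdlib itself already exposes the affinity mask via os.sched_getaffinity (added in Python 3.3), so the two numbers can be compared directly:

```python
import os

# Number of (online) CPUs in the whole system -- what cpu_count() reports.
total = os.cpu_count()

# Number of CPUs this process may actually run on (Linux only).
# The argument 0 means "the calling process".
usable = len(os.sched_getaffinity(0))

# Inside a container or under taskset, usable can be much smaller than total;
# sizing a worker pool by `total` then oversubscribes the host.
print(total, usable)
```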
There would be no dependence on coreutils; I only mentioned it because you can look up there which OS API to use to get the number (e.g. sched_getaffinity). It is trivial API use and should not be a licensing issue; one could also look at the code of psutil, which most likely looks very similar. |
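To show how trivial the API use is, here is a ctypes sketch of calling sched_getaffinity(2) directly; the cpu_set_t layout (a fixed 1024-bit mask of unsigned longs) is an assumption about glibc, not something stated in this message:

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)

# Assumed glibc layout: cpu_set_t is a fixed 1024-bit bitmask.
CPU_SETSIZE = 1024
NCPUBITS = 8 * ctypes.sizeof(ctypes.c_ulong)

class cpu_set_t(ctypes.Structure):
    _fields_ = [("bits", ctypes.c_ulong * (CPU_SETSIZE // NCPUBITS))]

def usable_cpus():
    """Number of CPUs the calling process is allowed to run on (Linux only)."""
    mask = cpu_set_t()
    # pid 0 means "the calling process"
    if libc.sched_getaffinity(0, ctypes.sizeof(mask), ctypes.byref(mask)) != 0:
        raise OSError(ctypes.get_errno(), "sched_getaffinity failed")
    # Count the set bits in the returned affinity mask.
    return sum(bin(word).count("1") for word in mask.bits)

print(usable_cpus())
```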

Date                | User    | Action | Args
--------------------|---------|--------|-----
2015-02-27 17:40:44 | jtaylor | set    | recipients: + jtaylor, neologix, davin
2015-02-27 17:40:44 | jtaylor | set    | messageid: <1425058844.55.0.923611260591.issue23530@psf.upfronthosting.co.za>
2015-02-27 17:40:44 | jtaylor | link   | issue23530 messages
2015-02-27 17:40:44 | jtaylor | create |