This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in the Python Developer's Guide.

classification
Title: Expose sched.h functions
Type: enhancement
Stage: resolved
Components: Extension Modules
Versions: Python 3.3

process
Status: closed
Resolution: fixed
Dependencies:
Superseder:
Assigned To:
Nosy List: amaury.forgeotdarc, anacrolix, benjamin.peterson, georg.brandl, giampaolo.rodola, neologix, pitrou, python-dev, rosslagerwall, serhiy.storchaka, vstinner
Priority: release blocker
Keywords: patch

Created on 2011-07-29 17:09 by benjamin.peterson, last changed 2022-04-11 14:57 by admin. This issue is now closed.

Files
File name          Uploaded                              Description
sched_stuff.patch  benjamin.peterson, 2011-07-29 17:09
sched_stuff.patch  benjamin.peterson, 2011-07-29 23:37   more ifdefs
sched_stuff.patch  benjamin.peterson, 2011-07-30 22:51   address review
cpuset.patch       pitrou, 2012-08-04 11:13
Messages (22)
msg141401 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2011-07-29 17:09
For fun and profit. :)
msg141402 - (view) Author: Amaury Forgeot d'Arc (amaury.forgeotdarc) * (Python committer) Date: 2011-07-29 17:48
> @@ -10330,26 +10899,34 @@ INITFUNC(void)
I know that it's only an increase of 5%, but I feel that posixmodule.c is already large enough.
Does this feature belong in the os module?
Or is it time to split posixmodule.c in several pieces?
msg141403 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2011-07-29 17:50
2011/7/29 Amaury Forgeot d'Arc <report@bugs.python.org>:
>
> Amaury Forgeot d'Arc <amauryfa@gmail.com> added the comment:
>
>> @@ -10330,26 +10899,34 @@ INITFUNC(void)
> I know that it's only an increase of 5%, but I feel that posixmodule.c is already large enough.
> Does this feature belong in the os module?
> Or is it time to split posixmodule.c in several pieces?

I completely agree with you, but that's a separate issue.
msg141411 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2011-07-29 21:32
Haven't reviewed the implementation, but +1 for adding this functionality.

As for splitting apart posixmodule.c, I agree that's a separate concern. Something like the _io module splitup (several C files compiled in a single extension module) looks workable.

We could also consider having a separate module for this, but I can't think of any good name. "sched" is AFAIK already taken, and "scheduler" is too close to the former not to be confusing.
msg141415 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-07-29 22:54
I'm +0.
It would certainly be fun, but I'm not so sure about the "profit" part.
The main usage of the scheduler API is for real-time policies, and I
somehow doubt Python is commonly used in this area (but I could be
wrong).
Furthermore, the same functionality can be easily achieved with
schedtool/taskset.

Concerning the patch, I think that several constants ought to be
guarded by #ifdef's: only SCHED_(OTHER|FIFO|RR) are required by POSIX;
the others are Linux-specific (also, SCHED_BATCH, SCHED_IDLE and
SCHED_RESET_ON_FORK are only defined in recent Linux kernels, so older
libcs probably don't define them).
msg141416 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2011-07-29 23:30
I actually implemented this because I wanted to confine a Python process to a CPU to keep it from being tossed from core to core. It made sense to bring the other scheduling functions along for the ride.
msg141432 - (view) Author: Charles-François Natali (neologix) * (Python committer) Date: 2011-07-30 08:37
> I actually implemented this because I wanted to confine a Python process to a CPU to keep it from being tossed from core to core. It made sense to bring the other scheduling functions along for the ride.

Why didn't you use something like:

$ taskset <cpu mask> python myscript.py

By the way, binding a multi-threaded Python process to a single core
is often a simple way to improve performance, because with the GIL the
threads are actually serialized, so you have almost no contention, and
your threads get hot cache.
msg141442 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2011-07-30 14:15
2011/7/30 Charles-François Natali <report@bugs.python.org>:
>
> Charles-François Natali <neologix@free.fr> added the comment:
>
>> I actually implemented this because I wanted to confine a Python process to a CPU to keep it from being tossed from core to core. It made sense to bring the other scheduling functions along for the ride.
>
> Why didn't you use something like:
>
> $ taskset <cpu mask> python myscript.py

Because I didn't want to type that every time I ran the script.

>
> By the way, binding a multi-threaded Python process to a single core
> is often a simple way to improve performance, because with the GIL the
> threads are actually serialized, so you have almost no contention, and
> your threads get hot cache.

Indeed, that's what I was doing.
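[Editor's note: with the interface this issue ultimately added, the pinning discussed above can be done from inside the script instead of via taskset. A minimal sketch (Linux-only; assumes Python 3.3+ with os.sched_setaffinity/os.sched_getaffinity available):]

```python
import os

# Pin the current process (pid 0 = calling process) to a single core,
# the in-process equivalent of `taskset <cpu mask> python myscript.py`.
original = os.sched_getaffinity(0)   # set of CPUs we may run on
cpu = min(original)                  # pick one CPU from the current mask
os.sched_setaffinity(0, {cpu})       # confine to that single core
assert os.sched_getaffinity(0) == {cpu}
os.sched_setaffinity(0, original)    # restore the original affinity mask
```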
msg141583 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2011-08-02 22:30
New changeset 89e92e684b37 by Benjamin Peterson in branch 'default':
expose sched.h functions (closes #12655)
http://hg.python.org/cpython/rev/89e92e684b37
msg153490 - (view) Author: Matt Joiner (anacrolix) Date: 2012-02-16 18:12
Please also expose sched_getcpu().
msg167386 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-08-04 00:31
I'm sorry, I missed this issue. I just saw its API and I have remarks on the cpu_set type:
 * Why not reuse the set type for input parameters and the result of sched_getaffinity?
 * The method names of cpu_set are very different from those of the set type
 * Why do I need to specify the number of CPUs?

signal.pthread_sigmask() accepts any iterable object as input and converts it to a sigset_t. For the result, it converts the new sigset_t to a classic Python set object.

Is it possible to get the number of CPUs, to be able to convert a Python set to a cpu_set?
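[Editor's note: the pthread_sigmask() precedent cited above looks like this in practice (a sketch; POSIX platforms only):]

```python
import signal

# signal.pthread_sigmask() takes any iterable of signal numbers,
# converts it to a sigset_t internally, and returns the previous mask
# as a plain Python set -- the model suggested here for cpu_set.
old_mask = signal.pthread_sigmask(signal.SIG_BLOCK, {signal.SIGUSR1})
blocked = signal.pthread_sigmask(signal.SIG_UNBLOCK, {signal.SIGUSR1})
assert signal.SIGUSR1 in blocked     # the mask we just installed
signal.pthread_sigmask(signal.SIG_SETMASK, old_mask)  # restore
```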
msg167389 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-08-04 00:35
> Is it possible to get the number of CPU to be able to convert a Python 
> set to a cpu_set?

sched_getaffinity returns EINVAL if the cpu_set is too small, so it should be easy enough to iterate.
I agree the API should be changed for something saner, and before 3.3. Sorry for not looking at the patch in more detail when it was first submitted. I will try to cook up a new patch this week-end.
msg167396 - (view) Author: Benjamin Peterson (benjamin.peterson) * (Python committer) Date: 2012-08-04 04:02
Using a Python set is fine.
msg167401 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-08-04 11:08
Here is a patch removing cpu_set and using sets of integers instead.
msg167403 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-08-04 13:00
sched_getaffinity() does not fail if the set is smaller than the
number of CPUs. Try with an initial value of ncpus=1. So we cannot
start the heuristic with ncpus=16, because it would only return 16
CPUs even if the computer has more.

Instead of this heuristic, why not simply always use ncpus = CPU_SETSIZE?

I don't know if CPU_SETSIZE is part of the standard (POSIX?).

You may also use a constant size (CPU_SETSIZE) of the set used by
sched_setaffinity() to simplify the code.
msg167404 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-08-04 13:03
> Try with an initial value of ncpus=1.

Well, I've tried and it works:

>>> os.sched_getaffinity(0)
{0, 1, 2, 3}

> I don't know if CPU_SETSIZE is part of the standard (POSIX?).

These are Linux-specific functions.
msg167405 - (view) Author: STINNER Victor (vstinner) * (Python committer) Date: 2012-08-04 13:10
>> Try with an initial value of ncpus=1.
> Well, I've tried and it works:

Oh, you're right :-) I only checked ncpus (1), not the final result.
It works because CPU_ALLOC_SIZE() rounds the size up to a multiple of
sizeof(unsigned long) (64 bits on my CPU). So CPU_ALLOC_SIZE(1)
returns 8, and sched_getaffinity() takes the size of the set in bytes,
not the number of CPUs.

sched_getaffinity(0) returns {0, 1, 2, 3, 4, 5, 6, 7} with ncpus=1 and
setsize=8.
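[Editor's note: the rounding behaviour Victor describes can be sketched in Python. This is an illustration of the arithmetic, not a binding to the real glibc macro; ULONG_BITS = 64 assumes a 64-bit unsigned long, as in the example above:]

```python
ULONG_BITS = 64  # bits in an unsigned long on a 64-bit libc

def cpu_alloc_size(ncpus):
    """Bytes needed for ncpus, rounded up to whole unsigned longs,
    mimicking glibc's CPU_ALLOC_SIZE()."""
    nlongs = (ncpus + ULONG_BITS - 1) // ULONG_BITS
    return nlongs * (ULONG_BITS // 8)

assert cpu_alloc_size(1) == 8    # room for 64 CPUs, per msg167405
assert cpu_alloc_size(64) == 8   # still one unsigned long
assert cpu_alloc_size(65) == 16  # spills into a second one
```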
msg167406 - (view) Author: Giampaolo Rodola' (giampaolo.rodola) * (Python committer) Date: 2012-08-04 13:11
+1 for Antoine's patch/approach: it's more usable and pythonic.
I think the documentation should mention and link to:
http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count
msg167409 - (view) Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) Date: 2012-08-04 13:44
> You may also use a constant size (CPU_SETSIZE) of the set used by
sched_setaffinity() to simplify the code.

As Antoine pointed out to me (and as I convinced myself by experimenting with the example from man:CPU_SET(3)), the cpu_set functions work with sets of more than CPU_SETSIZE processors.
msg167410 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2012-08-04 14:19
New changeset d6745ddbccbd by Antoine Pitrou in branch 'default':
Issue #12655: Instead of requiring a custom type, os.sched_getaffinity and
http://hg.python.org/cpython/rev/d6745ddbccbd
msg167411 - (view) Author: Antoine Pitrou (pitrou) * (Python committer) Date: 2012-08-04 14:20
Ok, I've improved the default ncpus value and committed.
msg167429 - (view) Author: Roundup Robot (python-dev) (Python triager) Date: 2012-08-04 19:01
New changeset fb975cb8fb45 by Victor Stinner in branch 'default':
Issue #12655: Mention multiprocessing.cpu_count() in os.sched_getaffinity() doc
http://hg.python.org/cpython/rev/fb975cb8fb45
History
Date User Action Args
2022-04-11 14:57:20  admin  set  github: 56864
2012-08-04 19:01:44  python-dev  set  messages: + msg167429
2012-08-04 14:20:56  pitrou  set  status: open -> closed
    resolution: fixed
    messages: + msg167411
2012-08-04 14:19:49  python-dev  set  messages: + msg167410
2012-08-04 13:44:28  serhiy.storchaka  set  nosy: + serhiy.storchaka
    messages: + msg167409
2012-08-04 13:11:32  giampaolo.rodola  set  messages: + msg167406
2012-08-04 13:10:53  vstinner  set  messages: + msg167405
2012-08-04 13:03:10  pitrou  set  messages: + msg167404
2012-08-04 13:00:58  vstinner  set  messages: + msg167403
2012-08-04 11:13:11  pitrou  set  files: + cpuset.patch
2012-08-04 11:13:05  pitrou  set  files: - cpuset.patch
2012-08-04 11:08:06  pitrou  set  files: + cpuset.patch
    messages: + msg167401
2012-08-04 04:02:12  benjamin.peterson  set  messages: + msg167396
2012-08-04 00:35:01  pitrou  set  priority: normal -> release blocker
    nosy: + georg.brandl
    messages: + msg167389
2012-08-04 00:31:30  vstinner  set  status: closed -> open
    nosy: + vstinner
    messages: + msg167386
    resolution: fixed -> (no value)
2012-02-17 11:19:57  giampaolo.rodola  set  nosy: + giampaolo.rodola
2012-02-16 18:12:54  anacrolix  set  nosy: + anacrolix
    messages: + msg153490
2011-08-02 22:30:42  python-dev  set  status: open -> closed
    nosy: + python-dev
    messages: + msg141583
    resolution: fixed
    stage: patch review -> resolved
2011-07-30 22:52:04  benjamin.peterson  set  files: + sched_stuff.patch
2011-07-30 14:15:17  benjamin.peterson  set  messages: + msg141442
2011-07-30 08:37:40  neologix  set  messages: + msg141432
2011-07-29 23:37:44  benjamin.peterson  set  files: + sched_stuff.patch
2011-07-29 23:30:26  benjamin.peterson  set  messages: + msg141416
2011-07-29 22:54:21  neologix  set  messages: + msg141415
2011-07-29 21:32:04  pitrou  set  nosy: + neologix, rosslagerwall
    messages: + msg141411
    stage: patch review
2011-07-29 17:50:15  benjamin.peterson  set  messages: + msg141403
2011-07-29 17:48:55  amaury.forgeotdarc  set  nosy: + amaury.forgeotdarc
    messages: + msg141402
2011-07-29 17:09:55  benjamin.peterson  create