This issue tracker has been migrated to GitHub,
and is currently read-only.
For more information,
see the GitHub FAQs in Python's Developer Guide.
Created on 2011-07-29 17:09 by benjamin.peterson, last changed 2022-04-11 14:57 by admin. This issue is now closed.
**Files**

| File name | Uploaded | Description | Edit |
|---|---|---|---|
| sched_stuff.patch | benjamin.peterson, 2011-07-29 17:09 | | review |
| sched_stuff.patch | benjamin.peterson, 2011-07-29 23:37 | more ifdefs | review |
| sched_stuff.patch | benjamin.peterson, 2011-07-30 22:51 | address review | review |
| cpuset.patch | pitrou, 2012-08-04 11:13 | | review |
**Messages (22)**
| msg141401 | Author: Benjamin Peterson (benjamin.peterson) * (Python committer) | Date: 2011-07-29 17:09 | |
For fun and profit. :) |
|||
| msg141402 | Author: Amaury Forgeot d'Arc (amaury.forgeotdarc) * (Python committer) | Date: 2011-07-29 17:48 | |
> @@ -10330,26 +10899,34 @@ INITFUNC(void) I know that it's only an increase of 5%, but I feel that posixmodule.c is already large enough. Does this feature belong in the os module? Or is it time to split posixmodule.c into several pieces? |
|||
| msg141403 | Author: Benjamin Peterson (benjamin.peterson) * (Python committer) | Date: 2011-07-29 17:50 | |
2011-07-29 Amaury Forgeot d'Arc <report@bugs.python.org>: > Does this feature belong in the os module? > Or is it time to split posixmodule.c into several pieces? I completely agree with you, but that's a separate issue. |
|||
| msg141411 | Author: Antoine Pitrou (pitrou) * (Python committer) | Date: 2011-07-29 21:32 | |
Haven't reviewed the implementation, but +1 for adding this functionality. As for splitting apart posixmodule.c, I agree that's a separate concern. Something like the _io module split-up (several C files compiled into a single extension module) looks workable. We could also consider having a separate module for this, but I can't think of any good name. "sched" is AFAIK already taken, and "scheduler" is too close to the former not to be confusing. |
|||
| msg141415 | Author: Charles-François Natali (neologix) * (Python committer) | Date: 2011-07-29 22:54 | |
I'm +0. It would certainly be fun, but I'm not so sure about the "profit" part. The main usage of the scheduler API is for real-time policies, and I somehow doubt Python is commonly used in this area (but I could be wrong). Furthermore, the same functionality can easily be achieved with schedtool/taskset. Concerning the patch, I think that several constants ought to be guarded by #ifdef's: only SCHED_(OTHER|FIFO|RR) are required by POSIX; the others are Linux-specific (also, SCHED_BATCH, SCHED_IDLE and SCHED_RESET_ON_FORK are only defined in recent Linux kernels, so older libcs probably don't define them). |
|||
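The Python-level consequence of those #ifdef guards is that the SCHED_* constants may or may not exist depending on the platform and libc. A portable script can probe for them instead of assuming they exist; this is a sketch, and the helper name is mine:

```python
import os

# SCHED_OTHER, SCHED_FIFO and SCHED_RR are required by POSIX; the other
# policies (SCHED_BATCH, SCHED_IDLE, SCHED_RESET_ON_FORK) are
# Linux-specific and may be missing, so we probe with hasattr.
def available_policies():
    """Map the scheduling-policy names this build of Python exposes
    to their integer values (illustrative helper, not a stdlib API)."""
    names = ("SCHED_OTHER", "SCHED_FIFO", "SCHED_RR",
             "SCHED_BATCH", "SCHED_IDLE", "SCHED_RESET_ON_FORK")
    return {n: getattr(os, n) for n in names if hasattr(os, n)}
```

On a recent Linux all six names are typically present; on other POSIX systems only the first three (or none) may be.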
| msg141416 | Author: Benjamin Peterson (benjamin.peterson) * (Python committer) | Date: 2011-07-29 23:30 | |
I actually implemented this because I wanted to confine a Python process to a CPU, to keep it from being tossed from core to core. It made sense to bring the other scheduling functions along for the ride. |
|||
| msg141432 | Author: Charles-François Natali (neologix) * (Python committer) | Date: 2011-07-30 08:37 | |
> I actually implemented this because I wanted to confine a Python process to a CPU, to keep it from being tossed from core to core. Why didn't you use something like: $ taskset <cpu mask> python myscript.py By the way, binding a multi-threaded Python process to a single core is often a simple way to improve performance, because with the GIL the threads are effectively serialized, so you have almost no contention, and your threads get hot caches. |
|||
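Benjamin's use case (confining the interpreter to one core from inside the script, rather than via taskset) can be sketched with the affinity calls this issue added. The function name is mine, and the calls are Linux-only, hence the guard:

```python
import os

def pin_to_one_cpu():
    """Confine the current process to the lowest-numbered CPU it is
    currently allowed to run on. Returns that CPU index, or None if
    the platform does not expose affinity calls."""
    if not hasattr(os, "sched_setaffinity"):
        return None  # non-Linux platform
    cpu = min(os.sched_getaffinity(0))  # pid 0 means "this process"
    os.sched_setaffinity(0, {cpu})
    return cpu
```

Calling this near the top of a script has the same effect as launching it with `taskset <mask> python myscript.py`, without having to type the prefix every time.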
| msg141442 | Author: Benjamin Peterson (benjamin.peterson) * (Python committer) | Date: 2011-07-30 14:15 | |
2011-07-30 Charles-François Natali <report@bugs.python.org>: > Why didn't you use something like: > $ taskset <cpu mask> python myscript.py Because I didn't want to type that every time I ran the script. > By the way, binding a multi-threaded Python process to a single core > is often a simple way to improve performance, because with the GIL the > threads are effectively serialized, so you have almost no contention, and > your threads get hot caches. Indeed, that's what I was doing. |
|||
| msg141583 | Author: Roundup Robot (python-dev) (Python triager) | Date: 2011-08-02 22:30 | |
New changeset 89e92e684b37 by Benjamin Peterson in branch 'default': expose sched.h functions (closes #12655) http://hg.python.org/cpython/rev/89e92e684b37 |
|||
| msg153490 | Author: Matt Joiner (anacrolix) | Date: 2012-02-16 18:12 | |
Please also expose sched_getcpu(). |
|||
| msg167386 | Author: STINNER Victor (vstinner) * (Python committer) | Date: 2012-08-04 00:31 | |
I'm sorry, I missed this issue. I just saw its API and I have remarks on the cpu_set type: * Why not reuse the set type for input parameters and for the result of sched_getaffinity? * Method names of cpu_set are very different from the names of the set type * Why do I need to specify the number of CPUs? signal.pthread_sigmask() accepts any iterable object as input and converts it to a sigset_t. For the result, it converts the new sigset_t to a classic Python set object. Is it possible to get the number of CPUs, so as to be able to convert a Python set to a cpu_set? |
|||
| msg167389 | Author: Antoine Pitrou (pitrou) * (Python committer) | Date: 2012-08-04 00:35 | |
> Is it possible to get the number of CPUs, so as to be able to convert a Python > set to a cpu_set? sched_getaffinity returns EINVAL if the cpu_set is too small, so it should be easy enough to iterate. I agree the API should be changed to something saner, and before 3.3. Sorry for not looking at the patch in more detail when it was first submitted. I will try to cook up a new patch this weekend. |
|||
| msg167396 | Author: Benjamin Peterson (benjamin.peterson) * (Python committer) | Date: 2012-08-04 04:02 | |
Using a Python set is fine. |
|||
| msg167401 | Author: Antoine Pitrou (pitrou) * (Python committer) | Date: 2012-08-04 11:08 | |
Here is a patch removing cpu_set and using sets of integers instead. |
|||
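The interface Antoine's patch introduced, which is what shipped in Python 3.3, drops the custom cpu_set type entirely: os.sched_getaffinity() returns a plain set of ints, and os.sched_setaffinity() accepts any iterable of CPU indices. A round-trip sketch, guarded because the calls are Linux-only:

```python
import os

if hasattr(os, "sched_getaffinity"):
    allowed = os.sched_getaffinity(0)    # plain set, e.g. {0, 1, 2, 3}
    # Any iterable of CPU indices is accepted on the way back in;
    # a sorted list round-trips to the same affinity set.
    os.sched_setaffinity(0, sorted(allowed))
    assert os.sched_getaffinity(0) == allowed
```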
| msg167403 | Author: STINNER Victor (vstinner) * (Python committer) | Date: 2012-08-04 13:00 | |
sched_getaffinity() does not fail if the set is smaller than the number of CPUs. Try with an initial value of ncpus=1. So we cannot start the heuristic with ncpus=16, because it would only return 16 even if the computer has more CPUs. Instead of this heuristic, why not simply always use ncpus = CPU_SETSIZE? I don't know if CPU_SETSIZE is part of the standard (POSIX?). You may also use a constant size (CPU_SETSIZE) for the set used by sched_setaffinity() to simplify the code. |
|||
| msg167404 | Author: Antoine Pitrou (pitrou) * (Python committer) | Date: 2012-08-04 13:03 | |
> Try with an initial value of ncpus=1.
Well, I've tried and it works:
>>> os.sched_getaffinity(0)
{0, 1, 2, 3}
> I don't know if CPU_SETSIZE is part of the standard (POSIX?).
These are Linux-specific functions.
|
|||
| msg167405 | Author: STINNER Victor (vstinner) * (Python committer) | Date: 2012-08-04 13:10 | |
>> Try with an initial value of ncpus=1.
> Well, I've tried and it works:
Oh, you're right :-) I only checked ncpus (1), not the final result.
It works because CPU_ALLOC_SIZE() rounds the size up to a multiple of
sizeof(unsigned long) (64 bits on my CPU). So CPU_ALLOC_SIZE(1)
returns 8, and sched_getaffinity() takes the size of the set in bytes,
not the number of CPUs.
sched_getaffinity(0) returns {0, 1, 2, 3, 4, 5, 6, 7} with ncpus=1 and
setsize=8.
|
|||
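Victor's observation can be checked with a little arithmetic: glibc's CPU_ALLOC_SIZE(n) rounds the requested bit count up to whole unsigned longs. Here is an illustrative Python mirror of that macro; the 8-byte unsigned long is an assumption matching Victor's 64-bit machine:

```python
SIZEOF_ULONG = 8  # assumption: 64-bit platform, as in Victor's test

def cpu_alloc_size(ncpus):
    """Illustrative mirror of glibc's CPU_ALLOC_SIZE macro: the number
    of bytes needed for a dynamically allocated cpu_set_t holding
    `ncpus` bits, rounded up to whole unsigned longs."""
    bits_per_long = SIZEOF_ULONG * 8
    nlongs = (ncpus + bits_per_long - 1) // bits_per_long
    return nlongs * SIZEOF_ULONG

# Even ncpus=1 yields an 8-byte mask with room for 64 CPUs, which is
# why sched_getaffinity() reported all CPUs despite ncpus=1.
```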
| msg167406 | Author: Giampaolo Rodola' (giampaolo.rodola) * (Python committer) | Date: 2012-08-04 13:11 | |
+1 for Antoine's patch/approach: it's more usable and Pythonic. I think the documentation should mention and link to: http://docs.python.org/library/multiprocessing.html#multiprocessing.cpu_count |
|||
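The doc cross-reference Giampaolo suggests reflects a real distinction: multiprocessing.cpu_count() counts the CPUs in the machine, while os.sched_getaffinity(0) reports the possibly smaller set this process is allowed to run on. A quick comparison, hedged for non-Linux platforms:

```python
import multiprocessing
import os

total = multiprocessing.cpu_count()        # CPUs in the machine
if hasattr(os, "sched_getaffinity"):       # Linux-only call
    usable = len(os.sched_getaffinity(0))  # CPUs we may run on
else:
    usable = total
assert usable <= total  # affinity can only restrict, never extend
```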
| msg167409 | Author: Serhiy Storchaka (serhiy.storchaka) * (Python committer) | Date: 2012-08-04 13:44 | |
> You may also use a constant size (CPU_SETSIZE) for the set used by sched_setaffinity() to simplify the code. As Antoine pointed out to me (and as I convinced myself, experimenting with an example from man:CPU_SET(3)), the cpu_set functions work with sets of more than CPU_SETSIZE processors. |
|||
| msg167410 | Author: Roundup Robot (python-dev) (Python triager) | Date: 2012-08-04 14:19 | |
New changeset d6745ddbccbd by Antoine Pitrou in branch 'default': Issue #12655: Instead of requiring a custom type, os.sched_getaffinity and http://hg.python.org/cpython/rev/d6745ddbccbd |
|||
| msg167411 | Author: Antoine Pitrou (pitrou) * (Python committer) | Date: 2012-08-04 14:20 | |
Ok, I've improved the default ncpus value and committed. |
|||
| msg167429 | Author: Roundup Robot (python-dev) (Python triager) | Date: 2012-08-04 19:01 | |
New changeset fb975cb8fb45 by Victor Stinner in branch 'default': Issue #12655: Mention multiprocessing.cpu_count() in os.sched_getaffinity() doc http://hg.python.org/cpython/rev/fb975cb8fb45 |
|||
**History**

| Date | User | Action | Args |
|---|---|---|---|
| 2022-04-11 14:57:20 | admin | set | github: 56864 |
| 2012-08-04 19:01:44 | python-dev | set | messages: + msg167429 |
| 2012-08-04 14:20:56 | pitrou | set | status: open -> closed; resolution: fixed; messages: + msg167411 |
| 2012-08-04 14:19:49 | python-dev | set | messages: + msg167410 |
| 2012-08-04 13:44:28 | serhiy.storchaka | set | nosy: + serhiy.storchaka; messages: + msg167409 |
| 2012-08-04 13:11:32 | giampaolo.rodola | set | messages: + msg167406 |
| 2012-08-04 13:10:53 | vstinner | set | messages: + msg167405 |
| 2012-08-04 13:03:10 | pitrou | set | messages: + msg167404 |
| 2012-08-04 13:00:58 | vstinner | set | messages: + msg167403 |
| 2012-08-04 11:13:11 | pitrou | set | files: + cpuset.patch |
| 2012-08-04 11:13:05 | pitrou | set | files: - cpuset.patch |
| 2012-08-04 11:08:06 | pitrou | set | files: + cpuset.patch; messages: + msg167401 |
| 2012-08-04 04:02:12 | benjamin.peterson | set | messages: + msg167396 |
| 2012-08-04 00:35:01 | pitrou | set | priority: normal -> release blocker; nosy: + georg.brandl; messages: + msg167389 |
| 2012-08-04 00:31:30 | vstinner | set | status: closed -> open; nosy: + vstinner; messages: + msg167386; resolution: fixed -> (no value) |
| 2012-02-17 11:19:57 | giampaolo.rodola | set | nosy: + giampaolo.rodola |
| 2012-02-16 18:12:54 | anacrolix | set | nosy: + anacrolix; messages: + msg153490 |
| 2011-08-02 22:30:42 | python-dev | set | status: open -> closed; nosy: + python-dev; messages: + msg141583; resolution: fixed; stage: patch review -> resolved |
| 2011-07-30 22:52:04 | benjamin.peterson | set | files: + sched_stuff.patch |
| 2011-07-30 14:15:17 | benjamin.peterson | set | messages: + msg141442 |
| 2011-07-30 08:37:40 | neologix | set | messages: + msg141432 |
| 2011-07-29 23:37:44 | benjamin.peterson | set | files: + sched_stuff.patch |
| 2011-07-29 23:30:26 | benjamin.peterson | set | messages: + msg141416 |
| 2011-07-29 22:54:21 | neologix | set | messages: + msg141415 |
| 2011-07-29 21:32:04 | pitrou | set | nosy: + neologix, rosslagerwall; messages: + msg141411; stage: patch review |
| 2011-07-29 17:50:15 | benjamin.peterson | set | messages: + msg141403 |
| 2011-07-29 17:48:55 | amaury.forgeotdarc | set | nosy: + amaury.forgeotdarc; messages: + msg141402 |
| 2011-07-29 17:09:55 | benjamin.peterson | create | |