[Python-ideas] Tulip / PEP 3156 event loop implementation question: CPU vs. I/O starvation
Robert Collins
robertc at robertcollins.net
Sat Jan 12 06:06:22 CET 2013
On 12 January 2013 12:41, Guido van Rossum <guido at python.org> wrote:
> Here's an interesting puzzle. Check out the core of Tulip's event
> loop: http://code.google.com/p/tulip/source/browse/tulip/unix_events.py#672
>     now_ready = list(_ready)
>     _ready.clear()
>     for handler in now_ready:
>         call handler
>
> However this implies that we go back to the I/O polling code more
> frequently. While the I/O polling code sets the timeout to zero when
> there's anything in the _ready queue, so it won't block, it still
> isn't free; it's an expensive system call that we'd like to put off
> until we have nothing better to do.
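For readers following along, the iteration structure being discussed looks
roughly like this (a simplified sketch using the stdlib selectors module,
not Tulip's actual code; the names run_once, ready and handler are
illustrative): callbacks are drained from a snapshot of the ready queue,
and if they scheduled more work the poll timeout is forced to zero, so the
selector call returns immediately but is still made on every pass.

    import selectors

    def run_once(selector, ready, timeout=None):
        # Drain a snapshot of the ready queue; handlers may append
        # new work to `ready` while we run them.
        now_ready = list(ready)
        ready.clear()
        for handler in now_ready:
            handler()

        # If the callbacks scheduled more work, don't block in the
        # poller -- but the system call itself still happens.
        if ready:
            timeout = 0
        for key, mask in selector.select(timeout):
            ready.append(key.data)   # key.data is the I/O callback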
How expensive is it really? If it's select(), it's terrible, but we
shouldn't be using that anywhere.
If it's poll(), it is moderately expensive, and it doesn't scale -
it's linear in the number of fds.
If it's I/O completion ports on Windows, it is approximately free - the
OS calls back into us every time we tell it we're ready for more
events.
And if it's epoll, it is also basically free: it reads off an event
queue rather than checking every entry in the array.
kqueue has similar efficiency on BSD systems.
I'd want to see some actual numbers before assuming that the call into
epoll or the completion port is actually a driving factor in latency here.
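Getting those numbers isn't hard. Something like the rough sketch below
(Linux-only because of epoll; the socketpair count and iteration count are
arbitrary, and you may need to lower n or raise the fd limit) registers a
batch of idle sockets and times zero-timeout polls with poll() and epoll:

    import select
    import socket
    import time

    def bench(n=500, iterations=10000):
        # n socketpairs -> 2*n idle fds, none of them ever readable.
        socks = []
        for _ in range(n):
            a, b = socket.socketpair()
            socks.extend((a, b))
        fds = [s.fileno() for s in socks]

        poller = select.poll()
        for fd in fds:
            poller.register(fd, select.POLLIN)

        ep = select.epoll()
        for fd in fds:
            ep.register(fd, select.EPOLLIN)

        t0 = time.perf_counter()
        for _ in range(iterations):
            poller.poll(0)       # scans every registered fd
        t1 = time.perf_counter()
        for _ in range(iterations):
            ep.poll(0)           # reads the kernel's ready list
        t2 = time.perf_counter()

        print("poll():  %.2f us/call" % ((t1 - t0) / iterations * 1e6))
        print("epoll(): %.2f us/call" % ((t2 - t1) / iterations * 1e6))

        for s in socks:
            s.close()

    bench()

With nothing ready, poll() still has to walk the whole fd array on every
call, while epoll returns immediately from an empty ready list.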
-Rob