Message179018
Here's a new version addressing Guido's comments (except for kqueue, for which I'll add support later when I can test it).
I'm also attaching a benchmark to compare the implementations: as noted by Guido, the stated complexities of select/poll/epoll are theoretical ones; in real life, the syscall's cost is probably dwarfed by object creation, etc.
Here's a run, for two ready FDs among 1018:
"""
$ ./python ~/selector_bench.py -r 2 -m 1018 -t socket
Trying with 2 ready FDs out of 1018, type socket
<class 'select.EpollSelector'>
0.056010190999586484
<class 'select.PollSelector'>
0.2639519829990604
<class 'select.SelectSelector'>
1.1859817369986558
"""
So this can be interesting when a large number of FDs are monitored.
For the sake of completeness, for a sparse allocation (i.e. just a couple of FDs allocated near FD_SETSIZE), there's not much gain:
"""
$ ./python ~/selector_bench.py -r 2 -m 1018 -t socket -s
Trying with 2 FDs starting at 1018, type socket
<class 'select.EpollSelector'>
0.06651040699944133
<class 'select.PollSelector'>
0.06033727799876942
<class 'select.SelectSelector'>
0.0948788189998595
"""
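selector_bench.py itself isn't shown here, but a benchmark of this shape can be sketched as follows, using the stdlib selectors module for illustration (the patch under discussion puts the classes in select, and the bench/nfds/nready names are mine): register a number of socket pairs, make a few of them readable, and time repeated select() calls.

```python
import selectors
import socket
import time

def bench(selector_cls, nfds=64, nready=2, loops=100):
    """Rough sketch of a selector benchmark (illustrative, not the
    attached selector_bench.py): register nfds readable sockets,
    make nready of them actually ready, and time loops select() calls."""
    sel = selector_cls()
    pairs = [socket.socketpair() for _ in range(nfds)]
    for rd, _wr in pairs:
        sel.register(rd, selectors.EVENT_READ)
    # Make the first nready FDs readable by writing a byte to them.
    for _rd, wr in pairs[:nready]:
        wr.send(b"x")
    t0 = time.perf_counter()
    for _ in range(loops):
        events = sel.select(timeout=0)
    elapsed = time.perf_counter() - t0
    assert len(events) == nready
    for rd, wr in pairs:
        rd.close()
        wr.close()
    sel.close()
    return elapsed
```

With epoll, select() cost grows with the number of ready FDs rather than the number of registered ones, which is where the gap in the dense run above comes from.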
Two points I'm not sure about:
- should EINTR be handled (i.e. retry, with an updated timeout)? I'm tempted to say yes, because EINTR is just a pain the user should never be exposed to.
- what should be done with POLLNVAL and POLLERR? Raise an exception (that's what Java does, but since you can get them quite easily it would be a pain to use), or return a generic SELECT_ERR event? FWIW, Twisted reports POLLERR|POLLNVAL as a CONNECTION_LOST event. |
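The EINTR handling I have in mind could look like the sketch below: retry the syscall with a timeout recomputed against a deadline, so the caller never sees the interruption (select_retry_eintr is a hypothetical helper for illustration, not code from the patch).

```python
import errno
import select
import time

def select_retry_eintr(rlist, wlist, xlist, timeout=None):
    """Call select(), transparently retrying on EINTR with an
    updated timeout so the interruption is never visible to the
    caller.  Illustrative sketch only."""
    deadline = None if timeout is None else time.monotonic() + timeout
    while True:
        try:
            return select.select(rlist, wlist, xlist, timeout)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            if deadline is not None:
                # Recompute the remaining time before retrying.
                timeout = max(deadline - time.monotonic(), 0)
```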
Date: 2013-01-04 13:18:11
User: neologix
Recipients: neologix, gvanrossum, pitrou, giampaolo.rodola, rosslagerwall, felipecruz
Message-id: <1357305491.57.0.597912977053.issue16853@psf.upfronthosting.co.za>
Issue: issue16853