
Author: vstinner
Recipients: Sebastian.Kreft.Deezer, giampaolo.rodola, gvanrossum, pitrou, vstinner, yselivanov
Date: 2014-06-02 19:47:51
Content
"I agree that blocking is not ideal, however there are already some other methods that can eventually block forever, and for such cases a timeout is provided."

Functions like read() can "block" for several minutes, but that is expected from network functions. Blocking until the application releases a file descriptor is more surprising.
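Such a slow read can already be bounded at the application level, for example with asyncio.wait_for(). A minimal sketch, assuming "reader" is an asyncio.StreamReader obtained elsewhere (e.g. from asyncio.open_connection()):

import asyncio

async def read_with_timeout(reader, n=1024, timeout=60.0):
    try:
        # wait_for() cancels the read and raises TimeoutError if it
        # does not finish within `timeout` seconds.
        return await asyncio.wait_for(reader.read(n), timeout)
    except asyncio.TimeoutError:
        return b''  # or log / re-raise, depending on the application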


"I think this method should retry until it can actually access the resources,"

You can easily implement this in your application.
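A minimal sketch of such an application-level retry, assuming the failing call raises OSError with errno EMFILE/ENFILE when no descriptor is available (connect_with_retry() is just an illustrative name):

import asyncio
import errno

async def connect_with_retry(host, port, retries=5, delay=1.0):
    for _ in range(retries):
        try:
            return await asyncio.open_connection(host, port)
        except OSError as exc:
            if exc.errno not in (errno.EMFILE, errno.ENFILE):
                raise
            # Out of file descriptors: wait for the application to
            # release some, then try again.
            await asyncio.sleep(delay)
    raise OSError(errno.EMFILE, "too many open files, giving up")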

"knowing when and how many files descriptors are going to be used is very implementation dependent"

I don't think that asyncio is the right place to handle file descriptors.

Usually, the file descriptor limit is around 1024. How did you reach such a high limit? How many processes are running at the same time? asyncio should not "leak" file descriptors; maybe it's a bug in your application?
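To check whether the limit (or a descriptor leak) is really the problem, the application can inspect its own usage and limits; a sketch for Linux (the resource module is POSIX-only and /proc/self/fd is Linux-specific):

import os
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
open_fds = len(os.listdir('/proc/self/fd'))
print("fd limit: soft=%d, hard=%d; currently open: %d" % (soft, hard, open_fds))

# If the soft limit really is too low, it can be raised up to the hard
# limit without extra privileges.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))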

I'm now closing the bug.