
Author bwelling
Date 2005-08-15.18:13:28
Content

The real problem we were seeing wasn't the memory leak, it
was a file descriptor leak.  Leaking references within the
interpreter is bad, but the garbage collector will
eventually notice the memory pressure and clean them up.
Leaking file descriptors is much worse: garbage collection
won't be triggered when the process reaches its descriptor
limit, so the process will start failing with "Too many
open files" (EMFILE).
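
For reference, the limit being hit is RLIMIT_NOFILE, which
can be inspected with the standard resource module (a
minimal sketch; exact values vary by system):

import resource

# Per-process limit on open file descriptors; every leaked
# descriptor counts against the soft limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print 'fd limit: soft=%d, hard=%d' % (soft, hard)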

To easily show this problem, run the following from an
interactive Python interpreter:

import urllib2

# Open and close a URL; the underlying socket should be
# released on close(), but it isn't.
f = urllib2.urlopen('http://www.google.com')
f.close()

and from another window, run "lsof -p <pid of interpreter>".
It should show a TCP socket in CLOSE_WAIT, which means the
file descriptor is still open.  I'm seeing weirdness on
Fedora Core 4 today that I didn't see last week, where after
a few seconds the file descriptor is listed as "can't
identify protocol" instead of TCP, but that's not too
relevant, since the descriptor is still open.
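
As noted above, the collector will eventually reclaim leaked
references, so forcing a collection may release the
descriptor.  A minimal sketch, assuming the socket is only
kept alive by a reference cycle:

import gc
import urllib2

f = urllib2.urlopen('http://www.google.com')
f.close()
# If a reference cycle is all that keeps the socket alive, a
# forced collection should release it; re-run lsof afterwards
# to see whether the CLOSE_WAIT socket is gone.
gc.collect()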

Repeating the urllib2.urlopen()/close() pair of statements
in the interpreter will cause more fds to be leaked, which
can also be seen with lsof.
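
To quantify the leak without watching lsof by hand, the open
descriptors can be counted under /proc (a rough,
Linux-specific sketch):

import os
import urllib2

def fd_count():
    # Linux-specific: each entry under /proc/<pid>/fd is an
    # open file descriptor belonging to this process.
    return len(os.listdir('/proc/%d/fd' % os.getpid()))

before = fd_count()
for i in range(10):
    f = urllib2.urlopen('http://www.google.com')
    f.close()
print 'leaked %d file descriptors' % (fd_count() - before)
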
History
Date                 User   Action  Args
2008-01-20 09:57:52  admin  link    issue1208304 messages
2008-01-20 09:57:52  admin  create