
Author Aaron.Sherman
Recipients Aaron.Sherman
Date 2011-02-24.23:22:42
Message-id <1298589764.8.0.472079473316.issue11314@psf.upfronthosting.co.za>
In-reply-to
Content
I wrote some code a while back that used os.popen. I recently got a warning that popen is deprecated, so I ran a test with the new subprocess module. In that test, subprocess.Popen shows roughly a 40% process-creation overhead penalty relative to os.popen, which really isn't small. The difference appears to come from some heavy mmap-related work happening in my version of Python; that may be highly platform specific, but the mmap/mremap/munmap calls made in my sample subprocess code aren't made at all in the os.popen equivalent.
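
For reference, a minimal timing sketch along these lines looks something like the following. This is not my original test script; the /bin/true command, shell=True, and the iteration count are just illustrative assumptions, and absolute numbers will vary by platform:

    # Minimal timing sketch, not the original benchmark: '/bin/true',
    # shell=True, and the iteration count are assumptions chosen here
    # for illustration; results are platform dependent.
    import os
    import subprocess
    import timeit

    def via_os_popen():
        # os.popen starts a shell running the command and returns a pipe
        # to its stdout; reading and closing waits for the child to exit.
        f = os.popen("/bin/true")
        f.read()
        f.close()

    def via_subprocess():
        # subprocess.Popen with shell=True is the closest equivalent of
        # the os.popen call above.
        p = subprocess.Popen("/bin/true", shell=True, stdout=subprocess.PIPE)
        p.communicate()

    if __name__ == "__main__":
        n = 200  # number of child processes to spawn per variant
        print("os.popen:         %.3fs" % timeit.timeit(via_os_popen, number=n))
        print("subprocess.Popen: %.3fs" % timeit.timeit(via_subprocess, number=n))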

Now, before someone says, "process creation is trivial, so a 40% hit is acceptable because it's 40% of a trivial part of your execution time," keep in mind that many Python applications are heavily process-creation focused. In my case that means monitoring, but I could also imagine this having a substantial impact on Web services and other applications that spend almost all of their time creating child processes. For a trivial script, subprocess is fine as is, but for these demanding applications, subprocess represents a significant source of pain.

Anyway, my testing, results, and conclusions are written up here:

http://essays.ajs.com/2011/02/python-subprocess-vs-ospopen-overhead.html