This issue tracker has been migrated to GitHub, and is currently read-only.
For more information, see the GitHub FAQs in Python's Developer Guide.

Author jcea
Recipients Garen, belopolsky, danchr, dhduvall, dmalcolm, fche, glyph, hazmat, jbaker, jcea, jmcp, laca, lasizoillo, loewis, mjw, movement, neologix, pitrou, rhettinger, robert.kern, ronaldoussoren, scox, serverhorror, sirg3, techtonik, twleung, wsanchez
Date 2011-12-12.14:49:14
SpamBayes Score 4.399592e-12
Marked as misclassified No
Message-id <>
According to Stan Cox, this patch "almost" works with SystemTap. Moreover, there is ongoing work to port DTrace to Linux.

DTrace helping to improve performance is secondary. The really important thing is "observability". It is difficult to appreciate the advantages without experimenting directly, but the possibilities are endless.

Just an example:

Yesterday I was somewhat worried about the memory cost of the last version. I know that the extra memory used is around 2*n, where "n" is the bytecode size. But what does that amount to in real-world terms?
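To get a feel for that 2*n figure, here is a rough back-of-the-envelope estimator (purely illustrative, not part of the patch): it sums the bytecode of a code object and of the code objects nested inside it, then doubles the result.

```python
import types

def bytecode_size(code):
    """Bytecode bytes in `code`, including nested functions/classes."""
    total = len(code.co_code)
    for const in code.co_consts:
        if isinstance(const, types.CodeType):
            total += bytecode_size(const)
    return total

# Compile a tiny module-like source and estimate the overhead.
src = "def double(x):\n    return x * 2\n"
code = compile(src, "<example>", "exec")
n = bytecode_size(code)
print("bytecode bytes:", n, "-> estimated extra memory:", 2 * n)
```

The real overhead of course depends on the patch's allocator, which is exactly what the DTrace measurement below checks without guessing.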

I wrote the following DTrace script (the probe bodies did not survive the tracker migration; the pid-provider malloc probes below are a plausible reconstruction of the lost part):

    unsigned long int tam;   /* total bytes allocated */
    unsigned int num;        /* number of blocks */

    /* NOTE: reconstructed -- the original likely matched the patch's
       allocation sites rather than every malloc in the process. */
    pid$target::malloc:entry
    {
        num++;
        tam += arg0;
    }

    END
    {
        printf("%d %d", num, tam);
    }
This script keeps track of the extra allocations/deallocations done at import/reload() time.

Note that this script DOESN'T use the new Python probes. You could use them, for instance, to find out which module/function is doing the lion's share of the imports.

You launch this code with "dtrace -s SCRIPT.d -c COMMAND".

Some real world examples:

- Interactive interpreter invocation: 517 blocks, 95128 bytes.

- BitTorrent tracker with persistence (DURUS+BerkeleyDB) backend: 2122 blocks, 439332 bytes.

- Fully functional LMTP+POP3 server written in Python, with persistence (DURUS+BerkeleyDB) backend: 2182 blocks, 422288 bytes.

- RSS to Twitter gateway, with OAuth: 2680 blocks, 556636 bytes. The import weight of the "feedparser" and "bitly" libraries is surprising.
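For scale, the per-block average implied by those figures is around 200 bytes in every case (a quick computation on the numbers above, not part of the original measurement):

```python
# (blocks, total bytes) as reported by the DTrace script above.
samples = {
    "interactive interpreter": (517, 95128),
    "BitTorrent tracker": (2122, 439332),
    "LMTP+POP3 server": (2182, 422288),
    "RSS to Twitter gateway": (2680, 556636),
}
for name, (blocks, total_bytes) in samples.items():
    print(f"{name}: {total_bytes / blocks:.0f} bytes/block")
```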

So the memory hit seems pretty reasonable. And I can verify it without ANY change to Python.

In this case I am launching Python "inside" dtrace because I want to see the complete picture, from the very beginning. But usually you "plug into" a long-running Python process for a while, without stopping it at all, and when you are done, you shut down the tracing script... without ANY disturbance to the running Python program, which keeps working. For instance, suppose your server code is being slow for some reason *NOW*. You use DTrace to study what the program is doing just NOW. You don't have to stop the program, add "debug" code to it, launch it again, WAIT until the problem happens again, discover that your "debug" code is not helping, change it, repeat...

This is observability. It is difficult to explain, but once you are used to it, life sucks without it.