Classification
Title: context manager for measuring duration of blocks of code
Type: enhancement Stage: needs patch
Components: Library (Lib) Versions: Python 3.8
Process
Status: open Resolution:
Dependencies: Superseder:
Assigned To: steven.daprano Nosy List: THRlWiTi, cheryl.sabella, cjrh, dmoore, ezio.melotti, georg.brandl, giampaolo.rodola, martin.panter, ncoghlan, r.david.murray, rbcollins, rhettinger, steven.daprano, tim.peters, xgdomingo
Priority: normal Keywords:

Created on 2013-11-04 18:07 by dmoore, last changed 2019-03-30 14:11 by ncoghlan.

Messages (15)
msg202151 - (view) Author: Damien Moore (dmoore) Date: 2013-11-04 18:07
It would be useful if timeit had a class like `timeblock` below that would allow one to easily time code inside a `with` block:

import time

class timeblock:
    def __init__(self, descr=''):
        self.descr = descr
    def __enter__(self):
        self.t0 = time.time()
        return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.t1 = time.time()
        self.elapsed = self.t1 - self.t0
        if self.descr:
            print(self.descr, 'took', self.elapsed, 'seconds')

This would be used as follows:

with timeblock('cumsum') as t:
    a = 0
    for x in range(10000000):
        a += x

and would output:
cumsum took 2.39 seconds

This is useful when trying to find bottlenecks in large programs without interfering with their operation, which would be harder to do with timeit.timeit and more verbose with time.
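For reference, the same idea can be written more compactly with contextlib.contextmanager. This is a hedged sketch (not part of the original proposal) that also swaps time.time for time.perf_counter, which is monotonic and better suited to interval timing:

```python
import time
from contextlib import contextmanager

@contextmanager
def timeblock(descr=''):
    # Yield a mutable dict so the elapsed time is readable after the block exits
    result = {'elapsed': None}
    t0 = time.perf_counter()
    try:
        yield result
    finally:
        result['elapsed'] = time.perf_counter() - t0
        if descr:
            print(descr, 'took', result['elapsed'], 'seconds')

with timeblock('cumsum') as t:
    a = 0
    for x in range(1000000):
        a += x
```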
msg202153 - (view) Author: Giampaolo Rodola' (giampaolo.rodola) * (Python committer) Date: 2013-11-04 18:38
I think I have already proposed something similar in the past but I can't find any reference on the tracker (so maybe I didn't after all).
BTW I have something like this [1] in a "utils" module I use across different projects, and I would definitely welcome an inclusion in the stdlib.

[1] a modified version of http://dabeaz.blogspot.it/2010/02/function-that-works-as-context-manager.html
msg220402 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2014-06-13 00:44
I have been using something like this for many years now and it is very handy. I have an early version of the code posted here:

http://code.activestate.com/recipes/577896

Over the next week or so, I'll prepare a patch.

Because it's a new feature, it must be for 3.5 only. The other versions are in feature-freeze.
msg220526 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2014-06-14 02:30
This idea is a bit at odds with the design of the rest of the timeit module, which seeks to:

* run small snippets of code in volume rather than "trying to find bottlenecks in large programs without interfering with their operation".  The latter task is more suited to the profiling modules.

* run code inside a function (local variables have different performance characteristics from global lookups).

* take care of running the loop for you (including repeated measurements and setup code) instead of writing your own loop, as is done in the proposed code fragment.

* return the output rather than print it

* minimize the overhead of the timing suite itself (using itertools.repeat to control the number of loops, etc.), unlike the class-based version, which introduces its own overhead.

It might still be worth having a time elapsed decorator, but I wanted to note that this isn't in the spirit of the rest of the module.
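For contrast, the timeit workflow described above looks roughly like this (a sketch; the statement and repeat counts are arbitrary):

```python
import timeit

# timeit compiles the statement into a function, runs it `number` times,
# and returns the total elapsed seconds instead of printing them
total = timeit.timeit('sum(range(1000))', number=1000)

# repeat() performs the whole measurement several times; the minimum is
# the usual headline figure, since noise only ever makes timings larger
best = min(timeit.repeat('sum(range(1000))', number=1000, repeat=3))
print(total, best)
```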
msg220546 - (view) Author: Giampaolo Rodola' (giampaolo.rodola) * (Python committer) Date: 2014-06-14 10:30
> It might still be worth having a time elapsed decorator

This is exactly what I think. It's a very common task.
msg220688 - (view) Author: Raymond Hettinger (rhettinger) * (Python committer) Date: 2014-06-16 02:09
Perhaps a time-elapsed context manager would be a better fit in the contextlib module (which contains more general-purpose macro-level tools usable for many different tasks) rather than the timeit module (which is more narrowly tailored to high-quality reproducible in-vitro performance analysis at a fine-grained level).

> It's a very common task.

That said, the task is sometimes solved in different ways.  Hard-wiring this to "print" would preclude its use in cases where you want to save the result to a variable, log it, or build some cumulative total-time statistics.

Also, I think it ignores some of the realities about how difficult it is to do meaningful performance analysis.
The "cumsum" example isn't the norm.  Most code is harder to time and doesn't have a high-volume tight loop that you can easily wrap a coarse-grained context manager around.  Getting reproducible timings (creating a consistent setup, using repeated calls to average out noise, and isolating the part you want to test) can be tricky in the middle of real applications.

I occasionally have code where the proposed context manager would have been nice (saving me from typing the usual start=time() ... end=time()-start pairs).  However, most of the time this technique is too simple, and something like Robert Kern's line-profiler or cProfile is a better fit for identifying hotspots.
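As a concrete illustration of that alternative, hotspot hunting with cProfile looks roughly like this (a sketch; the `work` function is a made-up stand-in):

```python
import cProfile
import io
import pstats

def work():
    # Stand-in for a real hotspot
    return sum(i * i for i in range(100000))

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Render the top entries sorted by cumulative time
buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats('cumulative').print_stats(5)
print(buf.getvalue())
```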
msg220691 - (view) Author: Steven D'Aprano (steven.daprano) * (Python committer) Date: 2014-06-16 02:55
On Mon, Jun 16, 2014 at 02:09:22AM +0000, Raymond Hettinger wrote:

> Perhaps a time-elapsed context manager would be a better fit in the 
> contextlib module (which contains more general purpose macro-level 
> tools usable for many different tasks) rather than the timeit module 
> (which is more narrowly tailored to high-quality reproducible in-vitro 
> performance analysis at a fine-grained level).

Perhaps you're right, but "timeit" is a fairly broad description, and I 
would expect anything related to the task "time it" to be found there. 
I'm +0.9 on timeit and +0.4 on contextlib.

Does anyone else have an opinion on where this belongs?
msg220692 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-06-16 02:59
I agree with Raymond -- this is a common pattern but there are many variations that are hard to catch in a single implementation. E.g. at Dropbox we have a decorator like this that lets you specify an identifier for the block you name, and which logs the measured times to a special log file. Then there is separate analysis software that plots the average time for each named block over time. Incredibly useful, but hard to share the code.

I think that Jeff Preshing's blog post giving the basic idea (to which Steven's recipe links) is more useful than an implementation in the stdlib could ever be. Steven's class already has three keyword arguments, and that feels like either not enough or already too many, depending on your needs. It's too easy to bikeshed it to death. (E.g. the "disable+restore gc" feature could easily be moved into a separate decorator; and why not make the verbose printing a little more flexible by adding another parameter to specify either the file to print to or the function to be used to print. Oh, and the message could be customizable. Then again, you might want to subclass this in order to do all those customizations, eventually allowing this to be used as the basis for e.g. the Dropbox version. Etc. Not that I propose to go that way -- I'm just bringing this up to indicate how hopeless it would be to try and put some version in the stdlib.)
msg220693 - (view) Author: Guido van Rossum (gvanrossum) * (Python committer) Date: 2014-06-16 03:04
FWIW, I definitely don't think this belongs in the timeit module, unless you are going to treat that module as a "namespace" package, which I don't like much (though in Java I think it's a pretty common pattern). If we have to have it, contextlib sounds fine (it's a grab-bag already). The sub-idea of a context manager to disable gc (or perhaps manipulate other state of the gc module) sounds fine even if you follow my recommendation not to add the timer context manager to the stdlib.
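The gc-disabling sub-idea mentioned here could look like the following (a hedged sketch; the name `disabled_gc` is made up):

```python
import gc
from contextlib import contextmanager

@contextmanager
def disabled_gc():
    # Turn the cyclic collector off for the block, restoring the prior state
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        yield
    finally:
        if was_enabled:
            gc.enable()

with disabled_gc():
    inside = gc.isenabled()
print(inside, gc.isenabled())
```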
msg220703 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2014-06-16 08:21
I'm with Guido on this one - I don't believe there's a clean sweet spot to hit. There *might* be a plausible case to be made for tracing functionality in the logging module, but even there the number of options multiplies fast enough that writing your own context manager would likely be easier than learning a new API with lots of settings to tweak.
msg221065 - (view) Author: Ezio Melotti (ezio.melotti) * (Python committer) Date: 2014-06-20 03:41
I often write code like:
  import time
  start = time.time()
  ...
  end = time.time()
  print(end - start)

Usually I don't do this to accurately measure the performance of some piece of code, but rather for tasks that take some time (e.g. downloading a file, or anything that I can leave there for a while and come back to later to see how long it took).

So I'm +1 on a simple context manager that replaces this common snippet, and -0 on something that tries to accurately measure some piece of code (if it takes a few seconds or more, high accuracy doesn't matter; if it takes a fraction of a second, I won't trust the result without repeating the measurement in a loop).

Regarding the implementation I can think about 2 things I might want:
1) a way to retrieve the time (possibly as a timedelta-like object [0]), e.g.:
    with elapsed_time() as t:
        ...
    print(t.seconds)
2) a way to print a default message (this could also be the default behavior, with a silent=True to suppress the output), e.g.:
    >>> with elapsed_time(print=True):
    ...     ...
    ...
    Task terminated after X seconds.

For the location I think that the "time" module would be the first place where I would look (since I would have to otherwise import time() from there).  I would probably also look at "timeit" (since I'm going to time something), even though admittedly it doesn't fit too much with the rest.  While it might fit nicely in "contextlib", I won't probably expect to find it there unless I knew it was there in the first place.


[0] would making timedelta a context manager be too crazy?
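A sketch of point 1 above (illustrative only: the `elapsed_time` name, the result object, and the `verbose` flag are made up — `verbose` stands in for the suggested `print=True` to avoid shadowing the builtin):

```python
import time
from contextlib import contextmanager

class _Elapsed:
    """Result object whose .seconds is filled in when the block exits."""
    seconds = None

@contextmanager
def elapsed_time(verbose=False):
    t = _Elapsed()
    start = time.perf_counter()
    try:
        yield t
    finally:
        t.seconds = time.perf_counter() - start
        if verbose:
            print(f'Task terminated after {t.seconds} seconds.')

with elapsed_time() as t:
    time.sleep(0.01)
print(t.seconds)
```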
msg311294 - (view) Author: Cheryl Sabella (cheryl.sabella) * (Python committer) Date: 2018-01-30 19:43
Hello everyone,

I was interested in similar functionality for timeit and found this discussion.  I know it's too late for 3.7, but I was wondering if it would be worth putting this on python-ideas or python-dev to try to get some consensus for a PEP or a patch?  With Barry's addition of breakpoint(), I think something similar for timeit (that is, a simpler way to invoke it) would be quite helpful.

Thanks!
msg311295 - (view) Author: R. David Murray (r.david.murray) * (Python committer) Date: 2018-01-30 20:06
FWIW, we just ran into the need for this on our project, and used 'Time' from the contexttimer package on PyPI.  In our use case we want the number so we can log it, and what we care about is the actual elapsed time, not a precise measurement of the code block, because it is for system monitoring and metrics, not hardcore performance analysis.

Having this always print would indeed be sub-optimal :)
msg338797 - (view) Author: Caleb Hattingh (cjrh) * Date: 2019-03-25 13:46
Somehow I missed that there's been an open issue on this. Like others I've written a bunch of different incarnations of an "elapsed" context manager over the years.  Always for the more crude "how long did this take" reason like David mentioned, never the microbenchmarks scenario that timeit serves. The work is never quite substantial enough to make a PyPI package of it, but it's always annoying to have to do again the next time the need arises.


The overall least fiddly scheme I've found is to just use a callback. It's the simplest option:


@contextmanager
def elapsed(cb: Callable[[float, float], None], counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        cb(t0, t1)


The simple case, the one most people want when they just want a quick check of elapsed time on a chunk of code, is then quite easy:


with elapsed(lambda t0, t1: print(f'read_excel: {t1 - t0:.2f} s')):
    # very large spreadsheet
    df = pandas.read_excel(filename, dtype=str)


(I rarely need to use a timer decorator for functions, because the profiler tracks function calls. But within the scope of a function it can sometimes be difficult to get timing information, particularly if the calls made there are into native extensions)

One of the consequences of using a callback strategy is that an additional version might be required for async callbacks (I have used these in production also):


@asynccontextmanager
async def aioelapsed(acb: Callable[[float, float], Awaitable[None]],
                     counter=time.perf_counter):
    t0 = counter()
    try:
        yield
    finally:
        t1 = counter()
        await acb(t0, t1)


So, the interesting thing here is that there is a general form for which an "elapsed" function is just a special case:


T = TypeVar('T')


@contextmanager
def sample_before_and_after(cb: Callable[[T, T], None], sample: Callable[[], T]):
    before = sample()
    try:
        yield
    finally:
        after = sample()
        cb(before, after)


The version of "elapsed" given further above is just this with kwarg sample=time.perf_counter.  So, it might be sufficient to cover the use-case of an "elapsed" context manager with something like the above, which is more general. However, I don't actually have any use cases for this more general thing other than "elapsed", but I thought it was interesting.
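To illustrate that generality with a non-time sampler (a sketch; the `record` callback and the surrounding names are made up), the same pattern can compare memory snapshots via tracemalloc:

```python
import tracemalloc
from contextlib import contextmanager

@contextmanager
def sample_before_and_after(cb, sample):
    before = sample()
    try:
        yield
    finally:
        cb(before, sample())

results = {}

def record(before, after):
    # get_traced_memory() returns (current, peak); compare current bytes
    results['delta'] = after[0] - before[0]

tracemalloc.start()
with sample_before_and_after(record, tracemalloc.get_traced_memory):
    data = [0] * 100_000  # keep the allocation alive past the exit sample
tracemalloc.stop()
print(results['delta'])
```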


Whether any of this merits being in the stdlib or not is hard to say. These code snippets are all short and easy to write. But I've written them multiple times to make "elapsed".


---


Once the "elapsed" abstraction is available, other cool ideas become a little bit easier to think about. These would be things that live in user code (not in the stdlib), but which can make use of the "elapsed" cm; for example, a clever logger for slow code blocks (I've written a few of these too):


@contextmanager
def slow_code_logging(logger_name, msg, *args, threshold_sec=1.0, **kwargs):
    logger = logging.getLogger(logger_name)

    if logger.isEnabledFor(logging.INFO):
        def cb(t0: float, t1: float) -> None:
            dt = t1 - t0
            if dt < threshold_sec:
                # Change the logger level depending on threshold
                logger.debug(msg, dt, *args, **kwargs)
            else:
                logger.info(msg, dt, *args, **kwargs)

        cm = elapsed(cb)
    else:
        # Logger is not even enabled, do nothing.
        cm = nullcontext()

    with cm:
        yield


with slow_code_logging(__name__, 'Took longer to run than expected: %.4g s'):
    ...


And a super-hacky timing histogram generator (which would be quite interesting to use for measuring latency in network calls, e.g. with asyncio code):


@contextmanager
def histobuilder(counter, bin_size):
    def cb(t0, t1):
        dt = t1 - t0
        bucket = dt - dt % bin_size
        counter[bucket, bucket + bin_size] += 1

    with elapsed(cb, counter=time.perf_counter_ns):
        yield

counter = Counter()

for i in range(100):
    with histobuilder(counter, bin_size=int(5e4)):  # 50 us
        time.sleep(0.01)  # 10 ms

for (a, b), v in sorted(counter.items(), key=lambda _: _[0][0]):
    print(f'{a/1e6:6.2f} ms - {b/1e6:>6.2f} ms: {v:4} ' + '\u2588' * v)


output:


  9.85 ms -   9.90 ms:    1 █
  9.90 ms -   9.95 ms:   10 ██████████
  9.95 ms -  10.00 ms:   17 █████████████████
 10.00 ms -  10.05 ms:    8 ████████
 10.05 ms -  10.10 ms:   12 ████████████
 10.10 ms -  10.15 ms:    5 █████
 10.15 ms -  10.20 ms:    4 ████
 10.20 ms -  10.25 ms:    4 ████
 10.25 ms -  10.30 ms:    6 ██████
 10.30 ms -  10.35 ms:    9 █████████
 10.35 ms -  10.40 ms:    3 ███
 10.40 ms -  10.45 ms:    5 █████
 10.45 ms -  10.50 ms:   12 ████████████
 10.50 ms -  10.55 ms:    3 ███
 10.55 ms -  10.60 ms:    1 █


Things like the histogram builder above become (for me) much simpler when "elapsed" is just a thing that already exists even though it's very simple on its own.

If the right call is that we shouldn't add such a thing to the stdlib, should we instead just add an example of an elapsed context manager to the contextlib docs? Perhaps copy-paste from the docs is good enough for most users (like we do with the itertools recipes for example).  I'd be willing to add something to the docs if it's decided that's the way to go.

(On the off chance some version of "elapsed" goes into the stdlib, I think contextlib is right place rather than timeit, particularly because of how the "sample_before_and_after" pattern generalises)
msg339201 - (view) Author: Nick Coghlan (ncoghlan) * (Python committer) Date: 2019-03-30 14:11
I think Caleb's "sample_before_and_after" idea hints that this may be an idea that could benefit from the ExitStack treatment, where contextlib provides a building block that handles the interaction with the context management machinery, and the documentation shows recipes for particular use cases (such as implementing timers for blocks of code).

For example:

    class ContextMonitor:
        """Invoke given sampling callbacks when a context is entered and exited, and optionally combine the sampling results

        *on_entry*: zero-arg sampling function to call on context entry
        *on_exit*: zero-arg sampling function to call on context exit
        *combine_samples*: two-arg sample combination function. If not given, samples are combined as 2-tuples.
        *keep_results*: whether to keep results for retrieval via ``pop_combined_result()``

        Instances of this context manager are reusable and reentrant.
        """
        def __init__(self, on_entry=None, on_exit=None, combine_samples=None, *, keep_results=False):
            self._on_entry = (lambda: None) if on_entry is None else on_entry
            self._on_exit = (lambda: None) if on_exit is None else on_exit
            self._combine_samples = (lambda *args: args) if combine_samples is None else combine_samples
            self._entry_samples = []
            self._keep_results = keep_results
            self._combined_results = [] if keep_results else None

        @classmethod
        def sample(cls, on_event=None, combine_samples=None):
            """Context monitor that uses the same sampling callback on entry and exit"""
            return cls(on_event, on_event, combine_samples)

        def pop_combined_result(self):
            """Pops the last combined result. Raises RuntimeError if no results are available"""
            results = self._combined_results
            if not results:
                raise RuntimeError("No sample results to report")
            return results.pop()

        def __enter__(self):
            self._entry_samples.append(self._on_entry())
            return self

        def __exit__(self, *args):
            entry_sample = self._entry_samples.pop()
            exit_sample = self._on_exit()
            result = self._combine_samples(entry_sample, exit_sample)
            if self._keep_results:
                self._combined_results.append(result)
 
And then a recipe like the following (adapted from Caleb's example):

    def log_if_slow(logger_name, msg, *args, threshold_sec=1.0, **kwargs):
        """"Context manager that logs monitored blocks that take too long"""
        logger = logging.getLogger(logger_name)
        if not logger.isEnabledFor(logging.INFO):
            # Avoid the timer overhead if the logger will never print anything
            return nullcontext()
        def _log_slow_blocks(start, end):
            duration = end - start
            if duration >= threshold_sec:
                logger.info(msg, duration, *args, **kwargs)
        return ContextMonitor.sample(time.perf_counter, _log_slow_blocks)
    
    with log_if_slow(__name__, 'Took longer to run than expected: %.4g s'):
        ...

    
The question is whether anyone would actually find it easier to learn to use an API like ContextMonitor over just writing their own generator-based context manager.

Depending on how familiar they are with the context management protocol, it's plausible that they would, as the construction API only asks a few questions:

* whether to use the same sampling function on entry and exit or different ones
* which sampling function(s) to use
* how to combine the two samples into a single result (defaulting to producing a two-tuple)
* whether to keep the results around for later retrieval (useful if you want to post-process the samples rather than dealing with them directly in the sample combination callback)
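To make the proposal concrete, here is a condensed, runnable rendition of the ContextMonitor idea and the one-line timer it enables (a sketch consistent with the class above, not a finished API):

```python
import time

class ContextMonitor:
    """Minimal rendition: sample on entry and exit, combine the two
    samples, and optionally keep combined results for later retrieval."""

    def __init__(self, on_entry=None, on_exit=None, combine_samples=None, *,
                 keep_results=False):
        self._on_entry = on_entry if on_entry is not None else (lambda: None)
        self._on_exit = on_exit if on_exit is not None else (lambda: None)
        self._combine = (combine_samples if combine_samples is not None
                         else (lambda a, b: (a, b)))
        self._entry_samples = []  # a stack, so the manager is reentrant
        self._results = [] if keep_results else None

    @classmethod
    def sample(cls, on_event=None, combine_samples=None, **kwargs):
        """Use the same sampling callback on entry and exit."""
        return cls(on_event, on_event, combine_samples, **kwargs)

    def pop_combined_result(self):
        if not self._results:
            raise RuntimeError("No sample results to report")
        return self._results.pop()

    def __enter__(self):
        self._entry_samples.append(self._on_entry())
        return self

    def __exit__(self, *exc):
        result = self._combine(self._entry_samples.pop(), self._on_exit())
        if self._results is not None:
            self._results.append(result)

# A reusable block timer is then a one-liner:
timer = ContextMonitor.sample(time.perf_counter,
                              lambda t0, t1: t1 - t0, keep_results=True)
with timer:
    sum(range(100000))
print(timer.pop_combined_result())
```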
History
Date                 User              Action  Args
2019-03-30 14:11:14  ncoghlan          set     messages: + msg339201
2019-03-25 13:46:36  cjrh              set     nosy: + cjrh; messages: + msg338797
2018-02-11 03:11:25  xgdomingo         set     nosy: + xgdomingo
2018-01-30 20:06:51  r.david.murray    set     nosy: + r.david.murray; messages: + msg311295; versions: + Python 3.8, - Python 3.5
2018-01-30 19:43:08  cheryl.sabella    set     nosy: + cheryl.sabella; messages: + msg311294
2017-05-21 05:55:37  gvanrossum        set     nosy: - gvanrossum
2017-05-20 11:52:05  THRlWiTi          set     nosy: + THRlWiTi
2015-03-18 04:59:49  martin.panter     set     nosy: + martin.panter
2015-03-17 22:39:16  rbcollins         set     nosy: + rbcollins; title: Enhancement for timeit: measure time to run blocks of code using 'with' -> context manager for measuring duration of blocks of code
2014-06-20 03:41:49  ezio.melotti      set     messages: + msg221065
2014-06-16 08:21:58  ncoghlan          set     messages: + msg220703
2014-06-16 03:04:51  gvanrossum        set     messages: + msg220693
2014-06-16 02:59:32  gvanrossum        set     messages: + msg220692
2014-06-16 02:55:24  steven.daprano    set     messages: + msg220691
2014-06-16 02:09:22  rhettinger        set     nosy: + ncoghlan; messages: + msg220688
2014-06-14 10:30:43  giampaolo.rodola  set     messages: + msg220546
2014-06-14 02:30:41  rhettinger        set     nosy: + gvanrossum, tim.peters, rhettinger; messages: + msg220526
2014-06-13 00:44:41  steven.daprano    set     versions: + Python 3.5, - Python 3.4; nosy: + steven.daprano; messages: + msg220402; assignee: steven.daprano
2013-11-04 18:38:54  giampaolo.rodola  set     messages: + msg202153
2013-11-04 18:09:10  ezio.melotti      set     nosy: + georg.brandl, giampaolo.rodola, ezio.melotti; stage: needs patch; versions: - Python 2.6, Python 3.1, Python 2.7, Python 3.2, Python 3.3, Python 3.5
2013-11-04 18:07:31  dmoore            create