Welcome to pytest-benchmark’s documentation!

This plugin provides a benchmark fixture: a callable object that will benchmark any function passed to it.

Notable features and goals:

  • Sensible defaults and automatic calibration for microbenchmarks
  • Good integration with pytest
  • Comparison and regression tracking
  • Exhaustive statistics
  • JSON export


import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result, fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

def test_my_stuff_different_arg(benchmark):
    # benchmark something, but add some arguments
    result = benchmark(something, 0.001)
    assert result == 123
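For a sense of what the fixture automates, here is a hand-rolled equivalent using only the standard library's timeit. This is a minimal sketch, not how the plugin measures internally: the fixed number=1000 below is an arbitrary choice, whereas the benchmark fixture calibrates iterations and rounds for you and collects full statistics.

```python
import time
import timeit

def something(duration=0.000001):
    """Function that needs some serious benchmarking."""
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

# Time 1000 calls and report the mean per call. The benchmark
# fixture picks the iteration count automatically instead of
# using a hard-coded number like this.
elapsed = timeit.timeit(lambda: something(), number=1000)
print(f"mean: {elapsed / 1000:.2e} s per call")
```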


Normal run:

Screenshot of py.test summary

Compare mode (--benchmark-compare):

Screenshot of py.test summary in compare mode

Histogram (--benchmark-histogram):

Histogram sample
Also, it has nice tooltips.
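The compare and histogram outputs above are driven by command-line flags. A typical workflow sketch (the tests/ path is illustrative; saved runs land under .benchmarks/ by default):

```shell
# Save this run's results (stored under .benchmarks/)
pytest tests/ --benchmark-autosave

# Compare the current run against the most recent saved run
pytest tests/ --benchmark-compare

# Render an SVG histogram of the timings
pytest tests/ --benchmark-histogram
```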
