Welcome to pytest-benchmark’s documentation!¶
This plugin provides a benchmark fixture. This fixture is a callable object that will benchmark any function passed to it.
Notable features and goals:
- Sensible defaults and automatic calibration for microbenchmarks
- Good integration with pytest
- Comparison and regression tracking
- Exhaustive statistics
- JSON export
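The comparison, regression-tracking, and JSON-export features are driven by command-line flags. A typical workflow might look like the following (a sketch assuming pytest-benchmark is installed; flag names are from the plugin's options):

```shell
# Run the suite and save the results under a named baseline.
pytest --benchmark-save=baseline

# Later: run again, compare against the saved run,
# and export the raw data as JSON.
pytest --benchmark-compare --benchmark-json=results.json
```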
Examples:
import time

def something(duration=0.000001):
    """
    Function that needs some serious benchmarking.
    """
    time.sleep(duration)
    # You may return anything you want, like the result of a computation
    return 123

def test_my_stuff(benchmark):
    # benchmark something
    result = benchmark(something)

    # Extra code, to verify that the run completed correctly.
    # Sometimes you may want to check the result; fast functions
    # are no good if they return incorrect results :-)
    assert result == 123

def test_my_stuff_different_arg(benchmark):
    # benchmark something, but add some arguments
    result = benchmark(something, 0.001)
    assert result == 123
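The `benchmark` fixture itself is provided by the plugin, but its core behavior — repeatedly timing a callable while still returning the callable's own result — can be sketched in plain Python. This is a simplified illustration, not the plugin's actual implementation (the real fixture also calibrates iteration counts and collects far richer statistics):

```python
import time

def make_benchmark(rounds=5):
    """Return a minimal stand-in for the `benchmark` fixture.

    It times `func` over several rounds and returns the function's own
    result, so the calling test can still assert on it.
    """
    def benchmark(func, *args, **kwargs):
        timings = []
        result = None
        for _ in range(rounds):
            start = time.perf_counter()
            result = func(*args, **kwargs)
            timings.append(time.perf_counter() - start)
        # Expose a few summary statistics, loosely mirroring the plugin.
        benchmark.stats = {
            "min": min(timings),
            "max": max(timings),
            "mean": sum(timings) / len(timings),
        }
        return result
    return benchmark
```

Because the wrapped function's return value is passed through, a test can both benchmark a function and verify its output, exactly as in the examples above.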
Screenshots¶
Normal run:

Compare mode (--benchmark-compare):

Histogram (--benchmark-histogram):
Also, it has nice tooltips.
User guide¶
- Installation
- Usage
- Calibration
- Pedantic mode
- Comparing past runs
- Hooks
- Frequently Asked Questions
- Glossary
- Contributing
- Authors
- Changelog
- 4.0.0 (2022-10-26)
- 3.4.1 (2021-04-17)
- 3.4.0 (2021-04-17)
- 3.2.3 (2020-01-10)
- 3.2.2 (2017-01-12)
- 3.2.1 (2017-01-10)
- 3.2.0 (2017-01-07)
- 3.1.1 (2017-07-26)
- 3.1.0 (2017-07-21)
- 3.1.0a2 (2017-03-27)
- 3.1.0a1 (2016-10-29)
- 3.0.0 (2015-11-08)
- 3.0.0rc1 (2015-10-25)
- 3.0.0b3 (2015-10-22)
- 3.0.0b2 (2015-10-17)
- 3.0.0b1 (2015-10-13)
- 3.0.0a4 (2015-10-08)
- 3.0.0a3 (2015-10-02)
- 3.0.0a2 (2015-09-30)
- 3.0.0a1 (2015-09-13)
- 2.5.0 (2015-06-20)
- 2.4.1 (2015-03-16)
- 2.4.0 (2015-03-12)
- 2.3.0 (2014-12-27)
- 2.2.0 (2014-12-26)
- 2.1.0 (2014-12-20)
- 2.0.0 (2014-12-19)
- 1.0.0 (2014-12-15)
- ? (?)