Usage
=====

This plugin provides a ``benchmark`` fixture. This fixture is a callable object that will benchmark any function
passed to it.

Example:

.. code-block:: python

    import time


    def something(duration=0.000001):
        """
        Function that needs some serious benchmarking.
        """
        time.sleep(duration)
        # You may return anything you want, like the result of a computation
        return 123


    def test_my_stuff(benchmark):
        # benchmark something
        result = benchmark(something)

        # Extra code, to verify that the run completed correctly.
        # Sometimes you may want to check the result, fast functions
        # are no good if they return incorrect results :-)
        assert result == 123

You can also pass extra arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.02)

Or even keyword arguments:

.. code-block:: python

    def test_my_stuff(benchmark):
        # time.sleep() rejects keyword arguments, so use a function that
        # accepts them - like ``something`` defined above
        benchmark(something, duration=0.02)

Another pattern seen in the wild, which is not recommended for micro-benchmarks (very fast code) but may be
convenient:

.. code-block:: python

    def test_my_stuff(benchmark):
        @benchmark
        def something():  # unnecessary function call
            time.sleep(0.000001)

A better way is to just benchmark the final function:

.. code-block:: python

    def test_my_stuff(benchmark):
        benchmark(time.sleep, 0.000001)  # way more accurate results!

If you need fine control over how the benchmark is run (like a ``setup`` function, or exact control of ``iterations``
and ``rounds``) there's a special mode - pedantic_:

.. code-block:: python

    def my_special_setup():
        ...


    def test_with_setup(benchmark):
        benchmark.pedantic(something, setup=my_special_setup,
                           args=(1, 2, 3), kwargs={'foo': 'bar'},
                           iterations=10, rounds=100)

Commandline options
===================

``py.test`` command-line options (two short invocation sketches follow the list):

  --benchmark-min-time=SECONDS
                        Minimum time per round in seconds. Default: '0.000005'
  --benchmark-max-time=SECONDS
                        Maximum run time per test - it will be repeated until
                        this total time is reached. It may be exceeded if the
                        test function is very slow or --benchmark-min-rounds
                        is large (it takes precedence). Default: '1.0'
  --benchmark-min-rounds=NUM
                        Minimum rounds, even if total time would exceed
                        ``--max-time``. Default: 5
  --benchmark-timer=FUNC
                        Timer to use when measuring time. Default:
                        'time.perf_counter'
  --benchmark-calibration-precision=NUM
                        Precision to use when calibrating the number of
                        iterations. A precision of 10 will make the timer look
                        10 times more accurate, at the cost of a less precise
                        measure of deviations. Default: 10
  --benchmark-warmup=KIND
                        Activates warmup. Will run the test function up to the
                        given number of times in the calibration phase. See
                        ``--benchmark-warmup-iterations``. Note: even the
                        warmup phase obeys --benchmark-max-time. Available
                        KIND: 'auto', 'off', 'on'. Default: 'auto'
                        (automatically activate on PyPy).
  --benchmark-warmup-iterations=NUM
                        Max number of iterations to run in the warmup phase.
                        Default: 100000
  --benchmark-disable-gc
                        Disable GC during benchmarks.
  --benchmark-skip      Skip running any tests that contain benchmarks.
  --benchmark-disable   Disable benchmarks. Benchmarked functions are only run
                        once and no stats are reported. Use this if you want
                        to run the test but don't do any benchmarking.
  --benchmark-enable    Forcibly enable benchmarks. Use this option to
                        override --benchmark-disable (in case you have it in
                        pytest configuration).
  --benchmark-only      Only run benchmarks. This overrides --benchmark-skip.
  --benchmark-save=NAME
                        Save the current run into
                        'STORAGE-PATH/counter-NAME.json'. Default: '__
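As a sketch of how these flags combine in practice (the test path and values here are illustrative picks, not
defaults or recommendations; the dotted timer path assumes ``--benchmark-timer`` accepts any importable function, as
its 'time.perf_counter' default suggests):

.. code-block:: console

    # Illustrative invocation: more rounds, a longer time budget, GC paused
    # during measurement, and CPU time instead of the wall-clock timer.
    $ py.test tests/ --benchmark-min-rounds=10 --benchmark-max-time=2.0 \
        --benchmark-disable-gc --benchmark-timer=time.process_time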
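And if your pytest configuration (e.g. ``addopts`` in ``pytest.ini``) already carries ``--benchmark-disable``, a
one-off run can force benchmarking back on, per the ``--benchmark-enable`` description above:

.. code-block:: console

    # Overrides a --benchmark-disable inherited from pytest configuration.
    $ py.test tests/ --benchmark-enable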