
Printing test execution times and pinning down slow tests with py.test

I am running unit tests on a CI server using py.test. The tests use external resources fetched over the network. Sometimes a test run takes too long and gets aborted. I cannot reproduce the issue locally.

Is there a way to make py.test print out the execution times of (slow) tests, so that pinning down problematic tests becomes easier?


ankostis

I'm not sure this will solve your problem, but you can pass --durations=N to print the slowest N tests after the test suite finishes.

Use --durations=0 to print all.
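For illustration, here is a minimal test file you could use to see what --durations reports (the file name, test names, and sleep time are made up for this sketch):

```python
# test_slow.py -- a deliberately slow test suite (hypothetical example)
import time

def test_fast():
    assert 1 + 1 == 2

def test_slow():
    time.sleep(0.3)  # simulate a slow network fetch
    assert True
```

Running `pytest --durations=3 test_slow.py` appends a "slowest durations" section to the summary, where test_slow should appear near the top.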


Do you know if it is possible to add this to the generated HTML coverage report, similar to how adding a .coveragerc file with the contents [run] branch = True adds branch coverage information?
You will need to add that information yourself; pytest-html has support for inserting additional content.
@oLas: That's not true: if tests are "too fast", the measured time can apparently become 0 and they will still be filtered out. A negative threshold doesn't help in this case either. Another annoyance with this approach is that pytest will always print "(0.00 durations hidden. Use -vv to show these durations.)", which does not make much sense.
@bluenote10 Not sure if this was added later (it's now 2021), but with --durations-min=N you can set the minimal duration in seconds for inclusion in the slowest list. The default is 0.005, so even with --durations=0 nothing under 0.005s will be shown unless you set a value for --durations-min.
Phuong

You can pass the number with --durations

pytest --durations=0 — Show all times for tests and setup and teardown

pytest --durations=1 — Show only the single slowest

pytest --durations=50 — Show the slowest 50, with times, etc.
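Conceptually, --durations=N just sorts the per-phase times pytest records and prints the top N (or all of them when N is 0). A sketch of the same idea in plain Python, with made-up timings:

```python
# Hypothetical recorded durations (seconds) per test phase,
# similar to what pytest collects for setup/call/teardown
durations = {
    "test_api_call (call)": 4.21,
    "test_api_call (setup)": 0.03,
    "test_parse (call)": 0.01,
    "test_db (call)": 1.37,
}

def slowest(durations, n=0):
    """Return the n slowest entries, longest first; n == 0 means all."""
    ranked = sorted(durations.items(), key=lambda kv: kv[1], reverse=True)
    return ranked if n == 0 else ranked[:n]

for name, secs in slowest(durations, n=2):
    print(f"{secs:.2f}s  {name}")
# prints the two slowest phases: test_api_call (call), then test_db (call)
```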

See: https://medium.com/@brianokken/pytest-durations-0-show-all-times-for-tests-and-setup-and-teardown-848dccac85db

Or: https://docs.pytest.org/en/latest/usage.html#profiling-test-execution-duration