Pytest Tips and Tricks
This document provides useful tips and tricks to enhance your experience while using pytest.
Running Specific Tests
Run tests that match a substring pattern:
pytest -k "login"
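For example, with hypothetical tests named as below, -k "login" selects test_login_success and test_login_failure but skips test_logout, because -k matches on substrings of the test names:
def test_login_success():
    pass  # selected: "login" is a substring of the name

def test_login_failure():
    pass  # selected

def test_logout():
    pass  # not selected: the name does not contain "login"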
Run tests with specific markers:
import pytest

@pytest.mark.slow
def test_heavy_computation():
    pass
pytest -m slow
Hint
A nice thing about @pytest.mark is that it's also compatible with unittest.TestCase class-based test cases.
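To avoid PytestUnknownMarkWarning, custom markers such as slow should be registered, for example in pytest.ini (the marker name here mirrors the example above):
[pytest]
markers =
    slow: marks tests as slow (deselect with -m "not slow")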
Stopping on Failures
Stop on the first failure:
pytest -x
Stop after a certain number of failures:
pytest --maxfail=5
Rerunning Failed Tests
Re-run only failed tests:
pytest --lf
Run failed tests first, followed by the others:
pytest --ff
Automatically re-run failed tests multiple times before marking them as failed:
pip install pytest-rerunfailures
pytest --reruns 3
Run new tests first with --nf:
pytest --nf
Running Tests in Parallel
Speed up test execution by running tests in parallel:
pytest -n auto # Automatically determines the number of CPU cores
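The -n option comes from the pytest-xdist plugin, which needs to be installed separately:
pip install pytest-xdist
pytest -n 4  # Or pin an explicit number of worker processes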
Debugging
Use pytest.set_trace() for debugging:
import pytest

def test_debug():
    x = 10
    pytest.set_trace()  # Enter interactive debugging mode
    assert x == 10
Use --pdb to drop into the debugger on failure:
pytest --pdb
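If you prefer IPython's debugger, pytest also lets you swap the debugger class via --pdbcls (this assumes IPython is installed in your environment):
pip install ipython
pytest --pdb --pdbcls=IPython.terminal.debugger:TerminalPdb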
Capturing Output
Capture stdout/stderr:
import sys

def test_stdout_stderr_output(capsys):
    print("This is stdout")
    captured = capsys.readouterr()
    assert "This is stdout" in captured.out

    print("This is stderr", file=sys.stderr)
    captured = capsys.readouterr()
    assert "This is stderr" in captured.err
Capture logs:
import logging

def test_log_output(caplog):
    with caplog.at_level(logging.DEBUG):
        logger = logging.getLogger(__name__)
        logger.info("This is a log message")
    assert "This is a log message" in caplog.text
    assert caplog.records[0].levelno == logging.INFO
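If output is written at the file-descriptor level (for example by a subprocess), capsys will not see it; the capfd fixture captures at that level instead. A minimal sketch:
import subprocess
import sys

def test_subprocess_output(capfd):
    # Run a child process that prints to its own stdout
    subprocess.run([sys.executable, "-c", "print('hello from child')"], check=True)
    captured = capfd.readouterr()
    assert "hello from child" in captured.out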
Suppressing Warnings
Use --disable-warnings to suppress warnings in pytest output:
pytest --disable-warnings
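For finer-grained control, warnings can also be filtered in pytest.ini instead of being suppressed wholesale; for example, to ignore only DeprecationWarning:
[pytest]
filterwarnings =
    ignore::DeprecationWarning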
Verbose Test Output
Use -v for more detailed test output and -vv for even more verbosity:
pytest -v   # Shows individual test names and results
pytest -vv  # Provides extra debugging information
Handling Unstable Tests
Automatically rerun failing tests:
pip install pytest-rerunfailures
Mark a test to retry on failure:
import pytest

@pytest.mark.flaky(reruns=3)
def test_unstable():
    assert 1 == 2  # Will be retried up to 3 times
Hint
A nice thing about @pytest.mark.flaky is that it's also compatible with unittest.TestCase class-based test cases.
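pytest-rerunfailures also accepts a delay between attempts, which can help with timing-related flakiness; a sketch with reruns_delay (in seconds):
import pytest

@pytest.mark.flaky(reruns=3, reruns_delay=2)
def test_unstable_with_delay():
    ...  # Retried up to 3 times, waiting 2 seconds between attempts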
Skipping and Expected Failures
Skip a test unconditionally:
import pytest

@pytest.mark.skip(reason="This test is temporarily disabled")
def test_skip():
    assert False
Skip a test conditionally:
import sys

import pytest

@pytest.mark.skipif(sys.version_info < (3, 8), reason="Requires Python 3.8+")
def test_skip_if():
    assert True
Mark a test as expected to fail:
import pytest

@pytest.mark.xfail(reason="Known issue, will be fixed in future")
def test_expected_failure():
    assert False
Hint
@pytest.mark.xfail can be useful for tracking known bugs while ensuring they do not break test runs.
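With strict=True, an xfail-marked test that unexpectedly passes is reported as a failure, which keeps the marker honest once the underlying bug is fixed:
import pytest

@pytest.mark.xfail(reason="Known issue", strict=True)
def test_strict_expected_failure():
    # If this assertion starts passing, the run fails with XPASS(strict)
    assert False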
Using Fixtures Efficiently
Use module-scoped fixtures to avoid redundant setup:
import pytest

@pytest.fixture(scope="module")
def db_connection():
    conn = "database_connection"
    yield conn
    # Teardown can be placed after yield
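Tests request the fixture simply by naming it as an argument; with scope="module", the setup above runs once and is shared by every test in the module. A minimal sketch using the db_connection fixture defined above:
def test_query_users(db_connection):
    # Receives the value yielded by the fixture
    assert db_connection == "database_connection"

def test_query_orders(db_connection):
    # Reuses the same module-scoped connection; setup is not repeated
    assert db_connection == "database_connection"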
Warning
@pytest.fixture is not compatible with unittest.TestCase class-based test cases. Consider using plain Python functions and classes for fixtures and factories instead. As a best practice:
- Put all fixtures inside a factories.py module within the top-level tests directory.
- To scope fixtures, leverage the unittest lifecycle methods (setUp/tearDown, setUpClass/tearDownClass).
Testing Exceptions
Use pytest.raises to test for expected exceptions:
import pytest

def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        1 / 0
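pytest.raises can also assert on the exception message via the match argument (a regular expression searched against the string form of the exception), and the raised exception is available for inspection afterwards:
import pytest

def test_divide_by_zero_message():
    with pytest.raises(ZeroDivisionError, match="division by zero") as exc_info:
        1 / 0
    assert exc_info.type is ZeroDivisionError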
Parameterized Tests
Run a test with multiple inputs:
import pytest

@pytest.mark.parametrize("a, b, expected", [(1, 2, 3), (4, 5, 9), (10, 15, 25)])
def test_addition(a, b, expected):
    assert a + b == expected
Warning
@pytest.mark.parametrize is not compatible with unittest.TestCase class-based test cases. Given this limitation, consider using the parameterized package instead.
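parametrize decorators can also be stacked to run a test over the cartesian product of the argument sets:
import pytest

@pytest.mark.parametrize("x", [0, 1])
@pytest.mark.parametrize("y", [10, 20])
def test_combinations(x, y):
    # Runs 4 times: (0, 10), (0, 20), (1, 10), (1, 20)
    assert x + y >= 10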
Code Coverage
Measure test coverage using pytest-cov:
pip install pytest-cov
pytest --cov=my_package

# Generate a coverage report in HTML format in the coverage_reports/ directory
pytest --cov=my_package --cov-report html:coverage_reports/

# Generate a coverage report in XML format in the coverage.xml file
pytest --cov=my_package --cov-report xml:coverage.xml
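pytest-cov can also fail the run when coverage drops below a threshold, which is handy in CI:
pytest --cov=my_package --cov-fail-under=90  # Fail if total coverage is below 90%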
Generate Test Reports
Generate HTML reports:
pytest --html=report.html
Generate XML reports:
pytest --junitxml=junit.xml
Generate JSON reports:
pytest --json-report --json-report-file=report.json
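Note that --junitxml is built into pytest, while the HTML and JSON reports above rely on the pytest-html and pytest-json-report plugins respectively:
pip install pytest-html pytest-json-report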
Customizing Traceback Output
Control the verbosity of traceback output using the --tb flag:
pytest --tb=short  # Short traceback (concise output)
pytest --tb=long   # Full traceback (detailed output)
pytest --tb=line   # One line per failure
List all fixtures and markers
List all fixtures and markers:
pytest --fixtures
pytest --markers
Print test runtimes
Print test runtimes:
pytest --durations=10  # Print the 10 slowest tests
pytest --durations=0   # Print all test runtimes
Customizing Pytest with pytest.ini
Use pytest.ini to set default options:
[pytest]
addopts = --maxfail=2 -v --tb=short
testpaths = tests
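If the project uses pyproject.toml, the same defaults can live under the [tool.pytest.ini_options] table instead of a separate pytest.ini:
[tool.pytest.ini_options]
addopts = "--maxfail=2 -v --tb=short"
testpaths = ["tests"]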
By using these tips, you can improve your testing workflow and make the most of pytest.