This is a list of pytest.* API functions and fixtures.
For information on plugin hooks and objects, see Writing plugins.
For information on the pytest.mark mechanism, see Marking test functions with attributes.
For the below objects, you can also interactively ask for help, e.g. by typing on the Python interactive prompt something like:
import pytest
help(pytest)
pytest.main(args=None, plugins=None)
Return the exit code after performing an in-process test run.
Parameters:
* args – list of command line arguments.
* plugins – list of plugin objects to be auto-registered during initialization.
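For illustration, a minimal sketch of an in-process run (the test file name is made up):
import pytest

# run a single test file quietly; main() returns the exit code
# (0 means all collected tests passed)
exit_code = pytest.main(['-q', 'test_sample.py'])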
More examples at Calling pytest from Python code
raises(expected_exception, *args, **kwargs)
Assert that a code block/function call raises expected_exception and raise a failure exception otherwise.
This helper produces an ExceptionInfo object (see below).
If using Python 2.5 or above, you may use this function as a context manager:
>>> with raises(ZeroDivisionError):
... 1/0
Changed in version 2.10: In the context manager form you may use the keyword argument message to specify a custom failure message:
>>> with raises(ZeroDivisionError, message="Expecting ZeroDivisionError"):
... pass
Traceback (most recent call last):
...
Failed: Expecting ZeroDivisionError
Note
When using pytest.raises as a context manager, it’s worthwhile to note that normal context manager rules apply and that the exception raised must be the final line in the scope of the context manager. Lines of code after that, within the scope of the context manager, will not be executed. For example:
>>> value = 15
>>> with raises(ValueError) as exc_info:
... if value > 10:
... raise ValueError("value must be <= 10")
... assert exc_info.type == ValueError # this will not execute
Instead, the following approach must be taken (note the difference in scope):
>>> with raises(ValueError) as exc_info:
... if value > 10:
... raise ValueError("value must be <= 10")
...
>>> assert exc_info.type == ValueError
Since version 3.1 you can use the keyword argument match to assert that the exception matches a text or regex:
>>> with raises(ValueError, match='must be 0 or None'):
... raise ValueError("value must be 0 or None")
>>> with raises(ValueError, match=r'must be \d+$'):
... raise ValueError("value must be 42")
Legacy forms
The forms below are fully supported but are discouraged for new code because the context manager form is regarded as more readable and less error-prone.
It is possible to specify a callable by passing a to-be-called lambda:
>>> raises(ZeroDivisionError, lambda: 1/0)
<ExceptionInfo ...>
or you can specify an arbitrary callable with arguments:
>>> def f(x): return 1/x
...
>>> raises(ZeroDivisionError, f, 0)
<ExceptionInfo ...>
>>> raises(ZeroDivisionError, f, x=0)
<ExceptionInfo ...>
It is also possible to pass a string to be evaluated at runtime:
>>> raises(ZeroDivisionError, "f(0)")
<ExceptionInfo ...>
The string will be evaluated using the same locals() and globals() at the moment of the raises call.
ExceptionInfo
Wraps sys.exc_info() objects and offers help for navigating the traceback.
type
    the exception class
value
    the exception instance
tb
    the exception raw traceback
typename
    the exception type name
traceback
    the exception traceback (_pytest._code.Traceback instance)
exconly(tryshort=False)
    Return the exception as a string. When ‘tryshort’ resolves to True, and the exception is a _pytest._code._AssertionError, only the actual exception part of the exception representation is returned (so ‘AssertionError: ‘ is removed from the beginning).
errisinstance(exc)
    Return True if the exception is an instance of exc.
getrepr(showlocals=False, style='long', abspath=False, tbfilter=True, funcargs=False)
    Return a str()able representation of this exception info. showlocals: show locals per traceback entry. style: long|short|no|native traceback style. tbfilter: hide entries where __tracebackhide__ is true. In case of style==native, tbfilter and showlocals are ignored.
match(regexp)
    Match the regular expression ‘regexp’ on the string representation of the exception. If it matches, True is returned (so that it is possible to write ‘assert excinfo.match(...)’). If it doesn’t match, an AssertionError is raised.
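For illustration, a short sketch exercising some of these attributes and methods:
>>> with raises(ZeroDivisionError) as excinfo:
...     1/0
>>> assert excinfo.type is ZeroDivisionError
>>> assert excinfo.errisinstance(ZeroDivisionError)
>>> assert excinfo.match('division')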
Note
Similar to caught exception objects in Python, explicitly clearing local references to returned ExceptionInfo objects can help the Python interpreter speed up its garbage collection.
Clearing those references breaks a reference cycle (ExceptionInfo –> caught exception –> frame stack raising the exception –> current frame stack –> local variables –> ExceptionInfo) which makes Python keep all objects referenced from that cycle (including all local variables in the current frame) alive until the next cyclic garbage collection run. See the official Python try statement documentation for more detailed information.
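For illustration, a sketch of breaking that cycle by dropping the local reference once it is no longer needed:
>>> with raises(ValueError) as exc_info:
...     raise ValueError("boom")
>>> assert exc_info.errisinstance(ValueError)
>>> del exc_info  # drop the reference to help garbage collection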
Examples at Assertions about expected exceptions.
deprecated_call(func=None, *args, **kwargs)
Context manager that can be used to ensure a block of code triggers a DeprecationWarning or PendingDeprecationWarning:
>>> import warnings
>>> def api_call_v2():
... warnings.warn('use v3 of this api', DeprecationWarning)
... return 200
>>> with deprecated_call():
... assert api_call_v2() == 200
deprecated_call can also be used by passing a function and *args and **kwargs, in which case it will ensure calling func(*args, **kwargs) produces one of the warning types above.
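For illustration, the functional form applied to the api_call_v2 function defined above (the leading underscore merely discards the return value):
>>> _ = deprecated_call(api_call_v2)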
approx(expected, rel=None, abs=None, nan_ok=False)
Assert that two numbers (or two sets of numbers) are equal to each other within some tolerance.
Due to the intricacies of floating-point arithmetic, numbers that we would intuitively expect to be equal are not always so:
>>> 0.1 + 0.2 == 0.3
False
This problem is commonly encountered when writing tests, e.g. when making sure that floating-point values are what you expect them to be. One way to deal with this problem is to assert that two floating-point numbers are equal to within some appropriate tolerance:
>>> abs((0.1 + 0.2) - 0.3) < 1e-6
True
However, comparisons like this are tedious to write and difficult to understand. Furthermore, absolute comparisons like the one above are usually discouraged because there’s no tolerance that works well for all situations. 1e-6 is good for numbers around 1, but too small for very big numbers and too big for very small ones. It’s better to express the tolerance as a fraction of the expected value, but relative comparisons like that are even more difficult to write correctly and concisely.
The approx class performs floating-point comparisons using a syntax that’s as intuitive as possible:
>>> from pytest import approx
>>> 0.1 + 0.2 == approx(0.3)
True
The same syntax also works for sequences of numbers:
>>> (0.1 + 0.2, 0.2 + 0.4) == approx((0.3, 0.6))
True
Dictionary values:
>>> {'a': 0.1 + 0.2, 'b': 0.2 + 0.4} == approx({'a': 0.3, 'b': 0.6})
True
And numpy arrays:
>>> import numpy as np
>>> np.array([0.1, 0.2]) + np.array([0.2, 0.4]) == approx(np.array([0.3, 0.6]))
True
By default, approx considers numbers within a relative tolerance of 1e-6 (i.e. one part in a million) of its expected value to be equal. This treatment would lead to surprising results if the expected value was 0.0, because nothing but 0.0 itself is relatively close to 0.0. To handle this case less surprisingly, approx also considers numbers within an absolute tolerance of 1e-12 of its expected value to be equal.
Infinity and NaN are special cases. Infinity is only considered equal to itself, regardless of the relative tolerance. NaN is not considered equal to anything by default, but you can make it be equal to itself by setting the nan_ok argument to True. (This is meant to facilitate comparing arrays that use NaN to mean “no data”.)
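For illustration, a quick sketch of those special cases:
>>> inf, nan = float('inf'), float('nan')
>>> inf == approx(inf)
True
>>> nan == approx(nan)
False
>>> nan == approx(nan, nan_ok=True)
True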
Both the relative and absolute tolerances can be changed by passing arguments to the approx constructor:
>>> 1.0001 == approx(1)
False
>>> 1.0001 == approx(1, rel=1e-3)
True
>>> 1.0001 == approx(1, abs=1e-3)
True
If you specify abs but not rel, the comparison will not consider the relative tolerance at all. In other words, two numbers that are within the default relative tolerance of 1e-6 will still be considered unequal if they exceed the specified absolute tolerance. If you specify both abs and rel, the numbers will be considered equal if either tolerance is met:
>>> 1 + 1e-8 == approx(1)
True
>>> 1 + 1e-8 == approx(1, abs=1e-12)
False
>>> 1 + 1e-8 == approx(1, rel=1e-6, abs=1e-12)
True
If you’re thinking about using approx, then you might want to know how it compares to other good ways of comparing floating-point numbers. All of these algorithms are based on relative and absolute tolerances and should agree for the most part, but they do have meaningful differences:
* math.isclose(a, b, rel_tol=1e-9, abs_tol=0.0): True if the values are within the relative tolerance of either value or within the absolute tolerance. The comparison is symmetric, and the default relative tolerance is stricter than the approx default.
* numpy.isclose(a, b, rtol=1e-5, atol=1e-8): True if abs(a - b) <= atol + rtol * abs(b). The relative tolerance is only calculated with respect to b, so the comparison is asymmetric, and the absolute tolerance is always added in.
* unittest.TestCase.assertAlmostEqual(a, b): True if round(a - b, 7) == 0. The comparison uses only an absolute tolerance, which makes it a poor fit for numbers far from 1.
* a == pytest.approx(b, rel=1e-6, abs=1e-12): True if the relative tolerance is met with respect to the expected value b, or if the absolute tolerance is met. The relative tolerance is only calculated with respect to the expected value, so the comparison is asymmetric but predictable.
Warning
Changed in version 3.2.
In order to avoid inconsistent behavior, TypeError is raised for >, >=, < and <= comparisons. The example below illustrates the problem:
assert approx(0.1) > 0.1 + 1e-10 # calls approx(0.1).__gt__(0.1 + 1e-10)
assert 0.1 + 1e-10 > approx(0.1) # calls approx(0.1).__lt__(0.1 + 1e-10)
In the second example one expects approx(0.1).__le__(0.1 + 1e-10) to be called. But instead, approx(0.1).__lt__(0.1 + 1e-10) is used for the comparison. This is because the call hierarchy of rich comparisons follows a fixed behavior; see the Python documentation on rich comparison methods for more information.
You can use the following functions in your test, fixture or setup functions to force a certain test outcome. Note that it is usually better to use declarative marks instead; see Skip and xfail: dealing with tests that cannot succeed.
fail(msg="", pytrace=True)
Explicitly fail the currently-executing test with the given message.
Parameters:
* pytrace – if False, the msg represents the full failure information and no Python traceback will be reported.
skip(msg="")
Skip an executing test with the given message. Note: it’s usually better to use the pytest.mark.skipif marker to declare a test to be skipped under certain conditions, like mismatching platforms or dependencies. See the pytest_skipping plugin for details.
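For illustration, a minimal sketch of the imperative forms (the conditions are purely illustrative):
import sys
import pytest

def test_outcomes():
    # skip the rest of this test on a mismatching platform
    if sys.platform == 'win32':
        pytest.skip('not supported on this platform')
    # fail without a Python traceback, showing only the message
    if 2 + 2 != 4:
        pytest.fail('arithmetic is broken', pytrace=False)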
To mark a fixture function:
fixture(scope='function', params=None, autouse=False, ids=None, name=None)
Return a decorator to mark a fixture factory function.
This decorator can be used (with or without parameters) to define a fixture function. The name of the fixture function can later be referenced to cause its invocation ahead of running tests: test modules or classes can use the pytest.mark.usefixtures(fixturename) marker. Test functions can directly use fixture names as input arguments in which case the fixture instance returned from the fixture function will be injected.
Parameters:
* scope – the scope for which this fixture is shared; one of "function" (the default), "class", "module" or "session".
* params – an optional list of parameters which will cause multiple invocations of the fixture function and all of the tests using it.
* autouse – if True, the fixture func is activated for all tests that can see it; if False (the default), an explicit reference is needed to activate the fixture.
* ids – list of string ids each corresponding to the params, so that they are part of the test id; if no ids are provided they will be generated automatically from the params.
* name – the name of the fixture; this defaults to the name of the decorated function.
Fixtures can optionally provide their values to test functions using a yield statement, instead of return. In this case, the code block after the yield statement is executed as teardown code regardless of the test outcome. A fixture function must yield exactly once.
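For illustration, a minimal yield-fixture sketch; everything after the yield runs as teardown:
import os
import tempfile
import pytest

@pytest.fixture
def tmp_file():
    # setup: create a temporary file and hand its path to the test
    fd, path = tempfile.mkstemp()
    os.close(fd)
    yield path
    # teardown: runs even if the test failed
    os.remove(path)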
Tutorial at pytest fixtures: explicit, modular, scalable.
The request object that can be used from fixture functions.
A request for a fixture from a test or fixture function.
A request object gives access to the requesting test context and has an optional param attribute in case the fixture is parametrized indirectly.
fixturename
    fixture for which this request is being performed
scope
    Scope string, one of "function", "class", "module", "session"
addfinalizer(finalizer)
    Add a finalizer/teardown function to be called after the last test within the requesting test context has finished execution.
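For illustration, a hedged sketch registering teardown via addfinalizer (the file name is illustrative):
import pytest

@pytest.fixture
def log_file(request, tmpdir):
    handle = tmpdir.join('log.txt').open('w')
    # close the file after the last test in this request's context
    request.addfinalizer(handle.close)
    return handle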
applymarker(marker)
Apply a marker to a single test function invocation. This method is useful if you don’t want to have a keyword/marker on all function invocations.
Parameters:
* marker – a _pytest.mark.MarkDecorator object created by a call to pytest.mark.NAME(...).
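For illustration, a sketch that marks the requesting test as xfail at setup time (the reason string is made up):
import pytest

@pytest.fixture
def flaky_resource(request):
    request.applymarker(pytest.mark.xfail(reason='known issue with this resource'))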
cached_setup(setup, teardown=None, scope="module", extrakey=None)
(deprecated) Return a testing resource managed by setup and teardown calls. scope and extrakey determine when the teardown function will be called so that subsequent calls to setup would recreate the resource. With pytest-2.3 you often do not need cached_setup() as you can directly declare a scope on a fixture function and register a finalizer through request.addfinalizer().
getfixturevalue(argname)
Dynamically run a named fixture function.
Declaring fixtures via function argument is recommended where possible. But if you can only decide whether to use another fixture at test setup time, you may use this function to retrieve it inside a fixture or test function body.
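For illustration, a sketch that looks another fixture up by name at setup time (tmpdir is used only as an example of a named fixture):
import pytest

@pytest.fixture
def storage_dir(request):
    # resolve the fixture dynamically instead of declaring it as an argument
    return request.getfixturevalue('tmpdir')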
You can ask for available builtin or project-custom fixtures by typing:
$ pytest -q --fixtures
cache
Return a cache object that can persist state between testing sessions.
cache.get(key, default)
cache.set(key, value)
Keys must be a ``/`` separated value, where the first part is usually the
name of your plugin or application to avoid clashes with other cache users.
Values can be any object handled by the json stdlib module.
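For illustration, a hedged sketch (the 'example/answer' key is made up; its first path segment namespaces it):
def test_cached_answer(cache):
    value = cache.get('example/answer', None)
    if value is None:
        value = 42  # stand-in for an expensive computation
        cache.set('example/answer', value)
    assert value == 42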
capsys
Enable capturing of writes to sys.stdout/sys.stderr and make
captured output available via ``capsys.readouterr()`` method calls
which return a ``(out, err)`` tuple.
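For illustration, a minimal capsys sketch:
def test_print_capture(capsys):
    print('hello')
    out, err = capsys.readouterr()
    assert out == 'hello\n'
    assert err == ''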
capfd
Enable capturing of writes to file descriptors 1 and 2 and make
captured output available via ``capfd.readouterr()`` method calls
which return a ``(out, err)`` tuple.
doctest_namespace
Inject names into the doctest namespace.
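For illustration, a conftest.py sketch that makes numpy available to doctests under the np name:
# conftest.py
import numpy
import pytest

@pytest.fixture(autouse=True)
def add_np(doctest_namespace):
    doctest_namespace['np'] = numpy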
pytestconfig
the pytest config object with access to command line opts.
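For illustration, a sketch reading a built-in command line option:
def test_verbosity(pytestconfig):
    # getoption looks a command line option up by name
    if pytestconfig.getoption('verbose') > 0:
        print('running in verbose mode')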
record_xml_property
Add extra xml properties to the tag for the calling test.
The fixture is callable with ``(name, value)``, with value being automatically
xml-encoded.
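For illustration, a minimal sketch (the property name is made up):
def test_function(record_xml_property):
    record_xml_property('example_key', 1)
    assert 1 == 1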
monkeypatch
The returned ``monkeypatch`` fixture provides these
helper methods to modify objects, dictionaries or os.environ::
monkeypatch.setattr(obj, name, value, raising=True)
monkeypatch.delattr(obj, name, raising=True)
monkeypatch.setitem(mapping, name, value)
monkeypatch.delitem(obj, name, raising=True)
monkeypatch.setenv(name, value, prepend=False)
monkeypatch.delenv(name, raising=True)
monkeypatch.syspath_prepend(path)
monkeypatch.chdir(path)
All modifications will be undone after the requesting
test function or fixture has finished. The ``raising``
parameter determines if a KeyError or AttributeError
will be raised if the set/deletion operation has no target.
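For illustration, a sketch patching the environment (the variable name is made up; the change is undone after the test):
import os

def test_env_is_patched(monkeypatch):
    monkeypatch.setenv('APP_MODE', 'testing')
    assert os.environ['APP_MODE'] == 'testing'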
recwarn
Return a WarningsRecorder instance that provides these methods:
* ``pop(category=None)``: return last warning matching the category.
* ``clear()``: clear list of warnings
See http://docs.python.org/library/warnings.html for information
on warning categories.
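For illustration, a minimal recwarn sketch:
import warnings

def test_deprecation_recorded(recwarn):
    warnings.warn('use the new api', DeprecationWarning)
    w = recwarn.pop(DeprecationWarning)
    assert issubclass(w.category, DeprecationWarning)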
tmpdir_factory
Return a TempdirFactory instance for the test session.
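For illustration, a sketch of a session-scoped directory (the 'data' name is made up):
import pytest

@pytest.fixture(scope='session')
def data_dir(tmpdir_factory):
    return tmpdir_factory.mktemp('data')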
tmpdir
Return a temporary directory path object
which is unique to each test function invocation,
created as a sub directory of the base temporary
directory. The returned object is a `py.path.local`_
path object.
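For illustration, a minimal tmpdir sketch:
def test_create_file(tmpdir):
    p = tmpdir.join('hello.txt')
    p.write('content')
    assert p.read() == 'content'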