Conftest

Fixtures and test utilities.

This module contains the pytest fixtures that are used by the tests.

How this works

Our goal here is to make sure that the way we create networks/datasets/algorithms during tests matches, as closely as possible, how they are created in a normal run, for example when running python project/main.py algorithm=example.

We achieve this like so: all the components of an experiment are created using fixtures. The first fixtures to be invoked are the ones that correspond to command-line arguments: datamodule_config, algorithm_config and algorithm_network_config, along with overrides. From these, command_line_arguments assembles the arguments that are passed to Hydra to produce the experiment_dictconfig, from which the experiment_config and then the actual datamodule, network and algorithm objects are created:

```mermaid
---
title: Fixture dependency graph
---
flowchart TD
datamodule_config[
    <a href="#project.conftest.datamodule_config">datamodule_config</a>
] -- 'datamodule=A' --> command_line_arguments
network_config[
    <a href="#project.conftest.network_config">network_config</a>
] -- 'network=B' --> command_line_arguments
algorithm_config[
    <a href="#project.conftest.algorithm_config">algorithm_config</a>
] -- 'algorithm=C' --> command_line_arguments
overrides[
    <a href="#project.conftest.overrides">overrides</a>
] -- 'seed=123' --> command_line_arguments
command_line_arguments[
    <a href="#project.conftest.command_line_arguments">command_line_arguments</a>
] -- load configs for 'datamodule=A network=B algorithm=C seed=123' --> experiment_dictconfig
experiment_dictconfig[
    <a href="#project.conftest.experiment_dictconfig">experiment_dictconfig</a>
] -- instantiate objects from configs --> experiment_config
experiment_config[
    <a href="#project.conftest.experiment_config">experiment_config</a>
] --> datamodule & network & algorithm
datamodule[
    <a href="#project.conftest.datamodule">datamodule</a>
] --> algorithm
network[
    <a href="#project.conftest.network">network</a>
] --> algorithm
algorithm[
    <a href="#project.conftest.algorithm">algorithm</a>
] -- is used by --> some_test
algorithm & network & datamodule -- is used by --> some_other_test
```
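In a test, you simply request the fixtures you need as arguments, and pytest resolves the whole chain above before the test body runs. A hypothetical example (the test name and body are illustrative, not part of this module):

```python
def test_training_step_runs(algorithm, datamodule):
    # pytest builds the whole chain above (config fixtures ->
    # experiment_dictconfig -> experiment_config -> objects) first.
    datamodule.prepare_data()
    datamodule.setup("fit")
    batch = next(iter(datamodule.train_dataloader()))
    # `training_step` is the standard LightningModule hook; the exact
    # output (loss tensor, dict, ...) depends on the algorithm under test.
    output = algorithm.training_step(batch, batch_idx=0)
    assert output is not None
```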

algorithm_config

algorithm_config(request: FixtureRequest) -> str | None

The algorithm config to use in the experiment, as if algorithm=<value> was passed.

When using the included tests, this fixture is parametrized with all the configurations for a given algorithm type, as is done for example in project.algorithms.example_test.
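Outside of those included tests, a single test can also pin this fixture to one value using pytest's indirect parametrization. A minimal sketch (the config name "example" is an assumption):

```python
import pytest

# Indirect parametrization routes the value to the `algorithm_config`
# fixture via `request.param` instead of passing it to the test directly.
@pytest.mark.parametrize("algorithm_config", ["example"], indirect=True)
def test_with_example_algorithm(algorithm_config: str | None):
    assert algorithm_config == "example"
```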

datamodule_config

datamodule_config(request: FixtureRequest) -> str | None

The datamodule config to use in the experiment, as if datamodule=<value> was passed.

algorithm_network_config

algorithm_network_config(
    request: FixtureRequest,
) -> str | None

The network config to use in the experiment, as in algorithm/network=<value>.

command_line_arguments

command_line_arguments(
    devices: str,
    accelerator: str,
    algorithm_config: str | None,
    datamodule_config: str | None,
    algorithm_network_config: str | None,
    overrides: tuple[str, ...],
)

Fixture that returns the command-line arguments that will be passed to Hydra to run the experiment.

The algorithm_config, algorithm_network_config and datamodule_config values here are parametrized indirectly by most tests using the project.utils.testutils.run_for_all_configs_of_type function, so that the respective components are created in the same way as Hydra would create them in a regular run.

experiment_dictconfig

experiment_dictconfig(
    command_line_arguments: list[str],
    tmp_path_factory: TempPathFactory,
) -> DictConfig

The omegaconf.DictConfig that is created by Hydra from the command-line arguments.

Any interpolations in the configs will not have been resolved at this point.
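For illustration, here is how unresolved interpolations behave in OmegaConf (a standalone sketch, not tied to this project's actual configs):

```python
from omegaconf import OmegaConf

cfg = OmegaConf.create({"seed": 123, "trainer": {"seed": "${seed}"}})
# The interpolation is stored verbatim until it is accessed or resolved:
assert "${seed}" in OmegaConf.to_yaml(cfg)
assert cfg.trainer.seed == 123  # resolved lazily on attribute access
OmegaConf.resolve(cfg)  # resolve every interpolation in place
assert "${seed}" not in OmegaConf.to_yaml(cfg)
```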

experiment_config

experiment_config(
    experiment_dictconfig: DictConfig,
) -> Config

The experiment configuration, with all interpolations resolved.

datamodule

datamodule(experiment_dictconfig: DictConfig) -> DataModule

Fixture that creates the datamodule for the given config.

algorithm

algorithm(
    experiment_config: Config,
    datamodule: DataModule,
    device: device,
    seed: int,
)

Fixture that creates the "algorithm" (a LightningModule).

seed

seed(
    request: FixtureRequest, make_torch_deterministic: None
)

Fixture that seeds everything for reproducibility and yields the random seed used.
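A minimal sketch of what such a fixture could look like (the actual implementation may differ; the default seed value and the use of Lightning's seed_everything are assumptions):

```python
import pytest
from lightning.pytorch import seed_everything

@pytest.fixture
def seed(request: pytest.FixtureRequest, make_torch_deterministic: None):
    # Use the parametrized seed if the test provides one, else a default.
    random_seed = getattr(request, "param", 42)
    seed_everything(random_seed, workers=True)
    yield random_seed
```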

accelerator

accelerator(request: FixtureRequest)

Returns the accelerator to use during unit tests.

By default, returns "cuda" if CUDA is available. If the tests are run with -vvv, the tests are also run on the CPU.
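A hedged sketch of the selection logic (the -vvv handling happens through parametrization and is omitted here):

```python
import pytest
import torch

@pytest.fixture
def accelerator(request: pytest.FixtureRequest) -> str:
    # When parametrized (e.g. to also cover "cpu" under -vvv), the value
    # arrives through `request.param`; otherwise prefer CUDA if available.
    if hasattr(request, "param"):
        return request.param
    return "cuda" if torch.cuda.is_available() else "cpu"
```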

devices

devices(
    accelerator: str, request: FixtureRequest
) -> list[int] | int | Literal["auto"]

Fixture that creates the 'devices' argument for the Trainer config.

overrides

overrides(request: FixtureRequest)

Fixture that makes it possible to specify command-line overrides to use in a given test.

Tests that require running an experiment should use the experiment_config fixture.

Multiple tests using the same overrides will share the same experiment.
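One way to supply overrides to a specific test is pytest's indirect parametrization; the use_overrides helper documented below is presumably a convenience wrapper around the same idea. A hypothetical example:

```python
import pytest

@pytest.mark.parametrize(
    "overrides",
    [("algorithm=example", "seed=123")],  # illustrative override strings
    indirect=True,
)
def test_with_custom_overrides(experiment_config):
    # `experiment_config` is now built as if we had run
    # `python project/main.py algorithm=example seed=123`.
    assert experiment_config is not None
```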

use_overrides

use_overrides(
    command_line_overrides: Param | list[Param], ids=None
)

Marks a test so that it can use components created using the given command-line arguments.

For example:

```python
@use_overrides("algorithm=my_algo network=fcnet")
def test_my_algo(algorithm: MyAlgorithm):
    # The algorithm will be set up the same as if we had run
    # `python main.py algorithm=my_algo network=fcnet`.
    ...
```

make_torch_deterministic

make_torch_deterministic()

Set torch to deterministic mode for unit tests that use the tensor_regression fixture.
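A minimal sketch of such a fixture, assuming PyTorch's deterministic-algorithms switch is what it toggles (the real fixture may do more):

```python
import pytest
import torch

@pytest.fixture
def make_torch_deterministic():
    # Enable deterministic kernels for the duration of the test,
    # then restore the previous setting.
    previous = torch.are_deterministic_algorithms_enabled()
    torch.use_deterministic_algorithms(True)
    yield
    torch.use_deterministic_algorithms(previous)
```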

pytest_runtest_makereport

pytest_runtest_makereport(item, call)

Used to set up the pytest.mark.incremental mark, as described in the pytest documentation on incremental testing.

pytest_runtest_setup

pytest_runtest_setup(item)

Used to set up the pytest.mark.incremental mark, as described in the pytest documentation on incremental testing.
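For reference, the canonical usage from the pytest documentation: once one step in an incremental test class fails, the remaining steps are reported as expected failures instead of running on their own.

```python
import pytest

@pytest.mark.incremental
class TestUserHandling:
    def test_login(self):
        ...

    def test_modification(self):
        # If test_login failed, the two hooks above make this step
        # xfail instead of running (and failing) independently.
        ...
```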

pytest_generate_tests

pytest_generate_tests(metafunc: Metafunc) -> None

Allows one to define custom parametrization schemes or extensions.

This is used to implement the parametrize_when_used mark, which allows one to parametrize an argument when it is used.

See https://docs.pytest.org/en/7.1.x/how-to/parametrize.html#how-to-parametrize-fixtures-and-test-functions
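For illustration, the general shape of such a hook (a sketch of the mechanism only, not this project's exact parametrize_when_used implementation; the fixture name and values are assumptions):

```python
import pytest

def pytest_generate_tests(metafunc: pytest.Metafunc) -> None:
    # Parametrize the `accelerator` fixture only for tests that actually
    # request it; tests that don't use it are left untouched.
    if "accelerator" in metafunc.fixturenames:
        metafunc.parametrize("accelerator", ["cpu", "cuda"], indirect=True)
```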