Conftest
Fixtures and test utilities.
This module contains the pytest fixtures that are used by the tests.
How this works#
Our goal here is to make sure that the way we create networks/datasets/algorithms during tests matches as closely as possible how they are created in a real run, for example when running python project/main.py algorithm=example.
We achieve this like so: all the components of an experiment are created using fixtures, and the first fixtures to be invoked are the ones that correspond to the command-line arguments. Concretely, the datamodule_config, network_config and algorithm_config fixtures are created first, along with the command-line overrides. From these, the experiment_dictconfig is created, as shown in the dependency graph below.
```mermaid
---
title: Fixture dependency graph
---
flowchart TD
    datamodule_config[<a href="#project.conftest.datamodule_config">datamodule_config</a>] -- 'datamodule=A' --> command_line_arguments
    algorithm_config[<a href="#project.conftest.algorithm_config">algorithm_config</a>] -- 'algorithm=B' --> command_line_arguments
    command_line_overrides[<a href="#project.conftest.command_line_overrides">command_line_overrides</a>] -- 'seed=123' --> command_line_arguments
    command_line_arguments[<a href="#project.conftest.command_line_arguments">command_line_arguments</a>] -- load configs for 'datamodule=A algorithm=B seed=123' --> experiment_dictconfig
    experiment_dictconfig[<a href="#project.conftest.experiment_dictconfig">experiment_dictconfig</a>] -- instantiate objects from configs --> experiment_config
    experiment_config[<a href="#project.conftest.experiment_config">experiment_config</a>] --> datamodule & algorithm
    datamodule[<a href="#project.conftest.datamodule">datamodule</a>] --> algorithm
    algorithm[<a href="#project.conftest.algorithm">algorithm</a>] -- is used by --> some_test
    algorithm & datamodule -- is used by --> some_other_test
```
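For illustration, here is what a test using these fixtures could look like. This is only a sketch: the config names "A" and "B" and the test body are placeholders, and most real tests go through project.utils.testutils.run_for_all_configs_of_type rather than parametrizing the config fixtures by hand.

```python
import pytest


@pytest.mark.parametrize("algorithm_config", ["B"], indirect=True)
@pytest.mark.parametrize("datamodule_config", ["A"], indirect=True)
@pytest.mark.parametrize("command_line_overrides", [("seed=123",)], indirect=True)
def test_something(algorithm, datamodule):
    # `algorithm` and `datamodule` are created the same way they would be
    # for `python project/main.py algorithm=B datamodule=A seed=123`.
    assert algorithm is not None
    assert datamodule is not None
```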
original_datadir #
original_datadir(original_datadir: Path)
Overwrite the original_datadir fixture value to change where regression files are created.
By default, they are placed in a folder next to the test's source file. Here, we instead put them under $SCRATCH if it is set, or in a .regression_files folder at the root of the repo otherwise.
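A minimal sketch of what such an override could look like, assuming the pytest-regressions original_datadir fixture is the one being overridden; the REPO_ROOTDIR constant is hypothetical, and the folder name .regression_files is taken from the description above.

```python
import os
from pathlib import Path

import pytest

# Hypothetical constant pointing at the root of the repository.
REPO_ROOTDIR = Path(__file__).parent


@pytest.fixture
def original_datadir(original_datadir: Path) -> Path:
    # This simplified sketch ignores the upstream value and just picks a new root.
    scratch = os.environ.get("SCRATCH")
    if scratch:
        return Path(scratch) / ".regression_files"
    return REPO_ROOTDIR / ".regression_files"
```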
algorithm_config #
algorithm_config(request: FixtureRequest) -> str | None
The algorithm config to use in the experiment, as if algorithm=<value> was passed.
When using the included tests, this is parametrized with all the configurations for a given algorithm type, as is done, for example, in project.algorithms.example_test.
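Such a fixture is typically just a thin wrapper around request.param so that it can be parametrized indirectly. A rough sketch, not the actual implementation:

```python
import pytest


@pytest.fixture
def algorithm_config(request: pytest.FixtureRequest) -> str | None:
    # When a test parametrizes this fixture indirectly, e.g. with
    # @pytest.mark.parametrize("algorithm_config", ["example"], indirect=True),
    # return that value; otherwise no `algorithm=...` override is added.
    return getattr(request, "param", None)
```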
datamodule_config #
datamodule_config(request: FixtureRequest) -> str | None
The datamodule config to use in the experiment, as if datamodule=<value> was passed.
algorithm_network_config #
algorithm_network_config(
request: FixtureRequest,
) -> str | None
The network config to use in the experiment, as in algorithm/network=<value>.
command_line_arguments #
command_line_arguments(
algorithm_config: str | None,
datamodule_config: str | None,
algorithm_network_config: str | None,
command_line_overrides: tuple[str, ...],
request: FixtureRequest,
)
Fixture that returns the command-line arguments that will be passed to Hydra to run the experiment.
The algorithm_config, network_config and datamodule_config values here are parametrized indirectly by most tests using the project.utils.testutils.run_for_all_configs_of_type function, so that the respective components are created in the same way as they would be by Hydra in a regular run.
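A simplified sketch of how these pieces could be turned into Hydra overrides; the real fixture may handle additional options.

```python
import pytest


@pytest.fixture
def command_line_arguments(
    algorithm_config: str | None,
    datamodule_config: str | None,
    algorithm_network_config: str | None,
    command_line_overrides: tuple[str, ...],
) -> tuple[str, ...]:
    # Turn each selected config into the corresponding Hydra override,
    # then append any extra per-test overrides (e.g. "seed=123").
    overrides: list[str] = []
    if algorithm_config:
        overrides.append(f"algorithm={algorithm_config}")
    if algorithm_network_config:
        overrides.append(f"algorithm/network={algorithm_network_config}")
    if datamodule_config:
        overrides.append(f"datamodule={datamodule_config}")
    return tuple(overrides) + command_line_overrides
```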
experiment_dictconfig #
experiment_dictconfig(
command_line_arguments: tuple[str, ...],
tmp_path_factory: TempPathFactory,
) -> DictConfig
The omegaconf.DictConfig that is created by Hydra from the command-line arguments.
Any interpolations in the configs will not have been resolved at this point.
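Under the hood, this kind of fixture can use Hydra's compose API. A sketch, where the config module name ("project.configs") and the main config name ("config") are assumptions made for illustration:

```python
from hydra import compose, initialize_config_module
from omegaconf import DictConfig


def load_experiment_dictconfig(command_line_arguments: tuple[str, ...]) -> DictConfig:
    with initialize_config_module("project.configs", version_base=None):
        # Interpolations are *not* resolved here; that happens when
        # building `experiment_config`.
        return compose(config_name="config", overrides=list(command_line_arguments))
```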
experiment_config #
experiment_config(
experiment_dictconfig: DictConfig,
) -> Config
The experiment configuration, with all interpolations resolved.
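A sketch of the step this fixture performs: resolving the interpolations in the DictConfig and converting it into the structured Config object. The exact mechanism is assumed here for illustration.

```python
from omegaconf import DictConfig, OmegaConf


def resolve_experiment_config(experiment_dictconfig: DictConfig):
    OmegaConf.resolve(experiment_dictconfig)  # resolve ${...} interpolations in place
    # With structured configs, this returns an instance of the `Config` class.
    return OmegaConf.to_object(experiment_dictconfig)
```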
datamodule #
datamodule(
experiment_dictconfig: DictConfig,
) -> DataModule | None
Fixture that creates the datamodule for the given config.
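Conceptually, this boils down to instantiating the datamodule node of the config with Hydra. A sketch, assuming the config has a datamodule entry with a _target_ and that None is returned when the experiment does not use one:

```python
from hydra.utils import instantiate
from omegaconf import DictConfig


def make_datamodule(experiment_dictconfig: DictConfig):
    if "datamodule" not in experiment_dictconfig or experiment_dictconfig["datamodule"] is None:
        return None  # some experiments don't need a datamodule
    return instantiate(experiment_dictconfig["datamodule"])
```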
algorithm #
algorithm(
experiment_config: Config,
datamodule: DataModule | None,
device: device,
seed: int,
)
Fixture that creates the "algorithm" (a LightningModule).
seed #
seed(
request: FixtureRequest, make_torch_deterministic: None
)
Fixture that seeds everything for reproducibility and yields the random seed used.
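A plausible sketch of such a fixture: take the seed from the test's indirect parametrization if there is one, otherwise use a fixed default (the default value of 42 is an assumption), and seed everything with Lightning.

```python
import pytest
from lightning.pytorch import seed_everything


@pytest.fixture
def seed(request: pytest.FixtureRequest, make_torch_deterministic: None):
    # Use the parametrized value when a test sets one, otherwise a fixed default.
    random_seed = getattr(request, "param", 42)
    seed_everything(random_seed, workers=True)
    yield random_seed
```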
accelerator #
accelerator(request: FixtureRequest)
Returns the accelerator to use during unit tests.
By default, this returns "cuda" when CUDA is available. If the tests are run with -vvv, they are also run on the CPU.
devices #
devices(
accelerator: str, request: FixtureRequest
) -> Generator[
list[int] | int | Literal["auto"], None, None
]
Fixture that creates the 'devices' argument for the Trainer config.
Splits up the GPUs between pytest-xdist workers when using distributed testing. This isn't currently used in the CI.
TODO: Design dilemma here: should we be parametrizing the devices command-line override and forcing experiments to run with this value during tests, or should we be changing things based on this value in the config?
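One way to split GPUs between pytest-xdist workers is to read the worker id from the PYTEST_XDIST_WORKER environment variable. A rough sketch of that idea, not necessarily what the fixture does exactly:

```python
import os

import torch


def devices_for_current_worker() -> list[int] | str:
    if not torch.cuda.is_available():
        return "auto"
    worker = os.environ.get("PYTEST_XDIST_WORKER")  # e.g. "gw0", "gw1", ...
    if worker is None:
        return "auto"  # not running under pytest-xdist
    worker_index = int(worker.removeprefix("gw"))
    # Assign one GPU per worker, wrapping around if there are more workers than GPUs.
    return [worker_index % torch.cuda.device_count()]
```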
command_line_overrides #
command_line_overrides(
request: FixtureRequest,
) -> tuple[str, ...]
Fixture that makes it possible to specify command-line overrides to use in a given test.
Tests that require running an experiment should use the experiment_config fixture.
Multiple tests using the same overrides will use the same experiment.
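For example, a test can pin down the configuration it runs with by parametrizing this fixture indirectly. The override values and the asserted seed field below are hypothetical:

```python
import pytest


@pytest.mark.parametrize(
    "command_line_overrides",
    [("algorithm=example", "seed=123")],
    indirect=True,
)
def test_with_fixed_overrides(experiment_config):
    # The `seed` field is assumed here purely for illustration.
    assert experiment_config.seed == 123
```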
make_torch_deterministic #
Set torch to deterministic mode for unit tests that use the tensor_regression fixture.
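A minimal sketch of such a fixture, assuming it simply toggles PyTorch's deterministic-algorithms mode for the duration of the test:

```python
import pytest
import torch


@pytest.fixture
def make_torch_deterministic():
    previously_enabled = torch.are_deterministic_algorithms_enabled()
    torch.use_deterministic_algorithms(True)
    yield
    torch.use_deterministic_algorithms(previously_enabled)
```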
pytest_runtest_makereport #
Used to set up the pytest.mark.incremental mark, as described in the pytest documentation.
pytest_runtest_setup #
pytest_runtest_setup(item: Function)
Used to set up the pytest.mark.incremental mark, as described in the pytest documentation.
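The effect of the incremental mark, following the recipe in the pytest documentation: if one test method in a marked class fails, the following methods in that class are automatically xfailed instead of re-running doomed steps. For example (hypothetical test class):

```python
import pytest


@pytest.mark.incremental
class TestExperimentSteps:
    def test_create_datamodule(self, datamodule):
        assert datamodule is not None

    def test_create_algorithm(self, algorithm):
        # Automatically xfails if test_create_datamodule failed.
        assert algorithm is not None
```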