Main test

experiment_configs module-attribute #

experiment_configs = [Path(p).stem for p in glob("configs/experiment/*.yaml")]

The list of all experiment configs in the configs/experiment directory.

This is used to check that all the experiment configs are covered by tests.
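Below is a minimal sketch of how this attribute could be built and used for such a coverage check. The configs/experiment path comes from the docstring above; the helper function and the tested_configs argument are illustrative assumptions, not part of the documented module.

```python
from glob import glob
from pathlib import Path

# The stem (file name without extension) of every YAML file in configs/experiment.
experiment_configs = [Path(p).stem for p in glob("configs/experiment/*.yaml")]


def check_experiments_are_covered(tested_configs: set[str]) -> None:
    """Fail if some experiment config has no corresponding test entry."""
    missing = set(experiment_configs) - tested_configs
    assert not missing, f"Experiment configs without tests: {sorted(missing)}"
```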

experiment_commands_to_test module-attribute #

experiment_commands_to_test: list[str | ParameterSet] = []

List of experiment commands to run for testing.

Consider adding a command that runs a simple sanity check for your algorithm, such as a single step of training.
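A hedged example of how entries could be added to this list. The override names (experiment=example, trainer.fast_dev_run=True, trainer.accelerator=gpu) are assumptions for illustration and depend on the project's actual config layout.

```python
import pytest
import torch
from _pytest.mark.structures import ParameterSet

experiment_commands_to_test: list[str | ParameterSet] = []

# A plain string entry: run the example experiment for a single fast-dev-run step.
experiment_commands_to_test.append("experiment=example trainer.fast_dev_run=True")

# A pytest.param entry can carry marks, e.g. to skip the command when no GPU is present.
experiment_commands_to_test.append(
    pytest.param(
        "experiment=example trainer.fast_dev_run=True trainer.accelerator=gpu",
        marks=pytest.mark.skipif(not torch.cuda.is_available(), reason="Requires a GPU."),
    )
)
```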

test_jax_can_use_the_GPU #

test_jax_can_use_the_GPU()

Test that Jax can use the GPU if we have one.
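A minimal sketch of what such a check could look like, not necessarily the exact implementation of this test:

```python
import jax
import pytest


def test_jax_can_use_the_GPU():
    """Check that at least one GPU device is visible to Jax, skipping otherwise."""
    try:
        gpus = jax.devices("gpu")
    except RuntimeError:
        pytest.skip("No GPU backend available for Jax.")
    assert len(gpus) > 0
```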

test_torch_can_use_the_GPU #

test_torch_can_use_the_GPU()

Test that torch can use the GPU if we have one.
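The torch counterpart could look roughly like the sketch below, again as an illustration rather than the documented implementation:

```python
import pytest
import torch


def test_torch_can_use_the_GPU():
    """Check that torch sees a CUDA device and can allocate a tensor on it."""
    if not torch.cuda.is_available():
        pytest.skip("No CUDA device available for torch.")
    x = torch.ones(1, device="cuda")
    assert x.device.type == "cuda"
```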

test_can_run_experiment #

test_can_run_experiment(
    command_line_overrides: tuple[str, ...],
    request: FixtureRequest,
    monkeypatch: MonkeyPatch,
)

Launches the sanity check experiments using the commands from the list above.
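The real test is parametrized with the commands from experiment_commands_to_test and receives them as a tuple of overrides. The sketch below shows the general mechanism; project.main.main is a placeholder for the project's actual Hydra entry point.

```python
import sys

import pytest
from _pytest.monkeypatch import MonkeyPatch


@pytest.mark.parametrize("command", experiment_commands_to_test)
def test_can_run_experiment(command: str, monkeypatch: MonkeyPatch):
    """Launch a sanity-check experiment by faking the command-line arguments."""
    from project.main import main  # Placeholder entry point.

    monkeypatch.setattr(sys, "argv", ["main.py", *command.split()])
    main()
```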

test_setting_just_algorithm_isnt_enough #

test_setting_just_algorithm_isnt_enough(
    dict_config: DictConfig,
) -> None

Test to check that the datamodule is required (even when just the example algorithm is set).

TODO: We could probably move the datamodule config under algorithm/datamodule. Maybe that would be better?
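A rough sketch of how this check might be expressed, assuming a dict_config fixture composed with only the algorithm overridden and a mandatory (missing) datamodule field; the field name and exception types are assumptions:

```python
import pytest
from omegaconf import DictConfig, errors


def test_setting_just_algorithm_isnt_enough(dict_config: DictConfig) -> None:
    """Resolving the config should fail because no datamodule was selected."""
    with pytest.raises((errors.MissingMandatoryValue, errors.InterpolationResolutionError)):
        _ = dict_config.datamodule  # Accessing the missing mandatory field should raise.
```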

test_run_auto_schema_via_cli_without_errors #

test_run_auto_schema_via_cli_without_errors()

Checks that the auto-schema command, when run via the CLI, completes without errors.
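One way to express such a check is to run the command in a subprocess and assert on its exit code. The module path project.utils.auto_schema below is a placeholder, not the documented CLI entry point:

```python
import subprocess
import sys


def test_run_auto_schema_via_cli_without_errors():
    """Run the auto-schema command in a subprocess and fail if it exits non-zero."""
    result = subprocess.run(
        [sys.executable, "-m", "project.utils.auto_schema", "."],  # Placeholder module path.
        capture_output=True,
        text=True,
    )
    assert result.returncode == 0, result.stderr
```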

test_setup_with_overrides_works #

test_setup_with_overrides_works(dict_config: DictConfig)

This test receives the dict_config loaded from Hydra with the given overrides.
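For context, a dict_config fixture of this kind is typically built with Hydra's compose API. The sketch below assumes a configs directory and a main config named config, passed overrides via indirect parametrization; these names are assumptions for illustration.

```python
from pathlib import Path

import pytest
from hydra import compose, initialize_config_dir
from omegaconf import DictConfig

# Assumed config directory and main config name; adjust to the actual project layout.
CONFIG_DIR = Path("configs").absolute()


@pytest.fixture
def dict_config(request: pytest.FixtureRequest) -> DictConfig:
    """Compose the project config with overrides passed via indirect parametrization."""
    overrides = list(getattr(request, "param", ()))
    with initialize_config_dir(config_dir=str(CONFIG_DIR), version_base=None):
        return compose(config_name="config", overrides=overrides)
```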