Experiment
evaluate
Evaluates the algorithm.
Returns the name of the 'error' metric for this run, its value, and a dict of metrics.
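For illustration, a hypothetical call (the exact signature of evaluate is not reproduced on this page; this sketch assumes it accepts keyword arguments similar to evaluate_lightningmodule below, and that the surrounding objects already exist):

```python
# Hypothetical usage sketch: assumes `algorithm`, `trainer`, `datamodule` and `config`
# were already created from the experiment config, and that `evaluate` takes keyword
# arguments similar to `evaluate_lightningmodule` below.
metric_name, error, metrics = evaluate(
    algorithm,
    trainer=trainer,
    datamodule=datamodule,
    config=config,
)
print(metric_name, error)  # e.g. the name of the error metric for this run and its value
# `metrics` is a dict containing all the metrics gathered during evaluation.
```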
instantiate_values
Instantiates the values of this dict of configs and returns the resulting objects as a list.
This is used for the trainer/logger and trainer/callbacks fields of the config, where multiple config groups can be combined by adding entries to a dict. For example, using trainer/logger=wandb and trainer/logger=tensorboard would result in a dict with wandb and tensorboard as keys, and the corresponding config groups as values. This function would then return a list with the instantiated WandbLogger and TensorBoardLogger objects.
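As a rough sketch of this behaviour (assuming Hydra's instantiate is used to build each value; the function body and the example configs below are illustrative, not the actual implementation):

```python
# Rough sketch of the behaviour described above (hypothetical, not the actual implementation).
from hydra.utils import instantiate


def instantiate_values_sketch(value_configs: dict | None) -> list | None:
    """Instantiate every value of the dict and return the objects as a list."""
    if not value_configs:
        return None
    return [instantiate(config) for config in value_configs.values()]


# With trainer/logger=wandb and trainer/logger=tensorboard on the command line, the
# resolved config for the trainer/logger field could look roughly like this:
logger_configs = {
    "wandb": {"_target_": "lightning.pytorch.loggers.WandbLogger", "offline": True},
    "tensorboard": {"_target_": "lightning.pytorch.loggers.TensorBoardLogger", "save_dir": "logs"},
}
loggers = instantiate_values_sketch(logger_configs)
# -> [WandbLogger(...), TensorBoardLogger(...)]
```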
evaluate_lightningmodule
evaluate_lightningmodule(
    algorithm: LightningModule,
    /,
    *,
    trainer: Trainer,
    datamodule: LightningDataModule | None = None,
    config: Config,
    train_results: Any = None,
) -> tuple[MetricName, float | None, dict]
Evaluates the algorithm and returns the metrics.
By default, if validation is to be performed, returns the validation error. Returns the training error when trainer.overfit_batches != 0 (e.g. when debugging or testing). Otherwise, if trainer.limit_val_batches == 0, returns the test error.
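The selection logic above can be summarised with a small sketch (the helper name is hypothetical; it only mirrors the rules stated in the docstring):

```python
# Hypothetical helper mirroring the rules stated above; not the actual implementation.
from lightning import Trainer


def pick_evaluation_split(trainer: Trainer) -> str:
    """Decide which split's error to report for this run."""
    if trainer.overfit_batches != 0:
        # Overfitting on a handful of batches (e.g. when debugging): report the training error.
        return "train"
    if trainer.limit_val_batches == 0:
        # Validation is disabled entirely: fall back to the test error.
        return "test"
    # Default: report the validation error.
    return "val"
```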
instantiate_datamodule
instantiate_datamodule(
    datamodule_config: (
        Builds[type[LightningDataModule]]
        | LightningDataModule
        | None
    ),
) -> LightningDataModule | None
Instantiate the datamodule from the configuration dict.
Any interpolations in the config will have already been resolved by the time we get here.
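A minimal sketch of the pass-through behaviour implied by this signature (assuming hydra-zen's instantiate is used for structured configs; the actual implementation may differ):

```python
# Minimal sketch of the behaviour implied by the signature above (hypothetical).
from hydra_zen import instantiate
from lightning import LightningDataModule


def instantiate_datamodule_sketch(datamodule_config) -> LightningDataModule | None:
    if datamodule_config is None:
        # No datamodule in the config.
        return None
    if isinstance(datamodule_config, LightningDataModule):
        # Already an instance: return it as-is.
        return datamodule_config
    # A structured config (e.g. a hydra-zen Builds[...]): instantiate it.
    return instantiate(datamodule_config)
```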