
# Fine-tuning LLMs

This example is based on the language modeling example from the HuggingFace transformers documentation.

To better understand what's going on in this example, it is a good idea to read through these tutorials first:

- Causal language modeling simple example - HuggingFace docs
- Fine-tune a language model - Colab Notebook

The main difference between this example and the original example from HuggingFace is that `LLMFinetuningExample` is a `LightningModule` that is trained by a `lightning.Trainer`.

This also means that this example doesn't use `accelerate` or the HuggingFace `Trainer`.

## Running the example

```console
python project/main.py experiment=llm_finetuning_example
```