Fine-tuning preparation

OpenAI automatically selects hyperparameters for you. Depending on your model's performance, you may want to adjust them in advanced mode, following the official documentation.
Amazon Bedrock uses default values that depend on the base model. You can check these values in the official documentation.
Default values will be used. Select explicit mode to customize hyperparameters.

General hyperparameters

Number of complete passes over the training dataset (epochs).
Step size used to update the model's weights at each training step (learning rate).
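As a concrete illustration, here is a minimal sketch (not the tool's actual training loop) of how these two values interact in plain gradient descent: epochs count full passes over the data, and the learning rate scales each weight update.

```python
# Illustrative only: gradient descent on f(w) = (w - target)^2.
def train(epochs: int, learning_rate: float, n_examples: int = 8) -> float:
    w = 0.0
    dataset = [3.0] * n_examples          # every example pulls w toward 3
    for _ in range(epochs):               # one epoch = one full pass over the data
        for target in dataset:
            grad = 2 * (w - target)       # d/dw (w - target)^2
            w -= learning_rate * grad     # the learning rate scales each update
    return w

w_final = train(epochs=3, learning_rate=0.1)   # approaches 3 as epochs grow
```

More epochs bring the weight closer to the optimum; a learning rate that is too large can instead make the updates overshoot and diverge.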

PEFT hyperparameters

Rank of the low-rank matrices (or "adapters") used to approximate weight updates.
Scaling factor that controls how strongly the low-rank updates affect the model's weights.
Probability of dropping weights during training to prevent overfitting.
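The rank and scaling factor can be sketched in a few lines of plain Python (a conceptual illustration, not the PEFT library's implementation). The frozen weight W is adapted by two small matrices B and A whose product is a rank-r update, scaled by alpha / r; dropout, which only applies during training, is omitted here.

```python
# Illustrative only: the LoRA effective weight W + (alpha / r) * B @ A.
def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, r, alpha):
    scale = alpha / r                      # alpha controls the update's impact
    delta = matmul(B, A)                   # rank-r approximation of the update
    return [[w + scale * d for w, d in zip(wr, dr)] for wr, dr in zip(W, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]               # frozen 2x2 weight
B = [[1.0], [0.0]]                         # 2x1, trained
A = [[0.0, 2.0]]                           # 1x2, trained -> rank-1 update
eff = lora_effective_weight(W, A, B, r=1, alpha=8)
```

Only B and A are trained, so the number of trainable parameters grows with the rank r rather than with the full size of W.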

NEFTune

Magnitude of noise added to the embeddings during fine-tuning. Set to 0 to disable NEFTune.
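In the NEFTune paper's formulation, the noise is uniform and scaled by alpha / sqrt(L * d), where L is the sequence length and d is the embedding dimension. A minimal sketch (illustrative only, not the trainer's implementation):

```python
import math
import random

# Illustrative only: add uniform noise scaled by alpha / sqrt(L * d)
# to a sequence of token embeddings. alpha = 0 disables NEFTune.
def neftune(embeddings, alpha, seed=0):
    rng = random.Random(seed)
    L, d = len(embeddings), len(embeddings[0])
    scale = alpha / math.sqrt(L * d)
    return [[e + rng.uniform(-scale, scale) for e in row] for row in embeddings]

emb = [[0.5, -0.2], [0.1, 0.3]]            # L = 2 tokens, d = 2 dimensions
noisy = neftune(emb, alpha=5)
```

With alpha set to 0 the scale is zero and the embeddings pass through unchanged, which is why 0 disables the technique.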

Quantization

Quantized fine-tuning requires a GPU.
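To illustrate what quantization does to weights, here is a minimal sketch of absmax int8 quantization in plain Python (concept only; real quantized fine-tuning relies on GPU-optimized 4-bit/8-bit kernels, which is why a GPU is required):

```python
# Illustrative only: absmax int8 quantization and its roundtrip error.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127   # map the largest weight to 127
    q = [round(w / scale) for w in weights]      # integers in [-127, 127]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.12, -0.5, 0.33, 1.0]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)                         # close to w, within scale / 2
```

Each weight is stored as a small integer plus a shared scale, trading a bounded rounding error for a large reduction in memory.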

Checkpoints

Number of complete passes over the training dataset (epochs).
Step size used to update the model's weights at each training step (learning rate).
Number of training examples processed together in each batch (batch size).
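A checkpoint simply persists training state so an interrupted run can resume without repeating finished epochs. A minimal sketch (the JSON layout here is hypothetical, not the tool's actual checkpoint format):

```python
import json
import os
import tempfile

# Illustrative only: save and restore training state as a JSON file.
def save_checkpoint(path, epoch, weights):
    with open(path, "w") as f:
        json.dump({"epoch": epoch, "weights": weights}, f)

def load_checkpoint(path):
    with open(path) as f:
        state = json.load(f)
    return state["epoch"], state["weights"]

path = os.path.join(tempfile.mkdtemp(), "ckpt.json")
save_checkpoint(path, epoch=2, weights=[0.1, 0.2])
epoch, weights = load_checkpoint(path)     # resume training from epoch 2
```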