Model
LLM
Trained on
Training dataset
Validation dataset
Prompt column
Completion column
User message column
Assistant message column
System message column
System static message
Fine-tuning
 The code env used during fine-tuning () differs from the connection code env (), which could lead to compatibility issues.
Connection
Model ID
Deployment ID
Model Name
Job ID
Job Name
Epochs count
Training loss
Perplexity
Code environment
Hyperparameters
Number of epochs
Learning rate
Initial learning rate
Batch size
LoRA rank
LoRA alpha
LoRA dropout
NEFTune noise alpha
Quantization mode