{{ $ctrl.splitType === 'HP_SEARCH' ? 'Train subset' : 'Train set' }} Test set {{ $ctrl.kfold ? 'Validation subset' : 'Validation set' }}

During evaluation, models are trained on the train set, and metrics are computed on the test set.
Models are then retrained on the full dataset (after sampling if applicable).
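The evaluate-then-retrain flow above can be sketched as follows. This is a minimal illustration using scikit-learn; the dataset, model, and split ratio are all hypothetical, not the tool's actual internals.

```python
# Illustrative sketch (assumed libraries: numpy, scikit-learn):
# evaluate on a held-out test set, then retrain on the full dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy target

# 1. Train on the train set, compute metrics on the test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))

# 2. Retrain the final model on the full dataset.
final_model = LogisticRegression().fit(X, y)
```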

During evaluation, models are trained on the train set, and metrics are computed on the evaluation time steps of the test set (i.e., time steps in the gap are ignored).
Models are then retrained on the full dataset (after sampling if applicable).

During evaluation, models are trained on the train folds, and metrics are computed on the test folds.
Models are then retrained on the full dataset (after sampling if applicable).

During evaluation, models are trained on the train folds, and metrics are computed on the evaluation time steps of the test folds (i.e., time steps in the gap are ignored).
Models are then retrained on the full dataset (after sampling if applicable).
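Time-based folds with a gap, as described above, can be illustrated with scikit-learn's `TimeSeriesSplit`. This is an assumed stand-in for the tool's own splitting scheme: `gap=2` leaves two ignored time steps between each train fold and the evaluated steps.

```python
# Illustrative sketch (assumed library: scikit-learn): time-based
# cross-validation where a gap separates train and evaluated steps.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

n_steps = 20
X = np.arange(n_steps).reshape(-1, 1)  # one row per time step

tscv = TimeSeriesSplit(n_splits=3, gap=2)
for train_idx, test_idx in tscv.split(X):
    # Each test fold starts strictly after the 2-step gap that
    # follows the train fold, so gap steps are never evaluated.
    assert test_idx.min() == train_idx.max() + 2 + 1
```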

The offset between consecutive cross-test evaluation folds avoids overlaps with the hyperparameter search validation folds, which could cause an optimistic bias in model evaluation metrics.

All train set folds have the same duration.

During evaluation, models are trained on the train set, and metrics are computed on the test set.

During evaluation, models are trained on the train folds, and metrics are computed on the test folds.

Models are then retrained on the full dataset (after sampling if applicable).

After the Train/Test split, the train set is split again into a train subset and a validation subset to find the best hyperparameters.
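The nested split above can be sketched as follows: an outer train/test split for final evaluation, then an inner train-subset/validation-subset split for hyperparameter search. The model, search space, and split ratios are hypothetical, and scikit-learn is used only for illustration.

```python
# Illustrative sketch (assumed libraries: numpy, scikit-learn):
# hyperparameter search on a validation subset carved out of the train set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # toy target

# Outer split: the test set is held out for final evaluation only.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Inner split: train subset / validation subset for hyperparameter search.
X_sub, X_val, y_sub, y_val = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0
)

best_C, best_score = None, -1.0
for C in (0.01, 0.1, 1.0, 10.0):  # hypothetical search space
    score = accuracy_score(
        y_val, LogisticRegression(C=C).fit(X_sub, y_sub).predict(X_val)
    )
    if score > best_score:
        best_C, best_score = C, score

# Retrain with the chosen hyperparameters on the whole train set,
# then evaluate once on the held-out test set.
final = LogisticRegression(C=best_C).fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, final.predict(X_test))
```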

After the Train/Test split, the train set is split again into multiple train and validation subsets to find the best hyperparameters.

The offset between consecutive hyperparameter search validation folds avoids overlaps with the cross-test evaluation folds, which could cause an optimistic bias in model evaluation metrics.

All train set folds have the same duration.

After the Train/Test split, the train set is split again into a train subset and a validation subset to find the best hyperparameters.

After the Train/Test split, the train set is split again into multiple train and validation subsets to find the best hyperparameters.

All train set folds have the same duration.