XGBoost

XGBoost is an advanced gradient tree boosting algorithm. It supports parallel processing, regularization, and early stopping, which makes it fast, scalable, and accurate.

For more information on gradient tree boosting, see the "Gradient tree boosting" algorithm.

'DART' is a variant of the 'Gradient Boosted Trees' estimator in which, at each boosting round of the training phase, a random subset of the previously built trees is dropped out.
Set closer to 1 to shift towards a Poisson distribution. Set closer to 2 to shift towards a gamma distribution.
'Automatic' and 'Approximate' modes are only available for CPU execution.
'Automatic' will choose one of the other methods based on heuristics and the shape of the data.
'Exact' is a greedy tree-building algorithm.
'Approximate' is a fast tree-building algorithm.
'Histogram' is an optimized fast tree-building algorithm.
XGBoost has an early stopping mechanism, so the exact number of trees will be optimized. A high number of trees will increase training and prediction time. Typical values: 100 - 10000
A high number of trees will increase training and prediction time. Typical values: 100 - 10000
Use XGBoost's built-in early stopping mechanism so that the exact number of trees is optimized.
The cross-validation scheme defined in the "Hyperparameters" tab will be used.
Early stopping is not available for causal prediction.
Early stopping is not available for time series forecasting.
The optimizer stops if the loss does not decrease for this many consecutive iterations. Typical values: 1 - 100
Number of cores used for parallel training. Using more cores leads to faster training but at the expense of more memory consumption, especially for large training datasets.
Allow DSS to use sparse matrices to train the model.
This may help reduce RAM and CPU usage.
By default, features with a np.nan value are treated as missing.
This setting has no effect if the model ends up using a sparse matrix.
This value is compared after preprocessing, which could include average rescaling and missing value imputation.