| Causal learning method | |
|---|---|
| Meta-learner | |
| Algorithm | |
|---|---|
| Penalty | |
| C | |
| Algorithm | | Split quality criterion | Gini |
|---|---|---|---|
| Number of trees | | Use bootstrap | Yes |
| Max tree depth | | Feature sampling strategy | |
| Min samples per leaf | | Used features | |
| Min samples to split | | | |
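The Gini split quality criterion listed above has a compact definition that is easiest to verify on a concrete example. A minimal pure-Python sketch (the function names and sample labels are mine, not from the product):

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity of a node: 1 - sum of squared class proportions."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def split_gini(left, right):
    """Weighted Gini of a candidate split; the tree greedily picks the lowest."""
    n = len(left) + len(right)
    return (len(left) / n) * gini_impurity(left) + (len(right) / n) * gini_impurity(right)
```

A pure node scores 0, a perfectly mixed binary node scores 0.5, and candidate splits are compared by their size-weighted average impurity.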
| Algorithm | | Split quality criterion | Gini |
|---|---|---|---|
| Number of trees | | Use bootstrap | Yes |
| Max tree depth | | Feature sampling strategy | |
| Min samples per leaf | | Used features | |
| Min samples to split | | | |
| Algorithm | | Split quality criterion | MSE |
|---|---|---|---|
| Number of trees | | Use bootstrap | Yes |
| Max tree depth | | Feature sampling strategy | |
| Min samples per leaf | | Used features | |
| Min samples to split | | | |
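For the regression forests, the MSE split quality criterion is the variance analogue of Gini: the mean squared deviation from the node mean, weighted across the two children. A small sketch (function names and sample values are hypothetical):

```python
def node_mse(values):
    """MSE criterion: mean squared deviation from the node mean."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def split_mse(left, right):
    """Weighted MSE of a candidate split; lower means a better split."""
    n = len(left) + len(right)
    return (len(left) / n) * node_mse(left) + (len(right) / n) * node_mse(right)
```

A split that separates low targets from high targets drives the weighted MSE toward zero.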
| Algorithm | | Split quality criterion | MSE |
|---|---|---|---|
| Number of trees | | Use bootstrap | Yes |
| Max tree depth | | Feature sampling strategy | |
| Min samples per leaf | | Used features | |
| Min samples to split | | | |
| Algorithm | SVM classifier |
|---|---|
| Kernel | |
| Kernel coef (gamma) | |
| C | |
| Independent kernel term | |
| Stopping tolerance | |
| Max iterations | |
| Algorithm | SVM regressor |
|---|---|
| Kernel | |
| Kernel coef (gamma) | |
| C | |
| Independent kernel term | |
| Stopping tolerance | |
| Max iterations | |
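The kernel coefficient (gamma) in both SVM tables controls how quickly kernel similarity decays with distance. A minimal sketch, assuming the common RBF form K(x, y) = exp(-gamma * ||x - y||²):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """RBF kernel value K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)
```

Identical points score 1; larger gamma makes the similarity fall off faster, producing a more local decision surface.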
| Algorithm | SGD |
|---|---|
| Loss function | |
| Epsilon | |
| Penalty | |
| L1 ratio | |
| Stopping tolerance | |
| Max iterations | |
| Actual iterations | |
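The SGD table's "Penalty" and "L1 ratio" parameters refer to the elastic-net regularizer, which blends L1 and L2 terms. A sketch of that penalty (the defaults here follow scikit-learn's SGD defaults, which is an assumption about the backing library):

```python
def elastic_net_penalty(weights, alpha=0.0001, l1_ratio=0.15):
    """Elastic-net penalty: alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||_2^2)."""
    l1 = sum(abs(w) for w in weights)
    sq_l2 = sum(w * w for w in weights)
    return alpha * (l1_ratio * l1 + (1.0 - l1_ratio) * 0.5 * sq_l2)
```

An L1 ratio of 1 is pure lasso, 0 is pure ridge, and values in between trade sparsity against smooth shrinkage.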
| Algorithm | Decision tree classifier |
|---|---|
| Max tree depth | |
| Split criterion | |
| Min samples per leaf | |
| Splitter | |
| Algorithm | Ridge regression (L2) |
|---|---|
| Alpha | |
| Algorithm | Lasso regression (L1) |
|---|---|
| Alpha | |
| Algorithm | Ordinary Least Squares regression |
|---|---|
| Algorithm | Logistic Regression (MLLib) |
|---|---|
| Max iterations | |
| Lambda (regularization param) | |
| Alpha (Elastic net param) | |
| Algorithm | Linear Regression (MLLib) |
|---|---|
| Max iterations | |
| Lambda (regularization param) | |
| Alpha (Elastic net param) | |
| Algorithm | Random Forest (MLLib) |
|---|---|
| Number of trees | |
| Maximum depth of tree | |
| Step size | |
| Feature subset strategy | |
| Impurity | |
| Maximum number of bins | |
| Maximum memory | |
| Checkpoint interval | |
| Cache node IDs | |
| Minimum information gain | |
| Minimum instances per node | |
| Subsampling rate | |
| Subsampling seed | |
| Algorithm | Gradient Boosted Trees (MLLib) |
|---|---|
| Number of trees | |
| Maximum depth of tree | |
| Step size | |
| Impurity | |
| Maximum number of bins | |
| Maximum memory | |
| Checkpoint interval | |
| Cache node IDs | |
| Minimum information gain | |
| Minimum instances per node | |
| Subsampling rate | |
| Subsampling seed | |
| Algorithm | Decision Tree (MLLib) |
|---|---|
| Maximum depth of tree | |
| Cache node IDs | |
| Checkpoint interval | |
| Maximum number of bins | |
| Maximum memory | |
| Minimum information gain | |
| Minimum instances per node | |
| Algorithm | Naive Bayes (MLLib) |
|---|---|
| Lambda | |
| Algorithm | XGBoost |
|---|---|
| Booster | |
| Objective | |
| Actual number of trees | |
| Max tree depth | |
| Eta (learning rate) | |
| Max delta step | |
| Alpha (L1 regularization) | |
| Lambda (L2 regularization) | |
| Gamma (Min loss reduction to split a leaf) | |
| Min sum of instance weight in a child | |
| Subsample ratio of the training instances | |
| Columns subsample ratio for trees | |
| Columns subsample ratio for splits / levels | |
| Balancing of positive and negative weights | |
| Value treated as missing | NaN |
| Tweedie variance power | |
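The rows above map onto XGBoost's native parameter names. A hedged sketch of that mapping, with illustrative values (the values are hypothetical; only the key names follow the XGBoost parameter reference):

```python
# Hypothetical values for illustration; key names per the XGBoost parameter reference.
xgb_params = {
    "booster": "gbtree",
    "objective": "reg:squarederror",
    "max_depth": 6,                 # Max tree depth
    "eta": 0.3,                     # Learning rate
    "max_delta_step": 0,
    "alpha": 0.0,                   # L1 regularization
    "lambda": 1.0,                  # L2 regularization
    "gamma": 0.0,                   # Min loss reduction to split a leaf
    "min_child_weight": 1,          # Min sum of instance weight in a child
    "subsample": 1.0,               # Subsample ratio of the training instances
    "colsample_bytree": 1.0,        # Columns subsample ratio for trees
    "colsample_bylevel": 1.0,       # Columns subsample ratio for splits / levels
    "scale_pos_weight": 1.0,        # Balancing of positive and negative weights
    "missing": float("nan"),        # Value treated as missing (given to the data matrix in the native API)
    "tweedie_variance_power": 1.5,  # Only used with a Tweedie objective
}
```

In the native API these would be passed to `xgboost.train`; the scikit-learn wrapper exposes the same knobs as constructor arguments.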
| Algorithm | Gradient Boosted Trees ({{(postTrain.algorithm === 'GBT_CLASSIFICATION') ? 'Classification' : 'Regression'}}) |
|---|---|
| Loss | Deviance / Exponential / Least Square / Least Absolute Deviation / Huber |
| Feature sampling strategy | Default / Square root / Logarithm / Fixed number / Fixed proportion |
| Number of boosting stages | |
| Eta (learning rate) | |
| Max tree depth | |
| Minimum samples per leaf | |
| Algorithm | XGBoost |
|---|---|
| Booster | |
| Actual number of trees | |
| Max tree depth | |
| Eta (learning rate) | |
| Alpha (L1 regularization) | |
| Lambda (L2 regularization) | |
| Gamma (Min loss reduction to split a leaf) | |
| Min sum of instance weight in a child | |
| Subsample ratio of the training instances | |
| Fraction of columns in each tree | |
| Value treated as missing | NaN |
| Algorithm | LightGBM |
|---|---|
| Booster | {{ postTrain.lightgbm.boosting_type }} |
| Actual number of trees | {{ postTrain.lightgbm.n_estimators }} |
| Maximum number of leaves | {{ postTrain.lightgbm.num_leaves }} |
| Learning rate | {{ postTrain.lightgbm.learning_rate }} |
| Alpha (L1 regularization) | {{ postTrain.lightgbm.reg_alpha }} |
| Lambda (L2 regularization) | {{ postTrain.lightgbm.reg_lambda }} |
| Minimum gain to perform a split on a leaf | {{ postTrain.lightgbm.min_split_gain }} |
| Minimum leaf samples | {{ postTrain.lightgbm.min_child_samples }} |
| Min sum of instance weight in a child | {{ postTrain.lightgbm.min_child_weight }} |
| Subsample ratio of the training instance | {{ postTrain.lightgbm.subsample }} |
| Columns subsample ratio for trees | {{ postTrain.lightgbm.colsample_bytree }} |
| Algorithm | Logistic regression (Vertica) |
|---|---|
| Max number of iterations | |
| Algorithm | Decision Tree |
|---|---|
| Maximum depth | |
| Min. samples per leaf | |
| Split strategy | Random / Best |
| Algorithm | Neural Network |
|---|---|
| Activation | ReLU / Identity / Logistic / Hyperbolic Tangent |
| Alpha | |
| Max iterations | |
| Convergence tolerance | |
| Early stopping | |
| Validation fraction | |
| Solver | ADAM / Stochastic Gradient Descent / LBFGS |
| Shuffle data | |
| Initial learning rate | |
| Automatic batching | |
| Batch size | |
| beta_1 | |
| beta_2 | |
| epsilon | |
| Learning rate annealing | Constant / Inverse scaling / Adaptive |
| power_t | |
| Momentum | |
| Use Nesterov momentum | |
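The "Inverse scaling" annealing option interacts with the `power_t` row above: the effective learning rate shrinks as a power of the step count. A one-line sketch of that schedule (the function name is mine; the formula follows the usual inverse-scaling convention):

```python
def inverse_scaling_lr(eta0, t, power_t=0.5):
    """'Inverse scaling' annealing: eta(t) = eta0 / t**power_t."""
    return eta0 / (t ** power_t)
```

With `power_t = 0.5` the rate decays as 1/sqrt(t), so step 4 uses half the initial rate.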
| Algorithm | KNN |
|---|---|
| Neighbor finding algorithm | Automatic / KD Tree / Ball Tree / Brute force |
| K | |
| Distance weighting | |
| Leaf size | |
| p | |
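In the KNN table, `p` is the Minkowski distance exponent and "Distance weighting" scales each neighbor's vote by inverse distance. A brute-force sketch of both (function names and the small epsilon guard are my own):

```python
def minkowski(x, y, p=2):
    """Minkowski distance with exponent p (p=1 Manhattan, p=2 Euclidean)."""
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def knn_predict(train, query, k=3, p=2, weighted=False):
    """Vote over the k nearest (point, label) pairs, optionally 1/distance weighted."""
    nearest = sorted(train, key=lambda item: minkowski(item[0], query, p))[:k]
    votes = {}
    for point, label in nearest:
        w = 1.0 / (minkowski(point, query, p) + 1e-9) if weighted else 1.0
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

The KD Tree / Ball Tree options in the table replace the brute-force sort with spatial indexes; the prediction rule is unchanged.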
| Algorithm | LASSO-LARS |
|---|---|
| Max number of features | |
| Algorithm | Linear regression (Vertica) |
|---|---|
| Max number of iterations | |
| Algorithm | Neural Network built with Keras |
|---|---|
| Algorithm | Deep Neural Network |
|---|---|
| Hidden layers | |
| Units per layer | |
| Learning rate | |
| Early stopping | |
| Early stopping patience | |
| Early stopping threshold | |
| Batch size | |
| Number of epochs | |
| Dropout | |
| Lambda (L2 regularization) | |
| Alpha (L1 regularization) | |
| Algorithm | |
|---|---|
| Algorithm | Naive seasonal forecasting |
|---|---|
| Season length | |
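Naive seasonal forecasting needs only the one parameter in the table: each future step repeats the value observed one season earlier. A sketch (function name and sample series are hypothetical):

```python
def naive_seasonal_forecast(history, horizon, season_length):
    """Repeat the last observed season: y_hat[T+h] = y[T+h-season_length]."""
    last_season = history[-season_length:]
    return [last_season[h % season_length] for h in range(horizon)]
```

Despite its simplicity, this is a standard baseline that stronger forecasters are expected to beat.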
| Algorithm | Auto ARIMA |
|---|---|
| Season length | |
| Information criterion | |
| Solver | |
| Stationary | |
| Maximum iterations | |
| Unit root test | |
| Seasonal unit root test | |
| Forecast quantiles | |
| Algorithm | ARIMA |
|---|---|
| p | |
| d | |
| q | |
| P | |
| D | |
| Q | |
| s | |
| Trend | |
| Trend offset | |
| Enforce stationarity | |
| Enforce invertibility | |
| Concentrate scale | |
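The lowercase and uppercase rows above are the non-seasonal (p, d, q) and seasonal (P, D, Q) orders, with `s` the season length. A hedged sketch of how such a configuration is typically grouped, e.g. for statsmodels' SARIMAX (the values are hypothetical; the key names mirror that API's `order`, `seasonal_order`, and `trend` arguments):

```python
# Hypothetical orders for illustration; key names mirror statsmodels' SARIMAX arguments.
arima_config = {
    "order": (1, 1, 1),               # non-seasonal (p, d, q)
    "seasonal_order": (1, 0, 1, 12),  # seasonal (P, D, Q, s)
    "trend": "c",                     # constant trend term
    "enforce_stationarity": True,
    "enforce_invertibility": True,
    "concentrate_scale": False,
}
```

Here `d` and `D` count ordinary and seasonal differencing passes, while `p`/`q` (and `P`/`Q`) set the autoregressive and moving-average lags.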
| Algorithm | ETS |
|---|---|
| Short name | |
| Trend | |
| Damped trend | |
| Error | |
| Seasonal | |
| Season length | |
| Seed | |
| Algorithm | |
|---|---|
| Season length | |
| Seasonal smoother length | |
| Trend smoother length | |
| Low pass length | |
| Degree of seasonal LOESS | |
| Degree of trend LOESS | |
| Degree of low pass LOESS | |
| Seasonal jump | |
| Trend jump | |
| Low pass jump | |
| Forecast quantiles | |
| Algorithm | |
|---|---|
| Changepoint prior scale | |
| Growth | |
| Floor | |
| Capacity | |
| Seasonality prior scale | |
| Seasonality mode | |
| Yearly seasonality | |
| Weekly seasonality | |
| Daily seasonality | |
| External features prior scale | |
| Changepoint range | |
| Number of changepoints | |
| Seed | |
| Algorithm | GluonTS NPTS forecaster |
|---|---|
| Context length | |
| Kernel type | |
| Exponential kernel weights | |
| Seasonal model | |
| Use default time features | |
| Feature scale | |
| Forecast quantiles | |
| Algorithm | GluonTS DeepAR |
|---|---|
| Context length | |
| Output distribution | Student's t / Negative binomial |
| Use identifiers as features | |
| Nb. RNN layers | |
| Nb. cells per layer | |
| Dropout rate | |
| Nb. batches per epoch | |
| Learning rate | |
| Weight decay | |
| Patience | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | GluonTS Feed forward neural network |
|---|---|
| Context length | |
| Output distribution | Student's t / Negative binomial |
| Batch normalization | |
| Hidden layer sizes | |
| Nb. batches per epoch | |
| Learning rate | |
| Weight decay | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | GluonTS Feed forward neural network |
|---|---|
| Context length | |
| Output distribution | Student's t / Gaussian / Negative binomial |
| Batch normalization | |
| Mean scaling | |
| Hidden layer sizes | |
| Nb. batches per epoch | |
| Learning rate | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | GluonTS DeepAR |
|---|---|
| Context length | |
| Output distribution | Student's t / Gaussian / Negative binomial |
| Use identifiers as features | |
| Nb. RNN layers | |
| Nb. cells per layer | |
| Cell type | |
| Dropout cell type | Zoneout / RNN zoneout / Variational dropout / Variational zoneout |
| Dropout rate | |
| α | |
| β | |
| Nb. batches per epoch | |
| Learning rate | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | GluonTS Transformer |
|---|---|
| Context length | |
| Output distribution | Student's t / Gaussian / Negative binomial |
| Use identifiers as features | |
| Transformer network dimension | |
| Hidden layer dimension scale | |
| Nb. heads in multi-head attention | |
| Nb. batches per epoch | |
| Learning rate | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | GluonTS Multi-horizon quantile CNN |
|---|---|
| Context length | |
| Use identifiers as features | |
| MLP layer sizes | |
| Nb. channels | |
| Convolution dilations | |
| Kernel sizes | |
| Nb. batches per epoch | |
| Learning rate | |
| Batch size | |
| Epochs | |
| Forecast quantiles | |
| Algorithm | Causal forest | Split quality criterion | {{ postTrain.causal_forest_params.criterion }} |
|---|---|---|---|
| Number of trees | | | |
| Max tree depth | | Feature sampling strategy | |
| Min samples per leaf | | Used features | |
| Algorithm | {{postTrain.algorithm}} |
|---|---|
| Rows (before preprocessing) | | Rows (after preprocessing) | |
|---|---|---|---|
| Columns (before preprocessing) | | Columns (after preprocessing) | |
| Matrix type | {{modelData.iperf.modelInputIsSparse ? 'sparse' : 'dense'}} | Sample weights variable | |
| Estimated memory usage | | Number of special features | |