Causal learning

Causal learning method: Meta-learner
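A meta-learner estimates treatment effects by combining base learners; for example, a T-learner fits one outcome model per treatment arm and subtracts their predictions. The sketch below uses trivial mean-based "models" purely for illustration; the function name is an assumption, not part of any library:

```python
def t_learner_effect(treated_outcomes, control_outcomes):
    """T-learner with trivial base learners (group means):
    estimated effect = mean(treated) - mean(control)."""
    mu1 = sum(treated_outcomes) / len(treated_outcomes)  # outcome model, treated arm
    mu0 = sum(control_outcomes) / len(control_outcomes)  # outcome model, control arm
    return mu1 - mu0

print(t_learner_effect([3.0, 5.0], [1.0, 3.0]))  # 2.0
```

In practice each arm's outcome model is one of the base learners detailed below (e.g. a random forest), not a group mean.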

Algorithm details

Base learner details

Algorithm
Penalty
C
Algorithm
Split quality criterion: Gini
Number of trees
Use bootstrap: Yes
Max tree depth
Feature sampling strategy
Min samples per leaf
Used features
Min samples to split

Algorithm
Split quality criterion: MSE
Number of trees
Use bootstrap: Yes
Max tree depth
Feature sampling strategy
Min samples per leaf
Used features
Min samples to split
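The split quality criteria above (Gini for classification, MSE for regression) measure node impurity; a split is chosen to reduce them. A minimal stdlib sketch, with illustrative helper names:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def mse(values):
    """Mean squared error around the mean: the regression split criterion."""
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

print(gini(["a", "a", "b", "b"]))  # 0.5 (a 50/50 node)
print(gini(["a", "a", "a"]))       # 0.0 (a pure node)
print(mse([1.0, 3.0]))             # 1.0
```

A candidate split is scored by the weighted impurity of its children; lower is better.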
Algorithm SVM classifier
Kernel
Kernel coef (gamma)
C
Independent kernel term
Stopping tolerance
Max iterations
Algorithm SVM regressor
Kernel
Kernel coef (gamma)
C
Independent kernel term
Stopping tolerance
Max iterations
Algorithm SGD
Loss function
Epsilon
Penalty
L1 ratio
Stopping tolerance
Max iterations
Actual iterations
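The SGD settings `Penalty` and `L1 ratio` above combine into an elastic-net regularization term. A minimal sketch, assuming the conventional elastic-net formula (the function name is illustrative):

```python
def elastic_net_penalty(weights, alpha, l1_ratio):
    """Elastic-net term: alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio) * 0.5 * ||w||_2^2).
    l1_ratio=1.0 is pure L1 (lasso); l1_ratio=0.0 is pure L2 (ridge)."""
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights)
    return alpha * (l1_ratio * l1 + (1.0 - l1_ratio) * 0.5 * l2)

print(elastic_net_penalty([1.0, -2.0], alpha=1.0, l1_ratio=1.0))  # 3.0 (pure L1)
print(elastic_net_penalty([1.0, -2.0], alpha=1.0, l1_ratio=0.0))  # 2.5 (pure L2)
```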
Algorithm Decision tree classifier
Max tree depth
Split criterion
Min samples per leaf
Splitter
Algorithm Ridge regression (L2)
Alpha
Algorithm Lasso regression (L1)
Alpha
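The `Alpha` setting above controls how strongly ridge and lasso shrink coefficients toward zero. For a single centered feature the ridge coefficient has a simple closed form, sketched here with an illustrative function name:

```python
def ridge_coef_1d(x, y, alpha):
    """Closed-form ridge coefficient for one centered feature:
    w = sum(x*y) / (sum(x^2) + alpha). Larger alpha -> smaller |w|."""
    return sum(xi * yi for xi, yi in zip(x, y)) / (sum(xi * xi for xi in x) + alpha)

x = [-1.0, 0.0, 1.0]
y = [-2.0, 0.0, 2.0]
print(ridge_coef_1d(x, y, alpha=0.0))  # 2.0 (ordinary least squares)
print(ridge_coef_1d(x, y, alpha=2.0))  # 1.0 (shrunk by the penalty)
```

Lasso's L1 penalty shrinks similarly but can set coefficients exactly to zero, which is why it doubles as feature selection.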
Algorithm Ordinary Least Squares regression
Algorithm Logistic Regression (MLlib)
Max iterations
Lambda (regularization param)
Alpha (Elastic net param)
Algorithm Linear Regression (MLlib)
Max iterations
Lambda (regularization param)
Alpha (Elastic net param)
Algorithm Random Forest (MLlib)
Number of trees
Maximum depth of tree
Step size
Feature subset strategy
Impurity
Maximum number of bins
Maximum memory
Checkpoint interval
Cache node IDs
Minimum information gain
Minimum instances per node
Subsampling rate
Subsampling seed
Algorithm Gradient Boosted Trees (MLlib)
Number of trees
Maximum depth of tree
Step size
Impurity
Maximum number of bins
Maximum memory
Checkpoint interval
Cache node IDs
Minimum information gain
Minimum instances per node
Subsampling rate
Subsampling seed
Algorithm Decision Tree (MLlib)
Maximum depth of tree
Cache node IDs
Checkpoint interval
Maximum number of bins
Maximum memory
Minimum information gain
Minimum instances per node
Algorithm Naive Bayes (MLlib)
Lambda
Algorithm XGBoost
Booster
Objective
Actual number of trees
Max tree depth
Eta (learning rate)
Max delta step
Alpha (L1 regularization)
Lambda (L2 regularization)
Gamma (Min loss reduction to split a leaf)
Min sum of instance weight in a child
Subsample ratio of the training instances
Columns subsample ratio for trees
Columns subsample ratio for splits / levels
Balancing of positive and negative weights
Value treated as missing: NaN
Tweedie variance power
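The XGBoost labels above correspond closely to standard XGBoost parameter names. The mapping below is an assumption inferred from the labels (e.g. "splits / levels" is read as `colsample_bylevel`), not something stated by the report itself:

```python
# Report label -> assumed XGBoost parameter name.
XGBOOST_PARAM_NAMES = {
    "Booster": "booster",
    "Objective": "objective",
    "Actual number of trees": "n_estimators",
    "Max tree depth": "max_depth",
    "Eta (learning rate)": "learning_rate",
    "Max delta step": "max_delta_step",
    "Alpha (L1 regularization)": "reg_alpha",
    "Lambda (L2 regularization)": "reg_lambda",
    "Gamma (Min loss reduction to split a leaf)": "gamma",
    "Min sum of instance weight in a child": "min_child_weight",
    "Subsample ratio of the training instances": "subsample",
    "Columns subsample ratio for trees": "colsample_bytree",
    "Columns subsample ratio for splits / levels": "colsample_bylevel",
    "Balancing of positive and negative weights": "scale_pos_weight",
    "Tweedie variance power": "tweedie_variance_power",
}
print(len(XGBOOST_PARAM_NAMES))  # 15
```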
Algorithm Gradient Boosted Trees (Classification or Regression)
Loss: Deviance, Exponential, Least Square, Least Absolute Deviation, or Huber
Feature sampling strategy: Default, Square root, Logarithm, Fixed number, or Fixed proportion
Number of boosting stages
Eta (learning rate)
Max tree depth
Minimum samples per leaf
Algorithm XGBoost
Booster
Actual number of trees
Max tree depth
Eta (learning rate)
Alpha (L1 regularization)
Lambda (L2 regularization)
Gamma (Min loss reduction to split a leaf)
Min sum of instance weight in a child
Subsample ratio of the training instances
Fraction of columns in each tree
Value treated as missing: NaN
Algorithm LightGBM
Booster (boosting_type)
Actual number of trees (n_estimators)
Maximum number of leaves (num_leaves)
Learning rate (learning_rate)
Alpha (L1 regularization, reg_alpha)
Lambda (L2 regularization, reg_lambda)
Minimal gain to perform a split on a leaf (min_split_gain)
Minimum leaf samples (min_child_samples)
Min sum of instance weight in a child (min_child_weight)
Subsample ratio of the training instances (subsample)
Columns subsample ratio for trees (colsample_bytree)
Algorithm Logistic regression (Vertica)
Max number of iterations
Algorithm Decision Tree
Maximum depth
Min. samples per leaf
Split strategy: Random or Best
Algorithm Neural Network
Activation: ReLU, Identity, Logistic, or Hyperbolic Tangent
Alpha
Max iterations
Convergence tolerance
Early stopping
Validation fraction
Solver: ADAM, Stochastic Gradient Descent, or LBFGS
Shuffle data
Initial learning rate
Automatic batching
Batch size
beta_1
beta_2
epsilon
Learning rate annealing: Constant, Inverse scaling, or Adaptive
power_t
Momentum
Use Nesterov momentum
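The `beta_1`, `beta_2`, and `epsilon` settings above are the Adam solver's moment-decay rates and numerical-stability constant. A single-weight sketch of one Adam update, with illustrative names and the commonly cited defaults:

```python
import math

def adam_step(w, grad, m, v, t, lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-8):
    """One Adam update for a scalar weight; returns (new_w, new_m, new_v)."""
    m = beta_1 * m + (1 - beta_1) * grad       # first-moment (mean) estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2  # second-moment (variance) estimate
    m_hat = m / (1 - beta_1 ** t)              # bias correction for step t
    v_hat = v / (1 - beta_2 ** t)
    w = w - lr * m_hat / (math.sqrt(v_hat) + epsilon)
    return w, m, v

# Minimize f(w) = w^2 (gradient 2w) for a few steps.
w, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):
    w, m, v = adam_step(w, grad=2.0 * w, m=m, v=v, t=t, lr=0.1)
print(w < 1.0)  # True: the weight moves toward the minimum
```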
Algorithm KNN
Neighbor finding algorithm: Automatic, KD Tree, Ball Tree, or Brute force
K
Distance weighting
Leaf size
p
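The KNN settings `p` and `Distance weighting` above are the Minkowski distance exponent and the rule for weighting neighbor votes. A stdlib sketch with illustrative function names:

```python
def minkowski(a, b, p=2):
    """Minkowski distance between two points; p=2 is Euclidean, p=1 is Manhattan."""
    return sum(abs(x - y) ** p for x, y in zip(a, b)) ** (1.0 / p)

def vote_weight(dist, weighting="distance"):
    """'distance' weighting gives closer neighbors larger votes; 'uniform' ignores distance."""
    return 1.0 / dist if weighting == "distance" else 1.0

print(minkowski([0, 0], [3, 4], p=2))  # 5.0
print(minkowski([0, 0], [3, 4], p=1))  # 7.0
print(vote_weight(2.0))                # 0.5
```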
Algorithm LASSO-LARS
Max number of features
Algorithm Linear regression (Vertica)
Max number of iterations
Algorithm Neural Network built with Keras
Algorithm Deep Neural Network
Hidden layers
Units per layer
Learning rate
Early stopping
Early stopping patience
Early stopping threshold
Batch size
Number of epochs
Dropout
Lambda (L2 regularization)
Alpha (L1 regularization)
Algorithm
Algorithm Naive seasonal forecasting
Season length
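Naive seasonal forecasting repeats the last observed season, so `Season length` is its only setting. A minimal sketch with an assumed function name:

```python
def seasonal_naive_forecast(series, season_length, horizon):
    """Forecast horizon steps ahead by repeating the last full season."""
    last_season = series[-season_length:]
    return [last_season[h % season_length] for h in range(horizon)]

# With season length 4, the forecast cycles through the last 4 observations.
print(seasonal_naive_forecast([10, 20, 30, 40, 11, 21, 31, 41], 4, 6))
# [11, 21, 31, 41, 11, 21]
```

Despite its simplicity, this is the standard baseline that more elaborate seasonal models are judged against.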
Algorithm Auto ARIMA
Season length
Information criterion
Solver
Stationary
Maximum iterations
Unit root test
Seasonal unit root test
Forecast quantiles
Algorithm ARIMA
Order (p, d, q)
Seasonal order (P, D, Q, s)
Trend
Trend offset
Enforce stationarity
Enforce invertibility
Concentrate scale
Algorithm ETS
Short name
Trend
Damped trend
Error
Seasonal
Season length
Seed
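ETS models combine Error, Trend, and Seasonal components; the simplest member of the family is simple exponential smoothing (no trend, no seasonality), sketched here with an illustrative function name:

```python
def simple_exp_smoothing(series, alpha):
    """Level update l_t = alpha * y_t + (1 - alpha) * l_{t-1}; returns the final level,
    which is also the one-step-ahead forecast."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# alpha=1.0 tracks the last observation; small alpha reacts slowly.
print(simple_exp_smoothing([1.0, 2.0, 3.0], alpha=1.0))  # 3.0
print(simple_exp_smoothing([1.0, 2.0, 3.0], alpha=0.5))  # 2.25
```

Full ETS adds smoothed (possibly damped) trend and seasonal states on top of this level recursion.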
Algorithm
Season length
Seasonal smoother length
Trend smoother length
Low pass length
Degree of seasonal LOESS
Degree of trend LOESS
Degree of low pass LOESS
Seasonal jump
Trend jump
Low pass jump
Forecast quantiles
Algorithm
Changepoint prior scale
Growth
Floor
Capacity
Seasonality prior scale
Seasonality mode
Yearly seasonality
Weekly seasonality
Daily seasonality
External features prior scale
Changepoint range
Number of changepoints
Seed
Algorithm GluonTS NPTS forecaster
Context length
Kernel type
Exponential kernel weights
Seasonal model
Use default time features
Feature scale
Forecast quantiles
Algorithm GluonTS DeepAR
Context length
Output distribution: Student's t or Negative binomial
Use identifiers as features
Nb. RNN layers
Nb. cells per layer
Dropout rate
Nb. batches per epoch
Learning rate
Weight decay
Patience
Batch size
Epochs
Forecast quantiles
Algorithm GluonTS Feed forward neural network
Context length
Output distribution: Student's t or Negative binomial
Batch normalization
Hidden layer sizes
Nb. batches per epoch
Learning rate
Weight decay
Batch size
Epochs
Forecast quantiles
Algorithm GluonTS Feed forward neural network
Context length
Output distribution: Student's t, Gaussian, or Negative binomial
Batch normalization
Mean scaling
Hidden layer sizes
Nb. batches per epoch
Learning rate
Batch size
Epochs
Forecast quantiles
Algorithm GluonTS DeepAR
Context length
Output distribution: Student's t, Gaussian, or Negative binomial
Use identifiers as features
Nb. RNN layers
Nb. cells per layer
Cell type
Dropout cell type: Zoneout, RNN zoneout, Variational dropout, or Variational zoneout
Dropout rate
α
β
Nb. batches per epoch
Learning rate
Batch size
Epochs
Forecast quantiles
Algorithm GluonTS Transformer
Context length
Output distribution: Student's t, Gaussian, or Negative binomial
Use identifiers as features
Transformer network dimension
Hidden layer dimension scale
Nb. heads in multi-head attention
Nb. batches per epoch
Learning rate
Batch size
Epochs
Forecast quantiles
Algorithm GluonTS Multi-horizon quantile CNN
Context length
Use identifiers as features
MLP layer sizes
Nb. channels
Convolution dilations
Kernel sizes
Nb. batches per epoch
Learning rate
Batch size
Epochs
Forecast quantiles
Algorithm Causal forest
Split quality criterion
Number of trees
Max tree depth
Feature sampling strategy
Min samples per leaf
Used features
Algorithm

Training data

Rows (before preprocessing)
Rows (after preprocessing)
Columns (before preprocessing)
Columns (after preprocessing)
Matrix type (sparse or dense)
Sample weights variable
Estimated memory usage
Number of special features

Code