Generalized Linear Models (GLMs) estimate regression models for outcomes following distributions from the exponential family. In addition to the Gaussian (i.e. normal) distribution, these include the Poisson, binomial, gamma and Tweedie distributions. Each serves a different purpose, and depending on the choice of distribution and link function, a GLM can be used for either regression or classification. For example, choosing a Gaussian distribution with the identity link yields ordinary least squares regression, while a binomial distribution with the logit link yields logistic regression.
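As a sketch of how the distribution and link choice changes the model's output, the snippet below uses hypothetical, illustrative coefficient values (not taken from any fitted model) to contrast the identity link (Gaussian family, i.e. OLS) with the logit link (binomial family, i.e. logistic regression):

```python
import math

# Hypothetical fitted coefficients for a single feature (illustrative values only).
intercept, slope = 0.5, 2.0
x = 1.2

# All GLMs share the same linear predictor.
eta = intercept + slope * x

# Gaussian family, identity link: the prediction is the linear predictor itself (OLS).
gaussian_prediction = eta

# Binomial family, logit link: the inverse link (sigmoid) maps the linear
# predictor to a probability, which is what logistic regression outputs.
binomial_probability = 1.0 / (1.0 + math.exp(-eta))

print(gaussian_prediction)   # 2.9
print(binomial_probability)  # ~0.948
```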
NOTE: Due to its implementation, H2O's GLM does not consistently support unknown categories in categorical features at scoring time. If such a situation may arise, choose another category handling method (such as dummification).
The loss/link function used to build the model. Leaving the default value is recommended unless you are familiar with this kind of model.
Tweedie variance power.
Elastic net regularization coefficient. A value of 1 means lasso (L1) regularization, while a value of 0 means ridge (L2) regularization; values in between mix the two penalties.
Regularization strength. Higher values apply stronger regularization, which improves generalization but increases bias; lower values reduce bias but may lead to overfitting.
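The interplay of these two parameters can be sketched with the common glmnet-style elastic net penalty, where the mixing coefficient (alpha) blends the L1 and L2 terms and the regularization strength (lambda) scales the whole penalty. This is an illustrative formula, not H2O's internal code:

```python
def elastic_net_penalty(coefs, lambda_, alpha):
    """Glmnet-style elastic net penalty (illustrative sketch)."""
    l1 = sum(abs(b) for b in coefs)        # lasso term
    l2 = sum(b * b for b in coefs) / 2.0   # ridge term
    # alpha mixes the two penalties; lambda_ scales the overall strength.
    return lambda_ * (alpha * l1 + (1.0 - alpha) * l2)

coefs = [0.5, -1.5, 2.0]
print(elastic_net_penalty(coefs, lambda_=0.1, alpha=1.0))  # pure L1: 0.1 * 4.0
print(elastic_net_penalty(coefs, lambda_=0.1, alpha=0.0))  # pure L2: 0.1 * 3.25
```

With alpha at either extreme, one of the two terms drops out entirely, which is why the endpoints correspond to pure lasso and pure ridge.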
Maximum number of iterations over the dataset for training. A value of -1 removes the limit, with stopping governed solely by the Beta Epsilon parameter; note that this may lead to very long training times.
Stops training once no coefficient has changed by more than this amount between successive iterations.
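The stopping rule above can be sketched as follows. The coefficient update here is a toy fixed-point step, not H2O's actual solver; only the convergence check reflects the Beta Epsilon idea:

```python
def converged(old, new, beta_epsilon):
    """True when no coefficient moved by beta_epsilon or more this iteration."""
    return max(abs(a - b) for a, b in zip(old, new)) < beta_epsilon

# Toy iteration: coefficients are halved each step, so updates shrink over time.
coefs = [1.0, -2.0]
beta_epsilon = 1e-4
iterations = 0
while True:
    new_coefs = [0.5 * b for b in coefs]  # stand-in for a real solver update
    iterations += 1
    done = converged(coefs, new_coefs, beta_epsilon)
    coefs = new_coefs
    if done:
        break

print(iterations)  # 15
```

Because the updates shrink geometrically, the loop halts after a finite number of steps even without an iteration cap, which is the role Beta Epsilon plays when the maximum number of iterations is set to -1.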