Deep Neural Network

Deep Neural Networks are a class of fully connected feedforward artificial neural networks, composed of one or more "hidden" layers of nodes, or computational units, called neurons.
Each neuron in a hidden layer receives input from every node of the previous layer and feeds its output to every node of the subsequent layer.
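As a minimal sketch of this fully connected structure, the forward pass of a small network can be written directly (the 2-4-1 layer sizes and random weights below are purely illustrative):

```python
import random

random.seed(0)

def relu(z):
    return max(0.0, z)

def dense_layer(inputs, weights, biases, activation):
    # Fully connected: each neuron combines ALL inputs from the previous layer.
    return [activation(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical 2-4-1 network with random weights (sizes chosen for the example).
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b1 = [0.0] * 4
w2 = [[random.uniform(-1, 1) for _ in range(4)]]
b2 = [0.0]

hidden = dense_layer([0.5, -0.2], w1, b1, relu)       # one hidden layer of 4 neurons
output = dense_layer(hidden, w2, b2, lambda z: z)     # linear output, e.g. for regression
```

Every hidden neuron sees both input features, and the output neuron sees all four hidden activations, which is exactly the connectivity described above.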

Given a set of features and a target, Deep Neural Networks can learn to approximate complex nonlinear functions for both regression and classification, trained in a supervised fashion using backpropagation.
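To make the supervised training loop concrete, here is a sketch of backpropagation on a tiny 1-3-1 network fitting y = x² (all sizes, the learning rate, and the target function are illustrative assumptions, not the tool's actual implementation):

```python
import math
import random

random.seed(1)

# Tiny 1-3-1 network: 3 tanh hidden units, linear output, squared-error loss.
H, lr = 3, 0.1
w1 = [random.uniform(-1, 1) for _ in range(H)]; b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]; b2 = 0.0
data = [(x / 10.0, (x / 10.0) ** 2) for x in range(-10, 11)]  # y = x^2 on [-1, 1]

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return h, sum(w2[j] * h[j] for j in range(H)) + b2

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

loss_before = mse()
for _ in range(500):
    for x, t in data:
        h, y = forward(x)
        err = y - t                               # dLoss/dOutput
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)    # backpropagate through tanh
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err
loss_after = mse()
```

The gradient of the loss flows backward from the output error through each hidden unit, and every weight moves a small step downhill, which is the essence of the backpropagation technique named above.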

Note: the large number of parameters in Deep Neural Networks makes them prone to overfitting when trained on small amounts of data. Enabling the early stopping option can help the model generalise better.

Use the built-in early stopping mechanism to optimize the number of epochs. The cross-validation scheme defined in the "Hyperparameters" tab will be used.
The optimizer stops if the loss does not decrease by at least {{hpSpace.early_stopping_threshold || "\"threshold\""}} for {{hpSpace.early_stopping_patience || "\"patience\""}} consecutive epochs.
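The patience/threshold rule above can be sketched as follows (the function name and the example loss trajectory are hypothetical, chosen only to illustrate the stopping condition):

```python
def early_stop_epoch(losses, patience=3, threshold=0.01):
    # Stop when the loss has not improved on the best value by at least
    # `threshold` for `patience` consecutive epochs.
    best, bad = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best - threshold:
            best, bad = loss, 0       # meaningful improvement: reset the counter
        else:
            bad += 1                  # stagnating epoch
            if bad >= patience:
                return epoch          # training would stop here
    return len(losses) - 1            # never triggered: run all epochs

# After epoch 2 the loss only improves by < 0.01 per epoch, so with
# patience=3 training stops after three stagnating epochs, at epoch 5.
stopped = early_stop_epoch([1.0, 0.6, 0.5, 0.495, 0.494, 0.493, 0.492])
```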
Number of training samples processed before the internal model parameters are updated. Popular values are powers of two such as 8, 16, 32, 64, or 128.
Maximum number of full passes over the entire training dataset. Higher values lead to better convergence but take more time.
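How batch size and epochs interact can be shown with a short sketch: each epoch shuffles the training set and yields mini-batches, and the model's parameters would be updated once per batch (the helper name and the 100-sample/32-batch figures are illustrative):

```python
import random

random.seed(0)

def minibatches(samples, batch_size):
    # One epoch: shuffle the dataset, then yield chunks of `batch_size`
    # samples; parameters are updated after each chunk.
    order = samples[:]
    random.shuffle(order)
    for i in range(0, len(order), batch_size):
        yield order[i:i + batch_size]

# With 100 samples and batch_size=32, one epoch performs 4 parameter
# updates (batches of 32, 32, 32, and a final partial batch of 4).
updates_per_epoch = sum(1 for _ in minibatches(list(range(100)), 32))
```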
Regularization that randomly zeroes elements of the hidden layers' input with probability {{hpSpace.dropout}}.
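A minimal sketch of this zeroing step, using the common "inverted dropout" scaling so the expected activation is unchanged (an implementation assumption on my part, not necessarily what the tool does internally):

```python
import random

random.seed(0)

def dropout(values, p):
    # Zero each element independently with probability p; scale the
    # survivors by 1/(1-p) so the expected value stays the same.
    return [0.0 if random.random() < p else v / (1.0 - p) for v in values]

out = dropout([1.0] * 1000, p=0.5)
kept = sum(1 for v in out if v != 0.0)   # roughly half the units survive
```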
L2 regularization adds a penalty on large weights and biases in the network, forcing them toward small values.
L1 regularization adds a penalty on weights that shrinks them towards 0, which can effectively switch off some neurons.
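The two penalties differ only in the term added to the loss: L2 grows quadratically with weight magnitude, while L1 grows linearly and drives weights exactly to zero. A sketch (the function names and the strength `lam` are illustrative):

```python
def l2_penalty(weights, lam):
    # Quadratic penalty: large weights are punished disproportionately,
    # so they are shrunk toward small (but usually nonzero) values.
    return lam * sum(w * w for w in weights)

def l1_penalty(weights, lam):
    # Absolute-value penalty: its gradient has constant magnitude, which
    # can push weights exactly to 0 and effectively ignore some neurons.
    return lam * sum(abs(w) for w in weights)

l2 = l2_penalty([0.5, -2.0], lam=0.1)  # 0.1 * (0.25 + 4.0) = 0.425
l1 = l1_penalty([0.5, -2.0], lam=0.1)  # 0.1 * (0.5 + 2.0) = 0.25
```

Note how the large weight (-2.0) dominates the L2 term far more than the L1 term, which is why L2 targets large weights specifically while L1 promotes sparsity.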