Single Layer Perceptron
Neural Networks are a class of parametric models inspired by the functioning of biological neurons.

They consist of several "hidden" layers of neurons: each layer mixes its inputs, applies a non-linearity, and passes the result on to the next layer, allowing for a complex decision function.

This algorithm offers a network with a single hidden layer. To use neural networks with multiple hidden layers, a separate Deep-Learning visual analysis must be created.
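To illustrate how a single hidden layer mixes inputs and applies a non-linearity, here is a minimal pure-Python sketch of the forward pass. The weights and network shape are invented for illustration only:

```python
import math

def forward(x, W_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network.

    Each hidden neuron mixes the inputs (weighted sum plus bias) and
    applies a non-linearity (tanh); the output neuron does the same
    with a logistic sigmoid, yielding a score between 0 and 1.
    """
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
              for row, b in zip(W_hidden, b_hidden)]
    z = sum(wi * hi for wi, hi in zip(w_out, hidden)) + b_out
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative weights: 2 inputs, 2 hidden neurons, 1 output.
W_hidden = [[1.0, -1.0], [-1.0, 1.0]]
b_hidden = [0.0, 0.0]
w_out, b_out = [1.0, 1.0], 0.0
score = forward([0.5, 0.2], W_hidden, b_hidden, w_out, b_out)
```

With more hidden layers this composition is simply repeated, which is what the separate Deep-Learning analysis mentioned above provides.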

The activation function for the neurons in the network.
L2 regularization parameter. Higher values lead to smaller neuron weights and a more generalizable, though less sharp, model.
Maximum iterations for learning. Higher values lead to better convergence, but take more time.
If the loss does not improve by this ratio over two consecutive iterations, training stops.
Whether the model should set aside part of the training data for validation and stop early when the validation score stops improving.
The proportion of the training set to use for validation.
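The tolerance-based stopping rule described above can be sketched as follows. The loss sequence and helper name are invented for illustration; real training computes the loss from the data at each iteration:

```python
def train_until_converged(losses, tol=1e-4, n_iter_no_change=2):
    """Stop when the loss fails to improve by at least `tol`
    for `n_iter_no_change` consecutive iterations.

    `losses` stands in for the loss observed at each iteration;
    returns the number of iterations actually run.
    """
    best = float("inf")
    no_change = 0
    for it, loss in enumerate(losses, start=1):
        if loss < best - tol:
            best = loss
            no_change = 0
        else:
            no_change += 1
            if no_change >= n_iter_no_change:
                return it
    return len(losses)

# Loss improves, then plateaus: training stops shortly after the plateau.
iters = train_until_converged([1.0, 0.5, 0.25, 0.25, 0.25])
```

When early stopping is enabled, the same rule is applied to the validation score instead of the training loss.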
The solver to use for optimization. LBFGS is a full-batch algorithm and is not suited to larger datasets.
Whether the data should be shuffled between epochs (recommended, unless the data is already in random order).
The initial learning rate for gradient descent.
Whether batches should be created automatically (uses 200 samples, or the whole dataset if there are fewer). Uncheck to choose the batch size manually.
The number of samples to include in each mini-batch.
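The automatic batch-size rule and the mini-batch split can be sketched as below. The function name is hypothetical; the 200-sample default is the one stated above:

```python
def make_batches(n_samples, batch_size=None):
    """Split sample indices into mini-batches.

    If `batch_size` is None, it is chosen automatically as
    min(200, n_samples), matching the rule described above.
    """
    if batch_size is None:
        batch_size = min(200, n_samples)
    indices = list(range(n_samples))
    return [indices[i:i + batch_size] for i in range(0, n_samples, batch_size)]

batches = make_batches(450)  # automatic: batch size 200 -> sizes 200, 200, 50
```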
beta_1 parameter for ADAM: the exponential decay rate for the first-moment (mean gradient) estimates.
beta_2 parameter for ADAM: the exponential decay rate for the second-moment (squared gradient) estimates.
epsilon parameter for ADAM: a small constant added for numerical stability.
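A single Adam update, showing where beta_1, beta_2 and epsilon enter the computation. This is a textbook sketch for a scalar parameter, not the analysis's internal code:

```python
def adam_step(param, grad, m, v, t, lr=0.001,
              beta_1=0.9, beta_2=0.999, epsilon=1e-8):
    """One Adam update step.

    beta_1 decays the running mean of gradients (first moment),
    beta_2 decays the running mean of squared gradients (second moment),
    epsilon guards against division by zero.
    """
    m = beta_1 * m + (1 - beta_1) * grad          # first-moment estimate
    v = beta_2 * v + (1 - beta_2) * grad ** 2     # second-moment estimate
    m_hat = m / (1 - beta_1 ** t)                 # bias correction for step t
    v_hat = v / (1 - beta_2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + epsilon)
    return param, m, v

p, m, v = adam_step(param=1.0, grad=1.0, m=0.0, v=0.0, t=1)
```

On the first step the bias correction makes the update roughly equal to the learning rate, regardless of the gradient's scale.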
The policy for learning rate annealing.
The exponent for an inverse scaling learning rate.
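Under an inverse-scaling policy, as in common implementations, the effective learning rate decays as lr_init / t ** power_t, where power_t is the exponent above. A hedged sketch:

```python
def inv_scaling_lr(lr_init, t, power_t=0.5):
    """Inverse-scaling schedule: the learning rate at step t
    is lr_init divided by t raised to the exponent power_t."""
    return lr_init / t ** power_t

lr = inv_scaling_lr(lr_init=0.5, t=4)  # 0.5 / sqrt(4) = 0.25
```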
The momentum coefficient for stochastic gradient descent.
Whether the Nesterov accelerated gradient technique should be used for momentum.
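Classical momentum and the Nesterov variant can be sketched as follows. This is one common "look-ahead" formulation; the gradient function is invented for illustration:

```python
def momentum_step(param, velocity, grad_fn, lr=0.1, momentum=0.9,
                  nesterov=False):
    """One SGD-with-momentum update for a scalar parameter.

    Classical momentum evaluates the gradient at the current parameters;
    Nesterov evaluates it at the look-ahead point param + momentum*velocity,
    which often converges faster.
    """
    if nesterov:
        grad = grad_fn(param + momentum * velocity)
    else:
        grad = grad_fn(param)
    velocity = momentum * velocity - lr * grad
    return param + velocity, velocity

# Minimizing f(x) = x^2 (gradient 2x), purely illustrative.
grad_fn = lambda x: 2.0 * x
p, v = momentum_step(1.0, 0.0, grad_fn)                         # classical
p_nag, v_nag = momentum_step(1.0, 0.0, grad_fn, nesterov=True)  # Nesterov
```

With zero initial velocity the two variants coincide; they differ from the second step onward, once the velocity term shifts the look-ahead point.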