Random Forest

A Random Forest is made of many decision trees. To classify a record, each tree in the forest predicts a class for it, and each prediction counts as a "vote". The forest then chooses the class with the most votes as its final answer.
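The voting step can be sketched in a few lines of plain Python. The "trees" below are toy stand-in functions (the feature names `caps` and `links` are invented for illustration); a real forest would use trained decision trees.

```python
from collections import Counter

def forest_predict(trees, record):
    """Each tree votes for a class; the class with the most votes wins."""
    votes = [tree(record) for tree in trees]  # one vote per tree
    return Counter(votes).most_common(1)[0][0]

# Three toy "trees", each a function mapping a record to a class label.
trees = [
    lambda r: "spam" if r["caps"] > 5 else "ham",
    lambda r: "spam" if r["links"] > 2 else "ham",
    lambda r: "ham",
]

print(forest_predict(trees, {"caps": 9, "links": 3}))  # two of three trees vote "spam"
```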

A decision tree is a simple model built by recursively splitting the training data. Each internal node of the tree tests a condition on one of the input features; a record is routed down the matching branch until it reaches a leaf, which assigns the prediction.
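A single tree is therefore just a cascade of feature tests. A minimal hand-written sketch (the features `age` and `income` and the labels are hypothetical, purely for illustration):

```python
def tree_predict(record):
    # Each node tests a condition on one input feature,
    # then routes the record to the left or right branch.
    if record["age"] < 30:
        if record["income"] > 50_000:
            return "approve"   # leaf
        return "reject"        # leaf
    return "approve"           # leaf

print(tree_predict({"age": 25, "income": 60_000}))  # → approve
```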

When "growing" (ie, training) the forest:

- each tree is trained on a random sample of the training records, drawn with replacement (bagging);
- at each split, only a random subset of the input features is considered.

This injected randomness decorrelates the trees, so their combined vote is more robust than any single tree.
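A minimal sketch of the two sources of randomness typically used when growing a forest, row sampling with replacement and per-split feature subsets, using only the standard library:

```python
import random

def bootstrap_sample(rows, rng):
    """Draw len(rows) rows with replacement (bagging)."""
    return [rng.choice(rows) for _ in rows]

def feature_subset(features, rng):
    """Random subset of features considered at one split (~sqrt of the total)."""
    k = max(1, int(len(features) ** 0.5))
    return rng.sample(features, k)

rng = random.Random(0)
rows = list(range(10))
print(bootstrap_sample(rows, rng))                 # 10 rows, some repeated
print(feature_subset(["a", "b", "c", "d"], rng))   # 2 of the 4 features
```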
Random Forests generally provide good results, at the expense of "explainability" of the model.

The main training settings are:

- Features per split: adjusts the number of input features to sample at each split.
- Number of cores: cores used for parallel training. Using more cores leads to faster training, at the expense of higher memory consumption, especially for large training datasets.
- Sparse matrices: allows DSS to use sparse matrices to train the model, which may reduce RAM and CPU usage.
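These settings have direct counterparts in scikit-learn's `RandomForestClassifier`. A sketch, assuming scikit-learn is available (DSS's own parameter names and defaults may differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = RandomForestClassifier(
    n_estimators=100,
    max_features="sqrt",  # features sampled at each split
    n_jobs=2,             # CPU cores used for parallel training
    random_state=0,
)
clf.fit(X, y)
print(clf.score(X, y))    # accuracy on the training set
```

Sparse input is handled the same way: `fit` also accepts a `scipy.sparse` matrix for `X`, which can cut memory use on wide, mostly-zero datasets.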