| | Predicted positive | Predicted negative | Total |
|---|---|---|---|
| Actually positive | | | |
| Actually negative | | | |
| Total | | | |
This model was evaluated using {{ modelData.splitDesc.params.nFolds || 'K' }}-fold cross-test.
The “optimal” cut-off was found by optimizing for .
A classifier produces a probability that a given object belongs to the positive class. The threshold (or “cut-off”) is the value above which the prediction is considered positive. If it is set too low, the model predicts the positive class too often; if set too high, too rarely.
One way to assess a classification model's performance is to use a "confusion matrix", which compares actual values (from the test set) to predicted values. Be careful, though: the figures are highly dependent on the probability cutoff chosen to classify a record. Depending on your use case, you might want to adjust the cutoff to optimize a specific metric.
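As a minimal sketch (not the application's own code), the following shows how a probability cutoff turns predicted probabilities into the four cells of a confusion matrix, and how lowering the cutoff shifts predictions toward the positive class. The function name and the toy data are illustrative assumptions.

```python
def confusion_counts(y_true, y_prob, cutoff=0.5):
    """Count TP/FP/FN/TN for binary labels (0/1) at the given probability cutoff."""
    tp = fp = fn = tn = 0
    for actual, prob in zip(y_true, y_prob):
        predicted = prob >= cutoff  # at or beyond the cutoff => predicted positive
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
        else:
            tn += 1
    return {"tp": tp, "fp": fp, "fn": fn, "tn": tn}

# Toy example: lowering the cutoff converts a false negative into a true
# positive here, at the risk of more false positives on other data.
y_true = [1, 1, 0, 0, 1]
y_prob = [0.9, 0.4, 0.6, 0.2, 0.8]
print(confusion_counts(y_true, y_prob, cutoff=0.5))  # {'tp': 2, 'fp': 1, 'fn': 1, 'tn': 1}
print(confusion_counts(y_true, y_prob, cutoff=0.3))  # {'tp': 3, 'fp': 1, 'fn': 0, 'tn': 1}
```

Which cutoff is "optimal" depends on the metric you care about: a recall-sensitive use case favors a lower cutoff, a precision-sensitive one a higher cutoff.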