[Confusion matrix table: rows "Actually" positive/negative, columns "Predicted" positive/negative, with row and column totals]

This model was evaluated using {{ modelData.splitDesc.params.nFolds || 'K' }}-fold cross-validation.

  • The record counts in the confusion matrix and cost matrix represent the first fold only.
  • The metrics in the histogram are averaged across all folds.
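The fold-averaging described above can be sketched as follows. This is an illustrative sketch only, not the product's actual implementation; `k_fold_indices`, `cross_validate`, and the toy data are hypothetical names introduced here.

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds (illustrative scheme)."""
    fold_size = n // k
    folds = []
    for i in range(k):
        start = i * fold_size
        end = (i + 1) * fold_size if i < k - 1 else n
        folds.append(list(range(start, end)))
    return folds

def cross_validate(records, labels, train_fn, metric_fn, k=5):
    """Train on k-1 folds, score on the held-out fold, and average the metric."""
    folds = k_fold_indices(len(records), k)
    scores = []
    for held_out in folds:
        train_idx = [i for i in range(len(records)) if i not in held_out]
        model = train_fn([records[i] for i in train_idx],
                         [labels[i] for i in train_idx])
        preds = [model(records[i]) for i in held_out]
        scores.append(metric_fn(preds, [labels[i] for i in held_out]))
    # Per-fold scores, plus the average reported in the histogram
    return scores, sum(scores) / len(scores)
```

The per-fold scores correspond to what each fold's confusion matrix would yield; the returned mean is the averaged metric.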

“Optimal” cut was found by optimizing for .

Description

A classifier produces a probability that a given object belongs to the class (e.g. that is ). The threshold (or “cut-off”) is the probability above which the prediction is considered positive. If it is set too low, the model predicts the positive class too often; if set too high, too rarely.
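The effect of the cut-off can be shown with a minimal sketch; the probabilities and cut-off values below are illustrative, not taken from any actual model.

```python
def apply_cutoff(probabilities, cutoff):
    """Label a record positive (1) when its probability exceeds the cut-off."""
    return [1 if p > cutoff else 0 for p in probabilities]

probs = [0.15, 0.40, 0.55, 0.80, 0.95]

low = apply_cutoff(probs, 0.2)   # low cut-off: positive predicted often  -> [0, 1, 1, 1, 1]
high = apply_cutoff(probs, 0.9)  # high cut-off: positive predicted rarely -> [0, 0, 0, 0, 1]
```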

One way to assess a classification model's performance is to use a "confusion matrix", which compares actual values (from the test set) to predicted values. Be careful, though: the figures are highly dependent on the probability cut-off chosen to classify a record. Depending on your use case, you might want to adjust the cut-off to optimize a specific metric.
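To see why the matrix depends on the cut-off, here is a hedged sketch of computing the four counts from actual labels and predicted probabilities; the function name and data are illustrative.

```python
def confusion_matrix(actual, probabilities, cutoff):
    """Return (true_pos, false_pos, false_neg, true_neg) counts at a cut-off."""
    tp = fp = fn = tn = 0
    for y, p in zip(actual, probabilities):
        predicted = 1 if p > cutoff else 0
        if predicted == 1 and y == 1:
            tp += 1
        elif predicted == 1 and y == 0:
            fp += 1
        elif predicted == 0 and y == 1:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

actual = [1, 1, 0, 0, 1, 0]
probs  = [0.9, 0.6, 0.4, 0.2, 0.3, 0.7]

# The same probabilities yield different matrices at different cut-offs:
at_half    = confusion_matrix(actual, probs, 0.5)   # (2, 1, 1, 2)
at_quarter = confusion_matrix(actual, probs, 0.25)  # (3, 2, 0, 1)
```

Lowering the cut-off converts false negatives into true positives but also creates false positives, which is why the matrix should always be read together with the cut-off that produced it.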
