{{ "CUMULATIVE_LIFT" | mlMetricName: modelData.modeling.metrics }} is .
Could not compute lift data.
This can be caused by an extreme imbalance between the classes (e.g., a very rare occurrence of ).

Probabilities and lift

A binary classifier produces a probability that a given record is "positive" (here, that is ).
The lift is the ratio between the results of this model and the results obtained with a random model.
Lift curves are particularly useful for "targeting" problems (churn prevention, marketing campaign targeting, ...).

Cumulative Lift Curve

The goal of this curve is to visualize the benefits of using a model for targeting a subset of the population. The horizontal axis shows the percentage of the population that is targeted, and the vertical axis the percentage of positive records found.

The dotted diagonal illustrates a random model (i.e., targeting 40% of the population will find 40% of the positive records).

The wizard curve above shows a perfect model (there are {{ 100 * modelData.perf.liftVizData.wizard.positives / modelData.perf.liftVizData.wizard.total | number:0 }}% positive records in your test set, so a perfect model would only need to target this fraction of the population).

The other curves (one per fold) show the actual percentage of positives found by this model. The steeper the curve, the better.
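As a rough illustration (a minimal sketch, not the application's own implementation), the cumulative lift curve can be built by targeting records in order of decreasing predicted probability and tracking what fraction of all positives has been found so far:

```python
def cumulative_gains(probas, labels):
    """Return (pct_targeted, pct_positives_found) points for the
    cumulative lift curve: target records in decreasing order of
    predicted probability and count the positives found so far."""
    order = sorted(range(len(probas)), key=lambda i: probas[i], reverse=True)
    total_pos = sum(labels)
    found = 0
    points = [(0.0, 0.0)]
    for rank, i in enumerate(order, start=1):
        found += labels[i]
        points.append((rank / len(order), found / total_pos))
    return points
```

A random model would trace the diagonal (targeting x% finds x% of positives); a good model rises well above it.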

Per-bin lift chart

This chart sorts the observations by decreasing predicted probability and splits them into deciles. It shows the lift within each of the resulting bins.

If 20% of the records in your test set are positive, but 60% of the records in the first probability bin are, then the lift of this first bin is 3.
This means that targeting only the observations in this bin would yield 3 times as many positive results as a random sampling (equally sized bars at the level of the dotted line).
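The per-bin computation described above can be sketched as follows (an illustrative example, not the application's own code): sort by decreasing predicted probability, cut into equally sized bins, and divide each bin's positive rate by the overall positive rate.

```python
def per_bin_lift(probas, labels, n_bins=10):
    """Sort records by decreasing predicted probability, split them
    into n_bins equally sized bins, and return the lift of each bin
    (bin positive rate divided by overall positive rate)."""
    order = sorted(range(len(probas)), key=lambda i: probas[i], reverse=True)
    overall_rate = sum(labels) / len(labels)
    bin_size = len(order) // n_bins
    lifts = []
    for b in range(n_bins):
        idx = order[b * bin_size:(b + 1) * bin_size]
        bin_rate = sum(labels[i] for i in idx) / len(idx)
        lifts.append(bin_rate / overall_rate)
    return lifts
```

For instance, with 20% positives overall, a first bin containing 60% positives yields a lift of 0.6 / 0.2 = 3.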

The bars should decrease progressively from left to right, and the higher the bars on the left, the better.