Individual prediction explanations are feature importances specific to a given sample.
When the model is linear (e.g. logistic regression, OLS), the explanation for a feature is simply
that feature's contribution to the prediction relative to the mean feature value as a baseline: coefficient * (feature value - mean feature value).
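The linear case can be sketched as follows. This is a minimal illustration with made-up coefficients and data, not any particular library's API; the per-feature explanations sum to the difference between the prediction for the sample and the prediction at the mean baseline.

```python
import numpy as np

# Hypothetical linear model: prediction = intercept + coefs @ x
coefs = np.array([2.0, -1.0, 0.5])
intercept = 1.0

# Training data used to compute the baseline (mean feature values)
X_train = np.array([
    [1.0, 0.0, 2.0],
    [3.0, 2.0, 0.0],
    [2.0, 1.0, 4.0],
])
baseline = X_train.mean(axis=0)

x = np.array([4.0, 1.0, 1.0])  # sample to explain

# Per-feature explanation: coefficient * (feature value - mean feature value)
explanations = coefs * (x - baseline)

# Sanity check: the explanations sum to the prediction for x
# minus the prediction at the baseline (mean) point.
pred_x = intercept + coefs @ x
pred_base = intercept + coefs @ baseline
assert np.isclose(explanations.sum(), pred_x - pred_base)
```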
As a generalization to arbitrary models, the explanation for a feature is the difference between the prediction for the sample and the average of the predictions obtained by replacing that feature's value with values drawn from the test dataset. This method approximates Shapley values, trading off speed against both bias and variance.
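The sampling procedure described above can be sketched for a black-box model. Everything here is an assumption for illustration: a toy `predict` function, a synthetic background dataset, and a one-feature-at-a-time perturbation (a cruder approximation than averaging over feature coalitions as full Shapley values would).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a nonlinear function of two features
def predict(X):
    return X[:, 0] ** 2 + 3.0 * X[:, 1]

X_test = rng.normal(size=(500, 2))  # background (test) dataset
x = np.array([1.5, -0.5])           # sample to explain

def explain_feature(j, n_samples=200):
    # Replace feature j of x with values drawn from the test dataset,
    # keeping the other features fixed.
    idx = rng.integers(0, len(X_test), size=n_samples)
    X_perturbed = np.tile(x, (n_samples, 1))
    X_perturbed[:, j] = X_test[idx, j]
    # Explanation: prediction for x minus the average perturbed prediction.
    return predict(x[None, :])[0] - predict(X_perturbed).mean()

explanations = [explain_feature(j) for j in range(x.shape[0])]
```

Larger `n_samples` reduces the variance of the Monte Carlo estimate at the cost of more model evaluations, which is the speed-versus-variance trade-off mentioned above.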
For classification problems, the explanations are computed on the log-odds of the predicted probability: log(p / (1 - p)).
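A small sketch of the log-odds (logit) transform; the function name is illustrative. Working in log-odds space is useful because a logistic regression model is linear there, so per-feature contributions add up.

```python
import numpy as np

def log_odds(p):
    # Log-odds (logit) of a probability p in (0, 1): log(p / (1 - p))
    p = np.asarray(p, dtype=float)
    return np.log(p / (1.0 - p))

print(log_odds(0.5))  # 0.0: even odds map to zero log-odds
```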