Metrics definitions

Accuracy
:
Proportion of correct predictions among all records ("positive" and "negative"). It is less informative than the F1-score on imbalanced datasets.

Precision
:
Proportion of correct predictions among records predicted "positive".

Recall
:
Proportion of actual "positive" records that are correctly predicted "positive".

F1-score
:
Harmonic mean of precision and recall.
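The four metrics above can be computed directly from the confusion-matrix counts. A minimal sketch in plain Python, using made-up binary labels (1 = "positive", 0 = "negative"):

```python
# Illustrative labels only; not from a real dataset.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives

accuracy = (tp + tn) / len(pairs)                 # correct / all records
precision = tp / (tp + fp)                        # correct positives / predicted positives
recall = tp / (tp + fn)                           # correct positives / actual positives
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean

print(accuracy, precision, recall, f1)  # → 0.75 0.75 0.75 0.75
```

In practice a library such as scikit-learn provides these metrics, but the hand-rolled version makes the definitions explicit.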

Cost matrix gain

You can also evaluate the average gain per record that the test set would yield by specifying a gain for each outcome, e.g. you gain $1 for each correct prediction but $-0.4 (i.e. you lose $0.4) if that prediction turns out to be incorrect.
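A minimal sketch of this computation, assuming an illustrative gain matrix ($1 for a correct prediction, -$0.4 for an incorrect one, for both classes) and made-up labels:

```python
# (actual, predicted) -> gain in dollars; values are illustrative, not prescribed.
gain = {
    (1, 1): 1.0,   # correct "positive" prediction
    (0, 0): 1.0,   # correct "negative" prediction
    (1, 0): -0.4,  # incorrect "negative" prediction
    (0, 1): -0.4,  # incorrect "positive" prediction
}

y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]

# Average gain per record over the test set.
avg_gain = sum(gain[(t, p)] for t, p in zip(y_true, y_pred)) / len(y_true)
```

Unlike accuracy, this weighs each kind of error by its real-world cost, so two models with the same accuracy can yield very different average gains.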