To measure the effectiveness of a predictive model, you can choose from several available tests. But the trick is to select the evaluation criteria most appropriate to your problem, so that your model doesn't remain feasible only in theory.
In this post I demonstrate a few techniques for evaluating predictive models.
Percentage of Cases Correctly Classified (CCR)
In classification problems, the percentage of Cases Correctly Classified (CCR) is the most obvious accuracy measure. However, let me first cite an example where this measure can give misleading results.
Consider a medical test designed to predict whether a patient is HIV positive, and suppose a classifier simply predicts that no one is positive. If only 1 case out of 200 is actually positive, the classifier achieves 99.5% accuracy even though it fails to detect the one positive case. Yet the classifier is useless, because the whole purpose of the test is to identify those rare cases so that lives can be saved.
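The pitfall above is easy to reproduce. Here is a minimal sketch in plain Python (the data are made up to match the 1-in-200 scenario): an all-negative classifier scores 99.5% CCR while missing the only positive case.

```python
# 200 cases, exactly 1 positive (1 = positive, 0 = negative).
actual = [1] + [0] * 199
# A useless classifier that predicts "negative" for everyone.
predicted = [0] * 200

# CCR: fraction of cases where prediction matches reality.
correct = sum(a == p for a, p in zip(actual, predicted))
ccr = correct / len(actual)
print(f"CCR: {ccr:.1%}")  # 199 of 200 correct = 99.5%, yet the positive case is missed
```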
When comparing predictions against actual values, there are four possible outcomes:

- True positive: a value is correctly classified as positive.
- True negative: a value is correctly classified as negative.
- False positive: a value is wrongly classified as positive, such as detecting a disease that is not present.
- False negative: a value is wrongly classified as negative, such as reporting a sick person as healthy.

We can arrange these counts in a 2×2 contingency table and compare actual values with predicted values.
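The four outcomes can be counted and laid out as a 2×2 contingency table. A minimal sketch in plain Python, using small made-up label vectors for illustration:

```python
# Illustrative labels: 1 = positive, 0 = negative (data invented for this example).
actual    = [1, 1, 0, 0, 1, 0, 0, 0]
predicted = [1, 0, 0, 1, 1, 0, 0, 0]

pairs = list(zip(actual, predicted))
tp = sum(a == 1 and p == 1 for a, p in pairs)  # true positives
tn = sum(a == 0 and p == 0 for a, p in pairs)  # true negatives
fp = sum(a == 0 and p == 1 for a, p in pairs)  # false positives
fn = sum(a == 1 and p == 0 for a, p in pairs)  # false negatives

# 2x2 contingency table: rows are actual values, columns are predictions.
print("            Predicted +   Predicted -")
print(f"Actual +    {tp:^11}   {fn:^11}")
print(f"Actual -    {fp:^11}   {tn:^11}")
```

CCR is then simply (tp + tn) divided by the total number of cases, which makes clear why it hides a large fn when positives are rare.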