Evaluating predictive models
To evaluate predictive models, you start by splitting your dataset into two disjoint subsets: a training set and a test set. There is no strict rule for how to perform this division; a common starting point is to use 70% of the data for training and 30% for testing. You train the model on the training set. After the model is trained, you use it on the test set to predict the values of the target variable. Because the value of the target variable is also known for the test set, you can measure how well the model predicts, and you can compare different models.
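The last step described above, comparing the predictions against the known target values of the test set, can be sketched directly in T-SQL. The following is a minimal illustration, not code from this chapter: it assumes a hypothetical #TestPredictions table holding one scored test case per row, and computes the simplest possible measure, the percentage of correct predictions.

```sql
-- Hypothetical scored test set: one row per test case,
-- with the known actual value and the model's prediction.
CREATE TABLE #TestPredictions (
    CaseId    INT          NOT NULL,
    Actual    NVARCHAR(10) NOT NULL,
    Predicted NVARCHAR(10) NOT NULL
);

INSERT INTO #TestPredictions (CaseId, Actual, Predicted)
VALUES (1, N'Yes', N'Yes'),
       (2, N'No',  N'Yes'),
       (3, N'No',  N'No'),
       (4, N'Yes', N'Yes');

-- Accuracy: the share of test cases where the prediction
-- matches the known actual value.
SELECT 100.0 * SUM(CASE WHEN Actual = Predicted THEN 1 ELSE 0 END)
             / COUNT(*) AS AccuracyPct
FROM #TestPredictions;
```

Accuracy is only one of several possible measures; the same pattern of joining predictions to known values underlies the others as well.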
There are quite a few possible measures, which I will explain in the next few paragraphs. Note that you can also use the same data for training and for testing. Although the predictions you get this way are typically too good (better than with a separate test set), you can still compare different models.
Let me start with T-SQL code that selects 30% of the data from the dbo.vTargetMail view for the test set...