Exploring the Response Variable and Concluding the Initial Exploration
We have now looked through all the features to see whether any data is missing, as well as to generally examine them. The features are important because they constitute the inputs to our machine learning algorithm. On the other side of the model lies the output, which is a prediction of the response variable. For our problem, this is a binary flag indicating whether or not a credit account will default next month.
The key task for the case study project is to come up with a predictive model for this target. Since the response variable is a yes/no flag, this problem is called a binary classification task. In our labeled data, the samples (accounts) that defaulted (that is, 'default payment next month' = 1) are said to belong to the positive class, while those that didn't are said to belong to the negative class.
The main piece of information to examine regarding the response of a binary classification problem is this: what is the proportion of the positive class? This is an easy check.
Before we perform this check, we load the packages we need with the following code:
import numpy as np #numerical computation
import pandas as pd #data wrangling
import matplotlib.pyplot as plt #plotting package
#Next line helps with rendering plots
%matplotlib inline
import matplotlib as mpl #add'l plotting functionality
mpl.rcParams['figure.dpi'] = 400 #high res figures
Now we load the cleaned version of the case study data like this:
df = pd.read_csv('../../Data/Chapter_1_cleaned_data.csv')
Note
The cleaned dataset should have been saved as a result of your work in Chapter 1, Data Exploration and Cleaning. The path to the cleaned data in the preceding code snippet may be different if you saved it in a different location.
Now, to find the proportion of the positive class, all we need to do is get the average of the response variable over the whole dataset. This has the interpretation of the default rate. It's also worthwhile to check the number of samples in each class, using groupby and count in pandas. This is presented in the following screenshot:
Since the target variable is 1 or 0, taking the mean of this column indicates the fraction of accounts that defaulted: 22%. The proportion of samples in the positive class (default = 1), also called the class fraction for this class, is an important statistic. In binary classification, datasets are described in terms of being balanced or imbalanced: are the proportions of the positive and negative classes equal or not? Most machine learning classification models are designed to work with balanced data: a 50/50 split between the classes.
However, in practice, real data is rarely balanced. Consequently, there are several methods geared toward dealing with imbalanced data. These include the following:
- Undersampling the majority class: Randomly throwing out samples from the majority class until the class fractions are equal, or at least less imbalanced.
- Oversampling the minority class: Randomly adding duplicate samples of the minority class to achieve the same goal.
- Weighting samples: This method is performed as part of the training step, so the minority class collectively has as much "emphasis" as the majority class in the trained model. The effect of this is similar to oversampling.
- More sophisticated methods, such as Synthetic Minority Over-sampling Technique (SMOTE).
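The first two methods can be sketched directly in pandas. The following is a minimal illustration on hypothetical imbalanced toy data (the column name matches the case study; the counts and random_state are arbitrary):

```python
import pandas as pd

# Hypothetical imbalanced toy data: 90 negative and 10 positive samples
df = pd.DataFrame({'default payment next month': [0] * 90 + [1] * 10})

pos = df[df['default payment next month'] == 1]
neg = df[df['default payment next month'] == 0]

# Undersampling: randomly keep only as many majority-class (negative)
# samples as there are minority-class (positive) samples
under = pd.concat([pos, neg.sample(n=len(pos), random_state=1)])

# Oversampling: randomly duplicate minority-class samples (sampling
# with replacement) until the classes are the same size
over = pd.concat([neg, pos.sample(n=len(neg), replace=True, random_state=1)])

# Both resampled datasets now have a 50/50 class split
print(under['default payment next month'].mean())
print(over['default payment next month'].mean())
```

Sample weighting, by contrast, leaves the data untouched; for example, many scikit-learn classifiers accept a class_weight='balanced' argument that achieves a similar effect during training.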
While our data is not, strictly speaking, balanced, we also note that a positive class fraction of 22% is not particularly imbalanced, either. Some domains, such as fraud detection, typically deal with much smaller positive class fractions: on the order of 1% or less. This is because the proportion of "bad actors" is quite small compared to the total population of transactions; at the same time, it is important to be able to identify them if possible. For problems like this, it is more likely that using a method to address class imbalance will lead to substantially better results.
Now that we've explored the response variable, we have concluded our initial data exploration. However, data exploration should be considered an ongoing task that you should continually have in mind during any project. As you create models and generate new results, it's always good to think about what those results imply about the data, which usually requires a quick iteration back to the exploration phase. A particularly helpful kind of exploration, which is also typically done before model building, is examining the relationship between features and the response. We gave a preview of that in Chapter 1, Data Exploration and Cleaning, when we were grouping by the EDUCATION
feature and examining the mean of the response variable. We will also do more of this later. However, this has more to do with building a model than checking the inherent quality of the data.
The initial perusal through all the data that we have just completed is an important foundation to lay at the beginning of a project. As you do this, you should ask yourself the following questions:
- Is the data complete? Are there missing values or other anomalies?
- Is the data consistent? Does the distribution change over time, and if so, is this expected?
- Does the data make sense? Do the values of the features fit with their definition in the data dictionary?
The latter two questions help you determine whether you think the data is correct. If the answer to any of these questions is "no," this should be addressed before continuing the project.
Also, if you think of any alternative or additional data that might be helpful to have and is possible to get, now would be a good point in the project life cycle to augment your dataset with it. Examples of this may include postal code-level demographic data, which you could join to your dataset if you had the addresses associated with accounts. We don't have these for the case study data and have decided to proceed on this project with the data we have now.