Increasing Overall Model Performance on Unseen Data with Cross Validation

 

Stephen Cheng

Intro

When adjusting models we are aiming to increase overall model performance on unseen data. Hyperparameter tuning can lead to much better performance on test sets. However, optimizing parameters against the test set causes information leakage, which makes the model perform worse on truly unseen data. To correct for this we can perform cross validation (CV). To better understand CV, we will be applying different methods to the iris dataset.
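
As a concrete illustration of what the intro describes, hyperparameters can be tuned with CV on the training data only, so the final test set stays untouched. This is a minimal sketch, not code from the original post; the 80/20 split and the max_depth grid are assumptions chosen just for illustration.

from sklearn import datasets
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = datasets.load_iris(return_X_y=True)

# Hold out a final test set that the tuning process never sees.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

# Tune hyperparameters with 5-fold CV on the training data only.
param_grid = {"max_depth": [2, 3, 4, None]}  # illustrative grid (assumption)
search = GridSearchCV(DecisionTreeClassifier(random_state=42), param_grid, cv=5)
search.fit(X_train, y_train)

# The untouched test set now gives an unbiased estimate of performance.
print(search.best_params_, search.score(X_test, y_test))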

K-Fold

The training data is split into k smaller sets (folds). The model is trained on k-1 of those folds, and the remaining fold is used as a validation set to evaluate it; this is repeated until every fold has served as the validation set once. Since we will be classifying different species of iris flowers, we need a classifier; for this exercise we will use a DecisionTreeClassifier.

from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold, StratifiedKFold
from sklearn.model_selection import LeaveOneOut, LeavePOut
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Load the iris features and labels, and set up the classifier.
X, y = datasets.load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# 5-fold CV: train on 4 folds, validate on the remaining one, 5 times.
k_folds = KFold(n_splits=5)
scores = cross_val_score(clf, X, y, cv=k_folds)

print("Cross Validation Scores: ", scores[:3])
print("Average CV Score: ", scores.mean())
print("Number of CV Scores used in Average: ", len(scores))
Cross Validation Scores:  [1.         1.         0.83333333]
Average CV Score: 0.9133333333333333
Number of CV Scores used in Average: 5
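
To make the k-1/1 mechanics explicit, here is a minimal sketch (my addition, reusing X, y, clf, and k_folds from the snippet above) that loops over the folds manually instead of calling cross_val_score:

# Manual version of what cross_val_score does with KFold:
# fit on 4 folds, score on the held-out fold, repeat for all 5 folds.
for fold, (train_idx, val_idx) in enumerate(k_folds.split(X)):
    clf.fit(X[train_idx], y[train_idx])
    print(f"Fold {fold}: {clf.score(X[val_idx], y[val_idx]):.3f}")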

Stratified K-Fold

In cases where classes are imbalanced we need a way to account for the imbalance in both the training and validation sets. To do so we can stratify on the target classes, meaning that each fold preserves the overall class proportions. While the number of folds is the same, the average CV score increases from the basic k-fold once the classes are stratified.

# Stratified 5-fold: each fold preserves the class proportions of y.
sk_folds = StratifiedKFold(n_splits=5)
scores = cross_val_score(clf, X, y, cv=sk_folds)

print("Cross Validation Scores: ", scores[:3])
print("Average CV Score: ", scores.mean())
print("Number of CV Scores used in Average: ", len(scores))
Cross Validation Scores:  [0.96666667 0.96666667 0.9]
Average CV Score: 0.9533333333333334
Number of CV Scores used in Average: 5
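
The stratification can be checked directly. This short sketch (my addition, reusing X, y, and sk_folds from above, plus numpy for counting) prints the class counts in each validation fold; with 150 samples and 5 folds, each fold should contain 10 samples of each of the 3 species:

import numpy as np

# Each validation fold keeps the 1/3-1/3-1/3 class balance of the full dataset.
for fold, (train_idx, val_idx) in enumerate(sk_folds.split(X, y)):
    print(f"Fold {fold} class counts:", np.bincount(y[val_idx]))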

Leave-One-Out (LOO)

Instead of selecting the number of splits in the training data set like k-fold, LeaveOneOut uses 1 observation for validation and n-1 observations for training, repeating until every observation has been held out once. This method is an exhaustive technique.

# Leave-One-Out: one split per observation in the dataset.
loo = LeaveOneOut()
scores = cross_val_score(clf, X, y, cv=loo)

print("Cross Validation Scores: ", scores[:3])
print("Average CV Score: ", scores.mean())
print("Number of CV Scores used in Average: ", len(scores))
Cross Validation Scores:  [1. 1. 1.]
Average CV Score: 0.94
Number of CV Scores used in Average: 150

We can observe that the number of cross validation scores equals the number of observations in the dataset. In this case there are 150 observations in the iris dataset, and the average CV score is 94%.
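
This can be confirmed without running the model at all, since the splitter reports how many splits it will produce (a small sketch, reusing X from above):

# One split per observation: returns 150 for the iris dataset.
print("Number of splits:", LeaveOneOut().get_n_splits(X))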

Leave-P-Out (LPO)

Leave-P-Out is simply a nuanced variation of the Leave-One-Out idea, in that we can select the number of observations p to hold out in our validation set.

# Leave-P-Out with p=2: hold out every possible pair of observations.
lpo = LeavePOut(p=2)
scores = cross_val_score(clf, X, y, cv=lpo)

print("Cross Validation Scores: ", scores[:3])
print("Average CV Score: ", scores.mean())
print("Number of CV Scores used in Average: ", len(scores))
Cross Validation Scores:  [1. 1. 1.]
Average CV Score: 0.9382997762863534
Number of CV Scores used in Average: 11175

As we can see, this is an exhaustive method with many more scores being calculated than Leave-One-Out, even with p = 2, yet it achieves roughly the same average CV score.
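
The 11175 figure is simply the number of ways to choose 2 held-out observations from 150. A quick sketch (my addition, reusing X from above; math.comb is used only to check the combinatorics):

import math

# LeavePOut(p=2) evaluates every possible pair of held-out observations.
print(math.comb(len(X), 2))             # 11175 = C(150, 2)
print(LeavePOut(p=2).get_n_splits(X))   # the splitter reports the same count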

Shuffle Split

Unlike KFold, ShuffleSplit can leave out a percentage of the data that is used in neither the training nor the validation sets. To do so we must decide the train and test sizes, as well as the number of splits.

# Random splits: 60% train, 30% test, leaving ~10% of the data unused each time.
ss = ShuffleSplit(train_size=0.6, test_size=0.3, n_splits=5)
scores = cross_val_score(clf, X, y, cv=ss)

print("Cross Validation Scores: ", scores[:3])
print("Average CV Score: ", scores.mean())
print("Number of CV Scores used in Average: ", len(scores))
Cross Validation Scores:  [0.93333333 0.93333333 0.97777778]
Average CV Score: 0.9511111111111111
Number of CV Scores used in Average: 5
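
To see the left-out portion, this short sketch (my addition, reusing X and ss from above) prints how many samples land in each split; with 150 samples, train_size=0.6 and test_size=0.3, roughly 15 samples go unused in every split:

# About 90 train, 45 test, and 15 unused samples per split.
for fold, (train_idx, test_idx) in enumerate(ss.split(X)):
    unused = len(X) - len(train_idx) - len(test_idx)
    print(f"Split {fold}: train={len(train_idx)}, test={len(test_idx)}, unused={unused}")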

Aug 6, 2021
