When adjusting models, we are aiming to increase overall performance on unseen data. Hyperparameter tuning can lead to much better performance on test sets; however, optimizing parameters against the test set can lead to information leakage, causing the model to perform worse on truly unseen data. To correct for this we can perform cross validation (CV). To better understand CV, we will run several different methods on the iris dataset.
In k-fold cross validation, the training data is split into k smaller sets (folds). The model is trained on k-1 of the folds, and the remaining fold is used as a validation set to evaluate it; this is repeated so that each fold serves as the validation set once. As we will be trying to classify different species of iris flowers, we will need to import a classifier model; for this exercise we will be using a DecisionTreeClassifier.
from sklearn import datasets
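Building on that import, a complete k-fold run might look like the following sketch. The n_splits=5 and random_state=42 values are assumptions made for illustration (the stratified comparison below uses the same number of folds), so the exact scores printed can differ from the output shown next.

```python
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold, cross_val_score

# Load the iris features and species labels
X, y = datasets.load_iris(return_X_y=True)

# The classifier named above; random_state is fixed only for repeatability
clf = DecisionTreeClassifier(random_state=42)

# Basic k-fold splitter; 5 folds assumed here
k_folds = KFold(n_splits=5)

# Train on k-1 folds and score on the held-out fold, once per fold
scores = cross_val_score(clf, X, y, cv=k_folds)

print("Cross Validation Scores:", scores)
print("Average CV Score:", scores.mean())
```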
Cross Validation Scores: [1. 1. 0.83333333]
In cases where classes are imbalanced, we need a way to account for the imbalance in both the train and validation sets. To do so we can stratify on the target classes, meaning that each fold preserves roughly the same proportion of every class as the full dataset. While the number of folds is the same, the average CV score increases over the basic k-fold when the classes are stratified.
sk_folds = StratifiedKFold(n_splits=5)
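A full stratified run follows the same pattern as the k-fold sketch above, swapping in the splitter defined on the line just shown; the data loading and classifier are again assumptions carried over from before.

```python
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# Each of the 5 folds keeps roughly the same class proportions as the full dataset
sk_folds = StratifiedKFold(n_splits=5)

scores = cross_val_score(clf, X, y, cv=sk_folds)

print("Cross Validation Scores:", scores)
print("Average CV Score:", scores.mean())
```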
Cross Validation Scores: [0.96666667 0.96666667 0.9]
Instead of selecting the number of splits like k-fold does, LeaveOneOut uses 1 observation to validate and the remaining n-1 observations to train, repeating this once per observation. This method is an exhaustive technique.
loo = LeaveOneOut()
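The full leave-one-out run, again as a sketch with the same assumed setup, would be:

```python
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# One split per observation: each of the 150 iris samples is the validation set once
loo = LeaveOneOut()

scores = cross_val_score(clf, X, y, cv=loo)

print("Cross Validation Scores:", scores)
print("Average CV Score:", scores.mean())
print("Number of CV Scores used in Average:", len(scores))
```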
Cross Validation Scores: [1. 1. 1.]
We can observe that the number of cross validation scores computed is equal to the number of observations in the dataset. In this case there are 150 observations in the iris dataset, so 150 scores are produced. The average CV score is 94%.
Leave-P-Out is simply a nuanced variation on the Leave-One-Out idea, in that we can select the number of observations, p, to hold out for the validation set.
lpo = LeavePOut(p=2)
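A sketch of the full leave-p-out run, with the same assumed setup as before; note that it fits the model once per pair of held-out observations, so it takes noticeably longer than the previous examples.

```python
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import LeavePOut, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# Every possible pair of observations is held out once:
# C(150, 2) = 11175 splits, far more fits than leave-one-out's 150
lpo = LeavePOut(p=2)

scores = cross_val_score(clf, X, y, cv=lpo)

print("Cross Validation Scores:", scores)
print("Average CV Score:", scores.mean())
print("Number of CV Scores used in Average:", len(scores))
```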
Cross Validation Scores: [1. 1. 1.]
As we can see, this is an exhaustive method, with many more scores being calculated than Leave-One-Out even with p = 2, yet it achieves roughly the same average CV score.
Unlike KFold, ShuffleSplit can leave out a percentage of the data that is used in neither the train nor the validation set. To do so we must decide the train and test sizes, as well as the number of splits.
ss = ShuffleSplit(train_size=0.6, test_size=0.3, n_splits=5)
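And a sketch of the full shuffle-split run; random_state=42 is an added assumption so the random splits are repeatable.

```python
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

X, y = datasets.load_iris(return_X_y=True)
clf = DecisionTreeClassifier(random_state=42)

# 60% of rows train, 30% validate; the remaining 10% are left out of each split
ss = ShuffleSplit(train_size=0.6, test_size=0.3, n_splits=5, random_state=42)

scores = cross_val_score(clf, X, y, cv=ss)

print("Cross Validation Scores:", scores)
print("Average CV Score:", scores.mean())
```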
Cross Validation Scores: [0.93333333 0.93333333 0.97777778]