Let’s assume we are talking about k-fold cross-validation used for hyperparameter tuning of classification algorithms, and that by “better” we mean better at estimating the generalization performance. In this case, my answer would be no; otherwise, we would always use LOOCV (leave-one-out cross-validation) instead of k-fold CV. (A useful reference: Shao, Jun. “Linear Model Selection by Cross-Validation.” Journal of the American Statistical Association 88.422 (1993): 486–494.)
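
As a quick illustration (a minimal sketch using scikit-learn; the dataset and classifier are just placeholders), LOOCV is simply the special case of k-fold CV where k equals the number of training examples:

```python
# Minimal sketch: k-fold CV vs. LOOCV in scikit-learn.
# The dataset (iris) and classifier (logistic regression) are placeholders.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# k-fold CV with a "typical" choice of k
kfold_scores = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0)
)

# LOOCV is the special case k = n (one test example per iteration)
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())

print(f"10-fold CV: mean accuracy = {kfold_scores.mean():.3f} over {len(kfold_scores)} folds")
print(f"LOOCV:      mean accuracy = {loo_scores.mean():.3f} over {len(loo_scores)} folds")
```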

In practice, I would say the most commonly used (default) value is k=10 in k-fold CV, which is usually a good choice. But if we are working with small(er) training sets, I would increase the number of folds to use more training data in each iteration; this lowers the pessimistic bias of the generalization error estimate. On the other hand, it also increases the run-time and the variance of the estimate. The reason the variance increases is that the overlap between training sets grows with increasing k; note, though, that the test sets never overlap.
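
The sketch below makes the training-set-size argument concrete (the sample size n and the values of k are arbitrary choices for illustration): as k grows, each iteration trains on a larger share of the data, and the training folds overlap more and more.

```python
# Illustrative sketch: how the per-iteration training-set size changes with k.
# n and the k values are hypothetical numbers chosen only for this example.
n = 1000  # hypothetical number of training examples

for k in (5, 10, 20, n):  # k = n corresponds to LOOCV
    train_size = n - n // k  # approximate size of each training fold
    label = "LOOCV" if k == n else f"k={k}"
    print(f"{label:>6}: ~{train_size} training examples per iteration "
          f"({train_size / n:.1%} of the data)")
```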

As far as computational efficiency is concerned (think, for example, of training deep neural nets on large(r) datasets, including hyperparameter tuning), I would think carefully about the size of k. If our dataset is large, I’d therefore recommend choosing smaller values for k. It is all a balancing act between bias, variance, and computational efficiency, and for our final performance estimate, we still have our independent test set anyway.
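
One way this overall workflow might look in scikit-learn (a rough sketch; the synthetic dataset, random forest classifier, and parameter grid are arbitrary placeholders, not a recommendation): a modest k during tuning keeps the number of model fits manageable, and the held-out test set provides the final estimate.

```python
# Sketch of the workflow described above: a smaller k for hyperparameter tuning
# on a large(r) dataset, plus an independent test set for the final estimate.
# Dataset, classifier, and parameter grid are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# Smaller k (here 5) limits the number of model fits during the grid search.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 200], "max_depth": [None, 10]},
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("Best CV accuracy:  ", round(search.best_score_, 3))
print("Test-set accuracy: ", round(search.score(X_test, y_test), 3))
```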

If you like this content and you are looking for similar, more polished Q&As, check out my new book Machine Learning Q and AI.