The advantage of k-fold cross validation over repeated random sub-sampling is that all of the observations are used for both training and validation, and each observation is used for validation exactly once.
The following code shows you how to implement a five-fold cross validation in Keras, where we use the entire dataset (training and testing together) and print the network's score on each cross validation run, along with the average across runs. The data is partitioned into five folds; on each run the model is trained on four of the folds and tested on the remaining one, so that every fold serves as the test set exactly once. We use the scikit-learn API wrapper provided by Keras (KerasRegressor), along with sklearn's StandardScaler, the KFold cross-validator, and the cross_val_score evaluator:
import numpy as np
import pandas as pd

from keras.models import Sequential...
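
A minimal end-to-end sketch of this pipeline might look as follows; note that the synthetic data, the network architecture, and the training hyperparameters here are illustrative assumptions, not the values from the original listing:

import numpy as np

from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor

from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: substitute the real features and targets here.
X = np.random.rand(100, 13)
y = np.random.rand(100)

def build_model():
    # A small regression network; this architecture is an assumption.
    model = Sequential()
    model.add(Dense(64, input_dim=X.shape[1], activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# Wrap the Keras model so scikit-learn can treat it as a regressor, and
# standardize inside the pipeline so the scaler is re-fit on the training
# folds of each run.
estimator = Pipeline([
    ('scale', StandardScaler()),
    ('model', KerasRegressor(build_fn=build_model, epochs=50,
                             batch_size=16, verbose=0)),
])

# Five folds: each run trains on four folds and evaluates on the fifth,
# so every fold serves as the validation set exactly once.
kfold = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(estimator, X, y, cv=kfold,
                         scoring='neg_mean_squared_error')

print('Per-fold MSE:', -scores)
print('Mean MSE: %.4f (+/- %.4f)' % (-scores.mean(), scores.std()))

Putting the StandardScaler inside the Pipeline, rather than scaling the whole dataset up front, ensures that the scaling statistics are computed only from the four training folds on each run, avoiding leakage from the held-out fold.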