For the sake of entertainment, we will conclude this chapter by displaying some of the more interesting results from our own training experiments. The first screenshot shows the output generated by our SimpleRNN model at the end of the first epoch (note that the output labels the first epoch as epoch 0; this is simply an implementation detail of zero-based indexing over the range of n epochs). As we can see, even after the very first epoch, the SimpleRNN seems to have picked up on word morphology and generates real English words at low sampling thresholds.
This is just as we expected. Conversely, higher-entropy samples (with a threshold of 1.2, for example) produce more stochastic results and generate (from a subjective perspective) interesting-sounding invented words (such as eresdoin, harereus, and nimhte):
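As a reminder of how this threshold enters the picture, it acts as a softmax temperature applied to the model's predicted character probabilities before each character is drawn. The following is a minimal sketch of such a sampling step; the function name sample and the variable preds are illustrative assumptions, with preds standing for the model's probability distribution over the character vocabulary:

import numpy as np

def sample(preds, temperature=1.0):
    # Illustrative sketch: reweight the predicted character
    # probabilities by the sampling temperature. Values below 1
    # sharpen the distribution (more conservative output), while
    # values above 1 (such as 1.2) flatten it (more stochastic output).
    preds = np.asarray(preds).astype("float64")
    preds = np.log(preds + 1e-10) / temperature  # small epsilon avoids log(0)
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    # Draw a single character index from the reweighted distribution
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

At low temperatures, the draw concentrates on the most probable characters, which is why the model's output stays close to real English words; at 1.2, the flattened distribution admits less likely characters and yields the invented words shown above.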