As mentioned earlier, the only difference between our networks is the layers that are created and added to the network object: an LSTM network adds LSTM layers, a GRU network adds GRU layers, and so on. All four creation functions are shown below so you can compare them:
public static NeuralNetwork MakeLstm(int inputDimension, int hiddenDimension, int hiddenLayers, int outputDimension, INonlinearity decoderUnit, double initParamsStdDev, Random rng)
{
    List<ILayer> layers = new List<ILayer>();
    for (int h = 0; h < hiddenLayers; h++)
    {
        // The first hidden layer consumes the input; later layers are stacked hidden-to-hidden.
        layers.Add(h == 0
            ? new LstmLayer(inputDimension, hiddenDimension, initParamsStdDev, rng)
            : new LstmLayer(hiddenDimension, hiddenDimension, initParamsStdDev, rng));
    }
    layers.Add(new FeedForwardLayer(hiddenDimension, outputDimension, decoderUnit, initParamsStdDev, rng));
    return new NeuralNetwork(layers);
}
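Since the only change between variants is the layer type, the GRU factory can be sketched by swapping in a GRU layer. This is a sketch under the assumption that a `GruLayer` class exists with the same constructor signature as `LstmLayer`; the actual class name and signature in your codebase may differ:

```csharp
// Sketch: assumes GruLayer mirrors LstmLayer's constructor signature.
public static NeuralNetwork MakeGru(int inputDimension, int hiddenDimension, int hiddenLayers, int outputDimension, INonlinearity decoderUnit, double initParamsStdDev, Random rng)
{
    List<ILayer> layers = new List<ILayer>();
    for (int h = 0; h < hiddenLayers; h++)
    {
        // First layer maps input to hidden; the rest are hidden-to-hidden.
        layers.Add(h == 0
            ? new GruLayer(inputDimension, hiddenDimension, initParamsStdDev, rng)
            : new GruLayer(hiddenDimension, hiddenDimension, initParamsStdDev, rng));
    }
    // The decoder is the same feed-forward output layer used in the LSTM version.
    layers.Add(new FeedForwardLayer(hiddenDimension, outputDimension, decoderUnit, initParamsStdDev, rng));
    return new NeuralNetwork(layers);
}
```

As the text notes, the pattern is identical for every network type; only the hidden-layer class changes between the four factories.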