The generators of the DiscoGAN are feed-forward convolutional neural networks whose input and output are images. In the first part of the network, the images are scaled down along the spatial dimensions while the number of output feature maps increases as the layers progress. In the second part of the network, the images are scaled up along the spatial dimensions, while the number of output feature maps decreases from layer to layer. The final output layer produces an image with the same spatial dimensions as the input. If the generator that converts an image xA from domain A to an image xAB in domain B is represented by GAB, then we have xAB = GAB(xA).
Illustrated here is the build_generator function, which can be used to build the generators for the DiscoGAN network:
def build_generator(self, image, reuse=False, name='generator'):
    with tf.variable_scope(name, reuse=reuse):
        ...
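To make the encoder-decoder shape progression described above concrete, the following is a minimal sketch that traces the (spatial size, feature maps) pairs through such a generator. The input size of 64, base filter count of 64, and depth of 4 are illustrative assumptions, not values taken from the DiscoGAN code:

```python
def generator_shapes(size=64, channels=3, base_filters=64, depth=4):
    """Trace (spatial_dim, feature_maps) through an encoder-decoder generator.

    Assumes stride-2 layers: each encoder layer halves the spatial size and
    doubles the filter count; each decoder layer does the reverse, so the
    final output has the same spatial dimensions as the input image.
    """
    shapes = [(size, channels)]
    s, f = size, base_filters
    for _ in range(depth):             # encoder: downsample, widen
        shapes.append((s // 2, f))
        s, f = s // 2, f * 2
    f //= 2
    for _ in range(depth - 1):         # decoder: upsample, narrow
        s, f = s * 2, f // 2
        shapes.append((s, f))
    shapes.append((s * 2, channels))   # final layer restores input shape
    return shapes

# For the defaults, the trace runs 64 -> 32 -> 16 -> 8 -> 4 spatially
# while the feature maps grow 3 -> 64 -> 128 -> 256 -> 512, then the
# decoder mirrors this back to a 64x64 image with 3 channels.
```

The first and last entries of the returned list are identical, reflecting the requirement that the generated image match the input's spatial dimensions.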