Some ethical considerations about generative models were already offered in Chapter 9, Variational Autoencoders. However, a second round of thoughts is in order given the adversarial nature of GANs. A GAN implicitly demands that the generator trick a critic in a min-max game from which one of the two must come out victorious. When this concept is generalized into adversarial learning, it provides the means to attack existing machine learning models.
Very successful computer vision models, such as VGG16 (a CNN model), have been successfully targeted by adversarial attacks. There are patches that you can print and place on a t-shirt, a cap, or any other object, and as soon as the patch appears in the input, the attacked model is fooled into thinking that the existing object is a completely different one (Brown, T. B., et al. (2017)). A well-known example from that work is a printed patch that tricks a classifier into labeling an ordinary object, such as a banana, as a toaster.
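The patch attack of Brown et al. is relatively involved, but the underlying idea, perturbing the input so that the model's loss increases, can be illustrated with the much simpler fast gradient sign method (FGSM). The following is a minimal sketch against a pretrained VGG16 in tensorflow.keras; the image file name is a placeholder, and the perturbation size epsilon is an arbitrary illustrative choice:

```python
# A minimal FGSM sketch against a pretrained VGG16.
# 'sample.jpg' is a placeholder; any RGB image will do.
import tensorflow as tf
from tensorflow.keras.applications import vgg16

model = vgg16.VGG16(weights='imagenet')

# Load and preprocess a sample image
img = tf.keras.utils.load_img('sample.jpg', target_size=(224, 224))
x = tf.keras.utils.img_to_array(img)
x = tf.convert_to_tensor(vgg16.preprocess_input(x[tf.newaxis, ...]))

# The label the model currently assigns; the attack pushes away from it
label = tf.argmax(model(x), axis=-1)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

# Gradient of the loss with respect to the *input*, not the weights
with tf.GradientTape() as tape:
    tape.watch(x)
    loss = loss_fn(label, model(x))
grad = tape.gradient(loss, x)

# One FGSM step: a small move in the direction that increases the loss
epsilon = 2.0  # perturbation size in (preprocessed) pixel units
x_adv = x + epsilon * tf.sign(grad)

print('clean :', vgg16.decode_predictions(model(x).numpy(), top=1))
print('attack:', vgg16.decode_predictions(model(x_adv).numpy(), top=1))
```

Even a perturbation this small, often imperceptible to a human observer, can change the model's top prediction, which is precisely what makes these attacks so concerning.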