As a last example of the use of GANs, we will look at what is perhaps the most emblematic and well-known case: the generation of adversarial examples that reproduce human faces.
Beyond the striking impression that the often highly realistic results make on those who examine them, this technique, when used as an attack tool, poses a serious threat to all cybersecurity procedures based on the verification of biometric evidence (commonly used, for example, to access online banking services, to log in to social networks, and even to unlock one's own smartphone).
Moreover, it can be used to deceive even the AI-powered facial-recognition tools employed by police forces to identify suspects, thereby reducing their overall reliability.
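To make the idea concrete, the following is a minimal sketch (an illustrative assumption, not the exact model discussed here) of the generator side of a DCGAN-style network: it maps a random latent vector to an image tensor, which, after adversarial training against a discriminator on a face dataset, yields novel, photorealistic faces.

```python
# Minimal DCGAN-style generator sketch in PyTorch (illustrative assumption:
# architecture, sizes, and names are not taken from the text).
import torch
import torch.nn as nn

LATENT_DIM = 100  # size of the random noise vector fed to the generator


class Generator(nn.Module):
    """Upsamples a latent vector into a 64x64 RGB image."""

    def __init__(self, latent_dim=LATENT_DIM, feature_maps=64):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector (latent_dim x 1 x 1) -> (feature_maps*8) x 4 x 4
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(inplace=True),
            # -> (feature_maps*4) x 8 x 8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(inplace=True),
            # -> (feature_maps*2) x 16 x 16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(inplace=True),
            # -> feature_maps x 32 x 32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(inplace=True),
            # -> 3 x 64 x 64, pixel values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, 3, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)


# Sampling: drawing random latent vectors produces new images. Here the weights
# are untrained, so the output is noise; the point is the noise-to-image pipeline.
generator = Generator()
z = torch.randn(1, LATENT_DIM, 1, 1)  # random latent vector
fake_face = generator(z)              # tensor of shape (1, 3, 64, 64)
print(fake_face.shape)
```

Once such a generator has been trained, every sampled latent vector corresponds to a synthetic face that has never belonged to a real person, which is precisely what makes the technique attractive for attacks on biometric verification.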
As demonstrated in the paper Explaining and Harnessing...