Total loss
We have just learned the loss functions of the generator and the discriminator. Combining these two losses, we can write our final loss function as follows:
![](https://static.packt-cdn.com/products/9781839210686/graphics/Images/B15558_07_103.png)
So, our objective function is basically a min-max objective function, that is, a maximization for the discriminator and a minimization for the generator, and we find the optimal generator and discriminator parameters by backpropagating through the respective networks.
So, we perform gradient ascent, that is, maximization, on the discriminator:
![](https://static.packt-cdn.com/products/9781839210686/graphics/Images/B15558_07_106.png)
And we perform gradient descent, that is, minimization, on the generator:
![](https://static.packt-cdn.com/products/9781839210686/graphics/Images/B15558_07_107.png)
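The alternating updates above can be sketched numerically. The following is a minimal, self-contained illustration (not the book's code): a toy value function, V(theta_d, theta_g) = -(theta_d - 1)^2 + (theta_g - 2)^2, stands in for the GAN objective, and we take gradient-ascent steps on a scalar discriminator parameter and gradient-descent steps on a scalar generator parameter.

```python
# Toy stand-in for the GAN value function:
#   V(theta_d, theta_g) = -(theta_d - 1)**2 + (theta_g - 2)**2
# The discriminator wants to MAXIMIZE V (its optimum is theta_d = 1),
# while the generator wants to MINIMIZE V (its optimum is theta_g = 2).

def dV_dtheta_d(theta_d):
    return -2.0 * (theta_d - 1.0)  # partial derivative of V w.r.t. theta_d

def dV_dtheta_g(theta_g):
    return 2.0 * (theta_g - 2.0)   # partial derivative of V w.r.t. theta_g

theta_d, theta_g = 0.0, 0.0  # initial parameters
lr = 0.1                     # learning rate for both players

for _ in range(200):
    theta_d += lr * dV_dtheta_d(theta_d)  # gradient ASCENT on the discriminator
    theta_g -= lr * dV_dtheta_g(theta_g)  # gradient DESCENT on the generator

print(theta_d, theta_g)  # converges toward theta_d = 1.0, theta_g = 2.0
```

In a real GAN, each update is applied to the full parameter vector of the respective network via backpropagation, typically alternating one (or a few) discriminator steps with one generator step per minibatch.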