Exploring controllable scene generation
To understand what a computer vision model has learned, we need to visualize the outputs of the trained model. Since we are dealing with a generative approach, this is straightforward: we simply visualize the images the model generates. In this section, we will explore pre-trained GIRAFFE models and look at how well they can generate controllable scenes. We will use pre-trained checkpoints provided by the creators of the GIRAFFE model. The instructions in this section are based on the open source GitHub repository at https://github.com/autonomousvision/giraffe.
Create the Anaconda environment called giraffe with the following commands:
$ cd chap7/giraffe
$ conda env create -f environment.yml
$ conda activate giraffe
Once the conda environment has been activated, you can start rendering images for various datasets using their corresponding pre-trained checkpoints. The creators of the GIRAFFE model have shared...
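As a rough sketch of what such a rendering run looks like, the repository drives its scripts with YAML config files; the script and config names below are assumptions based on the repository's layout, so confirm the exact paths in your own checkout of https://github.com/autonomousvision/giraffe before running:

```shell
# Hedged sketch: render images with a pre-trained checkpoint.
# The script name and config path are assumptions taken from the
# repository layout -- verify them in your local clone first.
$ python render.py configs/256res/cars_256_pretrained.yaml
```

The rendered images are written to the output directory named in the chosen config file, so inspecting that YAML file first tells you where to look for the results.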