Exploring Controllable Neural Feature Fields
In the previous chapter, you learned how to represent a 3D scene using Neural Radiance Fields (NeRF). We trained a single neural network on posed multi-view images of a 3D scene to learn an implicit representation of it. Then, we used the NeRF model to render the scene from new viewpoints and viewing directions. Throughout, we assumed that the objects and the background in the scene were unchanging.
But it is fair to wonder whether it is possible to generate variations of the 3D scene. Can we control the number of objects, their poses, and the scene background? Can we learn about the 3D nature of things without posed images and without knowing the camera parameters?
By the end of this chapter, you will learn that it is indeed possible to do all these things. Concretely, you will gain a better understanding of GIRAFFE, a novel method for controllable 3D image synthesis. It combines ideas from the fields of GAN-based image synthesis and implicit 3D scene representations such as NeRF.