Designing GANs for 3D data synthesis
Generators and discriminators in 3D-GAN
3D-GAN, proposed by Jiajun Wu, Chengkai Zhang, Tianfan Xue, et al. in their paper, Learning a Probabilistic Latent Space of Object Shapes via 3D Generative-Adversarial Modeling, was designed to generate 3D voxel representations (occupancy grids) of certain types of objects. The design and training process of 3D-GAN is very similar to that of the vanilla GAN, except that the input and output tensors of 3D-GAN are five-dimensional rather than four-dimensional.
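To make this difference concrete, here is a minimal sketch (an illustration assumed for this section, not code from the paper) showing the five-dimensional tensor shape, (batch, channels, depth, height, width), that PyTorch's 3D layers expect, compared with the four-dimensional shape used for 2D images:

import torch
import torch.nn as nn

# 2D GANs operate on 4-dimensional tensors: (batch, channels, height, width).
image_batch = torch.randn(8, 1, 64, 64)

# 3D-GAN operates on 5-dimensional tensors: (batch, channels, depth, height, width).
voxel_batch = torch.randn(8, 1, 32, 32, 32)

# 3D (transposed) convolution layers require the extra depth dimension.
deconv3d = nn.ConvTranspose3d(in_channels=1, out_channels=1,
                              kernel_size=4, stride=2, padding=1)
print(deconv3d(voxel_batch).shape)  # torch.Size([8, 1, 64, 64, 64])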
The architecture of the generator network of 3D-GAN is as follows:
[Figure: Architecture of the generator network in 3D-GAN]
The generator network consists of five transposed convolution layers (nn.ConvTranspose3d)...
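Since the generator is only described at a high level here, the following is a minimal PyTorch sketch of a generator built from five nn.ConvTranspose3d layers. The latent size (200), channel widths (512, 256, 128, 64, 1), kernel size of 4, and strides follow the original 3D-GAN paper, but treat them as assumptions for illustration rather than the exact configuration used later in this section:

import torch
import torch.nn as nn

class Generator3D(nn.Module):
    """Sketch of a 3D-GAN-style generator: latent vector -> 64 x 64 x 64 voxel grid."""
    def __init__(self, latent_dim=200):
        super().__init__()
        self.main = nn.Sequential(
            # First layer: project the latent vector (viewed as latent_dim x 1 x 1 x 1)
            # to a 512 x 4 x 4 x 4 feature volume.
            nn.ConvTranspose3d(latent_dim, 512, kernel_size=4, stride=1, padding=0, bias=False),
            nn.BatchNorm3d(512),
            nn.ReLU(inplace=True),
            # Each subsequent layer doubles the spatial resolution (stride 2).
            nn.ConvTranspose3d(512, 256, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm3d(256),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(256, 128, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm3d(128),
            nn.ReLU(inplace=True),
            nn.ConvTranspose3d(128, 64, kernel_size=4, stride=2, padding=1, bias=False),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
            # Final layer: a single output channel with a sigmoid, read as voxel occupancy.
            nn.ConvTranspose3d(64, 1, kernel_size=4, stride=2, padding=1, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, z):
        # z: (batch, latent_dim) -> reshape to a 5D tensor before the 3D deconvolutions.
        return self.main(z.view(z.size(0), z.size(1), 1, 1, 1))

g = Generator3D()
voxels = g(torch.randn(2, 200))
print(voxels.shape)  # torch.Size([2, 1, 64, 64, 64])

Each stride-2 transposed convolution doubles the resolution of the feature volume (4, 8, 16, 32, 64), so five layers are enough to go from the projected latent code to a full 64 x 64 x 64 output.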