We propose L3DG, the first approach for generative 3D modeling of 3D Gaussians through a latent 3D Gaussian diffusion formulation. This enables effective generative 3D modeling, scaling to the generation of entire room-scale scenes that can be rendered very efficiently. To enable effective synthesis of 3D Gaussians, we propose a latent diffusion formulation, operating in a compressed latent space of 3D Gaussians. This compressed latent space is learned by a vector-quantized variational autoencoder (VQ-VAE), for which we employ a sparse convolutional architecture to efficiently operate on room-scale scenes. This way, the complexity of the costly generation process via diffusion is substantially reduced, allowing for higher detail in object-level generation as well as scalability to large scenes. By leveraging the 3D Gaussian representation, the generated scenes can be rendered from arbitrary viewpoints in real time. We demonstrate that our approach significantly improves visual quality over prior work on unconditional object-level radiance field synthesis and showcase its applicability to room-scale scene generation.
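To make the role of the VQ-VAE bottleneck concrete, below is a minimal sketch of a vector-quantized bottleneck over per-voxel latent features, as used in a VQ-VAE. The class name, codebook size, and feature dimension are illustrative assumptions, not the paper's code; the actual model additionally wraps this bottleneck with a sparse-convolutional encoder and decoder.

```python
# Hypothetical sketch of a VQ-VAE bottleneck: nearest-codebook lookup with a
# straight-through gradient estimator over per-voxel latent features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    def __init__(self, num_codes: int = 1024, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight

    def forward(self, z: torch.Tensor):
        # z: (N, dim) latent features, one row per occupied voxel.
        d = torch.cdist(z, self.codebook.weight)   # distances to all codes (N, num_codes)
        idx = d.argmin(dim=1)                      # nearest code index per voxel
        z_q = self.codebook(idx)                   # quantized features (N, dim)
        # VQ-VAE objective: codebook loss + commitment loss.
        loss = F.mse_loss(z_q, z.detach()) + self.beta * F.mse_loss(z, z_q.detach())
        # Straight-through estimator: gradients bypass the discrete lookup.
        z_q = z + (z_q - z).detach()
        return z_q, idx, loss

# Usage: quantize encoder outputs for 2048 occupied voxels.
vq = VectorQuantizer()
z = torch.randn(2048, 64)
z_q, idx, vq_loss = vq(z)
```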
A 3D Gaussian compression model learns to compress 3D Gaussians into sparse, quantized features using sparse convolutions and vector quantization at the bottleneck (VQ-VAE). This enables a 3D diffusion model to operate efficiently on the compressed latent space. At test time, novel scenes are generated by denoising in latent space; the result is sparsified and decoded into high-quality 3D Gaussians that can be rendered in real time.
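The test-time generation step described above can be summarized as a reverse-diffusion loop in latent space followed by decoding to 3D Gaussians. The sketch below assumes a generic DDPM-style sampler with hypothetical `denoiser` and `decoder` modules and a linear noise schedule; it is not the paper's exact sampler or architecture.

```python
# Hedged sketch of test-time sampling: denoise a latent grid, then decode it
# into 3D Gaussian parameters. `denoiser` and `decoder` are assumed callables.
import torch

@torch.no_grad()
def sample_scene(denoiser, decoder, latent_shape=(1, 64, 32, 32, 32), steps=1000):
    betas = torch.linspace(1e-4, 2e-2, steps)       # assumed linear schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn(latent_shape)                   # start from Gaussian noise
    for t in reversed(range(steps)):
        eps = denoiser(x, torch.tensor([t]))        # predicted noise component
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise     # one reverse-diffusion step

    # Decode the denoised latent into 3D Gaussian parameters
    # (positions, scales, rotations, opacities, colors) for real-time rendering.
    return decoder(x)
```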
@inproceedings{roessle2024l3dg,
title={L3DG: Latent 3D Gaussian Diffusion},
author={Roessle, Barbara and M{\"u}ller, Norman and Porzi, Lorenzo and Bul{\`o}, Samuel Rota and Kontschieder, Peter and Dai, Angela and Nie{\ss}ner, Matthias},
booktitle={SIGGRAPH Asia 2024 Conference Papers},
month={December},
year={2024}
}