Differentiable Procedural Models for Single-view 3D Mesh Reconstruction
Most existing solutions for single-view 3D object reconstruction rely on deep learning techniques that use implicit or voxel representations of the scene. However, these approaches struggle to produce detailed, high-quality meshes and textures suitable for direct use in practical applications. Differentiable rendering techniques, on the other hand, can produce superior mesh quality, but they typically require multiple images of the object. We propose a novel approach to single-view 3D reconstruction that uses the input parameters of a procedural generator as the scene representation. Instead of directly estimating the vertex positions of the mesh, we estimate the generator's input parameters by minimizing a silhouette loss between the reference and rendered images. Because both the renderer and the procedural generator are differentiable, the loss can be optimized with gradient-based methods. This enables us to create highly detailed models from a single image.
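The optimization loop described above can be illustrated with a minimal sketch. Here a toy "procedural generator" renders a soft-edged circle silhouette from three parameters (center and radius), standing in for a real generator plus differentiable renderer; finite differences stand in for automatic differentiation. All names and the specific generator are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def procedural_silhouette(params, size=32):
    # Toy stand-in for a differentiable procedural generator + renderer:
    # produces a soft circle silhouette from (cx, cy, r).
    cx, cy, r = params
    ys, xs = np.mgrid[0:size, 0:size].astype(np.float64)
    d = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # Sigmoid edge keeps the silhouette differentiable w.r.t. the parameters.
    return 1.0 / (1.0 + np.exp(d - r))

def silhouette_loss(params, reference):
    # Mean squared error between rendered and reference silhouettes.
    return np.mean((procedural_silhouette(params) - reference) ** 2)

def numeric_grad(f, params, eps=1e-4):
    # Central finite differences; a real pipeline would use autodiff
    # through the differentiable renderer and generator.
    g = np.zeros_like(params)
    for i in range(len(params)):
        hi, lo = params.copy(), params.copy()
        hi[i] += eps
        lo[i] -= eps
        g[i] = (f(hi) - f(lo)) / (2 * eps)
    return g

# "Reference image": silhouette of the target parameters.
reference = procedural_silhouette(np.array([16.0, 16.0, 8.0]))

# Start from a deliberately wrong parameter guess and descend the loss.
params = np.array([12.0, 20.0, 5.0])
initial_loss = silhouette_loss(params, reference)
for _ in range(300):
    params -= 50.0 * numeric_grad(lambda p: silhouette_loss(p, reference), params)
```

After the loop, the estimated parameters move toward the target values, shrinking the silhouette loss; the same gradient-based scheme applies when the parameters drive a full procedural mesh generator.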
Procedural generation, 3D reconstruction, differentiable rendering