With the advent of neural radiance fields for novel view synthesis, new methods are emerging that generalize these models to dynamic data, e.g., multi-view videos. Modelling 3D motion from 2D observations is a non-trivial problem, and previous proposals suffer from limitations, especially concerning the temporal coherence of the learned scene. In this thesis, we present a technique for learning dynamic scenes using Lipschitz regularization. Specifically, time is treated as an additional input to a static neural radiance field, and motion is modelled as a temporal distortion function, implemented as an additional neural network, that maps into a canonical space. The Lipschitz regularization is applied to this temporal deformation, enforcing smooth dynamics, while the canonical space can learn geometry and colour information with arbitrarily high frequency. Both mappings are implemented as MLPs. In our evaluation, we tested the effectiveness of Lipschitz regularization on scenes with rigid, non-rigid and articulated objects with non-Lambertian materials, and on multiple neural radiance field architectures. Our experiments show that applying Lipschitz regularization to the temporal distortion enables dynamic radiance fields to learn a smooth dynamic scene with improved temporal coherence and fidelity.
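The regularization described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: it assumes per-layer learnable Lipschitz bounds `c_i` for the deformation MLP and inf-norm weight normalization, in the style of Lipschitz-regularized MLPs; the function names (`lipschitz_penalty`, `normalize_weights`) are placeholders.

```python
import numpy as np

def softplus(x):
    # Smooth, strictly positive reparameterization of a per-layer bound c_i.
    return np.log1p(np.exp(x))

def lipschitz_penalty(cs):
    # Term added to the training loss: the product of per-layer bounds
    # softplus(c_i) upper-bounds the Lipschitz constant of the whole
    # deformation MLP, so minimizing it encourages smooth dynamics.
    return np.prod([softplus(c) for c in cs])

def normalize_weights(W, c):
    # Rescale each row of the weight matrix so the layer's inf-norm
    # operator bound does not exceed softplus(c); rows already under
    # the bound are left unchanged (scale clipped at 1).
    row_sums = np.abs(W).sum(axis=1, keepdims=True)
    scale = np.minimum(1.0, softplus(c) / np.maximum(row_sums, 1e-12))
    return W * scale
```

During training, each deformation-network layer would apply `normalize_weights` before its matrix multiply, and `lipschitz_penalty(cs)`, weighted by a small coefficient, would be added to the reconstruction loss; the canonical radiance field is left unregularized so it can keep high-frequency detail.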
- Install Nerfstudio
- Clone this repository:
  `git clone https://github.com/lorenzo-delsignore/smooth-dnerf.git`
- Install this repository as a Python package:
  `pip install -e .`
- Check that the dynamic models `nerfacto-dnerf`, `nerfacto-nerfplayer-dnerf` and `vanilla-dnerf` are listed by the command `ns-train -h`
- The only dataset supported is the D-NeRF dataset
- Train a model using the following command:
  `ns-train <model name> --data <data path>`
- The implemented models are D-Nerfacto (`nerfacto-dnerf`), D-NerfPlayer-Nerfacto (`nerfacto-nerfplayer-dnerf`) and D-NeRF (`vanilla-dnerf`)
- Monitor the training with Weights & Biases (wandb)