From f6e077e843399a693b448a373d2423756c4cab61 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Thomas=20M=C3=BCller?=
Date: Fri, 14 Jan 2022 10:40:09 +0100
Subject: [PATCH] Add references to instant NGP

---
 DOCUMENTATION.md |  3 ++-
 README.md        | 17 ++++++++++-------
 2 files changed, 12 insertions(+), 8 deletions(-)

diff --git a/DOCUMENTATION.md b/DOCUMENTATION.md
index 67d6dc53..980a3047 100644
--- a/DOCUMENTATION.md
+++ b/DOCUMENTATION.md
@@ -118,7 +118,8 @@ The number of encoded dimensions is twice the specified number of frequencies fo

 ### Grid

-Encoding based on trainable multiresolution grids. The grids can be backed by hashtables, dense storage, or tiled storage.
+Encoding based on trainable multiresolution grids.
+Used for [Instant Neural Graphics Primitives [Müller et al. 2022]](https://nvlabs.github.io/instant-ngp/). The grids can be backed by hashtables, dense storage, or tiled storage.

 The number of encoded dimensions is `n_levels * n_features_per_level`.

diff --git a/README.md b/README.md
index 56f1fa68..bb512e31 100644
--- a/README.md
+++ b/README.md
@@ -29,17 +29,20 @@ _Fully fused networks vs. TensorFlow v2.5.0 w/ XLA. Measured on 64 (solid line)

 This framework powers the following publications:

+> __Instant Neural Graphics Primitives with a Multiresolution Hash Encoding__
+> [Thomas Müller](https://tom94.net), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Christoph Schied](https://research.nvidia.com/person/christoph-schied), [Alexander Keller](https://research.nvidia.com/person/alex-keller)
+> _arXiv [cs.GR], Jan 2022_
+> __[ [Website](https://nvlabs.github.io/instant-ngp/) ] [ [Paper](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.pdf) ] [ [Code](https://github.com/NVlabs/instant-ngp) ] [ [Video](https://nvlabs.github.io/instant-ngp/assets/mueller2022instant.mp4) ]__
+
 > __Real-time Neural Radiance Caching for Path Tracing__
 > [Thomas Müller](https://tom94.net), [Fabrice Rousselle](https://research.nvidia.com/person/fabrice-rousselle), [Jan Novák](http://jannovak.info), [Alexander Keller](https://research.nvidia.com/person/alex-keller)
-> _ACM Transactions on Graphics (Proceedings of SIGGRAPH), vol. 40, no. 4, pp. 36:1–36:16, Aug 2021_
->
-> [ [Paper](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf) ] [ [GTC talk](https://gtc21.event.nvidia.com/media/Fully%20Fused%20Neural%20Network%20for%20Radiance%20Caching%20in%20Real%20Time%20Rendering%20%5BE31307%5D/1_liqy6k1c) ] [ [Video](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.mp4) ] [ [Interactive Results Viewer](https://tom94.net/data/publications/mueller21realtime/interactive-viewer/) ] [ [BibTeX](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.bib) ]
+> _ACM Transactions on Graphics (Proceedings of SIGGRAPH), vol. 40, no. 4, pp. 36:1–36:16, Aug 2021_
+> __[ [Paper](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.pdf) ] [ [GTC talk](https://gtc21.event.nvidia.com/media/Fully%20Fused%20Neural%20Network%20for%20Radiance%20Caching%20in%20Real%20Time%20Rendering%20%5BE31307%5D/1_liqy6k1c) ] [ [Video](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.mp4) ] [ [Interactive Results Viewer](https://tom94.net/data/publications/mueller21realtime/interactive-viewer/) ] [ [BibTeX](https://tom94.net/data/publications/mueller21realtime/mueller21realtime.bib) ]__

 > __Extracting Triangular 3D Models, Materials, and Lighting From Images__
 > [Jacob Munkberg](https://research.nvidia.com/person/jacob-munkberg), [Jon Hasselgren](https://research.nvidia.com/person/jon-hasselgren), [Tianchang Shen](http://www.cs.toronto.edu/~shenti11/), [Jun Gao](http://www.cs.toronto.edu/~jungao/), [Wenzheng Chen](http://www.cs.toronto.edu/~wenzheng/), [Alex Evans](https://research.nvidia.com/person/alex-evans), [Thomas Müller](https://tom94.net), [Sanja Fidler](https://www.cs.toronto.edu/~fidler/)
-> _[arXiv:2111.12503 [cs.CV]](https://arxiv.org/abs/2111.12503)_, Nov 2021
->
-> [ [Website](https://nvlabs.github.io/nvdiffrec/) ] [ [Paper](https://nvlabs.github.io/nvdiffrec/assets/paper.pdf) ] [ [Video](https://nvlabs.github.io/nvdiffrec/assets/video.mp4) ] [ [BibTeX](https://nvlabs.github.io/nvdiffrec/assets/bib.txt) ]
+> _[arXiv:2111.12503 [cs.CV]](https://arxiv.org/abs/2111.12503)_, Nov 2021
+> __[ [Website](https://nvlabs.github.io/nvdiffrec/) ] [ [Paper](https://nvlabs.github.io/nvdiffrec/assets/paper.pdf) ] [ [Video](https://nvlabs.github.io/nvdiffrec/assets/video.mp4) ] [ [BibTeX](https://nvlabs.github.io/nvdiffrec/assets/bib.txt) ]__

 ## Usage

@@ -155,7 +158,7 @@ The following is a summary of all components of this framework that are currentl
 | :--- | :---------- | :-----
 | Composite | `include/tiny-cuda-nn/encodings/composite.h` | Allows composing multiple encodings. Can be, for example, used to assemble the Neural Radiance Caching encoding [[Müller et al. 2021]](https://tom94.net/).
 | Frequency | `include/tiny-cuda-nn/encodings/frequency.h` | NeRF's [[Mildenhall et al. 2020]](https://www.matthewtancik.com/nerf) positional encoding applied equally to all dimensions.
-| Grid | `include/tiny-cuda-nn/encodings/grid.h` | Encoding based on trainable multiresolution grids. The grids can be backed by hashtables, dense storage, or tiled storage.
+| Grid | `include/tiny-cuda-nn/encodings/grid.h` | Encoding based on trainable multiresolution grids. Used for [Instant Neural Graphics Primitives [Müller et al. 2022]](https://nvlabs.github.io/instant-ngp/). The grids can be backed by hashtables, dense storage, or tiled storage.
 | Identity | `include/tiny-cuda-nn/encodings/identity.h` | Leaves values untouched.
 | Oneblob | `include/tiny-cuda-nn/encodings/oneblob.h` | From Neural Importance Sampling [[Müller et al. 2019]](https://tom94.net/data/publications/mueller18neural/mueller18neural-v4.pdf) and Neural Control Variates [[Müller et al. 2020]](https://tom94.net/data/publications/mueller20neural/mueller20neural.pdf).
 | SphericalHarmonics | `include/tiny-cuda-nn/encodings/spherical_harmonics.h` | A frequency-space encoding that is more suitable to direction vectors than component-wise ones.
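
For context on the grid encoding that this patch documents: it is configured through a JSON object, and its output dimensionality follows the `n_levels * n_features_per_level` rule quoted above. The sketch below is illustrative only; aside from `n_levels`, `n_features_per_level`, and the hash/tiled/dense backing choice mentioned in the patch text, the key names and concrete values are assumptions rather than options documented here.

```json
{
	"otype": "Grid",            // assumed key: selects the multiresolution grid encoding
	"type": "Hash",             // hash-table backing; dense and tiled storage are the other options named above
	"n_levels": 16,             // number of resolution levels
	"n_features_per_level": 2,  // features stored per level
	"log2_hashmap_size": 19,    // assumed key: size of the backing hash table
	"base_resolution": 16,      // assumed key: resolution of the coarsest level
	"per_level_scale": 2.0      // assumed key: per-level resolution growth factor
}
```

With these example values the encoding would produce 16 * 2 = 32 encoded dimensions, per the rule stated in the patch.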