Improve voxel memory usage via Texture3D
#13084
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Conversation
Thank you for the pull request, @jjhembd! ✅ We can confirm we have a CLA on file for you.
jjhembd left a comment
I added some review comments in key areas, and in places where I need feedback.
  /**
   * The maximum number of texture units that can be used from the vertex and fragment
- * shader with this WebGL implementation. The minimum is eight. If both shaders access the
+ * shader with this WebGL implementation. The minimum is 32. If both shaders access the
All these comments have been updated to the WebGL2 minimums, since we now default to a WebGL2 context.
donmccurdy replied:
When defaulting to a WebGL2 context, is there still a fallback path for devices that only support WebGL 1? I understand voxel rendering will require WebGL 2, but other CesiumJS APIs can run with just WebGL 1?
If not, just wondering if the comments should reflect the minimums for browsers and hardware that CesiumJS supports, rather than the minimums for the preferred/default WebGL 2.
(or if deprecating WebGL 1 support more broadly is on the roadmap, maybe that changes things!)
  } = options;

- if (!context.webgl2) {
+ if (!context.webgl2 && !defined(context.options.getWebGLStub)) {
We need to allow a WebGL stub, since that is what we use for testing. Much of the following code can still be reasonably tested in a stub.
I'm open to feedback on where to allow the stub. Should we try to set a .webgl2 property on the stubbed context? Or, since WebGL2 is now the default, should we be switching to a context.webgl1 flag?
  Check.typeOf.number.greaterThan("width", width, 0);

- if (width > ContextLimits.maximumTextureSize) {
+ if (width > ContextLimits.maximum3DTextureSize) {
This appears to have been an oversight in #12611. It matters for large textures: maximum3DTextureSize tends to be smaller than maximumTextureSize.
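As a quick standalone check (browser WebGL2, not CesiumJS internals), the two limits come from separate queries, and the 3D limit is often much smaller:

```js
// Standalone sketch: query both limits from a raw WebGL2 context.
const gl = document.createElement("canvas").getContext("webgl2");
if (gl !== null) {
  const max2D = gl.getParameter(gl.MAX_TEXTURE_SIZE); // e.g. 16384
  const max3D = gl.getParameter(gl.MAX_3D_TEXTURE_SIZE); // e.g. 2048
  console.log(`maximumTextureSize ~ ${max2D}, maximum3DTextureSize ~ ${max3D}`);
}
```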
 * }
 * });
 */
Texture3D.prototype.copyFrom = function (options) {
Compare to Texture.copyFrom. The Texture3D version here supports fewer source data types.
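For illustration, a hypothetical call shape modeled on Texture.copyFrom and restricted to a typed-array source; the option names beyond source (depth, zOffset) are assumptions made for the sketch, not taken from the diff:

```js
// Hypothetical usage sketch of a 3D copyFrom; option names are assumed.
function uploadBrick(texture3D) {
  const size = 8;
  texture3D.copyFrom({
    source: {
      width: size,
      height: size,
      depth: size, // assumed: a 3D source needs a depth
      arrayBufferView: new Uint8Array(size * size * size * 4), // RGBA8 voxels
    },
    xOffset: 0,
    yOffset: 0,
    zOffset: 0, // assumed: offset along the texture's depth axis
  });
}
```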
  Check.typeOf.bool("nearestSampling", nearestSampling);
  //>>includeEnd('debug');
  if (this._nearestSampling === nearestSampling) {
    return;
The texture.sampler setter makes some GL calls, so when possible, we exit early to avoid calling it.
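A simplified, self-contained sketch of that pattern (not the Megatexture code itself; the class and sampler object here are stand-ins):

```js
// Early-exit pattern: only touch the sampler when the value actually changes.
class CachedSamplerOwner {
  constructor(texture) {
    this._texture = texture;
    this._nearestSampling = false;
  }
  get nearestSampling() {
    return this._nearestSampling;
  }
  set nearestSampling(value) {
    if (this._nearestSampling === value) {
      return; // unchanged; skip the texture.sampler setter and its GL calls
    }
    this._nearestSampling = value;
    // In CesiumJS, assigning texture.sampler issues GL calls; the early return
    // above avoids that work when nothing changed.
    this._texture.sampler = { minFilter: value ? "NEAREST" : "LINEAR" };
  }
}
```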
    return PixelFormat.RGB;
  } else if (channelCount === 4) {
    return PixelFormat.RGBA;
function getPixelFormat(channelCount) {
This function now assumes WebGL2
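One plausible reason, sketched here against raw WebGL2 enums rather than CesiumJS's PixelFormat (so the names are illustrative, not the PR's code): the one- and two-channel formats RED and RG only exist in WebGL2, whereas WebGL1 would have needed LUMINANCE / LUMINANCE_ALPHA.

```js
// Illustrative WebGL2-only channel-count mapping using raw GL enums.
function getPixelFormatSketch(gl, channelCount) {
  switch (channelCount) {
    case 1:
      return gl.RED; // WebGL2 only
    case 2:
      return gl.RG; // WebGL2 only
    case 3:
      return gl.RGB;
    case 4:
      return gl.RGBA;
    default:
      throw new Error(`Unsupported channel count: ${channelCount}`);
  }
}
```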
donmccurdy replied:
Is the Megatexture concept used beyond voxel rendering? I saw one reference in KeyframeNode.js so just wanted to check if that affects WebGL1 support anywhere else.
 * @returns {Cartesian3} The computed 3D texture dimensions.
 */
- Megatexture.getApproximateTextureMemoryByteLength = function (
+ Megatexture.get3DTextureDimension = function (
This function is not used outside the class, but we expose it for testing
  inputDimensions,
  types,
  componentTypes,
  // TODO: refine this. If provider.maximumTileCount is not defined, we will always allocate 512 MB per metadata property.
Seeking feedback here. What if the tileset has 10 metadata properties? This would allocate 5 GB. Should we default to a total memory per tileset?
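Purely as a sketch of the alternative being floated (the numbers and names below are made up), a per-tileset budget would be divided across properties rather than multiplied by them:

```js
// Hypothetical per-tileset budget split across metadata properties.
const tilesetBudgetBytes = 512 * 1024 * 1024; // total for the whole tileset
const propertyCount = 10;
const perPropertyBudgetBytes = Math.floor(tilesetBudgetBytes / propertyCount);
// 10 properties -> roughly 51 MiB each, instead of 10 * 512 MB = 5 GB total.
console.log(perPropertyBudgetBytes);
```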
  it("shows texture memory allocation statistic", function () {
-   expect(traversal.textureMemoryByteLength).toBe(textureMemoryByteLength);
+   expect(traversal.textureMemoryByteLength).toBe(32);
The new allocation is smaller if the data is smaller than the suggested byte length
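As a back-of-envelope illustration only (the spec's actual test data layout is an assumption here), a tight allocation can land on exactly 32 bytes for a tiny test tile:

```js
// Hypothetical: one 2x2x2-voxel test tile with 4 one-byte channels (RGBA8)
// packs to exactly 32 bytes, rather than the larger suggested byte length.
const tightByteLength = 2 * 2 * 2 * 4; // 32
console.log(tightByteLength);
```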
donmccurdy left a comment
Still reading through things, but just a couple initial comments!
  const sources = shaderSource.sources;
  if (defined(sources)) {
-   for (i = 0, length = sources.length; i < length; ++i) {
+   for (let i = 0; i < sources.length; ++i) {
Thanks for the cleanup here! Fond memories of being bitten by function-scoped 'var i' loop iterators in the bad old days. :)
For context on the matter of WebGL2/WebGL1 support: while we're not likely to fully deprecate WebGL 1 support in the near future, our implicit policy is that newer features like voxels don't need backwards compatibility with WebGL 1. The rationale is that we default to WebGL 2 at this point, and we want to be able to take advantage of newer features available to us, like 3D textures. Additionally, the voxel APIs in particular are marked as "experimental", which means they are subject to breaking changes without deprecation. (The latter is documented in our Coding Guide. Perhaps we should also add a note about our WebGL 1/2 "policy".)
Description
This PR improves memory efficiency in VoxelPrimitive by reworking Megatexture to use the Texture3D class from #12611.
Megatextures
Unlike Cesium3DTileset, which renders each tile with separate DrawCommands, voxel tilesets are rendered in a single draw command for all tiles. This is necessary because of the raymarching approach: foreground tiles affect the rendering of background tiles. As a result, all tiles in the scene must be loaded into a single texture, via the Megatexture class. Different tiles are assigned different regions within the same Megatexture.
Previous Megatexture implementation
Prior to this PR, the Megatexture implementation was backed by a 2D texture. 3D voxel tiles were split into 2D slices, which were then arranged across the 2D texture. This structure made VoxelPrimitive compatible with WebGL1 contexts, but had some drawbacks:
New Megatexture implementation
This PR reworks Megatexture to use Texture3D, and removes restrictions on the size of the texture. This simplifies the shader code, since we can directly use built-in graphics APIs for both linear and nearest sampling. Also, the texture can be allocated to more closely fit the size of the actual data.
How 3D Textures are sized
The data for each tile is like a 3D box, and the Megatexture is like a bin into which we are packing the boxes. Bin packing in general is a hard optimization problem. The approach used here starts from some simplifying assumptions:
All sizing is done based on the maximum number of tiles, which is either a value from the VoxelProvider, or the available memory divided by the memory size of one tile.
We first check for a special case: if all of the tiles can fit in a single 1x1xN stack, without the long dimension exceeding GL_MAX_3D_TEXTURE_SIZE, then the texture is allocated to that size. This guarantees no wasted space, because this shape can be made to fit the tiles exactly. We make sure to stack the tiles along the smallest tile dimension, to increase the chances of achieving this optimal case.
If more than one row of tiles is needed (GL_MAX_3D_TEXTURE_SIZE tends to be smallish), we then proceed as follows (sketched in code after this list):
- Along each dimension i, fit as many tiles as possible while keeping tileCount[i] * tileDimension[i] < GL_MAX_3D_TEXTURE_SIZE
- textureSize[i] = tileCount[i] * tileDimension[i]
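The code below is only an illustration of the heuristic described above, not the PR's implementation: the function name, the per-axis growth order in the general case, and the use of <= against the GL limit are assumptions made for the sketch.

```js
// Sketch of the sizing heuristic described above (not the PR's exact code).
// tileDims: per-tile voxel dimensions, e.g. { x: 9, y: 9, z: 5 }
// maxTileCount: maximum number of tiles the megatexture must hold
// maxSize: GL_MAX_3D_TEXTURE_SIZE for the current context
function get3DTextureDimensionsSketch(tileDims, maxTileCount, maxSize) {
  const axes = ["x", "y", "z"];
  // Special case: stack all tiles along the smallest tile dimension, so the
  // texture fits the data exactly, if that single stack stays within maxSize.
  const smallest = axes.reduce((a, b) => (tileDims[a] <= tileDims[b] ? a : b));
  if (tileDims[smallest] * maxTileCount <= maxSize) {
    const dims = { ...tileDims };
    dims[smallest] = tileDims[smallest] * maxTileCount;
    return dims;
  }
  // General case: add rows of tiles along one axis at a time (an assumed
  // growth order; the PR may distribute tiles differently), keeping
  // tileCount[i] * tileDims[i] within maxSize, until capacity >= maxTileCount.
  const tileCounts = { x: 1, y: 1, z: 1 };
  const capacity = () => tileCounts.x * tileCounts.y * tileCounts.z;
  let axis = 0;
  while (capacity() < maxTileCount) {
    const a = axes[axis % 3];
    if ((tileCounts[a] + 1) * tileDims[a] <= maxSize) {
      tileCounts[a] += 1;
    }
    axis += 1;
    if (
      axis % 3 === 0 &&
      axes.every((k) => (tileCounts[k] + 1) * tileDims[k] > maxSize)
    ) {
      break; // no axis can grow further; the tiles simply won't all fit
    }
  }
  return {
    x: tileCounts.x * tileDims.x,
    y: tileCounts.y * tileDims.y,
    z: tileCounts.z * tileDims.z,
  };
}
```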
Other changes
- Updated ContextLimits documentation and specs to assume the default WebGL2
- ContextLimits.maximum3DTextureSize
- New Texture3D.prototype.copyFrom method, following the similar method from Texture
- Updated the Voxels Sandcastle example
Issue number and link
Resolves #12570
Testing plan
Run all specs locally. Changes in ShaderBuilder should not affect non-voxel rendering, but we should verify this.
Load all voxel-related Sandcastles, and verify:
Author checklist
- I have added my name to CONTRIBUTORS.md
- I have updated CHANGES.md with a short summary of my change