
Conversation

@jjhembd
Contributor

@jjhembd jjhembd commented Dec 5, 2025

Description

This PR improves memory efficiency in VoxelPrimitive by reworking Megatexture to use the Texture3D class from #12611.

Megatextures

Unlike Cesium3DTileset, which renders each tile with a separate DrawCommand, voxel tilesets are rendered with a single draw command for all tiles. This is necessary because of the raymarching approach: foreground tiles affect the rendering of background tiles. As a result, all tiles in the scene must be loaded into a single texture, via the Megatexture class. Different tiles are assigned different regions within the same Megatexture.

Previous Megatexture implementation

Prior to this PR, the Megatexture implementation was backed by a 2D texture. 3D voxel tiles were split into 2D slices, which were then arranged across the 2D texture. This structure made VoxelPrimitive compatible with WebGL1 contexts, but had some drawbacks:

  • Shader complexity: To render a voxel with linear interpolation, we couldn't rely on hardware sampling; we had to explicitly sample 4 points from two different slices and interpolate them in the shader.
  • Memory inefficiency: The 2D texture was always allocated as a square whose width and height were powers of two. This restriction, combined with the complexity of slicing 3D tiles onto a 2D surface, resulted in some datasets allocating as much as 75% more texture memory than needed.

New Megatexture implementation

This PR reworks Megatexture to use Texture3D and removes restrictions on the size of the texture. This simplifies the shader code, since we can rely directly on the built-in graphics APIs for both linear and nearest sampling. It also allows the texture to be allocated to more closely fit the size of the actual data.

How 3D Textures are sized

The data for each tile is like a 3D box, and the Megatexture is like a bin into which we are packing the boxes. Bin packing in general is a hard optimization problem. The approach used here starts from some simplifying assumptions:

  • All tiles are the same size (it is an implicit octree)
  • For a given maximum number of tiles, allocating <25% unused space is "good enough"
  • For a given hard memory limit, utilizing >80% of the limit is "good enough"

All sizing is done based on the maximum number of tiles, which is either a value from the VoxelProvider, or the available memory divided by the memory size of one tile.
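
As a rough illustration of the fallback path (hypothetical numbers, not taken from this PR or any particular tileset):

```js
// Hypothetical example: one float32 property on 64x64x64-voxel tiles.
const bytesPerTile = 64 * 64 * 64 * 4; // 1,048,576 bytes, about 1 MiB per tile
const memoryLimitBytes = 512 * 1024 * 1024; // assumed 512 MiB budget for this property
const maximumTileCount = Math.floor(memoryLimitBytes / bytesPerTile); // 512 tiles
```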

We first check for a special case: if all of the tiles can fit in a single 1x1xN stack, without the long dimension exceeding GL_MAX_3D_TEXTURE_SIZE, then the texture is allocated to that size. This guarantees no wasted space, because this shape can be made to fit the tiles exactly. We make sure to stack the tiles along the smallest tile dimension, to increase the chances of achieving this optimal case.
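
For example (hypothetical numbers): 8 tiles of 50x40x30 voxels, stacked along the 30-voxel axis, fit exactly in a 50x40x240 texture, provided 240 does not exceed GL_MAX_3D_TEXTURE_SIZE.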

If more than one row of tiles is needed (GL_MAX_3D_TEXTURE_SIZE tends to be relatively small), we proceed as follows (a sketch of this heuristic follows the list):

  1. Find a factorization of the tile count into a product of small primes (2, 3, 5, or 7), rounding the tile count up to the nearest such composite number as needed. (There are some further simplifications: we are not checking every possible product of those primes.)
  2. Distribute the factors into 3 dimensions, starting from the axis of the smallest tile dimension, subject to the constraint tileCount[i] * tileDimension[i] < GL_MAX_3D_TEXTURE_SIZE.
  3. Compute the resulting texture size as textureSize[i] = tileCount[i] * tileDimension[i].
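
A minimal sketch of this heuristic is below. This is illustrative only, not the actual Megatexture code: the function name, the exact rounding of the tile count, and the order in which factors are assigned to axes are assumptions made for the example.

```js
// Illustrative sketch of the sizing heuristic described above (not the real implementation).
// tileDims: per-tile voxel dimensions; maxTextureSize stands in for GL_MAX_3D_TEXTURE_SIZE.
function computeTextureDimensions(tileCount, tileDims, maxTextureSize) {
  // Visit axes from smallest tile dimension to largest, so stacking starts on the smallest axis.
  const axes = [0, 1, 2].sort((a, b) => tileDims[a] - tileDims[b]);

  // Special case: a 1x1xN stack along the smallest axis wastes no space.
  if (tileCount * tileDims[axes[0]] <= maxTextureSize) {
    const counts = [1, 1, 1];
    counts[axes[0]] = tileCount;
    return counts.map((c, i) => c * tileDims[i]);
  }

  // Round the tile count up to the nearest number whose only prime factors are
  // 2, 3, 5, or 7, and collect those factors.
  let n = tileCount;
  let factors;
  for (;; n++) {
    factors = [];
    let m = n;
    for (const p of [2, 3, 5, 7]) {
      while (m % p === 0) {
        factors.push(p);
        m /= p;
      }
    }
    if (m === 1) {
      break;
    }
  }

  // Distribute factors across the three axes, smallest tile dimension first,
  // keeping tileCount[i] * tileDimension[i] within the texture size limit.
  const counts = [1, 1, 1];
  factors.sort((a, b) => b - a); // place larger factors first
  for (const f of factors) {
    const axis = axes.find((i) => counts[i] * f * tileDims[i] <= maxTextureSize);
    if (axis === undefined) {
      throw new Error("Tiles do not fit within the texture size limit");
    }
    counts[axis] *= f;
  }

  return counts.map((c, i) => c * tileDims[i]);
}

// Example: computeTextureDimensions(100, [50, 40, 30], 2048) returns [50, 80, 1500],
// a texture holding exactly 1 x 2 x 50 = 100 tiles of 50x40x30 voxels.
```

The real implementation may distribute factors differently (for example, to keep the texture closer to a cube), but the capacity and limit checks follow the same idea.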

Other changes

  • Updated ContextLimits documentation and specs to assume the default WebGL2
  • Added ContextLimits.maximum3DTextureSize
  • Added Texture3D.prototype.copyFrom method, following the similar method from Texture
  • Cleaned up Voxels Sandcastle example
  • Removed some WebGL1 workarounds from voxel shaders (voxels have required WebGL2 since #12933, "Use eye coordinates for voxel raymarching")

Issue number and link

Resolves #12570

Testing plan

Run all specs locally. Changes in ShaderBuilder should not affect non-voxel rendering, but we should verify this.

Load all voxel-related Sandcastles, and verify:

  • Rendering is unchanged
  • Framerate is not affected (let me know if it improves!)
  • Picking works
  • Nearest and linear sampling both work as expected (Open the VoxelInspector to toggle this)

Author checklist

  • I have submitted a Contributor License Agreement
  • I have added my name to CONTRIBUTORS.md
  • I have updated CHANGES.md with a short summary of my change
  • I have added or updated unit tests to ensure consistent code coverage
  • I have updated the inline documentation, and included code examples where relevant
  • I have performed a self-review of my code

@github-actions

github-actions bot commented Dec 5, 2025

Thank you for the pull request, @jjhembd!

✅ We can confirm we have a CLA on file for you.

@jjhembd jjhembd marked this pull request as ready for review December 8, 2025 15:16
Contributor Author

@jjhembd jjhembd left a comment


I added some review comments in key areas, and in places where I need feedback.

/**
* The maximum number of texture units that can be used from the vertex and fragment
- * shader with this WebGL implementation. The minimum is eight. If both shaders access the
+ * shader with this WebGL implementation. The minimum is 32. If both shaders access the
Contributor Author


All these comments have been updated to the WebGL2 minimums, since we now default to a WebGL2 context.

Member


When defaulting to a WebGL2 context, is there still a fallback path for devices that only support WebGL 1? I understand voxel rendering will require WebGL 2, but other CesiumJS APIs can run with just WebGL 1?

If not, just wondering if the comments should reflect the minimums for browsers and hardware that CesiumJS supports, rather than the minimums for the preferred/default WebGL 2.

(or if deprecating WebGL 1 support more broadly is on the roadmap, maybe that changes things!)

} = options;

- if (!context.webgl2) {
+ if (!context.webgl2 && !defined(context.options.getWebGLStub)) {
Contributor Author


We need to allow a WebGL stub, since that is what we use for testing. Much of the following code can still be reasonably tested in a stub.
I'm open to feedback on where to allow the stub. Should we try to set a .webgl2 property on the stubbed context? Or, since WebGL2 is now the default, should we be switching to a context.webgl1 flag?

Check.typeOf.number.greaterThan("width", width, 0);

- if (width > ContextLimits.maximumTextureSize) {
+ if (width > ContextLimits.maximum3DTextureSize) {
Contributor Author


This appears to have been an oversight in #12611. It matters for large textures: maximum3DTextureSize tends to be smaller than maximumTextureSize.

* }
* });
*/
Texture3D.prototype.copyFrom = function (options) {
Contributor Author


Compare to Texture.copyFrom. The Texture3D version here supports fewer source data types.
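
Usage looks roughly like the following sketch, by analogy with Texture.copyFrom; the depth and zOffset names here are illustrative assumptions, so check the inline documentation for the exact options:

```js
// Assumed options shape, mirroring Texture.copyFrom; actual names may differ in the PR.
texture3D.copyFrom({
  source: {
    width: 1,
    height: 1,
    depth: 1, // assumption: a depth field alongside width/height for 3D sources
    arrayBufferView: new Uint8Array([255, 0, 0, 255]),
  },
  xOffset: 0,
  yOffset: 0,
  zOffset: 0, // assumption: offset into the third dimension
});
```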

Check.typeOf.bool("nearestSampling", nearestSampling);
//>>includeEnd('debug');
if (this._nearestSampling === nearestSampling) {
return;
Contributor Author


The texture.sampler setter makes some GL calls, so when possible, we exit early to avoid calling it.

return PixelFormat.RGB;
} else if (channelCount === 4) {
return PixelFormat.RGBA;
function getPixelFormat(channelCount) {
Contributor Author


This function now assumes WebGL2

Member


Is the Megatexture concept used beyond voxel rendering? I saw one reference in KeyframeNode.js so just wanted to check if that affects WebGL1 support anywhere else.

* @returns {Cartesian3} The computed 3D texture dimensions.
*/
- Megatexture.getApproximateTextureMemoryByteLength = function (
+ Megatexture.get3DTextureDimension = function (
Contributor Author


This function is not used outside the class, but we expose it for testing

inputDimensions,
types,
componentTypes,
// TODO: refine this. If provider.maximumTileCount is not defined, we will always allocate 512 MB per metadata property.
Contributor Author


Seeking feedback here. What if the tileset has 10 metadata properties? This would allocate 5 GB. Should we instead default to a total memory budget per tileset?


it("shows texture memory allocation statistic", function () {
- expect(traversal.textureMemoryByteLength).toBe(textureMemoryByteLength);
+ expect(traversal.textureMemoryByteLength).toBe(32);
Contributor Author


The new allocation is smaller if the data is smaller than the suggested byte length

Member

@donmccurdy donmccurdy left a comment


Still reading through things, but just a couple initial comments!


const sources = shaderSource.sources;
if (defined(sources)) {
- for (i = 0, length = sources.length; i < length; ++i) {
+ for (let i = 0; i < sources.length; ++i) {
Member


Thanks for the cleanup here! Fond memories of being bitten by function-scoped 'var i' loop iterators in the bad old days. :)


@ggetz
Contributor

ggetz commented Dec 10, 2025

For context on the matter of WebGL 2/WebGL 1 support: while we're not likely to fully deprecate WebGL 1 support in the near future, our implicit policy is that newer features like voxels don't need backwards compatibility with WebGL 1. The rationale is that we default to WebGL 2 at this point, and we want to be able to take advantage of newer features available to us, like 3D textures.

Additionally, the voxel APIs in particular are marked as "experimental", which means they are subject to breaking changes without deprecation.

(The latter is documented in our Coding Guide. Perhaps we should also add a note about our WebGL 1/2 "policy".)
