This is an overview of the planned changes for v7. The actual implementation details are subject to change during development.
Feel free to leave feedback, questions or suggestions.
The Current Design
The API in postprocessing is based on three's postprocessing examples; at its core it provides an EffectComposer that uses a WebGLRenderer to render passes. A Pass performs a set of tasks to either render consumable textures or to draw the final result to screen. The RenderPass renders the scene colors to a texture and serves as a starting point for most render pipelines.
A few years ago, #82 introduced the EffectPass, which can merge Effect instances into a single fullscreen pass for optimal shader performance. Since then, passes and effects have been added and improved, but some effects like SSR and motion blur are still missing. This is because the current API doesn't provide a good way to implement such effects efficiently. Modern effects need additional scene geometry data such as depth, normals, positions, roughness, velocity, etc. This currently requires the scene to be rendered multiple times using three's limited material override system.
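For example, obtaining normals today usually means a second full scene render with three's override material (a simplified sketch; normalRenderTarget is a stand-in name):

```ts
// Render the scene a second time with an override material to obtain normals.
// This roughly doubles the scene rendering cost for a single extra data texture.
scene.overrideMaterial = new MeshNormalMaterial();
renderer.setRenderTarget(normalRenderTarget);
renderer.render(scene, camera);
scene.overrideMaterial = null;
```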
Problems
- The EffectComposer creates two internal render targets to store intermediate results.
  - The purpose of this is to avoid reading from and rendering to the same buffer (feedback loop).
  - The composer provides access to these buffers via the render method of the passes.
  - Passes need to tell the composer whether these buffers should be swapped when they're done (see the sketch after this list).
  - The multisampling (MSAA) setting affects both of these buffers because the composer doesn't know which of them will actually be used by a RenderPass.
- The composer provides a DepthTexture to passes that need one, but this feature is partially broken and too limited.
- Passes and effects create several render targets internally, which makes optimizations impossible.
- Some passes and effects re-render the main scene to obtain additional data or to mask objects based on depth.
  - This leads to poor performance because rendering the main scene is expensive.
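Roughly, the current swap mechanism works like this (a simplified sketch, not the actual implementation; bufferA and bufferB stand for the composer's internal render targets):

```ts
// The v6 composer ping-pongs between two internal render targets.
let inputBuffer = bufferA;
let outputBuffer = bufferB;

for(const pass of passes) {

	pass.render(renderer, inputBuffer, outputBuffer, deltaTime, stencilTest);

	if(pass.needsSwap) {

		// The next pass must read what this pass has just rendered.
		const tmp = inputBuffer;
		inputBuffer = outputBuffer;
		outputBuffer = tmp;

	}

}
```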
Implementation Goals
The buffer management in postprocessing needs to become more sophisticated to support modern requirements.

- Rename RenderPass to GeometryPass.
- Replace EffectComposer with a more lightweight RenderPipeline class.
  - Pipelines are used to group and run passes.
  - The user may create many pipelines. Resources may be shared if they use the same renderer.
  - Common setups only require one pipeline that contains a ClearPass, a GeometryPass and one or more EffectPass instances.
  - The first GeometryPass in a pipeline will be considered the main pass (regarding the main scene & camera).
- Passes and effects declare input and output resources.
  - input will include uniforms and textures (alias buffers).
  - output will include uniforms and renderTargets (alias buffers).
  - Resources are declared as key-value pairs.
    - Keys are strings and values are either Texture or WebGLRenderTarget.
    - Keys can also be of type GBuffer (string enum).
    - GBuffer inputs will be filled with the actual textures at runtime.
  - Input and Output both define BUFFER_DEFAULT which will be used to auto-connect passes (see the sketch after this list).
  - All GBuffer textures must be rendered by the GeometryPass with MRT.
  - The configuration of the render targets is controlled by the passes and effects that produce them.
  - Declaring input buffers in effects automatically makes these textures available to the respective effect shaders.
  - Inputs and outputs of internal effects or passes must be propagated to the outermost pass.
- Render targets will be managed by a BufferManager (a shared private static instance in RenderPipeline).
  - This manager reacts to configuration changes and determines whether render targets can be shared among passes.
  - Buffer changes are tracked via events that are dispatched automatically for lifecycle hooks.
- The flags renderToScreen and needsSwap will be removed.
  - The parameters of interface methods will be reduced to the bare minimum.
  - The last pass in a pipeline will render to screen (default output buffer is null) if pipeline.autoRenderToScreen is true (default).
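How the BUFFER_DEFAULT auto-connection could look internally (a sketch; the Input and Output references and the exact wiring are assumptions based on the list above):

```ts
// The pipeline connects each pass's default input to the previous pass's
// default output. A null render target means the pass renders to screen.
const target = previousPass.output.buffers.get(Output.BUFFER_DEFAULT);
nextPass.input.buffers.set(Input.BUFFER_DEFAULT, target === null ? null : target.texture);
```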
The general IO concept is similar to other node-based systems, like Blender's shader nodes, which allow users to define named inputs and outputs. Three's built-in materials must be modified with onBeforeCompile to use MRT effectively (possibly the biggest challenge). Since MRT requires WebGL 2, effects that make use of the GBuffer may use GLSL 300.
Use Case Examples
Common Setup
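A minimal sketch of such a setup; the RenderPipeline constructor, the GeometryPass arguments and the render(deltaTime) call are assumptions at this stage:

```ts
const pipeline = new RenderPipeline(renderer);
pipeline.addPass(new ClearPass());
pipeline.addPass(new GeometryPass(scene, camera));
pipeline.addPass(new EffectPass(new ExampleEffect()));

// With autoRenderToScreen enabled (default), the last pass renders to screen.
pipeline.render(deltaTime);
```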
Multiple Scenes
The first GeometryPass in a pipeline produces the GBuffer. Other GeometryPass instances in the same pipeline render to the same GBuffer. To render to separate GBuffers, multiple pipelines must be created.
```ts
pipeline.addPass(new ClearPass());
pipeline.addPass(mainPass);
pipeline.addPass(hudPass); // Renders to the same buffer as mainPass by default.
pipeline.addPass(effectPass);
```

```ts
// defaultBuffer is an alias for output.buffers.get(Output.BUFFER_DEFAULT)
hudPass.output.defaultBuffer = effectPass.output.defaultBuffer;

pipeline.addPass(new ClearPass());
pipeline.addPass(mainPass);
pipeline.addPass(effectPass);
pipeline.addPass(hudPass); // Renders to the same buffer as effectPass.
```
IO Management

```ts
class ExamplePass extends Pass {

	// Temporary buffers are outputs with private names.
	// Buffer names will be prefixed internally to avoid collisions.
	private static BUFFER_TMP_0 = "buffer.tmp0";
	private static BUFFER_TMP_1 = "buffer.tmp1";

	constructor() {

		super();

		this.input.buffers.set(ExamplePass.BUFFER_TMP_0, null);
		this.input.buffers.set(ExamplePass.BUFFER_TMP_1, null);

		// input.defaultBuffer will automatically be set to previousPass.output.defaultBuffer.texture
		this.output.defaultBuffer = new WebGLRenderTarget(...);
		this.output.buffers.set(ExamplePass.BUFFER_TMP_0, new WebGLRenderTarget(...));
		this.output.buffers.set(ExamplePass.BUFFER_TMP_1, new WebGLRenderTarget(...));

		...

	}

	protected override onInputChange(): void {

		this.copyMaterial.inputBuffer = this.input.buffers.get(ExamplePass.BUFFER_TMP_1);

	}

	override onResolutionChange(resolution: Resolution): void {

		const { width, height } = resolution;
		this.output.buffers.get(ExamplePass.BUFFER_TMP_0).setSize(width, height);
		this.output.buffers.get(ExamplePass.BUFFER_TMP_1).setSize(width, height);
		this.output.setChanged();

	}

	render(): void {

		const { renderer, output } = this;

		// First step: process the default input into tmp0.
		this.fullscreenMaterial = this.customMaterial;
		this.customMaterial.inputBuffer = this.input.defaultBuffer;
		renderer.setRenderTarget(output.buffers.get(ExamplePass.BUFFER_TMP_0));
		this.renderFullscreen();

		// Second step: process tmp0 into tmp1.
		this.customMaterial.inputBuffer = this.input.buffers.get(ExamplePass.BUFFER_TMP_0);
		renderer.setRenderTarget(output.buffers.get(ExamplePass.BUFFER_TMP_1));
		this.renderFullscreen();

		// Finally, copy tmp1 to the default output buffer.
		this.fullscreenMaterial = this.copyMaterial;
		renderer.setRenderTarget(output.defaultBuffer);
		this.renderFullscreen();

	}

}
```
GBuffer Usage
```ts
class ExampleEffect extends Effect {

	private static BUFFER_TMP = "buffer.tmp";

	constructor() {

		super();

		this.fragmentShader = fragmentShader;
		this.uniforms.set(..., ...);

		this.input.buffers.set(GBuffer.DEPTH, null);
		this.input.buffers.set(GBuffer.NORMAL, null);
		// GeometryPass provides optimization options for things like normal-depth downsampling.
		//this.input.buffers.set(GBuffer.NORMAL_DEPTH, null);
		this.input.buffers.set(ExampleEffect.BUFFER_TMP, null);

		// Note: Using the default output buffer in an Effect would result in an error.
		this.output.buffers.set(ExampleEffect.BUFFER_TMP, new WebGLRenderTarget(1, 1, { depthBuffer: false }));

		this.exampleMaterial = ...;

	}

	protected override onInputChange(): void {

		// Refresh uniforms...
		const buffers = this.input.buffers;
		this.exampleMaterial.depthBuffer = buffers.get(GBuffer.DEPTH);
		this.exampleMaterial.normalBuffer = buffers.get(GBuffer.NORMAL);
		this.uniforms.get("exampleBuffer").value = buffers.get(ExampleEffect.BUFFER_TMP);

	}

	...

}
```
Effect Shader Changes
Geometry Data
Effects have access to the geometry data of the current fragment via the data parameter of the mainImage function. The EffectPass detects whether an effect reads a value from this struct and only fetches the relevant data from the respective textures when it's actually needed. Sampling depth at another coordinate can be done via float readDepth(in vec2 uv). To calculate the view Z based on depth, the function float getViewZ(in float depth) can be used. GData bundles the geometry data of the current fragment.
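A sketch of what GData could contain, based on the GBuffer components mentioned earlier (the actual fields are subject to change):

```glsl
// Sketch only: field names and types are assumptions derived from the
// geometry data listed above; the final struct may differ.
struct GData {

	float depth;     // G-Buffer depth of the current fragment
	vec3 normal;     // surface normal
	vec3 position;   // reconstructed position
	float roughness; // material roughness
	vec2 velocity;   // screen space motion vector

};
```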
Uniforms, Macros and Varyings
All shaders have access to a set of common uniforms, and the fragment shader has access to additional ones. A number of varyings and vertex attributes are reserved. If the main camera is a PerspectiveCamera, the macro PERSPECTIVE_CAMERA will be defined, and FRAMEBUFFER_PRECISION_HIGH will be defined when the frame buffer uses a high precision type.
> The composer provides a DepthTexture to passes that need one, but this feature is partially broken and too limited.

Just want to emphasize my excitement for changes here in particular -- missing/glitchy depth textures have been a major pain point for me over the past couple of months and it will be awesome to have that smoothed out a bit.
Another one is the number of variables and methods in pp marked private that I've had to reach into (many of which are also used in the example demos), which can get quite ugly in a fully-typed codebase. I think more conservative use of private variables would be worth considering.
> Another one is the number of variables and methods in pp marked private that I've had to reach into (many of which are also used in the example demos)

The demos are outdated in that regard. Most settings have been made available through getters/setters in the past; the new manual contains up-to-date examples, but it's still incomplete and will be published as part of v7.
will be defined.The text was updated successfully, but these errors were encountered: