-
Going by the Unity docs, there are basically two approaches to handling sRGB input colors and textures:
1. **Gamma workflow:** a viable choice when you're only dealing with unlit materials, because there are no lighting calculations involved. You could convert to linear and back, but you could also just render the colors and images as they are. Post-processing effects can still produce good results, but they will technically be wrong.
2. **Linear workflow:** the correct and recommended approach, but it's harder to wrap your head around, and there are some issues in WebGL that make it less appealing for development.

Note that sRGB conversion is simply a more accurate version of gamma correction. Three.js supports gamma correction for textures by converting them manually inside the shaders. It was probably implemented that way because WebGL didn't support hardware sRGB decoding without an extension. This solution has a slight impact on performance.
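For reference, the manual decoding mentioned above boils down to the piecewise sRGB transfer function. A minimal JavaScript sketch (function names are my own, not part of three.js or this library):

```javascript
// Piecewise sRGB EOTF: converts an sRGB-encoded channel value in [0, 1]
// to linear-light. This is the same math a shader performs per texel when
// decoding sRGB textures manually.
function sRGBToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Inverse transfer function: linear-light back to sRGB encoding,
// used when writing the final image to the screen.
function linearToSRGB(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
```

The common shortcut `pow(c, 2.2)` only approximates this curve, which is why sRGB conversion is described above as a more accurate version of gamma correction.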
Gamma correction, or sRGB output encoding, should be the final step when rendering to screen. This library supports this via three's built-in output encoding system. Tone mapping is used to map HDR colors to LDR for output on conventional monitors. Lighting calculations as well as post-processing effects can produce color values outside the displayable range; if you don't use tone mapping, those values will be clamped and the original brightness information will be lost. Tone mapping should be performed late in the pipeline, but before color correction, because most LUTs assume LDR/RGBA8 input colors. It's also worth mentioning that you'll need medium-precision buffers for the linear color space workflow, as outlined in the readme, to avoid losing precision between passes.
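The ordering described above (tone map the HDR values first, then encode for the screen as the very last step) can be sketched with a simple Reinhard operator. This is an illustrative stand-in, not the library's actual tone mapper:

```javascript
// Reinhard tone mapping: compresses an unbounded HDR linear value into [0, 1).
// Values far above 1.0 are rolled off smoothly instead of being clamped,
// which preserves relative brightness information.
function reinhard(x) {
  return x / (1 + x);
}

// sRGB output encoding for a linear channel value in [0, 1].
function linearToSRGB(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}

// Tail of the pipeline: tone map the HDR color, (apply any LDR color
// correction LUT here), then encode to sRGB as the final step.
function toScreen(hdr) {
  return linearToSRGB(reinhard(hdr));
}
```

Without the tone mapping step, an HDR value like 10.0 would simply clamp to white; with it, the value lands just below 1.0 and stays distinguishable from even brighter inputs.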
-
This is a question that's been on my mind for a while, and researching it has led me down quite a rabbit hole of color theory. Interesting stuff, but I'm still not sure of the answer 😅
My current understanding is that most processing passes (in Unity or Unreal, for example) assume that the input is in linear space. If that's true for this library, then the correct process is to leave the three.js output in linear, do all the passes in linear (except noise/grain, which should be last), then do an sRGB transform followed by tone mapping as the final pass. Is that correct?