When do we need to use renderer.outputEncoding = THREE.sRGBEncoding - three.js

I'm a newbie to three.js. I have been learning it by building simple scenes and studying how the official examples work.
Recently I have been looking at https://threejs.org/examples/?q=trans#webgl_materials_physical_transmission, and I couldn't understand exactly why the code needs to set renderer.outputEncoding = THREE.sRGBEncoding here. In simpler scenarios, such as loading JPGs as textures on cubes, the JPG images look just fine without setting outputEncoding on the renderer.
I tried googling related topics such as gamma correction, and found people saying that most images on the web are gamma-encoded in the sRGB color space, but I couldn't connect all the dots myself... I would be most grateful if anyone could explain this clearly to me.

If you do not touch renderer.outputEncoding, your app uses no color space workflow at all. The engine then assumes that all input color values are in linear space, and the final color value of each fragment is not transformed into an output color space.
This kind of workflow is problematic for several reasons. One is that your JPG texture is most likely sRGB-encoded, as are many other textures. To compute proper colors in the fragment shader, all input color values have to be converted into the same working color space, which is linear color space. If you don't care about color spaces, you quickly end up with wrong output colors.
For simple apps this detail often does not matter because the final image "just looks good". However, depending on what you are doing in your app (e.g. when importing glTF assets), a proper color space workflow is mandatory.
By setting renderer.outputEncoding = THREE.sRGBEncoding you tell the renderer to convert the final color value in the fragment shader from linear to sRGB color space. Consequently, you also have to tell the renderer when textures hold sRGB-encoded data, by assigning THREE.sRGBEncoding to the texture's encoding property. THREE.GLTFLoader does this automatically for all color textures, but when loading textures manually you have to do it yourself.
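A minimal sketch of this workflow, using the older outputEncoding API that this thread discusses (the texture file name is just a placeholder):
// Output: convert the final linear color of each fragment to sRGB for display.
renderer.outputEncoding = THREE.sRGBEncoding;
// Input: flag a manually loaded color texture as sRGB-encoded so the
// renderer decodes it to linear space before lighting is computed.
const texture = new THREE.TextureLoader().load( 'diffuse.jpg' );
texture.encoding = THREE.sRGBEncoding;
const material = new THREE.MeshStandardMaterial( { map: texture } );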

Related

Is it possible to attain something like indexed transparency using three.js?

I am struggling with the common transparency sorting issue. I know there are ways around it (like manually sorting the objects, or order-independent transparency), but all of that can become quite fiddly. I'd be OK with a way to have objects that are partly opaque and partly 100% transparent and have them intersect correctly.
In theory this should be possible: opaque pixels would be rendered to the color buffer and z-buffer in the standard way, and transparent pixels would simply be left out.
What I'm looking for is something like the indexed transparency of GIF files, where for instance all pixels of an object that have the color #FF00FF are not rendered.
I just don't know if and how this is possible using three.js. Also, I want to be able to use it with custom shaders.
EDIT: Thanks for your comments so far, and sorry for the confusion. This is more of a conceptual question than a specific problem with my code. I am often faced with parts of transparent objects cutting out parts of other transparent objects that should appear in front of them. Transparent objects also do not intersect correctly; one always covers the other. I understand why this happens and that it is inherent to the way transparency is handled. But often I only need parts of an object to be completely transparent, with no partial shine-through alpha. That would be possible if there were a way to leave out certain pixels of an object and render the rest like a normal opaque object.
Let's assume I want to render a metal chain where each segment is a PlaneGeometry with a texture showing the shape of an O (with the rest transparent). The chain should then be shown with correct interlinkage, so to speak.
Any help welcome!
Cheers!
If you are rendering a three.js scene and your texture maps contain no partially-transparent pixels -- that is, each pixel is either 100% opaque or 100% transparent -- then you can achieve a proper rendering by setting
material.alphaTest = 0.5;
//material.transparent = true; // likely not needed
The same holds true if you are using a binary alpha-map.
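For instance, a minimal sketch of that setup for the chain-link case from the question (the texture file name is hypothetical):
// Each chain link: a plane whose texture alpha is either 0 or 1.
const texture = new THREE.TextureLoader().load( 'chain-link.png' );
const material = new THREE.MeshBasicMaterial( {
    map: texture,
    alphaTest: 0.5,          // fragments with alpha < 0.5 are discarded
    side: THREE.DoubleSide   // show the link from both sides
} );
const link = new THREE.Mesh( new THREE.PlaneGeometry( 1, 1 ), material );
scene.add( link );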
If you are writing a custom shader, then you can achieve the same effect by using a pattern like the following in your fragment shader:
if ( texelColor.a < 0.5 ) discard;
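Wrapped into a complete (hypothetical) ShaderMaterial, that pattern might look like this:
const material = new THREE.ShaderMaterial( {
    uniforms: { map: { value: texture } },
    vertexShader: `
        varying vec2 vUv;
        void main() {
            vUv = uv;
            gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        }
    `,
    fragmentShader: `
        uniform sampler2D map;
        varying vec2 vUv;
        void main() {
            vec4 texelColor = texture2D( map, vUv );
            if ( texelColor.a < 0.5 ) discard; // leave fully transparent pixels out
            gl_FragColor = texelColor;
        }
    `
} );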
three.js r.84

Small sample in opengl es 3, wrong gamma correction

I have a small sample, es-300-fbo-srgb, that is supposed to show how to manage gamma correction in OpenGL ES 3.
Essentially I have:
a GL_SRGB8_ALPHA8 texture TEXTURE_DIFFUSE
a framebuffer with another GL_SRGB8_ALPHA8 texture on GL_COLOR_ATTACHMENT0 and a GL_DEPTH_COMPONENT24 texture on GL_DEPTH_ATTACHMENT
the back buffer of my default fbo is GL_LINEAR
GL_FRAMEBUFFER_SRGB initially disabled.
I get [screenshot: incorrect result] instead of [screenshot: expected result].
Now, if I recap the display method, this is what I do:
I render the TEXTURE_DIFFUSE texture to the sRGB fbo. Since the source texture is in sRGB space, my fragment shader automatically reads linear values and writes them to the fbo. The fbo should now contain linear values, even though its format is sRGB, because GL_FRAMEBUFFER_SRGB is disabled and therefore no linear-to-sRGB conversion is performed on write.
I blit the content of the fbo to the back buffer of the default fbo (through a program). But since this fbo's texture has an sRGB format, a wrong gamma operation is performed on the values read from it: they are assumed to be in sRGB space when they are not.
A second gamma operation is performed by my monitor when it displays the content of the default fbo.
So my image is, if I am right, wrong twice over.
Now, if I call glEnable(GL_FRAMEBUFFER_SRGB), I get instead [screenshot: over-corrected result].
The image looks like it has been sRGB-corrected too many times.
If I instead leave GL_FRAMEBUFFER_SRGB disabled and change the format of the GL_COLOR_ATTACHMENT0 texture of my fbo, I finally get the right image.
Why do I not get the correct image with glEnable(GL_FRAMEBUFFER_SRGB);?
I think you are basically right: you get the net effect of two decoding conversions where one (the one in your monitor) would be enough. I suppose either your driver or your code breaks something, so OpenGL doesn't 'connect the dots' properly; perhaps this answer helps you:
When to call glEnable(GL_FRAMEBUFFER_SRGB)?
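As an aside, in OpenGL ES 3.0 itself (and in WebGL2) there is no GL_FRAMEBUFFER_SRGB toggle: writes to an sRGB color attachment are always encoded linear-to-sRGB, and samples from an sRGB texture are always decoded to linear. A rough WebGL2 sketch of the sRGB fbo setup, with names assumed for illustration:
const gl = canvas.getContext( 'webgl2' );
// Color attachment with an sRGB internal format: shader writes into it
// are automatically encoded linear -> sRGB; reads decode sRGB -> linear.
const fboTex = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, fboTex );
gl.texStorage2D( gl.TEXTURE_2D, 1, gl.SRGB8_ALPHA8, width, height );
const fbo = gl.createFramebuffer();
gl.bindFramebuffer( gl.FRAMEBUFFER, fbo );
gl.framebufferTexture2D( gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fboTex, 0 );
// The default framebuffer is linear, so the final pass must not apply a
// second encoding step.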

How can I dull the reflection effect of an environment map (ie. make it blurred / matte)?

I'm currently rendering a skybox to a THREE.CubeCamera target and am then using that target as the environment map on a material. The idea is that I want the colour of a cube to be affected by the colour of the sky around it without fully reflecting it (like how a white matte cube would look in the real world).
For example, here is what I have so far, applying the environment map to a THREE.MeshLambertMaterial or THREE.MeshPhongMaterial with reflectivity set to 0.7 (same results):
Notice in the first image that the horizon line is clearly visible (this is at sunset when it's most obvious) and that the material is very reflective. The second image shows the same visible horizon line, which moves with the camera as you orbit. The third image shows the box at midday with blue sky above it (notice how the blue is reflected very strongly).
The effect I'm trying to aim for is a duller, perhaps blurred representation of what we can already see working here. I want the sky to affect the cube but I don't want to fully reflect it, instead I want each side of the cube to have a much more subtle effect without a visible horizon line.
I've experimented with the reflectivity property of the materials without much luck. Yes, it reduces the reflection effect, but it also removes most of the colouring taken from the skybox. I've also tried the shininess property of THREE.MeshPhongMaterial, but that didn't seem to do much, if anything.
I understand that environment maps are meant to be reflections; however, my hope is that there is a way to achieve what I'm after. I want a reflection of the sky, I just need it to be much less precise and instead more blurred/matte.
What could I do to achieve this?
I achieved this by writing my own custom shader based on a physically based shading model.
I use the Cook-Torrance model, which takes the roughness of the material into account for the specular contribution. It's not a topic I can cover fully in this answer; you can find great references at http://graphicrants.blogspot.it/ in the specular BRDF article.
In this question you can find how I achieved the blurry reflection depending on material roughness.
Hope it helps.
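For reference, a minimal sketch of the same idea using three.js's built-in PBR material (a newer API than this thread; the cubeCamera name is assumed from the question):
// Pre-filter the cube camera's output so roughness-based blur works.
const pmrem = new THREE.PMREMGenerator( renderer );
const envRT = pmrem.fromCubemap( cubeCamera.renderTarget.texture );
const material = new THREE.MeshStandardMaterial( {
    color: 0xffffff,
    envMap: envRT.texture,
    roughness: 0.8, // higher roughness -> duller, blurrier reflection
    metalness: 0.0
} );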
I solved this by passing a different set of textures that were blurred to be the cubemap for the object.

Converting Exported Blender Textures to work with Three.JS

I'm trying to get a Blender model that I've exported to display properly, but it appears the texture for the leaves isn't being blended correctly as an alpha (though the trunk itself works fine). Here's what I'm seeing:
Notice how the leaves aren't cut out correctly (i.e. it should look like a tree with leaves, not with gray sheets of paper).
In Blender the tree looks fine, but a few people have told me that it looks like my alpha is inverted (I'm not totally sure what that means). My guess is that, with a bit of file tweaking and conversion, I could get the attached images to work. Here are the image resources I've got:
I don't think it's necessary, but in case you want to see the exported JSON, I've dumped it here:
https://gist.github.com/funnylookinhat/5062061
I'm pretty sure the issue is the black-and-white image of the oak leaves, given that it's the only difference between the two packed textures. Is there a way I can work with it, or convert it so that it applies correctly to the layers of leaves?
UPDATE
I'm able to get something that looks mostly right (minus some weird transparency layering issues), but I'm pretty sure it isn't being done correctly... any help would still be much appreciated.
I added transparency to the white/black and green images, resulting in these:
Which resulted in the following:
Then I flipped the references to the two of them in the JSON, which resulted in this:
I'm 99% sure this isn't working as intended; it appears the diffuse map isn't being applied correctly... any suggestions?
Three.js has no mask textures (your black-and-white texture), so you need to bake the mask into the alpha channel of the diffuse texture (use the .png format, as you are currently doing; .jpg does not support alpha).
Your update is on the right track, although the diffuse alpha is poorly done (holes in the leaves). This can be done properly e.g. in GIMP by decomposing the diffuse color channels and then recomposing them with the added mask layer as alpha (note, however, that white is assumed to be opaque and black transparent, so an inversion might be needed).
In the material, don't use the mask texture at all. There might also be problems with leaves overlapping each other, which is a difficult problem to solve, as transparency in general is quite the PITA. You can try disabling the material's depthWrite and/or playing with alphaTest values (e.g. setting it to 0.1) to get different kinds of artifacts.
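A hedged sketch of the suggested material setup, assuming the mask has been baked into the PNG's alpha channel (the file name is hypothetical):
const leafTexture = new THREE.TextureLoader().load( 'oak-leaves.png' );
const leafMaterial = new THREE.MeshLambertMaterial( {
    map: leafTexture,
    transparent: true,
    depthWrite: false, // one option against self-overlap artifacts
    alphaTest: 0.1     // discard nearly transparent fragments
} );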

How to use UV mapping in three.js

I use a wood texture image in my model. By default my texture is stretched over the model; you can see this on woodark. When I change the repeat, the texture stretches even more, and I don't understand why. I've searched for a basic explanation of how to map the texture onto my model correctly, but I have only found examples using coloured pixels.
Thanks for any answers.
You should make sure your textures have power-of-two dimensions (i.e. 256x256, 512x512, etc.). Textures of arbitrary dimensions (NPOT) cause all kinds of mapping trouble in WebGL.
If you are unable to resize textures server-side, you can do it client-side. This link has some sample javascript code, as well as other relevant information: http://www.khronos.org/webgl/wiki/WebGL_and_OpenGL_Differences
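Once the texture is power-of-two, a minimal repeat setup might look like this (the file name and repeat counts are just examples):
const wood = new THREE.TextureLoader().load( 'wood-512.jpg' );
wood.wrapS = THREE.RepeatWrapping; // required for repeat to work
wood.wrapT = THREE.RepeatWrapping;
wood.repeat.set( 4, 4 );           // tile 4x along each axis instead of stretching
const material = new THREE.MeshPhongMaterial( { map: wood } );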
