If alpha is 0, RGB is 0 even with premultiplyAlpha set to false - three.js

I have a 3D model with an RGBA texture whose alpha values should be ignored (I need to use those alpha values differently, for some reflection effects).
The PNG spec says that colors are not premultiplied by alpha.
So, in shaders, we should be able to ignore the alpha values and display only the RGB colors.
I have tried many things, such as disabling premultipliedAlpha on the WebGLRenderer, the Texture, and the Material, but the RGB values are 0 wherever the alpha is 0.
In the example below, the models should have skin-colored hands and necks, but they appear white.
In the center, the texture is loaded natively by the FBXLoader.
On the right, the texture is loaded by the TextureLoader.
On the left, another texture without alpha is loaded by the TextureLoader for demonstration purposes.
For the ShaderMaterials, the fragment shader is:
uniform sampler2D map;
varying vec2 vUv;
void main() {
  // Ignore the texture's alpha and force full opacity
  gl_FragColor = vec4(texture2D(map, vUv).rgb, 1.0);
}
Codesandbox
What am I doing wrong? Is this a WebGL or three.js limitation?
PS: using 2 textures is not an option
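For reference, here is a minimal sketch of the three flags mentioned above (the file name and material type are illustrative, not taken from the question):

import * as THREE from 'three';

// Disable premultiplied alpha at every level three.js exposes it
const renderer = new THREE.WebGLRenderer({ premultipliedAlpha: false });

const texture = new THREE.TextureLoader().load('skin.png'); // illustrative file name
texture.premultiplyAlpha = false; // already the default for textures
texture.needsUpdate = true;

const material = new THREE.MeshBasicMaterial({ map: texture });
material.premultipliedAlpha = false; // also the default on materials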

Related

Three.js render depth from texture

Is it possible to somehow render to the depth buffer from a pre-rendered texture?
I am pre-rendering scenes like the original Resident Evil games, and I would like to apply both the pre-rendered depth and color textures to the screen.
I previously used the technique of making a simpler proxy scene for depth, but I am wondering if there is a way to use a precise pre-rendered depth texture instead.
three.js provides a DepthTexture class which can be used to save the depth of a rendered scene into a texture. Typical use cases for such a texture are post processing effects like Depth-of-Field or SSAO.
If you bind a depth texture to a shader, you can sample it like any other texture. However, the sampled depth value is sometimes converted to different representations for further processing. For instance you could compute the viewZ value (which is the z-distance between the rendered fragment and the camera) or convert between perspective and orthographic depth and vice versa. three.js provides helper functions for such tasks.
The official depth texture example uses these helper functions in order to visualize the scene's depth texture. The important function is:
float readDepth( sampler2D depthSampler, vec2 coord ) {
  float fragCoordZ = texture2D( depthSampler, coord ).x;
  float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
  return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}
In the example, the resulting depth value is used to compute the final color of the fragment.
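As a rough sketch of how such a depth texture might be set up on the JavaScript side (sizes and variable names here are illustrative, loosely following the official example):

import * as THREE from 'three';

// Assumed minimal setup; in practice these already exist in your app
const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(70, 1, 0.1, 50);

// Render target whose depth is captured into a DepthTexture
const target = new THREE.WebGLRenderTarget(1024, 1024);
target.depthTexture = new THREE.DepthTexture(1024, 1024);
target.depthTexture.type = THREE.UnsignedShortType;

// Render the scene into the target; target.depthTexture now holds its depth
renderer.setRenderTarget(target);
renderer.render(scene, camera);
renderer.setRenderTarget(null);

// target.depthTexture can then be bound to a post-processing shader,
// e.g. as the sampler passed to readDepth() above.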

OpenGL transparency in texture when render with stencil buffer

The question has been updated thanks to the comments.
Screenshot of how textures overlap
To draw 2 points with a brush texture, using the stencil buffer to avoid overlapping the textures' transparency, the following code is used:
glEnable(GL_STENCIL_TEST.gluint)
glClear(GL_STENCIL_BUFFER_BIT.gluint | GL_DEPTH_BUFFER_BIT.gluint)

// First point: always pass the stencil test and write 1 into the stencil buffer
glStencilOp(GL_KEEP.gluint, GL_KEEP.gluint, GL_REPLACE.gluint)
glStencilFunc(GL_ALWAYS.gluint, 1, 1)
glStencilMask(1)
glDrawArrays(GL_POINTS.gluint, 0, 1)

// Second point: draw only where the stencil value is not 1
glStencilFunc(GL_NOTEQUAL.gluint, 1, 1)
glStencilMask(1)
glDrawArrays(GL_POINTS.gluint, 1, 1)

glDisable(GL_STENCIL_TEST.gluint)
And the stencil buffer works; however, each point fills a full rectangle in the stencil buffer, while the texture image has transparency. So maybe the texture is used in the wrong way?
The texture is loaded like this:
glGenTextures(1, &gl_id)
glBindTexture(GL_TEXTURE_2D.gluint, gl_id)
glTexParameteri(GL_TEXTURE_2D.gluint, GL_TEXTURE_MIN_FILTER.gluint, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D.gluint, 0, GL_RGBA, gl_width.int32, gl_height.int32, 0, GL_RGBA.gluint, GL_UNSIGNED_BYTE.gluint, gl_data)
Blending is set as:
glEnable(GL_BLEND.gluint)
glBlendFunc(GL_ONE.gluint, GL_ONE_MINUS_SRC_ALPHA.gluint)
Could you please advise where to look in order to fill 1s in the stencil buffer for exactly the non-transparent area of the brush image?
I recommend discarding the transparent parts of the texture in the fragment shader. A fragment can be completely skipped in the fragment shader with the discard keyword.
See Fragment Shader - Special operations.
Use a small threshold and discard a fragment if the alpha channel of the texture color is below the threshold:
vec4 texture_color = .....;
float threshold = 0.01;
if ( texture_color.a < threshold )
    discard;
Another possibility would be to use an alpha test. This is only available in the OpenGL compatibility profile, not in the core profile or OpenGL ES.
See Khronos OpenGL-Refpages glAlphaFunc:
The alpha test discards fragments depending on the outcome of a comparison between an incoming fragment's alpha value and a constant reference value.
With the following alpha test, the fragments whose alpha channel is below the threshold are discarded:
float threshold = 0.01f;
glAlphaFunc(GL_GEQUAL, threshold);
glEnable(GL_ALPHA_TEST);

sRGB colorPixelFormat in Metal on macOS gives strange results

Background: I'm trying to solve this issue in iTerm2: https://gitlab.com/gnachman/iterm2/issues/6703#note_71933498
iTerm2 uses the default pixel format of MTLPixelFormatBGRA8Unorm for all its textures, including the MTKView's drawable. On my machine, if my fragment function returns a color C, then the Digital Color Meter app in "Display in sRGB" mode will show that the onscreen color is C. I don't know why this works! It should convert the raw color values from my fragment function to sRGB, causing them to change, right? For example, here's what I see when my fragment function returns (0, 0.16, 0.20, 1) for the window's background color:
That's weird, but I could live with it until someone complained that it didn't work on their machine (see the issue linked above). For them, the value my fragment function returns goes through the Generic -> sRGB conversion and looks wrong. It's a hackintosh, so it could be a driver bug or something, but it prompted me to look into this more.
If I change the pixel format everywhere to MTLPixelFormatBGRA8Unorm_sRGB then I get colors I can't explain:
The fragment function remains unchanged, returning (0, 0.16, 0.2, 1). It's rendering to a texture I created with the MTLPixelFormatBGRA8Unorm_sRGB pixel format (not the drawable, if it matters, although the drawable has the same pixel format) and I can see in the GPU debugger that this texture is assigned the too-bright values you see above. In this screenshot the digital color meter is showing the color of the "Intermediate Texture" thumbnail:
Here's the fragment function (modified for debugging purposes):
fragment float4
iTermBackgroundColorFragmentShader(iTermBackgroundColorVertexFunctionOutput in [[stage_in]]) {
    return float4(0, 0.16, 0.20, 1);
}
I also tried hacking Apple's sample app (MetalTexturedMesh) to use the sRGB pixel format and return the same color from its fragment shader, and I got the same result. Here is the change I made to it. I can only conclude that I don't understand how Metal defines pixel formats, and I can't find any reasonable interpretation that would give the behavior that I see.
The raw bytes are 7C 6E 00 FF in BGRA. That's certainly not what digital color meter reported, but it's also super far from what I expect.
That is what the color meter reported. That byte sequence corresponds to (0, 0.43, 0.49).
I would have expected 33 29 00 ff. Since the fragment function is hardcoded to return float4(0, 0.16, 0.20, 1); and the pixel format is sRGB, I'd expect 0.2 in the blue channel to produce 0.2*255 = 51 (decimal) = 0x33, not 0x7C.
Your expectation is incorrect. Within shaders, colors are always linear, never sRGB. If a texture is sRGB, then reading from it converts from sRGB to linear and writing to it converts from linear to sRGB.
Given the linear (0, 0.16, 0.2) in your shader, the conversion to sRGB described in section 7.7.7 of the Metal Shading Language spec (PDF) would produce (0, 0.44, 0.48) in floating point and (0, 111, 124), a.k.a. (0, 0x6F, 0x7C), for RGBA8Unorm. That's very close to what you're getting. So, the result seems correct for sRGB.
The conversion algorithm from linear to sRGB as given in the spec is:
if (isnan(c)) c = 0.0;
if (c > 1.0)
    c = 1.0;
else if (c < 0.0)
    c = 0.0;
else if (c < 0.0031308)
    c = 12.92 * c;
else
    c = 1.055 * powr(c, 1.0/2.4) - 0.055;
The conversion algorithm from sRGB to linear is:
if (c <= 0.04045)
    result = c / 12.92;
else
    result = powr((c + 0.055) / 1.055, 2.4);
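For a quick sanity check of the bytes quoted above, here is a small TypeScript sketch of the same two conversions (names are illustrative):

// Linear -> sRGB, as in the spec excerpt above
function linearToSrgb(c: number): number {
  if (Number.isNaN(c)) c = 0.0;
  c = Math.min(Math.max(c, 0.0), 1.0);
  return c < 0.0031308 ? 12.92 * c : 1.055 * Math.pow(c, 1.0 / 2.4) - 0.055;
}

// sRGB -> linear
function srgbToLinear(c: number): number {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Linear (0, 0.16, 0.20) written to an sRGB texture:
const bytes = [0, 0.16, 0.2].map((c) => Math.round(linearToSrgb(c) * 255));
console.log(bytes); // [0, 111, 124] = (0x00, 0x6F, 0x7C), within rounding of the observed 7C 6E 00 FF (BGRA)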
I can also see that it's clearly much brighter than the same color drawn without Metal (e.g., in -drawRect:).
How did you construct the color to draw for that case? Note that the generic (a.k.a. calibrated) RGB color space is not linear. It has a gamma of ~1.8. Core Graphics has kCGColorSpaceGenericRGBLinear which is linear.
The mystery is why you see a much darker color when you use a non-sRGB texture. The hard-coded color in your shader should always represent the same color. The pixel format of the texture shouldn't affect how it ultimately shows up (within rounding error). That is, it's converted from linear to the texture's pixel format and, on display, from the texture's pixel format to the display color profile. That's transitive, so the result should be the same as going directly from linear to the display color profile.
Are you sure you aren't pulling the bytes from that texture and then interpreting them as though they were sRGB? Or maybe using -newTextureViewWithPixelFormat:...?

Calculate source RGBA given two samples composited over black and white backgrounds

Explanation
I have a semi-transparent color of unknown value.
I have a sample of this unknown color composited over a black background and another sample over a white background.
How do I find the RGBA value of the unknown color?
Example
Note: RGB values of composites are calculated using formulas from the Wikipedia article on alpha compositing
Composite over black:
rgb(103.5, 32.25, 169.5)
Composite over white:
rgb(167.25, 96, 233.25)
The calculated value of the unknown color will be:
rgba(138, 43, 226, 0.75)
What I've Read
Manually alpha blending an RGBA pixel with an RGB pixel
Calculate source RGBA value from overlay
It took some experimentation, but I think I figured it out.
Subtracting a color component of the black composite from the same component of the white composite, and dividing by 255, gives you the complement of the original color's alpha value, e.g.:
A_original = 1 - ((R_white_composite - R_black_composite) / 255) // as a fraction, 0.0 to 1.0
It should yield the same value whether you use the R, G, or B component. Now that you have the original alpha, finding the new components is as easy as:
R_original = R_black_composite / A_original
G_original = G_black_composite / A_original
B_original = B_black_composite / A_original
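A small TypeScript sketch of this recovery, using the example values above (the function name is illustrative):

// Recover the original RGBA from composites over black and over white
function recoverRgba(black: number[], white: number[]): number[] {
  // Any channel works for recovering alpha; R is used here
  const a = 1 - (white[0] - black[0]) / 255;
  return [black[0] / a, black[1] / a, black[2] / a, a];
}

console.log(recoverRgba([103.5, 32.25, 169.5], [167.25, 96, 233.25]));
// -> [138, 43, 226, 0.75]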

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using the cross product, I find the x and z axes and append 4 new vertices to the tri-stream, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour by the given colour variable.
The quad is generated and displays correctly on screen. However:
The problem I am facing is that the mapping of the UV coordinates fails to align with the image in the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image in the back buffer perfectly, unless the quad is facing the camera. The closer the quad's normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the UVs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen-space UV coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.
