I have a 3D model with an RGBA texture where the alpha value should be ignored (I must use those alpha values differently for some reflection effects).
The PNG spec says that colors are not premultiplied by alpha.
So, in a shader, we should be able to ignore the alpha values and display only the RGB colors.
I have tried many things, including disabling premultipliedAlpha on the WebGLRenderer, Texture, and Material, but the RGB values are 0 wherever alpha is 0.
In the example below, the models should have skin-colored hands and necks, but they appear white.
In the center, the texture is loaded natively by the FBXLoader.
On the right, the texture is loaded by the TextureLoader.
On the left, another texture without alpha is loaded by the TextureLoader for demonstration purposes.
For the ShaderMaterials, the fragment shader is:
uniform sampler2D map;
varying vec2 vUv;

void main() {
  gl_FragColor = vec4(texture2D(map, vUv).rgb, 1.0);
}
Codesandbox
What am I doing wrong? Is this a WebGL or three.js limitation?
PS: using two textures is not an option.
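For reference, the setup being tested looks roughly like this (a minimal sketch, not the exact sandbox code; the texture URL is a placeholder):

import * as THREE from 'three';

// Pass-through vertex shader plus the fragment shader shown above.
const vertexShader = `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;
const fragmentShader = `
  uniform sampler2D map;
  varying vec2 vUv;
  void main() {
    gl_FragColor = vec4(texture2D(map, vUv).rgb, 1.0); // alpha forced to 1
  }
`;

// Everything premultiplied-alpha related is disabled.
const renderer = new THREE.WebGLRenderer({ premultipliedAlpha: false });

const texture = new THREE.TextureLoader().load('diffuse-with-alpha.png'); // placeholder
texture.premultiplyAlpha = false;

const material = new THREE.ShaderMaterial({
  uniforms: { map: { value: texture } },
  vertexShader,
  fragmentShader
});
material.premultipliedAlpha = false;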
Is it possible to somehow render to the depth buffer from a pre-rendered texture?
I am pre-rendering scenes like the original Resident Evil games, and I would like to apply both the pre-rendered depth and color textures to the screen.
I previously used the technique of building a simpler proxy scene for depth, but I am wondering if there is a way to use a precise pre-rendered depth texture instead.
three.js provides a DepthTexture class which can be used to save the depth of a rendered scene into a texture. Typical use cases for such a texture are post-processing effects like depth of field or SSAO.
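Setting one up looks roughly like this (a sketch along the lines of the official example, assuming an existing renderer, scene, and camera):

// Render target whose depth attachment is readable as a texture.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
target.texture.minFilter = THREE.NearestFilter;
target.texture.magFilter = THREE.NearestFilter;
target.depthTexture = new THREE.DepthTexture();
target.depthTexture.format = THREE.DepthFormat;
target.depthTexture.type = THREE.UnsignedShortType;

// Render the scene into the target; target.depthTexture then holds the depth.
renderer.setRenderTarget(target);
renderer.render(scene, camera);
renderer.setRenderTarget(null);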
If you bind a depth texture to a shader, you can sample it like any other texture. However, the sampled depth value often has to be converted to a different representation for further processing. For instance, you could compute the viewZ value (the z-distance between the rendered fragment and the camera) or convert between perspective and orthographic depth and vice versa. three.js provides helper functions for such tasks.
The official depth texture example uses these helper functions in order to visualize the scene's depth texture. The important function is:
#include <packing> // three.js chunk providing the depth conversion helpers

uniform float cameraNear;
uniform float cameraFar;

float readDepth( sampler2D depthSampler, vec2 coord ) {
  float fragCoordZ = texture2D( depthSampler, coord ).x;
  float viewZ = perspectiveDepthToViewZ( fragCoordZ, cameraNear, cameraFar );
  return viewZToOrthographicDepth( viewZ, cameraNear, cameraFar );
}
In the example, the resulting depth value is used to compute the final color of the fragment.
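As a sketch, wiring the depth texture and the camera parameters those helpers need into a post-processing material could look like this (uniform names follow the official example; the shader sources are placeholders):

const postMaterial = new THREE.ShaderMaterial({
  uniforms: {
    tDiffuse:   { value: target.texture },      // pre-rendered color
    tDepth:     { value: target.depthTexture }, // pre-rendered depth
    cameraNear: { value: camera.near },
    cameraFar:  { value: camera.far }
  },
  vertexShader: postVertexShader,    // placeholder: pass-through, supplies vUv
  fragmentShader: postFragmentShader // placeholder: contains readDepth() from above
});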
Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a pixel shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append the tri-stream with 4 new verts in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (the tint effect) simply multiplies the scene texture colour by the colour variable that is set.
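Schematically, the geometry shader stage looks like this (a sketch, not the original code: the struct layouts, the up vector, and the constant buffer are assumptions, and the uv computation is the first formula from the question):

cbuffer PerFrame { float4x4 gViewProj; };

struct GS_INPUT { float3 posW : POSITION; float3 normal : NORMAL; float size : TEXCOORD0; };
struct PS_INPUT { float4 posH : SV_POSITION; float2 uv : TEXCOORD0; };

[maxvertexcount(4)]
void GS(point GS_INPUT input[1], inout TriangleStream<PS_INPUT> stream)
{
    // Build the quad's axes from its normal via cross products.
    float3 n = normalize(input[0].normal);
    float3 xAxis = normalize(cross(float3(0.0f, 1.0f, 0.0f), n));
    float3 zAxis = normalize(cross(n, xAxis));
    float h = input[0].size * 0.5f;

    // Four corners in each diagonal direction, emitted as a tri-strip.
    float3 corners[4] = {
        input[0].posW - xAxis * h - zAxis * h,
        input[0].posW + xAxis * h - zAxis * h,
        input[0].posW - xAxis * h + zAxis * h,
        input[0].posW + xAxis * h + zAxis * h
    };

    [unroll]
    for (int i = 0; i < 4; ++i)
    {
        PS_INPUT v;
        v.posH = mul(float4(corners[i], 1.0f), gViewProj);

        // The uv computation under discussion (per-vertex divide by w).
        float2 ndc = v.posH.xy / v.posH.w;
        v.uv = float2(ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f);

        stream.Append(v);
    }
}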
The quad generates and displays correctly on screen. However:
The problem I am facing is that the mapping of the uv coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly unless the quad is facing the camera. The closer the quad's normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the uvs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.
I'm hoping this is possible to do without using framebuffers or shaders, just by straight-up using glBlendFunc or glBlendFuncSeparate.
I'm rendering my scene normally with my standard blend mode:
glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE_MINUS_SRC_ALPHA);
On top of that scene, I want to draw a texture which is masked by another texture. These are drawn at arbitrary positions (not necessarily rectangular, and not necessarily the same size or position as each other).
The order is: render the masked texture, then the mask texture.
The masked texture is a regular image, with alpha.
The mask texture is either black, RGBA(0, 0, 0, 255), or transparent, RGBA(0, 0, 0, 0).
I want anything that the black does NOT touch to be invisible. Basically, the final result should be:
RGBA(masked.r, masked.g, masked.b, masked.a * mask.a)
Below are images of the ordering, to explain what I mean. I'm really looking for a solution that avoids using a different shader or rendering into a framebuffer. If it absolutely isn't possible, please let me know.
I'll explain why this isn't possible. Masking with blending requires three passes, because it has three parts: the destination, the source, and the mask. No matter what you do, you must blend the source and the mask into a framebuffer and THEN render to the destination.
The stencil buffer, however, is built into the default window framebuffer, provided you tell OpenGL to allocate it (as you would a depth buffer), and it does exactly what you want. With GLUT, initializing the stencil buffer along with the alpha-enabled color and depth buffers in a double-buffered window looks like this:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_ALPHA | GLUT_DEPTH | GLUT_STENCIL);
The stencil buffer is able to do exactly what you need: you can draw a shape into it, and then selectively tell it to either discard or keep pixels inside that shape. Here's an example of how to use it, modified from the OpenGL Red Book:
GLdouble dRadius = 0.1; // Initial radius of spiral
GLdouble dAngle; // Looping variable
// Use 0 for clear stencil, enable stencil test
glClearStencil(0);
glEnable(GL_STENCIL_TEST);
// Clear stencil buffer
glClear(GL_STENCIL_BUFFER_BIT);
// All drawing commands fail the stencil test, and are not
// drawn, but increment the value in the stencil buffer.
// glStencilFunc takes three arguments: the stencil function, the
// reference value, and the mask value. Whenever the stencil function
// is tested (for example GL_LESS), both the reference value and the
// stencil value read from the buffer are bitwise ANDed with the mask:
// GL_LESS passes if (ref & mask) < (stencil & mask).
glStencilFunc(GL_NEVER, 0x0, 0x0);
// glStencilOp takes three arguments: the operation to perform when the
// stencil test fails, the operation when the stencil test passes but the
// depth test fails, and the operation when the stencil test AND the depth
// test pass (or the depth test is disabled or no depth buffer is allocated).
glStencilOp(GL_INCR, GL_INCR, GL_INCR);
// Spiral pattern will create stencil pattern
// Draw the spiral pattern with white lines. We
// make the lines white to demonstrate that the
// stencil function prevents them from being drawn
glColor3f(1.0f, 1.0f, 1.0f);
glBegin(GL_LINE_STRIP);
for(dAngle = 0; dAngle < 400.0; dAngle += 0.1)
{
glVertex2d(dRadius * cos(dAngle), dRadius * sin(dAngle));
dRadius *= 1.002;
}
glEnd();
// Now, allow drawing, except where the stencil pattern is 0x1
// and do not make any further changes to the stencil buffer
glStencilFunc(GL_NOTEQUAL, 0x1, 0x1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
// Now draw the red square
glColor3f(1.0f, 0.0f, 0.0f);
glRectf(0, 0, 200, 200);
The output of this drawing code is a red square with a spiral across it, starting near the origin. The spiral is made up of discarded pixels, and as such will be the same color as the cleared color buffer. To adapt this code for your purposes, you would draw the shape of your mask where the spiral code is written, then use GL_EQUAL as the stencil function and draw your masked texture where the red square is drawn. More information on the stencil buffer can be found here.
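Applied to the masking problem above, the two passes could look like this (a sketch: the quad-drawing calls are placeholders, and the alpha test is one way to restrict the stencil writes to the mask's opaque texels):

// Pass 1: write 1 into the stencil wherever the mask is opaque,
// without touching the color buffer.
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f); // only the black (alpha = 255) texels pass
glStencilFunc(GL_ALWAYS, 0x1, 0x1);
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE);
drawMaskQuad(); // placeholder: draws the mask texture
glDisable(GL_ALPHA_TEST);

// Pass 2: draw the masked texture only where the stencil is 1.
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 0x1, 0x1);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawMaskedTextureQuad(); // placeholder: draws the masked image
glDisable(GL_STENCIL_TEST);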
I want to implement an effect like the color-splash effect in OpenGL ES, so I searched the web and found this guide: http://www.idevgames.com/forums/thread-899.html
Now I am stuck at the third step, at draw time: I don't know how to set up the multi-texturing the guide describes. Can you help me with some suggestions or sample code?
To do multi-texturing, you need to put different textures into the various texture units, and set texture coordinates for them. Something like this:
// Put a texture into texture unit 0
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_RECTANGLE_EXT, texID0);
...
// Put a texture into texture unit 1
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_RECTANGLE_EXT, texID1);
...
// Now draw our textured quad - you could also use VBOs
glBegin (GL_QUADS);
// Set up the texture coordinates for each texture unit for the first vertex
glMultiTexCoord2f (GL_TEXTURE0, x0, y0);
glMultiTexCoord2f (GL_TEXTURE1, x1, y1);
// Define the first vertex's location
glVertex2f (x, y);
... // Do the other 3 vertices
glEnd();
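With the fixed-function pipeline you also need to tell each texture unit how to combine; one possible sketch (GL_DECAL lays the unit 1 texture over the unit 0 result, weighted by unit 1's alpha):

// Unit 0: output its texture color unchanged.
glActiveTexture (GL_TEXTURE0);
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
// Unit 1: blend its texture over unit 0 using its alpha channel.
glActiveTexture (GL_TEXTURE1);
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_DECAL);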