OpenGL ES Texture Masking

I want to implement a color-splash effect in OpenGL ES, so I searched the web and found this guide: http://www.idevgames.com/forums/thread-899.html
Now I'm stuck at the third step, at draw time: I don't know how to set up the multi-texturing the guide describes. Can you give me some suggestions or some sample code?

To do multi-texturing, you need to put different textures into the various texture units, and set texture coordinates for them. Something like this:
// Put a texture into texture unit 0
glActiveTexture (GL_TEXTURE0);
glBindTexture (GL_TEXTURE_RECTANGLE_EXT, texID0);
...
// Put a texture into texture unit 1
glActiveTexture (GL_TEXTURE1);
glBindTexture (GL_TEXTURE_RECTANGLE_EXT, texID1);
...
// Now draw our textured quad - you could also use VBOs
glBegin (GL_QUADS);
// Set up the texture coordinates for each texture unit for the first vertex
glMultiTexCoord2f (GL_TEXTURE0, x0, y0);
glMultiTexCoord2f (GL_TEXTURE1, x1, y1);
// Define the first vertex's location
glVertex2f (x, y);
... // Do the other 3 vertices
glEnd();
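Note that the snippet above is desktop immediate-mode OpenGL; OpenGL ES has no glBegin/glEnd and no GL_TEXTURE_RECTANGLE_EXT. A minimal sketch of the same two-unit setup for OpenGL ES 1.1, assuming texID0 and texID1 already hold loaded GL_TEXTURE_2D textures, with example coordinate data:
// Quad corners and one set of texture coordinates per unit (example data)
static const GLfloat verts[] = { -1,-1,  1,-1,  1,1,  -1,1 };
static const GLfloat tex0[]  = {  0, 0,  1, 0,  1,1,   0,1 };
static const GLfloat tex1[]  = {  0, 0,  1, 0,  1,1,   0,1 };
// Bind a texture to each unit, as above
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID0);
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texID1);
// ES 1.1 uses vertex arrays instead of glBegin/glEnd; each unit gets its
// own coordinate array, selected with glClientActiveTexture
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_FLOAT, 0, verts);
glClientActiveTexture(GL_TEXTURE0);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, tex0);
glClientActiveTexture(GL_TEXTURE1);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, tex1);
// ES has no GL_QUADS, so draw the quad as a triangle fan
glDrawArrays(GL_TRIANGLE_FAN, 0, 4);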

Related

OpenGL transparency in texture when rendering with stencil buffer

The question has been updated thanks to the comments.
[Screenshot of how textures overlap]
To draw two points with a brush texture, using the stencil buffer so that the textures' transparent areas do not overlap, the following code is used:
glEnable(GL_STENCIL_TEST.gluint)
glClear(GL_STENCIL_BUFFER_BIT.gluint | GL_DEPTH_BUFFER_BIT.gluint)
// Write 1 into the stencil buffer wherever the first point renders
glStencilOp(GL_KEEP.gluint, GL_KEEP.gluint, GL_REPLACE.gluint)
glStencilFunc(GL_ALWAYS.gluint, 1, 1)
glStencilMask(1)
glDrawArrays(GL_POINTS.gluint, 0, 1)
// Draw the second point only where the stencil buffer is still 0
glStencilFunc(GL_NOTEQUAL.gluint, 1, 1)
glStencilMask(1)
glDrawArrays(GL_POINTS.gluint, 1, 1)
glDisable(GL_STENCIL_TEST.gluint)
And the stencil buffer works; however, each point fills a full rectangle in the stencil buffer, even though the texture image has transparency. So maybe the texture is used in the wrong way?
The texture is loaded like this:
glGenTextures(1, &gl_id)
glBindTexture(GL_TEXTURE_2D.gluint, gl_id)
glTexParameteri(GL_TEXTURE_2D.gluint, GL_TEXTURE_MIN_FILTER.gluint, GL_LINEAR)
glTexImage2D(GL_TEXTURE_2D.gluint, 0, GL_RGBA, gl_width.int32, gl_height.int32, 0, GL_RGBA.gluint, GL_UNSIGNED_BYTE.gluint, gl_data)
Blending is set as:
glEnable(GL_BLEND.gluint)
glBlendFunc(GL_ONE.gluint, GL_ONE_MINUS_SRC_ALPHA.gluint)
Could you please advise where to look in order to fill 1s in the stencil buffer for exactly the non-transparent area of the brush image?
I recommend discarding the transparent parts of the texture in the fragment shader. A fragment can be skipped entirely in the fragment shader with the discard keyword; a discarded fragment writes neither the color buffer nor the stencil buffer.
See Fragment Shader - Special operations.
Use a small threshold and discard a fragment if the alpha channel of the texture color is below the threshold:
vec4 texture_color = .....;
float threshold = 0.01;
if (texture_color.a < threshold)
    discard;
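Put together, a minimal GLSL ES fragment shader might look like this; the uniform and varying names are assumptions, not taken from the question's code:
precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main()
{
    vec4 texture_color = texture2D(u_texture, v_texCoord);
    float threshold = 0.01;
    // a discarded fragment never reaches the framebuffer, so the
    // transparent parts of the brush leave the stencil buffer untouched
    if (texture_color.a < threshold)
        discard;
    gl_FragColor = texture_color;
}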
Another possibility would be to use an alpha test. This is only available in the OpenGL compatibility profile, not in the core profile or in OpenGL ES.
See Khronos OpenGL-Refpages glAlphaFunc:
The alpha test discards fragments depending on the outcome of a comparison between an incoming fragment's alpha value and a constant reference value.
With the following alpha test, fragments whose alpha channel is below the threshold are discarded:
float threshold = 0.01f;
glAlphaFunc(GL_GEQUAL, threshold);
glEnable(GL_ALPHA_TEST);

OpenGL reverse image texturing logic

I want to project an image into a cylindrical panorama. But first I need to get the pixel (or the color of the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and finally draw the pixel.
Using this method I'd be able to change the shape of the image from a polygon into whatever I want.
But I cannot find anything about this approach (get the pixel first, then do the math, then get the new position for the pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your geometry shader; it's a simple ray intersection test, with attributes therefore being only vertex location and vertex normal, and texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if the ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
    (planarDistanceToPerimeter / planarLengthOfNormal) * normal;

// get intersection as if the ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
    (linearDistanceToEdge / linearLengthOfNormal) * normal;

// pick whichever of those intersections is nearer
vec3 cylindricalIntersection = mix(circularIntersection,
                                   endIntersection,
                                   step(linearDistanceToEdge,
                                        planarDistanceToPerimeter));

// ... map the cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
    coordinateFromCylindricalPosition(cylindricalIntersection);
A common implementation of coordinateFromCylindricalPosition simply converts the angle around the cylinder's axis and the height along it into texture coordinates, e.g. return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
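Spelled out as a function, a minimal GLSL sketch might look like the following; the +0.5 shifts, which move both coordinates into the [0, 1] range, are an assumption about the desired wrap, not part of the original one-liner:
// map a point on the unit cylinder (z in [-1, 1]) to texture coordinates
vec2 coordinateFromCylindricalPosition(vec3 p)
{
    // atan(y, x) / 2*pi lies in [-0.5, 0.5]; shift it into [0, 1]
    float u = atan(p.y, p.x) / 6.28318530717959 + 0.5;
    // likewise map z from [-1, 1] into [0, 1]
    float v = p.z * 0.5 + 0.5;
    return vec2(u, v);
}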

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a pixel shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append four new vertices to the triangle stream, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour by the colour variable that was set.
The quad is generated and displays correctly on screen. However:
The problem I am facing is that the mapping of the UV coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly unless the quad is facing the camera. The closer the quad's normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the UV coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the UVs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen-space UV coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.
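For what it's worth, the first formula is evaluated once per vertex, so the divided result is interpolated affinely across the quad, which would produce exactly the skew described above. A common pattern, sketched here with assumed struct and field names rather than code from this question, is to pass the undivided clip-space position through to the pixel shader and do the divide per pixel:
struct GS_OUT
{
    float4 position : SV_Position;
    float4 clipPos  : TEXCOORD0; // copy of position, left undivided
};
// in the pixel shader: divide after perspective-correct interpolation,
// then map NDC [-1, 1] to UV [0, 1]
float2 ScreenUV(float4 clipPos)
{
    float2 ndc = clipPos.xy / clipPos.w;
    return float2(ndc.x * 0.5f + 0.5f, -ndc.y * 0.5f + 0.5f);
}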

Direct3D9 Calculating view space point light position

I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer, containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage, and it works great. Now I want to render a point light, which is a bit harder.
I need the point light's position in view space for the shader, because I only have depth in the g-buffer and I can't afford a matrix multiplication for every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't represent the view-space position at all. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y axis is inverted. Only if the camera is at (0,0,0) does the light stand at (0,0), the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView; // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
    &D3DXVECTOR3 (x,y,z), // the camera position
    &D3DXVECTOR3 (xt,yt,zt), // the look-at position
    &D3DXVECTOR3 (0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
    fov, // the horizontal field of view
    asp, // aspect ratio
    znear, // the near view-plane
    zfar); // the far view-plane
D3DXMATRIX vysl = matView * matProjection;
eff->SetMatrix("worldViewProj", &vysl); // vertices are transformed OK in the shader
// render the g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight", &lpos); // but lpos is already a mess at this point
// render the fullscreen quad with wrong lighting
Not that the shader code is all that relevant, but this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried transforming the light by matView only, without the projection, but the problem is still the same. If I transform the light in the shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are the vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1..1 pixel coordinate, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
                        float viewSpaceZ)
{
    // undo the projection's x/y scaling to get a ray through this pixel
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z = viewSpaceZ;
    // scale the ray by the view-space depth to recover x and y
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
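As a usage sketch in the point-light pass, with the depth sampler name and the texture-coordinate convention being assumptions:
// reconstruct the view-space position of the current pixel
float viewSpaceZ = tex2D(Sdepth, IN.texture0).r;   // assumed depth sampler
float2 positionScreen = IN.texture0 * 2 - 1;       // to -1..1; y may need flipping
float3 pixelViewPos = ViewPosFromDepth(positionScreen, viewSpaceZ);
// compare against the light position transformed by matView only
float dist = length(pixelViewPos - lightViewPos);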
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector, but not yet divided by w). But in your shader you use its xy components without respecting w. This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for the vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides xyz by the interpolated w.
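A minimal CPU-side version of the divide-by-w fix, reusing the variables from the question:
D3DXVECTOR4 lpos;
D3DXVECTOR3 lpos2(0, 0, 0);
D3DXVec3Transform(&lpos, &lpos2, &vysl); // homogeneous clip-space result
lpos /= lpos.w;                          // perspective divide -> x,y now in (-1, 1)
eff->SetVector("poslight", &lpos);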

What is the correct order for a series of OpenGL ES translations and rotations to output things?

I know the camera is at 0,0,0 and I need to rotate the world around it, but I'm getting confused as to what order to do translations and rotations.
If I have a theoretical x,y,z coordinate system where the camera is at cx,cy,cz oriented to cox,coy,coz, and I have a cube which sits at bx,by,bz oriented to box,boy,boz, then what series of glTranslatef and glRotatef calls is required to rotate the box correctly and place it at the correct location relative to the camera?
Here are the basic operations, but I have no idea what order to put them in and what other operations are required to make it show up as expected.
gl.glLoadIdentity();
// rotation and translation for cube
gl.glRotatef(box, 1,0,0);
gl.glRotatef(boy, 0,1,0);
gl.glRotatef(boz, 0,0,1);
gl.glTranslatef(bx,by,bz);
// rotation and translation for camera
gl.glRotatef(cox, 1,0,0);
gl.glRotatef(coy, 0,1,0);
gl.glRotatef(coz, 0,0,1);
gl.glTranslatef(cx,cy,cz);
// draw the cube
cube.draw(gl);
Do it the other way around: camera transform first, then your object(s). The camera block is the inverse of placing the camera in the world, which is why its arguments are negated and its rotations are issued before its translation:
gl.glLoadIdentity();
// rotation and translation for camera
gl.glRotatef(-cox, 1,0,0);
gl.glRotatef(-coy, 0,1,0);
gl.glRotatef(-coz, 0,0,1);
gl.glTranslatef(-cx,-cy,-cz);
// translation and rotation for cube: translate first so the cube is
// oriented about its own center at bx,by,bz rather than orbiting the origin
gl.glTranslatef(bx,by,bz);
gl.glRotatef(box, 1,0,0);
gl.glRotatef(boy, 0,1,0);
gl.glRotatef(boz, 0,0,1);
// draw the cube
cube.draw(gl);
