In which coordinate space should the vertex be passed to the fragment shader? - opengl-es

I am clipping in the fragment shader (setting the transparency to 0 or 1) based on the cut-off position (v_cutPos) and the current vertex position (v_currPos) that I receive from the vertex shader. Both positions are passed in world coordinates.
The cut-off logic itself works fine, but the cut is not smooth (it has to follow a certain shape). When I pass the same positions after converting them to clip space, the cut is much smoother (finer).
Is there an explanation for this?
//fragment shader
precision highp float;

varying mediump vec4 v_color;
varying vec4 v_currPos;
varying vec4 v_cutPos;

/* returns 0 if pt is inside box otherwise 1 */
float insideCutArea(vec2 pt, vec2 cutPos)
{
    return float(pt.y > cutPos.y);
}

void main(void)
{
    float transparency = insideCutArea(v_currPos.xy, v_cutPos.xy);
    gl_FragColor = vec4(v_color.xyz, v_color.w * transparency);
}
//vertex shader
varying mediump vec4 v_color;
uniform vec3 cutPos;
varying vec4 v_currPos;
varying vec4 v_cutPos;

void main(void)
{
    // -------------------
    // other transformations
    // -------------------
    v_cutPos = myPMVMatrix * vec4(cutPos, 1.0);       // cut is not fine when not multiplying with the matrix
    gl_Position = myPMVMatrix * vec4(validVertex, 1.0);
    v_currPos = myPMVMatrix * vec4(validVertex, 1.0); // cut is not fine when not multiplying with the matrix
    v_color = color;
}
PS: This question was previously closed for lack of clarity. I have recreated it with code explaining what I have done.

Related

Is this GLSL program correct? My cubes are solid black

My Phong fragment shader is not shading anything; it just renders everything black.
This is my fragment shader:
precision mediump float;

varying vec3 vposition;
varying vec3 vnormal;
varying vec4 vcolor;
varying vec3 veyePos;

void main() {
    vec3 lightPos = vec3(0, 0, 0);
    vec4 s = normalize(vec4(lightPos, 1) - vec4(veyePos, 1));
    vec4 r = reflect(-s, vec4(vnormal, 1));
    vec4 v = normalize(-vec4(veyePos, 1));
    float spec = max(dot(v, r), 0.0);
    float diff = max(dot(vec4(vnormal, 1), s), 0.0);
    vec3 diffColor = diff * vec3(1, 0, 0);
    vec3 specColor = pow(spec, 3.0) * vec3(1, 1, 1);
    vec3 ambientColor = vec3(0.1, 0.1, 0.1);
    gl_FragColor = vec4(diffColor + 0.5 * specColor + ambientColor, 1);
}
This is my vertex shader:
uniform mat4 uMVPMatrix;
uniform mat4 uMVMatrix;
uniform vec3 eyePos;

attribute vec4 aPosition;
attribute vec4 aColor;
attribute vec4 aNormal;

varying vec4 vcolor;
varying vec3 vposition;
varying vec3 vnormal;
varying vec3 veyePos;

void main() {
    mat4 normalMat = transpose(inverse(uMVMatrix));
    vec4 vertPos4 = uMVMatrix * vec4(vec3(aPosition), 1.0);
    vposition = vec3(vertPos4) / vertPos4.w;
    vcolor = aColor;
    veyePos = eyePos;
    vnormal = vec3(uMVMatrix * vec4(vec3(aNormal), 0.0));
    gl_Position = uMVPMatrix * aPosition;
}
uMVMatrix is the model-view matrix, and uMVPMatrix is the model-view-projection matrix.
First of all, your lighting equations are incorrect.
The vector s that you use to calculate the diffuse color should be a unit vector that originates at your vertex (vposition) and points towards your light, so it would be:
s = normalize(lightPos - vposition)
Also, lightPos should be given in camera space and not in world space (so you should multiply it by your MV matrix).
The vector r is the reflection of s around the normal, so I don't understand why you take -s; also, the normal there should be in non-homogeneous coordinates, so it would be:
r = reflect(s, vnormal)
Finally, v is the viewing ray (multiplied by -1), so it should be the vector that originates at vposition and goes towards eyePos:
v = normalize(veyePos - vposition)
Also, in your vertex shader, veyePos (assuming it is the position of your camera) should not be a varying (it should effectively be a flat value), because you don't want to interpolate it.
In your vertex shader you calculate normalMat, but you forgot to use it when calculating your normals in camera space.
normalMat should also be a mat3, because it is the inverse transpose of the upper-left 3x3 block of your MV matrix.
To be efficient, you should calculate normalMat on the CPU and pass it as a uniform to your vertex shader.
Hope this helps.
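Putting those points together, here is a minimal sketch of what corrected shaders could look like. It assumes the normal matrix is computed on the CPU and the light position is supplied already in camera space; uNormalMat and uLightPosEye are hypothetical uniform names, the color attribute is omitted for brevity, and reflect() is given -s because it expects the incident direction pointing towards the surface:
// Vertex shader (sketch)
uniform mat4 uMVPMatrix;
uniform mat4 uMVMatrix;
uniform mat3 uNormalMat;   // hypothetical: inverse transpose of mat3(uMVMatrix), computed on the CPU

attribute vec4 aPosition;
attribute vec4 aNormal;

varying vec3 vposition;
varying vec3 vnormal;

void main() {
    vec4 vertPos4 = uMVMatrix * vec4(aPosition.xyz, 1.0);
    vposition = vertPos4.xyz / vertPos4.w;         // camera-space position
    vnormal = normalize(uNormalMat * aNormal.xyz); // camera-space normal
    gl_Position = uMVPMatrix * aPosition;
}

// Fragment shader (sketch)
precision mediump float;

uniform vec3 uLightPosEye;   // hypothetical: light position already transformed into camera space

varying vec3 vposition;
varying vec3 vnormal;

void main() {
    vec3 n = normalize(vnormal);
    vec3 s = normalize(uLightPosEye - vposition);  // surface -> light
    vec3 v = normalize(-vposition);                // surface -> eye (the eye sits at the origin in camera space)
    vec3 r = reflect(-s, n);                       // reflected light direction
    float diff = max(dot(n, s), 0.0);
    float spec = pow(max(dot(v, r), 0.0), 3.0);
    gl_FragColor = vec4(diff * vec3(1.0, 0.0, 0.0) + 0.5 * spec * vec3(1.0) + vec3(0.1), 1.0);
}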

Threejs normal values in shader are set to 0

I'm trying to get this tutorial to work but I ran into two issues, one of which can be found here. The other one is the following.
For convenience this is the code that is supposed to work and here's a jsfiddle.
Vertex-shader:
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

attribute vec3 position;
uniform vec3 normal;

varying vec3 vNormal;

void main() {
    test = 0.5;
    vNormal = normal;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}
Fragment-shader:
varying mediump vec3 vNormal;

void main() {
    mediump vec3 light = vec3(0.5, 0.2, 1.0);

    // ensure it's normalized
    light = normalize(light);

    // calculate the dot product of
    // the light to the vertex normal
    mediump float dProd = max(0.0, dot(vNormal, light));

    // feed into our frag colour
    gl_FragColor = vec4(dProd, // R
                        dProd, // G
                        dProd, // B
                        1.0);  // A
}
The values for normal in the vertex shader or at least the values for vNormal in the fragment shader seem to be 0. The sphere that is supposed to show up stays black. As soon as I change the values for gl_FragColor manually the sphere changes colors. Can anybody tell me why this is not working?
In your vertex shader, normal should be declared as an attribute (since each vertex has its own normal), not a uniform:
attribute vec3 normal;
Here is the working version of your code.
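For reference, a minimal sketch of the vertex shader with that one-line change applied (the stray test assignment from the question is omitted):
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

attribute vec3 position;
attribute vec3 normal;   // per-vertex data, so an attribute rather than a uniform

varying vec3 vNormal;

void main() {
    vNormal = normal;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}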

How to use a 3x3 homography matrix in OpenGL ES shaders?

I have a 3x3 homography matrix that works correctly with OpenCV's warpPerspective, but I need to do the warping on the GPU for performance reasons. What is the best approach? I tried multiplying in the vertex shader to get the texture coordinates and then rendering a quad, but I get strange distortions. I'm not sure whether it's the interpolation not working as I expect. I'm attaching output for comparison (it involves two different, but close enough, shots).
Absolute difference of warp and other image from GPU:
Composite of warp and other image in OpenCV:
EDIT:
Here are my shaders; the task is image rectification (making epipolar lines become scanlines) plus absolute difference.
// Vertex Shader
static const char* warpVS = STRINGIFY
(
    uniform highp mat3 homography1;
    uniform highp mat3 homography2;
    uniform highp int width;
    uniform highp int height;

    attribute highp vec2 position;

    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;

    highp vec2 convertToTexture(highp vec3 pixelCoords) {
        pixelCoords /= pixelCoords.z; // need to project
        pixelCoords /= vec3(float(width), float(height), 1.0);
        pixelCoords.y = 1.0 - pixelCoords.y; // origin is in bottom left corner for textures
        return pixelCoords.xy;
    }

    void main(void)
    {
        gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
        gl_Position.y = -gl_Position.y;

        highp vec3 initialCoords = vec3(position, 1.0);
        refTexCoords = convertToTexture(homography1 * initialCoords);
        curTexCoords = convertToTexture(homography2 * initialCoords);
    }
);
// Fragment Shader
static const char* warpFS = STRINGIFY
(
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;

    uniform mediump sampler2D refTex;
    uniform mediump sampler2D curTex;
    uniform mediump sampler2D maskTex;

    void main(void)
    {
        if (texture2D(maskTex, refTexCoords).r == 0.0) {
            discard;
        }
        if (any(bvec4(curTexCoords[0] < 0.0, curTexCoords[1] < 0.0, curTexCoords[0] > 1.0, curTexCoords[1] > 1.0))) {
            discard;
        }

        mediump vec4 referenceColor = texture2D(refTex, refTexCoords);
        mediump vec4 currentColor = texture2D(curTex, curTexCoords);
        gl_FragColor = vec4(abs(referenceColor.r - currentColor.r), 1.0, 0.0, 1.0);
    }
);
I think you just need to do the projection per pixel. Make refTexCoords and curTexCoords at least vec3, then do the divide by z in the fragment shader before the texture lookup. Even better, use the textureProj GLSL function (texture2DProj in GLSL ES).
You want to do everything that is linear in the vertex shader, but things like projection need to be done in the fragment shader, per pixel.
This link might help with some background: http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/
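As a rough illustration of that suggestion, using the question's own uniforms and attributes (only the reference texture is shown; the second homography is handled the same way, and the mask/bounds checks are omitted), the perspective divide moves out of convertToTexture and into the fragment shader:
// Vertex shader (sketch): keep the homogeneous coordinate, no divide here.
uniform highp mat3 homography1;
uniform highp int width;
uniform highp int height;

attribute highp vec2 position;

varying highp vec3 refTexCoords;   // vec3: x, y and the projective component

void main(void)
{
    gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
    gl_Position.y = -gl_Position.y;
    refTexCoords = homography1 * vec3(position, 1.0);   // all three components interpolate linearly across the quad
}

// Fragment shader (sketch): do the projection per pixel.
uniform highp int width;
uniform highp int height;
uniform mediump sampler2D refTex;

varying highp vec3 refTexCoords;

void main(void)
{
    highp vec2 p = refTexCoords.xy / refTexCoords.z;           // per-pixel perspective divide
    p = vec2(p.x / float(width), 1.0 - p.y / float(height));   // pixels -> texture space, flip y
    gl_FragColor = texture2D(refTex, p);
}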

Compute normals in shader issue

I have the following vertex shader to rotate normals. Before I implemented it, I also passed the mesh's rotation matrix and used that to calculate the normals, and the lighting was fine back then.
#version 150

uniform mat4 projection;
uniform mat4 modelview;

in vec3 position;
in vec3 normal;
in vec2 texcoord;

out vec3 fposition;
out vec3 fnormal;
out vec2 ftexcoord;

void main()
{
    mat4 mvp = projection * modelview;
    fposition = vec3(mvp * vec4(position, 1.0));
    fnormal = normalize(mat3(transpose(inverse(modelview))) * normal);
    ftexcoord = texcoord;
    gl_Position = mvp * vec4(position, 1.0);
}
But with this shader, the lighting computed in the fragment shader turns with the camera. I haven't changed the fragment shader, so the issue should be in the code above.
What am I doing wrong in computing the normals?
The steps you use to create the normal matrix might be out of order.
Try:
fnormal = normalize(transpose(inverse(mat3(modelview))) * normal);
Edit:
Since you are inverting the full mat4, the translation values (which get truncated when a mat4 is converted to a mat3) may be affecting the calculation of the inverse matrix.

Simple GLSL convolution shader is atrociously slow

I'm trying to implement a 2D outline shader in OpenGL ES 2.0 for iOS. It is insanely slow, as in 5 fps slow. I've tracked it down to the texture2D() calls; however, without those, any convolution shader is undoable. I've tried using lowp instead of mediump, but with that everything is just black; it does gain another 5 fps, but it's still unusable.
Here is my fragment shader.
varying mediump vec4 colorVarying;
varying mediump vec2 texCoord;

uniform bool enableTexture;
uniform sampler2D texture;
uniform mediump float k;

void main() {
    const mediump float step_w = 3.0/128.0;
    const mediump float step_h = 3.0/128.0;
    const mediump vec4 b = vec4(0.0, 0.0, 0.0, 1.0);
    const mediump vec4 one = vec4(1.0, 1.0, 1.0, 1.0);

    mediump vec2 offset[9];
    mediump float kernel[9];

    offset[0] = vec2(-step_w, step_h);
    offset[1] = vec2(-step_w, 0.0);
    offset[2] = vec2(-step_w, -step_h);
    offset[3] = vec2(0.0, step_h);
    offset[4] = vec2(0.0, 0.0);
    offset[5] = vec2(0.0, -step_h);
    offset[6] = vec2(step_w, step_h);
    offset[7] = vec2(step_w, 0.0);
    offset[8] = vec2(step_w, -step_h);

    kernel[0] = kernel[2] = kernel[6] = kernel[8] = 1.0/k;
    kernel[1] = kernel[3] = kernel[5] = kernel[7] = 2.0/k;
    kernel[4] = -16.0/k;

    if (enableTexture) {
        mediump vec4 sum = vec4(0.0);
        for (int i = 0; i < 9; i++) {
            mediump vec4 tmp = texture2D(texture, texCoord + offset[i]);
            sum += tmp * kernel[i];
        }
        gl_FragColor = (sum * b) + ((one - sum) * texture2D(texture, texCoord));
    } else {
        gl_FragColor = colorVarying;
    }
}
This is unoptimized and not finalized, but I need to bring performance up before continuing. I've tried replacing the texture2D() call in the loop with just a solid vec4, and it runs with no problem despite everything else going on.
How can I optimize this? I know it's possible because I've seen far more involved 3D effects run without any trouble, so I can't see why this is causing a problem at all.
I've done this exact thing myself, and I see several things that could be optimized here.
First off, I'd remove the enableTexture conditional and instead split your shader into two programs, one for the true state of this and one for false. Conditionals are very expensive in iOS fragment shaders, particularly ones that have texture reads within them.
Second, you have nine dependent texture reads here. These are texture reads where the texture coordinates are calculated within the fragment shader. Dependent texture reads are very expensive on the PowerVR GPUs within iOS devices, because they prevent that hardware from optimizing texture reads using caching, etc. Because you are sampling from a fixed offset for the 8 surrounding pixels and one central one, these calculations should be moved up into the vertex shader. This also means that these calculations won't have to be performed for each pixel, just once for each vertex and then hardware interpolation will handle the rest.
Third, for() loops haven't been handled all that well by the iOS shader compiler to date, so I tend to avoid those where I can.
As I mentioned, I've done convolution shaders like this in my open source iOS GPUImage framework. For a generic convolution filter, I use the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

uniform highp float texelWidth;
uniform highp float texelHeight;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    gl_Position = position;

    vec2 widthStep = vec2(texelWidth, 0.0);
    vec2 heightStep = vec2(0.0, texelHeight);
    vec2 widthHeightStep = vec2(texelWidth, texelHeight);
    vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);

    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
    rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;
    topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
    topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
    topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
    bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
    bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
    bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and the following fragment shader:
precision highp float;

uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
    mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
    mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
    mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
    mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
    mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
    mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
    mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);

    mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
    resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
    resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];

    gl_FragColor = resultColor;
}
The texelWidth and texelHeight uniforms are the inverse of the width and height of the input image, and the convolutionMatrix uniform specifies the weights for the various samples in your convolution.
On an iPhone 4, this runs in 4-8 ms for a 640x480 frame of camera video, which is good enough for 60 FPS rendering at that image size. If you just need to do something like edge detection, you can simplify the above, convert the image to luminance in a pre-pass, then only sample from one color channel. That's even faster, at about 2 ms per frame on the same device.
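As a rough sketch of that single-channel simplification (assuming a luminance image is already bound to inputImageTexture, the same nine varyings from the vertex shader above, and a fixed Laplacian-style kernel in place of the convolutionMatrix uniform):
precision mediump float;

uniform sampler2D inputImageTexture;   // assumed: already converted to luminance in a pre-pass

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    // Sample a single channel instead of full vec4 colors.
    float center      = texture2D(inputImageTexture, textureCoordinate).r;
    float left        = texture2D(inputImageTexture, leftTextureCoordinate).r;
    float right       = texture2D(inputImageTexture, rightTextureCoordinate).r;
    float top         = texture2D(inputImageTexture, topTextureCoordinate).r;
    float topLeft     = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
    float topRight    = texture2D(inputImageTexture, topRightTextureCoordinate).r;
    float bottom      = texture2D(inputImageTexture, bottomTextureCoordinate).r;
    float bottomLeft  = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
    float bottomRight = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;

    // 8-neighbour Laplacian: large magnitude at edges, near zero in flat areas.
    float edge = 8.0 * center
               - (left + right + top + bottom
                  + topLeft + topRight + bottomLeft + bottomRight);

    gl_FragColor = vec4(vec3(abs(edge)), 1.0);
}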
The only way I know of to reduce the time taken in this shader is to reduce the number of texture fetches. Since your shader samples the texture at equally spaced points around the center pixel and combines them linearly, you can reduce the number of fetches by making use of the GL_LINEAR mode available for texture sampling.
Basically, instead of sampling at every texel, you sample in between a pair of texels to directly get a linearly weighted sum.
Call the samples at offsets (-step_w, -step_h) and (-step_w, 0) x0 and x1, respectively. Then your sum is
sum = x0*k0 + x1*k1
If you instead sample in between these two texels, at a distance of k1/(k0+k1) from x0 (and therefore k0/(k0+k1) from x1), the GPU performs the linear weighting during the fetch and gives you
y = x0*k0/(k0+k1) + x1*k1/(k0+k1)
Thus the sum can be calculated as
sum = y*(k0 + k1)
from just one fetch!
If you repeat this for the other adjacent pixels, you end up doing 4 texture fetches for the 8 neighbouring offsets, plus one extra texture fetch for the center pixel.
The link explains this much better
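A minimal sketch of that idea for one pair of taps, assuming the two taps land on adjacent texels (GL_LINEAR can only blend neighbouring texels, so the step must be one texel) and reusing texCoord, texture and k from the question's shader; texelStep is a hypothetical uniform holding the size of one texel:
precision mediump float;

varying mediump vec2 texCoord;

uniform sampler2D texture;
uniform mediump float k;
uniform mediump vec2 texelStep;   // hypothetical: 1.0 / texture width and height

void main() {
    // Two adjacent taps: x0 at offset (-1, -1) texels and x1 at (-1, 0) texels.
    mediump vec2 x0Coord = texCoord + vec2(-texelStep.x, -texelStep.y);
    mediump vec2 x1Coord = texCoord + vec2(-texelStep.x, 0.0);

    // Positive kernel weights for the two taps.
    mediump float k0 = 1.0 / k;
    mediump float k1 = 2.0 / k;

    // Sample k1/(k0+k1) of the way from x0 towards x1; with GL_LINEAR filtering
    // the hardware returns (x0*k0 + x1*k1) / (k0 + k1) in a single fetch.
    mediump vec4 y = texture2D(texture, mix(x0Coord, x1Coord, k1 / (k0 + k1)));

    // Contribution of both taps, obtained from one fetch instead of two.
    gl_FragColor = y * (k0 + k1);
}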
