How to use a 3x3 homography matrix in OpenGL ES shaders? - opengl-es

I have a 3x3 homography matrix that works correctly with OpenCV's warpPerspective, but I need to do the warping on the GPU for performance reasons. What is the best approach? I tried multiplying in the vertex shader to get the texture coordinates and then rendering a quad, but I get strange distortions. I'm not sure whether it's the interpolation not working as I expect. I'm attaching output for comparison (it involves two different, but close enough, shots).
Absolute difference of warp and other image from GPU:
Composite of warp and other image in OpenCV:
EDIT:
Following are my shaders: the task is image rectification (making epilines become scanlines) + absolute difference.
// Vertex Shader
static const char* warpVS = STRINGIFY
(
    uniform highp mat3 homography1;
    uniform highp mat3 homography2;
    uniform highp int width;
    uniform highp int height;
    attribute highp vec2 position;
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;

    highp vec2 convertToTexture(highp vec3 pixelCoords) {
        pixelCoords /= pixelCoords.z; // need to project
        pixelCoords /= vec3(float(width), float(height), 1.0);
        pixelCoords.y = 1.0 - pixelCoords.y; // origin is in bottom left corner for textures
        return pixelCoords.xy;
    }

    void main(void)
    {
        gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
        gl_Position.y = -gl_Position.y;
        highp vec3 initialCoords = vec3(position, 1.0);
        refTexCoords = convertToTexture(homography1 * initialCoords);
        curTexCoords = convertToTexture(homography2 * initialCoords);
    }
);
// Fragment Shader
static const char* warpFS = STRINGIFY
(
    varying highp vec2 refTexCoords;
    varying highp vec2 curTexCoords;
    uniform mediump sampler2D refTex;
    uniform mediump sampler2D curTex;
    uniform mediump sampler2D maskTex;

    void main(void)
    {
        if (texture2D(maskTex, refTexCoords).r == 0.0) {
            discard;
        }
        if (any(bvec4(curTexCoords[0] < 0.0, curTexCoords[1] < 0.0, curTexCoords[0] > 1.0, curTexCoords[1] > 1.0))) {
            discard;
        }
        mediump vec4 referenceColor = texture2D(refTex, refTexCoords);
        mediump vec4 currentColor = texture2D(curTex, curTexCoords);
        gl_FragColor = vec4(abs(referenceColor.r - currentColor.r), 1.0, 0.0, 1.0);
    }
);

I think you just need to do the projection per pixel. Make refTexCoords and curTexCoords at least vec3, then do the divide by z in the fragment shader before the texture lookup. Even better, use the texture2DProj built-in (textureProj in newer GLSL versions).
You want to do everything that is linear in the vertex shader, but things like projection need to be done in the fragment shader per pixel.
This link might help with some background: http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/
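For these shaders that would look roughly like the sketch below (untested). Only homography1/refTex is shown; curTexCoords gets the same treatment. The vec3 varying (refTexCoordsH is a made-up name) stays homogeneous, the y-flip is folded into the homogeneous coordinate so it survives the divide, and the projective divide happens per fragment via texture2DProj:

// Vertex shader (sketch): keep the coordinate homogeneous, no divide here
uniform highp mat3 homography1;
uniform highp int width;
uniform highp int height;
attribute highp vec2 position;
varying highp vec3 refTexCoordsH;

void main(void)
{
    gl_Position = vec4(position / vec2(float(width) / 2.0, float(height) / 2.0) - vec2(1.0), 0.0, 1.0);
    gl_Position.y = -gl_Position.y;
    highp vec3 h = homography1 * vec3(position, 1.0);
    // scale to texture units and flip y while still homogeneous; after the
    // per-fragment divide this yields (x/(z*width), 1.0 - y/(z*height))
    refTexCoordsH = vec3(h.x / float(width), h.z - h.y / float(height), h.z);
}

// Fragment shader (sketch): the projective divide happens per pixel
varying highp vec3 refTexCoordsH;
uniform mediump sampler2D refTex;

void main(void)
{
    // texture2DProj divides .xy by .z before sampling
    gl_FragColor = texture2DProj(refTex, refTexCoordsH);
    // equivalent: texture2D(refTex, refTexCoordsH.xy / refTexCoordsH.z);
}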

Related

How to place a texture in a specific location?

In my code, I'm mixing two textures. I want to position a texture at any place on the plane, but when I add an offset to the texture's UV coordinates, the image just gets stretched.
offsetText1 = vec2(0.1,0.1);
vec4 displacement = texture2D(utexture1,vUv+offsetText1);
How do I move the texture to any position without stretching it?
VERTEX SHADER:
varying vec2 vUv;
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 offsetText1;

void main() {
    offsetText1 = vec2(0.1,0.1);
    vUv = uv;
    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    vec4 displacement = texture2D(utexture1, vUv + offsetText1);
    vec4 displacement2 = texture2D(utexture2, vUv);
    modelPosition.z += displacement.r * 1.0;
    modelPosition.z += displacement2.r * 40.0;
    gl_Position = projectionMatrix * viewMatrix * modelPosition;
}
FRAGMENT SHADER:
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 vUv;
varying vec2 offsetText1;

void main() {
    vec3 c;
    vec4 Ca = texture2D(utexture1, vUv + offsetText1);
    vec4 Cb = texture2D(utexture2, vUv);
    c = Ca.rgb * Ca.a + Cb.rgb * Cb.a * (2.0 - Ca.a);
    gl_FragColor = vec4(c, 1.0);
}
Image with offsetText1 = vec2(0.0,0.0): no stretching.
Image with offsetText1 = vec2(0.1,0.1): the image is stretched from the top right corner.
That's just how textures behave. They cover the range [0, 1], so when you go beyond 1 or below 0 they "wrap". You need to tell the texture what to do in that case: repeat, clamp (stretch the edge), or mirror?
You set this with the texture's wrapS and wrapT properties, which accept one of three values:
THREE.RepeatWrapping
THREE.ClampToEdgeWrapping
THREE.MirroredRepeatWrapping
If you want to just show white where the texture extends out of bounds, you'd have to do that in your shader code, along these lines:
vec2 uv = vUv + offsetText1;
if (any(lessThan(uv, vec2(0.0))) || any(greaterThan(uv, vec2(1.0))))
    gl_FragColor = vec4(1.0); // white where the lookup falls outside [0, 1]

How to make visible uniform variable in both shaders?

I have a variable "uIsTeapot" in my vertex shader:
uniform float uIsTeapot;
The vertex shader works with it just fine, but the fragment shader doesn't see it. If I use
if (uIsTeapot == 0.0)
then I get the error
"uIsTeapot": undeclared identifier
even though I defined it in the vertex shader. If I declare uIsTeapot as a uniform in both shaders, the program reports "Could not initialise shaders" because it fails this check:
if (!gl.getProgramParameter(shaderProgram, gl.LINK_STATUS)) {
    alert("Could not initialise shaders");
}
EDIT:
I added mediump to the variable and now the program compiles without errors, but only one object appears on screen even though I draw two objects.
Vertex shader
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;
attribute vec4 aVertexColor;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat3 uNMatrix;
uniform vec3 uAmbientColor;
uniform vec3 uPointLightingLocation;
uniform vec3 uPointLightingColor;
uniform mediump float uIsTeapot;
//varying float vIsTeapot;

varying vec2 vTextureCoord;
varying vec3 vLightWeighting;
varying vec4 vColor;

void main(void) {
    if (uIsTeapot == 1.0) {
        vec4 mvPosition = uMVMatrix * vec4(aVertexPosition, 1.0);
        gl_Position = uPMatrix * mvPosition;
        vec3 lightDirection = normalize(uPointLightingLocation - mvPosition.xyz);
        vec3 transformedNormal = uNMatrix * aVertexNormal;
        float directionalLightWeighting = max(dot(transformedNormal, lightDirection), 0.0);
        directionalLightWeighting = 100.0;
        vLightWeighting = uAmbientColor + uPointLightingColor * directionalLightWeighting;
        vTextureCoord = aTextureCoord;
    }
    if (uIsTeapot == 0.0) {
        gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
        vColor = aVertexColor;
    }
}
Fragment shader
precision mediump float;

varying vec2 vTextureCoord;
varying vec3 vLightWeighting;
varying vec4 vColor;

uniform sampler2D uSampler;
uniform mediump float uIsTeapot;
//varying float vIsTeapot;

void main(void) {
    if (uIsTeapot == 1.0) {
        vec4 textureColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
        gl_FragColor = vec4(textureColor.rgb * vLightWeighting, textureColor.a);
    } else {
        gl_FragColor = vColor;
    }
}
You need to declare it in both shaders, with the same precision.
What do you mean by:
then program don't work right.
Can you post your vertex shader and your fragment shader? I suspect you are relying on the default precision, and it is likely that you are using highp in the vertex shader and mediump in the fragment shader.
Try using:
uniform mediump float uIsTeapot;
... in both vertex shader and fragment shader.
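A minimal sketch of what that looks like: the uniform is declared with the same type and precision in both stages so the program links (everything else here is just placeholder code).

// Vertex shader
uniform mediump float uIsTeapot;   // mediump here...
attribute vec3 aVertexPosition;
void main(void) {
    gl_Position = vec4(aVertexPosition, 1.0);
}

// Fragment shader
precision mediump float;
uniform mediump float uIsTeapot;   // ...and mediump here, so linking succeeds
void main(void) {
    gl_FragColor = (uIsTeapot == 1.0) ? vec4(1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}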

Can't get OpenGL code to work properly

I'm very new to OpenGL, GLSL and WebGL. I'm trying to get this sample code to run in a tool like http://shdr.bkcore.com/, but I can't get it to work.
Vertex Shader:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Shader:
precision highp float;

uniform float time;
uniform vec2 resolution;
varying vec3 fPosition;
varying vec3 fNormal;
uniform sampler2D tex0;

void main()
{
    float border = 0.01;
    float circle_radius = 0.5;
    vec4 circle_color = vec4(1.0, 1.0, 1.0, 1.0);
    vec2 circle_center = vec2(0.5, 0.5);
    vec2 uv = gl_TexCoord[0].xy;
    vec4 bkg_color = texture2D(tex0, uv * vec2(1.0, -1.0));
    // Offset uv with the center of the circle.
    uv -= circle_center;
    float dist = sqrt(dot(uv, uv));
    if ( (dist > (circle_radius + border)) || (dist < (circle_radius - border)) )
        gl_FragColor = bkg_color;
    else
        gl_FragColor = circle_color;
}
I figured that this code must be from an outdated version of the language, so I changed the vertex shader to:
precision highp float;

attribute vec2 position;
attribute vec3 normal;
varying vec2 TextCoord;
attribute vec2 textCoord;
uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec3 fNormal;
varying vec3 fPosition;

void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    TextCoord = vec2(textCoord);
}
That seemed to fix the error messages about undeclared identifiers and not being able to "convert from 'float' to highp 4-component something-or-other", but I have no idea whether, functionally, this does the same thing as the original.
Also, when I convert to this version of the Vertex Shader I have no idea what I'm supposed to do with this line in the Fragment Shader:
vec2 uv = gl_TexCoord[0].xy;
How do I convert this line to fit in with the converted vertex shader and how can I be sure that the vertex shader is even converted correctly?
gl_TexCoord is from desktop OpenGL, and not part of OpenGL ES. You'll need to create a new user-defined vec2 varying to hold the coordinate value.
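A minimal sketch of that (untested), reusing the names from the converted vertex shader above; the user-defined varying TextCoord stands in for gl_TexCoord[0] on both sides:

// Fragment shader (sketch): read the coordinate through the varying
precision highp float;
varying vec2 TextCoord;      // written by the converted vertex shader
uniform sampler2D tex0;

void main()
{
    vec2 uv = TextCoord;     // replaces gl_TexCoord[0].xy
    gl_FragColor = texture2D(tex0, uv * vec2(1.0, -1.0));
}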

Simple GLSL convolution shader is atrociously slow

I'm trying to implement a 2D outline shader in OpenGL ES 2.0 for iOS. It is insanely slow. As in 5 fps slow. I've tracked it down to the texture2D() calls. However, without those, any convolution shader is undoable. I've tried using lowp instead of mediump, but then everything is just black; it does gain another 5 fps, but it's still unusable.
Here is my fragment shader.
varying mediump vec4 colorVarying;
varying mediump vec2 texCoord;

uniform bool enableTexture;
uniform sampler2D texture;
uniform mediump float k;

void main() {
    const mediump float step_w = 3.0/128.0;
    const mediump float step_h = 3.0/128.0;
    const mediump vec4 b = vec4(0.0, 0.0, 0.0, 1.0);
    const mediump vec4 one = vec4(1.0, 1.0, 1.0, 1.0);

    mediump vec2 offset[9];
    mediump float kernel[9];
    offset[0] = vec2(-step_w, step_h);
    offset[1] = vec2(-step_w, 0.0);
    offset[2] = vec2(-step_w, -step_h);
    offset[3] = vec2(0.0, step_h);
    offset[4] = vec2(0.0, 0.0);
    offset[5] = vec2(0.0, -step_h);
    offset[6] = vec2(step_w, step_h);
    offset[7] = vec2(step_w, 0.0);
    offset[8] = vec2(step_w, -step_h);

    kernel[0] = kernel[2] = kernel[6] = kernel[8] = 1.0/k;
    kernel[1] = kernel[3] = kernel[5] = kernel[7] = 2.0/k;
    kernel[4] = -16.0/k;

    if (enableTexture) {
        mediump vec4 sum = vec4(0.0);
        for (int i = 0; i < 9; i++) {
            mediump vec4 tmp = texture2D(texture, texCoord + offset[i]);
            sum += tmp * kernel[i];
        }
        gl_FragColor = (sum * b) + ((one - sum) * texture2D(texture, texCoord));
    } else {
        gl_FragColor = colorVarying;
    }
}
This is unoptimized and not finalized, but I need to bring up performance before continuing. I've tried replacing the texture2D() call in the loop with just a solid vec4 and it runs with no problem, despite everything else going on.
How can I optimize this? I know it's possible because I've seen far more involved 3D effects running with no problem. I can't see why this is causing any trouble at all.
I've done this exact thing myself, and I see several things that could be optimized here.
First off, I'd remove the enableTexture conditional and instead split your shader into two programs, one for the true state of this and one for false. Conditionals are very expensive in iOS fragment shaders, particularly ones that have texture reads within them.
Second, you have nine dependent texture reads here. These are texture reads where the texture coordinates are calculated within the fragment shader. Dependent texture reads are very expensive on the PowerVR GPUs within iOS devices, because they prevent that hardware from optimizing texture reads using caching, etc. Because you are sampling from a fixed offset for the 8 surrounding pixels and one central one, these calculations should be moved up into the vertex shader. This also means that these calculations won't have to be performed for each pixel, just once for each vertex and then hardware interpolation will handle the rest.
Third, for() loops haven't been handled all that well by the iOS shader compiler to date, so I tend to avoid those where I can.
As I mentioned, I've done convolution shaders like this in my open source iOS GPUImage framework. For a generic convolution filter, I use the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;

uniform highp float texelWidth;
uniform highp float texelHeight;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;

varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;

varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    gl_Position = position;

    vec2 widthStep = vec2(texelWidth, 0.0);
    vec2 heightStep = vec2(0.0, texelHeight);
    vec2 widthHeightStep = vec2(texelWidth, texelHeight);
    vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);

    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
    rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;

    topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
    topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
    topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;

    bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
    bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
    bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and the following fragment shader:
precision highp float;

uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;

varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;

varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;

varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;

void main()
{
    mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
    mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
    mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
    mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
    mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
    mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
    mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
    mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);

    mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
    resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
    resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];

    gl_FragColor = resultColor;
}
The texelWidth and texelHeight uniforms are the inverse of the width and height of the input image, and the convolutionMatrix uniform specifies the weights for the various samples in your convolution.
On an iPhone 4, this runs in 4-8 ms for a 640x480 frame of camera video, which is good enough for 60 FPS rendering at that image size. If you just need to do something like edge detection, you can simplify the above, convert the image to luminance in a pre-pass, then only sample from one color channel. That's even faster, at about 2 ms per frame on the same device.
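For reference, the luminance pre-pass mentioned above is roughly a shader like the following sketch (assumed Rec. 601 weights, not GPUImage's exact code); the convolution pass can then sample a single channel:

precision mediump float;

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    mediump vec3 color = texture2D(inputImageTexture, textureCoordinate).rgb;
    // weight RGB into a single luma value (Rec. 601 coefficients)
    mediump float luminance = dot(color, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(luminance), 1.0);
}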
The only way I know of reducing the time taken in this shader is by reducing the number of texture fetches. Since your shader samples textures from equally spaced points about the center pixel and linearly combines them, you could reduce the number of fetches by making use of the GL_LINEAR mode available for texture sampling.
Basically instead of sampling at every texel, sample in between a pair of texels to directly get a linearly weighted sum.
Let us call the sampling at offset (-stepw,-steph) and (-stepw,0) as x0 and x1 respectively. Then your sum is
sum = x0*k0 + x1*k1
Now if you instead sample in between these two texels, at a distance of
k1/(k0+k1) from x0 (and therefore k0/(k0+k1) from x1), the GPU will perform the linear weighting during the fetch and give you
y = x0*k0/(k0+k1) + x1*k1/(k0+k1)
Thus sum can be calculated as
sum = y*(k0 + k1) from just one fetch!
If you repeat this for the other adjacent pixels, you end up doing 4 texture fetches to cover all 8 neighbouring texels, plus one extra fetch for the center pixel.
The link explains this much better
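Applied to the fragment shader from the question, one pair of taps collapses into a single fetch roughly as in the sketch below (offsetA/offsetB and kA/kB are hypothetical names for two neighbouring offsets and their kernel weights; the trick only works when both weights have the same sign and the offsets are one texel apart):

// inside main(), replacing two iterations of the sampling loop
mediump float w = kA + kB;
// sample at the weighted point between the two texel centers;
// GL_LINEAR filtering then returns (xA*kA + xB*kB) / w in one fetch
mediump vec2 combinedOffset = mix(offsetA, offsetB, kB / w);
sum += texture2D(texture, texCoord + combinedOffset) * w;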

Opengl Render texture on texture leaves the transparent part as black (see pic)

The black part of the texture at the top is transparent, but it renders as black in OpenGL ES.
I'm rendering without any blending; I don't know if that's what I need.
How could I solve this issue? Thanks.
Here is my shader:
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D s_texture;
uniform lowp float distance;
uniform lowp float slope;

void main()
{
    highp vec4 color = vec4(1.0);
    highp float d = v_texCoord.y * slope + distance;
    highp vec4 c = texture2D(s_texture, v_texCoord);
    c = (c - d * color) / (1.0 - d);
    //vec4 textureColor = texture2D( s_texture, v_texCoord );
    //gl_FragColor = textureColor;
    gl_FragColor = c;
}
I'm rendering without any blending; I don't know if that's what I need.
Yes, if you want something to be transparent, you need either blending or to discard fragments based on the alpha value.
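For the discard route, a sketch of the change inside the question's fragment shader (the 0.01 alpha threshold is an arbitrary choice); the blending route instead means enabling GL_BLEND with a suitable blend function, e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), in the host code before drawing:

highp vec4 c = texture2D(s_texture, v_texCoord);
if (c.a < 0.01) {
    discard; // drop (nearly) fully transparent texels instead of writing black
}
gl_FragColor = c;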
