An outline/sharp transition in a fragment shader - opengl-es

I would like to create a sharp transition effect between pixels in my fragment shader, but I'm not sure how I could do this.
In my vertex shader I have a varying float x, and in my fragment shader I use this value to set the opacity of the color. I quantize the value to produce a layering effect. What I'd like to do is draw a distinct border (a different color entirely) at the lowest level of the effect. For example, if x > 0.1 for the current pixel and x < 0.1 for any neighboring pixel, then the resulting color should be black.
I don't see any way in GLSL to gain access to neighbouring pixels (I could be wrong). How could I achieve such an effect? I'm limited to OpenGL ES 2.0 (though if it's not possible at all on this version, any solution would be helpful).

You are correct that you cannot access neighboring pixels. There is no guarantee of the order in which pixels are written, since they are all drawn in parallel; if you could read neighboring pixels from the framebuffer, you would get inconsistent results.
However, you can do this in a post-process pass if you want. Draw your whole scene into a framebuffer texture, and then draw that texture to the screen with a filtering shader.
When drawing from a texture in your shader you can sample neighboring texels all you want, so you could easily compare the delta between two neighboring texels.
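For example, a minimal sketch of such a filtering pass (the uniform and varying names are my own, and it assumes the scene pass wrote the x value into the red channel of the framebuffer texture):
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_sceneTexture;   // the framebuffer texture the scene was rendered into
uniform vec2 u_texelSize;           // 1.0 / texture dimensions, set by the application

void main()
{
    float center = texture2D(u_sceneTexture, v_texCoord).r;
    float left   = texture2D(u_sceneTexture, v_texCoord - vec2(u_texelSize.x, 0.0)).r;
    float up     = texture2D(u_sceneTexture, v_texCoord - vec2(0.0, u_texelSize.y)).r;

    // Black border wherever the 0.1 threshold is crossed between neighbors;
    // otherwise pass the scene color through unchanged.
    bool crossed = (center > 0.1) != (left > 0.1) || (center > 0.1) != (up > 0.1);
    gl_FragColor = crossed ? vec4(0.0, 0.0, 0.0, 1.0)
                           : texture2D(u_sceneTexture, v_texCoord);
}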

If your OpenGL ES implementation supports the OES_standard_derivatives extension, you can get the rate of change of your variable by forward/backward differencing with neighboring pixels in the 2×2 quad being shaded:
float outline(float t, float threshold, float width)
{
    return clamp(width - abs(threshold - t) / fwidth(t), 0.0, 1.0);
}
This function returns the coverage for a line of the specified width where t ≈ threshold, using fwidth to determine how far it is from the cutoff. Note that fwidth(t) is equivalent to abs(dFdx(t)) + abs(dFdy(t)) and calculates the width in Manhattan distance, which may overfatten diagonal lines. If you prefer Euclidean distance:
float outline(float t, float threshold, float width)
{
    float dx = dFdx(t);
    float dy = dFdy(t);
    float ewidth = sqrt(dx * dx + dy * dy);
    return clamp(width - abs(threshold - t) / ewidth, 0.0, 1.0);
}
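For example, a minimal usage sketch for the original question (the quantization and colors are placeholders of mine, not from the question); note that the extension has to be enabled explicitly in the fragment shader:
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying float x;   // the value that gets quantized into layers

float outline(float t, float threshold, float width)
{
    return clamp(width - abs(threshold - t) / fwidth(t), 0.0, 1.0);
}

void main()
{
    // Placeholder layering: quantize x into 10 opacity steps.
    vec4 layerColor = vec4(1.0, 1.0, 1.0, floor(x * 10.0) / 10.0);

    // Roughly 1.5-pixel-wide black border where x crosses 0.1.
    float border = outline(x, 0.1, 1.5);
    gl_FragColor = mix(layerColor, vec4(0.0, 0.0, 0.0, 1.0), border);
}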

In addition to Pivot's implementation based on derivatives, you can grab neighboring pixels from a source image using offsets based on the pixel dimensions of that source. The offsets you need are the inverse of the image's width and height in pixels, applied to the current texture coordinate.
For example, here is a vertex shader I've used to calculate these offsets for the eight pixels that surround a central one:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp float texelWidth;
uniform highp float texelHeight;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
    gl_Position = position;
    vec2 widthStep = vec2(texelWidth, 0.0);
    vec2 heightStep = vec2(0.0, texelHeight);
    vec2 widthHeightStep = vec2(texelWidth, texelHeight);
    vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);
    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
    rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;
    topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
    topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
    topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
    bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
    bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
    bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and here's a fragment shader that uses this to perform Sobel edge detection:
precision mediump float;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
uniform sampler2D inputImageTexture;
void main()
{
    float bottomLeftIntensity = texture2D(inputImageTexture, bottomLeftTextureCoordinate).r;
    float topRightIntensity = texture2D(inputImageTexture, topRightTextureCoordinate).r;
    float topLeftIntensity = texture2D(inputImageTexture, topLeftTextureCoordinate).r;
    float bottomRightIntensity = texture2D(inputImageTexture, bottomRightTextureCoordinate).r;
    float leftIntensity = texture2D(inputImageTexture, leftTextureCoordinate).r;
    float rightIntensity = texture2D(inputImageTexture, rightTextureCoordinate).r;
    float bottomIntensity = texture2D(inputImageTexture, bottomTextureCoordinate).r;
    float topIntensity = texture2D(inputImageTexture, topTextureCoordinate).r;
    float h = -topLeftIntensity - 2.0 * topIntensity - topRightIntensity + bottomLeftIntensity + 2.0 * bottomIntensity + bottomRightIntensity;
    float v = -bottomLeftIntensity - 2.0 * leftIntensity - topLeftIntensity + bottomRightIntensity + 2.0 * rightIntensity + topRightIntensity;
    float mag = length(vec2(h, v));
    gl_FragColor = vec4(vec3(mag), 1.0);
}
I pass in the texelWidth and texelHeight uniforms, which are 1/width and 1/height of the image, respectively. This does require you to track the input image width and height, but it should work on all OpenGL ES devices, not just those with the derivative extensions.
I do the texture offset calculations in the vertex shader for two reasons: so that offset calculations only need to be performed once per vertex instead of once per fragment, and more importantly because some of the tile-based deferred renderers react very poorly to dependent texture reads where texture offsets are calculated in a fragment shader. The performance can be up to 20X higher for a shader program that removes these dependent texture reads on these devices.
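For contrast, a dependent texture read is the kind of thing sketched below (this is not code from GPUImage, just an illustration reusing the names above): the sample coordinate is derived per fragment, which is what defeats the texture prefetching on these GPUs.
precision mediump float;

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform float texelWidth;

void main()
{
    // Dependent texture read: the coordinate is computed here instead of
    // being interpolated from the vertex shader.
    vec2 leftCoordinate = textureCoordinate - vec2(texelWidth, 0.0);
    gl_FragColor = texture2D(inputImageTexture, leftCoordinate);
}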

Related

Threejs: compute projected coordinate in fragment shader

I'm struggling with handling coordinates in a fragment shader.
In brief, I just want to draw a circle with a fragment shader, using an (x, y, z) position given in world space. But because of the camera position and the z of the circle's center, I can't get the correct projected x and y coordinates.
Let's suppose my camera is placed at (0, 0, 1000) with a perspective projection:
fov: 45deg
aspect: screen_width/screen_height
nearZ: 1
farZ: 10000
The camera looks at (0, 0). In this case with three.js, I can get the camera's projectionMatrix and modelViewMatrix (e.g. PerspectiveCamera.projectionMatrix), and by default I can also use viewMatrix in the fragment shader of a three.js ShaderMaterial.
So in the fragment shader, to calculate the projected coordinate of a circle placed at (300, 300, -1000), I wrote my vertex shader and fragment shader like below.
My vertex shader exists only to pass projectionMatrix and modelViewMatrix along as P and MV.
// vertexShader
varying mat4 P;
varying mat4 MV;
void main(){
    P = projectionMatrix;
    MV = modelViewMatrix;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
And then, I just calculate x and y using P and MV like below.
// fragmentShader
varying mat4 P;
varying mat4 MV;
uniform float x;
uniform float y;
uniform float z;
uniform float r;
uniform vec2 u_resolution;
float circle(vec2 _st, vec2 _center, float _radius){
    vec2 dist = _st - _center + u_resolution;
    return 1.-smoothstep(_radius-(_radius*0.01),
                         _radius+(_radius*0.01),
                         length(dist));
}
void main(){
    vec2 coord = (P * MV * vec4(x, y, z, 1.0)).xy;
    float point = circle(gl_FragCoord.xy, coord, r); // ignore r scaling.
    gl_FragColor = vec4(vec3(point), point);
}
But the result doesn't match what I expected, and I also noticed some weird behavior:
No matter what I set the z uniform to, nothing changes at all.
The device pixel ratio could be a factor (e.g. a retina display has a pixel ratio of 2), but from my experiments it has nothing to do with this.
Did I make a mistake somewhere, or misunderstand something? (There may be a mistake in the circle function, but I don't think it causes the critical problem.)
Let's assume that x, y and z define the center of a circle in world space, and that you want to draw the circle in a plane parallel to the viewport, in a screen-space pass where you draw a quad over the entire viewport.
You have to transform the center of the circle from world-space coordinates to normalized device coordinates. The best solution would be to do this on the CPU and set a uniform with the result.
According to the code in your question, this can be done in the vertex shader, too. But after the transformation by the model-view matrix and the projection matrix, you have to do a perspective divide to transform the point from clip space to normalized device space:
uniform mat4 P;
uniform mat4 MV;
uniform float x;
uniform float y;
uniform float z;
varying vec3 cpt;
void main(){
    vec4 cpt_h = projectionMatrix * modelViewMatrix * vec4(x, y, z, 1.0);
    cpt = cpt_h.xyz / cpt_h.w; // assign the varying; do not re-declare it as a local
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
If u_resolution is the width and height of the viewport, then the x and y coordinates of the fragment in normalized device space can be calculated by:
vec2 coord = gl_FragCoord.xy / u_resolution.xy * 2.0 - 1.0;
But I recommend transforming the center point of the circle to window (pixel) coordinates; then the radius can be set in pixels, too:
vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy;
To calculate the length of a vector you can use the GLSL function length.
The final fragment shader may look like this:
varying vec3 cpt;
uniform vec2 u_resolution;
uniform float u_pixel_ratio; // device pixel ratio
uniform float r; // e.g. 100.0 means a radius of 100 pixel
float circle( vec2 _st, vec2 _center, float _radius )
{
    // thickness of the circle in pixels
    const float thickness = 20.0;
    // distance to the center point in pixels
    float dist = length(_st - _center);
    return 1.0 - smoothstep(0.0, thickness/2.0, abs(_radius-dist));
}
void main(){
    vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy * u_pixel_ratio;
    float point = circle(gl_FragCoord.xy, cpt_p, r);
    gl_FragColor = vec4(point);
}
For example, this gives a circle with a radius of 50.0 and a thickness of 20.0.
If you want to apply perspective distortion to the circle, meaning that the size of the circle decreases with distance, then you have to set the radius r in world coordinates.
In the vertex shader, calculate a point on the circle and the distance of that point to the center of the circle in normalized device space.
This is the radius that you have to pass from the vertex shader to the fragment shader, in addition to the center point of the circle.
uniform mat4 P;
uniform mat4 MV;
uniform float x;
uniform float y;
uniform float z;
uniform float r; // e.g. radius in world space
varying vec3 cpt;
varying float radius;
void main(){
    vec4 cpt_v = modelViewMatrix * vec4(x, y, z, 1.0);
    vec4 rpt_v = vec4(cpt_v.x, cpt_v.y + r, cpt_v.zw);
    vec4 cpt_h = projectionMatrix * cpt_v;
    vec4 rpt_h = projectionMatrix * rpt_v;
    cpt = cpt_h.xyz / cpt_h.w;
    vec3 rpt = rpt_h.xyz / rpt_h.w; // perspective divide of the projected circle point
    radius = length(rpt - cpt);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
varying vec3 cpt;
varying float radius;
uniform vec2 u_resolution;
uniform float u_pixel_ratio; // device pixel ratio
uniform float r; // e.g. 100.0 means a radius of 100 pixel
float circle( vec2 _st, vec2 _center, float _radius )
{
    const float thickness = 20.0;
    float dist = length(_st - _center);
    return 1.0 - smoothstep(0.0, thickness/2.0, abs(_radius-dist));
}
void main()
{
    vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy * u_pixel_ratio;
    float radius_p = radius * 0.5 * u_resolution.y * u_pixel_ratio; // u_pixel_ratio is a float
    float point = circle(gl_FragCoord.xy, cpt_p, radius_p);
    gl_FragColor = vec4(point);
}

Rendering artifacts when using dot(n,l) as texture lookup coordinate Webgl

I'm implementing the xToon shader (pdf) in GLSL to use as a shader with Three.js.
I'm getting some rendering artifacts, and I think the problem is due to some WebGL strangeness that I am not knowledgeable about, perhaps relating to a NaN or Inf or something... I'm pulling my hair out.
I'll include the complete fragment and vertex shaders below, but I think this is the offending code located in the fragment shader:
....
    vec3 n = normalize(vNormal);
    vec3 l = normalize(lightDir);
    float d = dot(n, l) * 0.5 + 0.5;
    //vec2 texLookUp = vec2(d, loa);
    vec2 texLookUp = vec2(d, 0.055);
    vec4 dColor = texture2D(texture, texLookUp);
    gl_FragColor = dColor;
....
As best as I can tell from debugging, there seems to be some problem with using the value d as a component of the texture lookup vector. This code produces these strange artifacts:
There shouldn't be those yellow "lines" on those contours...
As you may have noted, I'm not actually using the "loa" value in this code. For a while I thought that this problem was in the way I was calculating loa, but it seems that this bug is independent of loa.
Any help would be much appreciated!
The fragment shader:
uniform vec3 lightDir;
uniform sampler2D texture;
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUv;
// loa calculation for texture lookup
varying highp float loa;
void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(lightDir);
    float d = dot(n, l) * 0.5 + 0.5;
    //vec2 texLookUp = vec2(d, loa);
    vec2 texLookUp = vec2(d, 0.055);
    vec4 dColor = texture2D(texture, texLookUp);
    gl_FragColor = dColor;
}
And the vertex shader:
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec3 focalPos;
uniform float inflate;
uniform float zmin;
uniform float r;
varying vec3 vNormal;
varying vec2 vUv;
varying float loa;
void main() {
    vec3 n = normalize(normal);
    // euclidean distance from the camera position to the point
    float depth = length(cameraPos - position);
    // 1. detail mapping correcting for perspective projection
    float z = depth / zmin;
    loa = 1.0 - (log2(z)/log2(r));
    loa = clamp(loa, 0.055, 0.9);
    vNormal = n;
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(normal * inflate + position, 1.0 );
}
I solved the problem by setting the texture to ClampToEdgeWrapping instead of RepeatWrapping. I was led to this answer by this stack overflow question:
Using floor() function in GLSL when sampling a texture leaves glitch
The solution is explained very well in this blog post:
http://webglfundamentals.org/webgl/lessons/webgl-3d-textures.html
And the functions to deal with this in THREEjs are members of the Texture and are explained in the THREEjs docs here.
Also I needed to set the min filter to Nearest to fully get rid of the artifacts.

GLSL webgl lerp normals from uv offset

I have a displacement map on a 512px × 512px plane (100×100 segments). As the image for the displacement map scrolls left, the vertices snap to their height positions instead of blending smoothly. I have been looking at the mix() and smoothstep() functions to morph the vertices to their positions over time, but I'm having a hard time implementing it.
uniform sampler2D heightText; //texture greyscale 512x512
uniform float displace;
uniform float time;
uniform float speed;
varying vec2 vUV;
varying float scaleDisplace;
void main() {
    vUV = uv;
    vec2 uvOffset = vUV + vec2( 0.1, 0.1)* time; // animates offset
    vec2 uvCo = vUV + vec2( 0.0, 0.0);
    vec2 texSize = vec2(-0.8, 0.8); // scales image larger
    vec4 data = texture2D( heightText, uvOffset + fract(uvCo)*texSize.x);
    scaleDisplace = data.r;
    //vec3 possy = normal * displace * scaleDisplace;
    vec3 morphPossy = mix( position, normal *displace , scaleDisplace)* time ;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(morphPossy, 1.0 );
}
Using Three.js 71 with vertex and fragment shaders.
Any help appreciated ...
Since you're using a texture as a height map, you should make sure that:
heightText.magFilter = THREE.LinearFilter; // This is the default value.
so that the values you receive are smoothed texel to texel.

Simple GLSL convolution shader is atrociously slow

I'm trying to implement a 2D outline shader in OpenGL ES 2.0 for iOS. It is insanely slow, as in 5 fps slow. I've tracked it down to the texture2D() calls; however, without those, no convolution shader is doable. I've tried using lowp instead of mediump, but then everything is just black, and although it gains another 5 fps, it's still unusable.
Here is my fragment shader.
varying mediump vec4 colorVarying;
varying mediump vec2 texCoord;
uniform bool enableTexture;
uniform sampler2D texture;
uniform mediump float k;
void main() {
    const mediump float step_w = 3.0/128.0;
    const mediump float step_h = 3.0/128.0;
    const mediump vec4 b = vec4(0.0, 0.0, 0.0, 1.0);
    const mediump vec4 one = vec4(1.0, 1.0, 1.0, 1.0);
    mediump vec2 offset[9];
    mediump float kernel[9];
    offset[0] = vec2(-step_w, step_h);
    offset[1] = vec2(-step_w, 0.0);
    offset[2] = vec2(-step_w, -step_h);
    offset[3] = vec2(0.0, step_h);
    offset[4] = vec2(0.0, 0.0);
    offset[5] = vec2(0.0, -step_h);
    offset[6] = vec2(step_w, step_h);
    offset[7] = vec2(step_w, 0.0);
    offset[8] = vec2(step_w, -step_h);
    kernel[0] = kernel[2] = kernel[6] = kernel[8] = 1.0/k;
    kernel[1] = kernel[3] = kernel[5] = kernel[7] = 2.0/k;
    kernel[4] = -16.0/k;
    if (enableTexture) {
        mediump vec4 sum = vec4(0.0);
        for (int i = 0; i < 9; i++) {
            mediump vec4 tmp = texture2D(texture, texCoord + offset[i]);
            sum += tmp * kernel[i];
        }
        gl_FragColor = (sum * b) + ((one - sum) * texture2D(texture, texCoord));
    } else {
        gl_FragColor = colorVarying;
    }
}
This is unoptimized, and not finalized, but I need to bring up performance before continuing on. I've tried replacing the texture2D() call in the loop with just a solid vec4 and it runs no problem, despite everything else going on.
How can I optimize this? I know it's possible because I've seen way more involved effects in 3D running no problem. I can't see why this is causing any trouble at all.
I've done this exact thing myself, and I see several things that could be optimized here.
First off, I'd remove the enableTexture conditional and instead split your shader into two programs, one for the true state of this and one for false. Conditionals are very expensive in iOS fragment shaders, particularly ones that have texture reads within them.
Second, you have nine dependent texture reads here. These are texture reads where the texture coordinates are calculated within the fragment shader. Dependent texture reads are very expensive on the PowerVR GPUs within iOS devices, because they prevent that hardware from optimizing texture reads using caching, etc. Because you are sampling from a fixed offset for the 8 surrounding pixels and one central one, these calculations should be moved up into the vertex shader. This also means that these calculations won't have to be performed for each pixel, just once for each vertex and then hardware interpolation will handle the rest.
Third, for() loops haven't been handled all that well by the iOS shader compiler to date, so I tend to avoid those where I can.
As I mentioned, I've done convolution shaders like this in my open source iOS GPUImage framework. For a generic convolution filter, I use the following vertex shader:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp float texelWidth;
uniform highp float texelHeight;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
    gl_Position = position;
    vec2 widthStep = vec2(texelWidth, 0.0);
    vec2 heightStep = vec2(0.0, texelHeight);
    vec2 widthHeightStep = vec2(texelWidth, texelHeight);
    vec2 widthNegativeHeightStep = vec2(texelWidth, -texelHeight);
    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = inputTextureCoordinate.xy - widthStep;
    rightTextureCoordinate = inputTextureCoordinate.xy + widthStep;
    topTextureCoordinate = inputTextureCoordinate.xy - heightStep;
    topLeftTextureCoordinate = inputTextureCoordinate.xy - widthHeightStep;
    topRightTextureCoordinate = inputTextureCoordinate.xy + widthNegativeHeightStep;
    bottomTextureCoordinate = inputTextureCoordinate.xy + heightStep;
    bottomLeftTextureCoordinate = inputTextureCoordinate.xy - widthNegativeHeightStep;
    bottomRightTextureCoordinate = inputTextureCoordinate.xy + widthHeightStep;
}
and the following fragment shader:
precision highp float;
uniform sampler2D inputImageTexture;
uniform mediump mat3 convolutionMatrix;
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 rightTextureCoordinate;
varying vec2 topTextureCoordinate;
varying vec2 topLeftTextureCoordinate;
varying vec2 topRightTextureCoordinate;
varying vec2 bottomTextureCoordinate;
varying vec2 bottomLeftTextureCoordinate;
varying vec2 bottomRightTextureCoordinate;
void main()
{
    mediump vec4 bottomColor = texture2D(inputImageTexture, bottomTextureCoordinate);
    mediump vec4 bottomLeftColor = texture2D(inputImageTexture, bottomLeftTextureCoordinate);
    mediump vec4 bottomRightColor = texture2D(inputImageTexture, bottomRightTextureCoordinate);
    mediump vec4 centerColor = texture2D(inputImageTexture, textureCoordinate);
    mediump vec4 leftColor = texture2D(inputImageTexture, leftTextureCoordinate);
    mediump vec4 rightColor = texture2D(inputImageTexture, rightTextureCoordinate);
    mediump vec4 topColor = texture2D(inputImageTexture, topTextureCoordinate);
    mediump vec4 topRightColor = texture2D(inputImageTexture, topRightTextureCoordinate);
    mediump vec4 topLeftColor = texture2D(inputImageTexture, topLeftTextureCoordinate);
    mediump vec4 resultColor = topLeftColor * convolutionMatrix[0][0] + topColor * convolutionMatrix[0][1] + topRightColor * convolutionMatrix[0][2];
    resultColor += leftColor * convolutionMatrix[1][0] + centerColor * convolutionMatrix[1][1] + rightColor * convolutionMatrix[1][2];
    resultColor += bottomLeftColor * convolutionMatrix[2][0] + bottomColor * convolutionMatrix[2][1] + bottomRightColor * convolutionMatrix[2][2];
    gl_FragColor = resultColor;
}
The texelWidth and texelHeight uniforms are the inverse of the width and height of the input image, and the convolutionMatrix uniform specifies the weights for the various samples in your convolution.
On an iPhone 4, this runs in 4-8 ms for a 640x480 frame of camera video, which is good enough for 60 FPS rendering at that image size. If you just need to do something like edge detection, you can simplify the above, convert the image to luminance in a pre-pass, then only sample from one color channel. That's even faster, at about 2 ms per frame on the same device.
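For reference, a luminance pre-pass of that kind can be as simple as the following sketch (not the exact GPUImage shader; the weights are the usual Rec. 601 luma coefficients):
precision mediump float;

varying vec2 textureCoordinate;
uniform sampler2D inputImageTexture;

void main()
{
    vec4 color = texture2D(inputImageTexture, textureCoordinate);
    // Rec. 601 luma weights.
    float luminance = dot(color.rgb, vec3(0.299, 0.587, 0.114));
    gl_FragColor = vec4(vec3(luminance), color.a);
}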
The only way I know of reducing the time taken in this shader is by reducing the number of texture fetches. Since your shader samples the texture at equally spaced points around the center pixel and linearly combines them, you can reduce the number of fetches by making use of the GL_LINEAR mode available for texture sampling.
Basically instead of sampling at every texel, sample in between a pair of texels to directly get a linearly weighted sum.
Let us call the samples at offsets (-step_w, -step_h) and (-step_w, 0.0) x0 and x1, with kernel weights k0 and k1 respectively. Then their contribution to the sum is
sum = x0*k0 + x1*k1
Now if you instead take a single sample in between these two texels, at a distance of
k1/(k0+k1) from x0 (and therefore k0/(k0+k1) from x1), the GPU will perform the linear weighting during the fetch and give you
y = x0*k0/(k0+k1) + x1*k1/(k0+k1)
Thus that part of the sum can be calculated as
sum = y*(k0 + k1) from just one fetch!
If you repeat this for the other adjacent pixels, you end up doing 4 texture fetches to cover the 8 surrounding texels, plus one extra fetch for the center pixel.
The link explains this much better
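As a concrete illustration, here is a minimal sketch of combining two taps into one fetch (u_texelSize is my own uniform holding 1/width and 1/height; it assumes GL_LINEAR filtering, taps exactly one texel apart, and weights of the same sign):
precision mediump float;

varying vec2 texCoord;
uniform sampler2D texture;
uniform vec2 u_texelSize;   // 1.0 / texture dimensions, set by the application

void main()
{
    // Two vertically adjacent taps with weights k0 and k1.
    float k0 = 1.0;
    float k1 = 2.0;
    float w  = k0 + k1;

    vec2 o0 = vec2(0.0, 0.0);            // texel x0 (the current texel)
    vec2 o1 = vec2(0.0, u_texelSize.y);  // texel x1, one texel away

    // Sampling at a distance of k1/w from x0 returns x0*k0/w + x1*k1/w,
    // so scaling by w reconstructs x0*k0 + x1*k1 with a single fetch.
    vec4 y = texture2D(texture, texCoord + mix(o0, o1, k1 / w));
    gl_FragColor = y * w;
}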

GLSL Shader - How to calculate the height of a texture?

In this question I asked how to create a "mirrored" texture, and now I want to move this "mirrored" image down along the y-axis by the height of the image.
I tried something like this with different values of HEIGHT, but I cannot find a proper solution:
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
void main()
{
    highp vec4 pos = a_position;
    pos.y = pos.y - HEIGHT;
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos;
    v_color = a_color;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
What you are actually changing in your code snippet is the Y position of your vertices... this is most certainly not what you want to do.
a_position is your model-space position; the coordinate system that is centered around your quad (I'm assuming you're using a quad to display the texture).
If you instead make the modification in screen space, you will be able to move the image up and down, etc., so change the gl_Position value:
((u_projectionMatrix * u_modelViewMatrix) * pos + vec4(0.0, HEIGHT, 0.0, 0.0))
Note that you will then be working in screen space, so check the dimensions of your viewport.
Finally, a better way to achieve the effect you want is to use a rotation matrix to flip and tilt the image.
You would then combine this matrix with the rotation of your image (i.e. fold it into the model-view matrix).
You can choose to either multiply the model matrices by the view projection on the CPU:
original_mdl_mat = ...;
rotated_mdl_mat = Matrix.CreateTranslation(0, -image.Height, 0) * Matrix.CreateRotationY(180) * original_mdl_mat;
mvm_original_mat = Projection * View * original_mdl_mat;
mvm_rotated_mat = Projection * View * rotated_mdl_mat;
or on the GPU:
uniform highp mat4 u_model;
uniform highp mat4 u_viewMatrix;
uniform highp mat4 u_projectionMatrix;
gl_Position = (u_projectionMatrix * u_viewMatrix * u_model) * pos;
The coordinates passed to texture2D always sample the source in the range [0, 1) on both axes, regardless of the original texture size and aspect ratio. So a kneejerk answer is that the height of a texture is always 1.0.
If you want to know the height of the source image comprising the texture in pixels then you'll need to supply that yourself — probably as a uniform — since it isn't otherwise exposed.
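For example, a minimal sketch of supplying the size yourself (u_texSize and the other names here are my own, not an existing API):
precision mediump float;

varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec2 u_texSize;   // texture dimensions in pixels, e.g. vec2(512.0, 256.0), set by the application

void main()
{
    vec2 texelSize = 1.0 / u_texSize;   // size of one texel in [0, 1] texture coordinates
    // Sample the texel one pixel above the current coordinate.
    gl_FragColor = texture2D(u_texture, v_texCoord + vec2(0.0, texelSize.y));
}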
