Applying a perspective transformation matrix from GIMP into a GLSL shader

So I'm trying to add a rotation and a perspective effect to an image in the vertex shader. The rotation works just fine, but I'm unable to get the perspective effect working. I'm working in 2D.
The rotation matrix is generated from the code but the perspective matrix is a bunch of hardcoded values I got from GIMP by using the perspective tool.
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.58302f, -0.29001f, 103.0f,
-0.00753f, 0.01827f, 203.0f,
-0.00002f, -0.00115f, 1.0f
});
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
uniform mat3 u_rotation;
uniform mat3 u_perspective;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
vec3 vec = vec3(a_texCoord0 * 500.0, 1.0);
vec = vec * u_perspective;
vec = vec3((vec.xy / vec.z) / 500.0, 0.0);
vec -= vec3(0.5, 0.5, 0.0);
vec = vec * u_rotation;
v_texCoords = vec.xy + vec2(0.5);
gl_Position = u_projTrans * a_position;
}
For the rotation, I'm offsetting the origin so that it rotates around the center instead of the top left corner.
Pretty much everything I know about GIMP's perspective tool comes from http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch10.ps It suggested I would be able to reproduce what GIMP does, but it turns out I can't. The result shows nothing (no pixels), while removing the perspective part shows the image rotating properly.
As mentioned in the link, I'm dividing by vec.z to convert my homogeneous coordinates back to a 2D point. I'm not shifting the origin for the perspective transformation, since the link mentions that the top-left corner is used as the origin (p. 11):
There is one thing to be careful about - the origin of GIMP
coordinates is at the upper left, with y increasing downwards.
EDIT:
Thanks to @Rabbid76's answer, it's now showing something! However, it's not transforming my texture the way the matrix was transforming my image in GIMP.
My transformation matrix in GIMP was supposed to do something a bit like this:
But instead, it looks something like this:
This is what I think is happening, based on the actual result:
https://imgur.com/X56rp8K (Image used)
(As pointed out, the texture wrap parameter is clamp-to-edge instead of clamp-to-border, but that's beside the point.)
It looks like it's doing the exact opposite of what I'm looking for. I tried offsetting the origin to the center of the image and to the bottom left before applying the matrix, without success. This is a new result, but it's still the same problem: how do I apply the GIMP perspective matrix in a GLSL shader?
EDIT2:
With more testing, I can confirm that it's doing the "opposite". Using this simple downscale transformation matrix:
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.75f, 0f, 50f,
0f, 0.75f, 50f,
0f, 0f, 1.0f
});
The result is an upscaled version of the image:
If I invert the matrix programmatically, it works for the simple scaling matrix! But for the perspective matrix, it shows this:
https://imgur.com/v3TLe2d
EDIT3:
Thanks to @Rabbid76 again, it turned out that applying the rotation after the perspective matrix effectively does the rotation first, and I end up with a result like this: https://imgur.com/n1vWq0M
That's almost it! The only problem is that the image is VERY squished. It's as if the perspective matrix was applied multiple times. But if you look carefully, you can see it rotating while in perspective, just like I want. The problem now is how to unsquish it to get the result I had in GIMP. (The root problem is still the same: how to take a GIMP matrix and apply it in a shader.)

This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
The matrix
0.58302 -0.29001 103.0
-0.00753 0.01827 203.0
-0.00002 -0.00115 1.0
is a 2D perspective transformation matrix. It operates on 2D homogeneous coordinates.
See 2D affine and perspective transformation matrices
Since the matrix displayed in GIMP is the transformation from the perspective view to the orthogonal view, the inverse matrix has to be used for the transformation.
The inverse matrix can be calculated by calling inv().
The matrix is set up to map a Cartesian coordinate in the range [0, 500] to a homogeneous coordinate in the range [0, 500].
Your assumption is correct: you have to scale the input from the range [0, 1] to [0, 500] and the output from [0, 500] back to [0, 1].
But you have to scale the 2D Cartesian coordinates.
Further, you have to do the rotation after the perspective projection and the perspective divide.
It may be necessary (depending on the bitmap and the texture coordinate attribute) to flip the V coordinate of the texture coordinates.
And most important, the transformation has to be done per fragment in the fragment shader.
Note, since this transformation is not linear (it is a perspective transformation), it is not sufficient to calculate the texture coordinates at the corner points only.
vec2 Project2D( in vec2 uv_coord )
{
const float scale = 500.0;
// flip Y if needed
//vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
vec2 uv = uv_coord.xy;
// uv_h: homogeneous coordinate in the range [0, 500]
vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
// uv_p: perspective divide and downscale [0, 500] -> [0, 1]
vec3 uv_p = vec3(uv_h.xy / uv_h.z / scale, 1.0);
// rotate around the center
uv_p = vec3(uv_p.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0);
return uv_p.xy;
}
Of course you can do the transformation in the vertex shader too.
But then you have to pass the 2D homogeneous coordinate from the vertex shader to the fragment shader.
This is similar to setting a clip space coordinate to gl_Position.
The difference is that you have a 2D homogeneous coordinate and not a 3D one, and you have to do the perspective divide manually in the fragment shader:
Vertex shader:
attribute vec2 a_texCoord0;
varying vec3 v_texCoords_h;
uniform mat3 u_perspective;
vec3 Project2D( in vec2 uv_coord )
{
const float scale = 500.0;
// flip Y if needed
//vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
vec2 uv = uv_coord.xy;
// uv_h: homogeneous coordinate in the range [0, 500]
vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
// downscale the Cartesian part [0, 500] -> [0, 1]
return vec3(uv_h.xy / scale, uv_h.z);
}
void main()
{
v_texCoords_h = Project2D( a_texCoord0 );
.....
}
Fragment shader:
varying vec3 v_texCoords_h;
uniform mat3 u_rotation;
void main()
{
// perspective divide
vec2 uv = v_texCoords_h.xy / v_texCoords_h.z;
// rotation
uv = (vec3(uv.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0)).xy;
.....
}
See the preview, where I used the following 2D projection matrix, which is the inverse of the one displayed in GIMP:
2.452f, 2.6675f, -388.0f,
0.0f, 7.7721f, -138.0f,
0.00001f, 0.00968f, 1.0f
Further note: in contrast to u_projTrans, u_perspective is initialized in row-major order.
Because of that you have to multiply the vector from the left by u_perspective:
vec_h = vec3(vec.xy * 500.0, 1.0) * u_perspective;
But for u_projTrans the vector has to be multiplied from the right:
gl_Position = u_projTrans * a_position;
See GLSL Programming/Vector and Matrix Operations
and Data Type (GLSL)
Of course this may change if you transpose the matrix when you set it with glUniformMatrix*.
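As a self-contained sketch of why the left multiplication works (the matrix values are the ones from the question): a GLSL mat3 constructor is column-major, so listing the GIMP rows in reading order stores the transposed matrix, which is the same situation as uploading the row-major float array to u_perspective without transposing it.
// Sketch only: row-major data multiplied from the left.
// Each listed GIMP row becomes a column, i.e. m holds the transpose of the GIMP matrix.
vec3 applyGimpMatrix(in vec2 p)              // p is a point in the [0, 500] range
{
    mat3 m = mat3( 0.58302, -0.29001, 103.0,   // GIMP row 0 -> column 0
                  -0.00753,  0.01827, 203.0,   // GIMP row 1 -> column 1
                  -0.00002, -0.00115,   1.0);  // GIMP row 2 -> column 2
    // row vector * transposed matrix == original GIMP matrix * column vector
    return vec3(p, 1.0) * m;
}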

Related

Calculating a transformation matrix to place an object on a sphere in glsl

I'm trying generate some matrices to place trees on a planet on the GPU. The position of each tree is predetermined - based on a biome map and various heightmap data - but this data is GPU resident so I can't do this on the CPU. At the moment I'm instancing using the geometry shader - this will change to traditional instancing if performance is bad, and I'd then compute the model matrices for each tree on a compute shader.
I've got as far as trying to use a modified version of lookAt(), but I can't get it working, and even if I did, the trees would be perpendicular to the planet instead of standing up. I know I can define a matrix using 3 axes (the normal of the sphere, a tangent and a bitangent), but given that I don't care what direction these tangents and bitangents point at the moment, what would be a quick way to calculate this matrix in GLSL? Thanks!
void drawInstance(vec3 offset)
{
//Grab the model's position from the model matrix
vec3 modelPos = vec3(modelMatrix[3][0],modelMatrix[3][1],modelMatrix[3][2]);
//Add the offset
modelPos +=offset;
//Eye = where the new pos is, look in x direction for now, planet is at origin so up is just the modelPos normalized
mat4 m = lookAt(modelPos, modelPos + vec3(1,0,0), normalize(modelPos));
//Lookat is intended as a camera matrix, fix this
m = inverse(m);
vec3 pos = gl_in[0].gl_Position.xyz;
gl_Position = vp * m *vec4(pos, 1.0);
EmitVertex();
pos = gl_in[1].gl_Position.xyz ;
gl_Position = vp * m *vec4(pos, 1.0);
EmitVertex();
pos = gl_in[2].gl_Position.xyz;
gl_Position = vp * m * vec4(pos, 1.0);
EmitVertex();
EndPrimitive();
}
void main()
{
vp = proj * view;
mvp = proj * view * modelMatrix;
drawInstance(vec3(0,20,0));
// drawInstance(vec3(0,20,0));
// drawInstance(vec3(0,20,-40));
// drawInstance(vec3(40,40,0));
// drawInstance(vec3(-40,0,0));
}
I would recommend taking a different approach completely.
First, don't use geometry shaders for replicating geometry. That's what glDrawArraysInstanced is for.
Second, it's hard to define such a matrix procedurally. This is related to the Hairy Ball Theorem.
Instead I would generate a bunch of random rotations on the CPU. Use this method to create a uniformly distributed quaternion. Pass that quaternion to the vertex shader as a single vec4 instanced attribute. In the vertex shader:
Offset the tree vertex by (0, 0, radiusOfThePlanet) so that it's located at the north pole (assuming Z-axis is up).
Apply the quaternion rotation (it will rotate around planet center so the tree stays on the surface).
Apply the planet model-view and camera projection matrices as usual.
This will yield an unbiased uniformly distributed random set of trees.
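A minimal vertex-shader sketch of those three steps (the names a_rotation, u_planetRadius and u_viewProjection are assumptions for illustration, not part of the question's code; the per-instance quaternion is assumed to be a unit quaternion):
attribute vec3 a_position;     // tree-model vertex
attribute vec4 a_rotation;     // per-instance unit quaternion (x, y, z, w)
uniform mat4 u_viewProjection;
uniform float u_planetRadius;
// Rotate a vector by a unit quaternion.
vec3 rotateByQuat(vec4 q, vec3 v)
{
    return v + 2.0 * cross(q.xyz, cross(q.xyz, v) + q.w * v);
}
void main()
{
    // 1) place the tree at the pole of the planet (Z-axis up)
    vec3 p = a_position + vec3(0.0, 0.0, u_planetRadius);
    // 2) rotate around the planet center, so the tree stays on (and normal to) the surface
    p = rotateByQuat(a_rotation, p);
    // 3) usual camera transform
    gl_Position = u_viewProjection * vec4(p, 1.0);
}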
Found a solution to the problem which allows me to place objects on the surface of a sphere facing in the correct directions. Here is the code:
mat4 m = mat4(1);
vec3 worldPos = getWorldPoint(sphericalCoords);
//Add a random number to the world pos, then normalize it so that it is a point on a unit sphere slightly different to the world pos. The vector between them is a tangent. Change this value to rotate the object once placed on the sphere
vec3 xAxis = normalize(normalize(worldPos + vec3(0.0,0.2,0.0)) - normalize(worldPos));
//Planet is at 0,0,0 so world pos can be used as the normal, and therefore the y axis
vec3 yAxis = normalize(worldPos);
//We can cross the y and x axis to generate a bitangent to use as the z axis
vec3 zAxis = normalize(cross(yAxis, xAxis));
//This is our rotation matrix!
mat3 baseMat = mat3(xAxis, yAxis, zAxis);
//Fill this into our 4x4 matrix
m = mat4(baseMat);
//Transform m by the Radius in the y axis to put it on the surface
mat4 m2 = transformMatrix(mat4(1), vec3(0,radius,0));
m = m * m2;
//Multiply by the MVP to project correctly
m = mvp* m;
//Draw an instance of your object
drawInstance(m);

Coloring rectangle in function of distance to nearest edge produces weird result in diagonals

I'm trying to color a rectangle in ShaderToy/GLSL in function of each pixel's distance to the nearest rectangle edge. However, a weird (darker) result can be seen on its diagonals:
I'm using the rectangle UV coordinates for it, with the following piece of code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = fragCoord/iResolution.xy;
vec2 uvn=abs(uv-0.5)*2.0;
float maxc=max(uvn.y,uvn.x);
vec3 mate=vec3(maxc);
fragColor = vec4(mate.xyz,1);
}
As you can see, the error seems to come from the max(uvn.y,uvn.x); line of code, as it doesn't interpolate smoothly the color values as one would expect. For comparison, those are the images obtained by sampling uvn.y and uvn.x instead of the maximum between those two:
You can play around with the shader at this URL:
https://www.shadertoy.com/view/ldcyWH
The effect that you can see is an optical illusion (Mach banding). You can make this visible by grading the colors. See the answer to the Stack Overflow question Issue getting gradient square in glsl es 2.0, Gamemaker Studio 2.0.
To achieve a better result, you can use a shader which smoothly changes the gradient from a circular (or elliptical) gradient in the middle of the view to a square gradient at the borders of the view:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
// Normalized pixel coordinates (from 0 to 1)
vec2 uv = fragCoord/iResolution.xy;
vec2 uvn=abs(uv-0.5)*2.0;
vec2 distV = uvn;
float maxDist = max(abs(distV.x), abs(distV.y));
float circular = length(distV);
float square = maxDist;
vec3 color1 = vec3(0.0);
vec3 color2 = vec3(1.0);
vec3 mate=mix(color1, color2, mix(circular,square,maxDist));
fragColor = vec4(mate.xyz,1);
}
Preview:

Three js 2d matrix visualization

I am trying to visualize 2D matrices using Three.js. These matrices are the states of the neurons in a neural network. The matrices are not huge (64 x 32). The values in these matrices will change, and I want those new values to be displayed in the visualization.
For the 2d matrix I want a plane of neurons.
I have tried creating a particle system using a plane geometry with as many vertices as neurons in the data matrix.
var width = 32;
var height = 64;
var planeGeometry = new THREE.PlaneGeometry( width, height, width - 1 , height - 1 );
var particlePlane = new THREE.ParticleSystem( planeGeometry, shaderMaterial );
In the fragment shader each particle is given a base texture (a white circle)
gl_FragColor = texture2D(baseTexture, gl_PointCoord);
And then I use a second texture containing the data matrix values (greyscale pixel values) to modify each base texture.
// Sets particle texture to desired color
// vertexPosition is a vec2 in coordinates local to the plane
gl_FragColor = gl_FragColor * texture2D( dataTexture, vertexPosition );
To calculate vertexPosition in the vertex shader I do the following (irrelevant lines omitted):
uniform float width;
uniform float height;
varying vec2 vertexPosition;
void main()
{
vertexPosition = vec2( position.x / width, position.y / height );
}
This is where I'm getting caught up. The vertexPosition does not seem to be mapping properly to the dataTexture pixels. I want a one to one correspondence between particles and pixels.
How do I properly map from the location of particles/vertexes on a plane to equivalent pixel locations in a texture?
I am new to three js, so please feel free to tell me my approach is totally off.
To get texture coordinates, there are ready-to-use attributes and matrices in the GLSL shaders that three.js generates; here is what I would use as a vertex shader:
varying vec2 vertexPosition;
void main() {
vertexPosition = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Then you have the xy position to use in the fragment shader via the varying vertexPosition.
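For the fragment side, a minimal sketch of sampling the data texture with that varying (the uniform names baseTexture and dataTexture come from the question's description; how they are actually declared is an assumption):
uniform sampler2D baseTexture;  // white circle sprite
uniform sampler2D dataTexture;  // 64 x 32 texture holding the neuron values
varying vec2 vertexPosition;    // uv passed on from the vertex shader above
void main() {
    // tint the point sprite by the matrix value stored at this particle's uv
    gl_FragColor = texture2D(baseTexture, gl_PointCoord)
                 * texture2D(dataTexture, vertexPosition);
}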

Perspective correct texturing of trapezoid in OpenGL ES 2.0

I have drawn a textured trapezoid, however the result does not appear as I had intended.
Instead of appearing as a single unbroken quadrilateral, a discontinuity occurs at the diagonal line where its two comprising triangles meet.
This illustration demonstrates the issue:
(Note: the last image is not intended to be a 100% faithful representation, but it should get the point across.)
The trapezoid is being drawn using GL_TRIANGLE_STRIP in OpenGL ES 2.0 (on an iPhone). It's being drawn completely facing the screen, and is not being tilted (i.e. that's not a 3D sketch you're seeing!)
I have come to understand that I need to perform "perspective correction," presumably in my vertex and/or fragment shaders, but I am unclear how to do this.
My code includes some simple Model/View/Projection matrix math, but none of it currently influences my texture coordinate values. Update: The previous statement is incorrect, according to comment by user infact.
Furthermore, I have found this tidbit in the ES 2.0 spec, but do not understand what it means:
The PERSPECTIVE CORRECTION HINT is not supported because OpenGL
ES 2.0 requires that all attributes be perspectively interpolated.
How can I make the texture draw correctly?
Edit: Added code below:
// Vertex shader
attribute vec4 position;
attribute vec2 textureCoordinate;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * position;
texCoord = textureCoordinate;
}
// Fragment shader
uniform sampler2D texture;
varying mediump vec2 texCoord;
void main()
{
gl_FragColor = texture2D(texture, texCoord);
}
// Update and Drawing code (uses GLKit helpers from iOS)
- (void)update
{
float fov = GLKMathDegreesToRadians(65.0f);
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
projectionMatrix = GLKMatrix4MakePerspective(fov, aspect, 0.1f, 50.0f);
viewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, -4.0f); // zoom out
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glUseProgram(shaders[SHADER_DEFAULT]);
GLKMatrix4 modelMatrix = GLKMatrix4MakeScale(0.795, 0.795, 0.795); // arbitrary scale
GLKMatrix4 modelViewMatrix = GLKMatrix4Multiply(viewMatrix, modelMatrix);
GLKMatrix4 modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix, modelViewMatrix);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX], 1, GL_FALSE, modelViewProjectionMatrix.m);
glBindTexture(GL_TEXTURE_2D, textures[TEXTURE_WALLS]);
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);
glVertexAttribPointer(ATTRIB_VERTEX, 3, GL_FLOAT, GL_FALSE, 0, wall.vertexArray);
glVertexAttribPointer(ATTRIB_TEXTURE_COORDINATE, 2, GL_FLOAT, GL_FALSE, 0, wall.texCoords);
glDrawArrays(GL_TRIANGLE_STRIP, 0, wall.vertexCount);
}
(I'm taking a bit of a punt here, because your picture does not show exactly what I would expect from texturing a trapezoid, so perhaps something else is happening in your case - but the general problem is well known)
Textures will not (by default) interpolate correctly across a trapezoid. When the shape is triangulated for drawing, one of the diagonals will be chosen as an edge, and while that edge is straight through the middle of the texture, it is not through the middle of the trapezoid (picture the shape divided along a diagonal - the two triangles are very much not equal).
You need to provide more than a 2D texture coordinate to make this work - you need to provide a 3D (or rather, projective) texture coordinate, and perform the perspective divide in the fragment shader, post-interpolation (or else use a texture lookup function which will do the same).
The following shows how to provide texture coordinates for a trapezoid using old-school GL functions (which are a little easier to read for demonstration purposes). The commented-out lines are the 2d texture coordinates, which I have replaced with projective coordinates to get the correct interpolation.
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,640,0,480,1,1000);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
const float trap_wide = 600;
const float trap_narrow = 300;
const float mid = 320;
glBegin(GL_TRIANGLE_STRIP);
glColor3f(1,1,1);
// glTexCoord4f(0,0,0,1);
glTexCoord4f(0,0,0,trap_wide);
glVertex3f(mid - trap_wide/2,10,-10);
// glTexCoord4f(1,0,0,1);
glTexCoord4f(trap_narrow,0,0,trap_narrow);
glVertex3f(mid - trap_narrow/2,470,-10);
// glTexCoord4f(0,1,0,1);
glTexCoord4f(0,trap_wide,0,trap_wide);
glVertex3f(mid + trap_wide/2,10,-10);
// glTexCoord4f(1,1,0,1);
glTexCoord4f(trap_narrow,trap_narrow,0,trap_narrow);
glVertex3f(mid + trap_narrow/2,470,-10);
glEnd();
The third coordinate is unused here as we're just using a 2D texture. The fourth coordinate will divide the other two after interpolation, providing the projection. Obviously if you divide it through at the vertices, you'll see you get the original texture coordinates.
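To make the "divide at the vertices" check concrete, here are the same four q-coordinates written out with the per-vertex division in the comments (a sketch; the values are copied from the listing above):
// (s, t, r, q) as passed above; dividing s and t by q at each vertex
// recovers the plain 2D texture coordinates of the commented-out lines.
vec4 tc0 = vec4(  0.0,   0.0, 0.0, 600.0);  //   0/600,   0/600  ->  (0, 0)
vec4 tc1 = vec4(300.0,   0.0, 0.0, 300.0);  // 300/300,   0/300  ->  (1, 0)
vec4 tc2 = vec4(  0.0, 600.0, 0.0, 600.0);  //   0/600, 600/600  ->  (0, 1)
vec4 tc3 = vec4(300.0, 300.0, 0.0, 300.0);  // 300/300, 300/300  ->  (1, 1)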
Here's what the two renderings look like:
If your trapezoid is actually the result of transforming a quad, it might be easier/better to just draw that quad using GL, rather than transforming it in software and feeding 2D shapes to GL...
What you are trying to do here is skewed texturing. A sample fragment shader is as follows:
precision mediump float;
varying vec4 vtexCoords;
uniform sampler2D sampler;
void main()
{
gl_FragColor = texture2DProj(sampler,vtexCoords);
}
Two things which should look different are:
1) We are using varying vec4 vtexCoords;. Texture coordinates are 4-dimensional.
2) texture2DProj() is used instead of texture2D().
Based on the lengths of the small and large sides of your trapezium you will assign texture coordinates. The following URL might help:
http://www.xyzw.us/~cass/qcoord/
The accepted answer gives the correct solution and explanation but for those looking for a bit more help on the OpenGL (ES) 2.0 pipeline...
const GLfloat L = 2.0;
const GLfloat Z = -2.0;
const GLfloat W0 = 0.01;
const GLfloat W1 = 0.10;
/** Trapezoid shape as two triangles. */
static const GLKVector3 VERTEX_DATA[] = {
{{-W0, 0, Z}},
{{+W0, 0, Z}},
{{-W1, L, Z}},
{{+W0, 0, Z}},
{{+W1, L, Z}},
{{-W1, L, Z}},
};
/** Add a 3rd coord to your texture data. This is the perspective divisor needed in frag shader */
static const GLKVector3 TEXTURE_DATA[] = {
{{0, 0, 0}},
{{W0, 0, W0}},
{{0, W1, W1}},
{{W0, 0, W0}},
{{W1, W1, W1}},
{{0, W1, W1}},
};
////////////////////////////////////////////////////////////////////////////////////
// frag.glsl
varying vec3 v_texPos;
uniform sampler2D u_texture;
void main(void)
{
// Divide the 2D texture coords by the third projection divisor
gl_FragColor = texture2D(u_texture, v_texPos.st / v_texPos.p);
}
Alternatively, in the shader, as per @maverick9888's answer, you can use texture2DProj(), though for iOS / OpenGL ES 2 it still only supports a vec3 input...
void main(void)
{
gl_FragColor = texture2DProj(u_texture, v_texPos);
}
I haven't really benchmarked it properly but for my very simple case (a 1d texture really) the division version seems a bit snappier.

GLSL: simulating 3D texture with 2D texture

I came up with some code that simulates 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64 and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable
varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;
vec4 flat_texture3D()
{
vec3 p = pos;
vec2 inimg = p.xy;
int d = int(p.z*128.0);
float ix = (d % 8);
float iy = (d / 8);
vec2 oc = inimg + vec2(ix, iy);
oc *= 0.125;
return texture2D(noisef, oc);
}
void main (void)
{
vec4 noisevec = flat_texture3D();
gl_FragColor = noisevec;
}
The tiling logic seems to work ok and there is only one problem with this code. It looks like this:
There are strange 1 to 2 pixel wide streaks between the layers of voxels.
The streaks appear just at the border when d changes.
I've been working on this for 2 days now and still without any idea of what's going on here.
This looks like a texture filter issue. Think about it: when you come close to the border, the bilinear filter will consider the neighboring texel, in your case: from another "depth layer".
To avoid this, you can clamp the texture coords so that they are never outside the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). But you should be aware that the problem will become worse when using mipmapping. You should also be aware that currently you are not able to filter in the z direction, as a real 3D texture would. You could simulate this manually in the shader, of course.
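A sketch of that per-tile clamp, written as a drop-in replacement for flat_texture3D() above (it reuses the noisef uniform, the pos varying and the question's 8 x 8 layout of 128 x 128 tiles in a 1024 x 1024 atlas; the half-texel constant is the only new value):
vec4 flat_texture3D_clamped()
{
    const float tiles = 8.0;              // 8 x 8 tiles in the atlas
    const float halfTexel = 0.5 / 1024.0; // half an atlas texel
    vec3 p = pos;
    float d = floor(p.z * 128.0);         // same layer indexing as in the question
    vec2 tile = vec2(mod(d, 8.0), floor(d / 8.0));
    // clamp the in-tile coordinate to the outermost texel centers of the tile,
    // so the bilinear filter never reads from a neighbouring depth layer
    vec2 inTile = clamp(p.xy / tiles, vec2(halfTexel), vec2(1.0 / tiles - halfTexel));
    return texture2D(noisef, tile / tiles + inTile);
}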
But really: why not just use 3D textures? The hardware can do all this for you, with much less overhead...
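For comparison, a minimal sketch of the real 3D texture path (desktop GL with sampler3D; the uniform name noise3D is an assumption):
uniform sampler3D noise3D;  // an actual 3D texture instead of the tiled atlas
varying vec3 pos;
void main(void)
{
    // the hardware filters in all three directions, including across depth layers
    gl_FragColor = texture3D(noise3D, pos);
}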
