OpenGL ES 2D turning NDC into screen coordinates - matrix

I'm using OpenGL ES to draw only 2D shapes, and I have to create my own matrices and pass them to the shader.
I'm using only 3x3 matrices and 2 component vectors. I know that normally for 2D you'd still have 4x4 matrices and 3 component vectors with Z set to 1. But I'm wondering if I can keep it 3x3.
I've got all the matrices working; translation, rotation and scale all work. What I'm missing is a "projection" matrix: all my positions range from -1 to 1, and my shapes are deformed because my phone's aspect ratio isn't 1. I need a matrix that maps all my coordinates to the screen with the correct ratio.
Normally you'd use an orthographic projection matrix to do this mapping, but those are all 4x4. How can I do this mapping with a 3x3 matrix? And if I can't, is there any way to keep the matrices 3x3 and map it in some other way? (I could interpolate the coordinates without a matrix, but that wouldn't fix the ratio problem.)
To clarify, here is my vertex shader; as you can see, I'm using a 3x3 transformation matrix and 2-component vectors:
uniform mat3 u_transform_mat;
attribute vec2 a_vert_pos;
attribute vec2 a_vert_uv;
varying vec2 v_vert_uv;
void main()
{
    v_vert_uv = a_vert_uv;
    gl_Position = vec4(u_transform_mat * vec3(a_vert_pos, 1.0), 1.0);
}

All your "projection" matrix really needs to do in this case is scale.
Say, for example, you have landscape screen dimensions and your aspect ratio is 1.5. With the transformation you have now, the NDC range of [-1, 1] will be stretched out to fit the screen width, meaning that it is scaled by a factor of 1.5 in the horizontal direction relative to the vertical direction.
To keep the proportions intact, you want to map the range [-1.5, 1.5] in the x-direction and the range [-1, 1] in the y-direction to the screen. Or, in the more general case, [-aspect, aspect] in the x-direction.
To map the [-aspect, aspect] range to the NDC range of [-1, 1], you need to scale the x-coordinates by (1 / aspect). Your "projection" matrix is therefore a non-uniform scaling matrix that only scales in the x-direction:
        [ 1.0f / aspectRatio   0.0f   0.0f ]
    P = [ 0.0f                 1.0f   0.0f ]
        [ 0.0f                 0.0f   1.0f ]
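For illustration, here is a minimal sketch of how such a matrix could be built and uploaded from the application side, assuming a GL ES 2.0 context. The uniform name u_projection_mat is invented here; the vertex shader would then compute u_projection_mat * u_transform_mat * vec3(a_vert_pos, 1.0).

#include <GLES2/gl2.h>

// Sketch: build the aspect-correcting scale matrix and upload it.
// "u_projection_mat" is an illustrative uniform name, not from the question.
void setProjection(GLuint program, int screenWidth, int screenHeight)
{
    GLfloat aspect = (GLfloat)screenWidth / (GLfloat)screenHeight;

    // Column-major layout, as glUniformMatrix3fv expects; for this diagonal
    // matrix, row- vs column-major makes no difference.
    GLfloat projection[9] = {
        1.0f / aspect, 0.0f, 0.0f,
        0.0f,          1.0f, 0.0f,
        0.0f,          0.0f, 1.0f
    };

    GLint location = glGetUniformLocation(program, "u_projection_mat");
    glUniformMatrix3fv(location, 1, GL_FALSE, projection);
}

Alternatively, the scale can simply be multiplied into u_transform_mat on the CPU, so the shader stays unchanged.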

Related

Set depth texture for Z-testing in OpenGL ES 2.0 or 3.0

Having a 16-bit uint texture in my C++ code, I would like to use it for z-testing in an OpenGL ES 3.0 app. How can I achieve this?
To give some context, I am making an AR app where virtual objects can be occluded by real objects. The depth texture of the real environment is generated, but I can't figure out how to apply it.
In my app, I first use glTexImage2D to render the backdrop image from the camera feed, then I draw some virtual objects. I would like the objects to be transparent based on a depth texture. Ideally, the occlusion testing needs to be not binary but gradual, so that I can alpha blend the objects with the background near the occlusion edges.
I can pass and read the depth texture in the fragment shader, but I'm not sure how to use it for z-testing instead of rendering.
Let's assume you have a depth texture uniform sampler2D u_depthmap, and that the internal format of the depth texture is a floating-point format.
To read the texel that the current fragment falls on, you have to know the size of the viewport (uniform vec2 u_viewport_size). gl_FragCoord contains the window-relative coordinates (x, y, z, 1/w) of the fragment. So the texture coordinate for the depth map is calculated by:
vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
The depth from the depth texture u_depthmap is given in the range [0.0, 1.0], because of the internal floating-point format. The depth of the fragment is contained in gl_FragCoord.z, in the range [0.0, 1.0], too.
That means that the depth of the map and the depth of the fragment can be calculated as follows:
uniform sampler2D u_depthmap;
uniform vec2 u_viewport_size;

void main()
{
    vec2 map_uv = gl_FragCoord.xy / u_viewport_size;
    float map_depth = texture(u_depthmap, map_uv).x;
    float frag_depth = gl_FragCoord.z;
    .....
}
Note that map_depth and frag_depth are both in the range [0.0, 1.0]. If they were both generated with the same projection (in particular the same near and far planes), then they are comparable. This means you have to ensure that the shader generates the same depth values as the ones in the depth map for the same point in the world. If this is not the case, then you have to linearize the depth values and calculate the view-space Z-coordinate.
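As a minimal sketch of that linearization, assuming a standard perspective projection (the function name and parameters are illustrative; the same expression can be transcribed into the fragment shader):

// Converts a [0.0, 1.0] depth-buffer value back to the view-space distance
// in front of the camera, assuming a standard perspective projection with
// the given near and far planes.
float linearizeDepth(float depth, float zNear, float zFar)
{
    float zNdc = 2.0f * depth - 1.0f;  // [0, 1] -> [-1, 1]
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}

Applying this to both map_depth and frag_depth (each with the near and far planes of the projection that produced it) gives two view-space distances that can be compared directly, or used to drive the gradual blending the question asks for.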

Convert Cubemap coordinates to equivalents in Equirectangular

I have a set of coordinates of a 6-image Cubemap (Front, Back, Left, Right, Top, Bottom) as follows:
[ [160, 314], Front; [253, 231], Front; [345, 273], Left; [347, 92], Bottom; ... ]
Each image is 500x500 px, with [0, 0] being the top-left corner.
I want to convert these coordinates to their equirectangular equivalents, for a 2500x1250 px image. The layout is like this:
I don't need to convert the whole image, just the set of coordinates. Is there any straightforward conversion for a specific pixel?
1. Convert your image + 2D coordinates to a 3D normalized vector
The point (0,0,0) must be the center of your cube map for this to work as intended. So basically you need to add the U,V direction vectors, scaled by your coordinates, to the 3D position of the texture point (0,0). The direction vectors are just unit vectors where each axis has 3 options {-1, 0, +1} and only one axis coordinate is non-zero for each vector. Each side of the cube map has one combination ... which one depends on your conventions, which we do not know as you did not share any specifics.
2. Use a Cartesian to spherical coordinate system transformation
You do not need the radius, just the two angles ...
3. Convert the spherical angles to your 2D texture coordinates
This step depends on your 2D texture geometry. The simplest is a rectangular texture (I think that is what you mean by equirectangular), but there are other mappings out there with specific features, and each requires a different conversion. Here are a few examples:
Bump-map a sphere with a texture map
How to do a shader to convert to azimuthal_equidistant
For the rectangular texture you just scale the spherical angles to the texture resolution...
U = lon * Usize/(2*Pi)
V = (lat+(Pi/2)) * Vsize/Pi
plus/minus some orientation signs to match your coordinate systems.
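Putting the three steps together, here is a rough sketch for a single face. The axis convention chosen for the Front face (Front = +X, face U = +Y, face V = +Z) is an assumption for illustration only; every face needs its own center and U,V vectors matching your actual layout.

#include <cmath>

// Maps a pixel of one 500x500 cube-map face to a pixel in a 2500x1250
// equirectangular image. The Front-face axis convention used here is assumed;
// adjust the signs/axes per face to match your data.
void frontFacePixelToEquirect(double px, double py,                // face pixel, (0, 0) = top-left
                              double faceSize,                     // e.g. 500
                              double outWidth, double outHeight,   // e.g. 2500 x 1250
                              double& outU, double& outV)
{
    const double pi = 3.14159265358979323846;

    // 1. face pixel -> 3D direction from the cube center
    double s = 2.0 * (px + 0.5) / faceSize - 1.0;   // [-1, +1] across the face
    double t = 2.0 * (py + 0.5) / faceSize - 1.0;
    double x = 1.0;   // assumed face center direction
    double y = s;     // assumed face U direction
    double z = t;     // assumed face V direction

    // 2. Cartesian -> spherical angles (the radius is not needed)
    double lon = std::atan2(y, x);                          // [-pi, +pi]
    double lat = std::atan2(z, std::sqrt(x * x + y * y));   // [-pi/2, +pi/2]

    // 3. spherical angles -> equirectangular pixel coordinates
    // (the +pi shift on lon is one of the "orientation signs" mentioned above)
    outU = (lon + pi) * outWidth / (2.0 * pi);
    outV = (lat + pi / 2.0) * outHeight / pi;
}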
btw. just found this (possibly duplicate QA):
GLSL Shader to convert six textures to Equirectangular projection

Map a texture onto a hyperbolic triangle

I want to map a texture in the form of a lower-right Euclidean triangle to a hyperbolic triangle on the Poincaré disk, which looks like this:
Here's the texture (the top-left triangle of the texture is transparent and unused). You might recognise this as part of Escher's Circle Limit I:
And this is what my polygon looks like (it's centred at the origin, which means that two edges are straight lines; however, in general all three edges will be curves, as in the first picture):
The centre of the polygon is the incentre of the Euclidean triangle formed by its vertices, and I'm UV mapping the texture using its incentre, dividing it into the same number of faces as the polygon and mapping each face onto the corresponding polygon face. However, the result looks like this:
If anybody thinks this is solvable using UV mapping I'd be happy to provide some example code, however I'm beginning to think this might not be possible and I'll have to write my own shader functions.
UV mapping is a method of mapping a texture onto an OpenGL polygon. The texture is always sampled in Euclidean space using xy coordinates in the range of (0, 1).
To overlay your texture onto a triangle on a Poincaré disc, keep hold of the Euclidean coordinates in your vertices, and use these to sample the texture.
The following code is valid for OpenGL ES 3.0.
Vertex shader:
#version 300 es
// these should go from 0.0 to 1.0
in vec2 euclideanCoords;
in vec2 hyperbolicCoords;
out vec2 uv;

void main() {
    // set z = 0.0 and w = 1.0
    gl_Position = vec4(hyperbolicCoords, 0.0, 1.0);
    uv = euclideanCoords;
}
Fragment shader:
#version 300 es
precision mediump float;

uniform sampler2D escherImage;
in vec2 uv;
out vec4 colour;

void main() {
    colour = texture(escherImage, uv);
}
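For completeness, here is a sketch of how the two attribute streams could be fed from the application side, assuming an interleaved buffer of (euclidean.xy, hyperbolic.xy) per vertex; the function and layout are illustrative, and only the attribute names come from the shaders above.

#include <GLES3/gl3.h>

// Uploads interleaved vertex data and binds the two vec2 attributes used by
// the vertex shader above. Assumes 4 floats per vertex: euclidean.xy, hyperbolic.xy.
void setupTriangleAttributes(GLuint program, GLuint vbo, GLsizei vertexCount,
                             const GLfloat* interleavedData)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 4 * sizeof(GLfloat),
                 interleavedData, GL_STATIC_DRAW);

    GLint euclideanLoc  = glGetAttribLocation(program, "euclideanCoords");
    GLint hyperbolicLoc = glGetAttribLocation(program, "hyperbolicCoords");

    GLsizei stride = 4 * sizeof(GLfloat);
    glVertexAttribPointer((GLuint)euclideanLoc, 2, GL_FLOAT, GL_FALSE, stride,
                          (const void*)0);
    glVertexAttribPointer((GLuint)hyperbolicLoc, 2, GL_FLOAT, GL_FALSE, stride,
                          (const void*)(2 * sizeof(GLfloat)));
    glEnableVertexAttribArray((GLuint)euclideanLoc);
    glEnableVertexAttribArray((GLuint)hyperbolicLoc);
}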

Geometry Shader Quad Post Processing

Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append 4 new verts to the tri-stream, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
The quad generates and displays correctly on screen. However:
The problem I am facing is that the mapping of the uv coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly, unless the quad is facing towards the camera. The closer the quad normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the uv's not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points please ask and I will try my best :)
Thanks in advance.

Coloring a plane based on texture pixels

Using a shader I'm trying to color a plane so it replicates the pixels on a texture. The texture is 32x32 pixels and the plane is also sized 32x32 in space coordinates.
Does anyone know how I would inspect the first pixel on the texture, then use it to color the first square (1x1) on the plane?
Generated texture example: (First pixel is red on purpose)
This code using a vec2 with coordinates (0,0) doesn't work as I expected. I assumed the color at (0,0) would be red but it's not, it's green:
vec4 color = texture2D(texture, vec2(0, 0));
I guess there's something that I'm missing, or not understanding, about texture2D, as (0,0) doesn't appear to be the ending pixel either.
If anyone could help me out, it would be greatly appreciated. Thanks.
EDIT:
Thanks for the comments and answers! Using this code, it's working now:
// Flip the texture vertically
vec3 verpos2 = verpos.xyz * vec3(1.0, 1.0, -1.0);
// Calculate the pixel coordinates the fragment belongs to
float pixX = floor(verpos2.x - floor(verpos2.x / 32.0) * 32.0);
float pixZ = floor(verpos2.z - floor(verpos2.z / 32.0) * 32.0);
float texX = (pixX + 0.5) / 32.0;
float texZ = (pixZ + 0.5) / 32.0;
gl_FragColor = texture2D(texture, vec2(texX, texZ));
That said, I'm having an issue with jagged lines on the edges of each "block". Looks to me like my math is off and it's confused about what color the sides should be, because I didn't have this problem when using only vertex colors. Can anyone see where I've gone wrong or how it could be done better?
Thanks again!
Yes... as Ben Pious mentioned in a comment, remember that WebGL displays 0,0 in lower left.
Also, for indexing into your textures, try to sample from the "middle" of each pixel. On a 32x32 source texture, to get pixel (0,0) you'd want:
texture2D(theSampler, vec2(0.5/32.0, 0.5/32.0));
Or more generally,
texture2D(theSampler, vec2((xPixelIndex + 0.5) / width, (yPixelIndex + 0.5) / height));
This is only if you're explicitly accessing texture pixels; if you're getting values interpolated and passed through from the vertex shader (say a (-1,-1) to (1,1) square, and pass varying vec2((x+1)/2,(y+1)/2) to the fragment shader), this "middle of each pixel" is reflected in your varying value.
But it's probably just the Y-up like Ben says. :)
