Using a shader I'm trying to color a plane so it replicates the pixels on a texture. The texture is 32x32 pixels and the plane is also sized 32x32 in space coordinates.
Does anyone know how I would inspect the first pixel on the texture, then use it to color the first square (1x1) on the plane?
Generated texture example: (First pixel is red on purpose)
This code using a vec2 with coordinates (0,0) doesn't work as I expected. I assumed the color at (0,0) would be red but it's not, it's green:
vec4 color = texture2D(texture, vec2(0, 0));
I guess there's something I'm missing or not understanding about texture2D, as (0,0) doesn't appear to be the last pixel either.
If anyone could help me out, it would be greatly appreciated. Thanks.
EDIT:
Thanks for the comments and answers! Using this code, it's working now:
// Flip the texture vertically
vec3 verpos2 = verpos.xyz * vec3(1.0, 1.0, -1.0);
// Calculate the pixel coordinates the fragment belongs to
float pixX = floor(verpos2.x - floor(verpos2.x / 32.0) * 32.0);
float pixZ = floor(verpos2.z - floor(verpos2.z / 32.0) * 32.0);
float texX = (pixX + 0.5) / 32.0;
float texZ = (pixZ + 0.5) / 32.0;
gl_FragColor = texture2D(texture, vec2(texX, texZ));
That said, I'm having an issue with jagged lines on the edges of each "block". Looks to me like my math is off and it's confused about what color the sides should be, because I didn't have this problem when using only vertex colors. Can anyone see where I've gone wrong or how it could be done better?
Thanks again!
Yes... as Ben Pious mentioned in a comment, remember that WebGL puts (0,0) at the lower left.
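If your source image assumes a top-left origin, one way to compensate is to flip the V coordinate when sampling, e.g. (with u and v as your usual 0..1 coordinates):
texture2D(theSampler, vec2(u, 1.0 - v));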
Also, for indexing into your textures, try to sample from the "middle" of each pixel. On a 32x32 source texture, to get pixel (0,0) you'd want:
texture2D(theSampler, vec2(0.5/32.0, 0.5/32.0));
Or more generally,
texture2D(theSampler, vec2((xPixelIndex + 0.5) / width, (yPixelIndex + 0.5) / height));
This only applies if you're explicitly accessing texture pixels. If you're getting values interpolated and passed through from the vertex shader (say you draw a (-1,-1) to (1,1) square and pass varying vec2((x+1)/2, (y+1)/2) to the fragment shader), this "middle of each pixel" offset is already reflected in your varying value.
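For example, a minimal sketch of that pass-through vertex shader (attribute and varying names are illustrative):
attribute vec2 a_position;  // quad corners from (-1,-1) to (1,1)
varying vec2 v_uv;
void main() {
    // maps -1..1 to 0..1; interpolation at fragment centres gives the "middle of each pixel" offset automatically
    v_uv = (a_position + 1.0) / 2.0;
    gl_Position = vec4(a_position, 0.0, 1.0);
}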
But it's probably just the Y-up like Ben says. :)
In a scenario where vertices are displaced in the vertex shader, how to retrieve their transformed positions in WebGL / Three.js?
Other questions here suggest writing the positions to a texture and then reading the pixels, but the resulting values don't seem to be correct.
In the example below the position is passed to the fragment shader without any transformations:
// vertex shader
varying vec4 vOut;
void main() {
    gl_Position = vec4(position, 1.0);
    vOut = vec4(position, 1.0);
}
// fragment shader
varying vec4 vOut;
void main() {
    gl_FragColor = vOut;
}
Then reading the output texture, I would expect pixel[0].r to be identical to positions[0].x, but that is not the case.
Here is a jsfiddle showing the problem:
https://jsfiddle.net/brunoimbrizi/m0z8v25d/2/
What am I missing?
Solved. Quite a few things were wrong with the jsfiddle mentioned in the question.
width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
The positions in the vertex shader need a nudge of 1.0 / width.
The vertex shader needs to know about width and height; they can be passed in as uniforms.
Each vertex needs an attribute with its index so it can be correctly mapped.
Each position should be one pixel in the resulting texture.
The resulting texture should be drawn as gl.POINTS with gl_PointSize = 1.0.
Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/
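As a minimal sketch, the vertex shader described by the points above might look like this (uWidth, uHeight and aIndex are illustrative uniform/attribute names; position is the built-in Three.js attribute, as in the question):
uniform float uWidth;    // output texture width
uniform float uHeight;   // output texture height
attribute float aIndex;  // per-vertex index, 0 .. vertexCount - 1
varying vec4 vOut;
void main() {
    // which pixel of the output texture this vertex maps to
    float x = mod(aIndex, uWidth);
    float y = floor(aIndex / uWidth);
    // centre of that pixel in clip space (this is where the 1.0 / width nudge comes from)
    vec2 clip = (vec2(x, y) + 0.5) / vec2(uWidth, uHeight) * 2.0 - 1.0;
    gl_PointSize = 1.0;
    gl_Position = vec4(clip, 0.0, 1.0);
    vOut = vec4(position, 1.0);  // this value ends up in the pixel
}
The geometry is then drawn as gl.POINTS into the render target, so each point writes exactly one pixel.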
You're not writing the vertices out correctly.
https://jsfiddle.net/ogawzpxL/
First off, you're clipping the geometry, so your vertices actually end up outside the view, and you see the middle of the quad without any vertices.
You can use the uv attribute to render the entire quad in the view.
gl_Position = vec4( uv * 2. - 1. , 0. ,1.);
Everything in the buffer represents some point on the quad. What seems to be tricky is that when you render, the pixel will sample right next to your vertex. In the fiddle I've applied an offset in world space by the amount it would be in pixel space, and it didn't really work.
The reason why it seems to work with points is that this is all probably wrong :) If you want to transform only the vertices, then you need to store them properly in the texture. You can use points for this, but ideally they wouldn't be spaced out so much. In your scenario, they would fill the first couple of rows of the texture (since it's much larger than it needs to be).
You might start running into problems as soon as you try to apply this to something other than PlaneGeometry. In which case this problem has to be broken down.
I want to project an image into a cylindrical panorama. But first I need to get the pixel (or the color of the pixel) I'm going to draw, then do some math in shaders with polar coordinates to get the new position of the pixel, and then finally draw the pixel.
This way I'll be able to change the shape of the image from a polygon to whatever I want.
But I cannot find anything about this method (get the pixel first, then do the math, then get the new position for the pixel).
Is there something like this, please?
OpenGL historically doesn't work that way around; it forward renders — from geometry to pixels — rather than backwards — from pixel to geometry.
The most natural way to achieve what you want to do is to calculate texture coordinates based on geometry, then render as usual. For a cylindrical mapping:
establish a mapping from cylindrical coordinates to texture coordinates;
with your actual geometry, imagine it placed within the cylinder, then from each vertex proceed along the normal until you intersect the cylinder. Use that location to determine the texture coordinate for the original vertex.
The latter is most easily and conveniently done within your vertex shader; it's a simple ray intersection test, with the attributes therefore being only vertex location and vertex normal, and the texture location being a varying that is calculated purely from the location and normal.
Extemporaneously, something like:
// get intersection as if ray hits the circular region of the cylinder,
// i.e. where |(position + n*normal).xy| = 1
float planarLengthOfPosition = length(position.xy);
float planarLengthOfNormal = length(normal.xy);
float planarDistanceToPerimeter = 1.0 - planarLengthOfPosition;
vec3 circularIntersection = position +
(planarDistanceToPerimeter/planarLengthOfNormal)*normal;
// get intersection as if ray hits the bottom or top of the cylinder,
// i.e. where |(position + n*normal).z| = 1
float linearLengthOfPosition = abs(position.z);
float linearLengthOfNormal = abs(normal.z);
float linearDistanceToEdge = 1.0 - linearLengthOfPosition;
vec3 endIntersection = position +
(linearDistanceToEdge/linearLengthOfNormal)*normal;
// pick whichever of those was lesser
vec3 cylindricalIntersection = mix(circularIntersection,
endIntersection,
step(linearDistanceToEdge,
planarDistanceToPerimeter));
// ... do something to map cylindrical intersection to texture coordinates ...
textureCoordinateVarying =
coordinateFromCylindricalPosition(cylindricalIntersection);
A common implementation of coordinateFromCylindricalPosition might simply be return vec2(atan(cylindricalIntersection.y, cylindricalIntersection.x) / 6.28318530717959, cylindricalIntersection.z * 0.5);.
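Written out as a GLSL function, that suggestion would look something like this (a sketch only; the exact mapping depends on how the texture should wrap the cylinder):
vec2 coordinateFromCylindricalPosition(vec3 p)
{
    // angle around the axis -> U, height along the axis -> V
    return vec2(atan(p.y, p.x) / 6.28318530717959, p.z * 0.5);
}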
I'm trying to create a shader that converts FFT data (passed as a texture) into a bar graph and then maps it onto a circle in the center of the screen. Here is an image of what I'm trying to achieve: link to image
I experimented a bit with Shadertoy and came up with this shader: link to shadertoy
With all the complex shaders I saw on Shadertoy, I thought this should be doable with maths somehow.
Can anybody here give me a hint how to do it?
It’s very doable — you just have to think about the ranges you’re sampling in. In your Shadertoy example, you have the following:
float r = length(uv);
float t = atan(uv.y, uv.x);
fragColor = vec4(texture2D(iChannel0, vec2(r, 0.1)));
So r is going to vary roughly from 0…1 (extending past 1 in the corners), and t, the angle of the uv vector, is going to vary from -π to π.
Currently, you're sampling your texture at (r, 0.1); in other words, every pixel of your output comes from the row 10% down your source texture, with the horizontal position determined by r. The angle you're calculating for t isn't being used at all. What you want is for changes in the angle (t) to move across your texture in the U direction, and for changes in the distance from center (r) to move across the texture in the V direction. In other words, this:
float r = length(uv);
float t = atan(uv.y, uv.x) / 6.283; // scale by 1/(2*pi) so a full turn spans a range of 1.0
fragColor = vec4(texture2D(iChannel0, vec2(t, r)));
For the source texture you provided above, you may find your image appearing “inside out”, in which case you can subtract r from 1.0 to flip it.
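For example, flipping the radial direction of the lookup:
fragColor = vec4(texture2D(iChannel0, vec2(t, 1.0 - r)));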
I'm using OpenGL ES to draw only 2D shapes, and I have to create my own matrices and pass them to the shader.
I'm using only 3x3 matrices and 2 component vectors. I know that normally for 2D you'd still have 4x4 matrices and 3 component vectors with Z set to 1. But I'm wondering if I can keep it 3x3.
I've got all the matrices working: translation, rotation and scale all work. What I'm missing is a "projection" matrix; all my positions range from -1 to 1 and my shapes are deformed because my phone's aspect ratio isn't 1. I need a matrix that maps all my coordinates to the screen with the correct ratio.
Normally you'd use an orthographic projection matrix to do this mapping but those are all 4x4. How can I do this mapping with a 3x3 matrix? And if I can't is there any way to keep the matrices 3x3 and map it in some other way? (I could interpolate the coordinates without a matrix but that wouldn't fix the ratio problem)
To clarify, here is my vertex shader, as you can see I'm using a 3x3 transformation matrix and 2 component vectors:
uniform mat3 u_transform_mat;
attribute vec2 a_vert_pos;
attribute vec2 a_vert_uv;
varying vec2 v_vert_uv;
void main()
{
    v_vert_uv = a_vert_uv;
    gl_Position = vec4(u_transform_mat * vec3(a_vert_pos, 1.0), 1.0);
}
All your "projection" matrix really needs to do in this case is scale.
Say for example you have landscape screen dimensions, and your aspect ratio is 1.5. With the transformation you have now, the NDC range of [-1, 1] will be stretched out to fit the screen width, meaning it is scaled by a factor of 1.5 in the horizontal direction relative to the vertical direction.
To keep the proportions intact, what you want is to map the range [-1.5, 1.5] in the x-direction and the range [-1, 1] in the y-direction to the screen. Or, in the more general case, [-aspect, aspect] in the x-direction.
To map the [-aspect, aspect] range to the NDC range of [-1, 1], you need to scale the x-coordinates by (1 / aspect). Your "projection" matrix is therefore a non-uniform scaling matrix that only scales in the x-direction:
        [ 1.0f / aspectRatio   0.0f   0.0f ]
    P = [ 0.0f                 1.0f   0.0f ]
        [ 0.0f                 0.0f   1.0f ]
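As a sketch, assuming the aspect ratio is passed in as a uniform (u_aspect is an illustrative name), the vertex shader from the question could apply it like this:
uniform mat3 u_transform_mat;
uniform float u_aspect;  // screen width / height
attribute vec2 a_vert_pos;
attribute vec2 a_vert_uv;
varying vec2 v_vert_uv;
void main()
{
    // non-uniform scale that squeezes x by 1/aspect, as described above
    mat3 projection = mat3(
        1.0 / u_aspect, 0.0, 0.0,
        0.0,            1.0, 0.0,
        0.0,            0.0, 1.0);
    v_vert_uv = a_vert_uv;
    gl_Position = vec4(projection * u_transform_mat * vec3(a_vert_pos, 1.0), 1.0);
}
In practice you would usually fold this scale into u_transform_mat on the CPU instead of multiplying by a separate matrix per vertex.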
Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world-space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append 4 new vertices to the tri-stream, one in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
The quad generates and displays correctly on screen. However, the mapping of the UV coordinates fails to align with the image on the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image on the back buffer perfectly, unless the quad is facing towards the camera. The closer the quad's normal matches the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (in my opinion) in the UVs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points, please ask and I will try my best :)
Thanks in advance.