GLSL: simulating 3D texture with 2D texture - opengl-es

I came up with some code that simulates a 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64 and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable
varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;
vec4 flat_texture3D()
{
    vec3 p = pos;
    // x/y position within the current tile
    vec2 inimg = p.xy;
    // depth -> index of the tile in the 8x8 atlas
    int d = int(p.z*128.0);
    float ix = (d % 8);
    float iy = (d / 8);
    // shift into the tile and scale down to atlas coordinates
    vec2 oc = inimg + vec2(ix, iy);
    oc *= 0.125;
    return texture2D(noisef, oc);
}
void main (void)
{
vec4 noisevec = flat_texture3D();
gl_FragColor = noisevec;
}
The tiling logic seems to work, and there is only one problem with this code: strange 1 to 2 pixel wide streaks appear between the layers of voxels, right at the borders where d changes.
I've been working on this for two days now and still have no idea what's going on here.

This looks like a texture filtering issue. Think about it: when you come close to the tile border, the bilinear filter will consider the neighboring texels, which in your case come from another depth layer.
To avoid this, you can clamp the texture coordinates so that they never leave the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). Be aware, though, that the problem will get worse when using mipmapping. Also note that this scheme cannot filter in the z direction the way a real 3D texture would; you could simulate that manually in the shader, of course.
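A minimal sketch of that per-tile clamp, keeping the question's layout (an 8x8 atlas of 128x128 tiles); the half-texel margin is expressed in tile-local units:
vec4 flat_texture3D_clamped()
{
    vec3 p = pos;
    int d = int(p.z*128.0);
    float ix = float(d % 8);
    float iy = float(d / 8);
    // keep the sample between the centers of the tile's outermost texels,
    // so the bilinear filter never mixes in a neighboring tile
    float halfTexel = 0.5 / 128.0;
    vec2 inimg = clamp(p.xy, halfTexel, 1.0 - halfTexel);
    vec2 oc = (inimg + vec2(ix, iy)) * 0.125;
    return texture2D(noisef, oc);
}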
But really: why not just use 3D textures? The hardware can do all of this for you, with much less overhead...
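For comparison, with a real 3D texture (core in desktop GL and in ES 3.0, or via the OES_texture_3D extension on ES 2.0) the whole lookup collapses to a single fetch; the sampler name is an assumption, and the function name follows the same pre-1.30 convention used above:
varying vec3 pos;
uniform sampler3D noise3d; // the volume uploaded as an actual 3D texture

void main (void)
{
    // the hardware filters in x, y and z, including between slices
    gl_FragColor = texture3D(noise3d, pos);
}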

Related

How do you increase the space between pixels in a fragment shader?

I'm currently working on a shader for a very mundane effect I'd like to achieve. It's a little bit hard to explain, but the basic gist is that I'm trying to "pull apart" the pixels of a pixel art image.
You can see my current progress, however minor, at this jsfiddle:
https://jsfiddle.net/roomyrooms/Lh46z2gw/85/
I can distort the image easily, of course, and make it stretch the further away from the center it is. But this distorts and warps it smoothly, and all the pixels remain connected (whether they're squished/smeared/etc.)
I would like to get an effect where the space between the pixels is stretched rather than the pixels themselves stretching. Sort of like if you were to swipe sand across a table. The grains of sand stay the same size, they just get further apart.
Any ideas are welcome! Thanks. Here's what I've got code-wise so far:
var fragmentShader = `
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;
uniform float time;
vec2 mapCoord( vec2 coord )
{
coord *= inputSize.xy;
coord += inputSize.zw;
return coord;
}
vec2 unmapCoord( vec2 coord )
{
coord -= inputSize.zw;
coord /= inputSize.xy;
return coord;
}
void main(void)
{
vec2 coord = mapCoord(vTextureCoord);
float dist = distance(coord.x, inputSize.x/2.);
coord.x += dist/4.;
coord = unmapCoord(coord);
gl_FragColor = texture2D(uSampler, coord);
}`
EDIT: Added an illustration of the effect I'm trying to achieve. I can get something along these lines with modulo, but it discards half of the image in the process.
You can:
discard some fragments (this is slow on some mobile devices)
use a stencil mask to draw only where you want
draw the unwanted pixels as transparent (alpha = 0)
and lastly, you can draw an array of points or squares and move them around.
As far as I know, the fragment shader runs on every pixel in your triangle, and all you can do is tell it what color to set that pixel to. In your example you're already duplicating columns of pixels, so you can discard some of them, hopefully without losing any of the source image's pixels: stretch the coord 2x, then discard every other column (see the full sketch after the snippet below).
vec2 coord = mapCoord(vTextureCoord);
// keep odd pixel columns only (the 100.0 limits the effect to the first 100 columns)
if(coord.x < 100.0 && floor(coord.x/2.0)==floor((coord.x+1.0)/2.0))
    discard;
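A minimal self-contained sketch of that idea against the asker's shader; the 2x stretch factor and the pixel-space mapping are assumptions carried over from the snippets above:
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;

void main(void)
{
    // to pixel coordinates, as in the question's mapCoord()
    vec2 coord = vTextureCoord * inputSize.xy + inputSize.zw;

    // drop every other column, leaving gaps between the survivors
    if (floor(coord.x / 2.0) == floor((coord.x + 1.0) / 2.0))
        discard;

    // halve x so the surviving columns sample the unstretched image
    coord.x *= 0.5;

    // back to normalized coordinates, as in unmapCoord()
    coord = (coord - inputSize.zw) / inputSize.xy;
    gl_FragColor = texture2D(uSampler, coord);
}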

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I render via a lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail is lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to find out whether (and by how much) to do this. When the user is zoomed in on a triangle, I want the standard lookup; when they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I don't yet have a solid enough grasp of the transforms and spaces involved to work it out.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
gl_Position = projectionMatrix * mvPosition;
vec3 transformedNormal = normalMatrix * vec3( normal );
vNormal = normalize( transformedNormal );
vViewDirection = normalize(mvPosition.xyz);
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed the perspective camera in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering each pixel with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
vec4 inputvalue = texture2D(inputtexture, vUv);
gl_FragColor = texture2D(mappingtexture, vec2(inputvalue.g, inputvalue.r));
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (something more akin to edge finding). "Extreme" in this case is defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if(pixel_ratio>2.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if(pixel_ratio>3.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
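For completeness, a hypothetical sketch of the two elided helpers, under the "reddest wins" reading described above; the uniform names and the encoding are assumptions, and the real versions are the asker's own:
uniform sampler2D uInputTexture;       // assumed: the encoded input texture
uniform float uInputTextureResolution; // assumed: its size in texels

// hypothetical: fetch one texel of the encoded input texture by pixel coordinate
vec4 get_encoded_adc_value(float x, float y)
{
    return texture2D(uInputTexture, vec2(x, y) / uInputTextureResolution);
}

// hypothetical: the redder sample is the more "extreme" one
vec4 most_extreme_value(vec4 a, vec4 b)
{
    return (a.r > b.r) ? a : b;
}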
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, since it is a pre-rasterization stage and hence agnostic of the output resolution. In the fragment shader, however, you can use dFdx, dFdy and fwidth via the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
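With that extension enabled, the manual dFdx/dFdy combination in the solution above can also be approximated with fwidth, which computes abs(dFdx(x)) + abs(dFdy(x)), a cheap upper bound on the Euclidean footprint:
#extension GL_OES_standard_derivatives : enable
varying vec2 vUv;
uniform float uInputTextureResolution;

// roughly how many input texels one screen pixel spans in U
float pixelRatio()
{
    return fwidth(vUv.x) * uInputTextureResolution;
}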
If you're not updating the texture in realtime, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

Applying a perspective transformation matrix from GIMP into a GLSL shader

So I'm trying to add a rotation and a perspective effect to an image in the vertex shader. The rotation works just fine, but I'm unable to get the perspective effect working. I'm working in 2D.
The rotation matrix is generated from code, but the perspective matrix is a bunch of hardcoded values I got from GIMP's perspective tool.
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.58302f, -0.29001f, 103.0f,
-0.00753f, 0.01827f, 203.0f,
-0.00002f, -0.00115f, 1.0f
});
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_projTrans;
uniform mat3 u_rotation;
uniform mat3 u_perspective;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
vec3 vec = vec3(a_texCoord0 * 500.0, 1.0);
vec = vec * u_perspective;
vec = vec3((vec.xy / vec.z) / 500.0, 0.0);
vec -= vec3(0.5, 0.5, 0.0);
vec = vec * u_rotation;
v_texCoords = vec.xy + vec2(0.5);
gl_Position = u_projTrans * a_position;
}
For the rotation, I'm offsetting the origin so that it rotates around the center instead of the top left corner.
Pretty much everything I know about GIMP's perspective tool comes from http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch10.ps. Reading it suggested I would be able to reproduce what GIMP does, but it turns out I can't: the result shows nothing (not a single pixel), while removing the perspective part shows the image rotating properly.
As mentioned in the link, I'm dividing by vec.z to convert my homogeneous coordinates back to a 2D point. I'm not shifting the origin for the perspective transformation, since the link mentions that the top left corner is used as the origin (p. 11):
There is one thing to be careful about - the origin of GIMP
coordinates is at the upper left, with y increasing downwards.
EDIT:
Thanks to @Rabbid76's answer, it's now showing something! However, it's not transforming my texture the way the matrix transformed my image in GIMP.
My transformation matrix in GIMP was supposed to do something a bit like this:
But instead, it looks something like this:
This is what I think is happening, based on the actual result:
https://imgur.com/X56rp8K (Image used)
(As pointed out, the texture's wrap parameter is clamp-to-edge instead of clamp-to-border, but that's beside the point.)
It looks like it's doing the exact opposite of what I'm looking for. I tried offsetting the origin to the center of the image and to the bottom left before applying the matrix, without success. This is a new result, but it's still the same problem: how do I apply the GIMP perspective matrix in a GLSL shader?
EDIT2:
With more testing, I can confirm that it's doing the "opposite". Using this simple downscale transformation matrix:
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
0.75f, 0f, 50f,
0f, 0.75f, 50f,
0f, 0f, 1.0f
});
The result is an upscaled version of the image:
If I invert the matrix programmatically, it works for the simple scaling matrix! But for the perspective matrix, it shows this:
https://imgur.com/v3TLe2d
EDIT3:
Thanks to @Rabbid76 again: it turns out that applying the rotation after the perspective matrix effectively does the rotation first, and I end up with a result like this: https://imgur.com/n1vWq0M
That's almost it! The only problem is that the image is VERY squished. It's as if the perspective matrix were applied multiple times. But if you look carefully, you can see it rotating in perspective, just like I want. The problem now is how to unsquish it and get the result I had in GIMP. (The root problem is still the same: how do I take a GIMP matrix and apply it in a shader?)
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
The matrix
0.58302 -0.29001 103.0
-0.00753 0.01827 203.0
-0.00002 -0.00115 1.0f
is a 2D perspective transformation matrix. It operates on 2D homogeneous coordinates.
See 2D affine and perspective transformation matrices
Since the matrix displayed in GIMP is the transformation from the perspective view to the orthogonal view, the inverse matrix has to be used for the transformation.
The inverse matrix can be calculated by calling inv().
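For example, assuming the Matrix3 in the question is libGDX's (whose inv() inverts in place and returns this), the matrix can be inverted right where it is constructed:
// hand the shader the inverse of the GIMP matrix
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
    0.58302f, -0.29001f, 103.0f,
    -0.00753f, 0.01827f, 203.0f,
    -0.00002f, -0.00115f, 1.0f
}).inv();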
The matrix is set up to map a Cartesian coordinate in the range [0, 500] to a homogeneous coordinate in the range [0, 500].
Your assumption is correct: you have to scale the input from the range [0, 1] to [0, 500], and the output from [0, 500] back to [0, 1]. But the scaling has to be applied to the 2D Cartesian coordinates, not the homogeneous ones.
Further, you have to do the rotation after the perspective projection and the perspective divide.
It may be necessary (depending on the bitmap and the texture coordinate attributes) to flip the V coordinate of the texture coordinates.
And most importantly, the transformation has to be done per fragment, in the fragment shader.
Note that since this transformation is not linear (it is a perspective transformation), it is not sufficient to calculate the texture coordinates at the corner points only and interpolate them.
vec2 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;
    // flip Y if necessary
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;
    // uv_h: homogeneous coordinates in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
    // uv_p: perspective divide and downscale [0, 500] -> [0, 1]
    vec3 uv_p = vec3(uv_h.xy / uv_h.z / scale, 1.0);
    // rotate around the center
    uv_p = vec3(uv_p.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0);
    return uv_p.xy;
}
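A minimal usage of Project2D in the fragment shader; the sampler name is an assumption:
varying vec2 v_texCoords;
uniform sampler2D u_texture; // assumed sampler name

void main()
{
    gl_FragColor = texture2D(u_texture, Project2D(v_texCoords));
}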
Of course you can do the transformation in the vertex shader too.
But then you have to pass the 2D homogeneous coordinate from the vertex shader to the fragment shader.
This is similar to setting a clip space coordinate to gl_Position.
The difference is that you have a 2D homogeneous coordinate rather than a 3D one, and you have to do the perspective divide manually in the fragment shader:
Vertex shader:
attribute vec2 a_texCoord0;
varying vec3 v_texCoords_h;
uniform mat3 u_perspective;

vec3 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;
    // flip Y if necessary
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;
    // uv_h: homogeneous coordinates in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;
    // downscale, but keep the homogeneous z component
    return vec3(uv_h.xy / scale, uv_h.z);
}

void main()
{
    v_texCoords_h = Project2D( a_texCoord0 );
    .....
}
Fragment shader:
varying vec3 v_texCoords_h;
uniform mat3 u_rotation;

void main()
{
    // perspective divide
    vec2 uv = v_texCoords_h.xy / v_texCoords_h.z;
    // rotate around the center
    uv = (vec3(uv.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0)).xy;
    .....
}
See the preview, where I used the following 2D projection matrix, which is the inverse of the one displayed in GIMP:
2.452f, 2.6675f, -388.0f,
0.0f, 7.7721f, -138.0f,
0.00001f, 0.00968f, 1.0f
Further note: in contrast to u_projTrans, u_perspective is initialized in row-major order.
Because of that, you have to multiply the vector from the left with u_perspective:
vec_h = vec3(vec.xy * 500.0, 1.0) * u_perspective;
But you have to multiply the vector from the right with u_projTrans:
gl_Position = u_projTrans * a_position;
See GLSL Programming/Vector and Matrix Operations
and Data Type (GLSL)
Of course this may change if you transpose the matrix when you set it with glUniformMatrix*.

cracks at edges of shader material in THREE.js

I'm using a shader material to wrap a rectangular texture around a torus in the expected way, gluing edge to edge. The problem is that at the edges the texture doesn't quite wrap around, and there's a little glowing crack that looks like a scan line.
I've isolated the problem to my fragment shader. I need to be able to shift the uv values freely in my actual program, so I have some lines in the fragment shader normalizing the uv values to keep them between 0 and 1. However, this results in the unfortunate texture cracks, and I am having trouble understanding why: shouldn't the uv coordinates be between 0 and 1 to begin with? Even without any rotation going on, these lines cause the cracks. Remove the lines and there are no cracks, but I can't remove them if I'm rotating the texture, because then I get a totally undesired effect. I'm hoping someone can explain how these lines in the fragment shader cause the cracks, and what I can do to achieve the same behavior while avoiding them; I'm a real novice in GLSL.
The problem occurs in the code below, with the if statements.
uniform sampler2D iChannel0;
uniform float rotX;
uniform float rotY;
varying vec2 vUv;
void main(){
    vec2 uv = vUv;
    // shift the texture
    uv.x = uv.x + rotX;
    uv.y = uv.y + rotY;
    // wrap the shifted coordinates back into the [0, 1] range
    if (uv.y > 1.)
        uv.y = uv.y - 1.;
    if (uv.y < 0.)
        uv.y = uv.y + 1.;
    if (uv.x > 1.)
        uv.x = uv.x - 1.;
    if (uv.x < 0.)
        uv.x = uv.x + 1.;
    gl_FragColor = texture2D(iChannel0, vec2(uv.x, uv.y));
}
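For reference, the four if statements amount to a fract() wrap, assuming the offsets never exceed one full repeat; this is just a compact rewrite and does not by itself change the filtering behavior at the seam:
// equivalent wrapping: fract() maps any value into [0, 1)
vec2 uv = fract(vUv + vec2(rotX, rotY));
gl_FragColor = texture2D(iChannel0, uv);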
Here's a very distilled jsfiddle reproducing the problem and showing a meridian texture rotation on the torus.

GLSL Shader: Mapping Bars in Polar-Coordinates

I'd like to create a polar representation of this shader: https://www.shadertoy.com/view/4sfSDN
So that it looks like in this screenshot:
http://postimg.org/image/uwc34jxxz/
I know the basics of the polar system: how to calculate r and φ. But I can only use those values with a texture2D() lookup on an image.
When all I have is an amplitude value, like in the shader above, I can't get it working.
r should somehow be based on the amplitude, but then I don't know how to draw the circle without the texture2D() function... I can draw a circle with r alone, but then there are no different amplitudes. Or do I need to fill a matrix with the generated bars in a loop and load the circle from there?
I'm quite sure it's possible, given the insane shaders on Shadertoy, but I don't quite get it...
Can anyone point me to a solution?
From the shader you posted, I think it should be enough to simply transform the uv into polar coordinates.
What you are looking for are the angle and radius relative to the center. First, transform the uv so it gives the vector pointing from the center:
uv = fragCoord - (iResolution*.5);
Next, normalize it. Since the view is not square, the normalization should divide by one dimension only, such that
if(iResolution.x>iResolution.y)
{
uv = uv/iResolution.y;
}
else
{
uv = uv/iResolution.x;
}
This produces a kind of best-fit effect, but you may just hard-code one or the other if you need to. min can be used if available (uv = uv/min(iResolution.x, iResolution.y)) to remove the condition.
So at this point the uv vector points from the center toward the pixel position, in a coordinate system that is normalized in one dimension.
Now, to get the angle you may simply use atan(uv.y, uv.x). To get the radius, use length(uv).
The radius will be in the range [0, .5] along the shorter dimension, so you may multiply it by 2.0; this is a factor you can tweak later to get the desired effect, so that the maximum value doesn't hit the border but sits at maybe 80% or so (just play around with it).
The angle is in the range [-Pi, Pi], and the docs say atan does not work for x = 0, which you will need to handle yourself. The angle must then be transformed into the range [0.0, 1.0] to be used as a texture coordinate:
angle = angle/(Pi*2.0) + .5
So now construct the new uv
uv = vec2(angle, radius)
And use the same shader you did before.
You will also need to keep in mind that the radius may be larger than 1.0 in the corners, which would produce an out-of-range texture access. In such cases it is best to discard the fragment.
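A minimal guard, matching the radius*2.0 scaling used in the full shader below:
// outside the inscribed circle there is no data to show, so drop the fragment
if (radius*2.0 > 1.0)
    discard;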
From the shader toy:
#define M_PI 3.1415926535897932384626433832795
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // vector from the screen center, normalized by the shorter dimension
    vec2 uv = fragCoord.xy - (iResolution.xy*.5);
    uv = uv/min(iResolution.x, iResolution.y);
    // polar coordinates: angle remapped from [-Pi, Pi] to [0, 1]
    float angle = atan(uv.y, uv.x);
    angle = angle/(M_PI*2.0) + .5;
    float radius = length(uv);
    uv = vec2(angle, radius*2.0);
    // same bar lookup as the original shader, now indexed by angle
    float bars = 24.;
    float fft = texture2D( iChannel0, vec2(floor(uv.x*bars)/bars,0.25) ).x;
    float amp = (fft - uv.y)*100.;
    fragColor = vec4(amp,0.,0.,1.0);
}
