For a little background: this is for doing particle collisions with lookup textures on the GPU. I read the position texture with JavaScript and create a grid texture that contains the particles in the corresponding grid cells. The working example mentioned in the post can be viewed here: https://pacific-hamlet-84784.herokuapp.com/
The reason I want the bucket system is that it will allow me to do far fewer checks, and the number of checks won't increase with the number of particles.
For the actual problem description:
I am attempting to read from a lookup texture centered around a pixel. Let's say I have a texture that is 10x10 and I want to read the pixels around (4,2); I would read
(3,1), (3,2), (3,3)
(4,1), (4,2), (4,3)
(5,1), (5,2), (5,3)
The loop is a little more complicated, but that is the general idea. If I make the loop look like the following:
float xcenter = 5.0;
float ycenter = 5.0;
for (float i = -5.0; i < 5.0; i++) {
    for (float j = -5.0; j < 5.0; j++) {
    }
}
It works (although it goes over all of the particles, which defeats the purpose). However, if I calculate the value dynamically (which is what I need), I get really bizarre behavior. Is this a problem with GLSL or a problem with my code? I output the values to an image, read the pixel values, and they all appear to be within the right range. The problem comes from using the for loop variables (i, j) to offset a bucket index that is calculated outside of the loop, and then using that value to index a texture.
The entire shader code can be seen below. (If I remove the hard-coded 70 and restore the commented-out calculation, it breaks, even though all of those values are between 0 and 144. This is where I am confused: I feel like this code should still work fine.)
uniform sampler2D pos;
uniform sampler2D buckets;
uniform vec2 res;
uniform vec2 screenSize;
uniform float size;
uniform float bounce;
const float width = &WIDTH;
const float height = &HEIGHT;
const float cellSize = &CELLSIZE;
const float particlesPerCell = &PPC;
const float bucketsWidth = &BW;
const float bucketsHeight = &BH;
$rand
void main(){
    vec2 uv = gl_FragCoord.xy / res;
    vec4 posi = texture2D( pos , uv );
    float x = posi.x;
    float y = posi.y;
    float z = posi.z;
    float target = 1.0 * size;

    float x_bkt = floor( (x + (screenSize.x/2.0) ) / cellSize );
    float y_bkt = floor( (y + (screenSize.y/2.0) ) / cellSize );
    float x_bkt_ind_start = 70.0; //x_bkt * particlesPerCell;
    float y_bkt_ind_start = 70.0; //y_bkt * particlesPerCell;

    //this is the code that is acting weirdly
    for(float j = -144.0; j < 144.0; j++){
        for(float i = -144.0; i < 144.0; i++){
            float x_bkt_ind = (x_bkt_ind_start + i) / bucketsWidth;
            float y_bkt_ind = (y_bkt_ind_start + j) / bucketsHeight;

            vec4 ind2 = texture2D( buckets , vec2(x_bkt_ind, y_bkt_ind) );
            if( abs(ind2.z - 1.0) > 0.00001 || x_bkt_ind < 0.0 || x_bkt_ind > 1.0 || y_bkt_ind < 0.0 || y_bkt_ind > 1.0 ){
                continue;
            }

            vec4 pos2 = texture2D( pos , vec2(ind2.xy) / res );
            vec2 diff = posi.xy - pos2.xy;
            float dist = length(diff);

            vec2 uvDiff = ind2.xy - gl_FragCoord.xy;
            float uvDist = abs(length(uvDiff));

            if(dist <= target && uvDist >= 0.5){
                float factor = (dist - target) / dist;
                x = x - diff.x * factor * 0.5;
                y = y - diff.y * factor * 0.5;
            }
        }
    }

    gl_FragColor = vec4( x, y, x_bkt_ind_start, y_bkt_ind_start );
}
EDIT:
To make my problem clear: when I do the first texture lookup, I get the position of the particle:
vec2 uv = gl_FragCoord.xy / res;
vec4 posi = texture2D( pos , uv );
After, I calculate the bucket that the particle is in:
float x_bkt = floor( (x + (screenSize.x/2.0) )/cellSize);
float y_bkt = floor( (y + (screenSize.y/2.0) )/cellSize);
float x_bkt_ind_start = x_bkt * particlesPerCell;
float y_bkt_ind_start = y_bkt * particlesPerCell;
All of this is correct; I am getting the right values, and if I set these as the output values of the shader and read the pixels, they are the correct values. I also changed my implementation a little, and that version of the code works fine.
In order to test the for loop, I replaced the pixel lookup coordinates in the grid bucket with the pixel positions themselves. I adapted the code and it works fine; however, I have to recalculate the buckets multiple times per frame, so the code is not very efficient. The problem appears if, instead of storing the pixel positions, I store the uv coordinates of the pixels and then do a lookup using those uv positions:
//get the texture coordinate that is offset by the for loop
float x_bkt_ind = (x_bkt_ind_start + i)/bucketsWidth;
float y_bkt_ind = (y_bkt_ind_start + j)/bucketsHeight;
//use the texture coordinates to get the stored texture coordinate in the actual position table from the bucket table
vec4 ind2 = texture2D( buckets , vec2(x_bkt_ind,y_bkt_ind) );
and then I actually get the position:
vec4 pos2 = texture2D( pos , vec2(ind2.xy)/res );
this pos2 value will be wrong. I am pretty sure that the ind2 value is correct, because if I store position values in the bucket table instead of pixel coordinates and remove the second texture lookup, the code runs fine. But using the second lookup causes the code to break.
In the original post, if I set the bucket to be any fixed value, let's say the middle of the texture, and iterate over every possible bucket coordinate around the pixel, it works fine. However, if I calculate the bucket position and iterate over every pixel, it does not. I wonder if it has to do with the way GLSL compiles the shaders, and whether some optimization it makes is causing the double texture lookup to break inside the for loop. Or it is just a mistake in my code. I was able to get a single texture lookup in a for loop working when I just stored position values in the bucket texture.
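One thing worth checking (a sketch of a possible fix, not something I have verified): if ind2.xy holds integer pixel coordinates, then ind2.xy / res samples exactly on texel edges, where float rounding can pull in a neighboring texel; this is the same half-texel issue discussed in the UV-from-index answer further down. Centering the coordinate on the texel may stabilize the second lookup:
// center the second lookup on the texel instead of its edge
vec2 centeredUV = (floor(ind2.xy) + 0.5) / res;
vec4 pos2 = texture2D( pos , centeredUV );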
Related
My application is coded in JavaScript + Three.js / WebGL + GLSL. I have 200 curves, each one made of 85 points. To animate the curves I add a new point and remove the last.
So I made a positions shader that stores the new positions onto a texture (1), and a lines shader that writes the positions of all curves onto another texture (2).
The goal is to use textures as arrays: I know the first and last index of a line, so I need to convert those indices to uv coordinates.
I use FBOHelper to debug FBOs.
1) This 1D texture contains the new points for each curve (200 in total): positionTexture
2) And these are the 200 curves, with all their points, one after the other: linesTexture
The black parts are the BUG here. Those texels shouldn't be black.
How does it work: at each frame the shader looks up the new point for each line in the positionTexture and updates the linesTexture accordingly, with a for loop like this:
#define LINES_COUNT 200.0
#define LINE_POINTS 85.0 // with 100.0 it works!!!
// Then in main()
vec2 uv = gl_FragCoord.xy / resolution.xy;
for (float i = 0.0; i < LINES_COUNT; i += 1.0) {
    float startIdx = i * LINE_POINTS; // line start index
    float endIdx = startIdx + LINE_POINTS - 1.0; // line end index
    vec2 lastCell = getUVfromIndex(endIdx); // last uv coordinate reserved for current line
    if (match(lastCell, uv)) {
        pos = texture2D( positionTexture, vec2((i / LINES_COUNT) + minFloat, 0.0) ).xyz;
    } else if (index >= startIdx && index < endIdx) {
        pos = texture2D( lineTexture, getNextUV(uv) ).xyz;
    }
}
This works, but it's slightly buggy when I have many lines (150+); likely a precision problem. I'm not sure the functions I wrote to look up the textures are right. I wrote functions like getNextUV(uv) to get the value from the next index (converted to uv coordinates) and copy it to the previous one, and match(xy, uv) to know whether the current fragment is the texel I want.
I thought I could simply use the classic formula:
index = uv.y * width + uv.x
But it's more complicated than that. For example match():
// Whether a point XY is within a UV coordinate
float size = 132.0; // width and height of texture
float unit = 1.0 / size;
float minFloat = unit / size;

bool match(vec2 point, vec2 uv) {
    vec2 p = point;
    float x = floor(p.x / unit) * unit;
    float y = floor(p.y / unit) * unit;
    return x <= uv.x && x + unit > uv.x && y <= uv.y && y + unit > uv.y;
}
Or getUVfromIndex():
vec2 getUVfromIndex(float index) {
    float row = floor(index / size); // Example: 83.56 / 10 = 8
    float col = index - (row * size); // Example: 83.56 - (8 * 10) = 3.56
    col = col / size + minFloat; // u = 0.357
    row = row / size + minFloat; // v = 0.81
    return vec2(col, row);
}
Can someone explain the most efficient way to look up values in a texture, by getting a uv coordinate from an index value?
Texture coordinates go from the edges of pixels, not the centers, so your formula to compute UV coordinates needs to be
u = (xPixelCoord + .5) / widthOfTextureInPixels;
v = (yPixelCoord + .5) / heightOfTextureInPixels;
So I'm guessing you want getUVfromIndex to be
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(index, widthOfTexture);
    float row = floor(index / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
Or, based on some other experience with math issues in shaders, you might need to fudge the index:
uniform vec2 sizeOfTexture; // allow texture to be any size

vec2 getUVfromIndex(float index) {
    float fudgedIndex = index + 0.1;
    float widthOfTexture = sizeOfTexture.x;
    float col = mod(fudgedIndex, widthOfTexture);
    float row = floor(fudgedIndex / widthOfTexture);
    return (vec2(col, row) + .5) / sizeOfTexture;
}
If you're in WebGL2 you can use texelFetch, which takes integer pixel coordinates, to get a value from a texture.
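For example (a minimal sketch, assuming a WebGL2 context and a #version 300 es shader; lineTexture stands in for whichever sampler you are indexing):
// GLSL ES 3.00: fetch a texel by linear index, no half-texel UV math needed
int w = textureSize(lineTexture, 0).x;          // texture width at mip level 0
ivec2 texel = ivec2(int(index) % w, int(index) / w);
vec4 value = texelFetch(lineTexture, texel, 0); // exact texel, unfiltered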
I currently use the following code in my fragment shader to display depth images. The values I get from this are normalized, and I read them using readPixels. But I need the original values without normalization. I could take the vertex positions I have and manually multiply by the MVMatrix, but is there a simpler way to extract them?
if (vIsDepth > 0.5)
{
    float z = position_1.z;
    float n = 1.0;
    float f = 20.0;
    float ndcDepth = (2.0 * z - n - f) / (f - n);
    float clipDepth = ndcDepth / position_1.w;
    float cr = (clipDepth * 0.5) + 0.5;
    gl_FragColor = vec4(cr, cr, cr, 1.0);
}
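One alternative worth considering (a sketch under assumptions: it keeps the same n and f as above and presumes z is an eye-space depth you are free to re-encode): write a linearly encoded depth instead, so the value read back with readPixels can be inverted trivially on the CPU.
if (vIsDepth > 0.5)
{
    float z = position_1.z;
    float n = 1.0;
    float f = 20.0;
    // linear encoding: invert on the CPU side with z = n + cr * (f - n)
    float cr = (z - n) / (f - n);
    gl_FragColor = vec4(cr, cr, cr, 1.0);
}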
I have tried to improve the quality of my volume ray casting algorithm. I set a smaller raycast step (the quality is better), but it causes a problem, shown in the pictures below: black areas where they shouldn't be.
I am using an RGB cube to get the direction of the ray in the volume.
I think I have the same algorithm as here: volume rendering (using glsl) with ray casting algorithm
Does anybody have ideas where the problem could be? I need to resolve this because the deadline of my diploma thesis is too close :( I really don't know why it doesn't work :(
EDIT:
I can't show all my code here (it could be a problem if I supply it before handing it in at school). But here is the key code for going through the volume:
// All variables needed for the ray
vec3 rayDirection = texture2D(backFaceCube, texCoo).xyz - varcolor.xyz;
float lenRay = length(rayDirection);
vec3 normDir = normalize(rayDirection);
float d = qualitySteps; //quality steps is size of steps defined by user -> example: 0.01, 0.001, 0.0001 etc.
vec3 step = normDir * d;
float lenStep = length(step);
float accumulatedLength = 0.0;
and then, in the loop:
posInCube.xyz += step;
accumulatedLength += lenStep;
...
...
...
if(accumulatedLength >= lenRay || accumulatedColor.a > 1.0) {
    break;
}
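One note on the excerpt above (a hedged guess using the question's variable names): if the volume sampler's wrap mode is set to repeat, stepping outside the unit cube samples tiled data, which would match the mirrored-object symptom described in EDIT2 below. A bounds guard exits the march at the volume boundary:
// stop marching once the sample position leaves the unit cube,
// so a repeating wrap mode can never tile the volume texture
if (any(lessThan(posInCube.xyz, vec3(0.0))) || any(greaterThan(posInCube.xyz, vec3(1.0)))) {
    break;
}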
EDIT2: (sorry, but as a comment it was too long)
Yes, the texture is noisy... I have tried to delete the condition with alpha: if(accumulatedColor.a > 1.0), but the result is the same.
I think there is some direct correlation between the length of the ray and the size of the step. I tried many combinations and found these things.
If the step is big, I am able to go through the whole volume, but if it is small, then I am really not able to go through the volume (maybe). If the step is extremely big, then I can see a mirrored object (it could be caused by the repeating texture if I go outside the texture on the GPU). If the step is too small, then I am able to map only a small part of the texture; it seems that the ray is too short, but in reality it isn't. The questions are: why is the mapping of 3D coordinates to the 2D texture wrong, and why does it depend on the size of the step?
Can you please supply the code for your fragment shader?
Are you traversing the whole vector from the front to the end position? Here's an example shader (the code might contain some errors since I just wrote it off the top of my head; I unfortunately can't test it on my computer at the moment):
in vec2 texCoord;
out vec4 outColor;

uniform float stepSize;
uniform int numSteps;

uniform sampler2D frontTexture;
uniform sampler2D backTexture;
uniform sampler3D volumeTexture;
uniform sampler1D transferTexture; // Density to RGBA

void main()
{
    vec4 color = vec4(0.0);
    vec3 startPosition = texture(frontTexture, texCoord).xyz;
    vec3 endPosition = texture(backTexture, texCoord).xyz;
    vec3 delta = normalize(endPosition - startPosition) * stepSize;
    vec3 position = startPosition;

    for (int i = 0; i < numSteps; ++i)
    {
        float density = texture(volumeTexture, position).r;
        vec4 voxelColor = texture(transferTexture, density);

        // Sampling distance correction: rescale opacity to the step size
        voxelColor.a = 1.0 - pow(1.0 - voxelColor.a, stepSize * 500.0);

        // Front to back blending (no shading done)
        color.rgb = color.rgb + (1.0 - color.a) * voxelColor.a * voxelColor.rgb;
        color.a = color.a + (1.0 - color.a) * voxelColor.a;

        if (color.a >= 1.0)
        {
            break;
        }

        // Advance
        position += delta;
        if (position.x < 0.0 || position.y < 0.0 || position.z < 0.0 ||
            position.x > 1.0 || position.y > 1.0 || position.z > 1.0)
        {
            break;
        }
    }

    outColor = color;
}
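A follow-up note on choosing the uniforms (my own rule of thumb, not part of the answer above): since the march runs at most numSteps iterations, stepSize and numSteps have to agree. The longest possible ray through the unit cube is its diagonal, sqrt(3) ≈ 1.732, so a numSteps of at least ceil(1.732 / stepSize) guarantees the loop can reach the back face; with fewer steps, rays get truncated, which looks exactly like the "ray is too short" symptom described in the question.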
Is it possible for me to add line thickness in the fragment shader, considering that I draw the line with GL_LINES? Most of the examples I saw seem to access only the texels within the primitive, and a line-thickness shader would need to write to texels outside the line primitive to obtain the thickness. If it is possible, however, a very small, basic example would be great.
Quite a lot is possible with fragment shaders; just look at what some people are doing. I'm far from that level myself, but this code can give you an idea:
#define resolution vec2(500.0, 500.0)
#define Thickness 0.003

float drawLine(vec2 p1, vec2 p2) {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    float a = abs(distance(p1, uv));
    float b = abs(distance(p2, uv));
    float c = abs(distance(p1, p2));

    if (a >= c || b >= c) return 0.0;

    float p = (a + b + c) * 0.5;

    // distance from uv to the (p1, p2) segment: triangle height via Heron's formula
    float h = 2.0 / c * sqrt(p * (p - a) * (p - b) * (p - c));

    return mix(1.0, 0.0, smoothstep(0.5 * Thickness, 1.5 * Thickness, h));
}
void main()
{
    gl_FragColor = vec4(
        max(
            max(
                drawLine(vec2(0.1, 0.1), vec2(0.1, 0.9)),
                drawLine(vec2(0.1, 0.9), vec2(0.7, 0.5))),
            drawLine(vec2(0.1, 0.1), vec2(0.7, 0.5))));
}
Another alternative is to check the color of nearby pixels with texture2D; that way you can make your image glow or thicken (e.g. if any of the adjacent pixels is white, make the current pixel white; if a pixel two steps away is white, make the current pixel grey).
No, it is not possible in the fragment shader using only GL_LINES. This is because GL restricts you to drawing only on the geometry you submit to the rasterizer, so you need geometry that encompasses the jagged original line plus any smoothing vertices. E.g., you can use a geometry shader to expand your line to a quad around the ideal line (actually two triangles), which can pose as a thick line.
In general, if you generate bigger geometry (including a full screen quad), you can use the fragment shader to draw smooth lines.
Here's a nice discussion on that subject (with code samples).
Here's my approach. Let p1 and p2 be the two points defining the line, and let point be the point whose distance to the line you wish to measure; point is most likely gl_FragCoord.xy / resolution.
Here's the function.
float distanceToLine(vec2 p1, vec2 p2, vec2 point) {
    float a = p1.y - p2.y;
    float b = p2.x - p1.x;
    return abs(a * point.x + b * point.y + p1.x * p2.y - p2.x * p1.y) / sqrt(a * a + b * b);
}
Then use that in your mix and smoothstep functions.
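For instance (a sketch; p1, p2, Thickness, and resolution are assumed to be defined as in the earlier drawLine answer):
vec2 uv = gl_FragCoord.xy / resolution.xy;
float d = distanceToLine(p1, p2, uv);
// fade from full line color to background across the thickness band
float intensity = 1.0 - smoothstep(0.5 * Thickness, 1.5 * Thickness, d);
gl_FragColor = vec4(vec3(intensity), 1.0);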
Also check out this answer:
https://stackoverflow.com/a/9246451/911207
A simple hack is to just add a jitter in the vertex shader:
gl_Position += vec4(delta, delta, delta, 0.0);
where delta is the pixel size, i.e. 1.0 / viewSize.
Do the line-draw pass twice: once with zero, then with the delta as jitter (passed in as a uniform).
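Put together, the vertex shader for the two passes might look like this (a sketch with assumed names; jitterDelta is 0.0 on the first pass and 1.0 / viewSize on the second):
uniform mat4 mvp;          // assumed model-view-projection matrix
uniform float jitterDelta; // 0.0 on pass 1, pixel size on pass 2
attribute vec3 aPosition;

void main() {
    gl_Position = mvp * vec4(aPosition, 1.0);
    gl_Position += vec4(jitterDelta, jitterDelta, jitterDelta, 0.0); // the jitter hack
}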
To draw a line in a fragment shader, we should check that the current pixel (UV) is on the line position. (This is not efficient using only fragment shader code! It is just for testing with glslsandbox.)
An acceptable UV point should satisfy these two conditions:
1- The distance between (uv, pt1) should be smaller than the distance between (pt1, pt2).
With this condition we create an assumed circle with center pt2 and radius = distance(pt2, pt1), and also prevent drawing a line longer than distance(pt2, pt1).
2- For each UV we assume a hypothetical circle with a connection point at position ptc on the line (pt2, pt1).
If the distance between UV and ptc is less than the line thickness, we select this UV as a line point.
In our code:
r = distance(uv, pt1) / distance(pt1, pt2) gives us a value between 0 and 1.
We interpolate a point (ptc) between pt1 and pt2 with the value of r.
code:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
float line(vec2 uv, vec2 pt1, vec2 pt2, vec2 resolution)
{
    float clrFactor = 0.0;
    float thickness = 3.0 / max(resolution.x, resolution.y);
    float r = distance(uv, pt1) / distance(pt1, pt2);

    if (r <= 1.0) // if the hypothetical circle is in range of the vector (pt2, pt1)
    {
        vec2 ptc = mix(pt1, pt2, r); // ptc = connection point of the hypothetical circle and the line, found by interpolation
        float dist = distance(ptc, uv); // distance between the current pixel (uv) and ptc
        if (dist < thickness / 2.0)
        {
            clrFactor = 1.0;
        }
    }
    return clrFactor;
}
void main()
{
    vec2 uv = gl_FragCoord.xy / resolution.xy; // current pixel
    // 0 < uv.x < 1, 0 < uv.y < 1
    // left-down = (0,0)
    // right-top = (1,1)
    vec2 pt1 = vec2(0.1, 0.1); // line point 1
    vec2 pt2 = vec2(0.8, 0.7); // line point 2
    float lineFactor = line(uv, pt1, pt2, resolution.xy);
    vec3 color = vec3(0.5, 0.7, 1.0);
    gl_FragColor = vec4(color * lineFactor, 1.0);
}
I'm working with a GPU-based particle system.
There are 1 million particles, computed by passing in the x, y, z positions as RGB values on a 1024x1024 texture. The same is being done for their velocities.
I'm trying to make them move from an arbitrary point to a point on a sphere.
My current shader, which I'm using for the computation, moves from one point to another directly.
I'm not using the mass or velocity texture at the moment.
float mass = texture2D( posArray, texCoord.st).a; // needed below for gl_FragData[0]
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;

// map into 'cinder space'
p = (p * -1.0) + 0.5;

// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1, 0), p)); // Angular force
// vec3 new_v = v + mass*(acc + ayAcc);

vec3 new_p = p + ((moveToPos - p) / duration);

// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;

gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse pointer as floats (0.0 to 1.0), and the coordinate system is being translated from (0.5,0.5 > -0.5,-0.5) to (0.0,0.0 > 1.0,1.0).
I'm completely new to vector math, and the calculations are confusing me. I know I need to use the formula:
x = R sin φ cos θ
y = R sin φ sin θ
z = R cos φ
but calculating the angles from moveToPos(xyz) to p(xyz) remains a problem.
I wrote the original version of this GPU-particles shader a few years back (now at: https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they are more or less constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;
varying vec4 texCoord;

void main(void)
{
    float mass = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    float x0 = 0.5; // distance from center of sphere to be maintained
    float x = distance(p, vec3(0, 0, 0)); // current distance
    vec3 acc = -0.0002 * (x - x0) * p; // apply spring force (Hooke's law)

    vec3 new_v = v + mass * (acc);
    new_v = 0.999 * new_v; // friction to slow down velocities over time
    vec3 new_p = p + new_v;

    // Render to positions texture
    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
    // Render to velocities texture
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (done outside the shader in Cinder).
Now, combine this with the previous soft spring constraint. You could add a tangential force towards this attraction point: start with a simple (mousePos - p) acceleration, and then figure out a way to make this force exclusively tangential using cross products.
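A possible starting point (a sketch, not from the original code; mousePos is the uniform suggested above, the sphere is assumed centered at the origin, x and x0 come from the spring shader, and the 0.0001 gain is made up): rather than cross products, the same tangential force can be obtained by projecting out the radial component:
vec3 toMouse = mousePos - p;                               // raw pull toward the mouse point
vec3 radial = normalize(p);                                // outward normal of the sphere at p
vec3 tangential = toMouse - dot(toMouse, radial) * radial; // remove the radial component
vec3 acc = -0.0002 * (x - x0) * p + 0.0001 * tangential;   // spring force plus tangential attraction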
I'm not sure how the spherical coordinates approach would work here.
x = R sin φ cos θ
y = R sin φ sin θ
z = R cos φ
Where do you get φ and θ? The textures store the positions and velocities in Cartesian coordinates. Plus, converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5

uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;
varying vec4 texCoord;

void main(void) {
    float personToGet = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    float mass = texture2D( velArray, texCoord.st).a;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    // map into 'cinder space'
    p = (p * -1.0) + 0.5;

    vec3 vec_p = p - centerPoint[int(personToGet)];
    float len_vec_p = length(vec_p); // same as sqrt(x*x + y*y + z*z)
    vec_p = ( ( radius[int(personToGet)] /* mass */ ) / len_vec_p ) * vec_p;

    vec3 new_p = vec_p + centerPoint[int(personToGet)];
    new_p = p + ( (new_p - p) / duration );

    // map out of 'cinder space'
    new_p = (new_p - 0.5) * -1.0;

    vec3 new_v = v;

    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in arrays of 5 vec3s and 5 floats, mapped as 5 center points and radii.
The particles are set up with a random position at the beginning and move towards the sphere whose index is stored in the alpha value of the position array.
My aim is to pass in blob data from OpenCV and map the spheres to people on a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.