I'm working with a GPU-based particle system.
There are 1 million particles, computed by passing the x, y, z positions as the RGB values of a 1024×1024 texture. The same is done for their velocities.
I'm trying to make them move from an arbitrary point to a point on a sphere.
My current shader, which I'm using for the computation, moves them from one point to another directly.
I'm not using the velocity texture at the moment, and the mass stored in the alpha channel is only read and passed through:
float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1 ,0),p)); // Angular force
// vec3 new_v = v + mass*(acc+ayAcc);
vec3 new_p = p + ((moveToPos - p) / duration);
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse position, passed in as floats in the range 0.0–1.0.
The coordinate system is translated between the (0.5, 0.5) → (-0.5, -0.5) range used for rendering and the (0.0, 0.0) → (1.0, 1.0) range stored in the texture.
I'm completely new to vector maths, and the calculations are confusing me. I know I need to use the formulas:
x = R sin(φ) cos(θ)
y = R sin(φ) sin(θ)
z = R cos(φ)
but calculating the angles from moveToPos(x, y, z) to p(x, y, z) remains a problem.
I wrote the original version of this GPU-particles shader a few years back (now at https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they are more or less constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;
varying vec4 texCoord;
void main(void)
{
float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
vec3 v = texture2D( velArray, texCoord.st).rgb;
float x0 = 0.5; // distance from the center of the sphere to be maintained
float x = distance(p, vec3(0,0,0)); // current distance
vec3 acc = -0.0002*(x - x0)*p; // apply spring force (Hooke's law)
vec3 new_v = v + mass*(acc);
new_v = 0.999*new_v; // friction to slow down velocities over time
vec3 new_p = p + new_v;
//Render to positions texture
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//Render to velocities texture
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (done outside the shader in Cinder).
Now, combine this with the previous soft spring constraint: you could add a tangential force towards this attraction point. Start with a simple (mousePos - p) acceleration, and then figure out a way to make this force exclusively tangential using cross products.
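A possible sketch of such a tangential force (untested; it assumes the sphere is centred at the origin and mousePos is the uniform mentioned above):
vec3 n = normalize(p);                      // outward normal at the particle
vec3 pull = mousePos - p;                   // raw attraction towards the target
vec3 tangential = cross(n, cross(pull, n)); // strip the radial component
acc += 0.0005 * tangential;                 // add to the spring acceleration above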
I'm not sure how the spherical coordinates approach would work here.
x = R sin(φ) cos(θ)
y = R sin(φ) sin(θ)
z = R cos(φ)
Where do you get φ and θ? The textures store the positions and velocities in cartesian coordinates, and converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5
uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;
varying vec4 texCoord;
void main(void) {
float personToGet = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
float mass = texture2D( velArray, texCoord.st).a;
vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
vec3 vec_p = p - centerPoint[int(personToGet)];
float len_vec_p = length( vec_p );
vec_p = ( ( radius[int(personToGet)] /* * mass */ ) / len_vec_p ) * vec_p;
vec3 new_p = ( vec_p + centerPoint[int(personToGet)] );
new_p = p + ( (new_p - p) / (duration) );
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
vec3 new_v = v;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in arrays of five vec3s and five floats, mapped as the five center points and radii.
The particles are set up with random positions at the beginning and move towards the center point whose index is stored in the alpha channel of the position texture.
My aim is to pass in blob data from OpenCV and map the spheres to people in a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.
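A rough, untested sketch of how the velocity texture could be folded in, replacing the direct interpolation above with velocity-based integration (it reuses the uniforms already declared):
vec3 target = centerPoint[int(personToGet)]
            + radius[int(personToGet)] * normalize(p - centerPoint[int(personToGet)]);
vec3 acc = 0.0002 * (target - p);     // spring towards the sphere surface
vec3 new_v = 0.98 * (v + mass * acc); // integrate and damp
vec3 new_p = p + new_v;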
Related
I'm drawing a 2d plan on the screen using webgl. I would like to rotate the plan a bit to give a 3d impression.
current: (screenshot of the flat plan omitted)
wanted: (screenshot of the tilted, pseudo-3D view omitted)
My first approach was to use vanishing points, as when drawing in perspective, but I didn't know how to change the y coordinate and I never got it working. Is there an easier way to rotate the output?
Here is my code:
uniform float scale;
uniform vec2 ratio;
uniform vec2 center;
in vec3 fillColor;
in vec2 position;
out vec3 color;
void main() {
color = fillColor;
gl_Position = vec4((position - center) * ratio, 0.0, scale);
}
If you want to build a whole game engine or a complex animation, you will need to dig into perspective projection matrices.
But if you just want to achieve this little effect and understand how it works, you can simply use the w coordinate of gl_Position. This coordinate is essential for telling the GPU how to interpolate UV textures in a perspective-correct way, for example, and x, y and z are divided by it. For instance, a clip-space position of (1.0, 1.0, 0.0, 2.0) ends up at (0.5, 0.5) on screen after the divide.
So let's assume you want to display a rectangle. You will need two triangles, and 4 vertices will suffice if you use TRIANGLE_STRIP mode. We could use only one attribute, but for the sake of the tutorial, I will use two:
Vertex #   attPos    attUV
0          -1, +1    0, 0
1          -1, -1    0, 1
2          +1, +1    1, 0
3          +1, -1    1, 1
And all the logic will be in the vertex shader:
uniform float uniScale;
uniform float uniAspectRatio;
attribute vec2 attPos;
attribute vec2 attUV;
varying vec2 varUV;
void main() {
varUV = attUV;
gl_Position = vec4(
attPos.x * uniScale,
attPos.y * uniAspectRatio,
1.0,
attUV.y < 0.5 ? uniScale : 1.0
);
}
The line attUV.y < 0.5 ? uniScale : 1.0 means: if attUV.y is 0, use uniScale; otherwise use 1.0.
The attUV attribute lets you use a texture if you want. In this example,
I just simulate a checkerboard with this fragment shader:
precision mediump float;
const float MARGIN = 0.1;
const float CELLS = 8.0;
const vec3 ORANGE = vec3(1.0, 0.5, 0.0);
const vec3 BLUE = vec3(0.0, 0.6, 1.0);
varying vec2 varUV;
void main() {
float u = fract(varUV.x * CELLS);
float v = fract(varUV.y * CELLS);
if (u > MARGIN && v > MARGIN) gl_FragColor = vec4(BLUE, 1.0);
else gl_FragColor = vec4(ORANGE, 1.0);
}
You can see all this in action in this CodePen:
https://codepen.io/tolokoban/full/oNpBRyO
In an effort to learn vertex/fragment shaders, I decided to create a simple rain effect by updating the y position of a point in the vertex shader and resetting it to animate through again, using a Three.js PointCloud. I got it to animate across the screen once, but it gets stuck after the y position is reset.
uniform float size;
uniform float delta;
attribute float opacity; // custom per-point attributes supplied by the geometry
attribute float texture;
varying float vOpacity;
varying float vTexture;
void main() {
vOpacity = opacity;
vTexture = texture;
gl_PointSize = 164.0;
vec3 p = position;
p.y -= delta * 50.0;
vec4 mvPosition = modelViewMatrix * vec4(1.0 * p, 1.0 );
vec4 nPos = projectionMatrix * mvPosition;
if(nPos.y < -200.0){
nPos.y = 100.0;
}
gl_Position = nPos;
}
Any ideas? Thanks
The shader does not change the vertex position permanently. That means
gl_Position = nPos;
will not propagate to the position attribute in your geometry. The shader only runs on the graphics card and has no access to the browser's memory.
You can change your code to this:
nPos.y = mod(nPos.y, 300.0) - 200.0;
Now the y coordinate should change as you want it to (going from 100 down to -200 and then back to 100).
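In context, the end of the question's vertex shader would then look something like this (a sketch; the rest of the shader is unchanged):
vec3 p = position;
p.y -= delta * 50.0;
vec4 nPos = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
nPos.y = mod(nPos.y, 300.0) - 200.0; // wraps from 100 down to -200 and back up
gl_Position = nPos;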
I'm tinkering with Joost van Dongen's interior mapping shader and I'm trying to implement self-shadowing, but I couldn't quite figure out which coordinate space the shadow-casting light vectors need to be in. You can see a somewhat-working demo here. I've attached the light position with an offset to the camera position just to see what's happening, but obviously it doesn't look right either.
Shader code is below; look for SHADOWS DEV in the fragment shader. The vectors in question are shad_E and shad_I.
vertex shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 lightPosition;
void main() {
// inverse view matrix
mat4 modelViewMatrixInverse = InverseMatrix( modelViewMatrix );
// surface position in object space
oP = position;
// position of the eye in object space
oE = modelViewMatrixInverse[3].xyz;
// incident ray direction in object space
oI = oP - oE;
// link the light position to camera for testing
// need to find a way for world space directional light to work
shad_E = oE - lightPosition;
// light vector
shad_I = oP - shad_E;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
fragment shader:
varying vec3 oP; // surface position in object space
varying vec3 oE; // position of the eye in object space
varying vec3 oI; // incident ray direction in object space
varying vec3 shad_E; // shadow light position
varying vec3 shad_I; // shadow direction
uniform vec3 wallFreq;
uniform float wallsBias;
uniform vec3 wallCeilingColor;
uniform vec3 wallFloorColor;
uniform vec3 wallXYColor;
uniform vec3 wallZYColor;
float checker(vec2 uv, float checkSize) {
float fmodResult = mod( floor(checkSize * uv.x) + floor(checkSize * uv.y), 2.0);
if (fmodResult < 1.0) {
return 1.0;
} else {
return 0.85;
}
}
void main() {
// INTERIOR MAPPING by Joost van Dongen
// http://interiormapping.oogst3d.net/
// email: joost@ronimo-games.com
// Twitter: @JoostDevBlog
vec3 wallFrequencies = wallFreq / 2.0 - wallsBias;
//calculate wall locations
vec3 walls = ( floor( oP * wallFrequencies) + step( vec3( 0.0 ), oI )) / wallFrequencies;
//how much of the ray is needed to get from the oE to each of the walls
vec3 rayFractions = ( walls - oE) / oI;
//texture-coordinates of intersections
vec2 intersectionXY = (oE + rayFractions.z * oI).xy;
vec2 intersectionXZ = (oE + rayFractions.y * oI).xz;
vec2 intersectionZY = (oE + rayFractions.x * oI).zy;
//use the intersection as the texture coordinates for the ceiling
vec3 ceilingColour = wallCeilingColor * checker( intersectionXZ, 2.0 );
vec3 floorColour = wallFloorColor * checker( intersectionXZ, 2.0 );
vec3 verticalColour = mix(floorColour, ceilingColour, step(0.0, oI.y));
vec3 wallXYColour = wallXYColor * checker( intersectionXY, 2.0 );
vec3 wallZYColour = wallZYColor * checker( intersectionZY, 2.0 );
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
vec3 shad_P = oP; // just surface position in object space
vec3 shad_walls = ( floor( shad_P * wallFrequencies) + step( vec3( 0.0 ), shad_I )) / wallFrequencies;
vec3 shad_rayFr = ( shad_walls - shad_E ) / shad_I;
// Cast shadow from ceiling planes (intersectionXZ)
wallZYColour *= mix( 0.3, 1.0, step( shad_rayFr.x, shad_rayFr.y ));
verticalColour *= mix( 0.3, 1.0, step( rayFractions.y, shad_rayFr.y ));
wallXYColour *= mix( 0.3, 1.0, step( shad_rayFr.z, shad_rayFr.y ));
// SHADOWS DEV // SHADOWS DEV // SHADOWS DEV // SHADOWS DEV //
// intersect walls
float xVSz = step(rayFractions.x, rayFractions.z);
vec3 interiorColour = mix(wallXYColour, wallZYColour, xVSz);
float rayFraction_xVSz = mix(rayFractions.z, rayFractions.x, xVSz);
float xzVSy = step(rayFraction_xVSz, rayFractions.y);
interiorColour = mix(verticalColour, interiorColour, xzVSy);
gl_FragColor.xyz = interiorColour;
}
Based on my very limited understanding of what you're trying to implement, it seems you would need to take the location of the intersection between the eye vector and the interior plane it hits, then trace it back to the light.
To trace back to the light, you would first have to check whether the interior plane intersected by the eye vector is back-facing from the light's perspective, which would put it in shadow. If it is front-facing, then you have to ray-cast from within the room to the light and check for an intersection with any of the other interior planes.
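A very rough, untested sketch of that ray cast, reusing names from the fragment shader above (lightPosO stands for a hypothetical object-space light position, which is not a uniform in the original code):
// first interior plane actually hit by the eye ray
float tHit = min(min(rayFractions.x, rayFractions.y), rayFractions.z);
vec3 hitPos = oE + tHit * oI;
// march from the hit point towards the light and find the nearest interior plane
vec3 L = lightPosO - hitPos;
vec3 shadWalls = (floor(hitPos * wallFrequencies) + step(vec3(0.0), L)) / wallFrequencies;
vec3 shadFrac = (shadWalls - hitPos) / L;
// if a plane lies between the hit point and the light (fraction < 1.0), darken the sample
float lit = step(1.0, min(min(shadFrac.x, shadFrac.y), shadFrac.z));
float shade = mix(0.3, 1.0, lit);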
I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly):
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the w. In the SSAO shader I need to get the position, and this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
vec2 frag_coord = gl_FragCoord.xy/viewport;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However, when I render out the positions with the new method, it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far minus near distance (I was using the normalized depth value, but you need the world-space depth value).
I created a function to extract the position from the normal/depth texture like so:
First in the depth capture pass (fragment shader)
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function gets the 3D world position from the normal texture, which stores depth in its w component.
/// </summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
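In the SSAO shader, the earlier direct position lookup then becomes a single call (a small usage sketch; vUv is the same varying as before):
// previously: vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
vec3 srcPosition = get_world_pos( vUv );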
Is it possible to add line thickness in the fragment shader, considering that I draw the line with GL_LINES? Most of the examples I've seen seem to access only the texels within the primitive in the fragment shader, and a line-thickness shader would need to write to texels outside the line primitive to obtain the thickness. If it is possible, however, a very small, basic example would be great.
Quite a lot is possible with fragment shaders; just look at what some people are doing. I'm far from that level myself, but this code can give you an idea:
#define resolution vec2(500.0, 500.0)
#define Thickness 0.003
float drawLine(vec2 p1, vec2 p2) {
vec2 uv = gl_FragCoord.xy / resolution.xy;
float a = abs(distance(p1, uv));
float b = abs(distance(p2, uv));
float c = abs(distance(p1, p2));
if ( a >= c || b >= c ) return 0.0;
float p = (a + b + c) * 0.5;
// height of the triangle (uv, p1, p2) above the base (p1, p2), via Heron's formula
float h = 2.0 / c * sqrt( p * ( p - a) * ( p - b) * ( p - c));
return mix(1.0, 0.0, smoothstep(0.5 * Thickness, 1.5 * Thickness, h));
}
void main()
{
gl_FragColor = vec4(
max(
max(
drawLine(vec2(0.1, 0.1), vec2(0.1, 0.9)),
drawLine(vec2(0.1, 0.9), vec2(0.7, 0.5))),
drawLine(vec2(0.1, 0.1), vec2(0.7, 0.5))));
}
Another alternative is to use texture2D to check the colour of nearby pixels; that way you can make your image glow or thicken (e.g. if any of the adjacent pixels is white, make the current pixel white; if a pixel next to an adjacent one is white, make the current pixel grey).
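A hedged sketch of that neighbour-sampling idea (uTex and resolution are illustrative names, not taken from the answer):
vec2 uv = gl_FragCoord.xy / resolution;
vec2 px = 1.0 / resolution;
float here = texture2D(uTex, uv).r;
float near = max(max(texture2D(uTex, uv + vec2(px.x, 0.0)).r,
                     texture2D(uTex, uv - vec2(px.x, 0.0)).r),
                 max(texture2D(uTex, uv + vec2(0.0, px.y)).r,
                     texture2D(uTex, uv - vec2(0.0, px.y)).r));
gl_FragColor = vec4(vec3(max(here, 0.5 * near)), 1.0); // brighten if a neighbour is lit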
No, it is not possible in the fragment shader using only GL_LINES. This is because GL restricts you to draw only on the geometry you submit to the rasterizer, so you need to use geometry that encompasses the jagged original line plus any smoothing vertices. E.g., you can use a geometry shader to expand your line to a quad around the ideal line (or, actually two triangles) which can pose as a thick line.
In general, if you generate bigger geometry (including a full screen quad), you can use the fragment shader to draw smooth lines.
Here's a nice discussion on that subject (with code samples).
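For what it's worth, here is a sketch of the geometry-shader expansion mentioned above (desktop GL only, not WebGL; uViewport and uThickness are illustrative uniform names):
#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;
uniform vec2 uViewport;   // viewport size in pixels
uniform float uThickness; // desired line width in pixels
void main() {
    vec4 p0 = gl_in[0].gl_Position;
    vec4 p1 = gl_in[1].gl_Position;
    // screen-space direction of the line
    vec2 dir = normalize((p1.xy / p1.w - p0.xy / p0.w) * uViewport);
    // perpendicular offset in NDC units, re-scaled to clip space via each vertex's w
    vec2 off = vec2(-dir.y, dir.x) * uThickness / uViewport;
    gl_Position = p0 + vec4(off * p0.w, 0.0, 0.0); EmitVertex();
    gl_Position = p0 - vec4(off * p0.w, 0.0, 0.0); EmitVertex();
    gl_Position = p1 + vec4(off * p1.w, 0.0, 0.0); EmitVertex();
    gl_Position = p1 - vec4(off * p1.w, 0.0, 0.0); EmitVertex();
    EndPrimitive();
}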
Here's my approach. Let p1 and p2 be the two points defining the line, and let point be the point whose distance to the line you wish to measure; point is most likely gl_FragCoord.xy / resolution.
Here's the function.
float distanceToLine(vec2 p1, vec2 p2, vec2 point) {
float a = p1.y-p2.y;
float b = p2.x-p1.x;
return abs(a*point.x+b*point.y+p1.x*p2.y-p2.x*p1.y) / sqrt(a*a+b*b);
}
Then use that in your mix and smoothstep functions.
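For example, a minimal usage sketch in the style of the drawLine example above (Thickness and resolution are assumed to be defined as before):
float d = distanceToLine(p1, p2, gl_FragCoord.xy / resolution);
float lineMask = 1.0 - smoothstep(0.5 * Thickness, 1.5 * Thickness, d);
gl_FragColor = vec4(vec3(lineMask), 1.0);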
Also check out this answer:
https://stackoverflow.com/a/9246451/911207
A simple hack is to just add a jitter in the vertex shader:
gl_Position += vec4(delta, delta, delta, 0.0);
where delta is the pixel size, i.e. 1.0/viewSize.
Do the line-draw pass twice: once with zero and once with delta as the jitter (passed in as a uniform).
To draw a line in the fragment shader, we check whether the current pixel (UV) lies on the line. (Doing this with fragment shader code alone is not efficient! It is just for testing with glslsandbox.)
An acceptable UV point should satisfy two conditions:
1- The distance between (uv, pt1) should be smaller than the distance between (pt1, pt2). With this condition we create an assumed circle with its center at pt2 and radius = distance(pt2, pt1), and we also prevent drawing a line longer than distance(pt2, pt1).
2- For each UV we assume a hypothetical circle with a connection point at position ptc on the line (pt2, pt1). If the distance between UV and ptc is less than the line thickness, we select this UV as a line point.
In our code:
r = distance(uv, pt1) / distance(pt1, pt2) gives us a value between 0 and 1,
and we interpolate a point (ptc) between pt1 and pt2 with the value of r.
code:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
float line(vec2 uv, vec2 pt1, vec2 pt2,vec2 resolution)
{
float clrFactor = 0.0;
float thickness = 3.0 / max(resolution.x, resolution.y); // line thickness
float r = distance(uv, pt1) / distance(pt1, pt2);
if(r <= 1.0) // the hypothetical circle must lie within range of the vector (pt2, pt1)
{
vec2 ptc = mix(pt1, pt2, r); // ptc = connection point of the hypothetical circle and the line, calculated by interpolation
float dist = distance(ptc, uv); // distance between the current pixel (uv) and ptc
if(dist < thickness / 2.0)
{
clrFactor = 1.0;
}
}
return clrFactor;
}
void main()
{
vec2 uv = gl_FragCoord.xy / resolution.xy; //current point
//uv = current pixel
// 0 < uv.x < 1 , 0 < uv.y < 1
// left-down= (0,0)
// right-top= (1,1)
vec2 pt1 = vec2(0.1, 0.1); //line point1
vec2 pt2 = vec2(0.8, 0.7); //line point2
float lineFactor = line(uv, pt1, pt2, resolution.xy);
vec3 color = vec3(.5, 0.7 , 1.0);
gl_FragColor = vec4(color * lineFactor , 1.);
}