I found a 2D light shader on Shadertoy which can be used to create a 2D (point) light:
https://www.shadertoy.com/view/4dfXDn
vec4 drawLight(vec2 p, vec2 pos, vec4 color, float range)
{
    float ld = length(p - pos);
    if (ld > range) return vec4(0.0);

    float fall = (range - ld) / range;
    fall *= fall;
    return fall * color;
}

void main() {
    vec2 p = gl_FragCoord.xy;
    vec2 c = u_resolution.xy / 2.0;

    vec4 col = vec4(0.0);

    vec2 lightPos = vec2(c);
    vec4 lightCol = vec4(1.000, 0.25, 0.000, 1.000);
    col += drawLight(p, lightPos, lightCol, 400.0);

    gl_FragColor = col;
}
However, I can't figure out how to make another "shape" of light using this.
How can I modify the drawLight function to take another parameter that modifies the original light, so that, say, 1.0 is a full circle light and 0.25 is a quad-light?
In your code, the line

float ld = length(p - pos);

is computing the distance from the light uniformly in all directions (Euclidean distance). If you want different shading, change that equation...
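For instance, here is a minimal sketch (my addition, not part of the original answer) that blends between the Euclidean metric and the Chebyshev metric with a hypothetical shape parameter, giving the original round light at shape = 1.0 and a square light at shape = 0.0:

vec4 drawLightShaped(vec2 p, vec2 pos, vec4 color, float range, float shape)
{
    vec2 d = p - pos;
    float euclid = length(d);                 // round falloff (original metric)
    float cheby  = max(abs(d.x), abs(d.y));   // square falloff (Chebyshev metric)
    float ld = mix(cheby, euclid, shape);     // blend between the two metrics

    if (ld > range) return vec4(0.0);
    float fall = (range - ld) / range;
    fall *= fall;
    return fall * color;
}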
You can also compute the minimal perpendicular distance to a polygon-shaped light, like this:
Vertex:
// Vertex
#version 420 core
layout(location=0) in vec2 pos;     // glVertex2f <-1,+1>
layout(location=8) in vec2 txr;     // glTexCoord2f Unit0 <0,1>
out smooth vec2 t1;                 // fragment position <0,1>
void main()
{
    t1=txr;
    gl_Position=vec4(pos,0.0,1.0);
}
Fragment:
// Fragment
#version 420 core
uniform sampler2D txrmap;       // texture unit
uniform vec2 t0;                // mouse position <0,1>
in smooth vec2 t1;              // fragment position <0,1>
out vec4 col;

// light shape
const int n=3;
const float ldepth=0.25;        // distance to full dissipation of light
const vec2 lpolygon[n]=         // vertexes CCW
{
    vec2(-0.05,-0.05),
    vec2(+0.05,-0.05),
    vec2( 0.00,+0.05),
};

void main()
{
    int i;
    float l;
    vec2 p0,p1,p,n01;

    // compute perpendicular distance to edges of polygon
    for (p1=vec2(lpolygon[n-1]),l=0.0,i=0;i<n;i++)
    {
        p0=p1; p1=lpolygon[i];              // p0,p1 = edge of polygon
        p=p1-p0;                            // edge direction
        n01=normalize(vec2(+p.y,-p.x));     // edge normal CCW
        // n01=normalize(vec2(-p.y,+p.x));  // edge normal CW
        l=max(dot(n01,t1-t0-p0),l);
    }

    // convert to light strength
    l = max(ldepth-l,0.0)/ldepth;
    l = l*l*l;

    // render
    // col = l*texture2D(txrmap,t1);
    col = l*vec4(1.0,1.0,1.0,0.0);
}
I used similar code from How to implement 2D raycasting light effect in GLSL as a starting point, hence the slightly different variable names.
The idea is to compute the perpendicular distance from the fragment to each edge of your light shape and pick the biggest one, as the others face the wrong side.
The lpolygon[n] array is the shape of the light relative to the light position t0, and t1 is the fragment position. It must be in CCW winding, otherwise you would need to
negate the normal computation (my view is flipped, so it might look like CW but it is not). I used the range <0,1> so it can be used directly as a texture coordinate...
Here is a screenshot:
Here are some explanations:
For an analytical shape you need to use an analytical distance computation ...
If linear interpolation happens during the rasterization stage in the OpenGL pipeline, and the vertices have already been transformed to screen-space, where does the depth information used for perspectively correct interpolation come from?
Can anybody give a detailed description of how OpenGL goes from screen-space primitives to fragments with correctly interpolated values?
The output of a vertex shader is a four-component vector, vec4 gl_Position. From Section 13.6 "Coordinate Transformations" of the core GL 4.4 spec:
Clip coordinates for a vertex result from shader execution, which yields a vertex coordinate gl_Position.
Perspective division on clip coordinates yields normalized device coordinates, followed by a viewport transformation (see section 13.6.1) to convert these coordinates into window coordinates.
OpenGL does the perspective divide as
device.xyz = gl_Position.xyz / gl_Position.w
But then keeps the 1 / gl_Position.w as the last component of gl_FragCoord:
gl_FragCoord.xyz = device.xyz scaled to viewport
gl_FragCoord.w = 1 / gl_Position.w
This transform is bijective, so no depth information is lost. In fact as we see below, the 1 / gl_Position.w is crucial for perspective correct interpolation.
Short introduction to barycentric coordinates
Given a triangle (P0, P1, P2) one can parametrize all the points inside the triangle by the linear combinations of the vertices:
P(b0,b1,b2) = P0*b0 + P1*b1 + P2*b2
where b0 + b1 + b2 = 1 and b0 ≥ 0, b1 ≥ 0, b2 ≥ 0.
Given a point P inside the triangle, the coefficients (b0, b1, b2) that satisfy the equation above are called the barycentric coordinates of that point. For non-degenerate triangles they are unique, and can be calculated as quotients of the areas of the following triangles:
b0(P) = area(P, P1, P2) / area(P0, P1, P2)
b1(P) = area(P0, P, P2) / area(P0, P1, P2)
b2(P) = area(P0, P1, P) / area(P0, P1, P2)
Each bi can be thought of as 'how much of Pi has to be mixed in'. So b = (1,0,0), (0,1,0) and (0,0,1) are the vertices of the triangle, (1/3, 1/3, 1/3) is the barycenter, and so on.
Given an attribute (f0, f1, f2) on the vertices of the triangle, we can now interpolate it over the interior:
f(P) = f0*b0(P) + f1*b1(P) + f2*b2(P)
This is a linear function of P, therefore it is the unique linear interpolant over the given triangle. The math also works in either 2D or 3D.
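As a concrete sketch (my own illustration with hypothetical helper names, not something taken from the spec), the area quotients above can be evaluated with 2D cross products, e.g. in GLSL:

// Signed area of triangle (a, b, c) via the 2D cross product.
float signedArea(vec2 a, vec2 b, vec2 c)
{
    vec2 ab = b - a, ac = c - a;
    return 0.5 * (ab.x * ac.y - ab.y * ac.x);
}

// Barycentric coordinates of point p with respect to triangle (p0, p1, p2).
vec3 barycentric(vec2 p, vec2 p0, vec2 p1, vec2 p2)
{
    float total = signedArea(p0, p1, p2);
    return vec3(signedArea(p,  p1, p2),
                signedArea(p0, p,  p2),
                signedArea(p0, p1, p )) / total;
}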
Perspective correct interpolation
Let's say we fill a projected 2D triangle on the screen. For every fragment we have its window coordinates. First we calculate its barycentric coordinates by inverting the P(b0,b1,b2) function, which is a linear function in window coordinates. This gives us the barycentric coordinates of the fragment on the 2D triangle projection.
Perspective correct interpolation of an attribute would vary linearly in the clip coordinates (and by extension, world coordinates). For that we need to get the barycentric coordinates of the fragment in clip space.
As it happens (see [1] and [2]), the depth of the fragment is not linear in window coordinates, but the depth inverse (1/gl_Position.w) is. Accordingly the attributes and the clip-space barycentric coordinates, when weighted by the depth inverse, vary linearly in window coordinates.
Therefore, we compute the perspective corrected barycentric by:
    ( b0 / gl_Position[0].w, b1 / gl_Position[1].w, b2 / gl_Position[2].w )
B = -----------------------------------------------------------------------
      b0 / gl_Position[0].w + b1 / gl_Position[1].w + b2 / gl_Position[2].w
and then use it to interpolate the attributes from the vertices.
Note: GL_NV_fragment_shader_barycentric exposes the device-linear barycentric coordinates through gl_BaryCoordNoPerspNV and the perspective corrected through gl_BaryCoordNV.
Implementation
Here is C++ code that rasterizes and shades a triangle on the CPU, in a manner similar to OpenGL. I encourage you to compare it with the shaders listed below:
struct Renderbuffer { int w, h, ys; void *data; };
struct Vert { vec4 position, texcoord, color; };
struct Varying { vec4 texcoord, color; };
void vertex_shader(const Vert &in, vec4 &gl_Position, Varying &OUT) {
    OUT.texcoord = in.texcoord;
    OUT.color = in.color;
    gl_Position = vec4(in.position.x, in.position.y, -2*in.position.z - 2*in.position.w, -in.position.z);
}
void fragment_shader(vec4 &gl_FragCoord, const Varying &IN, vec4 &OUT) {
    OUT = IN.color;
    vec2 wrapped = IN.texcoord.xy - floor(IN.texcoord.xy);
    bool brighter = (wrapped[0] < 0.5) != (wrapped[1] < 0.5);
    if(!brighter)
        OUT.rgb *= 0.5f;
}
// render output unit/render operations pipeline
void rop(Renderbuffer &buf, int x, int y, const vec4 &c) {
    uint8_t *p = (uint8_t*)buf.data + buf.ys*(buf.h - y - 1) + 4*x;
    p[0] = linear_to_srgb8(c[0]);
    p[1] = linear_to_srgb8(c[1]);
    p[2] = linear_to_srgb8(c[2]);
    p[3] = lround(c[3]*255);
}
void draw_triangle(Renderbuffer &color_attachment, const box2 &viewport, const Vert *verts) {
    auto area = [](const vec2 &p0, const vec2 &p1, const vec2 &p2) { return cross(p1 - p0, p2 - p0); };
    auto interpolate = [](const auto a[3], auto p, const vec3 &coord) { return coord.x*a[0].*p + coord.y*a[1].*p + coord.z*a[2].*p; };

    Varying perVertex[3];
    vec4 gl_Position[3];

    box2 aabb = { viewport.hi, viewport.lo };
    for(int i = 0; i < 3; ++i) {
        vertex_shader(verts[i], gl_Position[i], perVertex[i]);

        // convert to normalized device coordinates
        gl_Position[i].w = 1/gl_Position[i].w;
        gl_Position[i].xyz *= gl_Position[i].w;

        // convert to window coordinates
        gl_Position[i].xy = mix(viewport.lo, viewport.hi, 0.5f*(gl_Position[i].xy + 1.0f));

        aabb = join(aabb, gl_Position[i].xy);
    }

    const float denom = 1/area(gl_Position[0].xy, gl_Position[1].xy, gl_Position[2].xy);

    // loop over all pixels in the rectangle bounding the triangle
    const ibox2 iaabb = lround(aabb);
    for(int y = iaabb.lo.y; y < iaabb.hi.y; ++y)
    for(int x = iaabb.lo.x; x < iaabb.hi.x; ++x)
    {
        vec4 gl_FragCoord;
        gl_FragCoord.xy = vec2(x, y) + 0.5f;

        // fragment barycentric coordinates in window coordinates
        const vec3 barycentric = denom*vec3(
            area(gl_FragCoord.xy, gl_Position[1].xy, gl_Position[2].xy),
            area(gl_Position[0].xy, gl_FragCoord.xy, gl_Position[2].xy),
            area(gl_Position[0].xy, gl_Position[1].xy, gl_FragCoord.xy)
        );

        // discard fragment outside the triangle. this doesn't handle edges correctly.
        if(barycentric.x < 0 || barycentric.y < 0 || barycentric.z < 0)
            continue;

        // interpolate inverse depth linearly
        gl_FragCoord.z = interpolate(gl_Position, &vec4::z, barycentric);
        gl_FragCoord.w = interpolate(gl_Position, &vec4::w, barycentric);

        // clip fragments to the near/far planes (as if by GL_ZERO_TO_ONE)
        if(gl_FragCoord.z < 0 || gl_FragCoord.z > 1)
            continue;

        // convert to perspective correct (clip-space) barycentric
        const vec3 perspective = 1/gl_FragCoord.w*barycentric*vec3(gl_Position[0].w, gl_Position[1].w, gl_Position[2].w);

        // interpolate attributes
        Varying varying = {
            interpolate(perVertex, &Varying::texcoord, perspective),
            interpolate(perVertex, &Varying::color, perspective),
        };

        vec4 color;
        fragment_shader(gl_FragCoord, varying, color);
        rop(color_attachment, x, y, color);
    }
}
int main(int argc, char *argv[]) {
    Renderbuffer buffer = { 512, 512, 512*4 };
    buffer.data = calloc(buffer.ys, buffer.h);

    // VAO interleaved attributes buffer
    Vert verts[] = {
        { { -1, -1, -2, 1 }, {  0,  0, 0, 1 }, { 0, 0, 1, 1 } },
        { {  1, -1, -1, 1 }, { 10,  0, 0, 1 }, { 1, 0, 0, 1 } },
        { {  0,  1, -1, 1 }, {  0, 10, 0, 1 }, { 0, 1, 0, 1 } },
    };

    box2 viewport = { 0, 0, buffer.w, buffer.h };

    draw_triangle(buffer, viewport, verts);

    stbi_write_png("out.png", buffer.w, buffer.h, 4, buffer.data, buffer.ys);
}
OpenGL shaders
Here are the OpenGL shaders used to generate the reference image.
Vertex shader:
#version 450 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec4 texcoord;
layout(location = 2) in vec4 color;
out gl_PerVertex { vec4 gl_Position; };
layout(location = 0) out Varying { vec4 texcoord; vec4 color; } OUT;
void main() {
    OUT.texcoord = texcoord;
    OUT.color = color;
    gl_Position = vec4(position.x, position.y, -2*position.z - 2*position.w, -position.z);
}
Fragment shader:
#version 450 core
layout(location = 0) in Varying { vec4 texcoord; vec4 color; } IN;
layout(location = 0) out vec4 OUT;
void main() {
    OUT = IN.color;
    vec2 wrapped = fract(IN.texcoord.xy);
    bool brighter = (wrapped.x < 0.5) != (wrapped.y < 0.5);
    if(!brighter)
        OUT.rgb *= 0.5;
}
Results
Here are the almost identical images generated by the C++ (left) and OpenGL (right) code:
The differences are caused by different precision and rounding modes.
For comparison, here is one that is not perspective correct (uses barycentric instead of perspective for the interpolation in the code above):
The formula that you will find in the GL specification (look on page 427; the link is the current 4.4 spec, but it has always been that way) for perspective-corrected interpolation of the attribute value in a triangle is:
    a * f_a / w_a + b * f_b / w_b + c * f_c / w_c
f = ---------------------------------------------
              a / w_a + b / w_b + c / w_c
where a,b,c denote the barycentric coordinates of the point in the triangle we are interpolating for (a,b,c >=0, a+b+c = 1), f_i the attribute value at vertex i, and w_i the clip space w coordinate of vertex i. Note that the barycentric coordinates are calculated only for the 2D projection of the window space coords of the triangle (so z is ignored).
This is what the formulas that ybungalowbill gave in his fine answer boil down to, in the general case, with an arbitrary projection axis. Actually, the last row of the projection matrix defines just the projection axis the image plane will be orthogonal to, and the clip-space w component is just the dot product between the vertex coords and that axis.
In the typical case, the projection matrix has (0,0,-1,0) as its last row, so it transforms such that w_clip = -z_eye, and this is what ybungalowbill used. However, since w is what we actually divide by (that is the only nonlinear step in the whole transformation chain), this will work for any projection axis. It will also work in the trivial case of orthogonal projections, where w is always 1 (or at least constant).
Note a few things for an efficient implementation of this. The inversion 1/w_i can be pre-calculated per vertex (let's call them q_i in the following); it does not have to be re-evaluated per fragment, and it is basically free since we divide by w anyway when going into NDC space, so we can save that value. The GL spec never describes how a certain feature is to be implemented internally, but the fact that the screen-space coordinates are accessible in gl_FragCoord.xyz, and that gl_FragCoord.w is guaranteed to give the (linearly interpolated) 1/w clip-space coordinate, is quite revealing here. That per-fragment 1/w value is actually the denominator of the formula given above.
The factors a/w_a, b/w_b and c/w_c are each used twice in the formula, and they are constant for every attribute, no matter how many attributes there are to be interpolated. So, per fragment, you can calculate a' = q_a * a, b' = q_b * b and c' = q_c * c and get
    a' * f_a + b' * f_b + c' * f_c
f = ------------------------------
           a' + b' + c'
So the perspective interpolation boils down to
3 additional multiplications,
2 additional additions, and
1 additional division
per fragment.
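As a sketch in GLSL-style code (hypothetical helper names, not from the spec), the factored form could look like this:

// Factored perspective-correct interpolation.
// q holds the precomputed per-vertex 1/w values (q_a, q_b, q_c),
// bary the window-space barycentric coordinates (a, b, c).
vec3 perspectiveWeights(vec3 bary, vec3 q)
{
    vec3 w = bary * q;             // a' = a*q_a, b' = b*q_b, c' = c*q_c  (3 mults)
    return w / (w.x + w.y + w.z);  // 2 adds + 1 division, shared by all attributes
}

// Each attribute then costs only the usual weighted sum:
vec4 interpolateAttr(vec3 w, vec4 f_a, vec4 f_b, vec4 f_c)
{
    return w.x * f_a + w.y * f_b + w.z * f_c;
}

The weights are computed once per fragment and reused for every attribute, which is where the counts above come from.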
I don't quite know how to put this; in essence, I found a bloom shader: https://threejs.org/examples/webgl_postprocessing_unreal_bloom.html
It works fine, but not quite the way I need: it picks out only the bright areas and highlights them.
I need to highlight not the brightness, but the intensity of the color.
For example:
In the picture I highlighted a circle where the selection should be. Any ideas how to do this?
Thanks in advance.
You could use an RGB-to-HSV function to get the hue, saturation, and value of a pixel, then take the distance from that to the target color to decide whether to bloom or not.
From this answer:
vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));

    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}
Therefore
// PSEUDO CODE!
uniform vec3 targetHSV; // supply hue, saturation, value in 0 to 1 range for each.
                        // Red = 0,1,1

vec3 color = texture2D(renderTarget, uv).rgb;
vec3 hsv = rgb2hsv(color);
float hueDist = abs(hsv.x - targetHSV.x);

// hue wraps
if (hueDist > 0.5) {
    hueDist = 1. - hueDist;
}

// 2x for hue because it's at most .5 dist?
float dist = length(vec3(hueDist * 2., hsv.yz - targetHSV.yz));

// now use dist < threshold or smoothstep or something to decide
// whether value contributes to bloom
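For example (my own sketch, not from the original answer; threshold and softness are made-up uniforms), the distance could be fed through smoothstep to get a soft bloom mask:

// PSEUDO CODE, continued
uniform float threshold; // hypothetical: largest distance that still blooms fully
uniform float softness;  // hypothetical: width of the falloff band

// 1.0 when the pixel is close to targetHSV, fading to 0.0 past the threshold
float bloomWeight = 1.0 - smoothstep(threshold, threshold + softness, dist);
vec3 bloomColor = color * bloomWeight; // feed this into the bloom blur pass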
Is it possible to draw a perfect horizontal line of a single pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen-aligned quad?
I have found many solutions with smoothstep or more complex functions, but I am looking for an elegant and fast way of doing this.
A solution I have made uses an exponential function and makes it steeper, but it has many shortcomings that I don't want (the line is not really one pixel high due to the exponential function, and it is rather tricky to get it right). Here is the GLSL code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;

    // a centered horizontal line
    float v = pow(uv.y - 0.5, 2.);

    // make it steeper
    v *= 100000.;

    // make it white on a black background
    v = clamp(1. - v, 0., 1.);

    fragColor = vec4(v);
}
Here is the Shadertoy code which executes this: https://www.shadertoy.com/view/Ms2cWh
What I would like:
a perfect horizontal line drawn at a specific Y position, in pixel units or normalized
its intensity limited to the [0, 1] range without clamping
a fast way of doing it
If you just want to draw a perfect horizontal line of a single pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen-aligned quad, then maybe:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int iPosition = 250;   // the y coord in pixels
    int iThickness = 10;   // the thickness in pixels

    vec2 uv = fragCoord.xy / iResolution.xy;
    float v = float( iPosition ) / iResolution.y;
    float vHalfHeight = ( float( iThickness ) / iResolution.y ) / 2.;

    if ( uv.y > v - vHalfHeight && uv.y < v + vHalfHeight )
        fragColor = vec4(1.,1.,1.,1.); // or whatever color
    else
        fragColor = vec4(0.,0.,0.,1.); // otherwise the output would be left undefined
}
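If the line really has to be exactly one pixel high, a variant of the above (just a sketch; the row index is a hard-coded example value) can compare the integer pixel row from fragCoord directly:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    float linePos = 250.0; // chosen y position in whole pixels (example value)

    // fragCoord.y is the pixel center (row index + 0.5), so flooring it gives the row
    float onLine = 1.0 - step(0.5, abs(floor(fragCoord.y) - linePos));

    fragColor = vec4(onLine);
}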
Here is a neat solution without branching. I don't know if it is really faster than with branching though.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    float py = iMouse.y/iResolution.y;
    float hh = 1./iResolution.y;

    // can also be replaced with step(0., hh-abs(uv.y-py))
    float v = sign(hh-abs(uv.y-py));

    fragColor = vec4(v);
}
I know the question was answered properly before me, but in case someone is looking for a way to render a textured line in a pixel perfect way I wrote an article with some examples.
It's about pixel perfect UI in general, but using it for a line is just a matter of clamping/repeating texture sampling. Also I'm using Unity, but there is no reason the method would be exclusive to it.
I'm working with a GPU based particle system.
There are 1 million particles computed by passing in the x,y,z positions as rgb values on a 1024*1024 texture. The same is being done for their velocities.
I'm trying to make them move from an arbitrary point to a point on sphere.
My current shader, which I'm using for the computation, is moving from one point to another directly.
I'm not using the mass or velocity texture at the moment
// float mass = texture2D( posArray, texCoord.st).a;
vec3 p = texture2D( posArray, texCoord.st).rgb;
// vec3 v = texture2D( velArray, texCoord.st).rgb;
// map into 'cinder space'
p = (p * - 1.0) + 0.5;
// vec3 acc = -0.0002*p; // Centripetal force
// vec3 ayAcc = 0.00001*normalize(cross(vec3(0, 1 ,0),p)); // Angular force
// vec3 new_v = v + mass*(acc+ayAcc);
vec3 new_p = p + ((moveToPos - p) / duration);
// map out of 'cinder space'
new_p = (new_p - 0.5) * -1.0;
gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
//gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
moveToPos is the mouse pointer as a float (0.0f > 1.0f)
the coordinate system is being translated from (0.5,0.5 > -0.5,-0.5) to (0.0,0.0 > 1.0,1.0)
I'm completely new to vector maths, and the calculations are confusing me. I know I need to use the formula:
x = R sin ϕ cos θ
y = R sin ϕ sin θ
z = R cos ϕ
but calculating the angles from moveToPos(xyz) to p(xyz) remains a problem.
I wrote the original version of this GPU-particles shader a few years back (now at https://github.com/num3ric/Cinder-Particles). Here is one possible approach to your problem.
I would start with a fragment shader applying a spring force to the particles so that they are more or less constrained to the surface of a sphere. Something like this:
uniform sampler2D posArray;
uniform sampler2D velArray;
varying vec4 texCoord;

void main(void)
{
    float mass = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    float x0 = 0.5; // distance from center of sphere to be maintained
    float x = distance(p, vec3(0,0,0)); // current distance
    vec3 acc = -0.0002*(x - x0)*p; // apply spring force (Hooke's law)

    vec3 new_v = v + mass*(acc);
    new_v = 0.999*new_v; // friction to slow down velocities over time
    vec3 new_p = p + new_v;

    // Render to positions texture
    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, mass);
    // Render to velocities texture
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, 1.0);
}
Then, I would pass a new vec3 uniform for the mouse position intersecting a sphere of the same radius (done outside the shader in Cinder).
Now, combine this with the previous soft spring constraint: you could add a tangential force towards this attraction point. Start with a simple (mousePos - p) acceleration, and then figure out a way to make this force exclusively tangential using cross products.
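A possible sketch of that tangential part (my own illustration, not the original Cinder code; mousePos stands for the new uniform, and p, x, x0 reuse the names from the shader above; projecting out the radial component is equivalent to the double cross product):

// Keep only the component of the attraction that is tangent to the sphere,
// so the spring force controls the radius and this force slides particles along it.
vec3 radial = normalize(p);                                // outward direction at the particle
vec3 toMouse = mousePos - p;                               // raw pull towards the mouse point
vec3 tangential = toMouse - dot(toMouse, radial)*radial;   // strip the radial part
vec3 acc = -0.0002*(x - x0)*p + 0.0005*tangential;         // spring force plus hypothetical tangential gain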
I'm not sure how the spherical coordinates approach would work here.
x = R sin ϕ cos θ
y = R sin ϕ sin θ
z = R cos ϕ
Where do you get ϕ and θ? The textures store the positions and velocities in Cartesian coordinates. Plus, converting back and forth is not really an option.
My explanation could be too advanced if you are not comfortable with vectors. Unfortunately, shaders and particle animation are very mathematical by nature.
Here is a solution that I've worked out. It works; however, if I move the center point of the spheres outside their own bounds, I lose particles.
#define NPEOPLE 5
uniform sampler2D posArray;
uniform sampler2D velArray;
uniform vec3 centerPoint[NPEOPLE];
uniform float radius[NPEOPLE];
uniform float duration;
varying vec4 texCoord;

void main(void) {
    float personToGet = texture2D( posArray, texCoord.st).a;
    vec3 p = texture2D( posArray, texCoord.st).rgb;
    float mass = texture2D( velArray, texCoord.st).a;
    vec3 v = texture2D( velArray, texCoord.st).rgb;

    // map into 'cinder space'
    p = (p * - 1.0) + 0.5;

    vec3 vec_p = p - centerPoint[int(personToGet)];
    float len_vec_p = sqrt( ( vec_p.x * vec_p.x ) + (vec_p.y * vec_p.y) + (vec_p.z * vec_p.z) );
    vec_p = ( ( radius[int(personToGet)] /* mass */ ) / len_vec_p ) * vec_p;

    vec3 new_p = ( vec_p + centerPoint[int(personToGet)] );
    new_p = p + ( (new_p - p) / (duration) );

    // map out of 'cinder space'
    new_p = (new_p - 0.5) * -1.0;

    vec3 new_v = v;

    gl_FragData[0] = vec4(new_p.x, new_p.y, new_p.z, personToGet);
    gl_FragData[1] = vec4(new_v.x, new_v.y, new_v.z, mass);
}
I'm passing in arrays of 5 vec3f's and a float mapped as 5 center points and radii.
The particles are setup with a random position at the beginning and move towards the number in the array mapped to the alpha value of the position array.
My aim is to pass in blob data from openCV and map the spheres to people on a camera feed.
It's really uninteresting visually at the moment, so I will need to use the velocity texture to add to the behaviour of the particles.