How can I give a border blur effect with GLSL? - three.js

To give it the feel of a real beam projector, I created a shader using a RawShaderMaterial that fades the texture out toward the edges.
However, I'm getting diagonal lines in each corner, and I'm having trouble getting it to work.
Can anyone help me modify my code so that it works correctly? Or if you have another idea for implementing the blur shader naturally, please let me know.

Bearing in mind that vUv ranges over [0 .. 1] in both x and y, the area where the map is not faded is an axis-aligned rectangle spanning [edge .. 1 - edge] in both dimensions. We can drop all the conditionals and use the distance from vUv to that rectangle.
precision highp float;

uniform sampler2D map; // the projected texture
uniform float edge;    // fade width in uv units
uniform float opacity; // opacity inside the un-faded rectangle

varying vec2 vUv;

void main() {
    gl_FragColor = texture2D( map, vUv );
    // dx => linear distance parallel to x from vUv to the rectangle
    // (0.5, 0.5) is the center of the rectangle in uv
    // the abs and max work together to make any vUv inside the
    // rectangle return 0
    float dx = max(abs(vUv.x - 0.5) - (0.5 - edge), 0.);
    // similarly for dy, along the parallel to y
    float dy = max(abs(vUv.y - 0.5) - (0.5 - edge), 0.);
    // d is a Euclidean distance; this rounds the corners
    float d = sqrt(dx * dx + dy * dy);
    // alpha should be `opacity` at the edge of the rectangle, and also
    // inside of it (everywhere d == 0), and 0. at the edge of the plane
    // (where d == edge). So we do an inverse lerp.
    gl_FragColor.a = opacity - opacity * d / edge;
}
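One caveat: at the very corners, d can reach edge * sqrt(2), which pushes the alpha slightly negative. If that is undesirable (for example with certain blend modes), a clamped variant of the last line keeps alpha in [0, opacity]:

    // clamp the inverse lerp so alpha stays in [0, opacity] even at the corners
    gl_FragColor.a = opacity * clamp(1.0 - d / edge, 0.0, 1.0);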

Related

Creating gyroid pattern in 2D image algorithm

I'm trying to fill an image with gyroid lines of a certain thickness at a certain spacing, but math is not my area. I was able to create a sine wave and shift it a bit in the X direction to make it look like a gyroid, but it's not the same.
The idea is to stack some images with the same resolution and replicate the gyroid across 2D images, so we still have XYZ, where Z can be 0.01mm to 0.1mm per layer.
What I've tried:
</gr-replace>
int sineHeight = 100;
int sineWidth = 100;
int spacing = 100;
int radius = 10;
for (int y1 = 0; y1 < mat.Height; y1 += sineHeight + spacing)
    for (int x = 0; x < mat.Width; x++)
    {
        // Simulating first image
        int y2 = (int)(Math.Sin((double)x / sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.WhiteColor, -1, LineType.AntiAlias);
        // Simulating second image, shifted in x to look a bit more like a gyroid
        y2 = (int)(Math.Sin((double)x / sineWidth + sineWidth) * sineHeight / 2.0 + sineHeight / 2.0 + radius);
        Circle(mat, new System.Drawing.Point(x, y1 + y2), radius, EmguExtensions.GreyColor, -1, LineType.AntiAlias);
    }
Resulting in (white represents layer 1, grey layer 2):
Still, this looks nothing like a real gyroid. How can I adapt the formula to work in this space?
You have just a single ugly slice because I do not see any z in your code (which is correct as far as it goes: the surface has horizontal and vertical sine waves like this every 0.5*pi in z).
To see the 3D surface you have to raycast z ...
I would expect some conditional testing of the actually iterated x,y,z result of the gyroid equation against some small non-zero number, like if (result <= 1e-6), and drawing the stuff only then, or computing a color from the result instead. This is ideal to do in GLSL.
In case you are not familiar with GLSL and shaders: the fragment shader is executed for each pixel (called a fragment) of the rendered QUAD, so you just put the code inside your nested x,y for loops and use your x,y instead of pos (you can ignore the vertex shader, it's not important here).
You have two basic options to render this:
Blending the raycast surface pixels together, creating an X-ray-like image. It can be combined with SSS techniques to get the impression of glass or a semi-transparent material. Here is a simple GLSL example of the blending:
Vertex:
#version 400 core
in vec2 position;
out vec2 pos;
void main(void)
{
    pos = position;
    gl_Position = vec4(position.xy, 0.0, 1.0);
}
Fragment:
#version 400 core
in vec2 pos;
out vec3 out_col;
void main(void)
{
    float n, x, y, z, dz, d, i, di;
    const float scale = 2.0 * 3.1415926535897932384626433832795;
    n = 100.0;            // layers
    x = pos.x * scale;    // x position of pixel
    y = pos.y * scale;    // y position of pixel
    dz = 2.0 * scale / n; // z step
    di = 1.0 / n;         // color increment
    i = 0.0;              // color intensity
    for (z = -scale; z <= scale; z += dz) // do all layers
    {
        d  = sin(x) * cos(y); // compute gyroid equation
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        if (d <= 1e-6) i += di; // if at/inside the surface, add to color
    }
    out_col = vec3(1.0, 1.0, 1.0) * i;
}
Usage is simple: just render a 2D quad covering the screen, without any matrices, with corner pos points in the range <-1,+1>. Here is the result:
Another technique is to render the first hit to the surface, creating a mesh-like image. In order to see the details we need to add basic (double-sided) directional lighting, for which the surface normal is needed. The normal can be computed by simply partially differentiating the equation by x, y, z. As the surface is now opaque, we can stop on the first hit and also raycast just a single period in z, as anything after that is hidden anyway. Here is a simple example:
Fragment:
#version 400 core
in vec2 pos;  // input fragment (pixel) position <-1,+1>
out vec3 col; // output fragment (pixel) RGB color <0,1>
void main(void)
{
    bool _discard = true;
    float N, x, y, z, dz, d, i;
    vec3 n, l;
    const float pi = 3.1415926535897932384626433832795;
    const float scale  = 3.0 * pi; // 3.0 periods in x,y
    const float scalez = 2.0 * pi; // 1.0 period in z
    N = 200.0;               // layers per z (quality)
    x = pos.x * scale;       // <-1,+1> -> [rad]
    y = pos.y * scale;       // <-1,+1> -> [rad]
    dz = 2.0 * scalez / N;   // z step
    l = vec3(0.0, 0.0, 1.0); // light unit direction
    i = 0.0;                 // starting color intensity
    n = vec3(0.0, 0.0, 1.0); // starting normal only to get rid of warning
    for (z = 0.0; z >= -scalez; z -= dz) // raycast z through all layers in view direction
    {
        // gyroid equation
        d  = sin(x) * cos(y);
        d += sin(y) * cos(z);
        d += sin(z) * cos(x);
        // surface hit test
        if (d > 1e-6) continue; // skip if too far from surface
        _discard = false;       // remember that surface was hit
        // normal = gradient of the gyroid equation
        n.x = cos(x) * cos(y) - sin(z) * sin(x); // partial derivative by x
        n.y = cos(y) * cos(z) - sin(x) * sin(y); // partial derivative by y
        n.z = cos(z) * cos(x) - sin(y) * sin(z); // partial derivative by z
        break;                  // stop raycasting
    }
    // skip rendering if no hit with surface (hole)
    if (_discard) discard;
    // directional lighting
    n = normalize(n);
    i = abs(dot(l, n));
    // ambient + directional lighting
    i = 0.3 + (0.7 * i);
    // output fragment (render pixel)
    gl_FragDepth = z;              // depth (optional)
    col = vec3(1.0, 1.0, 1.0) * i; // color
}
I hope I did not make an error in the partial derivatives. Here is the result:
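For reference, the gyroid field and its gradient (used above as the surface normal) are:

g(x, y, z) = sin(x) cos(y) + sin(y) cos(z) + sin(z) cos(x)
∂g/∂x = cos(x) cos(y) - sin(z) sin(x)
∂g/∂y = cos(y) cos(z) - sin(x) sin(y)
∂g/∂z = cos(z) cos(x) - sin(y) sin(z)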
[Edit1]
Based on your code I see it like this (X-ray-like blending):
var mat = EmguExtensions.InitMat(new System.Drawing.Size(2000, 1080));
double zz, dz, d, i, di = 0;
// note: these cannot be const, since mat.Width/Height are not compile-time constants
double scalex = 2.0 * Math.PI / mat.Width;
double scaley = 2.0 * Math.PI / mat.Height;
const double scalez = 2.0 * Math.PI;
uint layerCount = 100; // layers
for (int y = 0; y < mat.Height; y++)
{
    double yy = y * scaley; // y position of pixel
    for (int x = 0; x < mat.Width; x++)
    {
        double xx = x * scalex;         // x position of pixel
        dz = 2.0 * scalez / layerCount; // z step
        di = 1.0 / layerCount;          // color increment
        i = 0.0;                        // color intensity
        for (zz = -scalez; zz <= scalez; zz += dz) // do all layers
        {
            d = Math.Sin(xx) * Math.Cos(yy); // compute gyroid equation
            d += Math.Sin(yy) * Math.Cos(zz);
            d += Math.Sin(zz) * Math.Cos(xx);
            if (d > 1e-6) continue;
            i += di; // if at/inside the surface, add to color
        }
        i *= 255.0;
        mat.SetByte(x, y, (byte)Math.Min(i, 255.0)); // clamp to avoid byte overflow
    }
}

How to build my funny timeline?

Building my responsive website, I would like to build my funny timeline, but I cannot come up with a solution.
It would be a sprite such as a rocket or flying saucer taking off from the bottom middle of the page, trailing smoke.
The smoke would more or less remain and reveal my timeline.
Sketch
Does anyone have an idea how to make that possible?
To simulate smoke, you have to use a particle system.
As you may know, WebGL is able to draw triangles, lines and points.
The last one is what we need. The smoke is made of hundreds of semi-transparent white disks of slightly different sizes. Each point is defined by 7 attributes:
x, y: starting position.
vx, vy: direction.
radius: maximal radius.
life: number of milliseconds before it disappears.
delay: number of milliseconds to wait before its birth.
One trick is to create points along a vertical centered axis. The higher you go, the more the delay increases. The other trick is to make the point more and more transparent as it reaches its end of life.
Here is how you create such vertices:
function createVertices() {
    var x, y, vx, vy, radius, life, delay;
    var vertices = [];
    for( delay = 0; delay < 1; delay += 0.01 ) {
        for( var loops = 0; loops < 5; loops++ ) {
            // Going left. (rnd(a[, b]) is a random-in-range helper from the linked fiddle.)
            x = rnd(0.01);
            y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
            vx = -rnd(0, 1.5) * 0.0001;
            vy = -rnd(0.001);
            radius = rnd(0.1, 0.25) / 1000;
            life = rnd(2000, 5000);
            vertices.push( x, y, vx, vy, radius, life, delay );
            // Going right.
            x = -rnd(0.01);
            y = (2.2 * delay - 1) + rnd(-0.01, 0.01);
            vx = rnd(0, 1.5) * 0.0001;
            vy = -rnd(0.001);
            radius = rnd(0.1, 0.25) / 1000;
            life = rnd(2000, 5000);
            vertices.push( x, y, vx, vy, radius, life, delay );
        }
    }
    var buff = gl.createBuffer();
    gl.bindBuffer( gl.ARRAY_BUFFER, buff );
    gl.bufferData( gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW );
    return Math.floor( vertices.length / 7 );
}
As you can see, I create points going right and points going left to get a growing fuzzy triangle.
Then you need a vertex shader controlling the position and size of the points.
WebGL provides the output variable gl_PointSize, which is the size (in pixels) of the square to draw for the current point.
uniform float uniWidth;
uniform float uniHeight;
uniform float uniTime;
attribute vec2 attCoords;
attribute vec2 attDirection;
attribute float attRadius;
attribute float attLife;
attribute float attDelay;
varying float varAlpha;
const float PERIOD = 10000.0;
const float TRAVEL_TIME = 2000.0;
void main() {
    float time = mod( uniTime, PERIOD );
    time -= TRAVEL_TIME * attDelay;
    if( time < 0.0 || time > attLife ) return;
    vec2 pos = attCoords + time * attDirection;
    gl_Position = vec4( pos.xy, 0, 1 );
    gl_PointSize = time * attRadius * min( uniWidth, uniHeight );
    varAlpha = 1.0 - (time / attLife);
}
Finally, the fragment shader displays the point in white, but the farther you get from the center, the more transparent the fragments become.
To know where you are in the square drawn for the current point, you can read the global WebGL variable gl_PointCoord.
precision mediump float;
varying float varAlpha;
void main() {
    float x = gl_PointCoord.x - 0.5;
    float y = gl_PointCoord.y - 0.5;
    float radius = x * x + y * y;
    if( radius > 0.25 ) discard;
    float alpha = varAlpha * 0.8 * (0.25 - radius);
    gl_FragColor = vec4(1, 1, 1, alpha);
}
Here is a live example : https://jsfiddle.net/m1a9qry6/1/

Three.js Shader color threshold

I do not know how to say this correctly; in essence, I found a bloom shader: https://threejs.org/examples/webgl_postprocessing_unreal_bloom.html
It works fine, but not quite the way I need: it picks out only bright areas and highlights.
I don't need to select by brightness; I need to select by the intensity of the color.
For example:
In the picture I highlighted a circle where the selection should be. Any ideas how to do this?
Thanks in advance)
You could use an RGB-to-HSV function to get the hue, saturation, and value of a pixel, then take the distance from that to a target color to decide whether to bloom or not.
From this answer:
vec3 rgb2hsv(vec3 c)
{
    vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
    vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
    vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
    float d = q.x - min(q.w, q.y);
    float e = 1.0e-10;
    return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}
Therefore:
// PSEUDO CODE!
uniform vec3 targetHSV; // supply hue, saturation, value in 0 to 1 range for each.
                        // Red = 0, 1, 1
vec3 color = texture2D(renderTarget, uv).rgb;
vec3 hsv = rgb2hsv(color);
float hueDist = abs(hsv.x - targetHSV.x);
// hue wraps
if (hueDist > 0.5) {
    hueDist = 1. - hueDist;
}
// 2x for hue because it's at most .5 dist
float dist = length(vec3(hueDist * 2., hsv.yz - targetHSV.yz));
// now use dist < threshold or smoothstep or something to decide
// whether the value contributes to bloom
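A minimal sketch of wrapping that into a selection function, using the rgb2hsv above (threshold is an assumed uniform, and the smoothstep edges are arbitrary):

uniform vec3 targetHSV;  // target hue, saturation, value, each in [0, 1]
uniform float threshold; // assumed uniform: max HSV distance that still blooms

float bloomMask(vec3 color) {
    vec3 hsv = rgb2hsv(color);
    float hueDist = abs(hsv.x - targetHSV.x);
    if (hueDist > 0.5) hueDist = 1.0 - hueDist; // hue wraps
    float dist = length(vec3(hueDist * 2.0, hsv.yz - targetHSV.yz));
    // 1.0 near the target color, fading to 0.0 at the threshold
    return 1.0 - smoothstep(threshold * 0.5, threshold, dist);
}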

Drawing a perfect horizontal line at a specific position with a fragment shader

Is it possible to draw a perfect horizontal line of a single pixel height, at any chosen position on the vertical axis, with a fragment shader applied to a screen-aligned quad?
I have found many solutions with smoothstep or more complex functions, but I am looking for an elegant and fast way of doing this.
One solution I made uses a steep power function, but it has shortcomings I don't want (the line is not really one pixel high because of the power function, and it is rather tricky to get right). Here is the GLSL code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    // a centered horizontal line
    float v = pow(uv.y - 0.5, 2.);
    // make it steeper
    v *= 100000.;
    // make it white on a black background
    v = clamp(1. - v, 0., 1.);
    fragColor = vec4(v);
}
Here is the shadertoy code which executes this: https://www.shadertoy.com/view/Ms2cWh
What I would like:
a perfect horizontal line drawn at a specific Y position, in pixel units or normalized
its intensity limited to the [0, 1] range without clamping
a fast way of doing it
If you just want to:
draw a perfect horizontal line of a single pixel height at any chosen
position on the vertical axis with a fragment shader applied to a
screen aligned quad
, then maybe:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int iPosition = 250;  // the y coord in pixels
    int iThickness = 10;  // the thickness in pixels
    vec2 uv = fragCoord.xy / iResolution.xy;
    float v = float( iPosition ) / iResolution.y;
    float vHalfHeight = ( float( iThickness ) / iResolution.y ) / 2.;
    if ( uv.y > v - vHalfHeight && uv.y < v + vHalfHeight )
        fragColor = vec4( 1., 1., 1., 1. ); // or whatever color
    else
        fragColor = vec4( 0., 0., 0., 1. ); // fragColor must always be written
}
Here is a neat solution without branching. I don't know if it is really faster than the branching version, though.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    float py = iMouse.y / iResolution.y;
    float hh = 1. / iResolution.y;
    // can also be replaced with step(0., hh - abs(uv.y - py))
    float v = sign(hh - abs(uv.y - py));
    fragColor = vec4(v);
}
I know the question was answered properly before me, but in case someone is looking for a way to render a textured line in a pixel-perfect way, I wrote an article with some examples.
It's about pixel-perfect UI in general, but using it for a line is just a matter of clamping/repeating the texture sampling. Also, I'm using Unity, but there is no reason the method would be exclusive to it.

Is it possible to draw line thickness in a fragment shader?

Is it possible for me to add line thickness in the fragment shader, considering that I draw the line with GL_LINES? Most of the examples I saw seem to access only the texels within the primitive, and a line-thickness shader would need to write to texels outside the line primitive to obtain the thickness. If it is possible, however, a very small, basic example would be great.
Quite a lot is possible with fragment shaders. Just look at what some people are doing. I'm far from that level myself, but this code can give you an idea:
#define resolution vec2(500.0, 500.0)
#define Thickness 0.003

float drawLine(vec2 p1, vec2 p2) {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    float a = abs(distance(p1, uv));
    float b = abs(distance(p2, uv));
    float c = abs(distance(p1, p2));
    if ( a >= c || b >= c ) return 0.0;
    float p = (a + b + c) * 0.5;
    // distance from uv to the (p1, p2) line, via Heron's formula
    // (2.0, not 2: GLSL ES has no implicit int-to-float conversion)
    float h = 2.0 / c * sqrt( p * ( p - a) * ( p - b) * ( p - c));
    return mix(1.0, 0.0, smoothstep(0.5 * Thickness, 1.5 * Thickness, h));
}
void main()
{
    gl_FragColor = vec4(
        max(
            max(
                drawLine(vec2(0.1, 0.1), vec2(0.1, 0.9)),
                drawLine(vec2(0.1, 0.9), vec2(0.7, 0.5))),
            drawLine(vec2(0.1, 0.1), vec2(0.7, 0.5))));
}
Another alternative is to check the color of nearby pixels with texture2D; that way you can make your image glow or thicken (e.g. if any of the adjacent pixels is white, make the current pixel white; if only a pixel one step further away is white, make the current pixel grey).
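A minimal sketch of that neighborhood test, assuming the lines were first rendered to a texture (uTex, uTexel and vUv are assumed names; uTexel = 1.0 / resolution):

uniform sampler2D uTex; // previously rendered line image (assumed)
uniform vec2 uTexel;    // assumed: size of one pixel in uv units
varying vec2 vUv;

void main() {
    // thicken: take the brightest of the pixel and its 4 direct neighbours
    float m = texture2D(uTex, vUv).r;
    m = max(m, texture2D(uTex, vUv + vec2( uTexel.x, 0.0)).r);
    m = max(m, texture2D(uTex, vUv + vec2(-uTexel.x, 0.0)).r);
    m = max(m, texture2D(uTex, vUv + vec2(0.0,  uTexel.y)).r);
    m = max(m, texture2D(uTex, vUv + vec2(0.0, -uTexel.y)).r);
    gl_FragColor = vec4(vec3(m), 1.0);
}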
No, it is not possible in the fragment shader using only GL_LINES. This is because GL restricts you to drawing only on the geometry you submit to the rasterizer, so you need geometry that encompasses the jagged original line plus any smoothing vertices. E.g., you can use a geometry shader to expand your line to a quad around the ideal line (or, actually, two triangles), which can pose as a thick line.
In general, if you generate bigger geometry (including a full-screen quad), you can use the fragment shader to draw smooth lines.
Here's a nice discussion on that subject (with code samples).
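A minimal sketch of such a geometry shader, under assumed names (uViewport is the viewport size in pixels, uThickness the desired width in pixels); it flattens the segment to 2D, so perspective depth is ignored:

#version 330 core
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2 uViewport;   // assumed uniform: viewport size in pixels
uniform float uThickness; // assumed uniform: line width in pixels

void main() {
    // endpoints in NDC
    vec2 p0 = gl_in[0].gl_Position.xy / gl_in[0].gl_Position.w;
    vec2 p1 = gl_in[1].gl_Position.xy / gl_in[1].gl_Position.w;
    // perpendicular in pixel space, scaled to a half-thickness offset in NDC
    vec2 dir = normalize((p1 - p0) * uViewport);
    vec2 offset = vec2(-dir.y, dir.x) * uThickness / uViewport;
    gl_Position = vec4(p0 - offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p0 + offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 - offset, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 + offset, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}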
Here's my approach. Let p1 and p2 be the two points defining the line, and let point be the point whose distance to the line you wish to measure. point is most likely gl_FragCoord.xy / resolution;
Here's the function:
float distanceToLine(vec2 p1, vec2 p2, vec2 point) {
    float a = p1.y - p2.y;
    float b = p2.x - p1.x;
    return abs(a * point.x + b * point.y + p1.x * p2.y - p2.x * p1.y) / sqrt(a * a + b * b);
}
Then use that in your mix and smoothstep functions.
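For example (a sketch; thickness is an assumed value in the same units as point):

float d = distanceToLine(p1, p2, point);
// 1.0 on the line, fading to 0.0 over `thickness`
float intensity = 1.0 - smoothstep(0.0, thickness, d);
gl_FragColor = vec4(vec3(intensity), 1.0);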
Also check out this answer:
https://stackoverflow.com/a/9246451/911207
A simple hack is to just add a jitter in the vertex shader:
gl_Position += vec4(delta, delta, delta, 0.0);
where delta is the pixel size, i.e. 1.0/viewsize.
Do the line-draw pass twice: once with zero, and once with delta as the jitter (passed in as a uniform).
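A sketch of that two-pass idea as a complete vertex shader (uMvp and uDelta are assumed names; multiplying by w keeps the shift uniform in screen space):

attribute vec3 position;
uniform mat4 uMvp;    // assumed: combined model-view-projection matrix
uniform float uDelta; // assumed uniform: 0.0 on pass 1, 1.0/viewsize on pass 2

void main() {
    gl_Position = uMvp * vec4(position, 1.0);
    // shift the whole line diagonally by roughly one pixel on the second pass
    gl_Position.xy += vec2(uDelta) * gl_Position.w;
}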
To draw a line in the fragment shader, we should check that the current pixel (UV) is on the line position. (This is not efficient using only fragment shader code! It is just for testing with glslsandbox.)
An acceptable UV point should satisfy these two conditions:
1- The distance between (uv, pt1) should be smaller than the distance between (pt1, pt2).
With this condition we create an assumed circle centered at pt1 with radius = distance(pt1, pt2), and also prevent drawing a line longer than distance(pt1, pt2).
2- For each UV we assume a hypothetical circle whose connection point ptc lies on the line (pt1, pt2).
If the distance between uv and ptc is less than the line thickness, we select this UV as a line point.
In our code:
r = distance(uv, pt1) / distance(pt1, pt2) gives us a value between 0 and 1.
We interpolate a point (ptc) between pt1 and pt2 with the value of r.
Code:
#ifdef GL_ES
precision mediump float;
#endif

uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;

float line(vec2 uv, vec2 pt1, vec2 pt2, vec2 resolution)
{
    float clrFactor = 0.0;
    float thickness = 3.0 / max(resolution.x, resolution.y);
    float r = distance(uv, pt1) / distance(pt1, pt2);
    if (r <= 1.0) // if the hypothetical circle is in range of the vector (pt1, pt2)
    {
        vec2 ptc = mix(pt1, pt2, r);    // ptc = connection point of the hypothetical circle and the line, found by interpolation
        float dist = distance(ptc, uv); // distance between the current pixel (uv) and ptc
        if (dist < thickness / 2.0)
        {
            clrFactor = 1.0;
        }
    }
    return clrFactor;
}

void main()
{
    vec2 uv = gl_FragCoord.xy / resolution.xy; // current pixel
    // 0 < uv.x < 1 , 0 < uv.y < 1
    // left-bottom = (0,0)
    // right-top   = (1,1)
    vec2 pt1 = vec2(0.1, 0.1); // line point1
    vec2 pt2 = vec2(0.8, 0.7); // line point2
    float lineFactor = line(uv, pt1, pt2, resolution.xy);
    vec3 color = vec3(.5, 0.7, 1.0);
    gl_FragColor = vec4(color * lineFactor, 1.);
}
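As a side note, the same effect can be achieved branch-free with the standard closest-point-on-segment projection, which also gives rounded caps at both ends (a sketch):

float lineSegment(vec2 uv, vec2 pt1, vec2 pt2, vec2 resolution)
{
    float thickness = 3.0 / max(resolution.x, resolution.y);
    vec2 pa = uv - pt1;
    vec2 ba = pt2 - pt1;
    // project uv onto the segment; the clamp keeps the point between pt1 and pt2
    float h = clamp(dot(pa, ba) / dot(ba, ba), 0.0, 1.0);
    // 1.0 if uv is within half the thickness of the segment, else 0.0
    return step(distance(pa, ba * h), thickness / 2.0);
}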
