What is the OpenGL ES 2 shader language analog for HYDRA (Pixel Bender) sampleLinear?

I have looked through the OpenGL ES shader specs but don't see anything like it...
For example, I created a simple "pinch to zoom", "rotate to turn around", and "move to move center" HYDRA Pixel Bender filter. It can be executed in Flash. It is based on the default Pixel Bender twirl example and looks like this:
<languageVersion: 1.0;>
kernel zoomandrotate
<   namespace : "Pixel Bender Samples";
    vendor : "Kabumbus";
    version : 3;
    description : "rotate and zoom an image around"; >
{
    // define PI for the degrees-to-radians calculation
    const float PI = 3.14159265;

    // An input parameter to specify the center of the twirl effect.
    // We're using metadata to indicate the minimum, maximum, and default
    // values, so that the tools can set the values correctly in the UI
    // for the filter.
    parameter float2 center
    <
        minValue:float2(0.0, 0.0);
        maxValue:float2(2048.0, 2048.0);
        defaultValue:float2(256.0, 256.0);
    >;

    // An input parameter to specify the angle that we would like to twirl.
    // For this parameter, we're using metadata to indicate the minimum,
    // maximum, and default values, so that the tools can set the values
    // correctly in the UI for the filter.
    parameter float twirlAngle
    <
        minValue:float(0.0);
        maxValue:float(360.0);
        defaultValue:float(90.0);
    >;

    parameter float zoomAmount
    <
        minValue:float(0.01);
        maxValue:float(10.0);
        defaultValue:float(1.0);
    >;

    // An input parameter that indicates how we want to vary the twirling
    // within the radius. We've added support to modulate by one of two
    // functions, a gaussian or a sinc function. Since Flash does not support
    // bool parameters, we instead use an int with two possible values.
    // Note: as the ternaries below are written, 1 selects the sinc falloff
    // and 0 (the default) selects the gaussian falloff.
    parameter int gaussOrSinc
    <
        minValue:int(0);
        maxValue:int(1);
        defaultValue:int(0);
    >;

    input image4 oImage;
    output float4 outputColor;

    // evaluatePixel(): The function of the filter that actually does the
    //                  processing of the image. This function is called once
    //                  for each pixel of the output image.
    void evaluatePixel()
    {
        // convert the angle to radians
        float twirlAngleRadians = radians(twirlAngle);

        // calculate where we are relative to the center of the twirl
        float2 relativePos = outCoord() - center;

        // calculate the absolute distance from the center normalized
        // by the twirl radius...
        float distFromCenter = length( relativePos );
        // ...then force it to 1.0, so the whole image rotates uniformly
        // instead of twirling
        distFromCenter = 1.0;

        // modulate the angle based on either a gaussian or a sinc.
        float adjustedRadians;

        // precalculate either the gaussian or the sinc weight
        float sincWeight = sin( distFromCenter ) * twirlAngleRadians / ( distFromCenter );
        float gaussWeight = exp( -1.0 * distFromCenter * distFromCenter ) * twirlAngleRadians;

        // protect the algorithm from a 1 / 0 error
        adjustedRadians = (distFromCenter == 0.0) ? twirlAngleRadians : sincWeight;

        // switch between a gaussian falloff or a sinc falloff
        adjustedRadians = (gaussOrSinc == 1) ? adjustedRadians : gaussWeight;

        // rotate the pixel sample location.
        float cosAngle = cos( adjustedRadians );
        float sinAngle = sin( adjustedRadians );
        float2x2 rotationMat = float2x2(
            cosAngle, sinAngle,
            -sinAngle, cosAngle
        );
        relativePos = rotationMat * relativePos;

        float scale = zoomAmount;

        // sample and set as the output color. since relativePos
        // is relative to the center location, we need to add the center back in.
        // We use linear sampling to smooth out some of the pixelation.
        outputColor = sampleLinear( oImage, relativePos/scale + center );
    }
}
So now I want to port it to an OpenGL ES shader. The math and parameters are convertible to the OpenGL ES shader language, but what do I do with sampleLinear? What is its analog in the OpenGL ES shader language?
Update:
So I have created something similar to my HYDRA filter, compatible with WebGL and OpenGL ES shaders:
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;

void main(void)
{
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
    // a rotozoom
    vec2 cst = vec2( cos(.5*time), sin(.5*time) );
    mat2 rot = 0.5*cst.x*mat2(cst.x,-cst.y,cst.y,cst.x);
    vec3 col = texture2D(tex0, 0.5*rot*p + sin(0.1*time)).xyz;
    gl_FragColor = vec4(col, 1.0);
}
To see how it works, get a modern browser, navigate to Shadertoy, provide it with one texture ( http://www.iquilezles.org/apps/shadertoy/presets/tex4.jpg for example), paste my code into the editable text area, and hit ... Have fun. So... now I have another problem: I want to have one image with black around it, not tiled copies of that same image. Does anyone know how to do that?

Per Adobe's Pixel Bender Reference, sampleLinear "Handles coordinates not at pixel centers by performing bilinear interpolation on the adjacent pixel values."
The correct way to achieve that in OpenGL is to use texture2D, as you already are, but to set the texture object's min/mag filters to linear via glTexParameter; there is no separate linear-sampling function to call in the shader.
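One wrinkle when porting: Pixel Bender's sampleLinear takes coordinates in pixels, while texture2D takes normalized [0, 1] coordinates. Here is a minimal sketch of the mapping, assuming a hypothetical imageSize uniform that the application fills with the texture's dimensions in pixels:
uniform sampler2D oImage;
uniform vec2 imageSize; // texture size in pixels (assumed uniform, set by the application)

// Rough equivalent of Pixel Bender's sampleLinear(oImage, coord), provided
// the texture object has GL_LINEAR min/mag filters set on the client side.
vec4 sampleLinearEquivalent(vec2 pixelCoord) {
    return texture2D(oImage, pixelCoord / imageSize);
}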
You can use the step function and multiply by its result to get black for out-of-bounds pixels, or give your texture a single-pixel black border and switch to clamping rather than repeating, also via glTexParameter (a shader-only sketch of the clamping idea appears after the code below).
If you want to do it in code, try:
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;

void main(void)
{
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
    // a rotozoom
    vec2 cst = vec2( cos(.5*time), sin(.5*time) );
    mat2 rot = 0.5*cst.x*mat2(cst.x,-cst.y,cst.y,cst.x);
    vec2 samplePos = 0.5*rot*p + sin(0.1*time);
    // 1.0 inside the (0, 0)-(1, 1) box, 0.0 outside
    float mask = step(0.0, samplePos.x) * step(samplePos.x, 1.0)
               * step(0.0, samplePos.y) * step(samplePos.y, 1.0);
    vec3 col = texture2D(tex0, samplePos).xyz;
    gl_FragColor = vec4(col*mask, 1.0);
}
That'd restrict colours to coming from the box from (0, 0) to (1, 1), but it looks like the shader heads off to some significantly askew places, so I'm not sure exactly what you want.
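For reference, here is a shader-only sketch of the clamp/border idea mentioned above (the function names are mine): clamp emulates GL_CLAMP_TO_EDGE, and the step product blacks out anything outside the image instead.
vec4 sampleClampedToEdge(sampler2D tex, vec2 uv) {
    // repeat the edge pixels, like GL_CLAMP_TO_EDGE
    return texture2D(tex, clamp(uv, 0.0, 1.0));
}

vec4 sampleWithBlackBorder(sampler2D tex, vec2 uv) {
    // 1.0 inside the (0, 0)-(1, 1) box, 0.0 outside
    float inside = step(0.0, uv.x) * step(uv.x, 1.0)
                 * step(0.0, uv.y) * step(uv.y, 1.0);
    return texture2D(tex, clamp(uv, 0.0, 1.0)) * inside;
}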

Related

GLSL - Blur only the red channel

So, I have these two functions in GLSL: one that splits a texture into its RGB channels and then displaces them individually, and another that just blurs a texture. I want to combine them, but I want to be able to blur only the channel I'm displacing. So, for instance, I might want to blur the red channel in the rgbShift function.
The problem is that the red channel is a single float, and the blur function expects a full sampler2D image so it can apply UVs and so on. I guess I need a way to blur just a single float? I'm not very experienced with GLSL, and I've been trying to figure this out for a few days now. I'll be very thankful for any pointers or suggestions at all.
The GLSL functions can be viewed below.
vec4 blur5(sampler2D image, vec2 uv, vec2 resolution, vec2 direction) {
    vec4 color = vec4(0.0);
    vec2 offset = (vec2(1.3333333333333333) * direction) / resolution;
    color += texture2D(image, uv) * 0.29411764705882354;
    color += texture2D(image, uv + offset) * 0.35294117647058826;
    color += texture2D(image, uv - offset) * 0.35294117647058826;
    return color;
}

vec3 rgbShift(sampler2D textureimage, vec2 uv, float offset) {
    float displace = sin(PI * vUv.y) * offset; // PI and vUv are defined elsewhere in the asker's shader
    float r = texture2D(textureimage, uv + displace).r;
    float g = texture2D(textureimage, uv).g;
    float b = texture2D(textureimage, uv - displace).b;
    return vec3(r, g, b);
}
Here's me thinking out loud. I guess I want to do something like this:
vec4 blurredTexture = blur5(textureImage);
float red = texture2D(blurredTexture, uv + displace).r;
Or this:
float redChannel = texture2D(blurredTexture, uv + displace).r;
vec4 blurredRedChannel = blur5(redChannel);
But neither will work, because I can't figure out how to convert the types. I either need to convert the blurred vec4 into a sampler2D for the rgbShift function, or the red-channel float into a sampler2D for the blur function. Is it even possible to convert a value into a sampler2D one way or another?
Maybe I need some other solution where I don't need to convert to sampler2D at all.
Is it even possible to convert a value into a sampler2D one way or another?
Sort of. You'll need to write that value to a temporary texture. Then you can bind that texture and run a second pass that samples from it. That's probably overkill for the simple filtering you're trying to do.
Maybe I need some other solution where I don't need to convert sampler2D at all.
A simpler solution is to combine those two functions into one:
vec3 shiftAndBlur(sampler2D image, vec2 uv, float offset, vec2 resolution, vec2 direction) {
    vec2 blurOffset = (vec2(1.3333333333333333) * direction) / resolution;
    float displace = sin(PI * uv.y) * offset;
    float r = texture2D(image, uv + displace).r * 0.29411764705882354
            + texture2D(image, uv + displace + blurOffset).r * 0.35294117647058826
            + texture2D(image, uv + displace - blurOffset).r * 0.35294117647058826;
    float g = texture2D(image, uv).g;
    float b = texture2D(image, uv - displace).b;
    return vec3(r, g, b);
}

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to denote the topography of the rock, such as the corner of the structure (shown better in the third screenshot), in a more explicit way, as I want to be able to maneuver around the rock and select different points. Some of my rocks are sparser (the structure is hard to see because the points are very far apart) and some are denser (the structure is hard to see from afar because the points are all mashed together, as in the first screenshot, even when closer to the rock), and finding a generalized approach to this problem has been difficult.
I posted about this problem before here, thinking that representing the 'depth' of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography better. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can represent the colors differently based on their orientation to the source. Using different software, a friend was able to produce the image below; is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;

varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;

void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
        //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
        //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
    #endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;

varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;

float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}

void main() {
    float r = 0.0, delta = 0.0, alpha = 1.0;
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    r = dot(cxy, cxy);
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    if (r > 1.0) {
        discard; // keep only a circular point sprite
    }
    // Reset back to 1.0 instead of using the linearized depth from above
    gl_FragColor = vec4(vColor, 1.0);
}
Thank you so much for your help!

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position, and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly):
Here is my new method. I pass the normal texture, which stores the normal x, y, and z in the RGB channels and the depth in the W channel. In the SSAO shader I need to get the position, so this is how I'm doing it:
//viewport is a vec2 of the viewport width and height
//invProj is a mat4 using camera.projectionMatrixInverse (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
    vec2 frag_coord = gl_FragCoord.xy / viewport;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However, when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far minus near (I was using the normalized depth value, but you need the world depth value).
I created a function to extract the position from the normal/depth texture like so:
First, in the depth capture pass (fragment shader):
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now, in the shader that extracts the position...
/// <summary>
/// This function gets the 3D world position from the normal texture containing depth in its W component
/// </summary>
vec3 get_world_pos( vec2 uv )
{
    vec2 frag_coord = uv;
    float depth = texture2D(tNormals, frag_coord).w;
    float unprojDepth = depth * linearDepth - 1.0;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    vec3 eye_ray = normalize((invProj * device_normal).xyz);
    vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
    return pos;
}
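With that in place, an SSAO pass only needs the single packed texture. A minimal usage sketch, reusing the names from the snippets above:
vec4 packedSample = texture2D(tNormals, vUv);
vec3 normal = packedSample.xyz;       // the stored normal
vec3 position = get_world_pos(vUv);   // reconstructed from the depth in .w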

Volume ray casting doesn't work correctly (WebGL + GLSL + Three.js)

I have tried to improve the quality of my volume ray casting algorithm. I set a smaller raycast step (the quality is better), but it causes a problem, shown in the pictures below (black areas where they shouldn't be).
I am using an RGB cube to get the direction of the ray in the volume.
I think I have the same algorithm as in: volume rendering (using glsl) with ray casting algorithm.
Does anybody have an idea where the problem could be? I need to resolve this, because the deadline for my diploma thesis is too close. :( I really don't know why it doesn't work. :(
EDIT:
I can't show all my code here (it could be a problem if I publish it before handing it in at school), but here is the key code for marching through the volume:
// All the variables needed for the rays
vec3 rayDirection = texture2D(backFaceCube, texCoo).xyz - varcolor.xyz;
float lenRay = length(rayDirection);
vec3 normDir = normalize(rayDirection);
float d = qualitySteps; // qualitySteps is the step size defined by the user -> e.g. 0.01, 0.001, 0.0001, etc.
vec3 step = normDir * d;
float lenStep = length(step);
float accumulatedLength = 0.0;
and then, in the loop:
posInCube.xyz += step;
accumulatedLength += lenStep;
...
...
...
if (accumulatedLength >= lenRay || accumulatedColor.a > 1.0) {
    break;
}
EDIT 2 (sorry, it was too long for a comment):
Yes, the texture is noisy... I have tried to delete the alpha condition, if (accumulatedColor.a > 1.0), but the result is the same.
I think there is some direct correlation between the length of the ray and the size of the step. I tried many combinations and found these things:
If the step is big, I am able to go through the whole volume, but if it is small, I really can't get through the volume (maybe). If the step is extremely big, I can see a mirrored object (which could be caused by texture repeating when the ray steps outside the texture on the GPU). If the step is too small, only a small part of the texture gets mapped; it seems as if the ray is too short, but in reality it isn't. The question is why the mapping of 3D coordinates to the 2D texture is wrong and depends on the size of the step.
Can you please supply the code for your fragment shader?
Are you traversing the whole vector from the front to the end position? Here's an example shader (the code might contain some errors since I just wrote it off the top of my head; unfortunately I can't test it on my computer at the moment):
in vec2 texCoord;
out vec4 outColor;

uniform float stepSize;
uniform int numSteps;
uniform sampler2D frontTexture;
uniform sampler2D backTexture;
uniform sampler3D volumeTexture;
uniform sampler1D transferTexture; // density to RGBA

void main()
{
    vec4 color = vec4(0.0);
    vec3 startPosition = texture(frontTexture, texCoord).xyz;
    vec3 endPosition = texture(backTexture, texCoord).xyz;
    vec3 delta = normalize(endPosition - startPosition) * stepSize;
    vec3 position = startPosition;
    for (int i = 0; i < numSteps; ++i)
    {
        float density = texture(volumeTexture, position).r;
        vec4 voxelColor = texture(transferTexture, density);
        // Sampling distance correction: rescale the sample's opacity to a
        // reference sampling rate so the result doesn't depend on stepSize
        voxelColor.a = 1.0 - pow((1.0 - voxelColor.a), stepSize * 500.0);
        // Front-to-back blending (no shading done)
        color.rgb = color.rgb + (1.0 - color.a) * voxelColor.a * voxelColor.rgb;
        color.a = color.a + (1.0 - color.a) * voxelColor.a;
        if (color.a >= 1.0)
        {
            break;
        }
        // Advance, and stop as soon as we leave the volume
        position += delta;
        if (any(greaterThan(position, vec3(1.0))) || any(lessThan(position, vec3(0.0))))
        {
            break;
        }
    }
    outColor = color;
}

Is it possible to draw line thickness in a fragment shader?

Is it possible for me to add line thickness in the fragment shader, considering that I draw the line with GL_LINES? Most of the examples I saw seem to access only the texels within the primitive in the fragment shader, and a line-thickness shader would need to write to texels outside the line primitive to obtain the thickness. If it is possible, however, a very small, basic example would be great.
Quite a lot is possible with fragment shaders. Just look at what some people are doing. I'm far from that level myself, but this code can give you an idea:
#define resolution vec2(500.0, 500.0)
#define Thickness 0.003

float drawLine(vec2 p1, vec2 p2) {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    float a = abs(distance(p1, uv));
    float b = abs(distance(p2, uv));
    float c = abs(distance(p1, p2));
    if ( a >= c || b >= c ) return 0.0;
    // Heron's formula: h is the height of the (p1, p2, uv) triangle over
    // the (p1, p2) base, i.e. the distance from uv to the line.
    float p = (a + b + c) * 0.5;
    float h = 2.0 / c * sqrt( p * ( p - a) * ( p - b) * ( p - c));
    return mix(1.0, 0.0, smoothstep(0.5 * Thickness, 1.5 * Thickness, h));
}

void main()
{
    gl_FragColor = vec4(
        max(
            max(
                drawLine(vec2(0.1, 0.1), vec2(0.1, 0.9)),
                drawLine(vec2(0.1, 0.9), vec2(0.7, 0.5))),
            drawLine(vec2(0.1, 0.1), vec2(0.7, 0.5))));
}
Another alternative is to check the color of nearby pixels with texture2D - that way you can make your image glow or thicken (e.g. if any of the adjacent pixels is white, make the current pixel white; if the pixel next to an adjacent one is white, make the current pixel grey).
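A rough sketch of that neighbour-sampling idea, assuming the lines were first rendered white-on-black into a texture in a previous pass (all names here are mine, not from the answer):
uniform sampler2D lineTex;  // previous pass: lines rendered white on black
uniform vec2 resolution;

void main()
{
    vec2 uv = gl_FragCoord.xy / resolution;
    vec2 px = 1.0 / resolution; // size of one texel
    float m = texture2D(lineTex, uv).r;
    // take the maximum over the four direct neighbours -> thickens the line by roughly one pixel
    m = max(m, texture2D(lineTex, uv + vec2(px.x, 0.0)).r);
    m = max(m, texture2D(lineTex, uv - vec2(px.x, 0.0)).r);
    m = max(m, texture2D(lineTex, uv + vec2(0.0, px.y)).r);
    m = max(m, texture2D(lineTex, uv - vec2(0.0, px.y)).r);
    gl_FragColor = vec4(vec3(m), 1.0);
}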
No, it is not possible in the fragment shader using only GL_LINES. This is because GL restricts you to drawing only on the geometry you submit to the rasterizer, so you need to use geometry that encompasses the jagged original line plus any smoothing vertices. E.g., you can use a geometry shader to expand your line into a quad around the ideal line (or, actually, two triangles), which can pose as a thick line (a rough sketch of this follows below).
In general, if you generate bigger geometry (including a full-screen quad), you can use the fragment shader to draw smooth lines.
Here's a nice discussion on that subject (with code samples).
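For illustration, here is a rough geometry-shader sketch of that quad expansion (desktop GL only, since OpenGL ES 2.0 has no geometry shaders; it assumes w == 1, i.e. no perspective, and the uniform names are mine):
#version 150
layout(lines) in;
layout(triangle_strip, max_vertices = 4) out;

uniform vec2 viewport;    // viewport size in pixels
uniform float halfWidth;  // half the desired line width, in pixels

void main()
{
    vec2 p0 = gl_in[0].gl_Position.xy;
    vec2 p1 = gl_in[1].gl_Position.xy;
    // line direction in pixel space
    vec2 dir = normalize((p1 - p0) * viewport * 0.5);
    // perpendicular offset, converted from pixels back to NDC units
    vec2 n = vec2(-dir.y, dir.x) * halfWidth * 2.0 / viewport;
    gl_Position = vec4(p0 - n, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p0 + n, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 - n, 0.0, 1.0); EmitVertex();
    gl_Position = vec4(p1 + n, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}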
Here's my approach. Let p1 and p2 be the two points defining the line, and let point be the point whose distance to the line you wish to measure. point is most likely gl_FragCoord.xy / resolution.
Here's the function.
float distanceToLine(vec2 p1, vec2 p2, vec2 point) {
    float a = p1.y - p2.y;
    float b = p2.x - p1.x;
    return abs(a * point.x + b * point.y + p1.x * p2.y - p2.x * p1.y) / sqrt(a * a + b * b);
}
Then use that in your mix and smoothstep functions.
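For example, a minimal fragment that draws one anti-aliased line (the endpoints and the 0.005 half-thickness are illustrative):
vec2 uv = gl_FragCoord.xy / resolution;
float d = distanceToLine(vec2(0.1, 0.1), vec2(0.7, 0.5), uv);
// 1.0 on the line, fading to 0.0 across 0.005 UV units
float intensity = 1.0 - smoothstep(0.0, 0.005, d);
gl_FragColor = vec4(vec3(intensity), 1.0);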
Also check out this answer:
https://stackoverflow.com/a/9246451/911207
A simple hack is to just add a jitter in the vertex shader:
gl_Position += vec4(delta, delta, delta, 0.0);
where delta is the pixel size, i.e. 1.0 / viewsize.
Do the line-draw pass twice: once with zero, and then once with delta as the jitter (passed in as a uniform).
To draw a line in the fragment shader, we should check whether the current pixel (UV) is on the line position. (This is not efficient using only fragment shader code! It is just for testing with glslsandbox.)
An acceptable UV point should satisfy these two conditions:
1. The distance between (uv, pt1) should be smaller than the distance between (pt1, pt2). With this condition we create an assumed circle centered at pt1 with radius = distance(pt1, pt2), and also prevent drawing a line longer than distance(pt1, pt2).
2. For each UV we assume a hypothetical circle with a connection point at position ptc on the line (pt1, pt2). If the distance between UV and ptc is less than the line thickness, we select this UV as a line point.
In our code:
r = distance(uv, pt1) / distance(pt1, pt2) gives us a value between 0 and 1.
We interpolate a point (ptc) between pt1 and pt2 with the value of r.
code:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;

float line(vec2 uv, vec2 pt1, vec2 pt2, vec2 resolution)
{
    float clrFactor = 0.0;
    float thickness = 3.0 / max(resolution.x, resolution.y); // the resolution is only used for the thickness
    float r = distance(uv, pt1) / distance(pt1, pt2);
    if (r <= 1.0) // if the desired hypothetical circle is in range of the vector (pt2, pt1)
    {
        vec2 ptc = mix(pt1, pt2, r); // ptc = connection point of the hypothetical circle and the line, calculated by interpolation
        float dist = distance(ptc, uv); // distance between the current pixel (uv) and ptc
        if (dist < thickness / 2.0)
        {
            clrFactor = 1.0;
        }
    }
    return clrFactor;
}

void main()
{
    vec2 uv = gl_FragCoord.xy / resolution.xy; // current pixel
    // 0 < uv.x < 1 , 0 < uv.y < 1
    // left-bottom = (0, 0)
    // right-top   = (1, 1)
    vec2 pt1 = vec2(0.1, 0.1); // line point 1
    vec2 pt2 = vec2(0.8, 0.7); // line point 2
    float lineFactor = line(uv, pt1, pt2, resolution.xy);
    vec3 color = vec3(0.5, 0.7, 1.0);
    gl_FragColor = vec4(color * lineFactor, 1.0);
}
