Incorrect reading from DataTextures in shader - three.js

I have a program that works great when my POT DataTextures are 1:1 (width:height) in their texel dimensions; however, when they are 2:1 or 1:2, the texels appear to be read and applied incorrectly. I'm using continuous indexes (1, 2, 3, 4, 5...) to access the texels with the two functions below.
I'm wondering if there is something wrong with how I am accessing the texel data, or perhaps if my use of a Float32Array for the integer indexes needs to be switched to a Uint8Array or something else? Thanks in advance!
This function finds the uv for textures that have one texel per particle cloud in my visualization:
float texelSizeX = 1.0 / uPerCloudBufferWidth;
float texelSizeY = 1.0 / uPerCloudBufferHeight;
vec2 perMotifUV = vec2(
    mod( cellIndex, uPerCloudBufferWidth ) * texelSizeX,
    floor( cellIndex / uPerCloudBufferHeight ) * texelSizeY );
perMotifUV += vec2( 0.5 * texelSizeX, 0.5 * texelSizeY );
This function finds the uv for textures that contain one texel for each particle contained in all of the clouds:
float pTexelSizeX = 1.0 / uPerParticleBufferWidth;
float pTexelSizeY = 1.0 / uPerParticleBufferHeight;
vec2 perParticleUV = vec2(
    mod( aParticleIndex, uPerParticleBufferWidth ) * pTexelSizeX,
    floor( aParticleIndex / uPerParticleBufferHeight ) * pTexelSizeY );
perParticleUV += vec2( 0.5 * pTexelSizeX, 0.5 * pTexelSizeY );

Shouldn't this
vec2 perMotifUV = vec2(
    mod( cellIndex, uPerCloudBufferWidth ) * texelSizeX,
    floor( cellIndex / uPerCloudBufferHeight ) * texelSizeY );
be this?
vec2 perMotifUV = vec2(
    mod( cellIndex, uPerCloudBufferWidth ) * texelSizeX,
    floor( cellIndex / uPerCloudBufferWidth ) * texelSizeY ); // <-- use width
And the same for the other one? Divide by width, not height.
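Applying the same fix to the per-particle lookup, the corrected version would read (same names as the question; the row index from floor() must divide by the buffer width as well):
float pTexelSizeX = 1.0 / uPerParticleBufferWidth;
float pTexelSizeY = 1.0 / uPerParticleBufferHeight;
vec2 perParticleUV = vec2(
    mod( aParticleIndex, uPerParticleBufferWidth ) * pTexelSizeX,
    floor( aParticleIndex / uPerParticleBufferWidth ) * pTexelSizeY ); // <-- width here too
perParticleUV += vec2( 0.5 * pTexelSizeX, 0.5 * pTexelSizeY );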

Related

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to denote the topography of the rock, like the corner of the structure (shown better in the third screenshot), in a more explicit way, as I want to be able to maneuver around the rock and select different points. Some of my rocks are sparse (the structure is hard to see because the points are very far apart) and some are dense (the structure is hard to see from afar because the points are all mashed together, as in the first screenshot, even when closer to the rock), and finding a generalized approach to this problem has been difficult.
I posted about this problem before here, thinking that representing the 'depth' of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography better. Is there a way to add a source of light that my shaders can pick up on? I want to see whether I can represent the colors differently based on their orientation to the source. Using different software, a friend was able to produce the image below - is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;
varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;
void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
        //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
        //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
    #endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;
varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;
float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}
void main() {
    float r = 0.0, delta = 0.0, alpha = 1.0;
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    r = dot(cxy, cxy);
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    if (r > 1.0) {
        discard;
    }
    // Reset back to 1.0 instead of using the lineardepth method above
    gl_FragColor = vec4(vColor, 1.0);
}
Thank you so much for your help!
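One possible direction, as a sketch only: if each point carries an estimated surface normal (e.g. precomputed from its nearest neighbours), a Lambert term in the vertex shader will dim points facing away from a light, which makes corners and topography readable. The normal attribute and the uLightDir uniform below are assumptions, not part of the question's setup; ShaderMaterial injects the position/normal/normalMatrix declarations automatically:
varying vec3 vColor;
varying float vDiffuse;
uniform vec3 uLightDir; // light direction in view space, assumed normalized
void main() {
    vColor = color;
    // orient each point's estimated normal into view space and light it
    vec3 n = normalize( normalMatrix * normal );
    vDiffuse = max( dot( n, uLightDir ), 0.0 );
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
In the fragment shader the color would then be scaled by the diffuse term, e.g. gl_FragColor = vec4( vColor * (0.2 + 0.8 * vDiffuse), 1.0 );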

Referencing texels in a data texture using indexes in the shader

I have values in the texels of a DataTexture that I am trying to access using indexes in my shader. The indexes [0, 1, 2, 3, 4, 5, 6... 62, 63] are continuous, while the data texture has a height and width (uTextureDimension) of 8. After some research I wrote this function to take a particular index value, and reference the corresponding texel:
vec2 customUV = vec2( mod(aIndex, uTextureDimension) / uTextureDimension, floor(aIndex / uTextureDimension) / uTextureDimension );
vec4 texelValues = texture2D( tDataTexture, customUV ).xyzw;
I also tried this version to reference the texel from its center point. Also no dice:
vec2 perMotifUV = vec2( mod( aIndex, uTextureDimension ) * (( 1.0 / uTextureDimension ) / 2.0 ), floor( aIndex / uTextureDimension ) * (( 1.0 / uTextureDimension ) / 2.0 ) );
vec4 texelValues = texture2D( tDataTexture, perMotifUV ).xyzw;
After working with this since yesterday afternoon, editing it here and there, and looking around for other solutions, I'm still not getting the expected results. I should say that in using Three.js, the shaders are set to high precision float - is this part of the problem? Can anyone nudge me on track here? Thanks!
The first one seems right, if everything is a float. Are you using NEAREST filter mode? Technically you should also add a half-texel offset:
float texelSize = 1.0 / uTextureDimension;
vec2 customUV = vec2( mod( aIndex, uTextureDimension ) * texelSize, floor( aIndex / uTextureDimension ) * texelSize );
customUV += vec2( 0.5 * texelSize );
EDIT - Also your indices should go 0-63 not 0-64
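For reference, the filter mode is set when the DataTexture is created on the JavaScript side; a minimal three.js sketch matching the 8x8 texture in the question (the setup itself is an assumption):
// an 8x8 RGBA float DataTexture with NEAREST filtering,
// so texel fetches are not blended with their neighbours
var data = new Float32Array( 8 * 8 * 4 );
var tDataTexture = new THREE.DataTexture( data, 8, 8, THREE.RGBAFormat, THREE.FloatType );
tDataTexture.magFilter = THREE.NearestFilter;
tDataTexture.minFilter = THREE.NearestFilter;
tDataTexture.needsUpdate = true;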

How to get fullscreen texture coordinates for a fullscreen texture from a previous rendering pass?

I do two rendering passes in a WebGL application using three.js (contrived example here):
renderer.render(depthScene, camera, depthTarget);
renderer.render(scene, camera);
The first rendering pass is to the render target depthTarget which I want to access in the second rendering pass as a texture uniform:
uniform sampler2D tDepth;
float unpack_depth( const in vec4 rgba_depth ) { ... }
void main() {
    vec2 screenTexCoord = vec2( 1.0, 1.0 );
    float depth = 1.0 - unpack_depth( texture2D( tDepth, screenTexCoord ) );
    gl_FragColor = vec4( vec3( depth ), 1.0 );
}
My question is how do I get the value for screenTexCoord? It is not gl_FragCoord.xy.
To avoid a possible misunderstanding: I don't want to render the texture from the first pass to a quad. I want to use the texture from the first pass while rendering the geometry in the second pass.
EDIT:
According to the WebGL specification, gl_FragCoord contains window coordinates, which are normalized device coordinates (NDC) scaled by the viewport. NDC are within [-1, 1], so the following should yield coordinates within [0, 1] for texture lookup:
vec2 ndcXY = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 screenTexCoord = (ndcXY+1.0)/2.0;
But somewhere I must be wrong, because the updated example still does not show the (packed) depth?!
Finally figured it out myself. The correct way to calculate the texture coordinates is just:
vec2 screenTexCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
See a working example here.
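For completeness, a sketch of the two-pass wiring with matching uniform names, using the same era of three.js API as the question (the uniform declarations are an assumption about the surrounding setup; in that old API a WebGLRenderTarget could be passed directly as a texture uniform):
var depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
var uniforms = {
    tDepth:     { type: "t", value: depthTarget },      // sampler2D tDepth
    viewWidth:  { type: "f", value: window.innerWidth },
    viewHeight: { type: "f", value: window.innerHeight }
};
renderer.render( depthScene, camera, depthTarget ); // pass 1: depth into the target
renderer.render( scene, camera );                   // pass 2: materials sample tDepth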

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position, and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture: you unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly)
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the w. In the SSAO shader I need to get the position, and this is how I'm doing it:
// viewport is a vec2 holding the viewport width and height
// invProj is a mat4 holding the inverse projection matrix
// (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix ))
vec3 get_eye_normal()
{
    vec2 frag_coord = gl_FragCoord.xy / viewport;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera's far minus near (I was using the normalized depth value, but you need the world depth value).
I created a function to extract the position from the normal/depth texture like so:
First, in the depth capture pass (fragment shader):
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function gets the 3D world position from the normal texture containing depth in its w component
/// </summary>
vec3 get_world_pos( vec2 uv )
{
    vec2 frag_coord = uv;
    float depth = texture2D(tNormals, frag_coord).w;
    float unprojDepth = depth * linearDepth - 1.0;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    vec3 eye_ray = normalize((invProj * device_normal).xyz);
    vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
    return pos;
}
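As a quick sanity check, a sketch of a pass that just visualizes the reconstruction (assuming vUv comes from the full-screen quad and the same uniforms as above are bound), to compare against the old position texture:
void main()
{
    // render the reconstructed world positions as colors
    vec3 srcPosition = get_world_pos( vUv );
    gl_FragColor = vec4( srcPosition, 1.0 );
}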

What is OpenGL ES 2 Shader language analog for HYDRA (pixel bender) sampleLinear?

So I've looked through the OpenGL ES shader specs but don't see anything like it...
For example, I created a simple "pinch to zoom", "rotate to turn around", and "move to move center" HYDRA Pixel Bender filter. It can be executed in Flash. It is based on the default Pixel Bender twirl example and this:
<languageVersion: 1.0;>
kernel zoomandrotate
< namespace : "Pixel Bender Samples";
vendor : "Kabumbus";
version : 3;
description : "rotate and zoom an image around"; >
{
    // define PI for the degrees to radians calculation
    const float PI = 3.14159265;

    // An input parameter to specify the center of the twirl effect.
    // As above, we're using metadata to indicate the minimum,
    // maximum, and default values, so that the tools can set the values
    // correctly in the UI for the filter.
    parameter float2 center
    <
        minValue:float2(0.0, 0.0);
        maxValue:float2(2048.0, 2048.0);
        defaultValue:float2(256.0, 256.0);
    >;

    // An input parameter to specify the angle that we would like to twirl.
    // For this parameter, we're using metadata to indicate the minimum,
    // maximum, and default values, so that the tools can set the values
    // correctly in the UI for the filter.
    parameter float twirlAngle
    <
        minValue:float(0.0);
        maxValue:float(360.0);
        defaultValue:float(90.0);
    >;

    parameter float zoomAmount
    <
        minValue:float(0.01);
        maxValue:float(10.0);
        defaultValue:float(1.0);
    >;

    // An input parameter that indicates how we want to vary the twirling
    // within the radius. We've added support to modulate by one of two
    // functions, a gaussian or a sinc function. Since Flash does not support
    // bool parameters, we instead are using this as an int with two possible
    // values. Setting this parameter to 1 will cause the gaussian function
    // to be used; setting it to 0 will cause the sinc function to be used.
    parameter int gaussOrSinc
    <
        minValue:int(0);
        maxValue:int(1);
        defaultValue:int(0);
    >;

    input image4 oImage;
    output float4 outputColor;

    // evaluatePixel(): The function of the filter that actually does the
    // processing of the image. This function is called once
    // for each pixel of the output image.
    void
    evaluatePixel()
    {
        // convert the angle to radians
        float twirlAngleRadians = radians(twirlAngle);

        // calculate where we are relative to the center of the twirl
        float2 relativePos = outCoord() - center;

        // calculate the absolute distance from the center normalized
        // by the twirl radius.
        float distFromCenter = length( relativePos );
        distFromCenter = 1.0;

        // modulate the angle based on either a gaussian or a sinc.
        float adjustedRadians;

        // precalculate either the gaussian or the sinc weight
        float sincWeight = sin( distFromCenter ) * twirlAngleRadians / ( distFromCenter );
        float gaussWeight = exp( -1.0 * distFromCenter * distFromCenter ) * twirlAngleRadians;

        // protect the algorithm from a 1 / 0 error
        adjustedRadians = (distFromCenter == 0.0) ? twirlAngleRadians : sincWeight;

        // switch between a gaussian falloff or a sinc falloff
        adjustedRadians = (gaussOrSinc == 1) ? adjustedRadians : gaussWeight;

        // rotate the pixel sample location.
        float cosAngle = cos( adjustedRadians );
        float sinAngle = sin( adjustedRadians );
        float2x2 rotationMat = float2x2(
            cosAngle, sinAngle,
            -sinAngle, cosAngle
        );
        relativePos = rotationMat * relativePos;

        float scale = zoomAmount;

        // sample and set as the output color. since relativePos
        // is related to the center location, we need to add it back in.
        // We use linear sampling to smooth out some of the pixelation.
        outputColor = sampleLinear( oImage, relativePos/scale + center );
    }
}
So now I want to port it to an OpenGL ES shader. The math and parameters are convertible to the OpenGL ES shading language, but what do I do with sampleLinear? What is its analog in the OpenGL ES shading language?
Update:
So I created something similar to my HYDRA filter... compatible with WebGL and OpenGL ES shaders...
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;
void main(void)
{
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
    // a rotozoom
    vec2 cst = vec2( cos(.5*time), sin(.5*time) );
    mat2 rot = 0.5*cst.x*mat2(cst.x,-cst.y,cst.y,cst.x);
    vec3 col = texture2D(tex0,0.5*rot*p+sin(0.1*time)).xyz;
    gl_FragColor = vec4(col,1.0);
}
To see how it works, get a modern browser, navigate to Shadertoy, provide it with one texture (http://www.iquilezles.org/apps/shadertoy/presets/tex4.jpg for example), paste my code into the editable text area, and hit ... Have fun. So now I have another problem... I want to have one image with black around it, not copies of that same image... Does anyone know how to do that?
Per Adobe's Pixel Bender Reference, sampleLinear "Handles coordinates not at pixel centers by performing bilinear interpolation on the adjacent pixel values."
The correct way to achieve that in OpenGL is to use texture2D, as you already are, but to set the texture environment for linear filtering via glTexParameter.
You can use the step function and multiply by its result to get black for out-of-bounds pixels, or give your texture a single-pixel black border and switch to clamping rather than repeating, also via glTexParameter.
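For the glTexParameter route, a minimal raw-WebGL sketch (tex here is a hypothetical handle for the texture bound to tex0; with a one-pixel black border, clamping returns black outside the image):
gl.bindTexture(gl.TEXTURE_2D, tex);
// bilinear filtering, the analog of Pixel Bender's sampleLinear
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
// clamp instead of repeat, so samples outside [0, 1] don't tile the image
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);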
If you want to do it in shader code instead, try:
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 resolution;
uniform float time;
uniform sampler2D tex0;
void main(void)
{
    vec2 p = -1.0 + 2.0 * gl_FragCoord.xy / resolution.xy;
    // a rotozoom
    vec2 cst = vec2( cos(.5*time), sin(.5*time) );
    mat2 rot = 0.5*cst.x*mat2(cst.x,-cst.y,cst.y,cst.x);
    vec2 samplePos = 0.5*rot*p+sin(0.1*time);
    // step(edge, x) is 1.0 when x >= edge, so this mask is 1.0 only when
    // samplePos lies inside the [0, 1] box and 0.0 (black) outside it
    float mask = step(0.0, samplePos.x) * step(0.0, samplePos.y) * (1.0 - step(1.0, samplePos.x)) * (1.0 - step(1.0, samplePos.y));
    vec3 col = texture2D(tex0,samplePos).xyz;
    gl_FragColor = vec4(col*mask,1.0);
}
That'd restrict colours to coming from the box from (0, 0) to (1, 1), but it looks like the shader heads off to some significantly askew places, so I'm not sure exactly what you want.
