3D Texture Rendering Using 2D Texture - opengl-es

I want to render a 3D texture in an OpenGL ES 2.0 environment, so I packed the 3D texture data into a 2D texture:
3D texture (256 * 256 * 100) -> 2D texture (2560 * 2560)
I think the two offsets should be the same:
offset = z3 * 256 * 256 + y3 * 256 + x3
offset = y2 * 2560 + x2
But the result is not good.
vec3 size3 = vec3(256.0, 256.0, 100.0);
vec2 size2 = vec2(2560.0, 2560.0);
vec2 calc3dTo2d(vec3 coords) {
vec3 offset3 = vec3(coords.x * size3.x, coords.y * size3.y, coords.z * size3.z);
float offset = offset3.z * size3.x * size3.y + offset3.y * size3.x + offset3.x;
float y = floor(offset / size2.x) / size2.y;
float x = fract(offset / size2.x);
return vec2(x, y);
}
What am I missing?
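One likely culprit (an assumption, since the packing/upload code isn't shown): the flat offset is computed from continuous, un-floored coordinates, and a slice of 256 * 256 texels does not start on an atlas row boundary (256 * 256 / 2560 = 25.6 rows per slice), so linear filtering ends up blending texels from unrelated slices. A common alternative is to pack the 100 slices as a 10 * 10 grid of 256 * 256 tiles; a minimal sketch of that lookup, assuming the atlas is repacked that way and sampled with nearest filtering along z:
// Assumes the 2560x2560 atlas stores slice z at tile (z mod 10, z / 10),
// each tile being one full 256x256 slice.
const vec3 size3 = vec3(256.0, 256.0, 100.0);
const float tilesPerRow = 10.0;
vec2 calc3dTo2dTiled(vec3 coords) {            // coords in [0,1]^3
    float slice = clamp(floor(coords.z * size3.z), 0.0, size3.z - 1.0);
    vec2 tile = vec2(mod(slice, tilesPerRow), floor(slice / tilesPerRow));
    // position inside the tile, normalized by the whole atlas
    return (tile + coords.xy) / tilesPerRow;
}
For smooth results along z you would sample the two neighbouring slices and mix() between them.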

Related

Tricolor Fragment Shader

I'm developing an animation using Three.js which simply involves an Icosahedron Buffer Geometry and a Shader Material applied together into a mesh. The goal is to get the object to resemble this:
Vision for final product:
I'm new to GLSL, so getting to this point hasn't been easy. I borrowed some code from the shaders used in this codepen. The issue with that shader is that the rendered colors are just pure red, green, and blue, whereas I want three specific colors, each with its own hex value.
Here is a snippet of the fragment shader code that I have tried:
uniform int u_color1;
uniform int u_color2;
uniform int u_color3;
void main() {
float r1 = float(u_color1 / 256 / 256);
float g1 = float(u_color1 / 256 - int(r1 * 256.0));
float b1 = float(u_color1 - int(r1 * 256.0 * 256.0) - int(g1 * 256.0));
float r2 = float(u_color2 / 256 / 256);
float g2 = float(u_color2 / 256 - int(r1 * 256.0));
float b2 = float(u_color2 - int(r2 * 256.0 * 256.0) - int(g2 * 256.0));
float r3 = float(u_color3 / 256 / 256);
float g3 = float(u_color3 / 256 - int(r3 * 256.0));
float b3 = float(u_color3 - int(r3 * 256.0 * 256.0) - int(g3 * 256.0));
vec3 color1 = vec3((r1/255.0) , (g1/255.0) , (b1/255.0) );
vec3 color2 = vec3((r2/255.0) , (g2/255.0) , (b2/255.0) );
vec3 color3 = vec3((r3/255.0) , (g3/255.0) , (b3/255.0) );
vec4 outColor = vec4(r1 /256.0, g1/256.0, b1/256.0, 1.0);
gl_FragColor = outColor;
}
Any input is appreciated. Thanks!
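For what it's worth, one way the decoding is commonly done (a sketch, not necessarily how the codepen intends it; the uniform names are simply reused from the snippet above): unpack each 0xRRGGBB integer into a normalized vec3 with floor()/mod() so the channels don't bleed into each other, then blend between the three decoded colors.
precision highp float;
// Each u_colorN is assumed to hold a packed 0xRRGGBB value (e.g. 0xff6600)
// passed from JavaScript as an integer.
uniform highp int u_color1;
uniform highp int u_color2;
uniform highp int u_color3;
// Unpack a 0xRRGGBB integer into a normalized vec3.
vec3 unpackColor(highp int packedColor) {
    float f = float(packedColor);
    float r = floor(f / 65536.0);           // high byte
    float g = floor(mod(f / 256.0, 256.0)); // middle byte
    float b = mod(f, 256.0);                // low byte
    return vec3(r, g, b) / 255.0;
}
void main() {
    vec3 color1 = unpackColor(u_color1);
    vec3 color2 = unpackColor(u_color2);
    vec3 color3 = unpackColor(u_color3);
    // Placeholder blend: just output color1; mix between the three colors
    // however the codepen's pattern value dictates.
    gl_FragColor = vec4(color1, 1.0);
}
Note that a mediump int is not guaranteed to hold a full 24-bit value on every GLES device, so in practice it is often simpler to skip the packing and declare the uniforms as vec3, which Three.js can fill directly from a THREE.Color.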

GLSL sparking vertex shader

I am trying to tweak this ShaderToy example for vertices to create 'sparks' out of them. I have tried playing with gl_PointCoord and gl_FragCoord without any results. Maybe someone here could help me?
I need an effect similar to this animated GIF:
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
#define M_PI 3.1415926535897932384626433832795
float rand(vec2 co)
{
return fract(sin(dot(co.xy ,vec2(12.9898,78.233))) * 43758.5453);
}
void main( ) {
float size = 30.0;
float prob = 0.95;
vec2 pos = floor(1.0 / size * gl_FragCoord.xy);
float color = 0.0;
float starValue = rand(pos);
if (starValue > prob)
{
vec2 center = size * pos + vec2(size, size) * 0.5;
float t = 0.9 + sin(time + (starValue - prob) / (1.0 - prob) * 45.0);
color = 1.0 - distance(gl_FragCoord.xy, center) / (0.5 * size);
color = color * t / (abs(gl_FragCoord.y - center.y)) * t / (abs(gl_FragCoord.x - center.x));
}
else if (rand(gl_FragCoord.xy / resolution.xy) > 0.996)
{
float r = rand(gl_FragCoord.xy);
color = r * ( 0.25 * sin(time * (r * 5.0) + 720.0 * r) + 0.75);
}
gl_FragColor = vec4(vec3(color), 1.0);
}
As I understand it, I have to play with vec2 pos, setting it to the vertex position.
You don't need to play with pos. Since the vertex shader runs only once per vertex, there is no way to do per-pixel processing there. However, you can do per-pixel processing in the fragment shader using gl_PointCoord.
I can think of only two ways to change the scale of a texture on a point sprite:
1. Set gl_PointSize in the vertex shader (OpenGL ES).
2. In the fragment shader, change the texture UV value, for example:
vec4 color = texture(texture0, ((gl_PointCoord-0.5) * factor) + vec2(0.5));
If you don't want to use any texture but only do per-pixel processing in the fragment shader, you can set the UV to ((gl_PointCoord-0.5) * factor) + vec2(0.5) instead of the uv that is normally computed as fragCoord.xy / iResolution.xy in Shadertoy.
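As a rough sketch of the gl_PointCoord approach (assuming the geometry is drawn as GL_POINTS, gl_PointSize is set in the vertex shader, and factor is a zoom uniform you define yourself), a per-point fragment shader can remap gl_PointCoord the same way ShaderToy code uses fragCoord.xy / iResolution.xy:
precision mediump float;
uniform float time;
uniform float factor;   // assumed zoom factor for the effect inside each point sprite
float rand(vec2 co) {
    return fract(sin(dot(co, vec2(12.9898, 78.233))) * 43758.5453);
}
void main() {
    // gl_PointCoord runs from 0 to 1 across the point sprite; recenter and
    // scale it just like the ShaderToy code scales fragCoord / iResolution.
    vec2 uv = (gl_PointCoord - 0.5) * factor + vec2(0.5);
    // Simple radial falloff that flickers with time, as a stand-in for the spark look.
    float d = length(uv - 0.5) * 2.0;
    float flicker = 0.75 + 0.25 * sin(time * (rand(uv) * 5.0));
    float color = clamp(1.0 - d, 0.0, 1.0) * flicker;
    gl_FragColor = vec4(vec3(color), 1.0);
}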

How does this 2d noise generation function work? Does it have a name?

I came across this 2D noise function in the Book of Shaders:
float noise(vec2 st) {
vec2 integerPart = floor(st);
vec2 fractionalPart = fract(st);
float s00 = random(integerPart);
float s01 = random(integerPart + vec2(0.0, 1.0));
float s10 = random(integerPart + vec2(1.0, 0.0));
float s11 = random(integerPart + vec2(1.0, 1.0));
float dx1 = s10 - s00;
float dx2 = s11 - s01;
float dy1 = s01 - s00;
float dy2 = s11 - s10;
float alpha = smoothstep(0.0, 1.0, fractionalPart.x);
float beta = smoothstep(0.0, 1.0, fractionalPart.y);
return s00 + alpha * dx1 + (1 - alpha) * beta * dy1 + alpha * beta * dy2;
}
It is clear what this function does: it generates four random numbers at the vertices of a square, then interpolates them. What I am finding difficult is understanding why the interpolation (the s00 + alpha * dx1 + (1 - alpha) * beta * dy1 + alpha * beta * dy2 expression) works. How is it interpolating the four values when it does not seem to be symmetric in the x and y values?
If you expand the last line, it's:
return s00 * (1-alpha) * (1-beta) +
s10 * alpha * (1-beta) +
s01 * (1-alpha) * beta +
s11 * alpha * beta;
Which is symmetric in x and y. If you add up the weights:
alpha * beta + (1-alpha) * beta + alpha * (1-beta) + (1-alpha) * (1-beta)
= (alpha + 1-alpha) * beta + (alpha + 1-alpha) * (1-beta)
= beta + 1-beta
= 1
so it's an affine combination of the values at the corners; since all the weights also lie in [0, 1], it is exactly the standard bilinear interpolation.
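Equivalently (a sketch, assuming the same random() helper the Book of Shaders defines), the whole function can be rewritten with nested mix() calls, which makes the bilinear structure and the x/y symmetry explicit:
float noiseMix(vec2 st) {
    vec2 i = floor(st);
    vec2 f = fract(st);
    float s00 = random(i);
    float s10 = random(i + vec2(1.0, 0.0));
    float s01 = random(i + vec2(0.0, 1.0));
    float s11 = random(i + vec2(1.0, 1.0));
    float alpha = smoothstep(0.0, 1.0, f.x);
    float beta  = smoothstep(0.0, 1.0, f.y);
    // blend along x on the bottom and top edges, then blend those results along y
    return mix(mix(s00, s10, alpha), mix(s01, s11, alpha), beta);
}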

Atmosphere Scattering for Earth from space and on the ground

Please suggest how to render the Earth's atmosphere so that it is visible both from space and from the ground (as shown in the image).
a model of the earth:
Earth = new THREE.Mesh(new THREE.SphereGeometry(6700,32,32),ShaderMaterialEarth);
model of the cosmos:
cosmos = new THREE.Mesh(new THREE.SphereGeometry(50000,32,32),ShaderMaterialCosmos);
and a light source:
sun = new THREE.DirectionalLight();
I just do not know where to start. Perhaps ShaderMaterialCosmos should do this: pass in the camera position and compute how each pixel should be shaded. But how?
I tried using the following, but I get zero vectors at the input of the fragment shader:
http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter16.html
vertexShader:
#define M_PI 3.1415926535897932384626433832795
const float ESun=1.0;
const float Kr = 0.0025;
const float Km = 0.0015;
const int nSamples = 2;
const float fSamples = 1.0;
const float fScaleDepth = 0.25;
varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;
uniform vec3 v3CameraPos; // The camera's current position
uniform vec3 v3LightDir; // Direction vector to the light source
uniform vec3 v3InvWavelength; // 1 / pow(wavelength, 4) for RGB
uniform float fCameraHeight; // The camera's current height
const float fOuterRadius=6500.0; // The outer (atmosphere) radius
const float fInnerRadius=6371.0; // The inner (planetary) radius
const float fKrESun=Kr*ESun; // Kr * ESun
const float fKmESun=Km*ESun; // Km * ESun
const float fKr4PI=Kr*4.0*M_PI; // Kr * 4 * PI
const float fKm4PI=Km*4.0*M_PI; // Km * 4 * PI
const float fScale=1.0/(fOuterRadius-fInnerRadius); // 1 / (fOuterRadius - fInnerRadius)
const float fScaleOverScaleDepth= fScale / fScaleDepth; // fScale / fScaleDepth
const float fInvScaleDepth=1.0/0.25;
float getNearIntersection(vec3 v3Pos, vec3 v3Ray, float fDistance2, float fRadius2)
{
float B = 2.0 * dot(v3Pos, v3Ray);
float C = fDistance2 - fRadius2;
float fDet = max(0.0, B*B - 4.0 * C);
return 0.5 * (-B - sqrt(fDet));
}
float scale(float fCos)
{
float x = 1.0 - fCos;
return fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
}
void main() {
// Get the ray from the camera to the vertex and its length (which
// is the far point of the ray passing through the atmosphere)
vec3 v3Pos = position.xyz;
vec3 v3Ray = v3Pos - v3CameraPos;
float fFar = length(v3Ray);
v3Ray /= fFar;
// Calculate the closest intersection of the ray with
// the outer atmosphere (point A in Figure 16-3)
float fNear = getNearIntersection(v3CameraPos, v3Ray, fCameraHeight*fCameraHeight, fOuterRadius*fOuterRadius);
// Calculate the ray's start and end positions in the atmosphere,
// then calculate its scattering offset
vec3 v3Start = v3CameraPos + v3Ray * fNear;
fFar -= fNear;
float fStartAngle = dot(v3Ray, v3Start) / fOuterRadius;
float fStartDepth = exp(-fInvScaleDepth);
float fStartOffset = fStartDepth * scale(fStartAngle);
// Initialize the scattering loop variables
float fSampleLength = fFar / fSamples;
float fScaledLength = fSampleLength * fScale;
vec3 v3SampleRay = v3Ray * fSampleLength;
vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;
// Now loop through the sample points
vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
for(int i=0; i<nSamples; i++) {
float fHeight = length(v3SamplePoint);
float fDepth = exp(fScaleOverScaleDepth * (fInnerRadius - fHeight));
float fLightAngle = dot(v3LightDir, v3SamplePoint) / fHeight;
float fCameraAngle = dot(v3Ray, v3SamplePoint) / fHeight;
float fScatter = (fStartOffset + fDepth * (scale(fLightAngle) * scale(fCameraAngle)));
vec3 v3Attenuate = exp(-fScatter * (v3InvWavelength * fKr4PI + fKm4PI));
v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
v3SamplePoint += v3SampleRay;
}
wPosition = (modelMatrix * vec4(position,1.0)).xyz;
c0.rgb = v3FrontColor * (v3InvWavelength * fKrESun);
c1.rgb = v3FrontColor * fKmESun;
t0 = v3CameraPos - v3Pos;
vUv = uv;
}
fragmentShader:
float getMiePhase(float fCos, float fCos2, float g, float g2){
return 1.5 * ((1.0 - g2) / (2.0 + g2)) * (1.0 + fCos2) / pow(1.0 + g2 - 2.0*g*fCos, 1.5);
}
// Rayleigh phase function
float getRayleighPhase(float fCos2){
//return 0.75 + 0.75 * fCos2;
return 0.75 * (2.0 + 0.5 * fCos2);
}
varying vec2 vUv;
varying vec3 wPosition;
varying vec4 c0;
varying vec4 c1;
varying vec3 t0;
uniform vec3 v3LightDir;
uniform float g;
uniform float g2;
void main() {
float fCos = dot(v3LightDir, t0) / length(t0);
float fCos2 = fCos * fCos;
gl_FragColor = getRayleighPhase(fCos2) * c0 + getMiePhase(fCos, fCos2, g, g2) * c1;
gl_FragColor = c1;
}
Chapter 16 of GPU Gems 2 has a nice explanation and illustrations for achieving your goal in real time.
Basically you need to perform ray casting through the atmosphere layer and evaluate the light scattering along the ray.
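One detail worth calling out (this follows O'Neil's sample code that the chapter is based on, so treat it as an assumption about your setup): the start of the cast ray differs depending on whether the camera is outside or inside the atmosphere shell, which is exactly the "from space" versus "from the ground" case. A minimal sketch of that branch inside main(), reusing the uniforms and getNearIntersection() from the vertex shader above:
vec3 v3Ray = v3Pos - v3CameraPos;
float fFar = length(v3Ray);
v3Ray /= fFar;
vec3 v3Start;
if (fCameraHeight > fOuterRadius) {
    // Camera in space: advance to where the ray first enters the atmosphere.
    float fNear = getNearIntersection(v3CameraPos, v3Ray,
                                      fCameraHeight * fCameraHeight,
                                      fOuterRadius * fOuterRadius);
    v3Start = v3CameraPos + v3Ray * fNear;
    fFar -= fNear;
} else {
    // Camera inside the atmosphere ("from the ground"): start at the camera.
    v3Start = v3CameraPos;
}
The scattering loop then proceeds from v3Start, as in your current code.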

edge detection on depth buffer [cel shading]

I am writing a cel-shading shader, but I'm having issues with edge detection. I am currently using the following code, which applies Laplacian edge detection to non-linear depth buffer values:
uniform sampler2D depth_tex;
void main(){
vec4 color_out;
float znear = 1.0;
float zfar = 50000.0;
float depthm = texture2D(depth_tex, gl_TexCoord[0].xy).r;
float lineAmp = mix( 0.001, 0.0, clamp( (500.0 / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) )/2.0), 0.0, 1.0 ) );// make the lines thicker at close range
float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
depthn = depthn / depthm;
float depths = texture2D(depth_tex, gl_TexCoord[0].xy - vec2( (0.002 + lineAmp)*0.625 , 0.0) ).r;
depths = depths / depthm;
float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + vec2(0.0 , 0.002 + lineAmp) ).r;
depthw = depthw / depthm;
float depthe = texture2D(depth_tex, gl_TexCoord[0].xy - vec2(0.0 , 0.002 + lineAmp) ).r;
depthe = depthe / depthm;
float Contour = -4.0 + depthn + depths + depthw + depthe;
float lineAmp2 = 100.0 * clamp( depthm - 0.99, 0.0, 1.0);
lineAmp2 = lineAmp2 * lineAmp2;
Contour = (512.0 + lineAmp2 * 204800.0 ) * Contour;
if(Contour > 0.15){
Contour = (0.15 - Contour) / 1.5 + 0.5;
} else
Contour = 1.0;
color_out.rgb = color_out.rgb * Contour;
color_out.a = 1.0;
gl_FragColor = color_out;
}
but it is hackish (note the lineAmp2), and details at large distances are lost. So I made up another algorithm (note that Laplacian edge detection is still in use):
1. Get 5 samples from the depth buffer: depthm, depthn, depths, depthw, depthe, where depthm is exactly where the processed fragment is, depthn is slightly to the top, depths is slightly to the bottom, etc.
2. Calculate their real coordinates in camera space (and convert them to linear depth along the way).
3. Compare each side sample to the middle sample by subtracting, normalize each difference by dividing by the distance between the two camera-space points, and add all four results. This should in theory help with the situation where, at large distances from the camera, two fragments are very close on screen but very far apart in camera space, which is fatal for linear depth testing.
where:
2.a Convert the non-linear depth to linear depth using the algorithm from http://stackoverflow.com/questions/6652253/getting-the-true-z-value-from-the-depth-buffer
exact code:
uniform sampler2D depthBuffTex;
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
void main(void)
{
float z_b = texture2D(depthBuffTex, vTexCoord).x;
float z_n = 2.0 * z_b - 1.0;
float z_e = 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
2.b Convert the screen coordinates to [tan a, tan b], where a is the horizontal angle and b is the vertical angle. There is probably better terminology involving spherical coordinates, but I don't know it yet.
2.c Create a 3D vector (converted screen coordinates, 1.0) and scale it by the linear depth. I assume this gives the estimated camera-space coordinates of the fragment. It looks like it does.
3.a Each difference is computed as: (depthm - sidedepth) / length(positionm - sideposition)
I may have messed up something at any point. The code looks fine, but the algorithm may not be, as I made it up myself.
My code:
uniform sampler2D depth_tex;
uniform vec2 distort; // assumed: screen-space offset defined elsewhere in the original shader
void main(){
vec4 color_out;
float znear = 1.0;
float zfar = 10000000000.0;
float depthm = texture2D(depth_tex, gl_TexCoord[0].xy + distort ).r;
depthm = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthm - 1.0 ) * (zfar - znear) ); //convert to linear
vec2 scorm = (gl_TexCoord[0].xy + distort) -0.5; //conversion to desired coordinates space. This line returns value from range (-0.5,0.5)
scorm = scorm * 2.0 * 0.5; // normalize to (-1, 1) and multiply by tan FOV/2, and default fov is IIRC 60 degrees
scorm.x = scorm.x * 1.6; //1.6 is aspect ratio 16/10
vec3 posm = vec3( scorm, 1.0 );
posm = posm * depthm; //scale by linearized depth
float depthn = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2( 0.002*0.625 , 0.0) ).r; //0.625 is aspect ratio 10/16
depthn = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthn - 1.0 ) * (zfar - znear) );
vec2 scorn = (gl_TexCoord[0].xy + distort + vec2( 0.002*0.625, 0.0) ) -0.5;
scorn = scorn * 2.0 * 0.5;
scorn.x = scorn.x * 1.6;
vec3 posn = vec3( scorn, 1.0 );
posn = posn * depthn;
float depths = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2( 0.002*0.625 , 0.0) ).r;
depths = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depths - 1.0 ) * (zfar - znear) );
vec2 scors = (gl_TexCoord[0].xy + distort - vec2( 0.002*0.625, 0.0) ) -0.5;
scors = scors * 2.0 * 0.5;
scors.x = scors.x * 1.6;
vec3 poss = vec3( scors, 1.0 );
poss = poss * depths;
float depthw = texture2D(depth_tex, gl_TexCoord[0].xy + distort + vec2(0.0 , 0.002) ).r;
depthw = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthw - 1.0 ) * (zfar - znear) );
vec2 scorw = ( gl_TexCoord[0].xy + distort + vec2( 0.0 , 0.002) ) -0.5;
scorw = scorw * 2.0 * 0.5;
scorw.x = scorw.x * 1.6;
vec3 posw = vec3( scorw, 1.0 );
posw = posw * depthw;
float depthe = texture2D(depth_tex, gl_TexCoord[0].xy + distort - vec2(0.0 , 0.002) ).r;
depthe = 2.0 * zfar * znear / (zfar + znear - ( 2.0 * depthe - 1.0 ) * (zfar - znear) );
vec2 score = ( gl_TexCoord[0].xy + distort - vec2( 0.0 , 0.002) ) -0.5;
score = score * 2.0 * 0.5;
score.x = score.x * 1.6;
vec3 pose = vec3( score, 1.0 );
pose = pose * depthe;
float Contour = ( depthn - depthm )/length(posm - posn) + ( depths - depthm )/length(posm - poss) + ( depthw - depthm )/length(posm - posw) + ( depthe - depthm )/length(posm - pose);
Contour = 0.25 * Contour;
color_out.rgb = vec3( Contour, Contour, Contour );
color_out.a = 1.0;
gl_FragColor = color_out;
}
The exact issue with the second code is that it exhibits some awful artifacts at larger distances.
My goal is to make either of them work properly. Are there any tricks I could use to improve precision/quality with both the linearized and the non-linearized depth buffer? Is anything wrong with my algorithm for the linearized depth buffer?
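For comparison, here is a minimal sketch of an edge detector that works directly on linearized depth (assuming the same zNear/zFar uniforms and vTexCoord varying as in the linearization snippet above, plus a texelSize uniform you would pass in as 1.0 / resolution; the 0.05 threshold is arbitrary). It is not a fix for the artifacts above, just a simpler baseline to test against:
precision highp float;
uniform sampler2D depth_tex;
uniform vec2 texelSize;  // assumed: 1.0 / screen resolution
uniform float zNear;
uniform float zFar;
varying vec2 vTexCoord;
// Convert a non-linear depth-buffer value to eye-space distance.
float linearize(float z_b) {
    float z_n = 2.0 * z_b - 1.0;
    return 2.0 * zNear * zFar / (zFar + zNear - z_n * (zFar - zNear));
}
void main() {
    float dm = linearize(texture2D(depth_tex, vTexCoord).r);
    float dn = linearize(texture2D(depth_tex, vTexCoord + vec2(0.0, texelSize.y)).r);
    float ds = linearize(texture2D(depth_tex, vTexCoord - vec2(0.0, texelSize.y)).r);
    float de = linearize(texture2D(depth_tex, vTexCoord + vec2(texelSize.x, 0.0)).r);
    float dw = linearize(texture2D(depth_tex, vTexCoord - vec2(texelSize.x, 0.0)).r);
    // Laplacian of linear depth, normalized by the centre depth so the
    // threshold behaves similarly at near and far range.
    float lap = (dn + ds + de + dw - 4.0 * dm) / dm;
    float edge = step(0.05, abs(lap));
    gl_FragColor = vec4(vec3(1.0 - edge), 1.0); // black outlines on white
}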
