How to implement a ShaderToy shader in three.js?

I'm looking for info on how to recreate the ShaderToy parameters iGlobalTime, iChannel etc. within three.js. I know that iGlobalTime is the time elapsed since the shader started, and I think the iChannel uniforms are for pulling RGB out of textures, but I would appreciate info on how to set these.
Edit: I have been going through all the shaders that come with the three.js examples and think the answers are all in there somewhere; I just have to find the equivalent of, e.g., iChannel1 = a texture input, etc.

I am not sure if you have already answered your question, but it might be good for others to know the steps for integrating Shadertoys into three.js.
First, you need to know that a Shadertoy is a fragment shader. That being said, you have to set up a "general purpose" vertex shader that works with any Shadertoy fragment shader.
Step 1
Create a "general purpose" vertex shader
varying vec2 vUv;

void main()
{
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
This vertex shader is pretty basic. Notice that we defined a varying variable vUv to pass the texture coordinates to the fragment shader. This is important because we are not going to use the screen resolution (iResolution) as the basis for rendering; we will use the texture coordinates instead. This makes it possible to integrate multiple Shadertoys on different objects in the same three.js scene.
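One simple way to get this source into JavaScript, where Step 4 below refers to it as vshader, is a template string (a script tag or a separate file works just as well):

var vshader = `
    varying vec2 vUv;
    void main() {
        vUv = uv;
        vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
        gl_Position = projectionMatrix * mvPosition;
    }
`;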
Step 2
Pick the Shadertoy you want and create the fragment shader. (I have chosen a simple toy that performs well: Simple tunnel 2D by niklashuss.)
Here is the given code for this toy:
void main(void)
{
    vec2 p = gl_FragCoord.xy / iResolution.xy;
    vec2 q = p - vec2(0.5, 0.5);
    q.x += sin(iGlobalTime * 0.6) * 0.2;
    q.y += cos(iGlobalTime * 0.4) * 0.3;
    float len = length(q);
    float a = atan(q.y, q.x) + iGlobalTime * 0.3;
    float b = atan(q.y, q.x) + iGlobalTime * 0.3;
    float r1 = 0.3 / len + iGlobalTime * 0.5;
    float r2 = 0.2 / len + iGlobalTime * 0.5;
    float m = (1.0 + sin(iGlobalTime * 0.5)) / 2.0;
    vec4 tex1 = texture2D(iChannel0, vec2(a + 0.1 / len, r1));
    vec4 tex2 = texture2D(iChannel1, vec2(b + 0.1 / len, r2));
    vec3 col = vec3(mix(tex1, tex2, m));
    gl_FragColor = vec4(col * len * 1.5, 1.0);
}
Step 3
Customize the Shadertoy raw code into a complete GLSL fragment shader.
The first things missing from the code are the uniform and varying declarations. Add them at the top of your fragment shader file (just copy and paste the following):
uniform float iGlobalTime;
uniform sampler2D iChannel0;
uniform sampler2D iChannel1;
varying vec2 vUv;
Note that only the Shadertoy variables used in this sample are declared, plus the varying vUv previously declared in our vertex shader.
The last thing we have to tweak is the UV mapping, now that we have decided not to use the screen resolution. To do so, just replace the line that uses the iResolution uniform, i.e.:
vec2 p = gl_FragCoord.xy / iResolution.xy;
with:
vec2 p = -1.0 + 2.0 * vUv;
(Note that the original line yields coordinates in [0, 1], while -1.0 + 2.0 * vUv remaps them to [-1, 1]; if you want a result identical to Shadertoy's, use vec2 p = vUv; instead.)
That's it, your shaders are now ready for use in your three.js scenes.
Step 4
Your three.js code:
Set up the uniforms:
var tuniform = {
    iGlobalTime: { type: 'f', value: 0.1 },
    // THREE.ImageUtils.loadTexture is deprecated in recent three.js releases;
    // there, use new THREE.TextureLoader().load(...) instead.
    iChannel0: { type: 't', value: THREE.ImageUtils.loadTexture( 'textures/tex07.jpg' ) },
    iChannel1: { type: 't', value: THREE.ImageUtils.loadTexture( 'textures/infi.jpg' ) },
};
Make sure the textures are wrapping:
tuniform.iChannel0.value.wrapS = tuniform.iChannel0.value.wrapT = THREE.RepeatWrapping;
tuniform.iChannel1.value.wrapS = tuniform.iChannel1.value.wrapT = THREE.RepeatWrapping;
Create the material with your shaders and add it to a PlaneGeometry. The PlaneGeometry will simulate the 700x394 Shadertoy screen resolution; in other words, it will best preserve the work the artist intended to share.
var mat = new THREE.ShaderMaterial({
    uniforms: tuniform,
    vertexShader: vshader,
    fragmentShader: fshader,
    side: THREE.DoubleSide
});
var tobject = new THREE.Mesh( new THREE.PlaneGeometry(700, 394, 1, 1), mat );
Finally, in your update function, add the delta of a THREE.Clock() to the iGlobalTime value; do not assign the total elapsed time.
tuniform.iGlobalTime.value += clock.getDelta();
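For context, a minimal sketch of the full update loop (assuming renderer, scene and camera are already set up):

var clock = new THREE.Clock();

function animate() {
    requestAnimationFrame( animate );
    // getDelta() returns the seconds elapsed since the last call,
    // so iGlobalTime accumulates smoothly even if the frame rate varies.
    tuniform.iGlobalTime.value += clock.getDelta();
    renderer.render( scene, camera );
}
animate();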
That is it, you are now able to run most Shadertoys with this setup...

2022 edit: The version of Shaderfrog described below is no longer being actively developed. There are bugs in the compiler it uses, so it cannot parse all shaders correctly for import, and it doesn't support many of Shadertoy's features, like multiple image buffers. I'm working on a new tool if you want to follow along; otherwise you can try the following method, but it likely won't work most of the time.
Original answer follows:
This is an old thread, but there's now an automated way to do this. Simply go to http://shaderfrog.com/app/editor/new and on the top right click "Import > ShaderToy" and paste in the URL. If it's not public you can paste in the raw source code. Then you can save the shader (requires sign up, no email confirm), and click "Export > Three.js".
You might need to tweak the parameters a little after import, but I hope to have this improved over time. For example, ShaderFrog doesn't support audio or video inputs yet, but you can preview them with images instead.
Proof of concept:
ShaderToy https://www.shadertoy.com/view/MslGWN
ShaderFrog http://shaderfrog.com/app/view/247
Full disclosure: I am the author of this tool which I launched last week. I think this is a useful feature.

This is based on various sources, including the answer by @INF1.
Basically you insert the missing uniform variables from Shadertoy (iGlobalTime etc., see this list: https://www.shadertoy.com/howto) into the fragment shader, then you rename mainImage(out vec4 z, in vec2 w) to main(), and change z in the source code to gl_FragColor. In most Shadertoys z is called fragColor.
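As a minimal illustration of those renaming steps, here is a made-up Shadertoy-style shader before:

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    fragColor = vec4( uv, 0.5 + 0.5 * sin( iGlobalTime ), 1.0 );
}

and after conversion for three.js (note that Shadertoy declares iResolution as a vec3):

uniform float iGlobalTime;
uniform vec3 iResolution;

void main()
{
    vec2 uv = gl_FragCoord.xy / iResolution.xy;
    gl_FragColor = vec4( uv, 0.5 + 0.5 * sin( iGlobalTime ), 1.0 );
}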
I did this for two cool shaders from this guy (https://www.shadertoy.com/user/guil) but unfortunately I didn't get the marble example to work (https://www.shadertoy.com/view/MtX3Ws).
A working jsFiddle is here: https://jsfiddle.net/dirkk0/zt9dhvqx/
Change the shader from frag1 to frag2 in line 56 to see both examples.
And don't 'Tidy' in jsFiddle - it breaks the shaders.
EDIT:
https://medium.com/@dirkk/converting-shaders-from-shadertoy-to-threejs-fe17480ed5c6

Related

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to be able to denote the topography of the rock, like the corner of the structure (shown better in the third screenshot), in a more explicit way, as I want to be able to maneuver around the rock and select different points. I have rocks that are more sparse (harder to see the structure because the points are very far apart) and more dense (harder to see the structure from afar because the points are all mashed together, like the first screenshot, but even when closer to the rock), and finding a generalized way to approach this problem has been difficult.
I posted about this problem before here, thinking that representing the 'depth' of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography better. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can represent the colors differently based on their orientation to the source. Using different software, a friend was able to produce the image below; is there a way to simulate this in Three.js?
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;

varying vec3 vColor;
attribute float alpha;
varying float vAlpha;
uniform float scale;

void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
        //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
        //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
    #endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif
precision mediump float;

varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;

float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // Back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}

void main() {
    float r = 0.0, delta = 0.0, alpha = 1.0;
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    r = dot(cxy, cxy);
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    if (r > 1.0) {
        discard;
    }
    // Reset back to 1.0 instead of using the lineardepth method above
    gl_FragColor = vec4(vColor, 1.0);
}
Thank you so much for your help!

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes it possible to always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main() {
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * -viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works. It reacts to camera movement, which would be great, BUT this element is a child of a car. The car itself moves around, and when it rotates, the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow down my game by 20fps so I'll stick to this lightweight solution.
I found a solution for Direct3D where you remove the rotation data from the transformation matrix, but I don't know how to do this in THREE.js.
I guess that instead of adding calculations with car transformation there must be a way to simplify this shader instead.
How to simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How can I do that with ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem here, I think, is intercepting the transformation matrix from ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader as well:
uniform vec3 glowColor;
varying float intensity;

void main() {
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
edit: for now I found a workaround, which is eliminating the parent's rotation influence by setting the opposite quaternion. Not perfect, and it happens on the CPU, not the GPU:
// The inverse of a unit quaternion is its conjugate: negate x, y and z but keep w
// (negating all four components gives back the same rotation, not its inverse).
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = this.parent.quaternion._w;
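The same CPU-side workaround can be written more directly with the quaternion API (a sketch; the method is invert() in current three.js, inverse() in older releases):

// Copy the parent's rotation and invert it, cancelling the inherited rotation.
this.quaternion.copy( this.parent.quaternion ).invert();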
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need to do is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center, basically the 4 vertices of a plane that faces the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
vec3 worldPos = billboard(a_offset, u_view);
// then do whatever else.
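To see the helper in context, here is a sketch of a complete vertex shader built around it. Everything not built into three.js is an assumption: a_offset is a hypothetical custom attribute holding the quad corners, u_scale is a made-up uniform for sprite size, and all four vertices are assumed to carry the sprite's center in position, so the quad's shape comes entirely from the offset:

attribute vec2 a_offset; // hypothetical custom attribute: corner offsets, e.g. (1.0, 1.0) etc.
uniform float u_scale;   // hypothetical uniform: sprite size in world units

vec3 billboard(vec2 v, mat4 view) {
    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);
    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
    return right * v.x + up * v.y;
}

void main() {
    // modelMatrix keeps the car's translation for the sprite center...
    vec4 worldCenter = modelMatrix * vec4( position, 1.0 );
    // ...while the offset is built from the view matrix alone, so the parent's
    // rotation never reaches it and the quad always faces the camera.
    vec3 offset = billboard( a_offset * u_scale, viewMatrix );
    gl_Position = projectionMatrix * viewMatrix * vec4( worldCenter.xyz + offset, 1.0 );
}

modelMatrix, viewMatrix, projectionMatrix and position are supplied automatically by ShaderMaterial.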

How to get fullscreen texture coordinates for a fullscreen texture from a previous rendering pass?

I do two rendering passes in a WebGL application using three.js (contrived example here):
renderer.render(depthScene, camera, depthTarget);
renderer.render(scene, camera);
The first rendering pass is to the render target depthTarget which I want to access in the second rendering pass as a texture uniform:
uniform sampler2D tDepth;

float unpack_depth( const in vec4 rgba_depth ) { ... }

void main() {
    vec2 screenTexCoord = vec2( 1.0, 1.0 );
    float depth = 1.0 - unpack_depth( texture2D( tDepth, screenTexCoord ) );
    gl_FragColor = vec4( vec3( depth ), 1.0 );
}
My question is how do I get the value for screenTexCoord? It is not gl_FragCoord.xy.
To avoid a possible misunderstanding: I don't want to render the texture from the first pass to a quad. I want to use the texture from the first pass while rendering the geometry in the second pass.
EDIT:
According to the WebGL specification gl_FragCoord contains window coordinates which are normalized device coordinates (ndc) scaled by the viewport. The ndc are within [-1, 1] so the following should yield coordinates within [0, 1] for texture lookup:
vec2 ndcXY = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
vec2 screenTexCoord = (ndcXY+1.0)/2.0;
But I must be wrong somewhere, because the updated example still does not show the (packed) depth?!
Finally figured it out myself. The correct way to calculate the texture coordinates is just:
vec2 screenTexCoord = gl_FragCoord.xy / vec2( viewWidth, viewHeight );
See a working example here.
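For reference, here is a sketch of the JavaScript side that feeds those uniforms. The names mirror the shader above; in three.js of that era the render target itself was passed as the texture value, whereas recent versions expect depthTarget.texture:

var uniforms = {
    tDepth:     { type: 't', value: depthTarget }, // depthTarget.texture in recent three.js
    viewWidth:  { type: 'f', value: window.innerWidth },
    viewHeight: { type: 'f', value: window.innerHeight }
};

var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShader,     // any pass-through vertex shader
    fragmentShader: fragmentShader  // the fragment shader shown above
});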

Get position from depth texture

I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position by simply using the depth stored in the normal texture. You have to unproject the x and y and multiply them by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture, and this is what it looks like (this is currently working correctly).
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the w. In the SSAO shader I need to get the position, so this is how I'm doing it:
// viewport is a vec2 of the viewport width and height
// invProj is a mat4 holding the inverse projection matrix
// (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
    vec2 frag_coord = gl_FragCoord.xy / viewport;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth, eye_ray.y * srcDepth, eye_ray.z * srcDepth );
// Previously was doing this:
// vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However when I render out the positions it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by camera.far - camera.near (I was using the normalized depth value, but you need the world-space depth value).
I created a function to extract the position from the normal/depth texture like so:
First, in the depth capture pass (fragment shader):
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
/// <summary>
/// This function will get the 3d world position from the Normal texture containing depth in its w component
/// <summary>
vec3 get_world_pos( vec2 uv )
{
vec2 frag_coord = uv;
float depth = texture2D(tNormals, frag_coord).w;
float unprojDepth = depth * linearDepth - 1.0;
frag_coord = (frag_coord-0.5)*2.0;
vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
vec3 eye_ray = normalize((invProj * device_normal).xyz);
vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
return pos;
}
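For completeness, a sketch of how the uniforms used above might be supplied from the three.js side. The names mirror the shader; normalDepthTarget is a hypothetical render target holding normals in rgb and depth in w:

// Inverse projection matrix (getInverse() here; .copy(...).invert() in recent three.js)
var invProj = new THREE.Matrix4();
invProj.getInverse( camera.projectionMatrix );

var ssaoUniforms = {
    tNormals:    { type: 't',  value: normalDepthTarget }, // hypothetical render target
    invProj:     { type: 'm4', value: invProj },
    linearDepth: { type: 'f',  value: camera.far - camera.near } // cam.far - cam.near, as above
};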

Threejs Shader - gl_FragColor with Alpha (opacity not changing)

I'm trying to write a simple shader where half of my scene will be displayed and half of the scene will be transparent. I can't seem to figure out why the transparency isn't working:
uniform sampler2D tDiffuse;
varying vec2 vUv;

void main() {
    vec2 p = vUv;
    vec4 color;
    if (p.x < 0.5) {
        color = vec4(1.0, 0.0, 0.0, 0.0);
    } else {
        color = texture2D(tDiffuse, p);
    }
    gl_FragColor = color;
}
The shader is definitely running without errors: the right half of the screen is my three.js scene, and the left half of the screen is red (when it should really be transparent). I've read that maybe I need to call glBlendFunc(GL_SRC_ALPHA); but I am getting errors when I try this. To do this I did renderer.context.blendFuncSeparate(GL_SRC_ALPHA); in my main js file (not the shader). Am I supposed to place this somewhere else to make it work?
Any help would be appreciated. For reference, I'm applying my shader with the standard EffectComposer, ShaderPass, etc., which most three.js shader examples use.
Thanks in advance for your help!!!
It is difficult to help you with only partial information and code snippets, so I can only make educated guesses.
By default, EffectComposer uses a render target with RGB format. Did you specify RGBA?
Did you specify material.transparent = true?
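As a sketch of the first suggestion (API names from the r.5x era; recent three.js versions already default to RGBA):

// Give EffectComposer a render target that keeps an alpha channel.
var renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBAFormat // the RGB default would discard alpha
} );
var composer = new THREE.EffectComposer( renderer, renderTarget );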
three.js r.56
I had this problem and for me it was that the material didn't have transparency enabled.
let myMaterial = new THREE.ShaderMaterial({
    uniforms: myUniforms,
    fragmentShader: myFragmentShader(),
    vertexShader: myVertexShader(),
});
myMaterial.transparent = true;
