Difficulty with proper layering in THREE.js scene

I am working on a hex-based game. I am currently trying to add a "fog of war" effect where certain tiles lie under an alpha mask to show that their information is unknown. Unfortunately I'm running into some problems achieving the effect I want. The way I'm implementing the fog is to create a mesh over all the tiles with an alpha of 0 where a tile is "visible" and 0.7 where it is not. I then adjust the mesh position based on the camera position so it always stays in perspective. This is the effect:
Unfortunately, the first way I tried to do this has an undesired effect at low viewing angles. Because I'm shifting the fog to lie over the tiles even as the perspective changes, at low angles it will also cover the tops of mountains and trees. See below:
The second thing I tried was the two-scene solution from "How to change the zOrder of object with Threejs?". I put the fog and the unseen tiles in one scene and the seen tiles in another, then rendered the seen tiles on top of the unseen ones. That solved the darkness problem for far tiles, but it introduces another problem for near tiles. See below:
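For reference, the render loop for this two-scene approach is roughly the following (a sketch; the scene names are placeholders):

renderer.autoClear = false;

function render() {
    renderer.clear();
    renderer.render(unseenScene, camera); // fog + unseen tiles first
    renderer.clearDepth();                // clear depth so the next pass draws on top
    renderer.render(seenScene, camera);   // seen tiles rendered over everything
}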
I'm a little stumped about what to do. I'm fairly new to THREE.js (at least to the more advanced parts of the library), so I'm wondering if there's something I'm missing that might work.
For reference, here's my vertex shader for the fog:
varying vec4 vColor;

void main() {
    // shift the fog toward the camera so it stays over the tiles in perspective
    vec3 cRel = cameraPosition - position;
    float dx = (20.0 * cRel.x) / cRel.y;
    float dz = (20.0 * cRel.z) / cRel.y;

    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position.x + dx, position.y, position.z + dz, 1.0);

    // a white vertex color marks a "seen" tile: fully transparent fog
    if (color.x == 1.0 && color.y == 1.0 && color.z == 1.0) {
        vColor = vec4(0.0, 0.0, 0.0, 0.0);
    } else {
        vColor = vec4(color, 0.7);
    }
}
and my fragment shader:
varying vec4 vColor;

// remaps alpha so it rises quickly and levels off near `max`
float expGradient(float val, float max) {
    return (max + 1.0 / 10.0) * val / (val + 1.0 / 10.0);
}

void main() {
    gl_FragColor = vec4(vColor.rgb, expGradient(vColor.w, 0.7));
}
I'm using the color (1.0, 1.0, 1.0) to signify that a tile is "seen".
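For context, the fog material setup these shaders assume is roughly this (a sketch; the variable names and exact options are mine):

// sketch of the fog material setup (names are placeholders)
var fogMaterial = new THREE.ShaderMaterial({
    vertexShader: fogVertexShader,     // the vertex shader above
    fragmentShader: fogFragmentShader, // the fragment shader above
    vertexColors: THREE.VertexColors,  // supplies the per-vertex `color` attribute
    transparent: true                  // required for the alpha blending to apply
});
scene.add(new THREE.Mesh(fogGeometry, fogMaterial));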

Related

Showing Point Cloud Structure using Lighting in Three.js

I am generating a point cloud representing a rock using Three.js, but I am facing a problem with visualizing its structure clearly. In the second screenshot below I would like to denote the topography of the rock, like the corner of the structure (shown better in the third screenshot), in a more explicit way, since I want to be able to maneuver around the rock and select different points. Some of my rocks are sparser (the structure is hard to see because the points are very far apart) and some are denser (the structure is hard to see from afar because the points are all mashed together, as in the first screenshot, even when closer to the rock), and finding a generalized way to approach this problem has been difficult.
I posted about this problem before here, thinking that representing the "depth" of the rock into the screen would suffice, but after attempting the proposed solution I still could not find a nice way to represent the topography. Is there a way to add a light source that my shaders can pick up on? I want to see whether I can color the points differently based on their orientation to the source. Using different software, a friend was able to produce the image below - is there a way to simulate this in Three.js?
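To illustrate the kind of thing I have in mind, here is a sketch of a simple Lambert-style term, assuming I could precompute a per-point normal (e.g. from nearest neighbors) and pass it in as an attribute - all the names here are hypothetical:

// hypothetical sketch: per-point diffuse term from a precomputed normal
attribute vec3 pointNormal;  // estimated normal per point (not in my code yet)
uniform vec3 lightDirection; // normalized light direction (hypothetical)
varying float vDiffuse;

void main() {
    // Lambert factor with a 0.2 ambient floor so back-facing points stay visible
    vDiffuse = max(dot(normalize(pointNormal), lightDirection), 0.0) * 0.8 + 0.2;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    gl_PointSize = 2.0;
}

The fragment shader would then output something like vec4(vColor * vDiffuse, 1.0).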
For context, I am using Points with a BufferGeometry and ShaderMaterial. Below is the shader code I currently have:
Vertex:
precision mediump float;

attribute float alpha;
varying vec3 vColor;
varying float vAlpha;
uniform float scale;

void main() {
    vAlpha = alpha;
    vColor = color;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    #ifdef USE_SIZEATTENUATION
        //bool isPerspective = ( projectionMatrix[ 2 ][ 3 ] == - 1.0 );
        //if ( isPerspective ) gl_PointSize *= ( scale / -mvPosition.z );
    #endif
    gl_PointSize = 2.0;
    gl_Position = projectionMatrix * mvPosition;
}
and
Fragment:
#ifdef GL_OES_standard_derivatives
#extension GL_OES_standard_derivatives : enable
#endif

precision mediump float;

varying vec3 vColor;
varying float vAlpha;
uniform vec2 u_depthRange;

float LinearizeDepth(float depth, float near, float far)
{
    float z = depth * 2.0 - 1.0; // back to NDC
    return (2.0 * near * far / (far + near - z * (far - near)) - near) / (far - near);
}

void main() {
    // make the point round by discarding fragments outside the unit circle
    vec2 cxy = 2.0 * gl_PointCoord.xy - 1.0;
    float r = dot(cxy, cxy);
    if (r > 1.0) {
        discard;
    }
    float lineardepth = LinearizeDepth(gl_FragCoord.z, u_depthRange[0], u_depthRange[1]);
    // reset back to a constant 1.0 instead of using the linearized depth above
    gl_FragColor = vec4(vColor, 1.0);
}
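The JavaScript side is set up along these lines (a sketch reconstructed from the attribute and uniform names above; the values are placeholders):

// sketch of the Points setup the shaders above assume
var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
geometry.setAttribute('alpha', new THREE.BufferAttribute(alphas, 1));

var material = new THREE.ShaderMaterial({
    vertexShader: vertexSrc,
    fragmentShader: fragmentSrc,
    vertexColors: true, // exposes the built-in `color` attribute
    uniforms: {
        scale: { value: 1.0 },
        u_depthRange: { value: new THREE.Vector2(camera.near, camera.far) }
    }
});
scene.add(new THREE.Points(geometry, material));

(Older three.js versions use addAttribute and THREE.VertexColors instead.)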
Thank you so much for your help!

How can I move only specific vertices from my vertex shader? (And how to choose them)

I created a square like this:
THREE.PlaneBufferGeometry(1, 1, 1, 50);
For its material I used a shader material:
THREE.ShaderMaterial()
In my vertexShader I call a 2D noise function that moves each vertex of my square, like this:
But in the end I only want the left side of my square to move. I think this should work if I only move the first 50 vertices, or every other vertex.
Here's the code of my vertexShader:
void main() {
    vUv = uv;
    vec3 pos = position.xyz;
    pos.x += noiseFunction(vec2(pos.y, time));
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
Does anyone know how I can select only the left-side vertices of my square? Thanks
The position vector holds the vertex position in local space, which means that the center of the quad is at (0, 0).
Therefore, if you want to apply these changes only to the vertices on the left side, you need to check whether the x coordinate of the vertex is negative.
void main() {
    vUv = uv;
    vec3 pos = position.xyz;
    if ( pos.x < 0.0 ) {
        pos.x += noiseFunction(vec2(pos.y, time));
    }
    // to avoid conditional branching, remove the entire if-block
    // and replace it with the line below
    // pos.x += noiseFunction(vec2(pos.y, time)) * max(sign(-pos.x), 0.0);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(pos, 1.0);
}
I've used an if-statement to make clear what I meant, but in reality you should avoid it: that way you prevent conditional branching on the GPU.
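For completeness, a minimal sketch of the JavaScript wiring this assumes - a time uniform updated every frame, with noiseFunction being whatever 2D noise the shader already defines:

// sketch: drive the `time` uniform from the render loop
var material = new THREE.ShaderMaterial({
    uniforms: { time: { value: 0.0 } },
    vertexShader: vertexSrc,   // the shader above
    fragmentShader: fragmentSrc
});
scene.add(new THREE.Mesh(new THREE.PlaneBufferGeometry(1, 1, 1, 50), material));

function animate(t) {
    material.uniforms.time.value = t * 0.001; // milliseconds to seconds
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
requestAnimationFrame(animate);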

Partially transparent shader occluding objects in THREE.js

I am making a game with a fog-of-war layer covering the board. I want a cursor that shows up when the player mouses over a tile, and I'm implementing it with a glow effect around the tile, also done with a shader.
I'm running into a strange issue: the glow effect works fine for positive x values (with the camera at x = -250, y = 250), but I can't see it for negative x values unless the camera is rotated to almost completely vertical (or I move the camera underneath the fog-of-war layer).
It's hard to explain, so I've made a CodePen demonstrating the problem: https://codepen.io/jakedluhy/pen/QqzajN?editors=0010
I'm pretty new to custom shaders, so any insight or help would be appreciated. Here are the shaders for the fog of war:
// Vertex
varying vec4 vColor;

void main() {
    // shift the fog relative to the camera so it stays over the board
    vec3 cRel = cameraPosition - position;
    float dx = (20.0 * cRel.x) / cRel.y;
    float dz = (20.0 * cRel.z) / cRel.y;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position.x + dx, position.y, position.z + dz, 1.0);
    vColor = vec4(0.0, 0.0, 0.0, 0.7);
}

// Fragment
varying vec4 vColor;

void main() {
    gl_FragColor = vColor;
}
And the shaders for the "glow":
// Vertex
varying vec4 vColor;
attribute float alpha;

void main() {
    vColor = vec4(color, alpha);
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}

// Fragment
varying vec4 vColor;

void main() {
    gl_FragColor = vColor;
}
The math in the vertex shader for the fog of war keeps the fog in a position relative to the game board.
Tagging THREE.js and glsl because I'm not sure whether this is a THREE.js-specific problem or not...
Edit: version 0.87.1
Your example looks pretty weird. Setting depthWrite: false on your fog material makes the two boxes render.
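A minimal sketch of that change, assuming the fog uses a ShaderMaterial as above:

// sketch: transparent fog that no longer writes to the depth buffer
var fogMaterial = new THREE.ShaderMaterial({
    vertexShader: fogVertexShader,
    fragmentShader: fogFragmentShader,
    transparent: true,
    depthWrite: false // fog is still depth-tested, but can't occlude later draws
});

With depthWrite: false the fog still tests against the depth buffer, but it no longer blocks objects drawn after it.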

THREE.JS GLSL sprite always front to camera

I'm creating a glow effect for car stop lights and found a shader that makes it possible to always face the camera:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main() {
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * -viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
This solution is quite simple and almost works. It reacts to camera movement, which is great. BUT this element is a child of a car; the car itself moves around, and when it rotates, the material stops pointing directly at the camera.
I don't want to use SpritePlugin or LensFlarePlugin because they slow my game down by 20 fps, so I'll stick with this lightweight solution.
I found a solution for Direct3D where you remove the rotation data from the transformation matrix, but I don't know how to do that in THREE.js.
I suspect that instead of adding calculations involving the car's transformation, there must be a way to simplify this shader instead.
How can I simplify this shader so the material always faces the camera?
From the link below: "To do spherical billboarding, just remove all rotations by setting the identity matrix". How do I do this with a ShaderMaterial in THREE.js?
http://www.geeks3d.com/20140807/billboarding-vertex-shader-glsl/
The problem, I think, is intercepting the transformation matrix from the ShaderMaterial before it's passed to the shader, but I'm not sure.
Probably irrelevant, but here's the fragment shader too:
uniform vec3 glowColor;
varying float intensity;

void main() {
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
Edit: for now I have found a workaround, which is eliminating the parent's rotation influence by setting the opposite (conjugate) quaternion. Not perfect, and it happens on the CPU, not the GPU.
// the conjugate (negate x, y, z, keep w) is the inverse of a unit quaternion,
// so it cancels the parent's rotation
this.quaternion._x = -this.parent.quaternion._x;
this.quaternion._y = -this.parent.quaternion._y;
this.quaternion._z = -this.parent.quaternion._z;
this.quaternion._w = this.parent.quaternion._w;
Are you looking for an implementation of billboarding (making a 2D sprite always face the camera)? If so, all you need is this:
"vec3 billboard(vec2 v, mat4 view){",
" vec3 up = vec3(view[0][1], view[1][1], view[2][1]);",
" vec3 right = vec3(view[0][0], view[1][0], view[2][0]);",
" vec3 p = right * v.x + up * v.y;",
" return p;",
"}"
v is the offset from the center: basically the 4 vertices of a plane facing the z-axis, e.g. (1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), and (-1.0, -1.0).
Use it like so:
"vec3 worldPos = billboard(a_offset, u_view);"
// then do whatever else.
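Put together, a minimal vertex shader around it could look like this (a sketch; u_center and u_size are hypothetical uniforms, and in THREE.js the built-in viewMatrix can serve as u_view):

attribute vec2 a_offset;  // corner offset: (±1.0, ±1.0)
uniform mat4 u_view;      // camera view matrix
uniform vec3 u_center;    // sprite center in world space (hypothetical)
uniform float u_size;     // sprite half-size (hypothetical)

vec3 billboard(vec2 v, mat4 view) {
    vec3 up = vec3(view[0][1], view[1][1], view[2][1]);
    vec3 right = vec3(view[0][0], view[1][0], view[2][0]);
    return right * v.x + up * v.y;
}

void main() {
    vec3 worldPos = u_center + billboard(a_offset, u_view) * u_size;
    gl_Position = projectionMatrix * u_view * vec4(worldPos, 1.0);
}

Because the corners are built from the camera's right and up axes in world space, the parent's rotation never enters the calculation.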

Low shader performance on iPad 1st gen

I have my painting application, which is written using OpenGL ES 1.0 and some Quartz.
I'm trying to rewrite it using OpenGL ES 2.0 for better performance and new features.
I have written two shaders: one renders the user's input to a texture, and the second mixes this texture with some other textures according to some rules.
Then I realized that the second shader takes too long on the 1st-generation iPad - I get only 10-15 fps. The iPad 2 runs it perfectly at 60+ fps. I was slightly shocked, because the original (OpenGL ES 1.0) app works fine on both devices, and it renders only two polygons (though almost fullscreen).
I've tried some optimizations, like changing precision, commenting out some math operations, and hardcoding some texture calls. It helped a little, but I'm still far from 60 fps. Only when I fully comment out the call to this shader do I get 60 fps.
Am I missing something? I don't have much experience with OpenGL, but I believe this shader should run fine on both generations of devices, just like the original application does. My vertex and fragment shaders are:
===============Vertex Shader===================
uniform mat4 modelViewProjectionMatrix;
attribute vec3 position;
attribute vec2 texCoords;
varying vec2 fTexCoords;

void main()
{
    fTexCoords = texCoords;
    vec4 postmp = vec4(position.xyz, 1.0);
    gl_Position = modelViewProjectionMatrix * postmp;
}
===============Fragment Shader===================
precision highp float;

varying lowp vec4 colorVarying;
varying highp vec2 fTexCoords;

uniform sampler2D texture;        // black & white image the user should paint
uniform sampler2D drawingTexture; // texture with user drawings I rendered earlier
uniform sampler2D paperTexture;   // texture of a sheet of paper
uniform float currentArea;        // which area we should not shadow
uniform float isShadowingOn;      // bool - should we shadow some areas of the picture

void main()
{
    // I pass a 1024*1024 texture here but only need 560*800,
    // so I do some calculations to find the real texture coordinates
    vec2 convertedTexCoords = vec2(fTexCoords.x * 560.0 / 1024.0, fTexCoords.y * 800.0 / 1024.0);
    vec4 bgImageColor = texture2D(texture, convertedTexCoords);
    float area = bgImageColor.a;
    bgImageColor.a = 1.0;
    vec4 paperColor = texture2D(paperTexture, convertedTexCoords);
    vec4 drawingColor = texture2D(drawingTexture, convertedTexCoords);

    // if special area
    if (abs(area - 1.0) < 0.0001) {
        // if shadowing is ON
        if (isShadowingOn == 1.0) {
            // if color of original image is black
            if ((bgImageColor.r < 0.1) && (bgImageColor.g < 0.1) && (bgImageColor.b < 0.1)) {
                gl_FragColor = vec4(bgImageColor.rgb, 1.0) * vec4(0.5, 0.5, 0.5, 1.0);
            }
            // if color of original image is grey
            else if (abs(bgImageColor.r - bgImageColor.g) < 0.15 && abs(bgImageColor.r - bgImageColor.b) < 0.15 && abs(bgImageColor.g - bgImageColor.b) < 0.15
                     && bgImageColor.r < 0.8 && bgImageColor.g < 0.8 && bgImageColor.b < 0.8) {
                gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb * 0.4 - drawingColor.rgb * 0.4, 1.0);
            }
            else {
                gl_FragColor = vec4(bgImageColor.rgb, 1.0) * vec4(0.5, 0.5, 0.5, 1.0);
            }
        }
        // if shadowing is OFF
        else {
            // if color of original image is black
            if ((bgImageColor.r < 0.1) && (bgImageColor.g < 0.1) && (bgImageColor.b < 0.1)) {
                gl_FragColor = vec4(bgImageColor.rgb, 1.0);
            }
            // if color of original image is gray
            else if (abs(bgImageColor.r - bgImageColor.g) < 0.15 && abs(bgImageColor.r - bgImageColor.b) < 0.15 && abs(bgImageColor.g - bgImageColor.b) < 0.15
                     && bgImageColor.r < 0.8 && bgImageColor.g < 0.8 && bgImageColor.b < 0.8) {
                gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb * 0.4 - drawingColor.rgb * 0.4, 1.0);
            }
            // rest
            else {
                gl_FragColor = vec4(bgImageColor.rgb, 1.0);
            }
        }
    }
    // if area of fragment is equal to current area
    else if (abs(area - currentArea / 255.0) < 0.0001) {
        gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb - drawingColor.rgb, 1.0);
    }
    // if area of fragment is NOT equal to current area
    else {
        if (isShadowingOn == 1.0) {
            gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb - drawingColor.rgb, 1.0) * vec4(0.5, 0.5, 0.5, 1.0);
        } else {
            gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb - drawingColor.rgb, 1.0);
        }
    }
}
Branching is really expensive in a shader, as it removes opportunities for the GPU to run the shader in parallel, and you have a lot of branches in your fragment shader (the one shader that should be as fast as possible anyway). Even worse, you are branching on values computed on the GPU itself, which drains performance even further.
You really should try to remove as many branches as possible. Rather, let the GPU do some "extra work" by, for example, not trying to optimize the texture atlas and rendering everything (if this is possible); this will still be faster than your current version. If that doesn't work, try to split your shader into multiple smaller shaders that each do only a specific part of the larger shader, and branch on the CPU rather than on the GPU (you only need to do this once per draw call and not for every "pixel").
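One hypothetical way to do that split without duplicating the whole source is to compile two program variants from the same shader via a preprocessor define, then pick the program on the CPU per draw call:

// compile this source twice: once with "#define SHADOWING_ON" prepended,
// once without, and bind the matching program per draw call on the CPU
#ifdef SHADOWING_ON
    const vec4 shade = vec4(0.5, 0.5, 0.5, 1.0);
#else
    const vec4 shade = vec4(1.0);
#endif
// ...compute the fragment color as before, then:
// gl_FragColor = baseColor * shade;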
Beyond JustSid's valid point about branching in the shader, I see a few other things wrong here. First, if I just run this fragment shader through Imagination Technologies' PVRUniSco Editor (which you really should get; it's part of their free SDK), I see this:
which shows a best-case performance of 42 cycles and a worst case of 52 for this shader. From a similar case of fragment shader tuning I asked about, I found that an 11-16 cycle fragment shader took 35-68 ms to render on an iPad 1 (15-29 FPS). You're going to need to make this a lot tighter to get reasonable render times.
To eliminate some of the branches, you might be able to use a step function or play tricks with your alpha channel. I've done this and seen a massive reduction in shader rendering times. I would not pass in the isShadowingOn uniform; instead, I would split this into two shaders to use in the two cases of shadowing being on and off.
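For example, the "is the original color black" test could be flattened roughly like this (a sketch, not a drop-in replacement for the full branch tree):

// branch-free sketch: step(a, b) is 1.0 when b >= a,
// so isBlack is 1.0 only when all three channels are <= 0.1
float isBlack = step(bgImageColor.r, 0.1)
              * step(bgImageColor.g, 0.1)
              * step(bgImageColor.b, 0.1);
vec4 blackResult = vec4(bgImageColor.rgb * 0.5, 1.0);
vec4 otherResult = vec4(paperColor.rgb * bgImageColor.rgb - drawingColor.rgb, 1.0);
gl_FragColor = mix(otherResult, blackResult, isBlack);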
Beyond branching, I can see that you're performing dependent texture reads for bgImageColor, paperColor, and drawingColor, as a result of calculating the texture coordinates to fetch within your fragment shader. This is horribly expensive on the tile-based deferred renderers in iOS devices, because it prevents certain texture-fetch optimizations from being used. Instead of calculating this per fragment, I recommend moving the calculation to the vertex shader and passing the result in as a varying to your fragment shader. Use that varying as the coordinate to fetch your textures and you'll see a massive boost in performance.
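Concretely, something like this (a sketch based on the shaders above):

// vertex shader: compute the scaled coordinates once per vertex
attribute vec2 texCoords;
varying vec2 vConvertedTexCoords;

void main()
{
    vConvertedTexCoords = texCoords * vec2(560.0 / 1024.0, 800.0 / 1024.0);
    // ...transform the position as before...
}

// fragment shader: fetch with the varying directly - no dependent read
// vec4 bgImageColor = texture2D(texture, vConvertedTexCoords);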
There are also smaller things you can do to tweak this. For example,
gl_FragColor = vec4((paperColor.rgb * bgImageColor.rgb - drawingColor.rgb) * 0.4, 1.0);
should be slightly faster than
gl_FragColor = vec4(paperColor.rgb * bgImageColor.rgb * 0.4 - drawingColor.rgb * 0.4, 1.0);
The editor will live-compile your shader, so you can try out these manipulations in code and see the results in terms of estimated GPU cycles.
