Three.js Get local position of vertex in shader, is that even what I need? - three.js

I am attempting to implement this technique of rendering grass into my three.js app.
http://davideprati.com/demo/grass/
On level terrain at y position 0, everything looks absolutely fantastic!
Problem is, my app (game) has the terrain modified by a heightmap so very few (if any) positions on that terrain are at y position 0.
It seems this vertex shader animation code assumes the grass object is sitting at y position 0 in order to work as intended:
if (pos.y > 1.0) {
    float noised = noise(pos.xy);
    pos.y += sin(globalTime * magnitude * noised);
    pos.z += sin(globalTime * magnitude * noised);
    if (pos.y > 1.7) {
        pos.x += sin(globalTime * noised);
    }
}
This condition works on the assumption that the terrain is flat and at y position 0, so that only vertices above the ground animate. Since with a heightmap almost all vertices end up above 1, some strange effects occur, such as grass sliding all over the place.
Is there a way to do this where I can specify a y position threshold based on the sprite itself rather than its world position? Or is there a better way altogether to deal with this "slidy" problem?
I am an extreme newbie when it comes to shader code =]
Any help would be greatly appreciated.
I have no idea what I'm doing.
Edit: Ok, I think the issue is that I am altering the y position of each mesh merged into the main grass container geometry based on the y position of the terrain it sits on. I guess the shader is looking at the local position, but since the geometry itself is vertically displaced, the shader doesn't know how to compensate. Hmm…
Ok, I made a fiddle that demonstrates the issue:
https://jsfiddle.net/titansoftime/a3xr8yp7/
Change the value on line# 128 to a 1 instead of 2 and everything looks fine. Not sure how to go about fixing this.
Also, I have no idea why the colors are doing that, they look fine in my app.

If I understood the question correctly:
You are right in asking for the "local" position. Let's say a single strand of grass is a narrow strip with some height segments.
If you want this to be modular, easy to scale and such, it would most likely extend in some direction in the 0-1 range. Let's say it has four segments along that direction, which would yield vertices with coordinates [0.0, 0.333, 0.666, 1.0]. This makes slightly more sense than an arbitrary range, because it's easy to reason that 0 is the ground and 1 is the tip of the blade.
This is the "local" or model space. When you multiply this with the modelMatrix you transform it to world space (call it localToWorld).
In the shader it could look something like this
void main(){
    vec4 localPosition = vec4( position, 1.);
    vec4 worldPosition = modelMatrix * localPosition;
    vec4 viewPosition = viewMatrix * worldPosition;
    vec4 projectedPosition = projectionMatrix * viewPosition; //either orthographic or perspective
    gl_Position = projectedPosition;
}
This is the classic "you have a scene graph node" setup, which you transform. Depending on what you set for your mesh's position, rotation and scale, vec4 worldPosition will be different, but the local position is always the same. You can't tell from the world value alone whether something is at the bottom or the top; any value is viable, since your terrain can be anything.
With this approach, you can write a shader and logic saying that if a vertex is at height of 0 (or less than some epsilon) don't animate.
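A minimal sketch of that check (the epsilon is arbitrary, and the wind logic is just a placeholder comment):

// inside main(), after computing localPosition as above:
// only vertices above the root of the blade get animated
if (localPosition.y > 0.001) {
    // apply the wind offset here, e.g. localPosition.x += sin(globalTime);
}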
So this brings us to some logic that works in some assumed space (you have a rule for 1.0 and 1.7).
Because you are translating the geometries and merging them, you no longer have this user-friendly model space. These blades may well skip the local-to-world transformation entirely (it may end up being just an identity matrix).
This obviously messes up your logic for selecting the vertices.
If you have to take the approach of distributing them like this, then you need another channel to carry the meaning of that local space, even if you only use it for the animation.
Two suitable channels already exist: UV and vertex color. UVs you can imagine as another flat mesh, in another space, that maps onto the mesh you are rendering. But in this particular case it seems like you could use a custom attribute, say a float aBladeHeight (a minimal sketch using it follows the UV example below).
void main(){
    vec4 worldPosition = vec4(position, 1.); //you "burnt/baked" this transformation in, so no need to go from local to world in the shader
    vec2 localPosition = uv; //grass in 2d, not transformed to your terrain
    //this check knows what's at the bottom of the grass
    //rather than what's on the ground (it has no idea where the ground is)
    if(localPosition.y > 0.0){
        //since local does not exist, the only space we work in is world
        //we apply the transformation in that space, but the filter
        //is the check above, in uv space, where we know what's the bottom and what's the top
        worldPosition.xy += myLogic();
    }
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
To mimic the "local space":
void main(){
    vec4 localSpace = vec4(uv, 0., 1.);
    gl_Position = projectionMatrix * modelViewMatrix * localSpace;
}
And all the blades would render overlapping each other.
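If you go the custom attribute route instead of UVs, a minimal sketch could look like this (aBladeHeight is an assumed float attribute you bake per vertex before merging, 0 at the root and 1 at the tip; the wind term is a placeholder, and position/projectionMatrix/viewMatrix are the usual three.js built-ins):

attribute float aBladeHeight; // 0 at the root, 1 at the tip, baked before merging
uniform float globalTime;

void main(){
    vec4 worldPosition = vec4(position, 1.); // the merged geometry is already in world space
    // animate everything that is not the root, no matter how high the terrain sits
    if(aBladeHeight > 0.0){
        worldPosition.xz += sin(globalTime) * aBladeHeight; // placeholder wind logic
    }
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}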
EDIT
With instancing the shader would look something like this:
attribute vec4 aInstanceMatrix0; //the instance transform packed into 3 vec4s (12 floats)
attribute vec4 aInstanceMatrix1;
attribute vec4 aInstanceMatrix2;
//attribute vec4 aInstanceMatrix3; //not needed: the last row is always 0,0,0,1, so the translation can be packed into the .w of the other three

void main(){
    vec4 localPos = vec4(position, 1.); //the local position is intact, it's the normalized 0-1 blade

    //do your thing in local space
    if(localPos.y > 0.0){ //or whatever threshold you use
        localPos.xz += myLogic();
    }

    //notice the difference: instead of using the modelMatrix, you use the instance attributes in its place.
    //with the packing above, each attribute holds one column of the rotation/scale part in .xyz
    //and one component of the translation in .w, and GLSL's mat4 constructor takes columns:
    mat4 localToWorld = mat4(
        vec4(aInstanceMatrix0.xyz, 0.),
        vec4(aInstanceMatrix1.xyz, 0.),
        vec4(aInstanceMatrix2.xyz, 0.),
        vec4(aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1.)
    );

    //you can still combine this with the modelMatrix if you want to move the ENTIRE hill
    //with all the grass via .position.set()
    vec4 worldPos = localToWorld * localPos;
    gl_Position = projectionMatrix * viewMatrix * worldPos;
}

Related

OpenGL simple antialiased polygon grid shader

How to make a test grid pattern with antialiased lines in a fragment shader?
I remember I found this challenging, so I'll post the answer here for my future self and for anyone who wants the same effect.
This shader is meant to be rendered "above" the already textured plane in a separate render call. The reason I'm doing that is that in my program I generate the texture of the surface through several render calls, slowly building it up layer by layer. Then I wanted to draw a simple black grid over it, so I make the last render call do this.
That's why the base color here is (0,0,0,0), basically nothing. Then I can use GL blending to overlay the result of this shader over whatever my texture is.
Note that you needn't do that separately. You can just as easily modify this code to display a certain color (like smooth grey) or even a texture of your choice. Simply pass the texture to the shader and modify the last line accordingly.
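For example, a minimal sketch of that modification (uBaseTexture and vTexCoord are assumed names, not part of my shaders below; lineAlpha is the per-line "diff" value the fragment shader further down computes):

uniform sampler2D uBaseTexture;
in highp vec2 vTexCoord;

// blend the grid line over your own texture instead of outputting a transparent base
vec4 gridOverTexture(vec3 lineColor, float lineAlpha)
{
    vec4 base = texture(uBaseTexture, vTexCoord);
    return vec4(mix(base.rgb, lineColor, lineAlpha), base.a);
}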
Also note that I use constants that I set up during shader compilation. Basically, I just load the shader string, but before passing it to the shader compiler I search and replace the __CONSTANT_SOMETHING with the actual value I want. Don't forget that it's all text, so you need to replace it with text, for example:
//java code
shaderCode = shaderCode.replaceFirst("__CONSTANT_SQUARE_SIZE", String.valueOf(GlobalSettings.PLANE_SQUARE_SIZE));
I'll share the code I use for anti-aliased grids; it might help with the complexity. All I've done is use the texture coordinates to paint a grid on a plane. I used GLSL's genType fract(genType x) to repeat texture space, then the absolute value function to essentially calculate each pixel's distance to the grid line. The rest of the operations interpret that as a color.
You can play with this code directly on Shadertoy.com by pasting it into a new shader.
If you want to use it in your code, the only lines you need are the part starting at the gridSize variable and ending with the grid variable.
iResolution.y is the screen height, uv is the texture coordinate of your plane.
gridSize and width should probably be supplied with a uniform variable.
void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // aspect correct pixel coordinates (for shadertoy only)
    vec2 uv = fragCoord / iResolution.xy * vec2(iResolution.x / iResolution.y, 1.0);
    // get some diagonal lines going (for shadertoy only)
    uv.yx += uv.xy * 0.1;
    // for every unit of texture space, I want 10 grid lines
    float gridSize = 10.0;
    // width of a line on the screen plus a little bit for AA
    float width = (gridSize * 1.2) / iResolution.y;
    // chop up into grid
    uv = fract(uv * gridSize);
    // abs version
    float grid = max(
        1.0 - abs((uv.y - 0.5) / width),
        1.0 - abs((uv.x - 0.5) / width)
    );
    // Output to screen (for shadertoy only)
    fragColor = vec4(grid, grid, grid, 1.0);
}
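If you want to use it outside Shadertoy, a minimal sketch with the sizes supplied as uniforms (uGridSize, uLineWidthPx and uViewportHeight are assumed names) could look like this:

uniform float uGridSize;       // grid cells per unit of texture space
uniform float uLineWidthPx;    // line width in pixels, plus a little margin for AA
uniform float uViewportHeight; // screen height in pixels

float gridMask(vec2 uv)
{
    // same math as above, just parameterized
    float width = (uGridSize * uLineWidthPx) / uViewportHeight;
    vec2 g = fract(uv * uGridSize);
    return clamp(max(1.0 - abs((g.y - 0.5) / width),
                     1.0 - abs((g.x - 0.5) / width)), 0.0, 1.0);
}

gridMask returns 1.0 on a line and fades to 0.0 within the AA width, so you can mix it with whatever base color or texture you like.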
Happy shading!
Here're my shaders:
Vertex:
#version 300 es
precision highp float;
precision highp int;

layout (location=0) in vec3 position;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
uniform vec2 coordShift;
uniform mat4 modelMatrix;

out highp vec3 vertexPosition;

const float PLANE_SCALE = __CONSTANT_PLANE_SCALE; //assigned during shader compilation

void main()
{
    // generate position data for the fragment shader
    // does not take view matrix or projection matrix into account
    // TODO: the +3.0 part is contingent on the actual mesh. It is supposed to be its lowest possible coordinate.
    // TODO: the mesh here is 6x6 with -3..3 coords. I normalize it to 0..6 for correct fragment shader calculations
    vertexPosition = vec3((position.x+3.0)*PLANE_SCALE+coordShift.x, position.y, (position.z+3.0)*PLANE_SCALE+coordShift.y);

    // position data for the OpenGL vertex drawing
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
Note that I calculate vertexPosition here and pass it to the fragment shader. This is so that my grid "moves" when the object moves. The thing is, in my app I have the ground basically stuck to the main entity. The entity (call it character or whatever) doesn't move across the plane or change its position relative to the plane. But to create the illusion of movement, I calculate the coordinate shift (relative to the square size) and use that to calculate the vertex position.
It's a bit complicated, but I thought I would include it. Basically, if the square size is set to 5.0 (i.e. we have a 5x5 meter square grid), then a coordShift of (0,0) would mean that the character stands in the lower left corner of the square; a coordShift of (2.5,2.5) would be the middle, and (5,5) would be the top right. After going past 5, the shift loops back to 0. Go below 0 and it loops back to 5.
So the grid only ever "moves" within one square, but because it is uniform, the illusion is that you're walking on an infinite grid surface.
Also note that you can make the same thing work with multi-layered grids, for example where every 10th line is thicker. All you really need to do is make sure your coordShift represents the largest distance your grid pattern shifts.
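A hedged sketch of that multi-layer idea, reusing the remainder-and-width approach from the fragment shader further below (the lineAlpha helper and the 10x factor are illustrative, not part of my actual code):

// alpha of the nearest grid line: 1.0 on the line, fading to 0.0 at distance "width"
float lineAlpha(float coord, float cellSize, float width)
{
    float remainder = mod(coord, cellSize);
    float dist = min(remainder, cellSize - remainder); // distance to the nearest line
    return clamp(1.0 - dist / width, 0.0, 1.0);
}

// then, inside main():
// float minor = lineAlpha(vertexPosition.z, squareSize, width);
// float major = lineAlpha(vertexPosition.z, squareSize * 10.0, width * 2.0); // every 10th line, thicker
// float line  = max(minor, major);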
Just in case someone wonders why I made it loop: it's for precision's sake. Sure, you could just pass the character's raw coordinate to the shader, and it'll work fine around (0,0), but as you get 10000 units away you will notice some serious precision glitches, like your lines getting distorted or even "fuzzy", as if they were made out of brushes.
Here's the fragment shader:
#version 300 es
precision highp float;

in highp vec3 vertexPosition;
out mediump vec4 fragColor;

const float squareSize = __CONSTANT_SQUARE_SIZE;
const vec3 color_l1 = __CONSTANT_COLOR_L1;

void main()
{
    // calculate derivatives
    // (must be done at the start, before conditionals)
    float dXy = abs(dFdx(vertexPosition.z)) / 2.0;
    float dYy = abs(dFdy(vertexPosition.z)) / 2.0;
    float dXx = abs(dFdx(vertexPosition.x)) / 2.0;
    float dYx = abs(dFdy(vertexPosition.x)) / 2.0;

    // find and fill horizontal lines
    int roundPos = int(vertexPosition.z / squareSize);
    float remainder = vertexPosition.z - float(roundPos)*squareSize;
    float width = max(dYy, dXy) * 2.0;

    if (remainder <= width)
    {
        float diff = (width - remainder) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    if (remainder >= (squareSize - width))
    {
        float diff = (remainder - squareSize + width) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    // find and fill vertical lines
    roundPos = int(vertexPosition.x / squareSize);
    remainder = vertexPosition.x - float(roundPos)*squareSize;
    width = max(dYx, dXx) * 2.0;

    if (remainder <= width)
    {
        float diff = (width - remainder) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    if (remainder >= (squareSize - width))
    {
        float diff = (remainder - squareSize + width) / width;
        fragColor = vec4(color_l1, diff);
        return;
    }

    // fill base color
    fragColor = vec4(0, 0, 0, 0);
    return;
}
It is currently built for 1-pixel-thick lines only, but you can control the thickness by adjusting "width".
Here, the first important part is the dFdx / dFdy functions. These are GLSL functions, and I'll simply say that they let you determine how much space in WORLD coordinates your fragment covers on the screen, based on the Z-distance of that spot on your plane.
Well, that was a mouthful. I'm sure you can figure it out if you read docs for them though.
Then I take the maximum of those outputs as the width. Basically, depending on the way your camera is looking, you want to "stretch" the width of your line a bit.
remainder is basically how far this fragment is from the line we want to draw, in world coordinates. If it's too far, we don't need to fill it.
If you simply take the max here, you will get a non-antialiased line 1 pixel wide. It'll basically look like a perfect 1-pixel line shape from MS Paint.
But by increasing the width, you make those straight segments stretch further and overlap.
You can see that I compare remainder with line width here. The greater the width - the bigger the remainder can be to "hit" it. I have to compare this from both sides, because otherwise you're only looking at pixels that are close to the line from the negative coord side, and discount the positive, which could still be hitting it.
Now, for the simple antialiasing effect, we need to make those overlapping segments "fade out" as they near their ends. For this purpose, I calculate the fraction to see how deeply the remainder is inside the line. When the fraction equals 1, this means that our line that we want to draw basically goes straight through the middle of the fragment that we're currently drawing. As the fraction approaches 0, it means the fragment is farther and farther away from the line, and should thus be made more and more transparent.
Finally, we do this from both sides for horizontal and vertical lines separately. We have to do them separately because dFdx / dFdy need to be different for vertical and horizontal lines, so we can't do them in one formula.
And at last, if we didn't hit any of the lines close enough - we fill the fragment with transparent color.
I'm not sure if that's THE best code for the task - but it works. If you have suggestions let me know!
P.S. The shaders are written for OpenGL ES, but they should work for OpenGL too.

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false color information I'm rendering with a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail will be lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run fast filter over neighboring textels and pick out the biggest/smallest/most interesting value to render. What I'm not sure how to do is to find out if (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid grasp on either the transforms and spaces enough to get the idea.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;

void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vPosition = (modelMatrix * vec4(position, 1.0)).xyz;
    gl_Position = projectionMatrix * mvPosition;
    vec3 transformedNormal = normalMatrix * vec3( normal );
    vNormal = normalize( transformedNormal );
    vViewDirection = normalize(mvPosition.xyz);
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed the perspective camera in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the red-est color of the corresponding array of textels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, vec2(inputtexture.g, inputtexture.r));
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one to the above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... // the u coordinate in pixels in the input texture
float y = ... // the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x, y);

// Extremum mapping:
if (pixel_ratio > 2.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if (pixel_ratio > 3.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, as it's a pre-rasterization stage and hence output-resolution agnostic, but in the fragment shader you can use dFdx, dFdy and fwidth (via the GL_OES_standard_derivatives extension, which is available pretty much everywhere) to estimate the sampling footprint.
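For example, a minimal sketch of that footprint estimate (uTextureResolution is an assumed uniform holding the texture size in texels; three.js injects the extension directive when you set extensions: { derivatives: true } on the ShaderMaterial):

precision highp float;

varying vec2 vUv;
uniform vec2 uTextureResolution;

void main() {
    // fwidth(vUv) is roughly how much the uv changes between neighboring screen pixels,
    // so this is how many texels one screen pixel covers in u and v
    vec2 texelsPerPixel = fwidth(vUv) * uTextureResolution;
    float footprint = max(texelsPerPixel.x, texelsPerPixel.y);
    // footprint > 1.0 means you are zoomed out and skipping texels; use it to widen the lookup
    gl_FragColor = vec4(vec3(clamp(footprint / 4.0, 0.0, 1.0)), 1.0); // visualized for debugging
}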
If you're not updating the texture in realtime a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

Three.js shader pointLight position

I'm trying to write my own shader and add light sources; I sort of figured it out and did it.
But there is a problem: the position of the light source is determined incorrectly, and when the camera rotates or moves, something unimaginable happens.
So I get the position in the vertex shader:
vec3 vGlobalPosition = (modelMatrix * vec4(position, 1.0)).xyz;
Now I'm trying to make an illuminated area:
float lightDistance = pointLights[ i ].distance;
vec3 lightPosition = pointLights[ i ].position;
float diffuseCoefficient = max(
    1.0 - (distance(lightPosition, vGlobalPosition) / lightDistance), 0.0);
gl_FragColor.rgb += color.rgb * diffuseCoefficient;
But as I wrote earlier, if you rotate the camera, the illuminated area moves to different positions.
I set the light position manually and everything became normal.
vec3 lightPosition = vec3(2000,0,2000);
...
The question is: how do I get the right position of the light source? I need a global position; I do not know what space the position contained in the light source is in.
Added an example: http://codepen.io/korner/pen/XMzEaG
Your problem lies with vPos. Currently you do:
vPos = (modelMatrix * vec4(position, 1.0)).xyz
Instead you need to multiply the position with modelViewMatrix:
vPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
You need to use modelViewMatrix because PointLight.position is relative to the camera.
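A minimal sketch of that view-space setup, using the vPos name from above (the fragment-side distance test stays the same, since vPos and pointLights[ i ].position are then both in view space):

varying vec3 vPos;

void main() {
    // view space: the same space three.js uses for pointLights[ i ].position
    vPos = (modelViewMatrix * vec4(position, 1.0)).xyz;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}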

GLSL Shader: Mapping Bars in Polar-Coordinates

I'd like to create a polar representation of this shader: https://www.shadertoy.com/view/4sfSDN
So that it looks like in this screenshot:
http://postimg.org/image/uwc34jxxz/
I know the basics of the polar system: how to calculate r and φ. But I can only use those values with a texture2D() lookup function on an image.
When I only have an amplitude value like in the shader above, I don't get it working.
r should somehow be based on the amplitude, but then I don't know how to draw the circle without the texture2D() function... I can draw a circle with r only, but then there are no different amplitudes. Or do I even need to fill a matrix with the generated bars in a loop and load the circle from there?
I'm quite sure it is possible, given the insane shaders on Shadertoy, but I don't quite get it...
Can anyone point me to a solution?
From the shader you posted I think it should be enough to simply transform the uv to polar coordinates.
So what you are looking for are angle and radius from the center. First let us transform the uv so it gives the vector pointing from the center:
uv = fragCoord - (iResolution*.5);
Next try to normalize it. Since the view is not square, the normalization should only be by one coordinate, such that:
if(iResolution.x > iResolution.y)
{
    uv = uv/iResolution.y;
}
else
{
    uv = uv/iResolution.x;
}
This will kind of produce a fit effect, but you may just hard code one or the other if you need to. min can be used if available (uv = uv/min(iResolution.x, iResolution.y)) to remove the condition.
So at this point the uv vector points from the center toward the pixel position in a coordinate system that is normalized in one dimension.
Now to get the angle you may simply use atan(uv.y, uv.x). To get the radius you then need length(uv).
The radius in your case will be in the range [0, .5] along the shorter dimension, so you may multiply it by 2.0. This is a factor you can later tweak so that the maximum value does not hit the border but sits at maybe 80% or so (just play around with it).
The angle is in the range [-Pi, Pi], and the docs say it does not work for x = 0, which you will need to handle yourself. The angle must then be transformed into the range [0.0, 1.0] to be used as a texture coordinate:
angle = angle/(Pi*2.0) + .5
So now construct the new uv
uv = vec2(angle, radius)
And use the same shader you did before.
You will also need to keep in mind that the radius may be larger than 1.0 in the corners, which may produce a wrong texture access. In such cases it would be best to discard the fragment.
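For example, a sketch of that guard, placed right after the radius is computed in the shader below:

// corners of the screen lie outside the unit disc; skip them instead of
// sampling the bar data with an out-of-range coordinate
if (radius * 2.0 > 1.0) {
    discard;
}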
From the shader toy:
#define M_PI 3.1415926535897932384626433832795

void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy - (iResolution.xy*.5);
    uv = uv/min(iResolution.x, iResolution.y);

    float angle = atan(uv.y, uv.x);
    angle = angle/(M_PI*2.0) + .5;

    float radius = length(uv);
    uv = vec2(angle, radius*2.0);

    float bars = 24.;
    float fft = texture2D( iChannel0, vec2(floor(uv.x*bars)/bars, 0.25) ).x;
    float amp = (fft - uv.y)*100.;
    fragColor = vec4(amp, 0., 0., 1.0);
}

GLSL: simulating 3D texture with 2D texture

I came up with some code that simulates a 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64 and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable

varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;

vec4 flat_texture3D()
{
    vec3 p = pos;
    vec2 inimg = p.xy;
    int d = int(p.z*128.0);
    float ix = (d % 8);
    float iy = (d / 8);
    vec2 oc = inimg + vec2(ix, iy);
    oc *= 0.125;
    return texture2D(noisef, oc);
}

void main (void)
{
    vec4 noisevec = flat_texture3D();
    gl_FragColor = noisevec;
}
The tiling logic seems to work ok and there is only one problem with this code. It looks like this:
There are strange 1 to 2 pixel wide streaks between the layers of voxels.
The streaks appear just at the border when d changes.
I've been working on this for two days now and still have no idea what's going on here.
This looks like a texture filter issue. Think about it: when you come close to the border, the bilinear filter will consider the neighboring texel, which in your case comes from another "depth layer".
To avoid this, you can clamp the texture coords so that they never go outside the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). But be aware that the problem will get worse when using mipmapping. Also be aware that you currently cannot filter in the z direction, as a real 3D texture would. You could simulate this manually in the shader, of course.
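A minimal sketch of both suggestions, reusing the names from the question (the half-texel size assumes the 128x128 tiles with no mipmapping, and the layer scale is written for 64 layers in [0,1], so adjust it to however p.z is encoded in your data):

vec4 flat_texture3D_filtered()
{
    vec3 p = pos;

    // keep samples at least half a texel away from the tile border so the
    // bilinear filter never reads from the neighboring tile
    float halfTexel = 0.5 / 128.0;
    vec2 inimg = clamp(p.xy, halfTexel, 1.0 - halfTexel);

    // pick the two nearest depth layers out of 64
    float z  = clamp(p.z, 0.0, 1.0) * 63.0;
    float d0 = floor(z);
    float d1 = min(d0 + 1.0, 63.0);

    vec2 oc0 = (inimg + vec2(mod(d0, 8.0), floor(d0 / 8.0))) * 0.125;
    vec2 oc1 = (inimg + vec2(mod(d1, 8.0), floor(d1 / 8.0))) * 0.125;

    // manual linear filtering in the z direction
    return mix(texture2D(noisef, oc0), texture2D(noisef, oc1), fract(z));
}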
But really: why not just use 3D textures? The hardware can do all this for you, with much less overhead...
