Shader ignoring position variable - three.js

I have a plane with the following shaders:
<script type="x-shader/x-vertex" id="vertexshader">
varying vec3 col;
void main()
{
col = vec3( position.z, position.z, 1 );
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
varying vec3 col;
void main()
{
// set color for the current fragment
gl_FragColor = vec4(col, 1);
}
</script>
I'm moving the plane away from the camera like this:
function renderFrame()
{
cube.position.z -= 1;
requestAnimationFrame( renderFrame );
renderer.render(scene, camera);
};
The problem is that I assumed that, since the cube is moving, the position value seen by the shaders would change, but it doesn't. Doesn't the 'position' variable pass the object's current location in 3D space? If not, how can I detect its location and rotation, or do I have to pass this information to the shader manually as a uniform variable?
Working example: http://webgl.demised.net/experiments/003_Shaders.php
Also, do shaders execute once on initialization or on each frame?

The position variable in the vertex shader is a vertex attribute. Vertex attributes are usually read from a vertex buffer, which resides in GPU memory. Because of that, vertex data should be changed as little as possible to avoid the potentially costly data transfer to GPU memory. That's why translations and rotations of objects are not performed by changing vertex buffer data. Instead, the uniform modelViewMatrix is used. For more complex objects, transferring a single matrix is cheaper than transferring the entire model.
The model view matrix is the model matrix (which is basically the translation of the plane) combined with the view matrix (which arranges the camera in the scene). It is not possible to get the plane position from this matrix. However, you could get the distance to the camera along the camera's view direction. This would be the entry in the third row, fourth column. Try to calculate your color based on this value.
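A minimal vertex-shader sketch of that idea for the shaders above (maxDist is a made-up scale factor; note that GLSL matrices are indexed [column][row], so the third-row/fourth-column entry is modelViewMatrix[3][2], and it is negative in front of the camera):
varying vec3 col;
const float maxDist = 100.0; // hypothetical scale, tune for your scene

void main()
{
    // third row, fourth column of the model-view matrix = modelViewMatrix[3][2]
    float distToCamera = -modelViewMatrix[3][2];
    float c = clamp( distToCamera / maxDist, 0.0, 1.0 );
    col = vec3( c, c, 1.0 );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}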
Shaders are executed many times. Every vertex that is rendered issues a call of the vertex shader. Every filled fragment issues a call of the fragment shader. This happens every time the image is rendered (i.e. every frame).

Related

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false color information I'm rendering with a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail will be lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to find out whether (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I don't yet have a solid enough grasp of the transforms and spaces involved to work out the details.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
vUv = uv;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vPosition = (modelMatrix * vec4( position, 1.0 )).xyz;
gl_Position = projectionMatrix * mvPosition;
vec3 transformedNormal = normalMatrix * vec3( normal );
vNormal = normalize( transformedNormal );
vViewDirection = normalize(mvPosition.xyz);
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed perspective in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, texture2D(inputtexture, vUv).gr);
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if(pixel_ratio>2.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if(pixel_ratio>3.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
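For reference, a possible definition of that helper under the "reddest wins" idea mentioned earlier; the real criterion depends on the author's g/r encoding, so treat this as a placeholder:
vec4 most_extreme_value(vec4 a, vec4 b) {
    // "extreme" is defined by the data encoding; here: whichever sample is more red
    return ( a.r > b.r ) ? a : b;
}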
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, because it runs before rasterization and is therefore agnostic of the output resolution. In the fragment shader, however, you can use dFdx, dFdy and fwidth from the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
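A self-contained fragment-shader sketch of that estimate (the #extension line is roughly what three.js injects when you request derivatives; uInputTextureResolution is assumed to be the input texture size in texels, as in the solution above):
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying vec2 vUv;
uniform float uInputTextureResolution; // input texture size in texels

void main() {
    // fwidth(vUv) estimates how much the UV coordinate changes between adjacent screen pixels
    vec2 uvPerPixel = fwidth( vUv );
    // how many texels of the input texture one screen pixel covers
    vec2 texelsPerPixel = uvPerPixel * uInputTextureResolution;
    float pixel_ratio = max( texelsPerPixel.x, texelsPerPixel.y );
    // pixel_ratio > 1.0 means one screen pixel spans several texels (zoomed out),
    // so a wider neighbourhood lookup is justified
    gl_FragColor = vec4( vec3( pixel_ratio ), 1.0 ); // visualized here just for illustration
}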
If you're not updating the texture in realtime a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

Texture lookup inside FBO simulation shader

I'm trying to build an FBO particle system by calculating positions in a separate pass, currently using code from this post: http://barradeau.com/blog/?p=621.
I render a sphere of particles, without any movement.
The only thing I'm adding so far is a texture lookup in the simulation fragment shader:
void main() {
vec3 pos = texture2D( texture, vUv ).xyz;
//THIS LINE, pos is approx in -200..200 range
float map = texture2D(texture1, abs(pos.xy/200.)).r;
...
// save map value in ping-pong texture as alpha
gl_FragColor = vec4( pos, map );
texture1 is half black, half white.
Then in the render vertex shader I read this map parameter:
map = texture2D( positions, position.xy ).a;
and use it in the render fragment shader to pick the color:
vec3 finalColor = mix(vec3(1.,0.,0.),vec3(0.,1.,0.),map);
gl_FragColor = vec4( finalColor, .2 );
So what I hope to see is this (made by setting the same texture in the render shaders):
But what I really see is this (with the texture set in the simulation shaders):
The colors are mixed up. Mostly you can see more red ones where they should be, but there are a lot of green particles in between.
I also tried to make my own demo with a simplified texture and the same idea, and I got this:
Also mixed up, but you can still make out the image.
Same error.
I think I'm missing something obvious, but I've been struggling with this for a couple of days now and haven't been able to find the mistake myself.
I would be very grateful if someone could point me in the right direction. Thank you in advance!
Demo with error: http://cssing.org.ua/examples/fbo-error/
Full code I'm referring to: https://github.com/akella/fbo-test
You should disable texture filtering by using GL_NEAREST min/mag filters.
My guess is that THREE.TextureLoader() loads the texture with mipmaps and that the texture2D call in the vertex shader uses the lowest-resolution mipmap. In vertex shaders you should use texture2DLod(texture, texCoord, 0.0); note the third parameter, lod, which selects mipmap level 0.
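Applied to the render vertex shader from the question, that suggestion would look roughly like this (a sketch; the varying and sampler names follow the question):
uniform sampler2D positions; // ping-pong texture holding the simulated positions
varying float map;

void main() {
    // an explicit lod of 0.0 always samples the top mip level,
    // so the lower-resolution mipmap levels are never used
    map = texture2DLod( positions, position.xy, 0.0 ).a;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}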

Three.js Get local position of vertex in shader, is that even what I need?

I am attempting to implement this technique of rendering grass into my three.js app.
http://davideprati.com/demo/grass/
On level terrain at y position 0, everything looks absolutely fantastic!
Problem is, my app (game) has the terrain modified by a heightmap so very few (if any) positions on that terrain are at y position 0.
It seems this animation code assumes the grass object is sitting at y position 0 for the following vertex shader snippet to work as intended:
if (pos.y > 1.0) {
float noised = noise(pos.xy);
pos.y += sin(globalTime * magnitude * noised);
pos.z += sin(globalTime * magnitude * noised);
if (pos.y > 1.7){
pos.x += sin(globalTime * noised);
}
}
This condition works on the assumption that the terrain is flat and at position 0, so that only vertices above the ground animate. Well, since with a heightmap (almost) all vertices are above 1, some strange effects occur, such as grass sliding all over the place.
Is there a way to do this where I can specify a y position threshold based more on the sprite than on its world position? Or is there a better way altogether to deal with this "slidy" problem?
I am an extreme noobie when it comes to shader code =]
Any help would be greatly appreciated.
I have no idea what I'm doing.
Edit: OK, I think the issue is that I am altering the y position of each mesh merged into the main grass container geometry based on the y position of the terrain it sits on. I guess the shader is looking at the local position, but since the geometry itself is vertically displaced, the shader doesn't know how to compensate. Hmm…
Ok, I made a fiddle that demonstrates the issue:
https://jsfiddle.net/titansoftime/a3xr8yp7/
Change the value on line# 128 to a 1 instead of 2 and everything looks fine. Not sure how to go about fixing this.
Also, I have no idea why the colors are doing that, they look fine in my app.
If I understood the question correctly:
You are right in asking for the "local" position. Let's say a single strand of grass is a narrow strip with some height segments.
If you want this to be modular and easy to scale, it would most likely extend in some direction in the 0-1 range. Say it has four segments along that direction, which would yield vertices with coordinates [0.0, 0.333, 0.666, 1.0]. That makes slightly more sense than an arbitrary range, because it's easy to reason that 0 is the ground and 1 is the tip of the blade.
This is the "local" or model space. When you multiply this with the modelMatrix you transform it to world space (call it localToWorld).
In the shader it could look something like this
void main(){
vec4 localPosition = vec4( position, 1.);
vec4 worldPosition = modelMatrix * localPosition;
vec4 viewPosition = viewMatrix * worldPosition;
vec4 projectedPosition = projectionMatrix * viewPosition; //either orthographic or perspective
gl_Position = projectedPosition;
}
This is the classic "you have a scene graph node" which you transform. Depending on what you set for your mesh position, rotation and scale, vec4 worldPosition will be different, but the local position is always the same. You can't tell from that value alone whether something is the bottom or the top; any value is viable, since your terrain can be anything.
With this approach, you can write a shader and logic saying that if a vertex is at a height of 0 (or less than some epsilon), don't animate it.
So this brings us to some logic that works in some assumed space (you have a rule for 1.0 and 1.7).
Because you are translating the geometries and merging them, you no longer have this user-friendly model space. These blades may very well skip the local-to-world transformation (it may very well end up being just an identity matrix).
This obviously messes up your logic for selecting the vertices.
If you have to take the approach of distributing them like this, then you need another channel to carry the meaning of that local space, even if you only use it for the animation.
Two suitable channels already exist: UV and vertex color. UVs you can imagine as another flat mesh, in another space, that maps onto the mesh you are rendering. But in this particular case it seems like you could use a custom attribute, say aBladeHeight, holding a single float.
void main(){
vec4 worldPosition = vec4(position, 1.); //you "burnt/baked" this transformation in, so no need to go from local to world in the shader
vec2 localPosition = uv; //grass in 2d, not transformed to your terrain
//this check knows what's at the bottom of the grass
//rather than what's on the ground (it has no idea where the ground is)
if(localPosition.y > 0.0){ // a bare float is not a valid GLSL condition, so compare against the root
//since local does not exist, the only space we work in is world
//we apply the transformation in that space, but the filter
//is the check above, in uv space, where we know what's the bottom and what's the top
worldPosition.xy += myLogic();
}
gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
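With the custom attribute suggested above instead of UVs, the same check would look roughly like this (aBladeHeight being the hypothetical per-vertex float, 0 at the root and 1 at the tip):
attribute float aBladeHeight; // hypothetical custom attribute: 0 = root, 1 = tip of the blade

void main(){
    vec4 worldPosition = vec4( position, 1. ); // transform already baked into the merged geometry
    if( aBladeHeight > 0.0 ){
        worldPosition.xy += myLogic(); // animate everything above the root
    }
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}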
To mimic the "local space"
void main(){
vec4 localSpace = vec4(uv,0.,1.);
gl_Position = projectionMatrix * modelViewMatrix * localSpace;
}
And all the blades would render overlapping each other.
EDIT
With instancing the shader would look something like this:
attribute vec4 aInstanceMatrix0; //16 floats to encode a matrix4
attribute vec4 aInstanceMatrix1;
attribute vec4 aInstanceMatrix2;
//attribute vec4 aInstanceMatrix3; //but one you know will be 0,0,0,1 so you can pack in the first 3
void main(){
vec4 localPos = vec4(position, 1.); //the local position is intact, its the normalized 0-1 blade
//do your thing in local space
if(localPos.y > foo){
localPos.xz += myLogic();
}
//notice the difference, instead of using the modelMatrix, you use the instance attributes in its place
mat4 localToWorld = mat4(
aInstanceMatrix0,
aInstanceMatrix1,
aInstanceMatrix2,
//aInstanceMatrix3
0. , 0. , 0. , 1. //this is actually wrong i think, it should be the last column not row, but for illustrative purposes,
);
//to pack it more effeciently the rows would look like this
// xyz w
// xyz w
// xyz w
// 000 1
// off the top of my head i dont know what the correct code is
mat4 foo = mat4(
aInstanceMatrix0.xyz, 0.,
aInstanceMatrix1.xyz, 0.,
aInstanceMatrix2.xyz, 0.,
aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1.
);
//you can still use the modelMatrix with this if you want to move the ENTIRE hill with all the grass with .position.set()
vec4 worldPos = localToWorld * localPos;
gl_Position = projectionMatrix * viewMatrix * worldPos;
}
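For reference, a version of that packing that should reconstruct the matrix correctly, assuming each aInstanceMatrixN attribute holds one row of the model matrix with its translation component in .w (GLSL mat4 constructors take column vectors, hence the transposition):
mat4 localToWorld = mat4(
    vec4( aInstanceMatrix0.x, aInstanceMatrix1.x, aInstanceMatrix2.x, 0. ), // column 0
    vec4( aInstanceMatrix0.y, aInstanceMatrix1.y, aInstanceMatrix2.y, 0. ), // column 1
    vec4( aInstanceMatrix0.z, aInstanceMatrix1.z, aInstanceMatrix2.z, 0. ), // column 2
    vec4( aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1. )  // column 3: translation
);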

How can I iterate with a loop over a sampler2D?

I have some data encoded in a 2k by 2k floating point texture. The data are longitude, latitude, time, and date as R, G, B, A. Those are all normalized, but for now that is not a problem; I can de-normalize them later if I want to.
What I need now is to iterate through the whole texture and find what longitude and latitude should be at that fragment coordinate. I assume that the whole atlas has normalized coordinates and maps onto the whole OpenGL context. Besides coordinates I will also filter the data by time and date, but that is an easy if condition. Because the pixel coordinates I have will not map exactly onto that coordinate, I will use a small delta value to work around that for now, and I will use that delta value to precompute other points that are close to that coordinate.
Right now I get driver crashes on the iGPU (out of memory or something similar) as soon as I add anything inside the two nested for loops, even just a discard.
The code I have now is this.
NOTE: f_time is the filter for the time; for now I have a slider so that I can interact with the values.
precision mediump float;
precision mediump int;
const int maxTextureSize = 2048;
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform float f_time;
uniform ivec2 textureDimensions;
void main(void) {
float delta = 0.001;// now bigger delta just to make it work then we tune it
// compute 1 pixel in texture coordinates.
vec2 onePixel = vec2(1.0, 1.0) / float(textureDimensions.x);
vec2 position = ( gl_FragCoord.xy / float(textureDimensions.x) );
vec4 color = texture2D(u_texture, v_texCoord);
vec4 outColor = vec4(0.0);
float dist_x = distance( color.r, gl_FragCoord.x);
float dist_y = distance( color.g, gl_FragCoord.y);
//float dist_x = distance( color.g, gl_PointCoord.s);
//float dist_y = distance( color.b, gl_PointCoord.t);
for(int i = 0; i < maxTextureSize; i++){
if(i >= textureDimensions.x ){
break;
}
for(int j = 0; j < maxTextureSize ; j++){
if(j >= textureDimensions.y ){
break;
}
// Where I am stuck now: how to get the texture coordinate and test it in the fragment shader
// the precomputation
vec4 pixel = texture2D(u_texture, vec2(float(i), float(j)) / vec2(textureDimensions)); // normalize the texel index to 0..1
if(pixel.r > f_time){
outColor = vec4(1.0, 1.0, 1.0, 1.0);
// for now just break, no delta calculation to sum this point with others so that
// we will have an approximation of other points into that pixel
break;
}
}
}
// this works
if(color.t > f_time){
//gl_FragColor = color;//;vec4(1.0, 1.0, 1.0, 1.0);
}
gl_FragColor = outColor;
}
What you are trying to do is simply not feasible.
You are trying to access a texture up to four million times, all within a single fragment shader invocation.
The way modern GPUs usually detect infinite loop conditions is by seeing how long your shader runs, and then killing it if it has run for "too long", the length of which is usually sufficiently generous. Your code, which does up to 4 million texture accesses, will almost certainly trigger this condition.
Which typically leads to a GPU reset.
Generally speaking, the way you would find the position in a texture which is associated with some fragment is to do so directly. That is, create a 1:1 correspondence between screen fragment locations (gl_FragCoord) and texels in the texture. That way, your texture does not need to contain X/Y coordinates, and each fragment shader can access the data meant for that specific invocation.
What you're trying to do seems to be to pass a large table (four million elements) to the GPU, and then have the GPU process it. The ordering of values is (generally) irrelevant; any value could potentially modify any pixel. Some pixels don't have values applied to them, while others may have multiple values applied.
This is serial programmer thinking, not parallel thinking. The way you'd code that on the CPU is to walk each element in the table, look at where it goes, and build the results for each pixel.
In a parallel algorithm, you don't work that way. Each invocation needs to be able to instantly find the data in the table that applies to it. You should never be doing some kind of search through a table for your data. Especially not a linear search.
You need to think of this from the perspective of your fragment shader.
In your data table, for each position on the screen, there is a list of data values that apply to that screen position. Correct? What you need to do is make that list directly available to each fragment shader invocation. And since each fragment's list is not constant in size, you will need to use a linked list rather than a fixed-size array.
To do this, you build a texture the size of your render target. Each texel in the texture specifies the location in the data table of the first element that this fragment needs to process. This provides every fragment shader invocation with the location of its first element. Since some fragment shaders may have no data applied to them, you need to set aside some special texture coordinate value to represent "none".
The data in the data table consists of your time and date, but rather than "longitude/latitude", it has the texture coordinate of the next texel in the texture that applies for that fragment shader. This is how you make a linked list in shaders. Each location in the data table specifies the next location to be processed.
If that location was the last data to be processed, then the location will be the "none" value from before.
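A rough fragment-shader sketch of that traversal; every name, the texel layout and the negative-coordinate "none" sentinel here are assumptions, and WebGL needs a fixed upper bound on the loop:
precision mediump float;

uniform sampler2D u_headPointers; // render-target sized; texel = coord of the first list entry
uniform sampler2D u_dataTable;    // texel = (time, date, nextCoord.s, nextCoord.t)
uniform vec2 u_resolution;
uniform float f_time;
const int MAX_LIST_LENGTH = 64;   // assumed worst-case list length

void main(void) {
    vec4 outColor = vec4(0.0);
    // coordinate of this fragment's first entry; a negative value means "none"
    vec2 node = texture2D( u_headPointers, gl_FragCoord.xy / u_resolution ).st;
    for (int i = 0; i < MAX_LIST_LENGTH; i++) {
        if (node.s < 0.0) break;                 // end of this fragment's list
        vec4 entry = texture2D( u_dataTable, node );
        if (entry.r > f_time)                    // entry.r = time, entry.g = date
            outColor = vec4(1.0);
        node = entry.ba;                         // follow the link to the next entry
    }
    gl_FragColor = outColor;
}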
You should also be using a buffer texture or an SSBO to hold your data table, rather than a 2D texture. It would make things much easier.

OpenGL - trouble passing ALL data into shader at once

I'm trying to display textures on quads (2 triangles) using OpenGL 3.3.
Drawing a texture on a single quad works great; however, when I have ONE texture (a sprite atlas) but use 2 quads (objects) to display different parts of the atlas, then in the draw loop they end up switching back and forth (one disappears and then appears again, etc.) at their individual translated locations.
The way I'm drawing this is not the standard DrawElements call per quad (or object); instead I package all quads, UVs, translations, etc. and send them up to the shader as one big chunk (as "in" variables). Vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
in vec3 vertexPosition_modelspace;
in vec3 vertexColor;
in vec2 vertexUV;
in vec3 translation;
in vec4 rotation;
in vec3 scale;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
// Output data ; will be interpolated for each fragment.
out vec3 fragmentColor;
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
...
void main(){
mat4 Model = mat4(1.0);
mat4 t = translationMatrix(translation);
mat4 s = scaleMatrix(scale);
mat4 r = rotationMatrix(vec3(rotation), rotation[3]);
Model *= t * r * s;
gl_Position = MVP * Model * vec4 (vertexPosition_modelspace,1); //* MVP;
// The color of each vertex will be interpolated
// to produce the color of each fragment
fragmentColor = vertexColor;
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
Is the vertex shader working as I think it would with a large chunk of data, i.e. drawing each packaged segment individually? It does not seem like it. Is my train of thought correct on this?
For completeness this is my fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec3 fragmentColor;
// Interpolated values from the vertex shaders
in vec2 UV;
// Ouput data
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
void main()
{
// Output color = color of the texture at the specified UV
color = texture2D( myTextureSampler, UV ).rgba;
}
A request for more information was made, so I will show how I bind this data to the vertex shader. The following code is just the part I use for my translations; I have more for color, rotation, scale, UV, etc.:
gl.BindBuffer(gl.ARRAY_BUFFER, tvbo)
gl.BufferData(gl.ARRAY_BUFFER, len(data.Translations)*4, gl.Ptr(data.Translations), gl.DYNAMIC_DRAW)
tAttrib := uint32(gl.GetAttribLocation(program, gl.Str("translation\x00")))
gl.EnableVertexAttribArray(tAttrib)
gl.VertexAttribPointer(tAttrib, 3, gl.FLOAT, false, 0, nil)
...
gl.DrawElements(gl.TRIANGLES, int32(len(elements)), gl.UNSIGNED_INT, nil)
You have just a single sampler2D, which means you have just a single texture at your disposal, regardless of how many of them you bind.
If you really need to pass the data as a single block, then you should add a sampler for each texture you have. I'm not sure how many objects/textures you have, but you are limited by the hardware's texture-unit limit with this way of passing data. You also need to add another value to your data telling which primitive uses which texture unit, and inside the fragment shader select the right texture sampler accordingly.
You should add stuff like this:
// vertex
in int usedtexture;
flat out int txr; // integer varyings must be flat-qualified
void main()
{
txr=usedtexture;
}
// fragment
uniform sampler2D myTextureSampler0;
uniform sampler2D myTextureSampler1;
uniform sampler2D myTextureSampler2;
uniform sampler2D myTextureSampler3;
in vec2 UV;
flat in int txr;
out vec4 color;
void main()
{
if (txr==0) color = texture2D( myTextureSampler0, UV ).rgba;
else if (txr==1) color = texture2D( myTextureSampler1, UV ).rgba;
else if (txr==2) color = texture2D( myTextureSampler2, UV ).rgba;
else if (txr==3) color = texture2D( myTextureSampler3, UV ).rgba;
else color=vec4(0.0,0.0,0.0,0.0);
}
This way of passing is not good, for these reasons:
- The number of used textures is limited by the hardware texture-unit limit.
- If your rendering needs additional textures like normal/shininess/light maps, then you need more than 1 texture per object type, and your limit is suddenly divided by 2, 3, 4...
- You need if/switch statements inside the fragment shader, which can slow things down considerably. Yes, you can do it branchless, but then you would need to access all the textures all the time, increasing heat stress on the GPU for no reason.
This kind of passing is suitable when all the textures are inside a single image (the texture atlas you mentioned), which can be faster this way and is reasonable for scenes with a small number of object types (or materials) but a large object count.
Since I needed more input on this matter, I posted a link to this page on reddit, and someone was able to help me with a single response! The reddit thread is here:
https://www.reddit.com/r/opengl/comments/3gyvlt/opengl_passing_all_scene_data_into_shader_each/
The issue of the two textures/quads switching back and forth after passing all vertices as one data structure to the vertex shader was that my element indices were off. I needed to determine the correct index of each set of vertices for my 2-triangle (quad) objects. I simply had to do something like this:
vertexInfo.Elements = append(vertexInfo.Elements, uint32(idx*4), uint32(idx*4+1), uint32(idx*4+2), uint32(idx*4), uint32(idx*4+2), uint32(idx*4+3))

Resources