Texture lookup inside FBO simulation shader - three.js

I'm trying to make an FBO particle system by calculating positions in a separate pass. I'm currently using code from this post: http://barradeau.com/blog/?p=621.
I render a sphere of particles, without any movement.
The only thing I'm adding so far is a texture lookup in the simulation fragment shader:
void main() {
    vec3 pos = texture2D( texture, vUv ).xyz;
    // THIS LINE, pos is approx in -200..200 range
    float map = texture2D( texture1, abs( pos.xy / 200. ) ).r;
    ...
    // save map value in ping-pong texture as alpha
    gl_FragColor = vec4( pos, map );
}
texture1 is a simple image: half black, half white.
Then in the render vertex shader I read this map parameter:
map = texture2D( positions, position.xy ).a;
and use it in the render fragment shader to set the color:
vec3 finalColor = mix(vec3(1.,0.,0.),vec3(0.,1.,0.),map);
gl_FragColor = vec4( finalColor, .2 );
So what I hope to see is this (made by setting the same texture in the render shaders):
But what I really see is this (with the texture set in the simulation shaders):
The colors are mixed up: mostly you can see more red particles where they should be, but there are a lot of green particles in between.
I also tried to make my own demo with a simplified texture and the same idea, and I got this:
Also mixed up, but you can still make out the image.
Same error.
I think I am missing something obvious, but I have been struggling with this for a couple of days now and haven't been able to find the mistake myself.
I would be very grateful if someone could point me in the right direction. Thank you in advance!
Demo with error: http://cssing.org.ua/examples/fbo-error/
Full code I'm referring to: https://github.com/akella/fbo-test

You should disable texture filtering by using GL_NEAREST min/mag filters.
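In three.js that corresponds to THREE.NearestFilter. A minimal sketch of what that could look like, assuming a DataTexture for the positions and a loaded lookup texture (variable names and the file path are placeholders, not the demo's actual code):

// positions DataTexture used by the simulation (ping-pong) passes
var positionsTexture = new THREE.DataTexture( data, size, size, THREE.RGBAFormat, THREE.FloatType );
positionsTexture.minFilter = THREE.NearestFilter;
positionsTexture.magFilter = THREE.NearestFilter;
positionsTexture.needsUpdate = true;

// the black/white lookup texture sampled in the simulation shader
var mapTexture = new THREE.TextureLoader().load( 'img/map.png' );
mapTexture.minFilter = THREE.NearestFilter; // NearestFilter also means no mipmaps are used
mapTexture.magFilter = THREE.NearestFilter;
mapTexture.generateMipmaps = false;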

My guess is that THREE.TextureLoader() loads the texture with mipmaps, and the texture2D call in the vertex shader uses the lowest-resolution mipmap. In vertex shaders you should use texture2DLod(texture, texCoord, 0.0); note the third parameter, lod, which selects mipmap level 0.
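A sketch of that idea in the render vertex shader, assuming the question's layout where the geometry's position attribute holds the lookup UV into the positions texture (point size is a placeholder):

uniform sampler2D positions;
varying float map;

void main() {
    // texture2DLod forces mip level 0, so the vertex fetch cannot land on a low-resolution mipmap
    vec4 posAndMap = texture2DLod( positions, position.xy, 0.0 );
    map = posAndMap.a;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( posAndMap.xyz, 1.0 );
    gl_PointSize = 2.0; // placeholder point size
}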

Related

How to draw the data (from glReadPixels) onto the default (display) framebuffer in OpenGL ES 2.0

Sorry if I am asking about something that is already covered; so far I could not trace it. I have read details about FBOs and have a fair idea about off-screen buffering. http://mattfife.com/?p=2813 is a nice little article on FBOs. But in all the examples, including that one, I don't see how to write the data retrieved through a glReadPixels call onto the default display framebuffer. Sorry if I am missing something silly; I did my due diligence but could not find any example.
Note: I am using OpenGL ES 2.0, so I cannot use calls such as glDrawPixels.
Basically my requirement is off-screen buffering, because I am working on subtitles/captions, where scrolling the caption means re-rendering lines until they move out of the caption display area.
I got a suggestion to use an FBO and bind the resulting texture to the main default framebuffer.
My actual need is captions/subtitles (which can be in scrolling mode).
Suppose the first time I had this on display:
This is Line Number - 1
This is Line Number - 2
This is Line Number - 3
After scrolling, I want to have:
This is Line Number - 2
This is Line Number - 3
This is Line Number - 4
The second time I render, will I have to update the content in the offscreen FBO? That would mean re-writing line 2 and line 3 at new positions, removing line 1 and adding line 4.
Create a framebuffer with a texture attachment (see Attaching Images). Note that glFramebufferTexture2D is supported by OpenGL ES 2.0.
The color plane of the framebuffer can still be read back to the CPU with glReadPixels, the same way as when you use a Renderbuffer, but the rendering is stored in a 2D texture.
To get it on screen, bind the default framebuffer, bind the texture, and render a screen-space quad with the texture on it.
Render a quad (GL_TRIANGLE_FAN) with the vertex coordinates (-1, -1), (1, -1), (1, 1), (-1, 1) and use the following simple OpenGL ES 2.0 shaders:
Vertex shader
attribute vec2 pos;
varying vec2 uv;

void main()
{
    uv = pos * 0.5 + 0.5;
    gl_Position = vec4(pos, 0.0, 1.0);
}
Fragment shader
precision mediump float;

varying vec2 uv;
uniform sampler2D u_texture;

void main()
{
    gl_FragColor = texture2D(u_texture, uv);
}

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I render through a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail will be lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to find out whether (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid enough grasp of the transforms and spaces involved to work it out.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;

void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vPosition = ( modelMatrix * vec4( position, 1.0 ) ).xyz;
    gl_Position = projectionMatrix * mvPosition;
    vec3 transformedNormal = normalMatrix * vec3( normal );
    vNormal = normalize( transformedNormal );
    vViewDirection = normalize( mvPosition.xyz );
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed the perspective camera in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture directly; that is, I'm doing something like this:
vec4 inputColor = texture2D( inputtexture, vUv );
gl_FragColor = texture2D( mappingtexture, vec2( inputColor.g, inputColor.r ) );
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in real time. I think it's actually a very elegant solution for my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // difference in U between this pixel and the one to its right
float dUdy = dFdy(vUv.x); // difference in U between this pixel and the one above it
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = dU * uInputTextureResolution;
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if (pixel_ratio > 2.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x + 1.0, y));
}
if (pixel_ratio > 3.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x - 1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, as it runs before rasterization and is therefore agnostic of the output resolution. But in the fragment shader you can use dFdx, dFdy and fwidth from the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
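A minimal fragment-shader sketch of that footprint estimate, reusing the vUv varying and uInputTextureResolution uniform from the question (in three.js the extension is enabled via extensions: { derivatives: true }, as in the solution above):

#extension GL_OES_standard_derivatives : enable
precision highp float;

varying vec2 vUv;
uniform float uInputTextureResolution;

void main() {
    // how many input texels one screen pixel spans in U and V
    vec2 texelsPerPixel = fwidth( vUv ) * uInputTextureResolution;
    float pixel_ratio = max( texelsPerPixel.x, texelsPerPixel.y );

    // pixel_ratio > 1.0 means the texture is minified and neighbouring texels
    // should be scanned for the most extreme value; visualize it here for debugging
    gl_FragColor = vec4( vec3( min( pixel_ratio / 4.0, 1.0 ) ), 1.0 );
}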
If you're not updating the texture in real time, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

Three.js Get local position of vertex in shader, is that even what I need?

I am attempting to implement this technique of rendering grass into my three.js app.
http://davideprati.com/demo/grass/
On level terrain at y position 0, everything looks absolutely fantastic!
Problem is, my app (game) has the terrain modified by a heightmap so very few (if any) positions on that terrain are at y position 0.
It seems this vertex shader animation assumes the grass object is sitting at y position 0 for the following code to work as intended:
if ( pos.y > 1.0 ) {
    float noised = noise( pos.xy );
    pos.y += sin( globalTime * magnitude * noised );
    pos.z += sin( globalTime * magnitude * noised );
    if ( pos.y > 1.7 ) {
        pos.x += sin( globalTime * noised );
    }
}
This condition works on the assumption that the terrain is flat and at position 0, so that only vertices above the ground animate. Since with a heightmap (almost) all vertices are above 1, some strange effects occur, such as grass sliding all over the place.
Is there a way to do this where I can specify a y position threshold based more on the sprite itself than on its world position? Or is there a better way altogether to deal with this "slidy" problem?
I am an extreme newbie when it comes to shader code. =]
Any help would be greatly appreciated.
I have no idea what I'm doing.
Edit: OK, I think the issue is that I am altering the y position of each mesh merged into the main grass container geometry based on the y position of the terrain it sits on. I guess the shader is looking at the local position, but since the geometry itself is vertically displaced, the shader doesn't know how to compensate.
Ok, I made a fiddle that demonstrates the issue:
https://jsfiddle.net/titansoftime/a3xr8yp7/
Change the value on line 128 to 1 instead of 2 and everything looks fine. I'm not sure how to go about fixing this.
Also, I have no idea why the colors are doing that, they look fine in my app.
If I understood the question correctly:
You are right in asking for the "local" position. Let's say a single strand of grass is a narrow strip with some height segments.
If you want this to be modular, easy to scale and such, it would most likely extend in some direction over the 0-1 range. Let's say it has three segments along that direction, which would yield vertices with coordinates [0.0, 0.333, 0.666, 1.0]. This makes slightly more sense than an arbitrary range, because it's easy to reason that 0 is the ground and 1 is the tip of the blade.
This is the "local" or model space. When you multiply this with the modelMatrix you transform it to world space (call it localToWorld).
In the shader it could look something like this
void main() {
    vec4 localPosition = vec4( position, 1. );
    vec4 worldPosition = modelMatrix * localPosition;
    vec4 viewPosition = viewMatrix * worldPosition;
    vec4 projectedPosition = projectionMatrix * viewPosition; // either orthographic or perspective
    gl_Position = projectedPosition;
}
This is the classic "you have a scene graph node" that you transform. Depending on what you set for your mesh's position, rotation and scale, worldPosition will be different, but the local position is always the same. You can't tell from the world-space value alone whether something is the bottom or the top; any value is viable, since your terrain can be anything.
With this approach, you can write shader logic saying that if a vertex is at a height of 0 (or less than some epsilon), don't animate it.
So this brings us to the logic, which works in some assumed space (you have rules for 1.0 and 1.7).
Because you are translating the geometries and merging them, you no longer have this user-friendly model space. These blades may very well skip the local-to-world transformation (it may well end up being just an identity matrix).
This obviously messes up your logic for selecting the vertices.
If you have to take the approach of distributing them as such, then you need another channel to carry the meaning of that local space, even if you only use it for that animation.
Two suitable channels already exist: UVs and vertex colors. UVs you can imagine as another flat mesh, in another space, that maps onto the mesh you are rendering. But in this particular case it seems like you could use a custom attribute, say aBladeHeight, that is a single float.
void main() {
    vec4 worldPosition = vec4( position, 1. ); // you "burnt/baked" this transformation in, so no need to go from local to world in the shader
    vec2 localPosition = uv; // grass in 2D, not transformed to your terrain

    // this check knows what's at the bottom of the grass,
    // rather than what's on the ground (it has no idea where the ground is)
    if ( localPosition.y > 0. ) {
        // since local space does not exist, the only space we work in is world space;
        // we apply the transformation in that space, but the filter
        // is the check above, in UV space, where we know what's the bottom and what's the top
        worldPosition.xy += myLogic();
    }
    gl_Position = projectionMatrix * viewMatrix * worldPosition;
}
To mimic the "local space"
void main() {
    vec4 localSpace = vec4( uv, 0., 1. );
    gl_Position = projectionMatrix * modelViewMatrix * localSpace;
}
And all the blades would render overlapping each other.
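On the JavaScript side, one way to carry the aBladeHeight channel suggested above is to bake it into the blade geometry before it is placed on the terrain and merged. A rough sketch (attribute and variable names are illustrative, not the fiddle's actual code; older three.js versions use addAttribute instead of setAttribute):

// one blade as a thin strip, 0..1 along its height
var blade = new THREE.PlaneBufferGeometry( 0.1, 1, 1, 4 );
blade.translate( 0, 0.5, 0 ); // bottom edge at y = 0, tip at y = 1

// record each vertex's height along the blade BEFORE any terrain displacement
var pos = blade.attributes.position;
var heights = new Float32Array( pos.count );
for ( var i = 0; i < pos.count; i++ ) heights[ i ] = pos.getY( i ); // 0 = root, 1 = tip
blade.setAttribute( 'aBladeHeight', new THREE.BufferAttribute( heights, 1 ) );

// the blade can now be cloned, moved onto the terrain and merged;
// aBladeHeight survives the merge and still tells the shader root from tip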
EDIT
With instancing the shader would look something like this:
attribute vec4 aInstanceMatrix0; // 16 floats to encode a matrix4
attribute vec4 aInstanceMatrix1;
attribute vec4 aInstanceMatrix2;
// attribute vec4 aInstanceMatrix3; // but the last row you know will be 0,0,0,1, so you can pack it into the first 3

void main() {
    vec4 localPos = vec4( position, 1. ); // the local position is intact, it's the normalized 0-1 blade

    // do your thing in local space
    if ( localPos.y > foo ) {
        localPos.xz += myLogic();
    }

    // notice the difference: instead of using the modelMatrix, you use the instance attributes in its place
    mat4 localToWorld = mat4(
        aInstanceMatrix0,
        aInstanceMatrix1,
        aInstanceMatrix2,
        // aInstanceMatrix3
        0., 0., 0., 1. // this is actually wrong, I think; it should be the last column, not the last row, but it is for illustrative purposes
    );

    // to pack it more efficiently the rows would look like this:
    //   xyz w
    //   xyz w
    //   xyz w
    //   000 1
    // off the top of my head I don't know what the correct code is
    mat4 packedLocalToWorld = mat4(
        aInstanceMatrix0.xyz, 0.,
        aInstanceMatrix1.xyz, 0.,
        aInstanceMatrix2.xyz, 0.,
        aInstanceMatrix0.w, aInstanceMatrix1.w, aInstanceMatrix2.w, 1.
    );

    // you can still use the modelMatrix with this if you want to move the ENTIRE hill with all the grass with .position.set()
    vec4 worldPos = localToWorld * localPos;
    gl_Position = projectionMatrix * viewMatrix * worldPos;
}
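A hypothetical three.js-side counterpart for this instanced variant (attribute names follow the answer; how the four floats per attribute are packed must agree with however the vertex shader reassembles localToWorld, which the answer deliberately leaves open):

// reuse the single-blade geometry from the earlier sketch
var instanced = new THREE.InstancedBufferGeometry().copy( blade );

var COUNT = 10000;
var row0 = new Float32Array( COUNT * 4 );
var row1 = new Float32Array( COUNT * 4 );
var row2 = new Float32Array( COUNT * 4 );
// ... fill row0 / row1 / row2 with each blade's placement on the terrain ...

instanced.setAttribute( 'aInstanceMatrix0', new THREE.InstancedBufferAttribute( row0, 4 ) );
instanced.setAttribute( 'aInstanceMatrix1', new THREE.InstancedBufferAttribute( row1, 4 ) );
instanced.setAttribute( 'aInstanceMatrix2', new THREE.InstancedBufferAttribute( row2, 4 ) );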

Three.js, custom shader and png texture with transparency

I have an extremely simple PNG texture: a grey circle with a transparent background.
I use it as a uniform map for a THREE.ShaderMaterial:
var uniforms = THREE.UniformsUtils.merge( [basicShader.uniforms] );
uniforms['map'].value = THREE.ImageUtils.loadTexture( "img/particle.png" );
uniforms['size'].value = 100;
uniforms['opacity'].value = 0.5;
uniforms['psColor'].value = new THREE.Color( 0xffffff );
Here is my fragment shader (just part of it):
gl_FragColor = vec4( psColor, vOpacity );
gl_FragColor = gl_FragColor * texture2D( map,vec2( gl_PointCoord.x, 1.0 - gl_PointCoord.y ) );
gl_FragColor = gl_FragColor * vec4( vColor, 1.0 );
I applied the material to some particles (THREE.PointCloud mesh) and it works quite well:
But if I turn the camera by more than 180 degrees I see this:
I understand that the fragment shader is not correctly taking into account the alpha value of the PNG texture.
What is the best approach in this case, to get the right color and opacity (from custom attributes) and still get the alpha right from the PNG?
And why is it behaving correctly on one side?
Transparent objects must be rendered from back to front -- from furthest to closest. This is because of the depth buffer.
But PointCloud particles are not sorted based on distance from the camera. That would be too inefficient. The particles are always rendered in the same order, regardless of the camera position.
You have several work-arounds.
The first is to discard fragments for which the alpha is low. You can use a pattern like so:
if ( textureColor.a < 0.5 ) discard;
Another option is to set material.depthTest = false or material.depthWrite = false. You might not like the side effects, however, if you have other objects in the scene.
three.js r.71
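The two work-arounds could be wired up on the material like this (a minimal sketch reusing the question's uniforms; the shader sources are assumed to be defined elsewhere):

var material = new THREE.ShaderMaterial( {
    uniforms: uniforms,
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
    transparent: true,
    depthWrite: false // or keep depth writes and add `if ( gl_FragColor.a < 0.5 ) discard;` in the fragment shader
} );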

How do I create a proper bevel effect fragment shader in Open GL ES 2.0?

I'm new to writing fragment shaders in GLSL for OpenGL ES 2.0 and I'm trying to create a fragment shader that produces a bevel effect for a given graphic. Here's what I've been able to do so far
(ignore the lower wall and other texturing, only look at the top part which is where the bevel effect is applied):
Here's what the desired result should be:
Notice the difference in shading at the diagonals: they are more lightly shaded than horizontal edges. Notice the transition from diagonal edges to horizontals or verticals. Also notice the thickness of the bevel. I'd like to get as close to this desired result as possible.
Right now the fragment shader I'm using is fairly simple, here's the code:
#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord;
uniform sampler2D s_texture;
uniform float u_time;

void main()
{
    vec2 onePixel = vec2(0, 1.0 / 640.0);
    vec2 texCoord = v_texCoord;

    vec4 color;
    color.rgb = vec3(0.5);
    color += texture2D(s_texture, texCoord - onePixel) * 5.0;
    color -= texture2D(s_texture, texCoord + onePixel) * 5.0;
    color.rgb = vec3((color.r + color.g + color.b) / 3.0);

    gl_FragColor = vec4(color.rgb, 1);
}
What would I need to add to my shader to create the desired effect?
I think the example you have shown was not done entirely with fragment shader code. It was likely done by beveling the geometry, which could be done by a geometry shader, except that those do not exist in ES; so I would either use an authoring tool like Blender to do the beveling on your model, or maybe use a texture for a bump-mapping technique.
The optimal way to get a bevel effect is to modify the mesh in Blender or another editor.
If you do want to achieve this with a shader, it may be possible by using a bump map that is prepared specifically to hide the edge.
There may be some multi-pass and render-buffer solutions, but I don't know much about those. You can find edges from the depth buffer, but that's not the best approach in terms of performance.
I recently found a way to get a bevel effect without special textures or changing the geometry (that is why I'm answering this question). But it does require modifications to the vertex data: you need to add extra normal vectors to each vertex. So you have to convert the mesh to work specifically with that shader: article
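As a rough illustration of the bump-map idea mentioned above (not the linked article's method), the fragment shader could derive a fake normal from an extra height texture and light it from one corner. Here s_bump is an assumed additional texture that encodes a beveled height profile along the edges, and all constants are placeholders:

#ifdef GL_ES
precision mediump float;
#endif

varying vec2 v_texCoord;
uniform sampler2D s_texture; // base graphic
uniform sampler2D s_bump;    // assumed bevel height map

void main()
{
    vec2 onePixel = vec2(1.0 / 640.0, 1.0 / 640.0);

    // fake surface normal from the bump map's horizontal / vertical gradients
    float hL = texture2D(s_bump, v_texCoord - vec2(onePixel.x, 0.0)).r;
    float hR = texture2D(s_bump, v_texCoord + vec2(onePixel.x, 0.0)).r;
    float hD = texture2D(s_bump, v_texCoord - vec2(0.0, onePixel.y)).r;
    float hU = texture2D(s_bump, v_texCoord + vec2(0.0, onePixel.y)).r;
    vec3 normal = normalize(vec3(hL - hR, hD - hU, 0.5));

    // light from the upper left: bright top/left bevels, dark bottom/right ones
    vec3 lightDir = normalize(vec3(-0.5, 0.7, 0.5));
    float shade = clamp(dot(normal, lightDir), 0.0, 1.0);

    vec4 base = texture2D(s_texture, v_texCoord);
    gl_FragColor = vec4(base.rgb * (0.6 + 0.8 * shade), base.a);
}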
