How to blend/blur two images in OpenGL ES / OpenGL so that the output looks like the Windows Aero theme

I am in the process of implementing a Windows Aero-style theme on my ARM Mali GPU.
I want an effect like the one in this image:
http://vistastyles.org/wp-content/uploads/2007/06/windows_aero_by_blingboy31.jpg
My question is: how do I identify which area of the top image is intermixed with the bottom image?
It is possible that multiple images overlap, so do I need to take a weight for each image or layer to determine the amount of blur to show?
And how do I draw the images? Do I need to draw the blurred image of the background first and then the normal image on top, or the other way around?
Any help is appreciated.
UPDATE:
Well, I tried datenwolf's approach, but I am stuck at the blur shader.
I used this fragment shader for blurring, but it doesn't give the exact translucent effect I wish to get:
#version 120
uniform sampler2D sceneTex;
uniform float rt_w;
uniform float rt_h;
uniform float vx_offset;
float offset[3] = float[]( 0.0, 1.3846153846, 3.2307692308 );
float weight[3] = float[]( 0.2270270270, 0.3162162162, 0.0702702703 );
void main()
{
    vec3 tc = vec3(1.0, 0.0, 0.0);
    if (gl_TexCoord[0].x<(vx_offset-0.01))
    {
        vec2 uv = gl_TexCoord[0].xy;
        tc = texture2D(sceneTex, uv).rgb * weight[0];
        for (int i=1; i<3; i++)
        {
            tc += texture2D(sceneTex, uv + vec2(offset[i])/rt_w, 0.0).rgb * weight[i];
            tc += texture2D(sceneTex, uv - vec2(offset[i])/rt_w, 0.0).rgb * weight[i];
        }
    }
    else if (gl_TexCoord[0].x>=(vx_offset+0.01))
    {
        tc = texture2D(sceneTex, gl_TexCoord[0].xy).rgb;
    }
    gl_FragColor = vec4(tc, 1.0);
}
I was looking for a blur effect like the one shown at this link:
http://incubator.quasimondo.com/processing/superfast_blur.php
but that software (CPU) algorithm does not fit into my code. Any help with a blur shader that produces this kind of effect would be appreciated. I am using OpenGL ES 2.0 on an ARM Mali GPU.

There's no detection process involved in window composition. Every window is available as a separate texture, and the windows are drawn in depth order with a blurring step in between.
It is done roughly like this: you have two framebuffer objects A and B for the whole screen. For each window you
draw the area the window covers from B to A with a blurring filter applied
you draw the actual window contents on top of this to B
swap A←→B
repeat for every window in depth order. Basically this comes down to a number of texture switches, and although those are somewhat expensive, the number of (overlapping) windows on a screen will always stay within reasonable limits, so this is not really a problem.
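For the blurring filter itself, a minimal horizontal Gaussian pass for OpenGL ES 2.0 could look like the sketch below. The names v_texCoord, u_texture and u_texelWidth are placeholders rather than anything from the question; the weights are the same 5-tap coefficients used in the question's shader. Render the window-covered area with it, then run a second pass with the offset applied along y for the vertical direction:
precision mediump float;
varying vec2 v_texCoord;      // interpolated texture coordinate for the quad being drawn
uniform sampler2D u_texture;  // the scene behind the window (color texture of framebuffer B)
uniform float u_texelWidth;   // 1.0 / width of the render target
void main()
{
    // Horizontal 5-tap Gaussian blur (same weights/offsets as in the question).
    vec2 o1 = vec2(1.3846153846 * u_texelWidth, 0.0);
    vec2 o2 = vec2(3.2307692308 * u_texelWidth, 0.0);
    vec3 color = texture2D(u_texture, v_texCoord).rgb      * 0.2270270270;
    color     += texture2D(u_texture, v_texCoord + o1).rgb * 0.3162162162;
    color     += texture2D(u_texture, v_texCoord - o1).rgb * 0.3162162162;
    color     += texture2D(u_texture, v_texCoord + o2).rgb * 0.0702702703;
    color     += texture2D(u_texture, v_texCoord - o2).rgb * 0.0702702703;
    gl_FragColor = vec4(color, 1.0);
}
A wider, softer blur (closer to the Aero look) only needs more taps or repeated passes; the translucency itself comes from drawing the window contents over this blurred background with ordinary alpha blending.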

Related

Rendering to a custom framebuffer using the same texture as both input and output

Some fragment shaders on ShaderToy (e.g. the fluid dynamics one, https://www.shadertoy.com/view/4tGfDW ) use the same buffer as both input and output. But when I try to do this in my C/C++ code it does not work (it renders strange checkerboard artifacts, like inconsistent visual memory). To work around this issue I have to use two different framebuffers A, B and flip textures (first render A to B, then render B back to A).
I understand that OpenGL does not allow using the same texture as both input and output (?) due to memory consistency issues.
But isn't there a more elegant solution than using two framebuffers? E.g. some lock, or a temporary cache (some synchronization flag that takes care of this)?
EDIT - Details to answer the comment/question:
OpenGL (depending on the GL version) has some very specific rules about what can and can't be done when the same texture is used as render target and sampler input. Whether your use case can be implemented within this set of requirements is not clear, as you have not explained what exactly you need or want to do here.
Basically, I want to implement a fluid-dynamics solver (e.g. the one from the ShaderToy linked above) as well as other partial-differential-equation solvers. That means each pixel's output depends on some convolution mask (derivative, Laplacian, average) over neighboring pixels. There may also be some movement (advection), which means reading values from distant pixels.
I have noticed that the artifacts appear mostly when I read/write pixels at different places, i.e. when the access is non-local (e.g. pixel [100,100] depends on pixel [10,10]).
Example of a simple fluid solver from ShaderToy:
vec4 solveFluid(sampler2D smp, vec2 uv, vec2 w, float time, vec3 mouse, vec3 lastMouse)
{
    const float K = 0.2;
    const float v = 0.55;

    vec4 data = textureLod(smp, uv, 0.0);
    vec4 tr = textureLod(smp, uv + vec2(w.x , 0), 0.0);
    vec4 tl = textureLod(smp, uv - vec2(w.x , 0), 0.0);
    vec4 tu = textureLod(smp, uv + vec2(0 , w.y), 0.0);
    vec4 td = textureLod(smp, uv - vec2(0 , w.y), 0.0);

    vec3 dx = (tr.xyz - tl.xyz)*0.5;
    vec3 dy = (tu.xyz - td.xyz)*0.5;
    vec2 densDif = vec2(dx.z ,dy.z);

    data.z -= dt*dot(vec3(densDif, dx.x + dy.y) ,data.xyz); //density

    vec2 laplacian = tu.xy + td.xy + tr.xy + tl.xy - 4.0*data.xy;
    vec2 viscForce = vec2(v)*laplacian;
    data.xyw = textureLod(smp, uv - dt*data.xy*w, 0.).xyw; //advection

    vec2 newForce = vec2(0);
    data.xy += dt*(viscForce.xy - K/dt*densDif + newForce); //update velocity
    data.xy = max(vec2(0), abs(data.xy)-1e-4)*sign(data.xy); //linear velocity decay

#ifdef USE_VORTICITY_CONFINEMENT
    data.w = (tr.y - tl.y - tu.x + td.x);
    vec2 vort = vec2(abs(tu.w) - abs(td.w), abs(tl.w) - abs(tr.w));
    vort *= VORTICITY_AMOUNT/length(vort + 1e-9)*data.w;
    data.xy += vort;
#endif

    data.y *= smoothstep(.5,.48,abs(uv.y-0.5)); //Boundaries
    data = clamp(data, vec4(vec2(-10), 0.5 , -10.), vec4(vec2(10), 3.0 , 10.));

    return data;
}
I have noticed that the artifacts appear mostly when I read/write pixels at different places, i.e. when the access is non-local (e.g. pixel [100,100] depends on pixel [10,10])
Yes, this is never going to work on GPUs, as there are no guarantees whatsoever on the order of individual fragment shader invocations. So whether the invocation writing to pixel [100,100] sees the results of the invocation writing to [10,10], or the original data, is totally random. As per the spec, you get undefined values when reading in such a concurrent read/write scenario, so theoretically you might not even get one or the other, but see partial writes or totally different values (although that is not likely to occur on real-world hardware).
And order guarantees on such a scale simply do not make sense within the render pipeline, so there is also no practical means of synchronization you could manually add to solve this issue.
To work around this issue I have to use two different framebuffers A, B and flip textures (first render A to B, then render B back to A)
Yes, the ping-pong approach is what you should do for this use case. Honestly, it should not incur any significant performance penalty in this scenario: you seem to write every output pixel anyway, so you don't need an additional copy of "untouched" pixels. All it costs is the additional memory.

How to draw the data (from glReadPixels) onto the default (display) framebuffer in OpenGL ES 2.0

Sorry if I am asking about something that is already covered elsewhere; so far I could not trace it. I read up on FBOs and got a fair idea of off-screen buffering. http://mattfife.com/?p=2813 is a nice little article on FBOs. In all the examples, including that one, I don't see details on how to write the data retrieved through a glReadPixels call onto the default display framebuffer. Sorry if I am missing something silly; I did my due diligence but could not find any example.
Note: I am using OpenGL ES 2.0, so I cannot use calls such as glDrawPixels, etc.
Basically my requirement is off-screen buffering, because I am working on subtitles/captions, where scrolling the caption means re-rendering the lines until they move out of the caption display area.
I got a suggestion to use an FBO and bind the created texture when rendering to the main default framebuffer.
My actual need is captions/subtitles (which can be in scrolling mode).
Suppose the first time I had the following on display:
This is Line Number - 1
This is Line Number - 2
This is Line Number - 3
After scrolling, I want to have:
This is Line Number - 2
This is Line Number - 3
This is Line Number - 4
The second time I want to render, will I have to update the content in the off-screen FBO? That would mean re-writing line 2 and line 3 at a new position, removing line 1 and adding line 4.
Create a framebuffer with a texture attachment (see Attaching Images). Note that glFramebufferTexture2D is supported by OpenGL ES 2.0.
The color plane of the framebuffer can still be read back to the CPU with glReadPixels, the same way as when you use a renderbuffer, but the rendering is stored in a 2D texture.
Bind the texture and the default framebuffer and render a screen space quad with the texture on it.
Render a quad (GL_TRIANGLE_FAN) with the vertex coordinates (-1, -1), (1, -1), (1, 1), (-1, 1) and use the following simple OpenGL ES 2.0 shaders:
Vertex shader
attribute vec2 pos;
varying vec2 uv;
void main()
{
    uv = pos * 0.5 + 0.5;
    gl_Position = vec4(pos, 0.0, 1.0);
}
Fragment shader
precision mediump float;
varying vec2 uv;
uniform sampler2D u_texture;
void main()
{
    gl_FragColor = texture2D(u_texture, uv);
}

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I render via a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture and the fine detail will be lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure how to do is find out whether (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup; when they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid enough grasp of the transforms and spaces involved to work out the details.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vPosition = (modelMatrix * vec4( position, 1.0 )).xyz;
    gl_Position = projectionMatrix * mvPosition;

    vec3 transformedNormal = normalMatrix * vec3( normal );
    vNormal = normalize( transformedNormal );
    vViewDirection = normalize( mvPosition.xyz );
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed the perspective camera in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, texture2D(inputtexture, vUv).gr);
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in real time. I think it's actually a very elegant solution for my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if (pixel_ratio > 2.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if (pixel_ratio > 3.0) {
    inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, as it is a pre-rasterization stage and hence resolution-agnostic, but in the fragment shader you can use dFdx, dFdy and fwidth from the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
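As a rough, self-contained sketch of that approach (assuming the vUv varying and the uInputTextureResolution uniform from the question, and that derivatives are enabled in the ShaderMaterial), this fragment shader visualises how many texels of the input texture a single screen pixel covers:
#extension GL_OES_standard_derivatives : enable
precision mediump float;
varying vec2 vUv;                      // from the question's vertex shader
uniform float uInputTextureResolution; // input texture size in texels (assumed square)
void main() {
    // fwidth(vUv) approximates the UV extent covered by one screen pixel.
    vec2 texelsPerPixel = fwidth(vUv) * uInputTextureResolution;
    float footprint = max(texelsPerPixel.x, texelsPerPixel.y);
    // Debug view: black while zoomed in (one texel or less per pixel),
    // brighter as more texels collapse into a single screen pixel.
    gl_FragColor = vec4(vec3(clamp(footprint / 8.0, 0.0, 1.0)), 1.0);
}
In the real shader this footprint plays the role of the question's pixel_ratio and decides how far the extremum search over neighboring texels should reach.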
If you're not updating the texture in real time, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.

How do I create a proper bevel effect fragment shader in OpenGL ES 2.0?

I'm new to writing fragment shaders in GLSL for OpenGL ES 2.0 and I'm trying to create a fragment shader that produces a bevel effect for a given graphic. Here's what I've been able to do so far
(ignore the lower wall and other texturing, only look at the top part which is where the bevel effect is applied):
Here's what the desired result should be:
Notice the difference in shading at diagonals: they are more lightly shaded than horizontal edges. Notice the transition from diagonal edges to horizontal or vertical ones. Also notice the thickness of the bevel. I'd like to get as close to this desired result as possible.
Right now the fragment shader I'm using is fairly simple, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D s_texture;
uniform float u_time;
void main()
{
    vec2 onePixel = vec2(0, 1.0 / 640.0);
    vec2 texCoord = v_texCoord;

    vec4 color;
    color.rgb = vec3(0.5);
    color += texture2D(s_texture, texCoord - onePixel) * 5.0;
    color -= texture2D(s_texture, texCoord + onePixel) * 5.0;
    color.rgb = vec3((color.r + color.g + color.b) / 3.0);

    gl_FragColor = vec4(color.rgb, 1);
}
What would I need to add to my shader to create the desired effect?
I think the example you have shown was not done entirely with fragment shader code. It was likely done by beveling the geometry, which could be done with a geometry shader, except that those do not exist in ES. So I would either use an authoring tool like Blender to do the beveling on your model, or maybe use a texture for a bump-mapping technique.
The optimal way to get a bevel effect is to modify the mesh in Blender or another editor.
If you do want to achieve this with a shader, it may be possible by using a bump map that is prepared specifically to hide the edge (a rough sketch of this idea follows below).
There may be some multi-pass and render-buffer solutions, but I don't know much about those. You can find edges from the depth buffer, but that is not the best approach in terms of performance.
I recently found a way to get a bevel effect without special textures or changed geometry (that is why I'm answering this question). It does require modifications to the vertex data, though: you need to add extra normal vectors to each vertex, so you have to convert the mesh to work specifically with that shader. article
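As a rough sketch of the bump-map idea above: s_normalMap is a hypothetical, pre-authored normal map whose edge texels encode the bevelled slope, while v_texCoord and s_texture are the names from the question's shader. Diagonal edges then automatically pick up different shading than horizontal or vertical ones, because the lighting comes from the per-texel normal:
precision mediump float;
varying vec2 v_texCoord;
uniform sampler2D s_texture;     // base color, as in the question
uniform sampler2D s_normalMap;   // hypothetical pre-authored bevel normal map
void main()
{
    // Unpack the tangent-space normal from [0,1] to [-1,1].
    vec3 n = normalize(texture2D(s_normalMap, v_texCoord).rgb * 2.0 - 1.0);
    // Fixed light direction; tweak to taste.
    vec3 lightDir = normalize(vec3(-0.5, 0.7, 0.5));
    float diffuse = max(dot(n, lightDir), 0.0);
    vec3 base = texture2D(s_texture, v_texCoord).rgb;
    gl_FragColor = vec4(base * (0.4 + 0.6 * diffuse), 1.0);
}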

GLSL: simulating 3D texture with 2D texture

I came up with some code that simulates a 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64 and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable
varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;

vec4 flat_texture3D()
{
    vec3 p = pos;
    vec2 inimg = p.xy;
    int d = int(p.z*128.0);

    float ix = (d % 8);
    float iy = (d / 8);
    vec2 oc = inimg + vec2(ix, iy);
    oc *= 0.125;

    return texture2D(noisef, oc);
}

void main (void)
{
    vec4 noisevec = flat_texture3D();
    gl_FragColor = noisevec;
}
The tiling logic seems to work ok and there is only one problem with this code. It looks like this:
There are strange 1 to 2 pixel wide streaks between the layers of voxels.
The streaks appear just at the border when d changes.
I've been working on this for 2 days now and still without any idea of what's going on here.
This looks like a texture filtering issue. Think about it: when you come close to the tile border, the bilinear filter will consider the neighboring texel, which in your case comes from another "depth layer".
To avoid this, you can clamp the texture coords so that they never leave the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). But be aware that the problem will become worse when using mipmapping. Also be aware that you are currently not able to filter in the z direction, as a real 3D texture would; you could simulate this manually in the shader, of course.
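As a rough sketch of that per-tile clamp, reusing the pos varying and noisef sampler from the question and keeping its 8x8 tile layout; the in-tile coordinate is clamped to the outermost texel centers of a 128x128 tile before the atlas offset is added, so the bilinear filter never reads across a tile border:
varying vec3 pos;
uniform sampler2D noisef;
vec4 flat_texture3D_clamped()
{
    const float tileCount = 8.0;         // 8 x 8 tiles in the 1024x1024 atlas
    const float halfTexel = 0.5 / 128.0; // half a texel in tile-local UV
    vec3 p = pos;
    float d  = floor(p.z * 128.0);       // depth layer index, as in the question's code
    float ix = mod(d, tileCount);
    float iy = floor(d / tileCount);
    // Keep the in-tile coordinate away from the tile border so the filter
    // never mixes in texels that belong to the neighboring depth layer.
    vec2 inTile = clamp(p.xy, vec2(halfTexel), vec2(1.0 - halfTexel));
    vec2 oc = (inTile + vec2(ix, iy)) / tileCount;
    return texture2D(noisef, oc);
}
This removes the streaks at layer borders, but as noted above it still does not filter between depth layers; that would require a second lookup into the next layer and a manual mix().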
But really: why not just use 3D textures? The hardware can do all this for you, with much less overhead...
