How do I create a proper bevel effect fragment shader in OpenGL ES 2.0?

I'm new to writing fragment shaders in GLSL for OpenGL ES 2.0, and I'm trying to create a fragment shader that creates a bevel effect for a given graphic. Here's what I've been able to do so far
(ignore the lower wall and other texturing; only look at the top part, which is where the bevel effect is applied):
Here's what the desired result should be:
Notice the difference in shading at the diagonals: they are more lightly shaded than the horizontal edges. Notice the transition from diagonal edges to horizontal or vertical ones, and also the thickness of the bevel. I'd like to get as close to this desired result as possible.
Right now the fragment shader I'm using is fairly simple, here's the code:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D s_texture;
uniform float u_time;
void main()
{
    // One texel step vertically (the texture is assumed to be 640 px tall).
    vec2 onePixel = vec2(0.0, 1.0 / 640.0);
    vec2 texCoord = v_texCoord;

    // Simple emboss: start from mid-grey, then add/subtract the texels one step either side vertically.
    vec4 color = vec4(0.5);
    color += texture2D(s_texture, texCoord - onePixel) * 5.0;
    color -= texture2D(s_texture, texCoord + onePixel) * 5.0;

    // Convert to greyscale.
    color.rgb = vec3((color.r + color.g + color.b) / 3.0);
    gl_FragColor = vec4(color.rgb, 1.0);
}
What would I need to add to my shader to create the desired effect?

I don't think the example you have shown was done entirely with fragment shader code. It was likely done by beveling the geometry. That could be done with a geometry shader, but geometry shaders don't exist in ES, so I would either use an authoring tool like Blender to bevel your model, or use a texture for a bump-mapping technique.

The optimal way to get a bevel effect is to modify the mesh in Blender or another editor.
If you do want to achieve this with a shader, it may be possible by using a bump map prepared specifically to hide the edge.
There may also be some multi-pass and render-buffer solutions, but I don't know much about those. You can find edges from the depth buffer, but that's not the best approach in terms of performance.
I recently found a way to get a bevel effect without special textures or changes to the geometry (which is why I'm answering this question :). It does require modifications to the vertex data, though: you need to add additional normal vectors to each vertex, so the mesh has to be converted to work specifically with that shader (article).
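For the bump-map route, here is a minimal sketch of what the fragment stage might look like. It assumes a hypothetical u_normalMap that bakes rounded-off normals along the edges and a hypothetical normalized light direction u_lightDir; neither is part of the original question's setup, they are only there to illustrate the idea.
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D s_texture;    // base colour, as in the question
uniform sampler2D u_normalMap;  // hypothetical: normals baked so the edges look rounded
uniform vec3 u_lightDir;        // hypothetical: normalized light direction
void main()
{
    // Unpack the tangent-space normal from [0,1] into [-1,1].
    vec3 n = normalize(texture2D(u_normalMap, v_texCoord).xyz * 2.0 - 1.0);

    // Simple Lambert term; the baked, rounded normals are what produce the bevel-like shading.
    float diffuse = max(dot(n, normalize(u_lightDir)), 0.0);

    vec4 base = texture2D(s_texture, v_texCoord);
    gl_FragColor = vec4(base.rgb * (0.3 + 0.7 * diffuse), base.a);
}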

Related

How do you increase the space between pixels in a fragment shader?

I'm currently working on a shader for a very mundane effect I'd like to achieve. It's a little bit hard to explain, but the basic gist is that I'm trying to "pull apart" the pixels of a pixel art image.
You can see my current progress, however minor, at this jsfiddle:
https://jsfiddle.net/roomyrooms/Lh46z2gw/85/
I can distort the image easily, of course, and make it stretch the further away from the center it is. But this distorts and warps it smoothly, and all the pixels remain connected (whether they're squished/smeared/etc.)
I would like to get an effect where the space between the pixels is stretched rather than the pixels themselves stretching. Sort of like if you were to swipe sand across a table. The grains of sand stay the same size, they just get further apart.
Any ideas are welcome! Thanks. Here's what I've got code-wise so far:
var fragmentShader = `
precision mediump float;

varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;
uniform float time;

vec2 mapCoord( vec2 coord )
{
    coord *= inputSize.xy;
    coord += inputSize.zw;
    return coord;
}

vec2 unmapCoord( vec2 coord )
{
    coord -= inputSize.zw;
    coord /= inputSize.xy;
    return coord;
}

void main(void)
{
    vec2 coord = mapCoord(vTextureCoord);
    float dist = distance(coord.x, inputSize.x/2.);
    coord.x += dist/4.;
    coord = unmapCoord(coord);
    gl_FragColor = texture2D(uSampler, coord);
}`
EDIT: Added an illustration of the effect I'm trying to achieve here:
I can get something along these lines:
With modulo, but it's discarding half of the image in the process.
You can:
discard some fragments (that is slow on some mobile devices)
use a stencil mask to draw only where you want
draw transparent pixels (alpha = 0) for the ones that you do not want
and lastly, you can draw an array of points or squares and move them around.
As far as I know, the fragment shader will run on every pixel in your triangle; you can only tell it what color to set that pixel to. In your example you're already duplicating columns of pixels, so you can discard some of them, hopefully without losing any of the source image's pixels: stretch the coord 2x, then discard every other column.
vec2 coord = mapCoord(vTextureCoord);
if(coord.x < 100.0 && floor(coord.x/2.0)==floor((coord.x+1.0)/2.0))
discard;
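If it helps, here's a minimal standalone sketch of that idea as a full fragment shader, reusing the mapCoord/unmapCoord helpers and inputSize uniform from the question. The 2x factor is illustrative, and the gaps will simply show whatever is rendered behind the sprite.
precision mediump float;

varying vec2 vTextureCoord;
uniform sampler2D uSampler;
uniform highp vec4 inputSize;

vec2 mapCoord( vec2 coord )   { return coord * inputSize.xy + inputSize.zw; }
vec2 unmapCoord( vec2 coord ) { return (coord - inputSize.zw) / inputSize.xy; }

void main(void)
{
    vec2 coord = mapCoord(vTextureCoord);

    // Drop every other column of output pixels; these become the gaps.
    if (floor(coord.x / 2.0) == floor((coord.x + 1.0) / 2.0))
        discard;

    // Halve the sampling coordinate so every source column is still covered:
    // each source column now spans two output columns, one kept and one discarded.
    coord.x /= 2.0;

    coord = unmapCoord(coord);
    gl_FragColor = texture2D(uSampler, coord);
}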

Finding the size of a screen pixel in UV coordinates for use by the fragment shader

I've got a very detailed texture (with false-color information that I'm rendering via a false-color lookup in the fragment shader). My problem is that sometimes the user will zoom far away from this texture, and the fine detail is lost: fine lines in the texture can't be seen. I would like to modify my code to make these lines pop out.
My thinking is that I can run a fast filter over neighboring texels and pick out the biggest/smallest/most interesting value to render. What I'm not sure about is how to find out whether (and how much) to do this. When the user is zoomed into a triangle, I want the standard lookup. When they are zoomed out, a single pixel on the screen maps to many texture pixels.
How do I get an estimate of this? I am doing this with both orthographic and perspective cameras.
My thinking is that I could somehow use the vertex shader to get an estimate of how big one screen pixel is in UV space and pass that as a varying to the fragment shader, but I still don't have a solid enough grasp of the transforms and spaces involved to work out the details.
My current vertex shader is quite simple:
varying vec2 vUv;
varying vec3 vPosition;
varying vec3 vNormal;
varying vec3 vViewDirection;
void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    vPosition = ( modelMatrix * vec4( position, 1.0 ) ).xyz;
    gl_Position = projectionMatrix * mvPosition;
    vec3 transformedNormal = normalMatrix * vec3( normal );
    vNormal = normalize( transformedNormal );
    vViewDirection = normalize( mvPosition.xyz );
}
How do I get something like vDeltaUV, which gives the distance between screen pixels in UV units?
Constraints: I'm working in WebGL, inside three.js.
Here is an example of one image, where the user has zoomed perspective in close to my texture:
Here is the same example, but zoomed out; the feature above is a barely-perceptible diagonal line near the center (see the coordinates to get a sense of scale). I want this line to pop out by rendering all pixels with the reddest color of the corresponding array of texels.
Addendum (re LJ's comment)...
No, I don't think mipmapping will do what I want here, for two reasons.
First, I'm not actually mapping the texture; that is, I'm doing something like this:
gl_FragColor = texture2D(mappingtexture, vec2(inputtexture.g, inputtexture.r));
The user dynamically creates the mappingtexture, which allows me to vary the false-color map in realtime. I think it's actually a very elegant solution to my application.
Second, I don't want to draw the AVERAGE value of neighboring pixels (i.e. smoothing); I want the most EXTREME value of neighboring pixels (i.e. something more akin to edge finding). "Extreme" in this case is technically defined by my encoding of the g/r color values in the input texture.
Solution:
Thanks to the answer below, I've now got a working solution.
In my javascript code, I had to add:
extensions: {derivatives: true}
to my declaration of the ShaderMaterial. Then in my fragment shader:
float dUdx = dFdx(vUv.x); // Difference in U between this pixel and the one to the right.
float dUdy = dFdy(vUv.x); // Difference in U between this pixel and the one above.
float dU = sqrt(dUdx*dUdx + dUdy*dUdy);
float pixel_ratio = (dU*(uInputTextureResolution));
This allows me to do things like this:
float x = ... the u coordinate in pixels in the input texture
float y = ... the v coordinate in pixels in the input texture
vec4 inc = get_encoded_adc_value(x,y);
// Extremum mapping:
if(pixel_ratio>2.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x+1.0, y));
}
if(pixel_ratio>3.0) {
inc = most_extreme_value(inc, get_encoded_adc_value(x-1.0, y));
}
The effect is subtle, but definitely there! The lines pop much more clearly.
Thanks for the help!
You can't do this in the vertex shader, as it's a pre-rasterization stage and hence output-resolution agnostic. In the fragment shader, however, you can use dFdx, dFdy and fwidth from the GL_OES_standard_derivatives extension (which is available pretty much everywhere) to estimate the sampling footprint.
If you're not updating the texture in realtime, a simpler and more efficient solution would be to generate custom mip levels for it on the CPU.
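For reference, a minimal standalone sketch of the same footprint estimate using fwidth instead of manual dFdx/dFdy, assuming the vUv varying and uInputTextureResolution uniform from the solution above (in a raw WebGL1 shader you need the #extension line; three.js handles it when derivatives are enabled). The output here just visualizes the ratio rather than doing the extremum lookup.
#extension GL_OES_standard_derivatives : enable
precision mediump float;

varying vec2 vUv;
uniform float uInputTextureResolution;

void main()
{
    // fwidth(vUv) ~= |dFdx(vUv)| + |dFdy(vUv)|: how much UV changes per screen pixel.
    vec2 uvPerPixel = fwidth(vUv);

    // How many texels of the input texture fall under one screen pixel.
    float pixelRatio = max(uvPerPixel.x, uvPerPixel.y) * uInputTextureResolution;

    // pixelRatio > 1.0 means the texture is minified; widen the neighbourhood search then.
    gl_FragColor = vec4(vec3(clamp(pixelRatio / 4.0, 0.0, 1.0)), 1.0);
}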

Texture lookup inside FBO simulation shader

I'm trying to make an FBO particle system by calculating positions in a separate pass, using the code from this post: http://barradeau.com/blog/?p=621.
I render a sphere of particles, without any movement:
The only thing I'm adding so far is a texture lookup in the simulation fragment shader:
void main() {
    vec3 pos = texture2D( texture, vUv ).xyz;
    // THIS LINE: pos is approximately in the -200..200 range
    float map = texture2D( texture1, abs( pos.xy / 200. ) ).r;
    ...
    // save the map value in the ping-pong texture as alpha
    gl_FragColor = vec4( pos, map );
}
texture1 is half black, half white.
Then in the render vertex shader I read this map parameter:
map = texture2D( positions, position.xy ).a;
and use it in render fragment shader to see the color:
vec3 finalColor = mix(vec3(1.,0.,0.),vec3(0.,1.,0.),map);
gl_FragColor = vec4( finalColor, .2 );
So what I hope to see is this (made by setting the same texture in the render shaders):
But what I really see is this (made by setting the texture in the simulation shaders):
The colors are mixed up; you can mostly see more red ones where they should be, but there are a lot of green particles in between.
I also tried to make my own demo with a simplified texture and the same idea, and I got this:
Also mixed up, but you can still make out the image.
Same error.
I think I am missing something obvious, but I've been struggling with this for a couple of days now and haven't been able to find the mistake myself.
I'd be very grateful if someone could point me in the right direction. Thank you in advance!
Demo with error: http://cssing.org.ua/examples/fbo-error/
Full code i'm referring: https://github.com/akella/fbo-test
You should disable texture filtering by using GL_NEAREST min/mag filters.
My guess is that THREE.TextureLoader() loads the texture with mipmaps, and the texture2D call in the vertex shader uses the lowest-res mipmap. In vertex shaders you should use texture2DLod(texture, texCoord, 0.0); note the third parameter, lod, which selects mipmap level 0.
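A minimal sketch of how the render vertex shader's lookup could look with an explicit LOD, reusing the positions sampler and the position-as-UV lookup from the question. The matrix uniforms and the position attribute are the ones a three.js ShaderMaterial provides automatically; the point-size value is illustrative.
// modelViewMatrix, projectionMatrix and the position attribute come from three.js ShaderMaterial;
// positions is the ping-pong texture written by the simulation pass.
uniform sampler2D positions;
varying float map;

void main() {
    // Explicit LOD 0 so the simulation data is never read from a lower-res mipmap.
    vec4 data = texture2DLod( positions, position.xy, 0.0 );
    map = data.a;        // the map value stored in the alpha channel
    vec3 pos = data.xyz; // the particle position computed by the simulation pass

    gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );
    gl_PointSize = 2.0;  // illustrative; size the points however the demo does
}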

SceneKit painting on texture with texture coordinates

I have a Collada model that I load into SceneKit. When I perform a hittest on the model I am able to retrieve the texture coordinates of the model that was hit.
With these texture coordinates I should be able to replace the color at those coordinates.
That way I should be able to draw on the model.
Correct me if I am wrong so far.
I've read a lot of articles up to now, but I just can't get my shaders right.
(Though I did get some funky effects ;-)
My vertex shader :
precision highp float;
attribute vec4 position;
attribute vec2 textureCoordinate;
attribute vec2 aTexureCoordForColor; //coordinates from the hittest
uniform mat4 modelViewProjection;
varying vec2 aTexureCoordForColorVarying; // passing to the fragment shader here
varying vec2 texCoord;
void main(void) {
    // Pass along to the fragment shader
    texCoord = textureCoordinate;
    aTexureCoordForColorVarying = aTexureCoordForColor; // assigning here
    // output the projected position
    gl_Position = modelViewProjection * position;
}
my fragment shader
precision highp float;
uniform sampler2D yourTexture;
uniform vec2 uResolution;
uniform int uTexureCoordsCount;
varying vec2 texCoord;
varying vec2 aTexureCoordForColorVarying;
void main(void) {
    // ??????????? no idea anymore what to do here
    gl_FragColor = texture2D(yourTexture, texCoord);
}
If you need more code please let me know.
First, shaders aren't the only way to draw onto an object's material. One other option that might work well for you is to use a SpriteKit scene as the material's contents — see this answer for some help with that.
Second, if you stick to the shader route, you don't need to rewrite the whole shader program just to paint on top of the existing texture. (If you do, you lose things that SceneKit's program provides for you, like lighting and bump mapping. No sense reinventing those wheels unless you really want to.) Instead, use a shader modifier: a little snippet of GLSL that gets inserted into the SceneKit shader program. The SCNShadable reference explains how to use those.
Third, I'm not sure you're providing the texture coordinates to your shader in the best way. You want every fragment to get the same texcoord value for the clicked point, so there's little point to passing it into GL as an attribute and interpolating it between the vertex and fragment stages. Just pass it as a uniform, and set that uniform on your material with key-value coding. (See the SCNShadable reference again for info on binding shader parameters with KVC.)
Finally, to get at the main point of your question... :)
To change the output color of the fragment shader (or shader modifier) at or near a particular set of texture coordinates, just compare your passed-in click coordinates to the current set of texcoords that'd be used for the regular texture lookup. Here's an example that does that, going the shader modifier route:
uniform vec2 clickTexcoord;
// set this from ObjC/Swift code with setValue:forKey:
// and an NSValue with CGPoint data

uniform float radius = 0.01;
// change this to determine how large an area to highlight

uniform vec3 paintColor = vec3(0.0, 1.0, 0.0);
// nice and green; you can change this with KVC, too

#pragma body

if (distance(_surface.diffuseTexcoord, clickTexcoord) < radius) {
    _surface.diffuse.rgb = paintColor;
}
Use this example as a SCNShaderModifierEntryPointSurface shader modifier and lighting/shading will still be applied to the result. If you want your paint to override lighting, use a SCNShaderModifierEntryPointFragment shader modifier instead, and in the GLSL snippet set _output.color.rgb instead of _surface.diffuse.rgb.
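For comparison, a minimal sketch of that fragment entry-point variant, reusing the same uniforms as above; here the paint overrides whatever the lighting produced.
uniform vec2 clickTexcoord;   // set from ObjC/Swift with setValue:forKey:
uniform float radius = 0.01;
uniform vec3 paintColor = vec3(0.0, 1.0, 0.0);

#pragma body

if (distance(_surface.diffuseTexcoord, clickTexcoord) < radius) {
    // Overrides the lit result, so the paint is always pure colour.
    _output.color.rgb = paintColor;
}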

GLSL: simulating 3D texture with 2D texture

I came up with some code that simulates a 3D texture lookup using a big 2D texture that contains the tiles. The 3D texture is 128x128x64, and the big 2D texture is 1024x1024, divided into 64 tiles of 128x128.
The lookup code in the fragment shader looks like this:
#extension GL_EXT_gpu_shader4 : enable
varying float LightIntensity;
varying vec3 pos;
uniform sampler2D noisef;
vec4 flat_texture3D()
{
    vec3 p = pos;
    vec2 inimg = p.xy;
    int d = int(p.z * 128.0);
    float ix = (d % 8);
    float iy = (d / 8);
    vec2 oc = inimg + vec2(ix, iy);
    oc *= 0.125;
    return texture2D(noisef, oc);
}

void main (void)
{
    vec4 noisevec = flat_texture3D();
    gl_FragColor = noisevec;
}
The tiling logic seems to work ok and there is only one problem with this code. It looks like this:
There are strange 1 to 2 pixel wide streaks between the layers of voxels.
The streaks appear just at the border when d changes.
I've been working on this for two days now and still have no idea what's going on here.
This looks like a texture filtering issue. Think about it: when you come close to the border, the bilinear filter will consider the neighboring texel, which in your case comes from another "depth layer".
To avoid this, you can clamp the texture coords so that they never leave the rect defined by the outermost texel centers of the tile (similar to GL_CLAMP_TO_EDGE, but on a per-tile basis). Be aware, though, that the problem gets worse when using mipmapping. You should also be aware that currently you cannot filter in the z direction, as a real 3D texture would; you could simulate this manually in the shader, of course.
But really: why not just use 3D textures? The hardware can do all this for you, with much less overhead...
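A minimal sketch of that per-tile clamp, reusing the pos varying, noisef sampler, GL_EXT_gpu_shader4 extension and 8x8 layout of 128x128 tiles from the question; the half-texel margin is the key part.
vec4 flat_texture3D_clamped()
{
    vec3 p = pos;
    vec2 inimg = p.xy;
    int d = int(p.z * 128.0);
    float ix = float(d % 8);
    float iy = float(d / 8);

    // Keep the in-tile coordinate at least half a texel away from the tile border,
    // so bilinear filtering never reaches into a neighbouring tile (depth layer).
    float halfTexel = 0.5 / 128.0;
    inimg = clamp(inimg, vec2(halfTexel), vec2(1.0 - halfTexel));

    vec2 oc = (inimg + vec2(ix, iy)) * 0.125;
    return texture2D(noisef, oc);
}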
