My problem is the following. I have a canvas in which I am drawing a piece using WebGL.
When it renders, it is fine.
But then, two seconds later or so, without moving the camera or anything, some of the triangles disappear.
And after moving the camera or anything else, the missing triangles stay the same (I have read that in some cases this is due to the buffer and the distance to the object, so zooming in or out can change which triangles go missing).
What could be the problem?
I am applying both color and texture to each element in order to draw black lines around each "square" (my texture is a square with a black border and white inside). That means the final color is computed in the fragment shader by multiplying the color by the texture. It also means that some of the nodes are duplicated, or more (to give a node its TextureVertex attribute I need a separate copy of the node for each element it belongs to). It is important to notice that, when I create a mesh with fewer nodes, the triangles do not disappear. Anyway, I have seen very complex WebGL examples on the net, and I may have just 1000 nodes, so I don't think it can be a problem with my graphics hardware.
What do you think could be the problem? How would you solve it? If you need more info, just let me know. I didn't include code because it renders fine at the beginning, and furthermore I only have this problem with "big" meshes.
Thanks for the comment. Please find both images here:
First draw
A few seconds later.
EDITED: I'm going to give some more details in case it helps to find the problem. I will give you the information for one of the squares (the rest of the squares follow the same scheme). Notice that they are defined in the code-behind as public variables and then passed to the HTML script:
Nodes for vertex buffer:
serverSideInitialCoordinates = {
-1.0,-1.0,0.0,
1.0,-1.0,0.0,
1.0,1.0,0.0,
-1.0,1.0,0.0,
0.0,-1.0,0.0,
1.0,0.0,0.0,
0.0,1.0,0.0,
-1.0,0.0,0.0,
0.0,0.0,0.0,
};
Connectivity to form triangles:
serverSideConnectivity = {
0,4,8,
0,8,7,
1,5,8,
1,8,4,
2,6,8,
2,8,5,
3,7,8,
3,8,6
};
Colors: not relevant.
TextureVertex: {
0.0,0.0,
1.0,0.0,
1.0,1.0,
0.0,1.0,
0.5,0.0,
1.0,0.5,
0.5,1.0,
0.0,0.5,
0.5,0.5
};
As I mentioned, I have an image which is white with just a few black pixels around the borders. So in the fragment shader I have something similar to this:
gl_FragColor = texture2D(u_texture, v_texcoord) * vColor;
Then I have a function that loads the image and creates the texture.
In the InitBuffers function I create the buffers and assign the vertex positions, the colors, and the connectivity of the triangles to them.
Finally, in the Draw function I bind the buffers again: vertexPosition, color (bound as the color attribute), texture (bound as the TextureVertex attribute), and connectivity, and then I set the matrix uniform and draw. I don't think the problem is here because it works fine for smaller meshes, but I still don't know why it doesn't for larger ones. I thought maybe Firefox's performance is worse than other browsers', but then I ran complex WebGL models I found on the web in Firefox and they work fine, with no triangles missing. If I draw the same objects without the texture (just colors) it works fine and no triangles are missing. Do you think it takes a lot of effort for the shader to compute the color every time by multiplying both things? Can you think of another way?
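In case it helps, my Draw function does roughly the following (this is a simplified sketch, not my literal code; the attribute/uniform locations and buffer names are placeholders, and gl.useProgram has already been called):

function draw(gl, locs, buffers, texture, matrix, indexCount) {
    // Vertex positions (x, y, z per node).
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.position);
    gl.enableVertexAttribArray(locs.position);
    gl.vertexAttribPointer(locs.position, 3, gl.FLOAT, false, 0, 0);

    // Per-node colors (r, g, b, a).
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.color);
    gl.enableVertexAttribArray(locs.color);
    gl.vertexAttribPointer(locs.color, 4, gl.FLOAT, false, 0, 0);

    // Texture coordinates (the TextureVertex values, u and v).
    gl.bindBuffer(gl.ARRAY_BUFFER, buffers.texcoord);
    gl.enableVertexAttribArray(locs.texcoord);
    gl.vertexAttribPointer(locs.texcoord, 2, gl.FLOAT, false, 0, 0);

    // Texture and the model-view-projection matrix uniform.
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.uniformMatrix4fv(locs.matrix, false, matrix);

    // Connectivity: the type passed here must match the typed array the indices
    // were uploaded from (a Uint16Array goes with UNSIGNED_SHORT).
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, buffers.indices);
    gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
}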
My idea was to just draw black lines between some nodes instead of using a complete texture, but I can't get it working: either I draw the triangles or I draw the lines, but it doesn't let me draw both at the same time. If I put in code for both, only the last "elements" are drawn.
Related
Hello, I'm trying to achieve the effect in the image below (something like a shine of light, but only on top of the raw image).
Unfortunately I cannot figure out how to do it. I've tried some shaders and assets from the Asset Store, but so far none of them has worked, and I don't know much about shaders.
The raw image is a UI element, and it displays a render texture that is being captured by a camera.
I'm totally lost here; any kind of help will be appreciated. How can I achieve that effect?
Fresnel shaders use the difference between the surface normal and the view vector to detect which pixels are facing the viewer and which aren't. A UI plane will always face the user, so no luck there.
Solving this with shaders can be done in two ways: either you bake a normal map of the imagined "curvature" of the outer edge (example), or you create a signed distance field (example) or some similar map of the distance to the edge. A normal map would probably allow for the most complex effects, and I am sure that some Fresnel shaders could work with it too. It does, however, require you to make a model of the shape and bake the normals from that.
A signed distance field, on the other hand, can be generated by a script from an image, so if you have a lot of images it might be the fastest approach. Getting the edge distance in real time inside the shader would not really work, since you'd have to sample a very large number of neighboring pixels, which might make the shader 10-20 times slower depending on how thick you need the edge to be.
If you don't need the image to be that dynamic, then maybe just creating an inner-glow black/white texture in Photoshop and overlaying it using an additive shader would work better for you. If you don't know how to write shaders, the two approaches above are a bit of a tall order.
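If it helps, here is a rough GLSL-style fragment sketch of the signed-distance-field idea (a Unity HLSL version would be analogous); u_image, u_sdf, u_glowColor and u_glowWidth are placeholder names, and the SDF texture is assumed to store the normalized distance to the nearest edge, 0.0 at the edge and 1.0 far inside:

precision mediump float;
varying vec2 v_uv;
uniform sampler2D u_image;     // the UI / render texture
uniform sampler2D u_sdf;       // precomputed distance-to-edge map
uniform vec3 u_glowColor;      // color of the shine
uniform float u_glowWidth;     // how far the glow reaches inward, in SDF units

void main() {
    vec4 base = texture2D(u_image, v_uv);
    float dist = texture2D(u_sdf, v_uv).r;
    // Strongest right at the edge, fading out u_glowWidth away from it.
    float glow = 1.0 - smoothstep(0.0, u_glowWidth, dist);
    gl_FragColor = vec4(base.rgb + u_glowColor * glow * base.a, base.a);
}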
I'm writing a little 3D engine. I've just added alpha blending functionality to my program, and I'm wondering one thing: do I have to sort all the primitives relative to the camera?
Let's take a simple example: I have a scene composed of 1 skybox and 1 tree with alpha-blended leaves!
Here's a screenshot of such a scene:
Up to here, everything seems to be correct concerning the alpha blending of the leaves relative to each other.
But if we get closer...
... we can see there is a little problem at the top right of the image (the area around the leaf forms a quad).
I think this bug comes from the fact that these two quads (primitives) should have been rendered after the ones behind them.
What do you think about my supposition?
PS: I want to point out that all the geometry for the leaves is rendered in just one draw call.
But if I'm right, it would mean that whenever I need to render an alpha-blended mesh like this tree, I need to update my VBO every time my camera moves, sorting all the primitives (triangles or quads) from the camera's point of view, so that the primitives at the back are rendered first...
What do you think of my idea?
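Just to make the idea concrete, here is a rough sketch of that per-frame sort, written in WebGL-style JavaScript purely as an illustration (all names are placeholders; positions is the flat x,y,z vertex array and indices is a Uint16Array of triangle indices):

function sortTrianglesBackToFront(gl, positions, indices, indexBuffer, cameraPos) {
    var triCount = indices.length / 3;
    var order = [];
    for (var t = 0; t < triCount; t++) {
        // Use the squared distance from the camera to the triangle centroid as the key.
        var cx = 0, cy = 0, cz = 0;
        for (var k = 0; k < 3; k++) {
            var i = indices[3 * t + k] * 3;
            cx += positions[i]; cy += positions[i + 1]; cz += positions[i + 2];
        }
        var dx = cx / 3 - cameraPos[0];
        var dy = cy / 3 - cameraPos[1];
        var dz = cz / 3 - cameraPos[2];
        order.push({ tri: t, dist: dx * dx + dy * dy + dz * dz });
    }
    order.sort(function (a, b) { return b.dist - a.dist; }); // farthest first

    // Rebuild the index buffer in back-to-front order and re-upload it.
    var sorted = new Uint16Array(indices.length);
    for (var n = 0; n < order.length; n++) {
        sorted.set(indices.subarray(3 * order[n].tri, 3 * order[n].tri + 3), 3 * n);
    }
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, sorted, gl.DYNAMIC_DRAW);
}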
When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended; I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40, 50, 50), the touching faces flicker and the render depth doesn't work anymore.
How can I fix that issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
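For context, the same idea at material-creation time could look like this (a sketch loosely based on the liveweave example; the size and color values are arbitrary):

var frontMaterial = new THREE.MeshBasicMaterial({
    color: 0xff0000,
    polygonOffset: true,
    polygonOffsetFactor: -0.1,   // negative pulls this surface forward in depth
    polygonOffsetUnits: -1
});
var frontBox = new THREE.Mesh(new THREE.BoxGeometry(40, 50, 50), frontMaterial);
scene.add(frontBox);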
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far-range reduction helps, then you're experiencing a lack of depth precision (how much depends on the device). Also look up 'z-fighting'.
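For example, with a three.js camera (a sketch; the exact values depend on your scene, and raising the near plane usually buys more precision than lowering the far plane):

var camera = new THREE.PerspectiveCamera(
    45,                                     // field of view in degrees
    window.innerWidth / window.innerHeight, // aspect ratio
    1,                                      // near plane
    1000                                    // far plane, instead of something huge
);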
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10, and you want another object to be behind the first one, put it at 0,0,11; that should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with
coplanar polygons, where two faces occupy essentially the same space,
with neither in front. Affected pixels are rendered with fragments
from one polygon or the other arbitrarily, in a manner determined by
the precision of the z-buffer.
"The renderer cannot reposition anything."
I think that this is completely untrue. The renderer can reposition everything, and it probably does unless it's ShaderToy, or some video filter, or something like that. Every time you move your camera, the renderer repositions everything (the camera is actually the only thing that does NOT move).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here's how this would work. Say you want to draw a decal on a surface: you can 'draw' another mesh on this surface, by, say, projecting a quad onto it. You want to draw a bullet hole over a concrete wall and end up with two coplanar surfaces: the wall and the bullet hole. You can figure out the depth-buffer precision, find the smallest representable value, and then move the bullet-hole mesh by that value towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
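If you'd rather not nudge the mesh yourself, a similar effect can come from the API's built-in polygon offset (a different mechanism than the NDC translation described above); a hedged WebGL-style sketch, where drawWall and drawBulletHole stand in for your normal draw calls:

gl.enable(gl.DEPTH_TEST);

// 1. Draw the wall normally.
drawWall();

// 2. Draw the coplanar decal with a small negative offset so its fragments
//    win the depth test instead of z-fighting with the wall.
gl.enable(gl.POLYGON_OFFSET_FILL);
gl.polygonOffset(-1.0, -1.0);
drawBulletHole();
gl.disable(gl.POLYGON_OFFSET_FILL);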
I don't see any flicker; the cube movement in 3D seems super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.
I am actually trying to develop a web application that visualizes a Finite Element mesh. To do so, I am using WebGL. Right now I have a page with all the code necessary to draw the mesh in the viewport using triangles as primitives (each quad element of the mesh is split into two triangles to draw it). The problem is that, when using triangles, the whole piece looks "continuous" and you can't see the separation between triangles. What I would actually like to achieve is to add lines between the nodes so that, around each quad element (formed by two triangles), these lines are drawn in black and the mesh can actually be seen.
I was able to define the lines in my page, but since a single draw call can only use one type of primitive, if I add the code for the line buffers and bind them, it just shows the lines, not the elements (as those were the last buffers bound).
So the closest solution I have found is using multiple shaders and managing them with multiple programs, but that only lets me either plot the geometry with triangles or draw just the lines, depending on which program is currently selected.
Could any of you help me with how to approach this issue? I have seen a Windows application that shows FE meshes using OpenGL, and it is able to mix triangles with points and lines, apart from using different layers, illumination, etc. So I am aware this may be complicated, but I assume that if it is somehow possible with OpenGL, it should be possible with WebGL as well.
If you provide a solution, I would really appreciate it if it included some code as an example, for instance drawing a single triangle with three black lines at its borders and maybe three points at the vertices.
setup()
{
<your current code here>
Additional step - Unbind the previous textures, upload and bind one 1x1 black pixel as a texture. Let this texture object be borderID;
}
Draw loop()
{
Unbind the previous textures, bind your normal textures, and draw the mesh like in your current setup. This will fill the entire area with the different colours, without borders (the current case).
Bind the borderID texture and draw the same vertices again, except this time use context.LINE_LOOP instead of context.TRIANGLES. This will draw lines with the black texture, which appear as borders on top of the previously drawn colors for each triangle. You can have something like the following, where currDrawMode toggles between drawing the mesh fill and drawing the border:
if (currDrawMode == 0)
    context3dStore.bindTexture(context3dStore.TEXTURE_2D, meshTextureObj[bindId]);
else
    context3dStore.bindTexture(context3dStore.TEXTURE_2D, borderTexture1pixObj[bindId]);
context3dStore.drawElements((currDrawMode == 0) ? context3dStore.TRIANGLES : context3dStore.LINE_LOOP, indicesCount[bindId], context3dStore.UNSIGNED_SHORT, 0);
Since the line texture appears as a border over the flat colors you had earlier, this should solve your need.
}
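Putting that together, here is a hedged, self-contained sketch of the two passes in plain WebGL (this is not the original setup's code; gl, the shader program, and the attribute bindings are assumed to already exist as described in the question, and lineIndexBuffer is an extra index buffer listing the border node pairs):

// Setup step: create the 1x1 black texture used for the border pass.
function createBlackTexture(gl) {
    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
                  gl.UNSIGNED_BYTE, new Uint8Array([0, 0, 0, 255]));
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    return tex;
}

// Draw step: the same geometry drawn twice, once filled and once as black lines.
function drawMeshWithBorders(gl, meshTexture, blackTexture,
                             triangleIndexBuffer, triangleIndexCount,
                             lineIndexBuffer, lineIndexCount) {
    // Pass 1: filled triangles with the normal texture (the current setup).
    gl.bindTexture(gl.TEXTURE_2D, meshTexture);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
    gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);

    // Pass 2: element edges drawn with the 1x1 black texture, so the same
    // shader (texture * color) outputs black for the border lines.
    gl.bindTexture(gl.TEXTURE_2D, blackTexture);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, lineIndexBuffer);
    gl.drawElements(gl.LINES, lineIndexCount, gl.UNSIGNED_SHORT, 0);
}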
I'm a little bit lost, and this is somewhat related to another question I've asked about fragment shaders, but goes beyond it.
I have an orthographic scene (although that may not be relevant), with the scene drawn here as black, and I have one billboarded sprite that I draw using a shader, shown in red. I have a point that I know and define myself, A, represented by the blue dot, at some x,y coordinate in the 2D coordinate space (the lower left of the screen is the origin). I need to mask the red billboard in a programmatic fashion where I specify 0% to 100%, with 0% being fully intact and 100% being fully masked. I can either pass 0-100% (0 to 1.0) into the shader, or I could precompute an angle; either solution would be fine.
( Here you can see the scene drawn with '0%' masking )
So when I set "15%" I want the following to show up:
( Here you can see the scene drawn with '15%' masking )
And when I set "45%" I want the following to show up:
( Here you can see the scene drawn with '45%' masking )
And here's an example of "80%":
The general idea, I think, is to pass in a uniform vec2 'A', and within the fragment shader determine whether the fragment lies in the area swept from the line going straight down from 'A' to the bottom of the screen, over to a line at the correct angle offset clockwise from there. If it is within that area, discard the fragment. (Discarding makes more sense than setting alpha to 0.0, or to 1.0 if keeping, right?)
But how can I actually achieve this? I don't understand how to implement that algorithm in terms of a shader. (I'm using OpenGL ES 2.0.)
One solution would be to calculate the difference between gl_FragCoord (I hope that exists under ES 2.0!) and the point (making sure the point is in screen coordinates), and use the two-argument atan function, giving you an angle. If the angle is not a value that you like (greater than the minimum and less than the maximum), kill the fragment.
Of course, killing fragments is not precisely the most performant thing to do. A (somewhat more complicated) triangle solution may still be faster.
EDIT:
To better explain "not precisely the most performant thing", consider that killing fragments still causes the fragment shader to run (it only discards the result afterwards), and it interferes with early depth/stencil fragment rejection.
Constructing a triangle fan like whoplisp suggested is more work, but it will not process any fragments that are not visible, will not interfere with depth/stencil rejection, and may look better in some situations too (with MSAA, for example).
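For completeness, a rough GLSL ES fragment sketch of the atan approach (all uniform/varying names are placeholders; flip the sign of d.x if the wedge should sweep the other way):

precision mediump float;
uniform sampler2D u_texture;
uniform vec2 u_point;   // point A, in the same pixel coordinates as gl_FragCoord
uniform float u_mask;   // 0.0 = nothing masked, 1.0 = fully masked
varying vec2 v_texcoord;

void main() {
    vec2 d = gl_FragCoord.xy - u_point;
    // Angle of the fragment around A, measured from 'straight down', in 0..2*PI.
    float angle = atan(d.x, -d.y);
    if (angle < 0.0) angle += 6.2831853;      // wrap negative angles
    if (angle < u_mask * 6.2831853) discard;  // inside the masked wedge
    gl_FragColor = texture2D(u_texture, v_texcoord);
}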
Why don't you just draw some black triangles on top of the red rectangle?