Zoom-invariant objects in WebGL - opengl-es

WebGL doesn't support line thickness, so when I need to highlight a line, I just draw a rectangle around it. But when I zoom the scene, it looks pretty scary.
There are two ways I can see right now:
1) Recalculate the rectangle width from canvas.width into model coordinates on every zoom.
2) Place all zoom-invariant objects under a separate matrix (I use scenejs) and recalculate their positions after each mouse wheel event.
I don't like either of these solutions, so I wonder: is there a good workaround to make items zoom-invariant?
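For option 1 the arithmetic is just a scale factor between screen pixels and model units. A minimal sketch (plain C here, but the same one-liner works in JavaScript), where view_width_model is an assumed name for the width of the region currently visible on screen, expressed in model coordinates:

    /* Convert a desired on-screen thickness in pixels into model units.
     * view_width_model shrinks as you zoom in, so the result has to be
     * recomputed after every zoom change. */
    float pixels_to_model_units(float thickness_px,
                                float view_width_model,
                                float canvas_width_px)
    {
        return thickness_px * (view_width_model / canvas_width_px);
    }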

Another way around that (not the most efficient one, though) might be to use shaders. In our WebGL app, we render the highlighted primitives into a texture and then blur it back onto the screen to add a "selection glow" effect.
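For illustration, a rough sketch of the blur step of such a glow pass, written as a GLSL ES fragment shader held in a C string. The names u_tex, u_texel, and v_uv are placeholders rather than anything from a real engine, and a full implementation would usually run this twice, once horizontally and once vertically:

    /* One axis of a separable blur over the texture that holds the
     * highlighted primitives. u_texel is 1.0 / texture size along the
     * blur axis; the five tap weights below sum to 1.0. */
    static const char *blur_frag_src =
        "precision mediump float;\n"
        "uniform sampler2D u_tex;\n"
        "uniform vec2 u_texel;\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    vec4 c = texture2D(u_tex, v_uv) * 0.4;\n"
        "    c += texture2D(u_tex, v_uv + u_texel) * 0.2;\n"
        "    c += texture2D(u_tex, v_uv - u_texel) * 0.2;\n"
        "    c += texture2D(u_tex, v_uv + 2.0 * u_texel) * 0.1;\n"
        "    c += texture2D(u_tex, v_uv - 2.0 * u_texel) * 0.1;\n"
        "    gl_FragColor = c;\n"
        "}\n";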

Related

Draw the outlines of a 3D model with OpenGL ES

I need to add the classic effect of highlighting a 3D model by stroking its outline, for example with a solid stroke (no transparent gradient).
I found a way to do this here that seems pretty simple and easy to implement: the author plays with the stencil buffer to compute the model's shape, then draws the model as a wireframe, and the thickness of the lines does the job.
Here is my problem: the wireframes. I'm using OpenGL ES 2.0, which means I can't use glPolygonMode to change the render mode to GL_LINE.
I'm stuck here; I can't find any simple alternative. The most relevant solution I've found so far is to implement the wireframe rendering myself, which is clearly not the easiest route. To draw my objects I'm using glDrawElements with GL_TRIANGLES as the primitive; I tried GL_TRIANGLE_STRIP, but the result is definitely not the right one.
Any idea/trick to bypass the lack of glPolygonMode with OpenGL ES? Thanks in advance.
Drawing an outline or border for a model in OpenGL ES 2 is not as straightforward as in the example you mentioned.
Method 1:
The easiest way is to do it in multiple passes.
Step 1 (Shape Pass): Render only the object, drawn in black, using the same camera settings, and fill every other pixel with a different color.
Step 2 (Render Pass): This is the usual render pass, where you actually draw the objects in their real color. Here, every time you shade a fragment, you test the same pixel in the shape-pass image to see whether any of its 8 neighboring pixels differ in color. If all the nearby pixels have the same color, the fragment is not on a border; otherwise, color it to draw the border (see the fragment shader sketch below).
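A possible shape of that edge test, as a GLSL ES fragment shader held in a C string. The uniform names (u_shape, u_texel, u_fill, u_border) are assumptions, and the flat u_fill stands in for whatever shading your real render pass computes:

    /* Compare the shape-pass value under this fragment with its 8 neighbours;
     * if any neighbour differs, this fragment lies on the border. */
    static const char *border_frag_src =
        "precision mediump float;\n"
        "uniform sampler2D u_shape;   /* shape pass: black object, white elsewhere */\n"
        "uniform vec2 u_texel;        /* 1.0 / shape texture size */\n"
        "uniform vec4 u_fill;         /* the object's real color */\n"
        "uniform vec4 u_border;       /* the outline color */\n"
        "varying vec2 v_uv;\n"
        "void main() {\n"
        "    float center = texture2D(u_shape, v_uv).r;\n"
        "    float edge = 0.0;\n"
        "    for (int x = -1; x <= 1; x++) {\n"
        "        for (int y = -1; y <= 1; y++) {\n"
        "            float n = texture2D(u_shape, v_uv + vec2(x, y) * u_texel).r;\n"
        "            if (abs(n - center) > 0.5) edge = 1.0;\n"
        "        }\n"
        "    }\n"
        "    gl_FragColor = mix(u_fill, u_border, edge);\n"
        "}\n";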
Method 2: There are other techniques that can give you a similar effect without a separate shape pass. Draw the same object twice: first slightly scaled up in a single flat color, then at its normal size in its real color.
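A minimal C sketch of Method 2, under the assumption of a single object drawn against the background with its vertex and index buffers already bound. use_outline_shader and use_normal_shader are placeholder hooks, not real API, and depth writes are disabled for the enlarged pass so the shell cannot hide the normal-sized model drawn right after it:

    #include <GLES2/gl2.h>

    /* Placeholder hooks for your own shader setup; not real GL calls. */
    void use_outline_shader(float scale);   /* flat color + uniform scale */
    void use_normal_shader(void);

    void draw_outlined(GLsizei index_count)
    {
        /* Pass 1: the same mesh, scaled up a few percent and flat-colored.
         * Depth writes are off so the enlarged shell cannot occlude the
         * normal-sized model drawn immediately afterwards. */
        glDepthMask(GL_FALSE);
        use_outline_shader(1.04f);
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
        glDepthMask(GL_TRUE);

        /* Pass 2: the mesh at its normal size with its real materials. */
        use_normal_shader();
        glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, 0);
    }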

Isometric Sprites

This might be a stupid question, but I'm stuck and can't get past it. I'm making an isometric game and I have my map built using tiles; I followed this tutorial to build the map: http://www.binpress.com/tutorial/creating-a-city-building-game-with-sfml/137. But now I don't know how to add character sprites. Do I have to add these sprites using tiles as well, or do I just draw the sprites at a position on the screen? Any help would be much appreciated.
As far as I can tell from the engine, just follow the "Textures and Animations" guide and draw the Animation to the screen after you have drawn the tiles. This isn't a complicated engine, so you are only working with 2D sprites being drawn to the screen (the 3D effect is merely a trick of the painter's algorithm; there is no z-axis, from what the tutorial indicates).
Depth is handled by the order of tile rendering.
The same goes for objects, players, etc. Let's assume the XY plane is parallel to the ground and the Z axis is the altitude, and that the grid uses a diamond-shaped layout.
Order of rendering
You have to handle object, player, and item sprites in the same way as tiles (and at the same time), so you should render all cells in a specific order that depends on your grid layout and on how sprites are combined. If your sprites can overwrite already-rendered pixels, render from the most distant tiles toward the camera, with the Z axis increasing in the innermost loop.
So if you have an object, player, or item placed in cell (x,y,z), render it directly after the cell (x,y,z) itself has been rendered, before rendering any other cell (see the loop sketch below).
To speed things up, it's a good idea to store objects and players in your tile map as cell contents, but for that your tiles must be organized appropriately and your map representation must be able to accommodate it.
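A sketch of that draw order as plain C loops (the tutorial itself is C++/SFML); draw_tile and draw_occupants are placeholder names for whatever actually blits the sprites:

    /* Back-to-front draw order for a diamond-layout map: the outer loops walk
     * from the farthest cells toward the camera, the inner loop walks up the
     * altitude (Z). Whatever stands in a cell is drawn right after that cell. */
    void draw_tile(int x, int y, int z);        /* placeholder */
    void draw_occupants(int x, int y, int z);   /* placeholder */

    void draw_map(int map_w, int map_h, int map_d)
    {
        for (int y = 0; y < map_h; y++) {           /* farthest row first    */
            for (int x = 0; x < map_w; x++) {       /* farthest column first */
                for (int z = 0; z < map_d; z++) {   /* ground up             */
                    draw_tile(x, y, z);             /* the cell itself        */
                    draw_occupants(x, y, z);        /* object/player in (x,y,z), if any */
                }
            }
        }
    }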

Drawing transparent sprites in a 3D world in OpenGL(LWJGL)

I need to draw transparent 2D sprites in a 3D world. I tried rendering a quad, texturing it (using slick_util), and rotating it to face the camera, but when there are many of them the transparency doesn't really work. The sprite closest to the camera blocks the ones behind it if it's rendered before them.
I think it's because OpenGL only draws the object that is closest to the viewer without checking the alpha value.
This could be fixed by sorting them from furthest to closest, but I don't know how to do that, and wouldn't I have to use Math.sqrt to get the distance? (I've heard it's slow.)
I wonder if there's an easy way to get transparency in 3D to work correctly (e.g., by enabling something in OpenGL).
Disable depth testing and render transparent geometry back to front (see the sorting sketch below).
Or switch to additive blending and hope that looks OK.
Or use depth peeling.
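A sketch of the back-to-front sort from the first suggestion, shown in C for brevity (in LWJGL/Java the same idea becomes a Comparator). Comparing squared distances gives exactly the same ordering as comparing distances, so no square root is needed; Sprite and the cam_* variables are assumed names, not part of LWJGL or slick_util:

    #include <stdlib.h>

    typedef struct { float x, y, z; /* plus texture, size, ... */ } Sprite;

    static float cam_x, cam_y, cam_z;   /* camera position for this frame */

    static float dist_sq(const Sprite *s)
    {
        float dx = s->x - cam_x, dy = s->y - cam_y, dz = s->z - cam_z;
        return dx * dx + dy * dy + dz * dz;
    }

    /* qsort comparator: larger (farther) squared distance sorts first. */
    static int farther_first(const void *a, const void *b)
    {
        float da = dist_sq((const Sprite *)a);
        float db = dist_sq((const Sprite *)b);
        return (da < db) - (da > db);
    }

    /* Each frame: qsort(sprites, sprite_count, sizeof(Sprite), farther_first);
     * then draw the sprites in that order for the transparent pass. */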

Drawing Outline with OpenGL ES

Every technique I've found or tried for rendering an outline in OpenGL uses some function that is not available in OpenGL ES...
What I could do is set the depth mask to false, draw the object as a 3-pixel-wide line wireframe, re-enable the depth mask, and then draw my object. That doesn't work for me because it outlines only the external silhouette of my object, not the internal details.
The following image shows two outlines: the left one is a correct outline, the right one is what I got.
So, can someone direct me to a technique that works with what is available in OpenGL ES?
Haven't done one of these for a while, but I think you're almost there! What I would recommend is this:
Keep depthMask enabled, but flip your backface culling to only render the "inside" of the object.
Draw the mesh with a shader that pushes all the verts out slightly along their normals and outputs a solid color (your outline color, probably black); a minimal shader sketch follows below. Make sure that you're drawing solid triangles and not just GL_LINES.
Flip the backface culling back to normal again and re-render the mesh like usual.
The result is that the outlines will only be visible around the points on your mesh where the triangles start to turn away from the camera. This gives you some nice, simple outlines around things like noses, chins, lips, and other internal details.
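A minimal GLSL ES vertex shader for that push-along-normals pass, held in a C string. The attribute and uniform names and the 0.02 offset (in object-space units, tuned per model) are assumptions, and the matching fragment shader would simply output the outline color:

    /* Push every vertex slightly outwards along its normal before projecting.
     * Paired with front-face culling, only the parts of this inflated shell
     * that face away from the camera remain visible, forming the outline. */
    static const char *outline_vert_src =
        "uniform mat4 u_mvp;\n"
        "attribute vec3 a_position;\n"
        "attribute vec3 a_normal;\n"
        "void main() {\n"
        "    vec3 inflated = a_position + normalize(a_normal) * 0.02;\n"
        "    gl_Position = u_mvp * vec4(inflated, 1.0);\n"
        "}\n";

    /* Draw order, matching the steps above:
     *     glEnable(GL_CULL_FACE);
     *     glCullFace(GL_FRONT);    draw the mesh with outline_vert_src
     *     glCullFace(GL_BACK);     draw the mesh with its usual shaders
     */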

Limiting the drawing to a rectangle in OpenGLES

I need to limit the drawing of an object to a rectangle. I can't just change the viewport to match the rectangle because the ModelView matrix (which should transform the rectangle, but not its content) may not be the identity. A solution that would work is to draw to an FBO that matches the rectangle and then draw the FBO to the screen, but that seems too slow. Is there a better way to do this?
If I understood you correctly, glScissor should be the function you are looking for. It crops the rendering to a selected sub-rectangle of the viewport without modifying the viewport itself, so objects stay the same size on screen; it just prevents you from drawing any pixels outside the scissor region. If this is not what you want, and you instead want the sub-rectangle to contain the whole scene (and thus your objects to shrink), then changing the viewport is the solution of choice.
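A minimal sketch of that, assuming an ES 2.0 context; draw_scene is a placeholder for your own rendering code, and the rectangle is given in window pixels with the origin at the lower-left corner:

    #include <GLES2/gl2.h>

    /* Clip everything drawn by draw_scene() to a window-space rectangle.
     * This touches neither the viewport nor the ModelView matrix; pixels
     * outside the rectangle are simply discarded. */
    void draw_clipped(GLint x, GLint y, GLsizei w, GLsizei h,
                      void (*draw_scene)(void))
    {
        glEnable(GL_SCISSOR_TEST);
        glScissor(x, y, w, h);
        draw_scene();
        glDisable(GL_SCISSOR_TEST);
    }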
EDIT: If you want the rectangle to be transformable, and especially rotatable (and therefore no longer a rectangle on the screen), then rendering into an FBO and using it as a texture on a quad is probably the best solution. Otherwise you could probably also modify the vertex coordinates after projection, i.e., multiply the transformation matrix of the target rectangle with the projection matrix and use that as the new projection matrix, but I'm not completely sure about that (at least something similar should do it).
