Make OpenGL Polygon Edges Smooth - opengl-es

Due to tilting the textured rectangular polygon, its edges become jagged, while the inner edges (the inner cut parts) stay smooth.
The texture has linear filtering enabled:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
It looks like antialiasing works inside the bounds of the polygon, but not on its edges.
Is it possible to enable antialiasing on the edges, so they look as smooth as the inner edges in the picture?
Using Cocos2d-x v3.3.

Enabling multisampling makes the edges smoother. It is not a perfect solution, but on a Retina display it looks nice. For this example there is almost no visible difference between 2 and 9 samples.
Here is how to enable multisampling in Cocos2d-x:
// In AppController.mm
// Init the CCEAGLView
CCEAGLView *eaglView = [CCEAGLView viewWithFrame: [window bounds]
                        pixelFormat: (NSString*)cocos2d::GLViewImpl::_pixelFormat
                        depthFormat: cocos2d::GLViewImpl::_depthFormat
                        preserveBackbuffer: NO
                        sharegroup: nil
                        multiSampling: YES
                        numberOfSamples: 2];
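Not part of the Cocos2d-x setup above, but for comparison: on platforms that create the GL context through EGL (e.g. the Android NDK), the equivalent is to request a multisampled EGLConfig. A minimal sketch, with the attribute values mirroring the example above:
#include <EGL/egl.h>

EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, NULL, NULL);

const EGLint attribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SAMPLE_BUFFERS, 1,   // ask for a multisampled config
    EGL_SAMPLES, 2,          // 2 samples, as in the Cocos2d-x example
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, attribs, &config, 1, &numConfigs);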

Related

Per-object post-processing

Suppose I need to render the following scene:
Two cubes, one yellow, one red.
The red cube needs to 'glow' with red light; the yellow one does not glow.
The cubes are rotating around their common center of gravity.
The camera is positioned in such a way that when the red, glowing cube is close to the camera it partially obstructs the yellow cube, and when the yellow cube is close to the camera it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
WAY 1
1. Render the yellow cube to the screen.
2. Compute where the red cube will end up on the screen (easy: we have the vertices plus the ModelView matrix), and render it to an off-screen FBO just big enough (leave margins for the glow); make sure to save the depths to a texture.
3. Post-process the FBO and make the glow.
4. Now the hard part: merge the FBO with the screen. We need to take the depths (which we have stored in a texture) into account, so it looks like we need to do the following:
a) render a quad textured with the FBO's color attachment;
b) set up the ModelView matrix appropriately (we need to move the texture by some vector, because for speed we intentionally rendered the red cube to an FBO smaller than the screen in step 2);
c) in the 'merging' fragment shader, write gl_FragDepth from the FBO's depth attachment texture (and not from gl_FragCoord.z).
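For illustration, here is a minimal sketch of the 'merging' fragment shader from step 4c, as a GLSL ES source string; uColorTex, uDepthTex and vTexCoord are hypothetical names, and on ES 2.0 writing depth requires the EXT_frag_depth extension:
static const char *mergeFragSrc =
    "#extension GL_EXT_frag_depth : require\n"
    "precision mediump float;\n"
    "varying vec2 vTexCoord;\n"
    "uniform sampler2D uColorTex; // FBO color attachment\n"
    "uniform sampler2D uDepthTex; // FBO depth attachment\n"
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2D(uColorTex, vTexCoord);\n"
    "    // this depth write is exactly what disables the early z-test\n"
    "    gl_FragDepthEXT = texture2D(uDepthTex, vTexCoord).r;\n"
    "}\n";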
WAY 2
1. Render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1s.
2. Post-process the FBO so that the marked area gets blurred, and blend this in to make the glow.
3. Blit the FBO to the screen.
WAY 1 works, but its major problem is speed, namely step 4c: writing to gl_FragDepth in the fragment shader disables the early z-test.
WAY 2 also kind of works, and it looks like it should be much faster, but it does not give 100% correct results.
The problem is that when the red cube is partially obstructed by the yellow one, pixels of the red cube that are close to the yellow one get 'yellowish' when we blur them, i.e. the closer, yellow cube 'creeps' into the glow.
I guess I could partly remedy this by stopping the blur whenever the pixels being read suddenly decrease in depth (meaning we just jumped from a farther object to a closer one), but that would mean twice as many texture accesses when blurring (in addition to fetching the COLOR texture, we would have to keep fetching the DEPTH texture), plus a conditional statement in the blurring fragment shader. I haven't tried it, but I am not convinced it would be any faster than WAY 1, and even then it wouldn't give 100% correct results: the red pixels close to the border with the yellow cube would be influenced only by the visible part of the red cube, rather than the whole (-blurRadius, +blurRadius) area, so in this region the glow would not be 100% the same.
Would anyone have suggestions on how best to implement such 'per-object post-processing'?
EDIT:
What I am writing is a sort of OpenGL ES library for graphics effects. Clients are able to give it a series of instructions like 'take this Mesh, texture it with this, apply the following matrix transformations to its ModelView matrix, apply the following distortions to its vertices and the following set of fragment effects, and render to the following Framebuffer'.
In my library I already have what I call 'matrix effects' (modifying the ModelView), 'vertex effects' (various vertex distortions), and 'fragment effects' (various per-fragment changes of RGBA).
Now I am trying to add what I call 'post-processing' effects, this GLOW being the first of them. I define the effect and envision it exactly as you described above.
The effects are applied to whole Meshes; thus what I now need is 'per-object post-processing'.
The library is aimed mostly at '2.5D' uses, like GPU-accelerated UIs in mobile apps, 2D-2.5D games (think Candy Crush), etc. I doubt people will ever use it for a real, large 3D game.
So FPS, while always important, is a bit less crucial than usual.
I try really hard to keep the API 'Mesh-local', i.e. the rendering pipeline only knows about the current Mesh it is rendering. My main complaint about the above algorithm is that it has to be aware of the whole set of meshes we are going to render to a given Framebuffer. That being said, if 'mesh-locality' is impossible or cannot be done efficiently with post-processing effects, then I guess I'll have to give it up (and make my tutorials more complicated).
Yesterday I was thinking about this:
# 'Almost-Mesh-local' algorithm for rendering N different Meshes, some of them glowing
Create FBO; attach a texture the size of the screen to COLOR0, and another texture 1/4 the size of the screen to COLOR1.
Enable DEPTH test, clear COLOR/DEPTH
FOREACH( glowing Mesh )
{
use MRT to render it to COLOR0 and COLOR1 in one go
}
Detach COLOR1, attach STENCIL texture
Set up STENCIL so that the test always passes and writes 1s when Depth test passes
Switch off DEPTH/COLOR writes
FOREACH( glowing Mesh )
{
enlarge it by N% (the amount of GLOW needs to be modifiable!)
render to STENCIL // i.e. mark the future 'glow' regions with 1s in stencil
}
Set up STENCIL so that test always passes and writes 0 when Depth test passes
Switch on DEPTH/COLOR writes
FOREACH( not glowing Mesh )
{
render to COLOR0/STENCIL/DEPTH // now COLOR0 contains everything rendered, except for the GLOW. STENCIL marks the unobstructed glowing areas with 1s
}
Blur the COLOR1 texture with BLUR radius 'N'
Merge COLOR0 and COLOR1 to the screen in the following way:
IF ( STENCIL==0 ) take pixel from COLOR0
ELSE blend COLOR0 and COLOR1
END
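A possible sketch of that final merge as a GLSL ES fragment shader, assuming (purely for illustration) that the stencil marks have been resolved into a single-channel mask texture uMask and that uColor0/uColor1 hold the two color attachments:
static const char *mergePassSrc =
    "precision mediump float;\n"
    "varying vec2 vTexCoord;\n"
    "uniform sampler2D uColor0; // full-resolution scene, no glow\n"
    "uniform sampler2D uColor1; // blurred glow texture\n"
    "uniform sampler2D uMask;   // 1 where STENCIL was non-zero\n"
    "void main()\n"
    "{\n"
    "    vec4  base = texture2D(uColor0, vTexCoord);\n"
    "    vec4  glow = texture2D(uColor1, vTexCoord);\n"
    "    float m    = texture2D(uMask,   vTexCoord).r;\n"
    "    // STENCIL==0 -> plain pixel; otherwise blend in the glow\n"
    "    gl_FragColor = mix(base, base + glow * glow.a, m);\n"
    "}\n";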
This is not Mesh-local (we still need to be able to process all 'glowing' Meshes first), although I call it 'almost Mesh-local' because it differentiates between meshes only on the basis of the effects applied to them, not on which one is where or which obstructs which.
It can also have problems when two GLOWING Meshes obstruct each other (the blend does not necessarily happen in the right order), although with the GLOW being half-transparent, I am hoping the final look will be more or less OK.
Looks like it can even be turned into a completely 'Mesh-local' algorithm by doing one giant
FOREACH(Mesh)
{
if( glowing )
{
}
else
{
}
}
although at the cost of having to attach and detach things from the FBO and set the STENCIL up differently at each loop iteration.
A knee-jerk suggestion is to do the hybrid:
1. compute where the red cube will end up on screen, and render it to an off-screen FBO just big enough (or one the same size as the screen, since creating FBOs on the hoof may not be efficient); don't worry about depths, it's only the colours you're after;
2. render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1s;
3. post-process to the screen by using an original pixel from (2) wherever the stencil is 0, or a blurred pixel computed by sampling (1) wherever the stencil is 1.

OpenGL Blending

I want to blend two rects, but only draw the blended area (the area where the rects intersect). How do I do that?
If you don't want to compute the intersection, you can probably use the stencil buffer to achieve that. Read about it here:
http://bluevoid.com/opengl/sig00/advanced00/notes/node118.html
You can draw the two rects with increment on the stencil buffer and then mask only the pixels that have a value >= 2, i.e. the pixels where 2 or more rects were drawn.
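A minimal sketch of that stencil counting in C; drawRect(), rectA and rectB are hypothetical helpers standing in for however you submit the geometry:
glEnable(GL_STENCIL_TEST);
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);

// pass 1: count overdraw into the stencil buffer, no color output
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_INCR);
drawRect(rectA);
drawRect(rectB);

// pass 2: draw only where stencil >= 2, i.e. where both rects overlap
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_LEQUAL, 2, 0xFF); // passes where 2 <= stencil value
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawRect(rectA);
drawRect(rectB);

glDisable(GL_STENCIL_TEST);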
The intersection of two convex rects is always a rect, so why not just compute the intersection and draw only that?
GLES20.glEnable( GLES20.GL_BLEND );
GLES20.glBlendFunc( GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA );
But you should set up the blend function's behavior yourself.
And in the shader I set the alpha channel. You can see the result:
blending post.
the source of android project

How do I add an outline to a 2d concave polygon?

I'm successfully drawing the convex polys which make up the following white concave shape.
The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.
Evidently using...
glScalef(1.1, 1.1, 0.0);
... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.
I just have a nagging feeling I'm missing a more simple way to do this.
Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.
Also, I'm using a parallel projection matrix; that's why glScalef's z is set to 0.0 (it reminds me there is no perspective scaling).
Any ideas? Thanks!
Nope, you won't get anywhere with glScale in this case. Possible options are:
a) construct an extruded polygon from the original one (possibly rounding sharp corners)
b) draw the polygon with GL_LINES and set glLineWidth to your desired outline width (in fact you might want to draw the outline with 2x width first)
The first approach will generate CPU load; the second might slow down rendering significantly, AFAIK.
You can displace your polygon in the 8 directions of the compass.
You can have a look at this link: http://simonschreibt.de/gat/cell-shading/
It's a nice trick, and it might do the job.
Unfortunately there is no simple way to get an outline of consistent width; you just have to do the maths:
For each edge: calculate the normal, scale it to the desired width, and add it to the edge vertices to get a line segment on the new, expanded edge.
Calculate the intersection of the lines through two adjacent segments to find the expanded vertex positions.
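A minimal sketch of that math in C, assuming a counter-clockwise polygon and a hypothetical Vec2 type; real code would also have to guard against near-parallel adjacent edges, where the miter length blows up:
#include <math.h>

typedef struct { float x, y; } Vec2;

// Outward unit normal of edge a->b for a counter-clockwise polygon.
static Vec2 edgeNormal(Vec2 a, Vec2 b)
{
    Vec2 n = { b.y - a.y, a.x - b.x };
    float len = sqrtf(n.x * n.x + n.y * n.y);
    n.x /= len; n.y /= len;
    return n;
}

// Expanded position of vertex v (between prev and next): the intersection
// of the two adjacent edges, each offset outward by 'width'.
static Vec2 expandVertex(Vec2 prev, Vec2 v, Vec2 next, float width)
{
    Vec2 n1 = edgeNormal(prev, v);
    Vec2 n2 = edgeNormal(v, next);
    Vec2 m  = { n1.x + n2.x, n1.y + n2.y };      // miter direction
    float len = sqrtf(m.x * m.x + m.y * m.y);
    m.x /= len; m.y /= len;
    float d = width / (m.x * n1.x + m.y * n1.y); // miter length
    Vec2 out = { v.x + m.x * d, v.y + m.y * d };
    return out;
}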
A distinct answer from those offered to date, posted just for interest: if you're on GLES 2.0 and have access to shaders, you could render the source polygon to a framebuffer with a texture bound as the colour renderbuffer, then do a second pass to write to the screen (so you're using the image of the white polygon as the input texture and running a post-processing fragment shader on every pixel of the screen) with a shader that obeys the following logic for an outline of thickness q:
if the input is white then output a white pixel
if the input pixel is black then sample every pixel within a radius of q from the current pixel; if any one of them is white then output an orange pixel, otherwise output a black pixel
In practice you'd spend an awful lot of time on texture sampling and would probably turn that into the bottleneck. And they'd be mostly dependent reads, which are bad for the pipeline on lots of GPUs, including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.
EDIT: actually, you could speed this up substantially. If your radius is q, have the hardware generate mipmaps for your framebuffer object, and take the first level whose pixels cover at least q by q in the source image. You've then essentially got a set of bins that will be pure black if no part of the polygon fell in that region and pure white if the region was entirely inside the polygon. For each output fragment that might be on the border, you can quite possibly jump straight to a conclusion of definitely-inside or definitely-outside-and-beyond-the-border based on four samples of the mipmap.

I have an OpenGL Tessellated Sphere and I want to cut a cylindrical hole in it

I am working on a piece of software which generates a polygon mesh to represent a sphere, and I want to cut a hole through the sphere. The polygon mesh is only an overlay across the surface of the sphere. I have a good idea of how to determine which polygons will intersect my hole, and I can remove them from my collection, but beyond that point I get a little confused. I was wondering if anyone could help me with the high-level concepts?
Basically, I envision three situations:
1.) The cylindrical hole does not intersect my sphere.
2.) The cylindrical hole partially goes through my sphere.
3.) The cylindrical hole goes all the way through my sphere.
For #1, I can test for this (no polygons removed) and act accordingly (do nothing). For #2 and #3, I am not sure how to re-tessellate my sphere to account for the hole. For #3, I have somewhat of an idea that is basically along the following lines:
a.) Find your entry point (a circle)
b.) Find your exit point (a circle)
c.) Remove the necessary polygons
d.) Make new polygons along the 4* 'sides' of the hole to keep my sphere a manifold.
This extremely simplified algorithm has some 'holes' I would like to fill in. For example, I don't actually want my hole to have 4 sides; it should be a cylinder, or at least a tessellated representation of one. I'm also not sure how to construct these new polygons so that my sphere-with-a-hole remains a tessellated manifold surface.
I have no idea how to approach scenario #2.
Sounds like you want constructive solid geometry.
Carve might do what you want. If you just want run-time rendering, OpenCSG will work.
Well, if you just want to render (visualize) this, then you may not need to change the generated meshes at all. Instead, use the stencil buffer to render your sphere with the holes. For example, I render a disc (a thin cylinder) with circular holes near its outer edge (as a base plate for machinery), in combination with solid and transparent objects around it, so the holes need to be real holes. As I was too lazy to triangulate the shape as generated at runtime, I chose stencil for this.
Create OpenGL context with Stencil buffer
I am using 8 bits for the stencil, but this technique uses just a single bit.
Clear stencil with 0 and turn off Depth&Color masks
This has to be done before rendering your mesh with stencil, so if you have more objects rendered this way, you need to do it before each one of them.
Set stencil with 1 for solid mesh
Clear stencil with 0 for hole meshes
Turn on Depth&Color masks and render solid mesh where stencil is 1
In code it looks like this:
// [stencil]
glEnable(GL_STENCIL_TEST);
// whole stencil=0
glClearStencil(0);
glClear(GL_STENCIL_BUFFER_BIT);
// turn off color,depth
glStencilMask(0xFF);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glDepthMask(GL_FALSE);
// stencil=1 for solid mesh
glStencilFunc(GL_ALWAYS,1,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glCylinderxz(0.0,y,0.0,r,qh);
// stencil=0 for hole meshes
glStencilFunc(GL_ALWAYS,0,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
for (b=0.0, j=0; j<12; j++, b+=db)
{
    x = dev_R*cos(b);
    z = dev_R*sin(b);
    glCylinderxz(x, y-0.1, z, dev_r, qh+0.2);
}
// turn on color,depth
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthMask(GL_TRUE);
// render solid mesh the holes will be created by the stencil test
glStencilFunc(GL_NOTEQUAL,0,0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glColor3f(0.1,0.3,0.4);
glCylinderxz(0.0,y,0.0,r,qh);
glDisable(GL_STENCIL_TEST);
where glCylinderxz(x,y,z,r,h) is just a function that renders a cylinder at (x,y,z) with radius r and the y-axis as its rotation axis. The db is the angle step (2*Pi/12). The radii are r (the big plate radius), dev_r (the hole radius) and dev_R (the circle on which the hole centers lie), and qh is the thickness of the plate.
The result looks like this (each of the 2 plates is rendered this way):
This approach is better suited to thin objects. If your cuts lead to thick enough sides, then you need to render the cut sides as well; otherwise the lighting could be wrong on those parts.
I implemented CSG operations using scalar fields earlier this year. It works well if performance isn't important, that is, if the calculations don't have to be real-time. The problem is that the derivative isn't defined everywhere, so you can forget about computing cheap vertex normals that way; it has to be done as a post-step.
See here for the paper I used (in the first answer), and some screenshots I made:
CSG operations on implicit surfaces with marching cubes
Also, CSG done this way requires the initial mesh to be represented using implicit surfaces. While any geometric mesh can be split into planes, that wouldn't give good results. So spheres have to be represented by a radius and an origin, and cylinders by a radius, an origin and a base height.

iPhone OpenGL ES: Applying a Depth Test on Textures that have transparent pixels for 2D game

Currently, I have blending and depth testing turned on for a 2D game. When I draw my textures, the "upper" texture removes portions of the lower textures where they intersect. Clearly, the transparent pixels of the textures are taken into account by the depth test, and they clear out the colors of the already-drawn lower textures where they intersect. Moreover, alpha blending is rendered incorrectly. Are there any functions that can tell OpenGL not to include transparent pixels in depth testing?
glEnable( GL_ALPHA_TEST );
glAlphaFunc( GL_EQUAL, 1.0f );
This will discard all pixels with an alpha of anything other than fully opaque. These pixels will then not be written to the Z-Buffer. This does, however, defeat various Z-Buffer pipeline optimisations, so it may cause serious slowdowns. Only use it if you really have to.
No, it's not possible. This is true of all hardware depth testing.
GL (full or ES; and D3D too) all share the same model: they paint polygons in the order you specify them. If you draw polygon A before polygon B, and logically polygon A should be in front of polygon B, polygon B won't be painted (courtesy of the depth test).
The solution is to draw your polygons in order from farthest to nearest the current view origin. Happily, in a 2D game this should be just a simple sort (one you probably won't even need to do very often).
In 3D games BSPs are the basic solution to this issue.
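A minimal sketch of that sort in C, with a hypothetical Sprite struct in which larger z means farther from the view origin:
#include <stdlib.h>

typedef struct { float z; /* plus texture, position, ... */ } Sprite;

// Farthest-first: larger z sorts before smaller z.
static int compareFarthestFirst(const void *a, const void *b)
{
    float za = ((const Sprite *)a)->z;
    float zb = ((const Sprite *)b)->z;
    return (za < zb) - (za > zb);
}

// Usage: sort once per frame (or only when the z-order changes),
// then draw the sprites in array order with blending enabled.
// qsort(sprites, spriteCount, sizeof(Sprite), compareFarthestFirst);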
If you're using shaders, you can try disabling blending and discarding the pixels with alpha 0:
// texColor is assumed to be the sampled texture color, e.g.
// vec4 texColor = texture2D(uTexture, vTexCoord);
if (texColor.w == 0.0)
    discard;
What type of blending are you using?
glEnable(GL_BLEND);
glBlendFunc (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Should prevent any fragments with alpha of 0 from writing to the depth buffer.
