How to draw "glowing" line in OpenGL ES - opengl-es

Could you please share some code (in any language) showing how to draw a textured line (one that is smooth or has a glow-like effect; a blue line, four points) consisting of many points, like in the attached image, using OpenGL ES 1.0?
What I tried was texturing a GL_LINE_STRIP with a 16x16 or 1x16 pixel texture, but without any success.

In ES 1.0 you can use render-to-texture creatively to achieve the effect that you want, but it's likely to be costly in terms of fill rate. Gamasutra has an (old) article on how glow was achieved in the Tron 2.0 game; you'll want to pay particular attention to the DirectX 7.0 comments, since that was, like ES 1.0, a fixed-function pipeline. In your case you probably want just to display the Gaussian image rather than mixing it with an original, since the glow is all you're interested in.
My summary of the article is:
1. Render all lines to a texture as normal, solid hairline lines. Call this texture the source texture.
2. Apply a linear horizontal blur: take the source texture you just rendered and draw it, say, five times into another texture, which I'll call the horizontal blur texture. Draw one copy at an offset of x = 0 with opacity 1.0, two further copies (one at x = +1 and one at x = -1) with opacity 0.63, and a final two copies (one at x = +2 and one at x = -2) with opacity 0.17. Use additive blending.
3. Apply a linear vertical blur: take the horizontal blur texture and do essentially the same steps, but with y offsets instead of x offsets.
Those opacity numbers were derived from the 2D Gaussian kernel on this page. Play around with them to affect the falloff towards the outside of your lines.
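As a rough sketch of one such pass in fixed-function ES 1.0 style C (draw_fullscreen_quad() is an assumed helper, not a GL call, that draws a screen-aligned quad textured with the currently bound texture, offset by the given number of pixels):

#include <GLES/gl.h>

/* Horizontal blur pass: the source texture drawn five times with
   additive blending, weighted by the Gaussian-ish opacities above. */
void horizontal_blur_pass(GLuint sourceTexture)
{
    glBindTexture(GL_TEXTURE_2D, sourceTexture);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE);   /* additive blending */

    struct { float dx; float alpha; } taps[5] = {
        {  0.0f, 1.00f },
        { -1.0f, 0.63f }, {  1.0f, 0.63f },
        { -2.0f, 0.17f }, {  2.0f, 0.17f },
    };

    for (int i = 0; i < 5; ++i) {
        /* With the default GL_MODULATE, the colour's alpha scales the texture. */
        glColor4f(1.0f, 1.0f, 1.0f, taps[i].alpha);
        draw_fullscreen_quad(taps[i].dx, 0.0f);
    }
}

The vertical pass would be the same but with (0, dy) offsets, reading from the horizontal blur texture.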
Note the extra costs involved here: you're ostensibly adding ten full-screen textured draws plus some framebuffer swapping. You can probably get away with fewer draws by using multitexturing. A shader approach would likely do the horizontal and vertical steps in a single pass.

Related

Per-object post-processing

Suppose I need to render the following scene:
Two cubes, one yellow, another red.
The red cube needs to 'glow' with red light, the yellow one does not glow.
The cubes are rotating around the common center of gravity.
The camera is positioned in
such a way that when the red, glowing cube is close to the camera,
it partially obstructs the yellow cube, and when the yellow cube is
close to the camera, it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
WAY 1
1. Render the yellow cube to the screen.
2. Compute where the red cube will end up on the screen (easy: we have the vertices plus the ModelView matrix), and render it to an off-screen FBO just big enough (leave margins for the glow); make sure to save the depths to a texture.
3. Post-process the FBO and make the glow.
4. Now the hard part: merge the FBO with the screen. We need to take the depths (which we have stored in a texture) into account, so it looks like we need to do the following:
a) render a quad textured with the FBO's color attachment;
b) set up the ModelView matrix appropriately (we need to move the texture by some vector, because for speed reasons we intentionally rendered the red cube to an FBO smaller than the screen in step 2);
c) in the 'merging' fragment shader, write gl_FragDepth from the FBO's depth attachment texture (and not from gl_FragCoord.z).
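To make 4c concrete, the merging shader would look roughly like this (desktop-style GLSL; on ES 2.0 it needs the EXT_frag_depth extension and gl_FragDepthEXT; names are illustrative):

uniform sampler2D u_glowColor;   // FBO colour attachment, post-glow
uniform sampler2D u_glowDepth;   // depths saved in step 2
varying vec2 v_texCoord;

void main()
{
    gl_FragColor = texture2D(u_glowColor, v_texCoord);
    // Substitute the red cube's stored depth for the quad's own,
    // so the ordinary depth test composites the glow correctly.
    // Writing here is exactly what disables the early z-test.
    gl_FragDepth = texture2D(u_glowDepth, v_texCoord).r;
}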
WAY 2
1. Render both cubes to an off-screen FBO; set up stencil so that the unobstructed part of the red cube is marked with 1s.
2. Post-process the FBO so that the marked area gets blurred, and blend this in to make the glow.
3. Blit the FBO to the screen.
WAY 1 works, but its major problem is speed, namely step 4c: writing to gl_FragDepth in the fragment shader disables the early z-test.
WAY 2 also kind of works, and looks like it should be much faster, but it does not give 100% correct results.
The problem arises when the red cube is partially obstructed by the yellow one: pixels of the red cube that are close to the yellow one get 'yellowish' when we blur them, i.e. the closer, yellow cube 'creeps' into the glow.
I guess I could partly remedy the above problem by stopping the blur when the pixels I am reading suddenly decrease in depth (meaning we just jumped from a farther object to a closer one). But that would mean twice as many texture accesses while blurring (in addition to fetching the COLOR texture, we would need to keep fetching the DEPTH texture), plus a conditional statement in the blurring fragment shader. I haven't tried it, but I am not convinced it would be any faster than WAY 1, and even then it wouldn't give 100% correct results (red pixels close to the border with the yellow cube would only be influenced by the visible part of the red cube, rather than the whole (-blurRadius, +blurRadius) area, so in that area the glow would not be 100% the same).
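For illustration only, roughly what that depth-aware blur might look like as a fragment shader (untested, like the idea itself; names and the fixed radius are made up, and this version skips closer samples rather than stopping the loop outright):

precision mediump float;
uniform sampler2D u_color;
uniform sampler2D u_depth;
uniform vec2 u_texelStep;     // one-texel offset along the blur axis
varying vec2 v_texCoord;

const int RADIUS = 4;

void main()
{
    float centerDepth = texture2D(u_depth, v_texCoord).r;
    vec4 sum = texture2D(u_color, v_texCoord);
    float taps = 1.0;
    for (int i = 1; i <= RADIUS; ++i) {
        vec2 offs = float(i) * u_texelStep;
        // The extra DEPTH fetch and conditional per tap are exactly
        // the costs worried about above.
        if (texture2D(u_depth, v_texCoord + offs).r >= centerDepth) {
            sum += texture2D(u_color, v_texCoord + offs);
            taps += 1.0;
        }
        if (texture2D(u_depth, v_texCoord - offs).r >= centerDepth) {
            sum += texture2D(u_color, v_texCoord - offs);
            taps += 1.0;
        }
    }
    gl_FragColor = sum / taps;
}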
Would anyone have suggestions on how best to implement such 'per-object post-processing'?
EDIT:
What I am writing is a sort of OpenGL ES library for graphics effects. Clients are able to give it a series of instructions like 'take this Mesh, texture it with this, apply the following matrix transformations to its ModelView matrix, apply the following distortions to its vertices and the following set of fragment effects, and render to the following Framebuffer'.
In my library I already have what I call 'matrix effects' (modifying the ModelView), 'vertex effects' (various vertex distortions), and 'fragment effects' (various per-fragment changes of RGBA).
Now I am trying to add what I call 'post-processing' effects, this GLOW being the first of them. I define the effect and envision it exactly as you described above.
The effects are applied to whole Meshes; thus now I need what I call 'per-object post-processing'.
The library is aimed mostly at '2.5D' usage: GPU-accelerated UIs in mobile apps, 2-2.5D games (think Candy Crush), etc. I doubt people will ever actually use it for any real, large 3D game.
So FPS, while always important, is a bit less crucial than usual.
I try really hard to keep the API 'Mesh-local', i.e. the rendering pipeline only knows about the current Mesh it is rendering. My main complaint about the above is that it has to be aware of the whole set of meshes we are going to render to a given Framebuffer. That being said, if 'Mesh-locality' is impossible or cannot be achieved efficiently with post-processing effects, then I guess I'll have to give it up (and make my tutorials more complicated).
Yesterday I was thinking about this:
# 'Almost-Mesh-local' algorithm for rendering N different Meshes, some of them glowing
Create FBO, attach texture the size of the screen to COLOR0, another texture 1/4 the size of the screen to COLOR1.
Enable DEPTH test, clear COLOR/DEPTH
FOREACH( glowing Mesh )
{
    use MRT to render it to COLOR0 and COLOR1 in one go
}
Detach COLOR1, attach STENCIL texture
Set up STENCIL so that the test always passes and writes 1s when Depth test passes
Switch off DEPTH/COLOR writes
FOREACH( glowing Mesh )
{
    enlarge it by N% (amount of GLOW needs to be modifiable!)
    render to STENCIL   // i.e. mark the future 'glow' regions with 1s in stencil
}
Set up STENCIL so that test always passes and writes 0 when Depth test passes
Switch on DEPTH/COLOR writes
FOREACH( not glowing Mesh )
{
    render to COLOR0/STENCIL/DEPTH   // now COLOR0 contains everything rendered, except for the GLOW. STENCIL marks the unobstructed glowing areas with 1s
}
Blur the COLOR1 texture with BLUR radius 'N'
Merge COLOR0 and COLOR1 to the screen in the following way:
IF ( STENCIL==0 ) take pixel from COLOR0
ELSE blend COLOR0 and COLOR1
END
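The merge could be done with the stencil test over two full-screen draws, instead of reading stencil in a shader; roughly (assuming the stencil contents are available while drawing to the target, and draw_fullscreen_quad(tex) is an assumed helper that draws a screen-aligned quad sampling tex):

/* Depth testing off for the full-screen quads. */
glDisable(GL_DEPTH_TEST);

/* Pass 1: base scene from COLOR0, everywhere. */
glDisable(GL_STENCIL_TEST);
glDisable(GL_BLEND);
draw_fullscreen_quad(color0Tex);

/* Pass 2: where STENCIL == 1, blend in the blurred glow from COLOR1. */
glEnable(GL_STENCIL_TEST);
glStencilMask(0);                     /* stencil is read-only here */
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE);    /* additive, half-transparent glow */
draw_fullscreen_quad(color1Tex);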
This is not Mesh-local (we still need to be able to process all 'glowing' Meshes first), although I call it 'almost Mesh-local' because it differentiates between meshes only on the basis of the effects being applied to them, and not on which one is where or which obstructs which.
It can also have problems when two glowing Meshes obstruct each other (the blend does not have to be done in the right order), although with the glow being half-transparent I am hoping the final look will be more or less OK.
Looks like it can even be turned into a completely 'Mesh-local' algorithm by doing one giant
FOREACH( Mesh )
{
    if( glowing )
    {
        // glow-related passes for this Mesh, as above
    }
    else
    {
        // ordinary passes for this Mesh, as above
    }
}
although at the cost of having to attach and detach stuff from the FBO and set up the STENCIL differently at each loop iteration.
A knee-jerk suggestion is to do the hybrid:
1. compute where the red cube will end up on screen, and render it to an off-screen FBO just big enough (or one the same size as the screen, since creating FBOs on the hoof may not be efficient); don't worry about depths, it's only the colours you're after;
2. render both cubes to an off-screen FBO; set up stencil so that the unobstructed part of the red cube is marked with 1s;
3. post-process to the screen by using an original pixel from (2) wherever the stencil is 0, or a blurred pixel computed by sampling (1) wherever the stencil is 1.

performance - drawing many 2d circles in opengl

I am trying to draw large numbers of 2d circles for my 2d games in opengl. They are all the same size and have the same texture. Many of the sprites overlap. What would be the fastest way to do this?
An example of the kind of effect I'm making: http://img805.imageshack.us/img805/6379/circles.png
(It should be noted that the black edges are just due to the expanding explosion of circles; the area was filled in a moment after this screenshot was taken.)
At the moment I am using a pair of textured triangles to make each circle, with transparency around the edges of the texture to make it look like a circle. Using blending for this proved to be very slow (and z-culling was not possible, as the circles were rendered as squares to the depth buffer). Instead I am not using blending, but having my fragment shader discard any fragments with an alpha of 0. This works; however, it means that early z is not possible (as fragments are discarded).
The speed is limited by the large amount of overdraw and the GPU's fill rate. The order the circles are drawn in doesn't really matter (provided it doesn't change between frames, creating flicker), so I have been trying to ensure each pixel on the screen can only be written to once.
I attempted this by using the depth buffer. At the start of each frame it is cleared to 1.0f. When a circle is drawn, it changes that part of the depth buffer to 0.0f. When another circle would normally be drawn there, it is not, because the new circle also has a z of 0.0f, which is not less than the 0.0f currently in the depth buffer. This works and should reduce the number of pixels that have to be drawn. However, strangely, it isn't any faster. I have already asked a question about this behaviour (opengl depth buffer slow when points have same depth), and the suggestion was that z-culling is not accelerated when using equal z values.
Instead I have to give all of my circles separate false z-values, from 0 upwards. When I then render using glDrawArrays and the default of GL_LESS, we correctly get a speed boost from z-culling (although early z is still not possible, as fragments are discarded to make the circles). However, this is not ideal, as I've had to add large amounts of z-related code to a 2D game which simply shouldn't require it (and not passing z values at all would be faster). It is, however, the fastest way I have found so far.
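Roughly, the arrangement is this (a sketch; draw_circle_sprite() stands in for however the sprites are actually submitted):

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);                  /* the default */
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

/* Give every circle its own false depth so GL_LESS culls overdraw:
   later circles fail the depth test wherever an earlier one drew. */
float z = 0.0f;
const float zStep = 1.0f / (float)circleCount;   /* stay within [0, 1) */
for (int i = 0; i < circleCount; ++i) {
    draw_circle_sprite(circles[i].x, circles[i].y, z);
    z += zStep;
}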
Finally, I have tried using the stencil buffer. Here I used
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
where the stencil buffer is reset to 0 each frame. The idea is that after a pixel is drawn to for the first time, it becomes non-zero in the stencil buffer; that pixel should then never be drawn to again, reducing the amount of overdraw. However, this has proved no faster than just drawing everything without the stencil buffer or a depth buffer.
What is the fastest way people have found to do what I am trying?
The fundamental problem is that you're fill limited: the GPU is unable to shade all the fragments you ask it to draw in the time you're expecting. The reason your depth-buffering trick isn't effective is that the most time-consuming part of processing is shading the fragments (either through your own fragment shader, or through the fixed-function shading engine), which occurs before the depth test. The same issue occurs with stencil: shading the pixel occurs before stenciling.
There are a few things that may help, but they depend on your hardware:
Render your sprites from front to back with depth buffering. Modern GPUs often try to determine whether a collection of fragments will be visible before sending them off to be shaded. Roughly speaking, the depth buffer (or a representation of it) is checked to see if the fragment that's about to be shaded will be visible, and if not, its processing is terminated at that point. This should help reduce the number of pixels that need to be written to the framebuffer.
Use a fragment shader that immediately checks your texel's alpha value, and discards the fragment before any additional processing, as in:
precision mediump float;   // required in ES 2.0 fragment shaders

varying vec2 texCoord;
uniform sampler2D tex;
void main()
{
    vec4 texel = texture2D( tex, texCoord );
    if ( texel.a < 0.01 ) discard;
    // rest of your color computations
}
(you can also use alpha test in fixed-function fragment processing, but it's impossible to say if the test will be applied before the completion of fragment shading).

Non-predefined multiple light sources in OpenGL ES 2.0

There is a great article about multiple light sources in GLSL
http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Multiple_Lights
But the light0 and light1 parameters are hard-coded in the shader. What if I must draw flare-gun shots, e.g. where every flare has its own position and colour and must illuminate its surroundings? How do we manage the other objects' shaders to deal with the unknown positions and colours of the flares (there is, of course, a limit to the maximum number of flares on screen)? For example, if there is a maximum of 8 flares on screen, must I pass 8*2 uniforms even if the flares don't exist at the time?
Or imagine you are making a level editor where the user can place lamps: how will the other objects 'know' about the new light source and render correctly once a new lamp has been added?
I think there must be a clever solution, but I can't find one.
Lighting equations usually rely on additive colour. So the output is the colour of light one plus the colour of light two plus the colour of light three, etc.
One of the in-framebuffer blending modes offered by OpenGL is additive blending. So the colour output of anything new that you draw will be added to whatever is already in the buffer.
The most naive solution is therefore to write your shader to do exactly one light. If you have multiple lights, draw the scene that many times, each time with a different nominated light. It's an example of multipass rendering.
Better solutions involve writing shaders to do two, four, eight or whatever lights at once, doing, say, 15 lights as an 8-light draw then a 4-light draw then a 2-light draw then a 1-light draw, and including only geometry within reach of each light when you do that pass. Which tends to mean finding intelligent ways to group lights by locality.
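A sketch of the naive multipass version, assuming a program that evaluates exactly one light and some illustrative helpers (use_single_light_program, set_light_uniforms, draw_scene):

use_single_light_program();

/* First pass lays down the base image and the depth buffer. */
glDisable(GL_BLEND);
glDepthFunc(GL_LESS);
set_light_uniforms(&lights[0]);
draw_scene();

/* Each further pass adds one more light's contribution. */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   /* additive: dst + src */
glDepthFunc(GL_LEQUAL);        /* let identical depths pass again */
for (int i = 1; i < lightCount; ++i) {
    set_light_uniforms(&lights[i]);
    draw_scene();
}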
EDIT: with a little more thought, I should add that there's another option in deferred shading, though it's not completely useful on most GL ES devices at the moment due to the limited options for output buffers.
Suppose theoretically you could render your geometry exactly once and store whatever you wanted per pixel. So you wouldn't just output a colour, you'd output, say, a position in 3d space, a normal, a diffuse colour, a specular colour and a specular exponent. Those would then all be in a per-pixel buffer.
You could then render each light by (i) working out the maximum possible space it can occupy when projected onto the screen (so, a 2d rectangle that relates directly to pixels); and (ii) rendering the light as a single quad of that size, for each pixel reading the relevant values from the buffer you just set up and outputting an appropriately lit colour.
Then you'd do all the actual geometry in your scene exactly once, and each additional light would cost at most a single, full-screen quad.
In practice you can't really do that, because the output buffers you tend to be able to use in ES provide too little storage. But what you can usually do is render to a 32-bit colour buffer with an attached depth buffer. So you can just store depth in the depth buffer and work out the world (x, y, z) from that plus the [uniform] position of the camera in the light shader. You could store 8-bit versions of the normal's x and y in the colour buffer, spending 16 bits, and work out the normal's z because you know the normal is always of unit length. Then, to pick a concrete example at random, maybe you could store a 16-bit version of the diffuse colour in the remaining space, possibly in YCrCb with extra storage for Y.
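For instance, decoding such a packed normal in the light shader might look like this (a sketch; it assumes the stored normals face the camera, so z is non-negative):

// enc.xy are the 8-bit normal components read from the colour
// buffer, already in [0, 1].
vec3 decode_normal(vec2 enc)
{
    vec2 xy = enc * 2.0 - 1.0;   // back to [-1, 1]
    // Unit length lets us reconstruct z from x and y.
    float z = sqrt(max(0.0, 1.0 - dot(xy, xy)));
    return vec3(xy, z);
}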
The main disadvantage is that hardware antialiasing then doesn't work, due to much the same sort of concerns as with transparency and depth buffers. But if you get to the point where you save dramatically on lighting, it might still make sense to do manual antialiasing by rendering a large version of the scene and then scaling it down in a final pass.

Drawing an outline around the non-transparent part of a texture

Tinkering with the feature set for a new game, I'm considering including a PvP gameplay mode. Nothing like NI after kicking the AI to smithereens :). iSomething only; willing to restrict to modern devices.
One option I would consider for differentiating each player's characters on the map is to add, on the fly, a 2-point outline of a different colour to the characters of each player (other options exist, but they have resource-weight considerations).
I have not found on here (nor elsewhere, for that matter) any very useful answers to this kind of requirement, nor am I anything like a GL expert. If any one of you could point me in the direction of some tutorials, I would greatly appreciate it. TIA
I wasn't recommending that you necessarily put the outlines into separate textures. What I was imagining was that you have a sprite with a region that is all alpha = 1.0, surrounded by a transparent region of alpha = 0.0.
One idea could be to draw a ring a couple of pixels wide around the opaque region, with something like alpha = 0.5.
If you then want to draw your sprites without a border, you can just alpha test for alpha > 0.75, and the border will not appear. If you want to draw a border, you can alpha test for alpha > 0.25, and use a fragment shader to replace all pixels with 0.4 < alpha < 0.6 with a colored border of your choice.
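A sketch of the bordered variant as a fragment shader (uniform names are illustrative):

precision mediump float;
uniform sampler2D u_tex;
uniform vec4 u_borderColor;   // per-player outline colour
varying vec2 v_texCoord;

void main()
{
    vec4 texel = texture2D(u_tex, v_texCoord);
    if (texel.a < 0.25) discard;              // transparent surround
    if (texel.a > 0.4 && texel.a < 0.6)
        gl_FragColor = u_borderColor;         // the alpha = 0.5 ring
    else
        gl_FragColor = texel;                 // opaque interior
}

For the borderless version you would just test against 0.75 instead and skip the replacement.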
This becomes more difficult if your images use partial transparency, though in that case you could maybe block off the range from 0.0 to 0.1 for alpha metadata like the border.
This would not require any additional textures to be used or increase the size of any of the existing resources.

How do I add an outline to a 2d concave polygon?

I'm successfully drawing the convex polys which make up the following white concave shape.
The orange color is my attempt to add a uniform outline around the white shape. As you can see it's not so uniform. On some edges the orange doesn't show at all.
Evidently using...
glScalef(1.1, 1.1, 0.0);
... to draw a slightly larger orange shape before I drew the white shape wasn't the way to go.
I just have a nagging feeling I'm missing a more simple way to do this.
Note that the white part is going to be mapped with a texture which has areas of transparency, so the orange part needs to be behind the white shapes too, not just surrounding them.
Also, I'm using a parallel projection matrix; that's why glScalef's z is set to 0.0, which reminds me there is no perspective scaling.
Any ideas? Thanks!
Nope, you won't be going anywhere with glScale in this case. Possible options are:
a) construct an extruded polygon from the original one (possibly rounding sharp corners);
b) draw the polygon with GL_LINES, with glLineWidth set to your desired outline width (in fact you might want to draw the outline with 2x the width first).
The first approach will generate CPU load; the second might slow down rendering significantly, AFAIK.
You can displace your polygon in the 8 directions of the compass.
You can have a look at this link: http://simonschreibt.de/gat/cell-shading/
It's a nice trick, and it might do the job.
Unfortunately there is no simple way to get an outline of consistent width; you just have to do the maths (see the sketch after this list):
1. For each edge: calculate the normal, scale it to the desired width, and add it to the edge vertices to get a line segment on the new, expanded edge.
2. Calculate the intersection of the lines through two adjacent segments to find the expanded vertex positions.
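A sketch of those two steps in C, assuming a counter-clockwise polygon and ignoring degenerate (near-parallel) adjacent edges, where the expanded vertex is the intersection (miter point) of the two offset edges:

#include <math.h>

typedef struct { float x, y; } Vec2;

/* Expand polygon v[0..n-1] outward by `width`, writing the new
   vertex positions to out[0..n-1]. */
void expand_polygon(const Vec2 *v, int n, float width, Vec2 *out)
{
    for (int i = 0; i < n; ++i) {
        Vec2 p0 = v[(i + n - 1) % n], p1 = v[i], p2 = v[(i + 1) % n];

        /* Unit outward normals of the two edges meeting at p1
           (counter-clockwise winding assumed). */
        float e0x = p1.x - p0.x, e0y = p1.y - p0.y;
        float e1x = p2.x - p1.x, e1y = p2.y - p1.y;
        float l0 = sqrtf(e0x * e0x + e0y * e0y);
        float l1 = sqrtf(e1x * e1x + e1y * e1y);
        float n0x = e0y / l0, n0y = -e0x / l0;
        float n1x = e1y / l1, n1y = -e1x / l1;

        /* Intersecting the two offset edges gives the miter point:
           p1 + (n0 + n1) * width / (1 + n0.n1). */
        float scale = width / (1.0f + n0x * n1x + n0y * n1y);
        out[i].x = p1.x + (n0x + n1x) * scale;
        out[i].y = p1.y + (n0y + n1y) * scale;
    }
}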
A distinct answer from those offered to date, posted just for interest: if you're in GLES 2.0 and have access to shaders, then you could render the source polygon to a framebuffer with a texture bound as the colour renderbuffer, then do a second pass to write to the screen (so you're using the image of the white polygon as the input texture and running a post-processing pixel shader on every pixel on the screen) with a shader that obeys the following logic for an outline of thickness q:
if the input is white then output a white pixel
if the input pixel is black, then sample every pixel within a radius of q of the current pixel; if any one of them is white, output an orange pixel; otherwise output a black pixel
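Sketched as a brute-force ES 2.0 fragment shader (illustrative names; the thickness q becomes a compile-time constant because ES 2.0 loop bounds must be constant):

precision mediump float;
uniform sampler2D u_mask;      // the white-on-black polygon image
uniform vec2 u_texelSize;      // 1.0 / texture dimensions
varying vec2 v_texCoord;

const int Q = 3;               // outline thickness, in texels

void main()
{
    if (texture2D(u_mask, v_texCoord).r > 0.5) {
        gl_FragColor = vec4(1.0);                     // inside: white
        return;
    }
    for (int dy = -Q; dy <= Q; ++dy) {
        for (int dx = -Q; dx <= Q; ++dx) {
            vec2 offs = vec2(float(dx), float(dy)) * u_texelSize;
            if (texture2D(u_mask, v_texCoord + offs).r > 0.5) {
                gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);  // border: orange
                return;
            }
        }
    }
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);          // outside: black
}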
In practice you'd spend an awful lot on texture sampling and probably turn that into the bottleneck. And they'd be mostly dependent reads, which are bad for the pipeline on lots of GPUs, including the PowerVR SGX that powers the overwhelming majority of OpenGL ES 2.0 devices.
EDIT: actually, you could speed this up substantially. If your radius is q, have the hardware generate mipmaps for your framebuffer object, and take the first level at which the output pixels are at least q by q in the source image. You've then essentially got a set of bins that will be pure black if no bits of the polygon were in that region, and pure white if that area was entirely internal to the polygon. For each output fragment that you're considering might be on the border, you can quite possibly jump straight to a conclusion of definitely inside, or definitely outside and beyond the border, based on four samples of the mipmap.
