Weird flashing when scrolling tex coords of procedurally generated texture - opengl-es

The procedurally generated texture appears fine until I start scrolling the texture coords using GL_REPEAT. If I scroll a normal image that I've uploaded, it scrolls fine, but the procedural one does this weird, periodic flashing. I'm trying to generate a starfield, so it's just white dots on a transparent background, and with this problem the stars fade to almost black, then back to white, and so on. Also, if I generate a substantial number of stars, a few of them don't exhibit this behavior and look normal.
Here I just set up the texture for OpenGL and establish the pixels array:
glBindTexture(GL_TEXTURE_2D,tex_id);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLubyte *pixels=new GLubyte[(screen_width)*(screen_height)*4];
Here I set up my random number generators and fill in the pixel data for the number of stars I want:
irngX=new IRNG(0,screen_width-1);
irngY=new IRNG(0,screen_height-1);
for (int i=0; i<count; ++i){
int x=irngX->get();
int y=irngY->get();
int pos=(y*screen_width*4)+(x*4);
pixels[pos++]=(GLubyte)255;
pixels[pos++]=(GLubyte)255;
pixels[pos++]=(GLubyte)255;
pixels[pos++]=(GLubyte)255;
}
Here I upload the pixels to OpenGL. The pixel store function seems to have no effect on anything (unlike desktop OpenGL), and I don't really understand its usage anyway; I've tried it both on and off:
glPixelStorei(GL_UNPACK_ALIGNMENT,1); // row byte alignment for the upload; with tightly packed 4-byte RGBA texels every row is already 4-byte aligned, so this makes no visible difference here
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,screen_width,screen_height,0,
GL_RGBA,GL_UNSIGNED_BYTE,pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
Here's my tex coord scroller:
tex_scroller+=0.001f;
glUniform1f(tex_scroller_loc,tex_scroller);
and in the shader:
vtex.t+=tex_scroller;
I hope that's all the relevant code and that I've made my problem understandable. Please let me know if you need a better description.
P.S. I apologize if my code is not formatted properly. I tried.

The problem is probably texture sampling. Your stars are just a single texel in size, and when you scroll, the grid of screen pixels is no longer perfectly aligned with the grid of texels, so texture filtering (GL_LINEAR) occurs, which affects the brightness. This is a particular problem because your transparent texels are presumably black. Some suggestions:
Try changing GL_LINEAR to GL_NEAREST in your code above. This should fix any fading, but might replace it with an on/off flickering.
Try making the transparent texels white; that will improve things.
Try changing tex_scroller += 0.001f; to a whole-texel step, e.g. tex_scroller += 1.0f/screen_height; (one texel along t, since the scroller is added to the t coordinate), to see if you can keep the texel grid and pixel grid aligned (see the sketch below, which combines this with the first suggestion).
Try making your stars multiple texels in size instead of a single texel.
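For reference, here is a minimal sketch of the first and third suggestions combined, reusing the names from the question (tex_scroller, tex_scroller_loc, screen_height); it is an assumption about the surrounding setup, not the asker's actual code:
// point sampling: a star always maps to exactly one texel, so no fading
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// scroll in whole-texel steps so the texel grid stays aligned with the pixel
// grid (the scroller is added to .t, so one texel is 1/texture-height)
float texel_step = 1.0f / (float)screen_height;
tex_scroller += texel_step;                      // one texel per frame
if (tex_scroller >= 1.0f) tex_scroller -= 1.0f;  // optional: keep the value small
glUniform1f(tex_scroller_loc, tex_scroller);
If the stars should be bigger than one pixel, the same idea applies to the last suggestion: write a small 2x2 or 3x3 block of white texels per star instead of a single texel.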

Related

OpenGL ES 3.0 - Cannot render to a texture larger than the screen size

I have made an image below to illustrate my problem. I render my scene to an offscreen framebuffer with a texture the size of the screen, and then render that texture to a screen-filling quad. This produces case 1 in the image. I then run the exact same program, but with a texture size, say, 1.5 times greater (enough to contain the entire smiley), and afterwards render it once more to the screen-filling quad. I then get result 3, but I expected result 2.
I remember to change the viewport according to the new texture size before rendering to the texture, and to reset the viewport before drawing the quad. I do NOT understand what I am doing wrong.
problem shown as an image!
To summarize, this is the general flow (too much code to post it all):
Create MSAAframebuffer and ResolveFramebuffer (Resolve contains the texture).
Set glViewport(0, 0, Width*1.5, Height*1.5)
Bind MSAAframebuffer and render my scene (the smiley).
Blit the MSAAframebuffer into the ResolveFramebuffer
Set glViewport(0, 0, Width, Height), bind the texture and render my quad.
Note that all the MSAA is working perfectly fine. Also both buffers have the same dimensions, so when I blit it is simply: glBlitFramebuffer(0, 0, Width*1.5, Height*1.5, 0, 0, Width*1.5, Height*1.5, ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest)
Hope someone has a good idea. I might get fired if not :D
I found that I had actually used an AABB somewhere else in the code to determine what to render, and this AABB was computed from the small viewport's size. Stupid mistake.
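For reference, the flow described in the question looks roughly like this in plain OpenGL ES 3.0 calls (msaaFbo, resolveFbo, resolveTex, texW/texH, screenW/screenH and the two draw helpers are placeholder names, not the asker's code):
// 1. render the scene into the oversized multisampled framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glViewport(0, 0, texW, texH);                 // texW/texH = 1.5 * screen size
drawScene();                                  // hypothetical scene-drawing call
// 2. resolve the MSAA buffer into the texture-backed framebuffer of the same size
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, texW, texH, 0, 0, texW, texH,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);
// 3. draw the resolved texture onto a screen-filling quad
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, screenW, screenH);
glBindTexture(GL_TEXTURE_2D, resolveTex);
drawFullScreenQuad();                         // hypothetical quad-drawing call
With the viewport and blit handled this way, the GL side is not what clips the image, which is consistent with the fix above: the clipping came from application logic (the AABB) rather than from the framebuffer setup.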

Per-object post-processing

Suppose I need to render the following scene:
Two cubes, one yellow, another red.
The red cube needs to 'glow' with red light; the yellow one does not glow.
The cubes are rotating around the common center of gravity.
The camera is positioned in such a way that when the red, glowing cube is close to the camera, it partially obstructs the yellow cube, and when the yellow cube is close to the camera, it partially obstructs the red, glowing one.
If not for the glow, the scene would be trivial to render. With the glow, I can see at least 2 ways of rendering it:
WAY 1
1. Render the yellow cube to the screen.
2. Compute where the red cube will end up on the screen (easy, we have the vertices + the ModelView matrix), so render it to an off-screen FBO just big enough (leave margins for the glow); make sure to save the depths to a texture.
3. Post-process the FBO and make the glow.
4. Now the hard part: merge the FBO with the screen. We need to take the depths into account (which we have stored in a texture), so it looks like we need to do the following:
a) render a quad textured with the FBO's color attachment;
b) set up the ModelView matrix appropriately (we need to move the texture by some vector, because in step 2 we intentionally rendered the red cube to an FBO smaller than the screen, for speed reasons!);
c) in the 'merging' fragment shader, write gl_FragDepth from the FBO's depth attachment texture (and not from FragCoord.z).
WAY 2
Render both cubes to an off-screen FBO; set up the stencil so that the unobstructed part of the red cube is marked with 1s.
Post-process the FBO so that the marked area gets blurred and blend this to make the glow
Blit the FBO to the screen
WAY 1 works, but the major problem with it is speed, namely step 4c: writing to gl_FragDepth in the fragment shader disables the early z-test.
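To make that cost concrete, the merge pass of step 4c ends up with a fragment shader roughly like the sketch below (GLSL ES 3.00 assumed; uGlowColor and uGlowDepth are hypothetical names for the FBO's color and depth attachments); the single gl_FragDepth write is what forces the driver to give up early-Z for this pass:
static const char* mergeFragmentShader = R"(#version 300 es
precision mediump float;
uniform sampler2D uGlowColor;  // FBO color attachment, after the glow blur
uniform sampler2D uGlowDepth;  // depth saved to a texture in step 2
in vec2 vTexCoord;
out vec4 fragColor;
void main() {
    fragColor    = texture(uGlowColor, vTexCoord);
    // writing depth from a texture instead of gl_FragCoord.z
    // is exactly the part that disables the early z-test
    gl_FragDepth = texture(uGlowDepth, vTexCoord).r;
}
)";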
WAY 2 also kind of works, and looks like it should be much faster, but it does not give 100% correct results.
The problem is that when the red cube is partially obstructed by the yellow one, pixels of the red cube that are close to the yellow one get 'yellowish' when we blur them, i.e. the closer, yellow cube 'creeps' into the glow.
I guess I could remedy the above problem by stopping the blur when the pixels I am reading suddenly decrease in depth (meaning we just jumped from a further object to a closer one), but that would mean twice as many texture accesses when blurring (in addition to fetching the COLOR texture we would need to keep fetching the DEPTH texture), plus a conditional statement in the blurring fragment shader. I haven't tried it, but I am not convinced it would be any faster than WAY 1, and even that wouldn't give 100% correct results: the red pixels close to the border with the yellow cube would only be influenced by the visible part of the red cube, rather than by the whole (-blurRadius, +blurRadius) area, so in this region the glow would not be 100% the same.
Would anyone have suggestions on how best to implement such 'per-object post-processing'?
EDIT:
What I am writing is a sort of OpenGL ES library for graphics effects. Clients are able to give it a series of instructions like 'take this Mesh, texture it with this, apply the following matrix transformations to its ModelView matrix, apply the following distortions to its vertices and the following set of fragment effects, and render to the following Framebuffer'.
In my library, I already have what I call 'matrix effects' (modifying the ModelView), 'vertex effects' (various vertex distortions) and 'fragment effects' (various changes of RGBA per fragment).
Now I am trying to add what I call 'post-processing' effects, this 'GLOW' being the first of them. I define the effect and envision it exactly as you described above.
The effects are applied to whole Meshes; thus I now need what I call 'per-object post-processing'.
The library is aimed mostly at '2.5D' usages, like GPU-accelerated UIs in mobile apps, 2D-2.5D games (think Candy Crush), etc. I doubt people will ever actually use it for any real, large 3D game.
So FPS, while always important, is a bit less crucial than usual.
I try really hard to keep the API 'Mesh-local', i.e. the rendering pipeline only knows about the current Mesh it is rendering. The main complaint about the above is that it has to be aware of the whole set of Meshes we are going to render to a given Framebuffer. That being said, if 'Mesh-locality' is impossible or cannot be done efficiently with post-processing effects, then I guess I'll have to give it up (and make my tutorials more complicated).
Yesterday I was thinking about this:
# 'Almost-Mesh-local' algorithm for rendering N different Meshes, some of them glowing
Create FBO, attach texture the size of the screen to COLOR0, another texture 1/4 the size of the screen to COLOR1.
Enable DEPTH test, clear COLOR/DEPTH
FOREACH( glowing Mesh )
{
use MRT to render it to COLOR0 and COLOR1 in one go
}
Detach COLOR1, attach STENCIL texture
Set up STENCIL so that the test always passes and writes 1s when Depth test passes
Switch off DEPTH/COLOR writes
FOREACH( glowing Mesh )
{
enlarge it by N% (amount of GLOW needs to be modifiable!)
render to STENCIL // i.e. mark the future 'glow' regions with 1s in stencil
}
Set up STENCIL so that test always passes and writes 0 when Depth test passes
Switch on DEPTH/COLOR writes
FOREACH( not glowing Mesh )
{
render to COLOR0/STENCIL/DEPTH // now COLOR0 contains everything rendered, except for the GLOW. STENCIL marks the unobstructed glowing areas with 1s
}
Blur the COLOR1 texture with BLUR radius 'N'
Merge COLOR0 and COLOR1 to the screen in the following way:
IF ( STENCIL==0 ) take pixel from COLOR0
ELSE blend COLOR0 and COLOR1
END
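As a sketch of what the two stencil configurations above could look like in GL ES calls (my reading of the intended state, not tested code):
// mark the future glow regions: stencil test always passes and writes 1,
// but only where the depth test passes; depth/color writes are off
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  // REPLACE happens only on depth pass
glDepthMask(GL_FALSE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
// ... render the enlarged glowing Meshes ...
// not-glowing Meshes: same idea but write 0 where the depth test passes,
// with depth and color writes switched back on
glStencilFunc(GL_ALWAYS, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glDepthMask(GL_TRUE);
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
// ... render the not-glowing Meshes ...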
This is not Mesh-local (we still need to be able to process all 'glowing' Meshes first), although I call it 'almost Mesh-local' because it differentiates between Meshes only on the basis of the effects being applied to them, and not on which one is where or which obstructs which.
It can also have problems when two GLOWING Meshes obstruct each other (the blend does not have to be done in the right order), although with the GLOW being half-transparent, I am hoping the final look will be more or less OK.
Looks like it can even be turned into a completely 'Mesh-local' algorithm by doing one giant
FOREACH(Mesh)
{
if( glowing )
{
}
else
{
}
}
although at the cost of having to attach and detach stuff from the FBO and set the STENCIL up differently at each loop iteration.
A knee-jerk suggestion is to do the hybrid:
1. compute where the red cube will end up on screen, so render it to an off-screen FBO just big enough (or one the same size as the screen, since creating FBOs on the hoof may not be efficient); don't worry about depths, it's only the colours you're after;
2. render both cubes to an off-screen FBO; set up stencil so that the unobstructed part of the red cube is marked with 1s;
3. post-process to the screen by using an original pixel from (2) wherever the stencil is 0, or a blurred pixel computed by sampling (1) wherever the stencil is 1.

WebGL - After rendering the mesh correctly, some triangles disappear

My problem is the following. I have a canvas in which I am drawing a piece using WebGL.
When it renders, it is fine.
But then, two seconds later or so, without moving the camera or anything, some of the triangles disappear.
And after moving the camera or something, the triangles that are gone stay the same (I have read that in some cases this is due to the buffer and the distance to the object, so by zooming in or out the triangles that are gone can change).
What could be the problem?
I am applying both a color and a texture to each element in order to draw black lines around each "square" (my texture is a square with a black border and white inside). That means the final color is computed in the fragment shader by multiplying the color by the texture. It also means that some of the nodes are duplicated, or more than duplicated (to give a TextureVertex attribute to a node, I need a different node for each element it belongs to). It is important to notice that when I create a mesh with a smaller number of nodes, they do not disappear. Anyway, I have seen very complex WebGL examples on the net, and I may have just 1000 nodes, so I don't think it can be a problem with my graphics hardware.
What do you think could be the problem? How would you solve it? If you need more info, just let me know. I didn't include code because it seems to render OK at the beginning, and furthermore I only have this problem with "big" meshes.
Thanks for the comment. Please find both images here:
First draw
A few seconds later.
EDITED: I'm going to give some more details in case this helps to find the problem. I will give you the information for one of the squares (the rest of the squares follow the same scheme). Notice that they are defined in the code-behind as public variables and then I pass them to the HTML script:
Nodes for vertex buffer:
serverSideInitialCoordinates = {
-1.0,-1.0,0.0,
1.0,-1.0,0.0,
1.0,1.0,0.0,
-1.0,1.0,0.0,
0.0,-1.0,0.0,
1.0,0.0,0.0,
0.0,1.0,0.0,
-1.0,0.0,0.0,
0.0,0.0,0.0,
};
Connectivity to form triangles:
serverSideConnectivity = {
0,4,8,
0,8,7,
1,5,8,
1,8,4,
2,6,8,
2,8,5,
3,7,8,
3,8,6
};
Colors: not relevant.
TextureVertex = {
0.0,0.0,
1.0,0.0,
1.0,1.0,
0.0,1.0,
0.5,0.0,
1.0,0.5,
0.5,1.0,
0.0,0.5,
0.5,0.5
};
As I mentioned, I have an image which is white with just a few black pixels around the borders. So in the fragment shader I have something similar to this:
gl_FragColor = texture2D(u_texture, v_texcoord) * vColor;
Then I have a function that loads the image and gets the texture.
In the InitBuffers function I create the buffers and assign the vertexPosition, the colors and the connectivity of the triangles to them.
Finally, in the Draw function I bind the buffers again (vertexPosition, color (bound as the color attribute), texture (bound as the TextureVertex attribute), and connectivity), then set the matrix uniform and draw. I don't think the problem is here, because it works fine for smaller meshes, but I still don't know why it doesn't for larger ones. I thought maybe Firefox's performance is worse than other browsers', but then I ran difficult WebGL models I found on the web in Firefox and they work fine, with no triangles missing. If I draw the same objects without the texture (just colors), it works fine and no triangles are missing. Do you think it might take a lot of effort for the shader to get the color every time by multiplying both things? Can you think of another way?
My idea was to just draw black lines between some nodes instead of using a complete texture, but I can't get it working: either I draw the triangles or I draw the lines, but it doesn't let me draw both at the same time. If I put in code for both, only the last "elements" are drawn.
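On that last point, one pattern that usually works is to keep two index buffers over the same vertex data and issue two draw calls per frame. In OpenGL ES terms it looks roughly like the sketch below (the buffer and count names are hypothetical; the WebGL calls have the same names as methods on the context object):
// draw the filled triangles first, using the triangle index buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
glDrawElements(GL_TRIANGLES, triangleIndexCount, GL_UNSIGNED_SHORT, 0);
// then, without clearing in between, draw the edges with a second index buffer
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, lineIndexBuffer);
glDrawElements(GL_LINES, lineIndexCount, GL_UNSIGNED_SHORT, 0);
If only the last set of elements shows up, the usual suspects are clearing the canvas between the two draws, or re-uploading one index buffer over the other instead of keeping two separate buffer objects.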

LibGDX - Sprites to texture using FBO

I am working on a simple painting app using LibGDX, and I am having trouble getting it to "paint" properly with the setup I am using. The way I am trying to do this is to draw with sprites, and to add these individual sprites into a background texture, using LibGDX's FBO commands, when appropriate.
The problem I am having relates to blending: when the sprites are added to the texture I am building, any transparent pixels of the sprite that lie on top of pixels that were drawn previously are brightened substantially, which obviously doesn't look very good. The following is what the result looks like, using a circle with a green-to-red gradient as the "brush". The top row is now part of the background texture, while the bottom one is still in its purely sprite-drawn form.
http://i238.photobucket.com/albums/ff307/Muriako/hmm.png
Basically, the transparent areas of each sprite brighten anything below them, and I need them to be completely transparent. I have messed around with many different blending mode combinations and couldn't find one that was any better. GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, for example, did not have this problem, but instead the transparent pixels of each sprite seemed to be lowered in alpha and even took on some of the color from the layer below, which seemed even more annoying.
I will be happy to post code snippets on request, but my code has become a bit of a mess since I started trying to fix these problems, so I would rather only put up the necessary bits as needed.
What order are you drawing the sprites in? Alpha blending only works with respect to pixels already in the target, so you have to draw all alpha-containing things (and everything "behind" them) in Z order to get the right result. I'm using .glBlendFunc(GL10.GL_SRC_ALPHA, GL10.GL_ONE_MINUS_SRC_ALPHA);
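One extra point worth noting (my addition, not part of the answer above): when the destination is an FBO texture that will itself later be drawn with blending, the destination alpha matters too, and plain GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA leaves incorrect alpha values in that texture. A common workaround, assuming an OpenGL ES 2.0 context, is to blend the alpha channel separately while rendering the sprites into the FBO:
// blend color as usual, but accumulate alpha so the stored texture
// keeps usable coverage values for the later composite
glEnable(GL_BLEND);
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,   // RGB factors
                    GL_ONE,       GL_ONE_MINUS_SRC_ALPHA);  // alpha factors
In LibGDX the same call is exposed as Gdx.gl20.glBlendFuncSeparate; whether it fixes this particular artifact depends on how the accumulated texture is drawn afterwards, so treat it as something to try rather than a guaranteed fix.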

What can we use instead of blending in OpenGL ES?

I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image, which I need to use as a texture in OpenGL. The .png image is 512 * 512 in size, its background is transparent, and the image has a thick blue line in its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then my black part of the texture in the polygon is removed. Now only the blue band is seen. Thus the problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate gets very low as I add more and more images to objects. When I comment out the blending, the frame rate is normal, but the images are not seen.
Since I do not have good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What all steps need to be taken in order to implement my graphics properly?
If you want to have transparent parts of an object, the only way is to blend the pixel data for the triangle with what is currently in the buffer (what you are currently doing). Normally, when using solid textures, the new pixel data for a triangle just overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to start looking at the transparency of that part of the texture, look at what is behind it, all the way back to something solid, and then combine all of those overlapping layers of transparent stuff until it gets the final image.
If all you want transparency for is something like a simple tree sprite, removing the 'stuff' from the sides of the trunk etc., then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus not needing to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. Maybe add an optimization that checks each image to see whether alpha blending can be turned off for it or not. Depending on how much you are trying to push through, that may save time in the long run.
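On question 1, one alternative to blending (a suggestion of mine, not something from the answer above) is alpha testing: reject the transparent texels outright instead of blending them. On the fixed-function pipeline (OpenGL ES 1.1) that is glEnable(GL_ALPHA_TEST) plus glAlphaFunc(GL_GREATER, 0.5f); with shaders (ES 2.0) the equivalent is a discard in the fragment shader, sketched below with placeholder names. It only works for hard cut-outs (no soft anti-aliased edges), and discard has a cost of its own on the iPhone's tile-based GPUs, so it is worth measuring:
// ES 2.0 fragment shader sketch: cut out the transparent background
// instead of blending it (uTexture / vTexCoord are placeholder names)
static const char* cutoutFragmentShader =
    "precision mediump float;\n"
    "uniform sampler2D uTexture;\n"
    "varying vec2 vTexCoord;\n"
    "void main() {\n"
    "    vec4 c = texture2D(uTexture, vTexCoord);\n"
    "    if (c.a < 0.5) discard;\n"
    "    gl_FragColor = c;\n"
    "}\n";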
