OpenGL seamless texture - opengl-es

In OpenGL ES on Android, when I place two textures next to each other there is a slight seam through which you can see objects drawn behind them. It basically looks like a small gap. I narrowed it down to the mipmapping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
If I set them both to GL_NEAREST the seam goes away but the textures look like crap. Do I have any other options here?

The sampling behavior at the edge of the texture depends on your texture wrap settings, which you can set with glTexParameteri(), using GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T as the parameter names. These control the wrap modes for the two texture directions separately.
The default value for these parameters is GL_REPEAT, which creates a repeating pattern of the texture. This means that when you approach, for example, the left edge of the texture and reach the last half texel, linear interpolation happens between the leftmost and rightmost texels. This is most likely what causes the seams in your case.
The other valid values in ES 2.0 are GL_MIRRORED_REPEAT and GL_CLAMP_TO_EDGE. Both should work better for your case, but GL_CLAMP_TO_EDGE is the most straightforward. It will simply use the texel at the edge while approaching the edge, without any interpolation for the last half texel.
You can set these parameter values with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

Related

Weird flashing when scrolling tex coords of procedurally generated texture

The procedurally generated texture appears fine until I start scrolling the texture coords using GL_REPEAT. If I scroll a normal image that I've uploaded, it scrolls fine, but the procedural one does this weird, periodic flashing effect. I'm trying to generate a starfield, so it's just white dots on a transparent background; with this problem the stars fade to almost black, then back to white, and so on. Also, if I generate a substantial number of stars, there are a few that don't exhibit this behavior and seem normal.
Here I just set up the texture for OpenGL and create the pixels array:
glBindTexture(GL_TEXTURE_2D,tex_id);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLubyte *pixels = new GLubyte[screen_width * screen_height * 4](); // value-initialize to 0 so the background starts transparent
Here I set up my random number generators and fill in the pixel data for the number of stars I want:
irngX=new IRNG(0,screen_width-1);
irngY=new IRNG(0,screen_height-1);
for (int i = 0; i < count; ++i) {
    int x = irngX->get();
    int y = irngY->get();
    int pos = (y * screen_width * 4) + (x * 4);
    pixels[pos++] = (GLubyte)255;
    pixels[pos++] = (GLubyte)255;
    pixels[pos++] = (GLubyte)255;
    pixels[pos++] = (GLubyte)255;
}
Here I upload the pixels to OpenGL. The pixel store function seems to have no effect on anything, unlike in desktop OpenGL, nor do I understand its usage anyway; I've tried it both on and off.
glPixelStorei(GL_UNPACK_ALIGNMENT,1); // ??
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, screen_width, screen_height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
Here's my tex coord scroller:
tex_scroller+=0.001f;
glUniform1f(tex_scroller_loc,tex_scroller);
And in the shader:
vtex.t+=tex_scroller;
I hope that's all the relevant code and that I've made my problem understandable. Please let me know if you need a better description.
P.S. I apologize if my code is not formatted properly; I tried.
The problem is probably texture sampling. Your stars are just a single texel in size, and when you're scrolling, the grid of screen pixels is no longer aligned perfectly with the grid of texels, so texture filtering (GL_LINEAR) occurs, which affects the brightness. This is a particular problem because your transparent texels are presumably black. Some suggestions:
Try changing GL_LINEAR to GL_NEAREST in your code above. This should fix any fading, but might replace it with on/off flickering.
Try making the transparent texels white; that will improve things.
Try changing tex_scroller+=0.001f; to tex_scroller+=1.0f/screen_height; (the scroll is along t, which spans screen_height texels) to see if you can keep the texel grid and the pixel grid aligned.
Try making your stars multiple texels in size instead of a single texel.

Make OpenGL Polygon Edges Smooth

Due to tilting the rectangular polygon with a texture, its edges become sharp (aliased), but the inner edges (the inner cut parts) stay smooth.
Texture has antialiasing enabled.
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
It looks like antialiasing works inside the bounds of the polygon, but not on its edges.
Is it possible to enable antialiasing on edges, so they look smooth like the inner edges in the picture?
Used Cocos2d-x v3.3.
Enabling multisampling makes the edges smoother. It is not a perfect solution, but on a retina display it looks nice. There is almost no visible difference between 2 and 9 samples for this example.
Here is how to set up multisampling in Cocos2d-x:
// In AppController.mm
// Init the CCEAGLView
CCEAGLView *eaglView = [CCEAGLView viewWithFrame: [window bounds]
                                     pixelFormat: (NSString*)cocos2d::GLViewImpl::_pixelFormat
                                     depthFormat: cocos2d::GLViewImpl::_depthFormat
                              preserveBackbuffer: NO
                                      sharegroup: nil
                                   multiSampling: YES
                                 numberOfSamples: 2];

Rendering to depth texture - unclarities about usage of GL_OES_depth_texture

I'm trying to replace OpenGL's gl_FragDepth feature which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK the only way to do that is by having a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
Is it right to attach the texture to the GL_DEPTH_ATTACHMENT? If I'm changing that to GL_COLOR_ATTACHMENT0, I'm getting an incomplete framebuffer.
Depth textures do not affect the output of the fragment shader. The value that ends up in the depth texture when you're rendering to it will be the fixed-function depth value.
So without gl_FragDepth, you can't really "set the depth in the fragment shader". You can, however, do what you describe: render depth to a texture in one pass and then read that value in a later pass.
You can read from a depth texture using the texture2D built-in function just like for regular color textures. The value you get back will be (d, d, d, 1.0).
According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error. The incomplete framebuffer status you get is probably related to this.
It is correct to attach the texture to the GL_DEPTH_ATTACHMENT.

performance - drawing many 2d circles in opengl

I am trying to draw large numbers of 2d circles for my 2d games in opengl. They are all the same size and have the same texture. Many of the sprites overlap. What would be the fastest way to do this?
an example of the kind of effect I'm making http://img805.imageshack.us/img805/6379/circles.png
(It should be noted that the black edges are just due to the expanding explosion of circles; the area was filled in a moment after this screenshot was taken.)
At the moment I am using a pair of textured triangles to make each circle. I have transparency around the edges of the texture so as to make it look like a circle. Using blending for this proved to be very slow (and z culling was not possible as they were rendered as squares to the depth buffer). Instead I am not using blending but having my fragment shader discard any fragments with an alpha of 0. This works, however it means that early z is not possible (as fragments are discarded).
The speed is limited by the large amounts of overdraw and the gpu's fillrate. The order that the circles are drawn in doesn't really matter (provided it doesn't change between frames creating flicker) so I have been trying to ensure each pixel on the screen can only be written to once.
I attempted this by using the depth buffer. At the start of each frame it is cleared to 1.0f. Then when a circle is drawn it changes that part of the depth buffer to 0.0f. When another circle would normally be drawn there it is not as the new circle also has a z of 0.0f. This is not less than the 0.0f that is currently there in the depth buffer so it is not drawn. This works and should reduce the number of pixels which have to be drawn. However; strangely it isn't any faster. I have already asked a question about this behavior (opengl depth buffer slow when points have same depth) and the suggestion was that z culling was not being accelerated when using equal z values.
Instead I have to give all of my circles separate false z-values from 0 upwards. Then when I render using glDrawArrays and the default of GL_LESS we correctly get a speed boost due to z culling (although early z is not possible as fragments are discarded to make the circles possible). However this is not ideal as I've had to add in large amounts of z related code for a 2d game which simply shouldn't require it (and not passing z values if possible would be faster). This is however the fastest way I have currently found.
Finally I have tried using the stencil buffer, here I used
glStencilFunc(GL_EQUAL, 0, 1);
glStencilOp(GL_KEEP, GL_INCR, GL_INCR);
where the stencil buffer is reset to 0 each frame. The idea is that after a pixel is drawn to the first time, its stencil value becomes non-zero, so the pixel should not be drawn to again, reducing the amount of overdraw. However, this has proved to be no faster than just drawing everything without the stencil or depth buffer.
What is the fastest way people have found to do what I am trying to do?
The fundamental problem is that you're fill-limited: the GPU cannot shade all the fragments you ask it to draw in the time you're expecting. The reason your depth-buffering trick isn't effective is that the most time-consuming part of processing is shading the fragments (either through your own fragment shader, or through the fixed-function shading engine), which occurs before the depth test. The same issue applies to stenciling; shading the pixel occurs before the stencil test.
There are a few things that may help, but they depend on your hardware:
render your sprites front to back with depth buffering. Modern GPUs often try to determine whether a collection of fragments will be visible before sending them off to be shaded. Roughly speaking, the depth buffer (or a representation of it) is checked to see if the fragment that's about to be shaded will be visible, and if not, its processing is terminated at that point. This should help reduce the number of pixels that need to be written to the framebuffer.
Use a fragment shader that immediately checks your texel's alpha value, and discards the fragment before any additional processing, as in:
varying vec2 texCoord;
uniform sampler2D tex;

void main()
{
    vec4 texel = texture2D( tex, texCoord );
    if ( texel.a < 0.01 ) discard;
    // rest of your color computations
}
(You can also use the alpha test in fixed-function fragment processing, but it's impossible to say whether that test will be applied before the completion of fragment shading.)

opengl texture filtering low quality

On my windows machine I see no difference between next settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
and
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
and both are of quite bad quality. Am I missing some settings in the pipeline?
And if this is some kind of oddity, what are my options to overcome it by means of OpenGL, without custom scaling?
Not sure about the windows tag.
For minification, you likely want to enable mipmapping. See GL_LINEAR_MIPMAP_LINEAR; otherwise you will get very noticeable aliasing in high-frequency textures once the texture is minified by more than 2x.
Of course, you also need to generate the mipmaps to use this!
The MIN filter only applies when the texture is minified with respect to its original size.
What are your projection parameters, and how do you display the texture? Answering these questions may help us find the solution.
Probably your texture is not minified at all. In that case, try setting the MAG_FILTER texture parameter instead, so it takes effect with your projection.