On my Windows machine I see no difference between the following settings:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
and
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
and both are of quite bad quality. Am I missing some settings in the pipeline?
And if this is some kind of oddity, what are my options to overcome it by means of OpenGL, without using custom scaling?
Not sure about the windows tag. )
For minification, you likely want to enable mipmapping. See GL_LINEAR_MIPMAP_LINEAR; otherwise you will get very noticeable aliasing in high-frequency textures when the texture is minified by more than 2x.
Of course, you need to generate the mipmaps to use this!
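A minimal sketch of that setup, assuming a context where glGenerateMipmap is available (desktop GL 3.0+ or ES 2.0) and using width, height and pixels as stand-ins for whatever texture data is actually being uploaded:

glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Upload the base level first, then let the driver build the mip chain.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);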
The minification filter applies only when the texture is minified with respect to its original size.
What are your projection parameters, and how do you display the texture? Answering these questions may help us find the solution.
Probably your texture is not minified at all, I suppose. In that case, try setting the MAG_FILTER texture parameter instead, so it takes effect with your projection.
The procedurally generated texture appears fine until I start scrolling the texture coords using GL_REPEAT. If I scroll a normal image that I've uploaded, it scrolls fine. But the procedural one shows a weird, periodic flashing effect. I'm trying to generate a starfield, so it's just white dots on a transparent background, and with this problem the stars fade to almost black, then back to white, and so on. Also, if I generate a substantial number of stars, there are a few that don't exhibit this behavior and seem normal.
Here I just set up the texture for OpenGL and establish the pixels array.
glBindTexture(GL_TEXTURE_2D,tex_id);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
GLubyte *pixels=new GLubyte[(screen_width)*(screen_height)*4](); // value-initialized so the background texels start out as transparent black
Here I set up my random number generators and fill in the pixel data for the number of stars I want.
irngX=new IRNG(0,screen_width-1);
irngY=new IRNG(0,screen_height-1);
for (int i=0; i<count; ++i){
    int x=irngX->get();
    int y=irngY->get();
    int pos=(y*screen_width*4)+(x*4);
    // write an opaque white RGBA texel at (x, y)
    pixels[pos++]=(GLubyte)255;
    pixels[pos++]=(GLubyte)255;
    pixels[pos++]=(GLubyte)255;
    pixels[pos++]=(GLubyte)255;
}
Here I upload the pixels to OpenGL... The pixel store function seems to have no effect on anything, unlike desktop OpenGL, nor do I understand its usage anyway. I've tried it on and off.
glPixelStorei(GL_UNPACK_ALIGNMENT,1); // ??
glTexImage2D(GL_TEXTURE_2D,0,GL_RGBA,screen_width,screen_height,0,
GL_RGBA,GL_UNSIGNED_BYTE,pixels);
glPixelStorei(GL_UNPACK_ALIGNMENT,4);
Here's my tex coord scroller:
tex_scroller+=0.001f;
glUniform1f(tex_scroller_loc,tex_scroller);
And in the shader:
vtex.t+=tex_scroller;
I hope that's all the relevant code, and that I've made my problem understandable. Please let me know if you need a better description.
P.S. I apologize if my code is not formatted properly. I tried.
The problem is probably texture sampling. Your stars are just a single texel in size, and when you're scrolling, the grid of screen pixels is no longer aligned perfectly with the grid of texels, so texture filtering occurs (GL_LINEAR), which affects the brightness. This is particularly a problem because your transparent texels are presumably black. Some suggestions:
Try changing 'GL_LINEAR' to 'GL_NEAREST' in your code above. This should fix any fading, but might replace it with an on/off flickering.
Try making the transparent texels white; that will improve things.
Try changing tex_scroller+=0.001f; to tex_scroller+=1.0f/screen_width; to see if you can keep the texel grid and pixel grid aligned (see the sketch after this list).
Try changing your stars to be multiple texels in size instead of single texel.
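A minimal sketch combining the first and third suggestions, reusing the asker's own names (tex_scroller, tex_scroller_loc, screen_width):

// Sample the starfield without interpolation, so a star is either on or off.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Scroll in whole-texel steps so the texel grid stays aligned with the pixel grid.
tex_scroller += 1.0f / screen_width;
glUniform1f(tex_scroller_loc, tex_scroller);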
I implemented an OpenGL ES application running on a Mali-400 GPU.
I grab the 1280x960 RGB buffer from the camera and render it on the GPU using glTexImage2D.
However, the glTexImage2D call takes around 25 milliseconds for a 1280x960 frame. It does an extra memcpy of pCameraBuffer.
1) Is there any way to improve the performance of glTexImage2D?
2) Will an FBO help? How can I use Frame Buffer Objects to render? I found a few FBO examples, but I see that these examples pass NULL to glTexImage2D as the last argument (data). So how can I render pCameraBuffer with an FBO?
Below is the code that runs for each camera frame.
glGenTextures(1, &textureID);
glBindTexture(GL_TEXTURE_2D, textureID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, SCENE_WIDTH, SCENE_HEIGHT, 0, GL_RGB, GL_UNSIGNED_BYTE, pCameraBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDeleteTextures(1, &textureID);
The usual approach to this type of thing is to try to import the camera buffer directly into the graphics driver, avoiding the need for any memory allocation or copy at all. Whether this is supported depends a lot on the platform integration and the capabilities of the drivers in the system.
For Linux systems, which is what you indicate you are using, the route is via the EGL_EXT_image_dma_buf_import extension. You need a camera driver which creates a surface backed by dma_buf managed memory, and a side-channel to get the dma_buf file handle into the application running the graphics operations. You can then turn this into an EGLImage using the extension above.
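A rough sketch of what the import side can look like, assuming the camera already exports a dma_buf fd for a single-plane RGB buffer. dmabuf_fd, width, height and stride are placeholders, the DRM format code must match whatever the camera driver actually produces, and all error checking is omitted:

#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>   // DRM_FORMAT_* codes, from libdrm

GLuint import_camera_buffer(EGLDisplay dpy, int dmabuf_fd, int width, int height, int stride)
{
    // Extension entry points are looked up at runtime.
    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTexture2D =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");

    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_RGB888,   // must match the camera's pixel format
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE
    };

    // Wrap the dma_buf in an EGLImage without copying the pixel data.
    EGLImageKHR image = createImage(dpy, EGL_NO_CONTEXT, EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    // Bind the EGLImage to a texture; sampling it reads the camera memory directly.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    imageTargetTexture2D(GL_TEXTURE_2D, (GLeglImageOES)image);
    return tex;
}

Note that some drivers only accept imported images on the GL_TEXTURE_EXTERNAL_OES target (GL_OES_EGL_image_external), in which case the fragment shader has to sample it through a samplerExternalOES instead of a sampler2D.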
In OpenGL ES on Android, when I place two textures next to each other, there is a slight seam through which you can see drawn objects behind them. Basically it looks like a small gap. I narrowed it down to the mipmapping:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
If I set them both to GL_NEAREST the seam goes away but the textures look like crap. Do I have any other options here?
The sampling behavior at the edge of the texture depends on your texture wrap settings, which you can set with glTexParameteri(), using GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T for the parameter name. These control the wrap mode for the two texture directions separately.
The default value for these parameters is GL_REPEAT, which creates a repeating pattern of the texture. This means that when you approach, for example, the left edge of the texture and reach the last half texel, linear interpolation happens between the leftmost and rightmost texels. This is most likely what causes the seams in your case.
The other valid values in ES 2.0 are GL_MIRRORED_REPEAT and GL_CLAMP_TO_EDGE. Both should work better for your case, but GL_CLAMP_TO_EDGE is the most straightforward. It will simply use the texel at the edge while approaching the edge, without any interpolation for the last half texel.
You can set these parameter values with:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
First: do subroutines require GLSL 4.0+? So they are unavailable in the GLSL version of OpenGL ES 2.0?
I don't quite understand what multi-pass shaders are.
Here is how I picture it:
Draw a group of things (e.g. sprites) to an FBO using some shader.
Treat the FBO as a big texture for a big screen-sized quad and use another shader which, for example, turns the texture colors to grayscale.
Draw the FBO-textured quad to the screen with the grayscaled colors.
Or is this called something else?
So multi-pass = using one shader's output as another shader's input? So we render one object twice or more? How does the output of one shader get to the input of another?
For example:
glUseProgram(shader_prog_1);//Just plain sprite draw
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, /*some texture_id*/);
//Setting input for shader_prog_1
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
//Disabling arrays, buffers
glUseProgram(shader_prog_2);//Uses the same vertex shader, but a different fragment shader
//Setting input for shader_prog_2
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can anyone provide a simple example of this in a basic way?
In general, the term "multi-pass rendering" refers to rendering the same object multiple times with different shaders, and accumulating the results in the framebuffer. The accumulation is generally done via blending, not with shaders. That is, the second shader doesn't take the output of the first. They each perform part of the computation, and the blend stage combines them into the final value.
Nowadays, this is primarily done for lighting in forward-rendering scenarios. You render each object once for each light, passing different lighting parameters and possibly using different shaders each time you render a light. The blend mode used to accumulate the results is additive, since light reflectance is an additive property.
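A minimal sketch of that per-light accumulation, using the same kind of draw call as the question; base_prog, light_prog, num_lights and set_light_uniforms are placeholder names, not anything from the original code:

// Pass 1: lay down the base contribution (e.g. ambient) and the depth buffer.
glDisable(GL_BLEND);
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glUseProgram(base_prog);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// Passes 2..N: redraw the same geometry once per light and add the results.
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);   // additive accumulation in the framebuffer
glDepthFunc(GL_EQUAL);         // only shade the surfaces laid down in pass 1
glDepthMask(GL_FALSE);         // lighting passes don't need to write depth
glUseProgram(light_prog);
for (int i = 0; i < num_lights; ++i) {
    set_light_uniforms(light_prog, i);   // hypothetical helper: upload this light's uniforms
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}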
Do subroutines require GLSL 4.0+? So they are unavailable in the GLSL version of OpenGL ES 2.0?
This is a completely different question from the entire rest of your post, but the answer is yes and no.
No, in the sense that ARB_shader_subroutine is an OpenGL extension, and it therefore could be implemented by any OpenGL implementation. Yes, in the practical sense that any hardware that actually could implement shader_subroutine could also implement the rest of GL 4.x and therefore would already be advertising 4.x functionality.
In practice, you won't find shader_subroutine supported by non-4.x OpenGL implementations.
It is unavailable in GLSL ES 2.0 because it's GLSL ES. Do not confuse desktop OpenGL with OpenGL ES. They are two different things, with different GLSL versions and different feature sets. They don't even share extensions (except for a very few recent ones).
I am doing my iPhone graphics using OpenGL. In one of my projects, I need to use an image as a texture in OpenGL. The .png image is 512 x 512 in size, its background is transparent, and the image has a thick blue line in its center.
When I apply my image to a polygon in OpenGL, the texture appears as if the transparent part in the image is black and the thick blue line is seen as itself. In order to remove the black part, I used blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
Then the black part of the texture on the polygon is removed and only the blue band is seen. Thus the problem is solved.
But I want to add many such images and make many objects in OpenGL. I am able to do that, but the frame rate is very low when I add more and more images to objects. But when I comment out blending, the frame rate is normal, but the images are not seen.
Since I do not have good fps, the graphics are a bit slow and I get a shaky effect.
So:
1) Is there any other method than blending to solve my problem?
2) How can I improve the frame rate of my OpenGL app? What all steps need to be taken in order to implement my graphics properly?
If you want to have transparent parts of an object, the only way is to blend the pixel data for the triangle with what is currently in the buffer (what you are currently doing). Normally, when using solid textures, the new pixel data for a triangle just overwrites whatever was in the buffer (as long as it is closer, i.e. it passes the z-buffer test). But with transparency, the GPU has to start looking at the transparency of that part of the texture, then at what is behind it, all the way back to something solid. It then has to combine all of those overlapping layers of transparent stuff until it gets the final image.
If all you want transparency for is something like a simple tree sprite, removing the 'stuff' from the sides of the trunk and so on, then you may be better off providing more complex geometry that actually defines the shape of the trunk, and thus not need to bother with transparency at all.
Sadly, I don't think there is much you can do to speed up your FPS other than cutting down the amount of transparency you are calculating. Maybe even add an optimization that checks each image to see whether it can turn off alpha blending for that image or not (see the sketch below). Depending on how much you are trying to push through, it may save time in the long run.
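A rough sketch of that kind of per-image toggle, drawing the fully opaque sprites first with blending off; draw_list, count, has_alpha and draw_sprite are hypothetical names, not from the question:

// Opaque sprites: no blending needed, cheapest path.
glDisable(GL_BLEND);
for (int i = 0; i < count; ++i) {
    if (!draw_list[i].has_alpha)
        draw_sprite(&draw_list[i]);   // hypothetical draw helper
}

// Transparent sprites: blend as in the question (premultiplied alpha).
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
for (int i = 0; i < count; ++i) {
    if (draw_list[i].has_alpha)
        draw_sprite(&draw_list[i]);
}
glDisable(GL_BLEND);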