Issue with NPOT Atlas (C++/iOS) using glTexCoordPointer

My app uses a texture atlas and picks out parts of it to display items using glTexCoordPointer.
It works well with power-of-two textures, but I wanted to use NPOT to reduce the amount of memory used.
The picture itself loads fine with the linear filter and clamp-to-edge wrapping (the displayed content does come from the picture, alpha included), but the display is distorted.
The coordinates are not the correct ones, and the rendered "shape" is more of a trapezoid than a rectangle.
I guessed I had to play with glEnable(), passing GL_TEXTURE_2D in the case of a POT texture and GL_APPLE_texture_2D_limited_npot in the NPOT case, but I cannot find a way to do so.
Also, I do not have the GL_TEXTURE_RECTANGLE_ARB, I don't know if it is an issue...
Has anyone had the same kind of problem?

Since OpenGL 2.0 (i.e. for about 10 years) there have been no constraints on the size of a regular texture. You can use whatever image size you want; it will just work.
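In case it helps future readers: NPOT textures are still bound and enabled through GL_TEXTURE_2D (GL_APPLE_texture_2D_limited_npot is only the name of the extension, not something you can pass to glEnable()), and the coordinates handed to glTexCoordPointer stay normalized to [0, 1] whatever the atlas dimensions. A minimal OpenGL ES 1.1 sketch, with made-up tile and atlas sizes and assuming a current context and an already-loaded atlasTexture:

#include <OpenGLES/ES1/gl.h>

// Hypothetical tile rectangle inside a 500x375 (NPOT) atlas, all in pixels.
const float atlasW = 500.0f, atlasH = 375.0f;
const float tileX = 128.0f, tileY = 64.0f, tileW = 40.0f, tileH = 30.0f;

// Normalized [0, 1] texture coordinates for the four corners, in triangle-strip order:
// (x, y), (x+w, y), (x, y+h), (x+w, y+h).
const GLfloat texCoords[8] = {
    tileX / atlasW,           tileY / atlasH,
    (tileX + tileW) / atlasW, tileY / atlasH,
    tileX / atlasW,           (tileY + tileH) / atlasH,
    (tileX + tileW) / atlasW, (tileY + tileH) / atlasH,
};

glEnable(GL_TEXTURE_2D);                       // same target for POT and NPOT textures
glBindTexture(GL_TEXTURE_2D, atlasTexture);    // atlasTexture: your already-created texture name (assumed)
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords);  // 2 floats per vertex, tightly packed
// glEnableClientState(GL_VERTEX_ARRAY) and glVertexPointer(...) for the quad's
// screen positions go here as usual.
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

A trapezoid-shaped result usually means the texture coordinates (or their stride/order) no longer match the vertex order, so it is worth checking that the per-tile coordinates are computed against the real NPOT atlas size rather than a padded power-of-two size.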

Related

Vulkan/OpenGL subpasses that fetch more than a single fragment

So, Vulkan introduced subpasses, and OpenGL implements similar behaviour with ARM_framebuffer_fetch.
In the past, I have used framebuffer_fetch successfully for tonemapping post-effect shaders.
Back then the limitation was that one could only read the contents of the framebuffer at the location of the currently rendered fragment.
Now, what I wonder is whether there is by now any way in Vulkan (or even OpenGL ES) to read from multiple locations (for example to implement a blur kernel) without the tiled hardware having to store/load the tile to RAM.
In theory I guess it should be possible: the first pass would just need to render a slightly larger area than the blur subpass, based on the kernel size (so for example, if the kernel size were 4 pixels, the resolved tile would need to be 4 pixels smaller than the in-tile buffer sizes), and some pixels would have to be rendered redundantly (on the overlaps of tiles).
Now, is there a way to do that?
I seem to recall having seen some Vulkan instruction related to subpasses that would allow defining the support size (which sounded like what I'm looking for now), but I can't recall where I saw it.
So my questions:
With Vulkan on a mobile tiled-renderer architecture, is it possible to forward-render some geometry and then render a full-screen blur over it, all within a single in-tile pass (without the hardware having to store the result of the intermediate pass to RAM first and then load the texture from RAM when blurring)? If so, how?
If the answer to 1 is yes, can it also be done in OpenGL ES?
Short answer: no. Vulkan subpasses still have the 1:1 fragment-to-pixel association requirement.
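For illustration, this is what the restriction looks like at the API level: a later subpass can declare the earlier subpass's color output as an input attachment, but the fragment shader can only read it with subpassLoad(), which always returns the value at the current fragment's own location, so there is no way to sample neighbours for a blur kernel. A rough sketch (attachment 0 is the intermediate color buffer, attachment 1 the final target; indices and layouts are illustrative):

#include <vulkan/vulkan.h>

// Subpass 1 reads the color written by subpass 0 as an input attachment.
VkAttachmentReference colorWrite { 0, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };
VkAttachmentReference colorRead  { 0, VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL };
VkAttachmentReference finalWrite { 1, VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL };

VkSubpassDescription subpasses[2] = {};
subpasses[0].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpasses[0].colorAttachmentCount = 1;
subpasses[0].pColorAttachments    = &colorWrite;   // forward-render the geometry

subpasses[1].pipelineBindPoint    = VK_PIPELINE_BIND_POINT_GRAPHICS;
subpasses[1].inputAttachmentCount = 1;
subpasses[1].pInputAttachments    = &colorRead;    // read it back in-tile...
subpasses[1].colorAttachmentCount = 1;
subpasses[1].pColorAttachments    = &finalWrite;

// ...but in subpass 1's fragment shader (GLSL) the only access is:
//   layout(input_attachment_index = 0, set = 0, binding = 0) uniform subpassInput uPrev;
//   vec4 c = subpassLoad(uPrev);  // value at THIS fragment's pixel only; no offsets allowed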

Not calling glClear results in weird artifacts

I want to NOT call glClear for the depth or color bit, because I want to be able to see all the previously rendered frames. And it does work, except that it repeats the model all over the X and Y axes and also causes some strange grey blocky lines. Is there a way to accomplish this? I'm using OpenGL ES 3 on Android. Thank you for any help.
The contents of the default framebuffer at the start of a frame are undefined, especially on tile-based renderers, which most mobile GPUs are. Your "repeats" in the X and Y axes are likely just showing how big the tiles are on your particular GPU (e.g. it's just dumping out whatever is in the GPU-local tile RAM, repeated N times to completely cover the screen).
If you want to render on top of the previous frame, you need to configure the window surface to use EGL_BUFFER_PRESERVED (the default is EGL_BUFFER_DESTROYED), e.g.:
eglSurfaceAttrib(m_display, m_surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);
Note 1: this will incur some overhead (the surface is effectively copied back into tile-local memory), whereas starting with a surface discard or invalidate, or a clear is usually free.
Note 2: this will only preserve color data; there is no means to preserve depth or stencil across frames for the default framebuffer.
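If it helps, eglSurfaceAttrib is only guaranteed to accept EGL_BUFFER_PRESERVED on surfaces created from an EGLConfig that advertises preserved swap behaviour. A rough sketch (error handling omitted; display and surface stand in for the m_display/m_surface handles above):

#include <EGL/egl.h>

// Pick an EGLConfig whose window surfaces support preserved swap behaviour (EGL 1.4+).
const EGLint configAttribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,  // or the ES3 bit if your headers expose it
    EGL_SURFACE_TYPE,    EGL_WINDOW_BIT | EGL_SWAP_BEHAVIOR_PRESERVED_BIT,
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs = 0;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

// ...create the window surface and context from 'config' as usual...

// Ask EGL to keep the color buffer contents across eglSwapBuffers().
eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);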

Compare two images, keep different pixels, set same pixels to transparent in actionscript 3

Ok, the question is pretty much in the title.
What I want to do is filter out the differing pixels between one image and a default image, then "print" only those pixels onto a transparent layer, such that if you merged the default image with the transparent layer, you would end up with the other image.
Is that possible with ActionScript 3?
BitmapData.compare() is nearly what you want. It gives you a new BitmapData object where each pixel is the difference between the corresponding pixels in the two BitmapData objects being compared.
It sounds like you want the actual values of the changed pixels. I don't know of a built-in way to get that, so you might have to go through and do it pixel by pixel using BitmapData.getPixel(), or build a PixelBender filter to do it. Either route is likely slower than you really want to deal with, honestly (I built a chroma-key demo app for a sales pitch that could barely scrape 8 fps using those methods).
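The per-pixel pass itself is trivial whatever the language; purely to illustrate the logic (shown here in C++ rather than ActionScript, operating on packed 0xAARRGGBB values like those returned by BitmapData.getPixel32()):

#include <cstdint>
#include <vector>

// For each pixel: if it matches the default image, emit a fully transparent pixel;
// otherwise keep the changed pixel. Merging the result over the default image
// reproduces the other image.
std::vector<uint32_t> diffLayer(const std::vector<uint32_t>& defaultImg,
                                const std::vector<uint32_t>& otherImg)
{
    std::vector<uint32_t> out(otherImg.size());
    for (size_t i = 0; i < otherImg.size(); ++i)
        out[i] = (otherImg[i] == defaultImg[i]) ? 0x00000000u  // same -> transparent
                                                : otherImg[i]; // different -> keep pixel
    return out;
}

In ActionScript the same loop would use BitmapData.getPixel32()/setPixel32() over the two bitmaps, which is exactly the slow path described above.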

OpenGL tile rendering: most efficient way?

I am creating a tile-based 2D game as a way of learning basic "modern" OpenGL concepts. I'm using shaders with OpenGL 2.1, and am familiar with the rendering pipeline and how to actually draw geometry on-screen. What I'm wondering is the best way to organize a tilemap so that it renders quickly and efficiently. I have thought of several potential methods:
1.) Store the quad representing a single tile (vertices and texture coordinates) in a VBO and render each tile with a separate draw* call, translating it to the correct position onscreen and using uniform2i to give the location in the texture atlas for that particular tile;
2.) Keep a VBO containing every tile onscreen (already-computed screen coordinates and texture atlas coordinates), using BufferSubData to update the tiles every frame but using a single draw* call;
3.) Keep VBOs containing static NxN "chunks" of tiles, drawing however many chunks of tiles are at least partially visible onscreen and translating them each into position.
I'd like to stay away from the last option if possible, unless rendering chunks of 64x64 is not too inefficient. Tiles are loaded into memory in blocks of that size, and even though only about 20x40 tiles are visible onscreen at a time, I would have to render up to four chunks at once. This method would also complicate my code in several other ways.
So, which of these is the most efficient way to render a screen of tiles? Are there any better methods?
You could do any one of these and they would probably be fine; what you're proposing to render is very, very simple.
#1 will definitely be worse in principle than the other options, because you would be drawing many extremely simple “models” rather than letting the GPU do a whole lot of batch work on one draw call. However, if you have only 20×40 = 800 tiles visible on screen at once, then this is a trivial amount of work for any modern CPU and GPU (unless you're doing some crazy fragment shader).
I recommend you go with whichever is simplest to program for you, so that you can continue work on your game. I imagine this would be #1, or possibly #2. If and when you find yourself with a performance problem, do whichever of #2 or #3 (64×64 sounds like a fine chunk size) lets you spend the least CPU time on your program's part of drawing (i.e. updating the buffer(s)).
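For what it's worth, option #2 boils down to very little code in OpenGL 2.1. A rough sketch (MAX_TILES and the fillVisibleTiles() helper are hypothetical placeholders for your own data, and a valid context and shader are assumed):

#include <GL/glew.h>   // or whichever GL loader/header you already use
#include <vector>

struct TileVertex { float x, y, u, v; };   // screen position + atlas texture coordinates

// One-time setup: allocate a VBO big enough for the worst-case number of visible tiles.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, MAX_TILES * 6 * sizeof(TileVertex), nullptr, GL_DYNAMIC_DRAW);

// Every frame: rebuild the vertex data for whatever is visible, then issue one draw call.
std::vector<TileVertex> verts;
fillVisibleTiles(verts);                   // hypothetical helper: 6 vertices (2 triangles) per tile
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, verts.size() * sizeof(TileVertex), verts.data());
// ...bind the shader and point its two vertex attributes into 'vbo' here...
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)verts.size());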
I've been recently learning modern OpenGL myself, through OpenGL ES 2.0 on Android. The OpenGL ES 2.0 Programming Guide recommends an "array of structures", that is,
"Store vertex attributes together in a single buffer. The structure represents all attributes of a vertex and we have an array of these attributes per vertex."
While this may seem like it would initially consume a lot of space, it allows for efficient rendering using VBOs and flexibility in texture-mapping each tile. I recently did a tiled hex grid using interleaved arrays containing vertex, normal, color, and texture data for a 20x20 tile hex grid on a Droid 2. So far things are running smoothly.
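That interleaved "array of structures" layout maps directly onto the stride/offset parameters of glVertexAttribPointer. A small sketch, assuming attribute locations aPos/aNormal/aColor/aUV were obtained with glGetAttribLocation and the interleaved data already lives in vbo:

#include <cstddef>   // offsetof

struct Vertex {
    float px, py, pz;   // position
    float nx, ny, nz;   // normal
    float r, g, b, a;   // color
    float u, v;         // texture coordinates
};

// All attributes live in one buffer; the stride is the size of the whole struct and
// offsetof() gives each attribute's byte offset within it. Each attribute also needs
// a matching glEnableVertexAttribArray() call.
const GLsizei stride = sizeof(Vertex);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(aPos,    3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, px));
glVertexAttribPointer(aNormal, 3, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, nx));
glVertexAttribPointer(aColor,  4, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, r));
glVertexAttribPointer(aUV,     2, GL_FLOAT, GL_FALSE, stride, (void*)offsetof(Vertex, u));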

How can I deblur an image in MATLAB?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721#N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75), and you can see here a zoomed segment (one little square = one pixel).
It seems to be a pretty linear grayscale ramp! Let's verify that by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (not always giving good results, but enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case, a Gaussian or Airy profile suggests it's an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio-astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
When working with images one of the most common things is to use a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B).
And most good photo-editing programs have these filters under some name or another (usually "sharpen").
But keep in mind that filters can only do so much. In theory, the actual information has been lost in the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV would lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image, you could always use a simple threshold: set everything above a certain threshold to white and everything below to black. Once again, most photo-editing software makes this really easy.
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
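For reference, unsharp masking boils down to sharpened = original + k * (original - blurred), i.e. adding back a scaled copy of the detail that a Gaussian blur removes (k controls the strength); in MATLAB the Image Processing Toolbox's imsharpen() implements essentially this.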

Resources