image smoothing in opengl?

Does opengl provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing of this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels and filling them with averages of nearby actual colors - basically normal image smoothing.

You can do it through a custom shader if you want. Essentially you just bind your input texture, draw it as a fullscreen quad, and in the shader take multiple samples around each fragment, average them together, and write the result out to a new texture. The new texture can be an arbitrarily higher resolution than the input texture if you want that as well.
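For reference, the averaging itself is just a box filter; here is a minimal pure-Python sketch of the operation the shader would perform per fragment (the function name and the radius are illustrative, not from any library):

```python
def box_smooth(img, radius=1):
    """Apply a simple 2D averaging (box) filter to a grid of grayscale values."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            # Sample every neighbor inside the (2*radius+1)^2 window.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

smoothed = box_smooth([[0, 0, 0], [0, 9, 0], [0, 0, 0]])
print(smoothed[1][1])  # center becomes the average of all nine samples: 1.0
```

In the GPU version each fragment does the inner two loops with texture fetches, so the whole image is filtered in one draw call.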

Related

How can I track/color separate shapes generated using perlin noise?

So I have created a 2D animation that consists of 3D Perlin noise where the X & Y axes are the pixel positions on the matrix/screen and the Z axis just counts up over time. I then apply a threshold so it only shows solid shapes rather than the cloud-like pattern of the normal noise. In effect it creates a forever-moving fluid animation like so https://i.imgur.com/J9AqY5s.gifv
I have been trying to think of a way I can track and maybe index the different shapes so I can have them all be different colours. I've tried looping over the image and flood filling each shape, but this only works for one frame as it doesn't track which shape is which and how they grow and shrink.
I think there must be a way to do something like this, because if I had a colouring pencil and each frame on a piece of paper I would be able to colour and track each blob, and combine colours when two blobs join. I just can't figure out how to do this programmatically. Because of the way Perlin noise works, and since the shapes aren't defined objects, I find it difficult to wrap my head around how I would index them.
Hopefully, my explanation has been sufficient, any suggestions or help would be greatly appreciated.
Your current algorithm effectively marks every pixel in a frame as part of a blob or part of the background. Let's say you have a second frame buffer that can hold a color for every pixel location. As you noted, you can use flood fill on the blob/background buffer to find all the pixels that belong to a blob.
For the first frame, assign colors to each blob you find and save them in the color buffer.
For the second (and each subsequent) frame, you can again use flood fill on the blob/background buffer to determine all the pixels that belong to a discrete blob. Look up the colors corresponding to each of those pixels from the color buffer (which represents the colors from the last frame) and build a histogram of all the colors you find.
The histogram will contain some of the pixels of the background color (because the blob may have moved or grown into an area that was background).
But since the blobs move smoothly, many of the pixels that are part of a given blob this frame will have been part of the same blob on the last frame. So if your histogram has just one non-background color, that's the color you would use.
If the histogram contains only the background color, this is a new blob and you can assign it a new color.
If the histogram contains two (or more) blob colors, then two (or more) blobs have merged, and you can blend their colors (perhaps in proportion to their histogram counts, which correspond to their areas).
The trick will be to do all this efficiently. The algorithm I've outlined here gives the idea; an actual implementation may not literally build histograms and might avoid recalculating every pixel's color from scratch on each frame.
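The steps above can be sketched in pure Python like this (flood fill each frame, then pick each blob's color from a histogram of last frame's colors under the same pixels; all names, and the 4-connectivity choice, are assumptions for illustration):

```python
from collections import Counter, deque

BACKGROUND = None  # color stored for non-blob pixels

def flood_fill(mask, start, seen):
    """Collect all pixels of one blob (4-connected) starting from `start`."""
    h, w = len(mask), len(mask[0])
    blob, queue = [], deque([start])
    seen.add(start)
    while queue:
        y, x = queue.popleft()
        blob.append((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and (ny, nx) not in seen:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return blob

def color_frame(mask, prev_colors, next_color):
    """Assign each blob a color based on last frame's colors at the same pixels."""
    h, w = len(mask), len(mask[0])
    colors = [[BACKGROUND] * w for _ in range(h)]
    seen = set()
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                blob = flood_fill(mask, (y, x), seen)
                # Histogram of last-frame colors under this blob's pixels.
                hist = Counter(prev_colors[py][px] for py, px in blob)
                hist.pop(BACKGROUND, None)
                if not hist:
                    color = next(next_color)  # brand-new blob: fresh color
                else:
                    # Inherit the dominant color; a merge of blobs could
                    # blend the histogram's colors by count instead.
                    color = hist.most_common(1)[0][0]
                for py, px in blob:
                    colors[py][px] = color
    return colors

# First frame: everything was previously background, so blobs get 0 and 1.
import itertools
prev = [[BACKGROUND] * 3 for _ in range(3)]
frame1 = color_frame([[1, 1, 0], [0, 0, 0], [0, 0, 1]], prev, itertools.count())
```

On the next frame, passing `frame1` as `prev_colors` makes a moved blob inherit its old color as long as it still overlaps its previous position.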

Image pixelation library, non-square "pixel" shape

I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
I'm looking, however, to make a particular shape: I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated, but it would use tallish rectangle shapes as the pixel base.
Are there any libraries out there that do this, if not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
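A minimal sketch of that grid-averaging in Python (19x27 blocks by default, averaging the RGB channels independently; the function name is made up):

```python
def pixelate(img, block_w=19, block_h=27):
    """Replace each block_w x block_h rectangle with its average color.

    `img` is a list of rows of (r, g, b) tuples; returns a new image of
    the same size, so edge blocks may be smaller than the full rectangle.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for by in range(0, h, block_h):
        for bx in range(0, w, block_w):
            cells = [(y, x)
                     for y in range(by, min(by + block_h, h))
                     for x in range(bx, min(bx + block_w, w))]
            # Average each RGB channel independently over the block.
            avg = tuple(sum(img[y][x][c] for y, x in cells) // len(cells)
                        for c in range(3))
            for y, x in cells:
                out[y][x] = avg
    return out
```

For example, `pixelate(img, 2, 2)` on a 2x2 image collapses it to a single averaged color.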
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the box blur image-processing algorithm, except that instead of sampling the nearby pixels you would sample according to the divisions of the original image relative to the 19x27 divisions of the resulting image.

Fastest way to draw one million pixels in openGL 4?

I started using OpenGL and I was wondering how I could put over 1 million pixels on the screen without dropping under 10 fps. Currently I have a std::vector that collects each pixel's information during the update stage of the main loop, before rendering.
The render stage looks like this:
glBufferData(GL_ARRAY_BUFFER, sizeof(float)*data.size(),
&data[0], GL_DYNAMIC_DRAW);
then I call glDrawArrays.
Each pixel has a color and a 2D position. Is there a faster method of drawing one million pixels? I use GL_DYNAMIC_DRAW because I want the colors on screen to change, with each individual pixel taking a random color - sort of like a TV on a broken channel.
Don't store the colors in an array but instead calculate them in the fragment shader.
You create a noise texture and use wrapping for its sampler. You should also pass in a few uniforms that change each frame and combine them in a non-linear way with the window coordinates.
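The per-fragment mixing can be sketched outside the shader like this (a hypothetical integer hash in Python; in GLSL you would do equivalent arithmetic on the window coordinates and a per-frame uniform, and the constants here are arbitrary):

```python
def hash_noise(x, y, frame):
    """Cheap deterministic hash combining pixel coords with a per-frame seed,
    mimicking computing a pseudo-random color in the fragment shader
    instead of uploading a million colors every frame."""
    n = (x * 374761393 + y * 668265263 + frame * 2246822519) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    n ^= n >> 16
    return n & 0xFFFFFF  # pack into a 24-bit RGB value
```

Because the seed changes every frame, every pixel flickers to a new color without any per-pixel data transfer from the CPU.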

Any ideas on how to remove small abandoned pixel in a png using OpenCV or other algorithm?

I have a PNG image like this:
The blue color represents transparency, and each circle is a group of pixels. I would like to find the biggest group and remove all the small pixel groups that are not connected to it. In this example, the biggest one is the red circle, which I will keep; the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of objects, use the following algorithm: label the connected components of the mask image of the objects (all object pixels are white, transparent ones are black). Then compute the areas of the connected components and filter them. At this step, you have a label map and a list of authorized labels. You can then walk the label map and overwrite the mask image, setting each pixel to white only if it carries an authorized label.
OpenCV does not seem to have a labelling function, but cvFloodFill can do the same thing over several calls: for each unlabeled white pixel, call FloodFill with that pixel as the seed. You can store the result of each call in an array (the size of the image) by assigning each newly filled pixel its label. Repeat as long as unlabeled pixels remain.
Otherwise you can implement the connected-component labelling for binary images yourself; the algorithm is well known and easy to implement (maybe start from Matlab's bwlabel).
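For illustration, here is a plain-Python version of that labelling-and-filtering idea, specialised to keeping only the largest component (BFS flood fill, 4-connectivity assumed; names are made up):

```python
from collections import deque

def keep_largest_component(mask):
    """Label 4-connected white (1) regions and erase all but the largest."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    sizes = {}
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and labels[y][x] == 0:
                # New unlabeled white pixel: flood fill its whole component.
                next_label += 1
                queue = deque([(y, x)])
                labels[y][x] = next_label
                count = 0
                while queue:
                    cy, cx = queue.popleft()
                    count += 1
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                sizes[next_label] = count
    if not sizes:
        return mask
    biggest = max(sizes, key=sizes.get)
    # Rewrite the mask, keeping only pixels with the authorized label.
    return [[1 if labels[y][x] == biggest else 0 for x in range(w)]
            for y in range(h)]
```

To filter by a size threshold instead, keep every label whose area passes the threshold rather than only the maximum.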
The handiest way to filter objects when you have a priori knowledge of their size is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you would apply an "opening", that is, an erosion followed by a dilation.
The small objects (smaller than the size of the structuring element you chose) will disappear with erosion, while the bigger will remain and be restored with the dilation.
(reference here, cv::morphologyEx).
The shape of the big object might be altered. If you're only doing detection this is harmless, but if you want your object to avoid being deformed you'll need to apply a "top hat" transform.
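A tiny pure-Python sketch of the opening described above (square structuring element of radius r; with OpenCV you would instead call cv::morphologyEx with MORPH_OPEN):

```python
def erode(mask, r=1):
    """Binary erosion: a pixel survives only if its whole window is white."""
    h, w = len(mask), len(mask[0])
    return [[1 if all(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def dilate(mask, r=1):
    """Binary dilation: a pixel is set if any pixel in its window is white."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(0 <= y + dy < h and 0 <= x + dx < w and mask[y + dy][x + dx]
                      for dy in range(-r, r + 1) for dx in range(-r, r + 1)) else 0
             for x in range(w)] for y in range(h)]

def opening(mask, r=1):
    """Erosion then dilation: removes objects smaller than the element."""
    return dilate(erode(mask, r), r)
```

An isolated pixel disappears during erosion and is never restored, while a solid block larger than the element survives and is rebuilt by the dilation.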

How to draw "glowing" line in OpenGL ES

Could you please share some code (any language) on how to draw a textured line (one that is smooth or has a glow-like effect; a blue line through four points) consisting of many points, like in the attached image, using OpenGL ES 1.0.
What I was trying was texturing a GL_LINE_STRIP with a 16x16 or 1x16 pixel texture, but without any success.
In ES 1.0 you can use render-to-texture creatively to achieve the effect that you want, but it's likely to be costly in terms of fill rate. Gamasutra has an (old) article on how glow was achieved in the Tron 2.0 game — you'll want to pay particular attention to the DirectX 7.0 comments since that was, like ES 1.0, a fixed pipeline. In your case you probably want just to display the Gaussian image rather than mixing it with an original since the glow is all you're interested in.
My summary of the article is:
render all lines to a texture as normal, solid hairline lines. Call this texture the source texture.
apply a linear horizontal blur to that by taking the source texture you just rendered and drawing it, say, five times to another texture, which I'll call the horizontal blur texture. Draw one copy at an offset of x = 0 with opacity 1.0, draw two further copies — one at x = +1 and one at x = -1 — with opacity 0.63 and a final two copies — one at x = +2 and one at x = -2 with an opacity of 0.17. Use additive blending.
apply a linear vertical blur to that by taking the horizontal blur texture and doing essentially the same steps but with y offsets instead of x offsets.
Those opacity numbers were derived from the 2d Gaussian kernel on this page. Play around with them to affect the fall off towards the outside of your lines.
Note the extra costs involved here: you're ostensibly adding ten full-screen textured draws plus some framebuffer swapping. You can probably get away with fewer draws by using multitexturing. A shader approach would likely do the horizontal and vertical steps in a single pass.
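The two blur passes can be sketched on the CPU like this (Python, using the 1.0 / 0.63 / 0.17 weights from the steps above; a real implementation would do each pass on the GPU as five additively blended textured draws):

```python
# Offset -> opacity, as in the five-copy additive blend described above.
WEIGHTS = {0: 1.0, 1: 0.63, -1: 0.63, 2: 0.17, -2: 0.17}

def blur_pass(img, horizontal=True):
    """One separable blur pass: five offset, weighted copies summed together."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for off, wgt in WEIGHTS.items():
                sy, sx = (y, x - off) if horizontal else (y - off, x)
                if 0 <= sy < h and 0 <= sx < w:
                    out[y][x] += wgt * img[sy][sx]
    return out

def glow(img):
    """Horizontal pass, then vertical pass on its result."""
    return blur_pass(blur_pass(img, horizontal=True), horizontal=False)
```

A single bright pixel spreads into a soft 5x5 halo whose intensity falls off with the product of the two 1D weights, which is what gives the line its glow.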
