What information do image pixels hold?

For a color image, say one with dimensions 320 by 240, we have 76,800 pixels in the image. What does each pixel represent in a color image? Is it just the RGB values for that pixel? How are shapes and textures represented in pixels? If each pixel in a color image contains only RGB values, is that enough information to store the shape, size, and texture of the objects in the image?

A single pixel in RGB space holds only the color value of that one pixel.
Shapes and textures can only be described by combinations of many pixels; that information is not stored in any single pixel.
Moreover, this information (shape, size, texture of possible objects) is never stored explicitly in the image data. You can infer shapes or textures from the underlying pixel data, but the result always depends on how you yourself define a shape or a texture.

Every pixel contains a simplified representation of the light landing on the corresponding sensor cell in the camera. The amount of light is averaged over the cell area, and the light spectrum is coarsely described by taking three weighted averages of intensity over the frequencies. The result is (usually) three integers in the range 0-255, for a total of 24 bits of information.
Since the pixels are aligned on a grid, a digital color image can be seen as a triple matrix of integers; that's it. (Below, an example of such a matrix.) This information is raw.
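For instance, a minimal sketch in Python/NumPy (purely illustrative) of such a triple matrix:

import numpy as np

# A 2x2 color image: height x width x 3 channels (R, G, B), 8 bits each
img = np.array([[[255,   0,   0], [  0, 255,   0]],
                [[  0,   0, 255], [255, 255, 255]]], dtype=np.uint8)

print(img.shape)   # (2, 2, 3): two rows, two columns, three channels
print(img[0, 0])   # [255   0   0] -- the top-left pixel is pure red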
The semantic image content must be inferred by an image analysis system, which segments the image into distinct areas and, to a lesser extent, characterizes the textures.

Related

How can I track/color separate shapes generated using Perlin noise?

So I have created a 2D animation from 3D Perlin noise, where the X and Y axes are the pixel positions on the matrix/screen and the Z axis just counts up over time. I then apply a threshold so it shows only solid shapes rather than the cloudy pattern of the raw noise. In effect it creates a forever-moving fluid animation, like so: https://i.imgur.com/J9AqY5s.gifv
I have been trying to think of a way to track and maybe index the different shapes so I can give them all different colours. I've tried looping over the image and flood-filling each shape, but this only works for one frame, as it doesn't track which shape is which or how they grow and shrink.
I think there must be a way to do something like this, because if I had a colouring pencil and each frame on a piece of paper, I would be able to colour and track each blob, and combine colours when two blobs join. I just can't figure out how to do it programmatically. Given the nature of Perlin noise, and since the shapes aren't defined objects, I find it difficult to wrap my head around how I would index them.
Hopefully my explanation has been sufficient; any suggestions or help would be greatly appreciated.
Your current algorithm effectively marks every pixel in a frame as part of a blob or part of the background. Let's say you have a second frame buffer that can hold a color for every pixel location. As you noted, you can use flood fill on the blob/background buffer to find all the pixels that belong to a blob.
For the first frame, assign colors to each blob you find and save them in the color buffer.
For the second (and each subsequent) frame, you can again use flood fill on the blob/background buffer to determine all the pixels that belong to a discrete blob. Look up the colors corresponding to each of those pixels in the color buffer (which holds the colors from the last frame) and build a histogram of all the colors you find.
The histogram will contain some pixels of the background color (because the blob may have moved or grown into an area that was background).
But since the blobs move smoothly, many of the pixels that are part of a given blob this frame will have been part of the same blob on the last frame. So if your histogram has just one non-background color, that's the color you would use.
If the histogram contains only the background color, this is a new blob and you can assign it a new color.
If the histogram contains two (or more) blob colors, then two (or more) blobs have merged, and you can blend their colors (perhaps in proportion to their histogram counts, which correspond to their areas).
The trick will be to do all this efficiently. The algorithm I've outlined here gives the idea; an actual implementation may not literally build histograms and might avoid recalculating each pixel's color from scratch on every frame.
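To make the idea concrete, here is a minimal sketch in Python, with scipy's connected-component labelling standing in for the flood fill; the function name and the integer color ids are my own, and a merge simply keeps the dominant previous color rather than blending:

import numpy as np
from scipy.ndimage import label

BACKGROUND = 0

def color_frame(mask, prev_colors, counter):
    # mask: 2D bool array, True where a blob pixel is.
    # prev_colors: last frame's per-pixel color ids (BACKGROUND elsewhere).
    # counter: one-element list used as a mutable source of fresh ids.
    labels, n = label(mask)  # connected components ~ flood fill per blob
    colors = np.full(mask.shape, BACKGROUND, dtype=int)
    for blob in range(1, n + 1):
        pixels = labels == blob
        hist = np.bincount(prev_colors[pixels])  # last frame's colors here
        hist[BACKGROUND] = 0                     # ignore background votes
        if hist.sum() == 0:
            counter[0] += 1                      # new blob: fresh color id
            colors[pixels] = counter[0]
        else:
            # Dominant previous color wins; on a merge you could instead
            # blend the top colors in proportion to their counts.
            colors[pixels] = np.argmax(hist)
    return colors

# Per frame: colors = color_frame(mask, colors, counter)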

How can I remove/reassign small pixel regions (at edges) from color images? (MATLAB)

I have segmentation masks with indexed colors. Unfortunately, there is (colored) noise at the edges of objects: at the transition from one color region to the next, there are small pixel regions in different colors separating the two regions (caused by converting transparent pixels at the edges).
I want to remove this noise (with MATLAB) by assigning the color of one of the neighboring large regions. It doesn't matter which one; the main thing is to remove the small areas.
It can be assumed that small regions of ANY color may be removed in this way (reassigned to a neighboring large region).
For a binary image, I could use bwareaopen (suggested in this Q&A: Remove small chunks of labels in an image). Converting the image to a binary image for each color might be a workaround; however, this is costly (for many colors) and leaves the question of reassignment open. I hope there are more elegant ways to do this.
Check the following:
Convert the RGB image to an indexed image.
Apply a median filter to the index map.
Convert back to RGB.
RGB = imread('GylzKm.png');
% Convert RGB to an indexed image with 4 levels
[X, map] = rgb2ind(RGB, 4);
% Apply a 5x5 median filter to the 4-level index image
X = medfilt2(X, [5, 5]);
% Convert the indexed image back to RGB
J = ind2rgb(X, map);
figure; imshow(J);
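One property worth noting: the median of a 5x5 window (25 samples) is always one of the values present in the window, so this filter cannot invent index values that map to new colors; it can only replace a small region's index with one from its surroundings. Since the index ordering is arbitrary, a mode (majority) filter would arguably be the more principled choice for label images, but with only a few levels the median usually gives the same result.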
The black border is a little problematic.

Matching scaled and translated binary mask to noisy image of 2D object in MATLAB

So I have a 300x500 matrix A which contains a binary image of some object (background 0, object 1), and a noisy image B depicting the same object. I want to match the binary mask A to the image B. The object in the mask has exactly the same shape as the object in the noisy image. The problem is that the images have different sizes (both their planes and the objects depicted on them). Moreover, the object in the mask is located in the middle of the plane, whereas in image B it is translated. Does anyone know a simple way to match these images?
Provided you don't rotate or scale your object, the peak in the cross-correlation should give you the shift between the two objects.
From the Signal Processing Toolbox you can use xcorr2(A, B) to do this; the help even has it as one of the examples.
The peak position indicates the offset to get from one to the other. The fact that one input is noisy will introduce some uncertainty into your answer, but this is inevitable, as they do not exactly match.
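For illustration, here is the same idea sketched in Python with SciPy (synthetic data; assumes, as above, no rotation or scaling):

import numpy as np
from scipy.signal import correlate2d

# Synthetic data: a rectangle in mask A; the same rectangle, shifted
# and buried in noise, in image B.
A = np.zeros((60, 80)); A[20:35, 30:55] = 1.0
B = np.random.normal(0.0, 0.2, (100, 120)); B[50:65, 40:65] += 1.0

# Zero-mean both inputs so the flat background doesn't dominate the peak.
c = correlate2d(B - B.mean(), A - A.mean(), mode='full')
peak_row, peak_col = np.unravel_index(np.argmax(c), c.shape)

# Peak position minus (mask size - 1) gives A's top-left offset inside B.
offset = (peak_row - (A.shape[0] - 1), peak_col - (A.shape[1] - 1))
print(offset)  # expected near (30, 10)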

Image smoothing in OpenGL?

Does OpenGL provide any facilities to help with image smoothing?
My project converts scientific data to textures, each of which is a single line of colored pixels which is then mapped onto the appropriate area of the image. Lines are mapped next to each other.
I'd like to do simple image smoothing on this, but am wondering if OpenGL can do any of it for me.
By smoothing, I mean applying a two-dimensional averaging filter to the image - effectively increasing the number of pixels but filling them with averages of nearby actual colors - basically normal image smoothing.
You can do it through a custom shader if you want. Essentially you just bind your input texture, draw it as a fullscreen quad, and in the shader take multiple samples around each fragment, average them together, and write the result out to a new texture. The new texture can be an arbitrarily higher resolution than the input texture if you desire that as well.
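For reference, the arithmetic such a shader performs per fragment is just a box average over neighbouring texels; here is a CPU-side sketch in Python/NumPy (the GLSL version would take the same nine samples around each fragment):

import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.rand(240, 320, 3)  # stand-in H x W x RGB image in [0, 1]

# 3x3 box average over each channel: every output pixel is the mean of
# itself and its eight neighbours, exactly what the shader would sum.
smoothed = np.stack(
    [uniform_filter(img[..., ch], size=3) for ch in range(3)], axis=-1)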

Organizing Images By Color

Maybe you've noticed, but Google Image Search now has a feature that lets you narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a bicubic or bilinear filter (or something more advanced). This will vastly reduce the colour noise and give you a result that is closer to how humans perceive the image. E.g. a pixel raster consisting purely of yellow and blue pixels would average out to a neutral gray.
If you don't blur or downsize the image, you might still end up with the same gray average, but the deviation would be huge.
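A sketch of that first step with Pillow and NumPy (the file name is a placeholder):

import numpy as np
from PIL import Image

img = Image.open('photo.jpg').convert('RGB')  # placeholder file name
# Bicubic downscale to 10%: the filter averages away pixel-level noise.
small = img.resize((max(1, img.width // 10), max(1, img.height // 10)),
                   Image.BICUBIC)
# The mean RGB of the reduced image approximates the overall colour.
avg_rgb = np.asarray(small, dtype=float).reshape(-1, 3).mean(axis=0)
print(avg_rgb)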
The Google feature offers 12 colors with which to match images. So I would convert each of these swatches to Lab and plot its (a*, b*) coordinate in a two-dimensional space, dropping the L* component because the luminance (brightness) of a pixel should be ignored. Using the 12 points in (a*, b*) space, I'd compute a partitioning with a Voronoi diagram. Then, for a given image, I'd take each pixel and calculate its (a*, b*) coordinate. Doing this for every pixel in the image builds up a histogram of counts in each Voronoi partition. The partition containing the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
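A compact sketch of that pipeline in Python with scikit-image; the swatch values below are stand-ins (Google's actual 12 are not public), and assigning each pixel to its nearest swatch in (a*, b*) is exactly the Voronoi-cell membership described above:

import numpy as np
from skimage.color import rgb2lab

# Hypothetical reference swatches in sRGB, scaled to [0, 1].
swatches = np.array([[255, 0, 0], [0, 255, 0], [0, 0, 255],
                     [255, 255, 0], [255, 128, 0], [128, 0, 255]]) / 255.0
swatch_ab = rgb2lab(swatches[np.newaxis, :, :])[0, :, 1:]  # keep (a*, b*)

def dominant_color(img):
    # img: H x W x 3 floats in [0, 1]; returns the winning swatch's index.
    ab = rgb2lab(img)[..., 1:].reshape(-1, 2)          # drop L*, flatten
    # Nearest swatch in (a*, b*) == membership in its Voronoi cell.
    dist = np.linalg.norm(ab[:, None, :] - swatch_ab[None, :, :], axis=2)
    counts = np.bincount(dist.argmin(axis=1), minlength=len(swatches))
    return counts.argmax()

print(dominant_color(np.random.rand(32, 32, 3)))  # stand-in image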
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?
