Efficient algorithms for image pixel manipulation

Is there a more efficient way to access and change image pixels than the usual approach of scanning the pixel array and changing each one? I have some pseudocode, but I want a better method than this. I just need an algorithm; any language is fine. It looks something like this:
for i in range(0, len(pixel_array), 4):
    pixel_array[i] = a      # a is some value, for the red channel
    pixel_array[i + 1] = a  # green
    pixel_array[i + 2] = a  # blue
    pixel_array[i + 3] = 1  # alpha

I had a similar problem once while I was moving pixels around in an SDL_Texture. There is not really a better general option, unless you tell us what exactly you want to do. Do you have to manipulate pixels one by one, or can you set whole areas at once (e.g. if you draw a line, you could use memset to fill a whole range of the array with that value)? Check what can be done faster: drawing a rectangle, for instance, can be optimized by editing the pixels in bulk. But if you really need to change every pixel independently, then no, there isn't a faster approach besides doing it on the GPU instead of the CPU (e.g. using OpenGL).
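To make the bulk idea concrete, here is a minimal sketch in Python with NumPy (my own assumption; the question is language-agnostic), filling whole channels with strided slice assignments instead of a per-pixel loop:
import numpy as np

# Hypothetical flat RGBA buffer for a 100x100 image, 4 bytes per pixel
pixel_array = np.zeros(100 * 100 * 4, dtype=np.uint8)
a = 128  # some value, as in the question

# Bulk version of the loop above: each strided slice sets one channel
# of every pixel at once, instead of touching pixels one by one
pixel_array[0::4] = a    # red channel of every pixel
pixel_array[1::4] = a    # green
pixel_array[2::4] = a    # blue
pixel_array[3::4] = 255  # alpha (255 = fully opaque in an 8-bit buffer)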

Related

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably by finding some level in the middle rather than matching one image to the other, though that would be okay if it makes things easier). Would anyone have suggestions on how to go about this? I'm not really familiar with image analysis in MATLAB, so please bear with my follow-up questions should they arise. There is already a question on here about equalizing luminance, brightness and contrast for a set of images, but the code doesn't make much sense to me (due to my lack of experience working with images in MATLAB).
Currently I use GIMP to manipulate the images, but that is time-consuming with 160 images, and going by subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm tries its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read up about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better function to use. You call it almost the same way you'd call histeq, except you don't have to compute the histogram manually; you just specify the actual image to match:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch. I read the two images directly from Stack Overflow, perform histogram matching so that the first image matches the intensity distribution of the second, and show the results all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this image as the reference, so that the other images try to match their distribution of intensities to it. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
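Purely as an illustration under an assumed storage layout (PNG files in a single folder, which the answer above could not assume), a batch loop might look like this in Python with scikit-image; all paths are hypothetical and the images are assumed to be RGB:
import os
from skimage import io
from skimage.exposure import match_histograms

ref = io.imread('images/reference.png')  # hand-picked representative image
os.makedirs('matched', exist_ok=True)
for name in os.listdir('images'):
    if name == 'reference.png' or not name.endswith('.png'):
        continue
    img = io.imread(os.path.join('images', name))
    # Match this image's intensity distribution to the reference
    out = match_histograms(img, ref, channel_axis=-1).astype('uint8')
    io.imsave(os.path.join('matched', name), out)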

Trainable "Spam-Filter" for Images

I remember a story about someone filtering images with a spam filter that he fed with some training data.
I have reached the point where I need exactly something like this.
I have a lot of different types of images (mainly of people, e.g. selfies, group pictures, portraits, ...), but I only want a certain type of them (e.g. only male).
With the right algorithm and training data, I think it's possible to get it to the point where I can pass an image in and get back true or false depending on whether it matches my type.
I had a look at a few face/gender detection APIs, but none of them worked for me; that's why I want to try the spam-filter approach - it seems like a fun idea.
Here's what I need:
a trainable spam-filter algorithm/code sample/API
has to work offline
preferably for C# or Java
I already spent a few hours trying different things and googling; now I'd like to get your opinion on this problem and the solution you think is appropriate.
There is a simple image comparison algorithm that you can read about here: compareImages php class.
Basically the way it works is this:
it takes an image (a cropped image would be best), scales it down to an 8x8 pixel image, converts it to a black-and-white / greyscale image, and then calculates the mean (average) value of its pixels.
Then it goes over all 64 pixels of the scaled image; wherever a pixel's value >= the mean it puts a "1", and wherever a pixel's value < the mean it puts a "0", resulting in a 64-bit "signature" of 0s and 1s.
This signature is what identifies the image; you can then save it in some kind of database as your "learned" filter.
Then if an email arrives with some images, you can just crop them, scan them, produce a signature, and see if it matches any known signature in your database.
The good things about this algorithm are:
It is very fast and scalable (scaling an image down to 8x8 is fast, and scanning the pixels as described is fast too).
Because it converts the image to greyscale and resizes it down, it can recognize the same image across color variations and different sizes.
Because the signatures are only 64 bits, they don't take a lot of space in your database either.
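A minimal sketch of that signature computation in Python with Pillow (the question asks for C# or Java, so take this purely as an illustration of the algorithm, not a drop-in solution):
from PIL import Image

def average_hash(path):
    # Scale down to 8x8 pixels and convert to greyscale
    img = Image.open(path).convert('L').resize((8, 8))
    pixels = list(img.getdata())        # 64 intensity values
    mean = sum(pixels) / len(pixels)    # the mean value described above
    # One bit per pixel: 1 where value >= mean, 0 where it is below,
    # read off as a single 64-bit integer signature
    bits = ''.join('1' if p >= mean else '0' for p in pixels)
    return int(bits, 2)

# Two signatures can then be compared by Hamming distance:
# bin(average_hash('a.png') ^ average_hash('b.png')).count('1')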
Hope this helps.

How can I deblur an image in matlab?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721@N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image at what seems to be the original size (75x75), and you can see here a zoomed segment (one little square = one pixel):
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (it doesn't always give good results, but it's enough in your case) is to binarize the image with a 0.5 threshold, like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case, a Gaussian or Airy profile, so probably an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it with that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
When working with images, one of the most common operations is applying a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B)
And most decent photo editors have such filters under one name or another (usually "sharpen").
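As a hedged sketch of such a filter in Python with SciPy (the 3x3 kernel is the classic sharpen kernel, not anything specific to MATLAB, and the input array here is a stand-in):
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(64, 64)  # stand-in for a real 2-D greyscale image

# Classic 3x3 sharpen kernel: boost the centre pixel, subtract the neighbours
kernel = np.array([[ 0, -1,  0],
                   [-1,  5, -1],
                   [ 0, -1,  0]])
sharpened = convolve2d(image, kernel, mode='same', boundary='symm')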
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this, you could always use a simple threshold: set everything above a certain threshold to white and everything below it to black. Once again, most photo editing software makes this really easy.
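A sketch of that threshold in Python with NumPy and Pillow (the filename and the cut-off value of 128 are assumptions to tune):
import numpy as np
from PIL import Image

img = np.asarray(Image.open('blurred.png').convert('L'))  # hypothetical file
# Everything above the threshold becomes white, everything below black
binary = np.where(img > 128, 255, 0).astype(np.uint8)
Image.fromarray(binary).save('binarized.png')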
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
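For example, with Pillow's built-in filter (the radius and strength values are arbitrary starting points, not tuned for this image):
from PIL import Image, ImageFilter

img = Image.open('blurred.png')  # hypothetical file
# Unsharp masking: subtract a blurred copy to exaggerate edges;
# radius is the blur size, percent the sharpening strength
sharp = img.filter(ImageFilter.UnsharpMask(radius=3, percent=150, threshold=2))
sharp.save('sharpened.png')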

Detecting if two images are visually identical

Sometimes two image files may be different on a file level, but a human would consider them perceptively identical. Given that, now suppose you have a huge database of images, and you wish to know if a human would think some image X is present in the database or not. If all images had a perceptive hash / fingerprint, then one could hash image X and it would be a simple matter to see if it is in the database or not.
I know there is research around this issue, and some algorithms exist, but is there any tool, like a UNIX command line tool or a library I could use to compute such a hash without implementing some algorithm from scratch?
edit: relevant code from findimagedupes, using ImageMagick
try $image->Sample("160x160!");
try $image->Modulate(saturation=>-100);
try $image->Blur(radius=>3,sigma=>99);
try $image->Normalize();
try $image->Equalize();
try $image->Sample("16x16");
try $image->Threshold();
try $image->Set(magick=>'mono');
($blob) = $image->ImageToBlob();
edit: Warning! ImageMagick $image object seems to contain information about the creation time of an image file that was read in. This means that the blob you get will be different even for the same image, if it was retrieved at a different time. To make sure the fingerprint stays the same, use $image->getImageSignature() as the last step.
findimagedupes is pretty good. You can run "findimagedupes -v fingerprint images" to let it print "perceptive hash", for example.
Cross-correlation or phase correlation will tell you if the images are the same, even with noise, degradation, and horizontal or vertical offsets. Using the FFT-based methods will make it much faster than the algorithm described in the question.
The usual algorithm doesn't work for images that are not the same scale or rotation, though. You could pre-rotate or pre-scale them, but that's really processor intensive. Apparently you can also do the correlation in a log-polar space and it will be invariant to rotation, translation, and scale, but I don't know the details well enough to explain that.
MATLAB example: Registering an Image Using Normalized Cross-Correlation
Wikipedia calls this "phase correlation" and also describes making it scale- and rotation-invariant:
The method can be extended to determine rotation and scaling differences between two images by first converting the images to log-polar coordinates. Due to properties of the Fourier transform, the rotation and scaling parameters can be determined in a manner invariant to translation.
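A bare-bones phase correlation sketch in Python with NumPy, recovering the (dy, dx) translation between two equally sized greyscale arrays (no log-polar extension, so no rotation or scale invariance):
import numpy as np

def phase_correlation(a, b):
    # Cross-power spectrum of the two images, normalized to phase only
    fa = np.fft.fft2(a)
    fb = np.fft.fft2(b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    # The location of the peak gives the circular shift between the images
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx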
A colour histogram is good for the same image that has been resized, resampled, etc.
If you want to match different people's photos of the same landmark, it's trickier: look at Haar classifiers. OpenCV is a great free library for image processing.
I don't know the algorithm behind it, but Microsoft Live Image Search just added this capability. Picasa also has the ability to identify faces in images, and groups faces that look similar. Most of the time, it's the same person.
Some machine learning technique like a support vector machine, neural network, naive Bayes classifier, or Bayesian network would be best at this type of problem. I've written one of each of the first three to classify handwritten digits, which is essentially image pattern recognition.
Resize the image to 1x1 pixel... if the pixels are exactly equal, there is a small probability they are the same picture.
Now resize to a 2x2 pixel image; if all 4 pixels are exactly equal, there is a larger probability they are the same.
Then 3x3; if all 9 pixels are exactly equal, there's a good chance, etc.
Then 4x4; if all 16 pixels are exactly equal, a better chance still.
Etc...
Doing it this way, you can make efficiency improvements: if the 1x1 pixel grid is off by a lot, why bother checking the 2x2 grid?
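A sketch of that coarse-to-fine check in Python with Pillow (the per-pixel tolerance of 10 and the 4x4 stopping size are arbitrary choices):
from PIL import Image

def probably_same(path_a, path_b, max_size=4):
    a = Image.open(path_a).convert('L')
    b = Image.open(path_b).convert('L')
    for n in range(1, max_size + 1):
        # Compare both images at an n x n resolution
        pa = a.resize((n, n)).getdata()
        pb = b.resize((n, n)).getdata()
        if any(abs(x - y) > 10 for x, y in zip(pa, pb)):
            return False  # already different at a coarse grid: stop early
    return True  # matched at every resolution checked so far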
If you have lots of images, a color histogram could be used to get a rough closeness measure before doing a full comparison of each image against every other one (i.e. O(n^2)).
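One way to sketch that rough pre-check is histogram intersection in Python with Pillow (the idea only; a real pipeline would pick a cut-off, say 0.9, below which the full comparison is skipped):
from PIL import Image

def histogram_similarity(path_a, path_b):
    # 768-bin RGB histograms (256 bins per channel)
    ha = Image.open(path_a).convert('RGB').histogram()
    hb = Image.open(path_b).convert('RGB').histogram()
    ta, tb = sum(ha), sum(hb)
    # Normalized histogram intersection: 1.0 means identical distributions
    return sum(min(x / ta, y / tb) for x, y in zip(ha, hb))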
There is DPEG, "The" Duplicate Media Manager, but its code is not open. It's a very old tool - I remember using it in 2003.
You could use diff to see if they are REALLY different; I guess that would remove lots of useless comparisons. Then, for the algorithm, I would use a probabilistic approach: what are the chances that they look the same? I'd base that on the RGB values of each pixel. You could also use other metrics such as luminosity and things like that.

Piece together several images into one big image

I'm trying to put several images together into one big image, and am looking for an algorithm which determines the placing most optimally. The images can't be rotated or resized, but the position in the resulting image is not important.
edit: added no resize constraint
Possibly you are looking for something like this: Automatic Magazine Layout.
Apparently it's called a 'packing problem', which is something frequently used in game programming. For those interested, here are some suggested implementations:
Packing Lightmaps,
Rectangle packing and
Rectangle Placement
I created an algorithm for this once; it's actually a variant of the NP-hard bin packing problem, but with an infinite bin size.
You could try to find some articles about it and optimize your algorithm, but in the end it will remain a brute-force search over every possibility, trying to minimize the resulting bin size.
If you don't need the best solution but just a solution, you can avoid brute-forcing all the combinations. I once created a program that did exactly that.
Description:
Images: array of the input images
ResultMap: 2d array of Booleans
FinalImage: large image
Sort the Images array so that the largest image is at the top.
Calculate the total size of your images and initialise the ResultMap to 1.5 times that total size (you could make this step smarter for better memory usage and performance), filling it with False values.
Then add the first image at the left of your FinalImage and set the Booleans in ResultMap to True from (0,0) to (ImageHeight, ImageWidth).
The ResultMap is used to quickly check whether you can fit an image onto the current FinalImage. You could optimize it to use an int32 and use each bit for one pixel. This would reduce memory and increase performance, because you could check 32 bits at once (using a mask), but it becomes more difficult because you would have to think about the masks needed at the edges of your image.
Now I will describe the real loop of the "algorithm".
For each image in the array, try to find a place where it fits. You could write a loop that looks through the ResultMap for a False value and then checks whether the map stays False in both directions for the size of the image being placed.
If you find a place, copy the image into the FinalImage and update the corresponding Booleans in the ResultMap.
If you can't find a place, increase the size of the FinalImage just enough (look at the edges to see where the minimal amount of extra space is needed) and sync that with the ResultMap.
GOTO 1 :)
It's not optimal, but it can solve the problem in a reasonably good way (especially if there are a few smaller images to fill up the gaps at the end).
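A condensed sketch of that loop in Python, with a fixed canvas instead of the growing step to keep it short (the image sizes in the example call are made up):
def pack(images, width, height):
    # images: list of (img_id, w, h); occupied is the Boolean ResultMap
    occupied = [[False] * width for _ in range(height)]
    placements = {}

    def fits(x, y, w, h):
        if x + w > width or y + h > height:
            return False
        return not any(occupied[y + j][x + i]
                       for j in range(h) for i in range(w))

    def mark(x, y, w, h):
        for j in range(h):
            for i in range(w):
                occupied[y + j][x + i] = True

    # Largest image first, as in step 1
    for img_id, w, h in sorted(images, key=lambda t: t[1] * t[2], reverse=True):
        # Scan the map for the first free spot where the image fits
        spot = next(((x, y) for y in range(height) for x in range(width)
                     if fits(x, y, w, h)), None)
        if spot is None:
            return None  # the full algorithm would grow the canvas here
        mark(*spot, w, h)
        placements[img_id] = spot
    return placements

# e.g. pack([('a', 40, 30), ('b', 20, 20), ('c', 10, 50)], 64, 64)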
Optimal packing is hard, but there might be simplifications available to you depending on the details of your problem domain. A few ideas:
If you can carve up your bitmaps into equally sized tiles, then packing is trivial. Then, on-demand, you'd reassemble the bitmaps from the tiles.
Sort your images largest to smallest, then, for each image use a greedy-allocator to select the first available sub-rectangle that fits the image.
Use a genetic algorithm. Start with several randomly-selected layouts. Score them based on how tightly they're packed. Mix solutions from the top scoring ones, and iterate until you get to an acceptable score.
You are probably looking for SIFT
http://www.cs.ubc.ca/~lowe/keypoints/
http://user.cs.tu-berlin.de/~nowozin/autopano-sift/technicaldetails.html
In a non-programmatic way, you can use the MS Paint feature "Paste From", i.e. paste a (JPEG) file into the MS Paint image area. Using this you can arrange the individual images and create one final big image, saving it in JPEG/GIF/raw BMP format.
