Problems with image convolution

I was given an assignment in which I have to write a function to convolve an image with a (precomputed) Gaussian filter. I have done it, but there is a slight difference between my output and my professor's. The difference is at the top of the head (the first image is my output, the second is my professor's), so it looks like I missed something. My output looks better to me, but the server rejects it. Can anyone point out the differences between the two images so I can try to fix it?
This is my output
This is my prof's output
This is the original one
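Differences confined to the image border (like the top of the head here) usually come down to how the edge of the image is padded during convolution. The assignment's actual language and filter aren't shown, so this Python sketch uses an illustrative 5x5 Gaussian kernel to show how two common padding choices agree in the interior but differ at the border:

```python
import numpy as np
from scipy.ndimage import convolve

# Hypothetical 5x5 Gaussian kernel (your precomputed filter goes here)
x = np.arange(-2, 3)
xx, yy = np.meshgrid(x, x)
kernel = np.exp(-(xx**2 + yy**2) / 2.0)
kernel /= kernel.sum()

rng = np.random.default_rng(0)
image = rng.random((32, 32))   # stand-in for the real picture

# Border handling is where implementations usually diverge:
out_zero = convolve(image, kernel, mode='constant', cval=0.0)  # zero padding
out_mirror = convolve(image, kernel, mode='reflect')           # mirrored border
```

Comparing `out_zero` and `out_mirror` pixel by pixel: they match everywhere except within two pixels of the border, which is exactly the kind of localized difference described above.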

Related

Process noisy image to get clean polygons

I am implementing an image segmentation task and the output has significant noise. I want to post-process the output image by removing all the noise and keeping only the rectangular boxes (or polygons).
Can someone please suggest good image processing techniques for this?
For instance, the images we need to process look like the following:
and the desired output should be something like the following:
You could try the techniques described here; they may well work for your problem:
4 Image Segmentation Techniques in OpenCV Python
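A classic recipe for this kind of cleanup is morphological opening followed by an area filter on connected components. A minimal sketch with SciPy (a synthetic noisy mask stands in for the real segmentation output, and the area threshold of 500 is an assumed value to tune):

```python
import numpy as np
from scipy import ndimage

# Synthetic noisy mask standing in for the segmentation output
rng = np.random.default_rng(0)
mask = np.zeros((200, 200), dtype=bool)
mask[40:120, 40:160] = True                    # the one "real" box
mask |= rng.random((200, 200)) > 0.995         # salt-noise speckles

# Morphological opening (erosion then dilation) removes isolated speckles
clean = ndimage.binary_opening(mask, structure=np.ones((5, 5)))

# Keep only connected components larger than an area threshold
labels, n = ndimage.label(clean)
sizes = ndimage.sum(clean, labels, range(1, n + 1))
keep = np.isin(labels, 1 + np.flatnonzero(sizes > 500))
```

The same two steps are available in OpenCV as `cv2.morphologyEx` with `cv2.MORPH_OPEN` plus `cv2.findContours` with a `cv2.contourArea` filter.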

How to convert an image to animation?

I would like to apply some recursive mathematical operation to an arbitrary photo or image that yields an interesting animation: the first frame is the original picture, and then the pixels are transformed in a way that leads to an interesting animation, for example a fractal animation, particle diffusion, or something similar. What approaches should I follow, and what software should I use?
This is a very broad question, and it's hard to answer general "how do I do this" type questions. Stack Overflow is designed more for specific "I tried X, expected Y, but got Z instead" type questions. That being said, I'll try to answer in a general sense:
Step 1: Download Processing. You've tagged this with the processing tag, which is for questions about the Processing language. I can't tell whether that was on purpose, but it doesn't matter since Processing happens to be perfect for this task.
Step 2: Create a simple sketch that displays a single image, without any effects or animation. You need to break your problem down into smaller steps.
Step 3: Create a function that takes a PImage or a PGraphics and does the image manipulation for a single iteration of your algorithm. There are plenty of useful functions in the reference for manipulating the pixels in an image.
Step 4: Start with your initial image and display it for X frames. Then pass that image into the function you created in step 3 and display the result for X frames. Then pass the updated image back into your function, and repeat the process as many times as you want.
You could also generate every single frame ahead of time. Completely up to you.
There are a bunch of ways to do this, but those steps are the general outline. Please try something out, and break your problem down into smaller steps. Don't try to tackle your whole goal at one time. Instead, ask yourself: what's the smallest, simplest, easiest thing I know I have to do next?
If you get stuck, post an MCVE (note: not your whole sketch, just a small example that shows the problem you're stuck on) and a specific question, and we'll go from there. Good luck.
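The iterate-and-redisplay loop from steps 3 and 4 can be sketched outside Processing too; here is a Python/NumPy version where a crude diffusion stands in for the "interesting" per-frame transformation (the real transformation and frame count are up to you):

```python
import numpy as np

def step(img):
    """One iteration: crude diffusion by averaging each pixel with its 4 neighbours."""
    out = img.copy()
    out[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                       img[1:-1, :-2] + img[1:-1, 2:] +
                       img[1:-1, 1:-1]) / 5.0
    return out

img = np.random.default_rng(0).random((64, 64))  # stand-in for the loaded photo
frames = [img]
for _ in range(9):                               # precompute 10 frames in total
    img = step(img)
    frames.append(img)
# each entry of `frames` would then be shown for X display frames, or written out as a GIF
```

In Processing the equivalent would be a `draw()` loop that calls your step function on a `PImage` every X frames.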

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle and not equate one image to another - though this could be okay if that makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in Matlab so please bear with my follow-up questions should they arise. There is a question for Equalizing luminance, brightness and contrast for a set of images already on here but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently, I use Gimp to manipulate images but it's time consuming with 160 images and also just going with subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities (i.e. the histogram) of a source image. This is also called histogram matching, and you can read about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better function to use. It's called almost the same way as histeq, except that you don't have to compute the histogram manually. You just specify the actual image to match:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read in the two images directly from StackOverflow, I perform a histogram matching so that the first image matches in intensity distribution with the second image and we show this result all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference so that the other images will try and match their distribution of intensities to this source image. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
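The per-image loop described above can be sketched without knowing the storage details. Here is a Python/NumPy version of the same quantile-mapping idea behind imhistmatch (not MATLAB's exact algorithm), with randomly generated stand-ins for the real images:

```python
import numpy as np

def match_histograms(src, ref):
    """Quantile mapping: give src the intensity distribution of ref (grayscale)."""
    s_vals, s_idx, s_counts = np.unique(src.ravel(), return_inverse=True,
                                        return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # for each source quantile, look up the reference value at the same quantile
    matched = np.interp(s_cdf, r_cdf, r_vals)
    return matched[s_idx].reshape(src.shape)

rng = np.random.default_rng(1)
reference = rng.integers(0, 256, (64, 64))             # the chosen "best" image
images = [rng.integers(0, 128, (64, 64)) for _ in range(3)]
equalized = [match_histograms(im, reference) for im in images]
```

The MATLAB equivalent is the same loop with `out = imhistmatch(images{k}, reference);` in the body, whatever container the images live in.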

matlab image comparison

I am trying to set up a database of images that can be used to compare to a current image (so if the current image is equal, or almost equal, to the one being compared, it'll give a match).
However, to start this project off I want to just compare two images using Matlab to see how the process works.
Does anyone know how I might compare, say, image1.jpg and image2.jpg to see how closely related they are? Basically, if I compare image1.jpg with image1.jpg the relationship should be 100%, but comparing two different images might still give quite a close relationship.
I hope that makes some sense!
Thanks,
Well, the method to use greatly depends on what you define as similar images. If, for example, you can guarantee that translations (shifts in the x and y directions) are very small (no more than a few pixels), a simple RMS difference measure might do fine. If this is not the case, brute-force template search methods might be an option. At the other end of the scale are advanced recognition techniques using morphological measures.
The first and simplest approach might look something like this:
errorMeasure = sqrt(sum(sum(sum((double(image1) - double(image2)).^2)))) % cast to double so uint8 subtraction doesn't clip at zero
This method simply takes the pixel-wise difference and computes the "energy" of the error.
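To get the "100% for identical images" behaviour asked for, the RMS error just needs to be squashed into a score. A sketch in Python/NumPy (the squashing function here is an arbitrary illustrative choice, not a standard metric):

```python
import numpy as np

def similarity(a, b):
    """Crude similarity score: exactly 1.0 for identical images, lower otherwise."""
    diff = a.astype(float) - b.astype(float)
    rms = np.sqrt(np.mean(diff ** 2))
    return 1.0 / (1.0 + rms)   # arbitrary squashing into (0, 1], just for illustration

rng = np.random.default_rng(0)
img1 = rng.random((32, 32))
img2 = img1 + 0.1 * rng.random((32, 32))   # a slightly perturbed copy

print(similarity(img1, img1))  # 1.0 for identical images
print(similarity(img1, img2))  # a little below 1.0
```

Note this score is only meaningful when the two images are aligned, per the caveat about translations above.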

How can I deblur an image in matlab?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721@N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75), and you can see a zoomed segment here (one little square = one pixel):
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (not always giving good results, but enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case a Gaussian or Airy profile, so probably an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it with that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
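That create/blur/compare loop is essentially iterative deconvolution; the Richardson-Lucy algorithm is its classic form. A Python/NumPy sketch on a synthetic image (the real point-spread function is unknown, so a box blur stands in for it):

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively refine an estimate so that estimate blurred by psf approaches observed."""
    estimate = np.full_like(observed, 0.5)
    psf_flipped = psf[::-1, ::-1]              # adjoint of the blur operator
    for _ in range(iterations):
        blurred = convolve(estimate, psf, mode='reflect')
        ratio = observed / (blurred + 1e-12)   # where are we under/over-predicting?
        estimate = estimate * convolve(ratio, psf_flipped, mode='reflect')
    return estimate

# Synthetic test: blur a bright square, then try to undo it
truth = np.zeros((32, 32))
truth[12:20, 12:20] = 1.0
psf = np.ones((5, 5)) / 25.0                   # stand-in for the unknown blur kernel
observed = convolve(truth, psf, mode='reflect')
restored = richardson_lucy(observed, psf)
```

With real data the PSF must itself be estimated (or guessed, as in the answer above), which is what makes blind deconvolution hard.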
When working with images one of the most common things is to use a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B).
And most decent photo-editing programs have these filters under one name or another (usually "sharpen").
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image you could always use a simple threshold. Set everything above a certain threshold to white, and everything below to black. Once again most photo editing software makes this really easy.
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
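Unsharp masking is only a few lines: blur the image, treat the difference from the original as "detail", and add an amplified copy of that detail back. A Python/NumPy sketch (the sigma and amount values are illustrative defaults to tune):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(img, sigma=2.0, amount=1.5):
    """Sharpen by adding back the detail that blurring removed."""
    blurred = gaussian_filter(img, sigma)
    detail = img - blurred                       # high-frequency residue
    return np.clip(img + amount * detail, 0.0, 1.0)

img = np.random.default_rng(0).random((64, 64))  # stand-in image, values in [0, 1]
sharp = unsharp_mask(img)
```

As noted above, this boosts existing edges but cannot recover information the blur destroyed.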
