I would like to combine two images that partially contain content and are otherwise transparent (alpha = 0). Where the content of the two images overlaps, I would like to use half the color value (alpha = 0.5) from the first image combined with half the color value of the other image. All pixels that still do not contain content should remain transparent. I can't seem to find a convenient way to do this using Core Graphics or Core Image, or maybe I am missing something... Does anyone have any tips on how to do this?
If anyone else encounters this problem: I was able to solve it using pixel-wise processing, inspired by this answer https://stackoverflow.com/a/31661519/3652610
and the alpha blending described here: https://stackoverflow.com/a/727339
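For reference, here is a minimal sketch of the blend itself; the math is platform-agnostic, so it's written here as MATLAB-style array code. It assumes im1 and im2 are same-sized HxWx4 RGBA arrays with values in [0, 1]:

% Masks of where each image actually has content (alpha > 0)
m1 = im1(:,:,4) > 0;
m2 = im2(:,:,4) > 0;
both = m1 & m2;

out = zeros(size(im1));
for c = 1:4
    c1 = im1(:,:,c);
    c2 = im2(:,:,c);
    oc = zeros(size(c1));
    oc(m1 & ~m2) = c1(m1 & ~m2);                 % only image 1 has content
    oc(~m1 & m2) = c2(~m1 & m2);                 % only image 2 has content
    oc(both) = 0.5 * c1(both) + 0.5 * c2(both);  % overlap: half of each
    out(:,:,c) = oc;
end
% Pixels where neither image has content keep alpha = 0, i.e. stay transparent.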
I have a bunch of optical microscopy images of graphene, where we've taken an image before and after the samples have been subjected to different treatments. Unfortunately something has gone wrong in the calibration of most of the latter images, meaning that they look darker and have a different color compared to the first. We have 40+ image pairs, and manually color correcting them is both extremely time consuming and too imprecise. What I would like to do is find a good basis for color correction from one photo to another, which I can then apply to all the images. Below is an example: before (upper) and after (lower).
Does anyone know a way to do this?
Full-quality example images:
https://imgur.com/a/93KRhLM
I've tried to manually color correct in Gimp, but I gave up after two hours of adjustments.
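One hedged starting point (an assumption on my part, not something taken from an answer): if the "before" images are trustworthy, histogram matching can pull each dark "after" image toward its "before" partner. In MATLAB, with hypothetical file names:

before = imread('before_01.png');        % trusted reference image
after  = imread('after_01.png');         % miscalibrated darker image
corrected = imhistmatch(after, before);  % match color histograms to the reference
figure; imshowpair(before, corrected, 'montage');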
I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle and not equate one image to another - though this could be okay if that makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in Matlab so please bear with my follow-up questions should they arise. There is a question for Equalizing luminance, brightness and contrast for a set of images already on here but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently, I use Gimp to manipulate images but it's time consuming with 160 images and also just going with subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better option to use. You call it almost the same way you'd call histeq, except you don't have to compute the histogram manually. You just specify the actual image to match against:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from StackOverflow, perform histogram matching so that the first image matches the intensity distribution of the second, and show all of the results in one window.
% Read the two images directly from StackOverflow
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');

% Match the intensity distribution of im1 to that of im2
out = imhistmatch(im1, im2);

% Show both inputs and the matched result in one window
figure;
subplot(1,3,1); imshow(im1);
subplot(1,3,2); imshow(im2);
subplot(1,3,3); imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference, so that the other images will try to match their distribution of intensities to it. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
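For illustration only, a rough sketch of that final loop, assuming the images are PNG files in the current folder and ref.png is the chosen representative image (both assumptions on my part):

ref = imread('ref.png');                 % representative image with good brightness/contrast
files = dir('*.png');
for k = 1:numel(files)
    img = imread(files(k).name);
    matched = imhistmatch(img, ref);     % match intensities to the reference
    imwrite(matched, ['matched_' files(k).name]);
end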
Basically what I'm trying to do is overlay two images using predefined points on each image.
The images will probably be of two different sizes or scaled differently; I don't know this for sure yet. But the images are of the same thing. So what I want to do is say this spot on image one is the same as this spot on image two, do this for multiple spots, and then have MATLAB resize or transform the images to get all those points lined up so that the two images can be overlaid. The thing that's confusing me is having MATLAB automatically adjust the images so that they can "fit" together.
I have no idea where to start on this, and was just hoping to get a general idea of what I may be able to do.
Just in case someone else knows how to do this, I'll throw in what else I need to do. After the two images are on top of each other, one image will be a region map and the other a real image. What I need MATLAB to do is count the number of dots from the real image in each region of the map.
Thanks for any help.
What you are trying to do is called image registration, which is a very common image processing task. You won't need to write much code because MATLAB has built-in functions for this. You use cp2tform to create a transform from the first image to the second, and can then apply the transform to the first image using the imtransform function. The code will look something like this, assuming the x,y coordinates of the control points are in an m-by-2 matrix called points1 for image1 and points2 for image2:
tform = cp2tform(points1, points2, 'similarity');  % similarity: translation, rotation, scale
registered = imtransform(image1, tform);           % warp image1 to line up with image2
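For the dot-counting follow-up (not covered by the answer above), a hedged sketch: assuming dots is an N-by-2 list of [x y] dot locations detected in the registered image, and regionMap is a label image where 0 is background:

% Look up the region label underneath each dot
idx = sub2ind(size(regionMap), round(dots(:,2)), round(dots(:,1)));
labels = regionMap(idx);
labels = labels(labels > 0);               % drop dots that fall on the background
counts = accumarray(double(labels(:)), 1); % counts(r) = number of dots in region r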
I am new to MATLAB. I am working on a project which takes as input an image like this:
As we can see, it has a plain background (blue), and the system will generate a passport-size image from it with the given ratios. First I am working on separating the background from the person. The approach I found is: if a pixel in the image's RGB matrices is blue, it is background, and the rest is the person. But I am a little bit confused about whether this approach is correct, and if it is, how can I find out whether the current pixel is blue? Can I do it with the MATLAB function find? Any help would be appreciated.
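A hedged sketch of the blue-pixel test described in the question (the thresholds are illustrative guesses, not tuned values, and the file name is hypothetical):

img = im2double(imread('input.jpg'));
R = img(:,:,1); G = img(:,:,2); B = img(:,:,3);
isBlue = (B > 0.4) & (B > R + 0.1) & (B > G + 0.1);  % blue clearly dominates
person = ~isBlue;                         % everything that is not background
[rows, cols] = find(isBlue);              % find() gives the background pixel coordinates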
If you want to crop your image based on the person's face, then there is no need to separate the background from the foreground. Nowadays you will easily find ready implementations of face detection, so, unless you want to implement your own method because the ready ones fail, this should be a non-issue. See, for example (in Mathematica):
Show[img,
 Graphics[{EdgeForm[{Yellow, Thick}], Opacity[0],
   Rectangle @@@
    FindFaces[img = Import["http://i.stack.imgur.com/cSwzj.jpg"]]}]]
Supposing the face is detected correctly, you can expand/retract its bounding box to match the size you are after.
I heard that only premultiplied alpha is needed when doing layer blending etc. How do I know if my original image is premultiplied alpha?
You can't.
The only thing that you can check is whether it is definitely not premultiplied. To do that, go over all the pixels and see if there is a color value higher than the alpha would permit in a premultiplied image:

if (max(col.r, col.g, col.b) > 255 * alpha) // not premultiplied

Any other case is ambiguous and could or could not be premultiplied. Your best guess is probably to assume that images aren't premultiplied, as that's the case for most PNGs.
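A sketch of that check over a whole image, written in MATLAB under the assumption that rgba is an HxWx4 uint8 array (so colors and alpha are both in 0..255):

rgb = double(rgba(:,:,1:3));
a   = double(rgba(:,:,4));
% In a premultiplied image no color channel can exceed the alpha channel
definitelyNotPremultiplied = any(any(max(rgb, [], 3) > a));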
Edit: actually, not even the check I posted is reliable, as there are a lot of PNGs out there with a white matte, so the image would have to include parts with an alpha of 0 to determine the matte color first.
Android's Bitmap stores images loaded from PNG with premultiplied alpha. You can't get the non-premultiplied original colours from it in the usual way.
In order to load images without the RGB channels being premultiplied, I have to use the 3rd-party PNGDecoder from here: http://twl.l33tlabs.org/#downloads