Measuring Intensity Using ImageJ - threshold

We have to measure the intensity of fluorescence in certain regions of images using ImageJ. We came up with the steps below to measure the intensity. While they do seem correct, my question is: are we actually measuring intensity correctly using the following steps, or are we erroneously measuring something else and believing that value is the intensity?
Make the image 8-bit
Threshold the image (Image > Adjust > Threshold) to outline all the regions and click Apply
Open Analyze > Analyze Particles. Make sure "Add to Manager" is checked
Analyze > Analyze particles > Show > Bare Outlines. This will open a new image.
Open the color microscopy image. Then, Image > Overlay > From ROI Manager.
Image > Overlay > To ROI Manager.
In ROI Manager: press “measure.” (a Results window with individual data points will pop up)
Right click in the Results window and click Summarize.
Record mean intensity data
Are we correctly measuring mean intensity using the above steps?

There are some things you need to be aware of when measuring intensity in ImageJ:
ImageJ automatically converts images to 8-bit
You should ALWAYS use RAW if available
If your microscope cannot save files in a RAW format, you must use .tif; other formats create artifacts
Save channels separately, not as an RGB stack, if you are using the .tif format
Before thresholding you must split the channels, especially if you are looking for fluorescence at a certain wavelength.
You also need to use the "Watershed" function after thresholding so that it outlines individual cells, allowing you to avoid outlining a group of cells globbed together; this way you can measure their individual intensities. However, it is not perfect, so once the ROI Manager pops up and the cells have been outlined by the particle analysis, you must go through and ensure each ROI is measuring a single cell. Any outline containing more than one cell should be deleted. Also look for odd-shaped cells or micro-nuclei, which should also be deleted.
Now I assume you are measuring human cancer cells. You should use these settings:
After all of this, you apply the overlay. Whatever image you are overlaying onto must also have its channels split, and you should use the channel that contains the wavelength you are measuring (e.g., Alexa Fluor 488 would be in the GFP channel).
If you have been collecting data without doing this, I would discard all of it, as the procedure used to collect it hasn't controlled for anything.
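To make the measurement itself concrete, here is a minimal sketch of the same idea outside ImageJ, written in Python with scikit-image. It is not the exact ImageJ pipeline above; the filename, threshold choice and size cutoff are placeholders, but it shows what is actually being computed: the mean of the original single-channel pixel values inside each thresholded, watershed-split region.
# Minimal sketch only -- not the ImageJ workflow itself. Assumes a single
# fluorescence channel saved as "channel_green.tif" (hypothetical name).
import numpy as np
from scipy import ndimage as ndi
from skimage import io, filters, measure, segmentation

img = io.imread('channel_green.tif')            # one channel, not an RGB composite
mask = img > filters.threshold_otsu(img)        # global threshold of the regions

# crude split of touching cells, analogous to ImageJ's binary Watershed
distance = ndi.distance_transform_edt(mask)
markers = measure.label(distance > 0.5 * distance.max())
labels = segmentation.watershed(-distance, markers, mask=mask)

# mean of the ORIGINAL pixel values inside each region -- this is the
# "mean intensity" you want, not a measurement of the binary mask
for region in measure.regionprops(labels, intensity_image=img):
    if region.area < 50:                        # skip debris; tune for your cells
        continue
    print(region.label, region.area, region.mean_intensity)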

Related

Algorithm or tool for finding edges of a different area when comparing two images

I am working on a community project whose goal is to reduce speeding violations. In order to recognize cars' license plates I am using OpenALPR. The problem is that it is sensitive to camera position, that is, the angle: OpenALPR has trouble detecting the LP when the angle is greater than 20 degrees (yes, I've read the recommendations about good camera position, but in real life they sometimes cannot be satisfied).
I found that the problem is that the LP area is not detected. However, manually cropping the image to contain just the car, without any other modification of the pixels (such as filtering), fixes the problem and OpenALPR is able to detect the LP area.
I am looking for a solution that can do the cropping automatically: either an algorithm or a tool that can compare two images, "base" and "target", and return the coordinates (top left, bottom right) of the changed area in the target image.
An alternative solution would be a different configuration file for OpenALPR. I have been experimenting with this for the last few hours, but with no success.
Base image will look like:
Target image will look like:
(these are just two frames from a video)
(original image size is much bigger, i.e. 3840x2160)
Are there algorithm(s) or tools that can help me with automating this task?
The basic method is differencing, i.e. taking the absolute difference of the RGB component values pixel by pixel. Where differences are large, there is a detection.
But this can work poorly (and it does with the given images) because the two pictures may be slightly unaligned, and wind can move the vegetation.
So I recommend the following (a rough code sketch is given after these notes):
reduce the image resolution by a significant factor (say 8);
blur the reduced images;
compute the absolute differences;
keep the largest differences among the components;
binarize with a threshold;
finally use connected components labelling to find the most significant blob and eliminate the residual interferences.
Make sure to refresh the background image (when you are sure there is no car) to avoid the effect of daily drift (there are always slow changes). It may also be useful to normalize the image intensity to counteract changes in ambient lighting (passing clouds, for instance).
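Here is a rough sketch of that pipeline in Python with OpenCV, just to make the steps concrete. The file names, scale factor, blur size and threshold are assumptions to be tuned, not tested values.
# Rough sketch: downscale, blur, absolute difference, per-pixel max over the
# channels, threshold, then keep the largest connected blob.
import cv2
import numpy as np

base = cv2.imread('base.png')
target = cv2.imread('target.png')

scale = 1 / 8                                                 # 1. reduce resolution
small_base = cv2.resize(base, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
small_target = cv2.resize(target, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

small_base = cv2.GaussianBlur(small_base, (5, 5), 0)          # 2. blur
small_target = cv2.GaussianBlur(small_target, (5, 5), 0)

diff = cv2.absdiff(small_base, small_target)                  # 3. absolute differences
diff = diff.max(axis=2)                                       # 4. largest per-pixel difference
_, binary = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)   # 5. binarize

# 6. connected components: keep the largest blob as the changed area
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
if n > 1:
    biggest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h = stats[biggest, :4]
    f = int(1 / scale)                                        # map the box back to full resolution
    print('crop box:', (x * f, y * f), ((x + w) * f, (y + h) * f))
Cropping the original frame to that box (with some margin) before handing it to OpenALPR should reproduce what you did manually.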

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle and not equate one image to another - though this could be okay if that makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in Matlab so please bear with my follow-up questions should they arise. There is a question for Equalizing luminance, brightness and contrast for a set of images already on here but the code doesn't make much sense to me (due to my lack of experience working with images in Matlab).
Currently, I use Gimp to manipulate images but it's time consuming with 160 images and also just going with subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching and you can read up about it on my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better function to use. You call it almost the same way as histeq, except that you don't have to compute the histogram manually; you just specify the actual image to match against:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from Stack Overflow, perform histogram matching so that the first image matches the intensity distribution of the second image, and show the results all in one window.
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
out = imhistmatch(im1, im2);
figure;
subplot(1,3,1);
imshow(im1);
subplot(1,3,2);
imshow(im2);
subplot(1,3,3);
imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source, then try to match the second image to the first. Just swap the two parameters of imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference so that the other images will try and match their distribution of intensities to this source image. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
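If scripting the batch step outside MATLAB is easier (you mentioned using Gimp rather than MATLAB), the same idea can be sketched in Python with scikit-image's match_histograms. This is only a sketch under assumptions: the images are PNG files in a folder called images/, reference.png is the representative image you picked, and a reasonably recent scikit-image is installed; all names are made up for illustration.
# Hedged batch sketch: match every image to one chosen reference image.
# Folder and file names are made up; use channel_axis=None for grayscale images.
from pathlib import Path
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

reference = io.imread('reference.png')
out_dir = Path('equalized')
out_dir.mkdir(exist_ok=True)

for path in Path('images').glob('*.png'):
    img = io.imread(path)
    matched = match_histograms(img, reference, channel_axis=-1)
    io.imsave(out_dir / path.name, matched.astype(np.uint8))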

Algorithm to detect the change in visible luminosity in an image

I want a formula to detect/calculate the change in visible luminosity in a part of an image, provided I can calculate the RGB, HSV, HSL and CMYK color spaces.
E.g., in the above picture we notice that the left side of the image is brighter than the right side, which is beneath a shade.
I have had a little think about this, and done some experiments in Photoshop, though you could just as well use ImageMagick which is free. Here is what I came up with.
Step 1 - Convert to Lab mode and discard the a and b channels since the Lightness channel holds most of the brightness information which, ultimately, is what we are looking for.
Step 2 - Stretch the contrast of the remaining L channel (using Levels) to accentuate the variation.
Step 3 - Perform a Gaussian blur on the image to remove local, high frequency variations in the image. I think I used 10-15 pixels radius.
Step 4 - Turn on the Histogram window and take a single row marquee and watch the histogram change as different rows are selected.
Step 5 - Look out for a strongly bimodal histogram (two distinct peaks) to identify the illumination variations.
This is not a complete, general-purpose solution, but it may hold some pointers and cause people who know better to suggest improvements for you! Note that the method requires the image to have some areas of high uniformity, like the whitish horizontal bar across your input image. However, nearly any algorithm is going to have a hard time telling the difference between a sheet of white paper with a shadow of uneven light across it and the same sheet of paper with a grey sheet of paper laid on top of it...
In the images below, I have superimposed the histogram top right. In the first one, you can see the histogram is not narrow and bimodal because the dotted horizontal selection marquee is across the bar-code area of the image.
In the subsequent images, you can see a strong bimodal histogram because the dotted selection marquee is across a uniform area of image.
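If you would rather script this than click through Photoshop, here is a rough Python/scikit-image sketch of steps 1 to 5. The filename, blur radius, histogram bin count and the crude "two separated peaks" test are my assumptions, not values from the method above; treat it as a starting point, not a finished detector.
# Rough scripted version of steps 1-5: Lab lightness only, contrast stretch,
# Gaussian blur, then a histogram of every Nth row to look for bimodality.
import numpy as np
from skimage import io, color, filters, exposure

img = io.imread('scan.png')
L = color.rgb2lab(img[..., :3])[..., 0]                     # step 1: keep the L channel
L = exposure.rescale_intensity(L, out_range=(0.0, 1.0))     # step 2: stretch contrast
L = filters.gaussian(L, sigma=12)                           # step 3: blur local detail away

for y in range(0, L.shape[0], 20):                          # steps 4-5: row histograms
    hist, _ = np.histogram(L[y], bins=32, range=(0.0, 1.0))
    occupied = np.flatnonzero(hist > 0.05 * L.shape[1])
    # two well-separated clusters of occupied bins suggest an illumination step
    if occupied.size and occupied[-1] - occupied[0] > 8 and occupied.size < 16:
        print('row %d looks bimodal (bins %d..%d)' % (y, occupied[0], occupied[-1]))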
The first problem is with "visible luminosity". It may mean one of several things. This discussion should be a good start. (Yes, it has incomplete and contradictory answers, as well.)
Formula to determine brightness of RGB color
You should make sure you operate on the linear image, which does not have any gamma correction applied to it. AFAIK Photoshop does not degamma and regamma images during filtering, which may produce erroneous results. It all depends on how accurate a result you want. Photoshop wants things to look good, not be precise.
In principle you should first pick a formula to convert your RGB values to some luminosity value which fits your use. Then you have a single-channel image which you'll need to filter with a Gaussian filter, sliding average, or some other suitable filter. Unfortunately, this may require special tools as photoshop/gimp/etc. type programs tend to cut corners.
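As a small illustration of that pipeline, here is a sketch that undoes the sRGB gamma, computes a relative luminance with Rec. 709 weights (one common choice among the formulas discussed in the linked question), and then low-pass filters it. The filename and blur sigma are placeholders.
# Degamma, compute luminance, then filter. Assumes 8-bit sRGB input.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage import io

rgb = io.imread('photo.png')[..., :3] / 255.0

# invert the sRGB transfer function to get linear light
linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)

# relative luminance from the linear components (Rec. 709 weights)
Y = 0.2126 * linear[..., 0] + 0.7152 * linear[..., 1] + 0.0722 * linear[..., 2]

# low-pass filter so only the slow illumination variation remains
illumination = gaussian_filter(Y, sigma=25)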
But then there is one thing you would probably like to consider. If you have an even brightness gradient across an image, the eye is happy and does not perceive it. Rather large differences go unnoticed if the contrast in the image is constant across the image. Unfortunately, the definition of contrast is not very meaningful if you do not know at least something about the content of the image. (If you have scanned/photographed documents, then the contrast is clearly between ink and paper.) In your sample image the brightness changes quite abruptly, which makes the change visible.
Just to show you how strange the human vision is in determining "brightness", see the classical checker shadow illusion:
http://en.wikipedia.org/wiki/Checker_shadow_illusion
So, my impression is that talking about the conversion formulae is probably the second or third step in the process of finding suitable image processing methods. The first step would be to try to define the problem in more detail. What do you want to accomplish?

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (blank space, not visible) represents a 1 bit.
Does anybody have an idea of how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing. I tried to apply edge detection using a Canny edge detector implementation (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit value and no rectangle as a 1?
The image has 10 lines x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is recognising good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and all rows and store them in arrays. For an image of MxN (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells will define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form (the rectangles where you expect marks) and get their coordinates; then you just add up the pixel values in each lozenge and, based on that number, decide whether it has a mark or not. A rough sketch of this simple case is given below.
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant value and then transform the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as case 1, but noise is higher and reliability lower.
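Here is a rough sketch of the simple (no skew, no stretch) case in Python with PIL and NumPy. The filename and thresholds are placeholders; the 10 x 16 grid comes from the question, and a black cell is read as a 0 bit, an empty cell as a 1 bit.
# Sketch of the simple case: binarize, use row/column profiles to find the
# extent of the code, then split into a regular grid and count dark pixels.
import numpy as np
from PIL import Image

ROWS, COLS = 10, 16

gray = np.array(Image.open('code.png').convert('L'))
dark = gray < 128                                   # True where the pixel is "black"

col_profile = dark.sum(axis=0)                      # dark pixels per column
row_profile = dark.sum(axis=1)                      # dark pixels per row

cols = np.flatnonzero(col_profile > 2)              # crop to the marked region
rows = np.flatnonzero(row_profile > 2)
region = dark[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]

h, w = region.shape
bits = []
for r in range(ROWS):
    for c in range(COLS):
        cell = region[r * h // ROWS:(r + 1) * h // ROWS,
                      c * w // COLS:(c + 1) * w // COLS]
        bits.append(0 if cell.mean() > 0.3 else 1)  # black rectangle -> 0, blank -> 1
print(bits)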
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then read the height and width, and segment the image into a grid accordingly, with cells of height/rows by width/cols. Then just look for black pixels in those regions, or whatever value PIL registers that black to be. This obviously relies on the grid-like nature of the data.
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia; I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want the boxes darker, just increase this value) and otherwise make it completely white. Then create bounding boxes from the lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous method of segmenting may be applied if you preprocess and binarize the image in the following manner, in which case you need only count the number of black or white pixels in each cell and pick a threshold after some training samples.

Image Color Picking Script

I have a bunch of sports team logos. What I want to do is find the color that is used for the highest percentage of pixels. So, for the patriots logo below, I would pick out the blue or #000f47 (white will not be an acceptable color), as this is used for the highest percentage of pixels. Obviously I can eyeball each image, use the color picker tool in Gimp/Photoshop, and determine the color. However, I would like to script this if possible.
I can use any format for the picture input. Would it be possible to read the raw bitmap file format and determine the color this way? What would be an easy format to read? Do any tools support this, like ImageMagick, etc.?
Thanks
If you're up for it, then it's fairly straightforward to write your own image processor in C#; just run through the pixels, grab the R, G and B values, and increment a counter for each unique combination.
Having said that, if the image is anti-aliased, then what you or I would eyeball as being blue will be variations of the RGB values, and the processor would count them separately. You might want to build some allowable tolerances into the processor.
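The same counting idea works as a short script too; here is a hedged sketch in Python with Pillow rather than C#. The filename is made up, and the bucketing step is one crude way to add the anti-aliasing tolerance mentioned above.
# Count pixel colours, skipping transparent and near-white pixels, and bucket
# similar shades so anti-aliased edge pixels merge into their parent colour.
from collections import Counter
from PIL import Image

img = Image.open('logo.png').convert('RGBA')

counts = Counter()
for r, g, b, a in img.getdata():
    if a < 128 or (r > 240 and g > 240 and b > 240):    # skip transparent / near-white
        continue
    counts[(r // 16 * 16, g // 16 * 16, b // 16 * 16)] += 1

color, n = counts.most_common(1)[0]
print('dominant colour: #%02x%02x%02x (%d pixels)' % (color + (n,)))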
Just to be picky, isn't the most frequent pixel value in the image above white, not blue?
