I would like to calculate the median of each pixel across a set of images, or "video". However, when MATLAB runs this it takes a very long time and eventually aborts with an indexing error. Why?
This is the code:
V = VideoReader('hall_monitor.avi');
info = get(V);
M = info.Width;
N = info.Height;
nb_frames_bk = 5;
v_pixel = zeros([nb_frames_bk 3]);
IB = zeros([M N 3], 'double');
for i = 1:M
    for j = 1:N
        for k = 1:nb_frames_bk
            frm = read(V, k);
            v_pixel(k,:) = frm(i,j,:);
        end
        IB(i,j,:) = median(v_pixel(:,:));
    end
end
IB = uint8(IB);
imshow(IB);
This code can benefit from a lot of refactoring. For one thing, you are re-reading the same nb_frames_bk frames for every pixel, when you could read them once, store them and reuse them.
Secondly, iterating over every pixel to compute your median is going to be very slow. From what it looks like in your code, for each spatial position over the first nb_frames_bk frames, you collect the RGB values within those frames and calculate the median RGB value.
Also, as a minor note, you are getting an "Index exceeds matrix dimensions" error because you defined the output matrix the wrong way around. You defined it as M x N with M being the width and N being the height. These need to be swapped; remember that MATLAB matrices are defined with the height first and the width second. However, this becomes moot with what I'm going to suggest for implementing this properly.
Instead of reading the frames one at a time, specify a range of frames. This way, you get a 4D matrix where the first three dimensions reference a single image and the fourth dimension references the frame number. You can then take the median along the fourth dimension to find the median RGB value for every pixel over all frames.
In other words, simply do this:
V = VideoReader('hall_monitor.avi');
nb_frames_bk = 5;
frms = read(V, [1 nb_frames_bk]);
IB = median(frms, 4);
imshow(IB);
This is much better, to the point and guaranteed to be faster. You also no longer need to obtain the width and height of each frame, since we no longer loop over each pixel.
I am learning image analysis and trying to average a set of colour images and get the standard deviation at each pixel.
I have done this below, but not by averaging each RGB channel separately (for example, rchannel = I(:,:,1)).
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i = 1:length(filelist)
    imname = ['dir1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    ims = ims + rgbim;
end
avgset1 = ims / length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't get the matrix holding standard deviation.
Any help is appreciated.
If you are concerned about finding the mean RGB image, then your code is correct. What I like is that you converted the images using im2double before accumulating the mean, so everything is double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to compute the mean image of a set of images and subtract it from each image before doing image classification, as it keeps the dynamic range of each pixel within a normalized range. This allows the training of the learning algorithm to converge quickly to the optimum solution and provides the best set of parameters to facilitate the best classification accuracy.
If you want to find the mean RGB colour which is the average colour over all images, then no your code is not correct.
You have summed over all images individually into each channel of sumrgbims, so the last step is to average this image spatially over each channel and then divide by the number of images. Two calls to mean, over the first and second dimensions, chained together will help. This produces a 1 x 1 x 3 result, so use squeeze afterwards to remove the singleton dimensions and obtain a 3 x 1 vector representing the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(mean(mean(sumrgbims, 1), 2)) / length(filelist);
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image in addition to accumulating each image inside the loop. After that, you know that the standard deviation is the square root of the variance, and the variance is equal to the average sum of squares minus the square of the mean. We have the mean image, so you just square the mean image and subtract it from the average sum of squares. Just to be sure our math is right, suppose we have a signal X with mean mu. Given that we have N values in our signal, the variance is thus equal to:
$$\sigma^2 = \frac{1}{N}\sum_{i=1}^{N}(X_i - \mu)^2 = \left(\frac{1}{N}\sum_{i=1}^{N} X_i^2\right) - \mu^2$$
(Source: Science Buddies)
The standard deviation would simply be the square root of the above result. We would thus calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - for standard deviation
for i = 1:length(filelist)
    imname = ['set1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    sumrgbims = sumrgbims + rgbim;
    sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims / length(filelist);
% New - find standard deviation
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display standard deviation image
figure;
imshow(rgbstdset1, []);
Also, note that I've scaled the display in each imshow call so the smallest value is mapped to 0 and the largest value is mapped to 1. This does not change the actual contents of the images; it is just for display purposes.
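As a sanity check (this is my own addition, not required by the answer), you can stack the images along a fourth dimension and compare against MATLAB's std with the 1/N normalizer, which matches the variance formula above:
% Hypothetical check: std(..., 1, 4) uses the 1/N (population) normalizer
stack = zeros(215, 300, 3, length(filelist));
for i = 1:length(filelist)
    stack(:,:,:,i) = im2double(imread(['set1/' filelist(i).name]));
end
rgbstd_check = std(stack, 1, 4);
max(abs(rgbstd_check(:) - rgbstdset1(:))) % should be ~0 up to round-off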
I would like to resize a 512x512 image into a 363x726 image, which will be larger than the original image (of size 512x512) in total number of pixels. Those extra pixel values must be different values in the range of 0-255.
I tried the following code:
I=imread('photo.jpg'); %photo.jpg is a 512X512 image
B=zeros(363,726);
sizeOfMatrixB=size(B);
display(sizeOfMatrixB);
B(1:262144)=I(1:262144);
imshow(B);
B(262155:263538)=0;
But I think this is a lengthy approach and the output is also not as desired. Could anyone suggest a better piece of code to do this? Thank you in advance.
I think that the code you have is actually pretty close to ideal, except that you have a lot of hard-coded values in there. Those should really be computed on the fly. We can do that using numel to count the number of elements in I.
B = zeros(363, 726);
%// Assign the first 262144 elements of B to the values in I
%// all of the rest will remain as 0
B(1:numel(I)) = I;
This flexibility is important, and its importance is actually demonstrated via the typo in your last line:
B(262155:263538)=0;
%// Should be
B(262145:263538)=0;
Also, you don't need these extra assignments to zero at the end because you initialize the matrix to be all zeros in the first place.
A Side Note
It looks like you are spreading the original image data for each column across multiple columns. I'm guessing this isn't what you want. You probably only want to grab the first 363 rows of I to be placed into B. You can do that this way:
B = zeros(363, 726);
B(1:size(B, 1), 1:size(I, 2)) = I(1:size(B, 1), :);
Update
If you want the other values to be something other than zero, you can initialize your matrix to be that value instead.
value = 2;
B = zeros(363, 726) + value;
B(1:numel(I)) = I;
If you want them to be random integers between 0 and 255, use randi to initialize the matrix.
B = randi([0 255], 363, 726);
B(1:numel(I)) = I;
I've found some methods to enlarge an image but there is no solution to shrink an image. I'm currently using the nearest neighbor method. How could I do this with bilinear interpolation without using the imresize function in MATLAB?
In your comments, you mentioned you wanted to resize an image using bilinear interpolation. Bear in mind that the bilinear interpolation algorithm is size independent. You can very well use the same algorithm for enlarging an image as well as shrinking an image. The right scale factors to sample the pixel locations are dependent on the output dimensions you specify. This doesn't change the core algorithm by the way.
Before I start with any code, I'm going to refer you to Richard Alan Peters' II digital image processing slides on interpolation, specifically slide #59. It has a great illustration as well as pseudocode on how to do bilinear interpolation that is MATLAB friendly. To be self-contained, I'm going to include his slide here so we can follow along and code it:
Please be advised that this only resamples the image. If you actually want to match MATLAB's output, you need to disable anti-aliasing.
MATLAB by default will perform anti-aliasing on the images to ensure the output looks visually pleasing. If you'd like to compare apples with apples, make sure you disable anti-aliasing when comparing between this implementation and MATLAB's imresize function.
Let's write a function that will do this for us. This function takes two inputs: an image (read in through imread) which can be either colour or grayscale, and a two-element array with the output dimensions of the final resized image. The first element of this array is the number of rows and the second element is the number of columns. We will simply go through the algorithm and calculate the output pixel colours / grayscale values using this pseudocode:
function [out] = bilinearInterpolation(im, out_dims)

    %// Get some necessary variables first
    in_rows = size(im,1);
    in_cols = size(im,2);
    out_rows = out_dims(1);
    out_cols = out_dims(2);

    %// Let S_R = R / R'
    S_R = in_rows / out_rows;
    %// Let S_C = C / C'
    S_C = in_cols / out_cols;

    %// Define grid of co-ordinates in our image
    %// Generate (x,y) pairs for each point in our image
    [cf, rf] = meshgrid(1 : out_cols, 1 : out_rows);

    %// Let r_f = r'*S_R for r = 1,...,R'
    %// Let c_f = c'*S_C for c = 1,...,C'
    rf = rf * S_R;
    cf = cf * S_C;

    %// Let r = floor(rf) and c = floor(cf)
    r = floor(rf);
    c = floor(cf);

    %// Any values out of range, cap
    r(r < 1) = 1;
    c(c < 1) = 1;
    r(r > in_rows - 1) = in_rows - 1;
    c(c > in_cols - 1) = in_cols - 1;

    %// Let delta_R = rf - r and delta_C = cf - c
    delta_R = rf - r;
    delta_C = cf - c;

    %// Final line of algorithm
    %// Get column major indices for each point we wish
    %// to access
    in1_ind = sub2ind([in_rows, in_cols], r, c);
    in2_ind = sub2ind([in_rows, in_cols], r+1, c);
    in3_ind = sub2ind([in_rows, in_cols], r, c+1);
    in4_ind = sub2ind([in_rows, in_cols], r+1, c+1);

    %// Now interpolate
    %// Go through each channel for the case of colour
    %// Create output image that is the same class as input
    out = zeros(out_rows, out_cols, size(im, 3));
    out = cast(out, class(im));
    for idx = 1 : size(im, 3)
        chan = double(im(:,:,idx)); %// Get i'th channel
        %// Interpolate the channel
        tmp = chan(in1_ind).*(1 - delta_R).*(1 - delta_C) + ...
              chan(in2_ind).*(delta_R).*(1 - delta_C) + ...
              chan(in3_ind).*(1 - delta_R).*(delta_C) + ...
              chan(in4_ind).*(delta_R).*(delta_C);
        out(:,:,idx) = cast(tmp, class(im));
    end
end
Take the above code, copy and paste it into a file called bilinearInterpolation.m and save it. Make sure you change your working directory to where you've saved this file.
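As mentioned earlier, MATLAB's imresize applies anti-aliasing by default. If you want to compare this implementation against imresize, something like the following sketch should do it (my assumption; small differences may remain at the borders due to how edge pixels are handled):
im = imread('onion.png');
out_custom = bilinearInterpolation(im, [270 396]);
%// Disable anti-aliasing so imresize performs pure bilinear resampling
out_matlab = imresize(im, [270 396], 'bilinear', 'Antialiasing', false);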
Except for sub2ind and perhaps meshgrid, everything seems to be in accordance with the algorithm. meshgrid is very easy to explain. All you're doing is specifying a 2D grid of (x,y) co-ordinates, where each location in your image has a pair of (x,y) or column and row co-ordinates. Creating a grid through meshgrid avoids any for loops as we will have generated all of the right pixel locations from the algorithm that we need before we continue.
How sub2ind works is that it takes in a row and column location in a 2D matrix (well... it can really be any amount of dimensions you want), and it outputs a single linear index. If you're not aware of how MATLAB indexes into matrices, there are two ways you can access an element in a matrix. You can use the row and column to get what you want, or you can use a column-major index. Take a look at this matrix example I have below:
A =
1 2 3 4 5
6 7 8 9 10
11 12 13 14 15
If we want to access the number 9, we can do A(2,4) which is what most people tend to default to. There is another way to access the number 9 using a single number, which is A(11)... now how is that the case? MATLAB lays out the memory of its matrices in column-major format. This means that if you were to take this matrix and stack all of its columns together in a single array, it would look like this:
A =
1
6
11
2
7
12
3
8
13
4
9
14
5
10
15
Now, if you want to access element number 9, you would need to access the 11th element of this array. Going back to the interpolation bit, sub2ind is crucial if you want to vectorize accessing the elements in your image to do the interpolation without doing any for loops. As such, if you look at the last line of the pseudocode, we want to access elements at r, c, r+1 and c+1. Note that all of these are 2D arrays, where each element in each of the matching locations in all of these arrays tell us the four pixels we need to sample from in order to produce the final output pixel. The output of sub2ind will also be 2D arrays of the same size as the output image. The key here is that each element of the 2D arrays of r, c, r+1, and c+1 will give us the column-major indices into the image that we want to access, and by throwing this as input into the image for indexing, we will exactly get the pixel locations that we want.
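To make the indexing concrete, here's a quick check using the example matrix above:
A = [1 2 3 4 5; 6 7 8 9 10; 11 12 13 14 15];
ind = sub2ind(size(A), 2, 4); %// Convert (row 2, col 4) to a linear index
disp(ind);    %// Prints 11
disp(A(ind)); %// Prints 9, same as A(2,4)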
There are some important subtleties I'd like to add when implementing the algorithm:
You need to make sure that any indices used to access the image when interpolating are capped so you don't go out of bounds: indices below 1 are set to 1, and indices near the right or bottom edge are set to one below the maximum, because the interpolation accesses pixels one position to the right or below. This ensures you always stay within bounds.
You also need to make sure that the output image is cast to the same class as the input image.
I ran through a for loop to interpolate each channel on its own. You could do this intelligently using bsxfun, but I decided to use a for loop for simplicity, and so that you are able to follow along with the algorithm.
As an example to show this works, let's use the onion.png image that is part of MATLAB's system path. The original dimensions of this image are 135 x 198. Let's interpolate this image by making it larger, going to 270 x 396 which is twice the size of the original image:
im = imread('onion.png');
out = bilinearInterpolation(im, [270 396]);
figure;
imshow(im);
figure;
imshow(out);
The above code will interpolate the image by increasing each dimension by twice as much, then show a figure with the original image and another figure with the scaled up image. This is what I get for both:
Similarly, let's shrink the image down by half as much:
im = imread('onion.png');
out = bilinearInterpolation(im, [68 99]);
figure;
imshow(im);
figure;
imshow(out);
Note that half of 135 is 67.5 for the rows, but I rounded up to 68. This is what I get:
One thing I've noticed in practice is that upsampling with bilinear interpolation performs decently in comparison to other schemes like bicubic or even Lanczos. However, when you're shrinking an image, because you're removing detail, nearest neighbour is very much sufficient; I find bilinear or bicubic to be overkill. I'm not sure what your application is, but play around with the different interpolation algorithms and see what you like out of the results. Bicubic is another story, and I'll leave that to you as an exercise. The slides I referred you to do have material on bicubic interpolation if you're interested.
Good luck!
I've been performing a 2D mode filter on an RGB image by running medfilt2 independently on the R,G and B channels. However, splitting the RGB channels like this gives artifacts in the colouring. Is there a way to perform the 2D median filter while keeping RGB values 'together'?
Or, I could explain this more abstractly: Imagine I had a 2D matrix, where each value contained a pair of index coordinates (i.e. a cell matrix of 2X1 vectors). How would I go about performing a median filter on this?
Here's how I can do an independent mode filter (giving the artifacts):
r = colfilt(r0, [5 5], 'sliding', @mode);
g = colfilt(g0, [5 5], 'sliding', @mode);
b = colfilt(b0, [5 5], 'sliding', @mode);
However colfilt won't work on a cell matrix.
Another approach could be to somehow combine my RGB channels into a single number and thus create a standard 2D matrix. Not sure how to implement this, though...
Any ideas?
Thanks for your help.
Cheers,
Hugh
EDIT:
OK, so problem solved. Here's how I did it.
I adapted my question so that I'm no longer dealing with (RGB) vectors, but (UV) vectors. Still essentially the same problem, except that my vectors are 2D not 3D.
So first I load the individual U and V channels and arrange each into a 1D list, then combine them so that I essentially have a list of vectors. I then reduce that list to the unique vectors only. Next, I assign each pixel in my matrix the index of its unique vector, and with that done I can run the mode filter. Finally I do the reverse: I go through the filtered image pixel by pixel, read the value at each pixel (i.e. an index into my list), look up the unique vector associated with that index and insert it at that pixel.
% Create index list
img_u = img_iuv(:,:,2);
img_v = img_iuv(:,:,3);
coordlist = unique(cat(2, img_u(:), img_v(:)), 'rows');
% Create a 2D matrix of indices
img_idx = zeros(size(img_iuv,1), size(img_iuv,2));
for y = 1:size(img_iuv,2)
    for x = 1:size(img_iuv,1)
        coords = squeeze(img_iuv(x,y,2:3))';
        [~, idx] = ismember(coords, coordlist, 'rows');
        img_idx(x,y) = idx;
    end
end
% Apply the mode filter (n is the window size, e.g. n = 5)
img_idx = colfilt(img_idx, [n, n], 'sliding', @mode);
% Re-construct the original image using the filtered data
for y = 1:size(img_iuv,2)
    for x = 1:size(img_iuv,1)
        idx = img_idx(x,y);
        try
            coords = coordlist(idx,:);
        catch
            % keep the previous coords if idx is out of range
        end
        img_iuv(x,y,2:3) = coords(:);
    end
end
Not pretty but it gets the job done. I suppose this approach would also work for RGB images, or other similar situations.
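For RGB in particular, the "combine the channels into a single number" idea from the question is another route. A hypothetical sketch (assuming 8-bit channels packed into one 24-bit value):
rgb = double(imread('image.png')); % hypothetical input image
% Pack each 24-bit colour into a single number
packed = rgb(:,:,1)*65536 + rgb(:,:,2)*256 + rgb(:,:,3);
% Mode-filter the packed values, keeping each RGB triple intact
filtered = colfilt(packed, [5 5], 'sliding', @mode);
% Unpack back into three channels
r = floor(filtered / 65536);
g = floor(mod(filtered, 65536) / 256);
b = mod(filtered, 256);
out = uint8(cat(3, r, g, b));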
Cheers,
Hugh
I don't see how you can define the median of a vector variable. You probably need to reduce the R, G, B components to a single value and then compute the median on that value. Why not use the intensity level as that single value? You could do it easily with rgb2gray.
Here's the problem: I have a number of binary images composed by traces of different thickness. Below there are two images to illustrate the problem:
First Image - size: 711 x 643 px
Second Image - size: 930 x 951 px
What I need is to measure the average thickness (in pixels) of the traces in the images. In fact, the average thickness of traces in an image is a somewhat subjective measure, so what I need is a measure that has some correlation with the radius of the trace, as indicated in the figure below:
Notes
Since the measure doesn't need to be very precise, I am willing to trade precision for speed. In other words, speed is an important factor to the solution of this problem.
There might be intersections in the traces.
The trace thickness might not be constant, but an average measure is OK (even the maximum trace thickness is acceptable).
The trace will always be much longer than it is wide.
I'd suggest this algorithm:
Apply a distance transformation to the image, so that all background pixels are set to 0, all foreground pixels are set to the distance from the background
Find the local maxima in the distance transformed image. These are points in the middle of the lines. Put their pixel values (i.e. their distances from the background) into a list
Calculate the median or average of that list
I was impressed by @nikie's answer, and gave it a try ...
I simplified the algorithm to just get the maximum value, not the mean, thus avoiding the local maxima detection step. I think this is enough if the stroke is well behaved (although it may not be accurate for self-intersecting lines).
The program in Mathematica is:
m = Import["http://imgur.com/3Zs7m.png"] (* Get image from web*)
s = Abs[ImageData[m] - 1]; (* Invert colors to detect background *)
k = DistanceTransform[Image[s]] (* White Pxs converted to distance to black*)
k // ImageAdjust (* Show the image *)
Max[ImageData[k]] (* Get the max stroke width *)
The generated result is
The numerical value (28.46 px x 2) fits my measurement of 56 px pretty well (although your value is 100 px :*)
Edit - Implemented the full algorithm
Well ... sort of ... instead of searching the local maxima, finding the fixed point of the distance transformation. Almost, but not quite completely unlike the same thing :)
m = Import["http://imgur.com/3Zs7m.png"]; (*Get image from web*)
s = Abs[ImageData[m] - 1]; (*Invert colors to detect background*)
k = DistanceTransform[Image[s]]; (*White Pxs converted to distance to black*)
Print["Distance to Background*"]
k // ImageAdjust (*Show the image*)
Print["Local Maxima"]
weights =
 Binarize[FixedPoint[ImageAdjust@DistanceTransform[Image[#], .4] &, s]]
Print["Stroke Width =",
2 Mean[Select[Flatten[ImageData[k]] Flatten[ImageData[weights]], # != 0 &]]]
As you may see, the result is very similar to the previous one, obtained with the simplified algorithm.
From Here. A simple method!
3.1 Estimating Pen Width
The pen thickness may be readily estimated from the area A and perimeter length L of the foreground
T = A/(L/2)
In essence, we have reshaped the foreground into a rectangle and measured the length of the longest side. Stronger modelling of the pen, for instance as a disc yielding circular ends, might allow greater precision, but rasterisation error would compromise the significance.
While precision is not a major issue, we do need to consider bias and singularities.
We should therefore calculate area A and perimeter length L using functions which take into account "roundedness".
In MATLAB
A = bwarea(BW)
L = bwarea(bwperim(BW, 8))
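Putting those together, a minimal MATLAB sketch (my assumptions: the trace is dark on a white background, and 3Zs7m.png is the test image used elsewhere in this thread):
I = imread('3Zs7m.png');
BW = ~im2bw(I, 0.8);         % binarize and invert so the trace is foreground
A = bwarea(BW);              % area, corrected for "roundedness"
L = bwarea(bwperim(BW, 8));  % perimeter length, 8-connected
T = A / (L / 2)              % estimated pen thickness in pixels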
Since I don't have MATLAB at hand, I made a small program in Mathematica:
m = Binarize[Import["http://imgur.com/3Zs7m.png"]] (* Get Image *)
k = Binarize[MorphologicalPerimeter[m]] (* Get Perimeter *)
p = N[2 Count[ImageData[m], Except[1], 2]/
Count[ImageData[k], Except[0], 2]] (* Calculate *)
The output is 36 Px ...
Perimeter image follows
HTH!
It's been 3 years since the question was asked :)
Following the procedure of @nikie, here is a MATLAB implementation of the stroke width.
clc;
clear;
close all;

I = imread('3Zs7m.png');
X = im2bw(I, 0.8);
subplot(2,2,1); % the original used subplottight, a non-standard helper
imshow(X);

% Distance of every trace pixel to the nearest background pixel
Dist = bwdist(X);
subplot(2,2,2);
imshow(Dist, []);

% Local maxima of the distance transform lie along the stroke centreline
RegionMax = imregionalmax(Dist);
[x, y] = find(RegionMax ~= 0);
subplot(2,2,3);
imshow(RegionMax);

% Collect the distance values at the local maxima
List = zeros(1, numel(x));
for i = 1:numel(x)
    List(i) = Dist(x(i), y(i));
end
% Double the centreline distance to get the full width
fprintf('Stroke Width = %.2f \n', 2 * mean(List));
Assuming that the trace has constant thickness, is much longer than it is wide, is not too strongly curved and has no intersections / crossings, I suggest an edge detection algorithm which also determines the direction of the edge, then a rise/fall detector with some trigonometry and a minimization algorithm. This gives you the minimal thickness across a relatively straight part of the curve.
I guess the error to be up to 25%.
First use an edge detector that gives us the information where an edge is and which direction (in 45° or PI/4 steps) it has. This is done by filtering with 4 different 3x3 matrices (Example).
Usually I'd say it's enough to scan the image horizontally, though you could also scan vertically or diagonally.
Assuming line-by-line (horizontal) scanning, once we find an edge, we check if it's a rise (going from background to trace color) or a fall (to background). If the edge's direction is at a right angle to the direction of scanning, skip it.
If you found one rise and one fall with the correct directions and without any disturbance in between, measure the distance from the rise to the fall. If the edge direction is diagonal, multiply by the square root of 2. Store this measure together with the coordinate data.
The algorithm must then search along an edge (I can't find a web resource on that right now) for neighbouring measurements (by their coordinates). If there is a local minimum with a padding of maybe 4 to 5 size units to each side (a value to play with; larger means less information, smaller means more noise), this measure qualifies as a candidate. This is to ensure that the ends of the trail, or sections bent too much, are not taken into account.
The minimum of that would be the measurement. Plausibility check: If the trace is not too tangled, there should be a lot of values in that area.
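A very crude MATLAB sketch of just the horizontal-scan part (my simplification; it ignores edge directions and the local-minimum padding, and assumes BW is a binary image with trace = 1):
widths = [];
for row = 1:size(BW, 1)
    d = diff([0, BW(row, :), 0]);     % +1 at a rise, -1 at a fall
    rises = find(d == 1);
    falls = find(d == -1);
    widths = [widths, falls - rises]; % horizontal run lengths on this row
end
% Candidate thickness; the full algorithm would first discard runs
% at trace ends and across near-horizontal sections
estWidth = min(widths);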
Please comment if there are more questions. :-)
Here is an answer that works in any computer language without the need of special functions...
Basic idea: Try to fit a circle into the black areas of the image. If you can, try with a bigger circle.
Algorithm:
set image background = 0 and trace = 1
initialize array result[]
set minimalExpectedWidth
set w = minimalExpectedWidth
loop
    set counter = 0
    create a matrix of zeros of size w x w
    within a circle of diameter w in that matrix, put ones
    calculate the area of the circle (the number of ones, approx. PI * (w/2)^2)
    loop through all pixels of the image
        optimization: if the current pixel is of background color -> continue loop
        multiply the matrix with the image at the current pixel (i.e. filter the image with that matrix)
        (you can do this using the current x and y position and a double for loop from 0 to w)
        take the sum of the result of each multiplication
        if the sum equals the calculated circle's area, increment counter by one
    store counter in result[w - minimalExpectedWidth]
    increment w by one
    optimization: include the algorithm from further down here
while counter is greater than zero
Now the result array contains the number of matches for each tested width.
Graph it to have a look at it.
For a width of one, this will be equal to the number of pixels of trace color. For greater width values, fewer circle areas will fit into the trace. The result array thus steadily decreases until there is a sudden drop. This is because the filter matrix with the circular area of that width now only fits into intersections.
Right before the drop is the width of your trace. If the width is not constant, the drop will not be that sudden.
I don't have MATLAB here for testing and don't know for sure about a function to detect this sudden drop, but we do know that the decrease is continuous, so I'd take the maximum of the second derivative of the (zero-based) result array like this
Algorithm:
set maximum = 0
set widthFound = 0
set minimalExpectedWidth as above
set prevvalue = result[0]
set index = 1
set prevFirstDerivative = result[1] - prevvalue
loop until index is greater than the result length
    firstDerivative = result[index] - prevvalue
    set secondDerivative = firstDerivative - prevFirstDerivative
    if abs(secondDerivative) > maximum
        set maximum = abs(secondDerivative)
        set widthFound = index + minimalExpectedWidth
    set prevFirstDerivative = firstDerivative
    set prevvalue = result[index]
    increment index by one
return widthFound
Now widthFound is the trace width: the width at which the match count dropped most sharply relative to the next width.
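In MATLAB, the same drop detection shrinks to a few lines (a sketch under my assumptions: result holds the match counts and minimalExpectedWidth is defined as above):
d2 = diff(result, 2);                    % discrete second derivative
[~, pos] = max(abs(d2));                 % sharpest change = the drop
% Offset back to an absolute width (up to an off-by-one from
% MATLAB's 1-based indexing versus the 0-based pseudocode)
widthFound = pos + minimalExpectedWidth;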
I know that this is in part covered in some of the other answers, but my description is pretty much straightforward and you don't have to have learned image processing to do it.
I have an interesting solution:
Do edge detection to extract the edge pixels.
Do a physical simulation: consider the edge pixels as positively charged particles.
Now put some number of free positively charged particles in the stroke area.
Calculate the electrical force equations to determine the movement of these free particles.
Simulate the particle movement for some time, until the particles reach an equilibrium position.
(As they are repelled by both stroke edges, after some time they will settle along the middle line of the stroke.)
Now the stroke thickness/2 would be the average distance from an edge particle to the nearest free particle.