I have one image in BMP format, of size 512x512. I want to count the number of pixels with values greater than 11 and then find the average of these pixels. Here is my code. I don't know what the problem is, but the sum of pixel values is wrong: it is always 255. I tried different images.
Could you please help me figure it out?
A = imread('....bmp');
sum = 0; count = 0;
for i = 1:512
    for j = 1:512
        if (A(i,j) >= 11)
            sum = sum + A(i,j);
            count = count + 1;
        end
    end
end
disp('Number of pixels greater than or equal to 11')
disp(count)
disp('sum')
disp(sum)
disp('Average')
Avrg=sum/count;
disp(Avrg)
Why doesn't your code work
It's difficult to tell; could you display a portion of your matrix, and its size, using something like
disp(A(1:10,1:10))
disp(size(A))
% possibly also the min and max...
disp(min(A(:)))
disp(max(A(:)))
just to be sure the format of A is as you expect - imread could have given you a 512x512x3 matrix if the image was read in color, or the image may be in the interval [0,1].
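If A does turn out to be a colour image or to be scaled to [0,1], a couple of lines before your loop will bring it back to the single-channel 0-255 form your threshold assumes (a minimal sketch; rgb2gray needs the Image Processing Toolbox):
A = imread('....bmp');
if size(A, 3) == 3      % colour image - collapse to a single grey channel
    A = rgb2gray(A);
end
if isfloat(A)           % values scaled to [0,1] - rescale to 0-255
    A = uint8(255 * A);
end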
Better approach
Once you're sure that the matrix is indeed 512x512 and has values above 11, you're best off generating a mask, i.e.
mask = A > 11;
numabove11 = sum(mask(:));
avabove11 = mean(A(mask));
Also, in your code you use >=, i.e. greater than or equal to, but in your question you say 'greater than' - pick whichever you want and be consistent.
Explanation
So what do these three lines do?
1. Generate a logical matrix, the same size as A, that is true wherever A > 11 and false elsewhere.
2. Sum the logical matrix, i.e. sum values that are 1 wherever A > 11 and 0 elsewhere (the logical values are converted to double for this summation).
3. Index into matrix A using logical indexing, and take the mean of those values.
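If it helps to see this on something small, here is a toy sketch with made-up values (nothing to do with your actual image):
A = [ 5 20  8;
     30  1 12;
      7 40  2];

mask = A > 11;              % logical matrix: [0 1 0; 1 0 1; 0 1 0]
numabove11 = sum(mask(:))   % 4 pixels exceed 11
avabove11  = mean(A(mask))  % mean of 30, 20, 40 and 12, i.e. 25.5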
Avoid shadowing builtins
In your code you use the variable name sum - this is bad practice, because there is a built-in MATLAB function of the same name, which becomes inaccessible while a variable shadows it.
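As a quick illustration of the problem (try this at the command line):
sum = 0;               % this variable now shadows the built-in sum()
% sum([1 2 3])         % would now index the scalar variable and error,
%                      % instead of calling the built-in function
clear sum              % remove the variable so the built-in is visible again
total = sum([1 2 3])   % calls the built-in again and returns 6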
I also faced a similar problem. The solution lies in the fact that MATLAB stores A(i,j) in uint8 format, whose maximum value is 255, so the accumulating sum saturates at 255. Just change the statement:
sum=sum+A(i,j);
to
sum=sum+double(A(i,j));
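You can verify the saturating uint8 arithmetic directly at the command line:
a = uint8(200) + uint8(100)    % returns 255, not 300 - uint8 addition saturates
b = double(200) + double(100)  % returns 300 as expected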
I hope this helps.
Related
I would like to calculate the median of each pixel over a set of images, i.e. a "video". However, when MATLAB starts calculating this, it takes a very long time and eventually aborts with an index error. Why?
This is the code:
V = VideoReader('hall_monitor.avi');
info = get(V);
M = info.Width;
N = info.Height;
nb_frames_bk = 5;
v_pixel = zeros([nb_frames_bk 3]);
IB = zeros([M N 3], 'double');
for i = 1:M
    for j = 1:N
        for k = 1:nb_frames_bk
            frm = read(V, k);
            v_pixel(k,:) = frm(i,j,:);
        end
        IB(i,j,:) = median(v_pixel(:,:));
    end
end
IB = uint8(IB);
imshow(IB);
This code can benefit from a lot of refactoring. For one thing, you are re-reading the same frames for every pixel, when you could read them once, store them and reuse them.
Secondly, iterating over all pixels to compute your median is going to be very slow. From what it looks like in your code, for each spatial position over the first nb_frames_bk frames, you collect all of the RGB values within these frames and calculate the median RGB value.
Also, as a minor note, you are getting the index-exceeds error because you defined the output matrix the wrong way round: you defined it as M x N with M being the width and N being the height, and these need to be swapped. Remember that MATLAB matrices are defined height first, width second. However, fixing this becomes unnecessary with what I'm going to suggest for implementing this properly.
Instead of reading the frames one at a time, specify a range of frames. This way, you will get a 4D matrix where the first three dimensions reference an image and the fourth dimension represents the frame number. You can then take the median along the fourth dimension to find the median RGB value over all frames.
In other words, simply do this:
V = VideoReader('hall_monitor.avi');
nb_frames_bk = 5;
frms = read(V, [1 nb_frames_bk]);  % height x width x 3 x nb_frames_bk
IB = median(frms, 4);              % median along the frame dimension
imshow(IB);
This is much better, to the point, and guaranteed to be faster. You also don't need to obtain the width and height of each frame, since we are no longer looping over each pixel.
I am learning image analysis and trying to average a set of colour images and get the standard deviation at each pixel.
I have done this, but not by averaging the RGB channels separately (e.g. rchannel = I(:,:,1)).
filelist = dir('dir1/*.jpg');
ims = zeros(215, 300, 3);
for i = 1:length(filelist)
    imname = ['dir1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    ims = ims + rgbim;
end
avgset1 = ims/length(filelist);
figure;
imshow(avgset1);
I am not sure if this is correct. I am confused as to how averaging images is useful.
Also, I couldn't work out how to get the matrix holding the standard deviation.
Any help is appreciated.
If you are concerned about finding the mean RGB image, then your code is correct. What I like is that you converted the images using im2double before accumulating the mean, so you are doing everything in double precision. As Parag said, finding the mean image is very useful, especially in machine learning. It is common to find the mean image of a set of images before doing image classification, as it allows the dynamic range of each pixel to be brought within a normalised range. This helps the training of the learning algorithm converge quickly to the optimum solution and provide the best set of parameters, which facilitates the best classification accuracy.
If you want to find the mean RGB colour, which is the average colour over all images, then no, your code is not correct.
You have summed all images channel-wise into sumrgbims (see the modified loop below) and divided to get the mean image rgbavgset1. The last step is to take this mean image and average over each channel individually: two calls to mean, over the first and second dimensions, chained together will do it. This produces a 1 x 1 x 3 array, so use squeeze after this to remove the singleton dimensions and get a 3 x 1 vector representing the mean RGB colour over all images.
Therefore:
mean_colour = squeeze(mean(mean(rgbavgset1, 1), 2));
To address your second question, I'm assuming you want to find the standard deviation of each pixel value over all images. What you will have to do is accumulate the square of each image, in addition to accumulating each image, inside the loop. After that, recall that the standard deviation is the square root of the variance, and the variance is equal to the average sum of squares minus the mean squared. We have the mean image, so you just have to square it and subtract it from the average sum of squares. Just to be sure our maths is right: supposing we have a signal X with mean mu and N values, the variance is
Var(X) = (1/N) * sum(x_i^2, i = 1..N) - mu^2
(Source: Science Buddies)
The standard deviation would simply be the square root of the above result. We would thus calculate this for each pixel independently. Therefore you can modify your loop to do that for you:
filelist = dir('set1/*.jpg');
sumrgbims = zeros(215, 300, 3);
sum2rgbims = sumrgbims; % New - accumulator for the sum of squares
for i = 1:length(filelist)
    imname = ['set1/' filelist(i).name];
    rgbim = im2double(imread(imname));
    sumrgbims = sumrgbims + rgbim;
    sum2rgbims = sum2rgbims + rgbim.^2; % New
end
rgbavgset1 = sumrgbims/length(filelist);
% New - find the standard deviation per pixel
rgbstdset1 = ((sum2rgbims / length(filelist)) - rgbavgset1.^2).^(0.5);
figure;
imshow(rgbavgset1, []);
% New - display the standard deviation image
figure;
imshow(rgbstdset1, []);
Also, note that I've scaled the display in each imshow call so that the smallest value gets mapped to 0 and the largest value gets mapped to 1. This does not change the actual contents of the images; it is just for display purposes.
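If you want to convince yourself that the variance identity used above is right, here is a quick check against MATLAB's built-in std with the population normalisation (i.e. dividing by N):
x = rand(1, 1000);                       % any test signal
mu = mean(x);
manual_std  = sqrt(mean(x.^2) - mu^2);   % average sum of squares minus mean squared
builtin_std = std(x, 1);                 % built-in, normalised by N
disp([manual_std builtin_std])           % the two values should agree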
I would like to resize a 512x512 image into a 363x726 image, which will be larger than the original image (of size 512x512). The extra pixels must take different values in the range 0-255.
I tried the following code:
I=imread('photo.jpg'); %photo.jpg is a 512X512 image
B=zeros(363,726);
sizeOfMatrixB=size(B);
display(sizeOfMatrixB);
B(1:262144)=I(1:262144);
imshow(B);
B(262155:263538)=0;
But I think this is a lengthy approach and the output is also not as desired. Could anyone suggest a better piece of code to do this? Thank you in advance.
I think that the code you have is actually pretty close to ideal, except that you have a lot of hard-coded values in there. Those should really be computed on the fly. We can do that using numel to count the number of elements in I.
B = zeros(363, 726);
%// Assign the first 262144 elements of B to the values in I
%// all of the rest will remain as 0
B(1:numel(I)) = I;
This flexibility is important and the importance is actually demonstrated via the typo in your last line:
B(262155:263538)=0;
%// Should be
B(262145:263538)=0;
Also, you don't need these extra assignments to zero at the end because you initialize the matrix to be all zeros in the first place.
A Side Note
It looks like you are spreading the original image data for each column across multiple columns. I'm guessing this isn't what you want. You probably only want to grab the first 363 rows of I to be placed into B. You can do that this way:
B = zeros(363, 726);
B(1:size(B, 1), 1:size(I, 2)) = I(1:size(B, 1), :);
Update
If you want the other values to be something other than zero, you can initialize your matrix to be that value instead.
value = 2;
B = zeros(363, 726) + value;
B(1:numel(I)) = I;
If you want them to be random integers between 0 and 255, use randi to initialize the matrix.
B = randi([0 255], 363, 726);
B(1:numel(I)) = I;
I have this Python code:
cv2.addWeighted(src1, 4, cv2.GaussianBlur(src1, (0, 0), 10), -4, 128)
How can I convert it to Matlab? So far I got this:
f = imread0('X.jpg');
g = imfilter(f, fspecial('gaussian',[size(f,1),size(f,2)],10));
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+128;
I want to subtract the local mean colour image from the original.
Input image:
Blending output from OpenCV:
The documentation for cv2.addWeighted has the definition such that:
cv2.addWeighted(src1, alpha, src2, beta, gamma[, dst[, dtype]]) → dst
Also, the operation performed to produce the output image is:
dst = saturate(src1*alpha + src2*beta + gamma)
(source: opencv.org)
Therefore, what your code is doing is exactly correct... at least for cv2.addWeighted. You take alpha, multiply it by the first image, take beta, multiply it by the second image, then add gamma on top. The only intricacy left is saturate, which means that any values beyond the dynamic range of the data type are capped at that range. Because the result can contain negatives, saturate simply means setting any negative values to 0 and any values greater than the expected maximum to that maximum; in this case, you'll want to make any values larger than 1 equal to 1. As such, it's a good idea to convert your image to double through im2double, because you want the additions and subtractions beyond the dynamic range to happen first and only then saturate. If you stay with the image's default precision (uint8), clipping happens during the intermediate arithmetic, before your explicit saturate step, and that gives you the wrong results. Because of this double conversion, you'll also want to change the addition of 128 for your gamma to 0.5 to compensate.
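The premature clipping is easy to see at the command line: once a uint8 intermediate hits 255 the information is gone, whereas a double keeps the full value for the later subtraction and saturation.
uint8(200) * 4    % returns 255 - already clipped before anything is subtracted
double(200) * 4   % returns 800 - the intermediate value survives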
Now, the only slight problem is your Gaussian blur. Looking at the documentation, by doing cv2.GaussianBlur(src1, (0, 0), 10) you are telling OpenCV to infer the mask size from the standard deviation, which is 10. MATLAB does not infer the size of the mask for you, so you need to do this yourself. A common practice is to take six times the standard deviation, take the floor and add 1; this is for both the horizontal and vertical dimensions of the mask. You can see my post here on the justification as to why this is common practice: By which measures should I set the size of my Gaussian filter in MATLAB?
Therefore, in MATLAB, you would do this with your Gaussian blur instead. BTW, it's simply imread, not imread0:
f = im2double(imread('http://i.stack.imgur.com/kl3Md.jpg')); %// Change - Reading image directly from StackOverflow
sigma = 10; %// Change
sz = 1 + floor(6*sigma); %// Change
g = imfilter(f, fspecial('gaussian', sz, sigma)); %// Change
%// Rest of the code is the same
alpha = 4;
beta = -4;
f1 = f*alpha+g*beta+0.5; %// Change
%// Saturate
f1(f1 > 1) = 1;
f1(f1 < 0) = 0;
I get this image:
Note that there is a slight difference in the way the result appears between OpenCV and MATLAB... especially the haloing around the eye. This is because OpenCV does something different when inferring the mask size for the Gaussian blur. I'm not sure exactly what it does, but specifying the mask size from the standard deviation, as I did above, is one of the most common heuristics. Play around with the standard deviation until you get something you like.
I have a string of length 480000; it represents an 800x600 frame,
meaning 600 rows and 800 columns. I need to find a 3x3 square where all 9 values are in a specific range (I have the range itself) and return the index of the middle pixel.
This is what I am doing right now:
# find a 3x3 square where minValue < value < maxValue for all 9 pixels
for row in range(598):
    for col in range(798):
        if checkRow(frame, row, col, minValue, maxValue):
            if checkRow(frame, row+1, col, minValue, maxValue):
                if checkRow(frame, row+2, col, minValue, maxValue):
                    string = str(row+1), str(col+1)
                    print string
                    return

def checkRow(Frame, row, col, minValue, maxValue):
    if (Frame[row*800+col] > minValue) and (Frame[row*800+col] < maxValue):
        if (Frame[row*800+col+1] > minValue) and (Frame[row*800+col+1] < maxValue):
            if (Frame[row*800+col+2] > minValue) and (Frame[row*800+col+2] < maxValue):
                return True
    return False
This way I check each "pixel" until the square is found.
Is there some kind of special function in Python for this job, or maybe a faster and more efficient algorithm?
I thought about checking a pixel every 2 columns and skipping 2 rows; that way I can make sure I won't miss the 3x3 square, but then when I hit a matching pixel I need to check 9 different positions for the square itself, so I am not sure how much faster it would be.
Thanks.