Compute the mean absolute error between two images in MATLAB

I want to compute the mean absolute error between two images in MATLAB and name it MAE.
Here is the code:
x=imread('duck.jpg');
imshow(x)
xmin=min(x);
xmax = max(x);
xmean=mean(x);
I = double(x) / 255;
v = var(I(:));
y = imnoise(x, 'gaussian', 0, v / 10);
y = double(y) / 255;
imshow(y)

There's no need to evaluate min(), max(), and mean() for the first image in order to evaluate the MAE.
Since the MAE is the sum of the absolute (L1-norm) differences between corresponding pixels in your two images x and y, divided by the number of pixels, you can simply evaluate it as:
MAE=sum(abs(x(:)-y(:)))/numel(x);
where numel() is a function that returns the number of elements in its argument. Since x and y have the same number of elements, you can use either numel(x) or numel(y). Note that in the code above y was converted to double and scaled to [0,1] while x is still uint8; convert x the same way before taking the difference, otherwise the mixed integer/double arithmetic rounds and saturates.
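For example, reusing the scaled copy I = double(x)/255 that the question's code already computes:
MAE = sum(abs(I(:) - y(:))) / numel(I);   % both images now on the same [0,1] scale
% or equivalently: MAE = mean(abs(I(:) - y(:)));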

Related

How to create a mask or detect image section based on the intensity value?

I have a matrix named figmat from which I obtain the following pcolor plot (MATLAB R2016b).
Basically I only want to extract the bottom red high-intensity line from this plot.
I thought of extracting the maximum values from the matrix and creating some sort of mask on the main matrix, but I can't see a way to achieve this. Can it be accomplished with the help of an edge/image detection algorithm?
I was trying something like this with the following code to create a mask
A=max(figmat);
figmat(figmat~=A)=0;
imagesc(figmat);
But this gives only the boundary of maximum values. I also need the entire red color band.
Okay, I assume that the red line is linear and that its values can be uniquely separated from the rest of the picture. Let's generate some test data:
[x,y] = meshgrid(-5:.2:5, -5:.2:5);
n = size(x,1)*size(x,2);
z = -0.2*(y-(0.2*x+1)).^2 + 5 + randn(size(x))*0.1;
figure
surf(x,y,z);
This script generates a surface whose set of maximum values (x,y) can be described by the linear function y = 0.2*x+1. I added some noise to make it a bit more realistic.
We now select all points where z is larger than, let's say, 95 % of the way from the minimum to the maximum value; find can be used for this. Later, we want to use one-dimensional data, so we reshape everything.
thresh = min(min(z)) + (max(max(z))-min(min(z)))*0.95;
mask = reshape(z > thresh,1,n);
idx = find(mask>0);
xvec = reshape(x,1,n);
yvec = reshape(y,1,n);
xvec and yvec now contain the coordinates of all points, and idx selects those where z > thresh.
The last step is to fit a first-order (linear) polynomial to the selected points.
pp = polyfit(xvec(idx),yvec(idx),1)
pp =
0.1946 1.0134
Obviously these are roughly the coefficients of y = 0.2*x+1, as they should be.
I do not know whether this also works with your data, since I made some assumptions. The threshold level must be chosen carefully; maybe some preprocessing must be done to detect this level dynamically if you really want to process your images automatically. There might also be a simpler way to do it, but for me this one was straightforward and did not need any toolboxes.
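If you then need an actual mask on the grid (as asked for in the question), one possible follow-up, sketched here with a made-up tolerance tol, keeps only the points close to the fitted line:
tol = 0.5;                       % illustrative tolerance, tune for your data
yfit = polyval(pp, x);           % fitted band position for every grid point
bandMask = abs(y - yfit) < tol;  % logical mask of the band
zMasked = z .* bandMask;         % everything outside the band set to zero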
By assuming:
There is only one band to extract.
It always has the maximum values.
It is linear.
I can adapt my previous answer to this case as well, with a few minor changes:
First, we get the distribution of the values in the matrix and look for a population in the top values that can be distinguished from the smaller ones. This is done by finding the largest value x(i) on the histogram that:
Is a local maximum (its bin is higher than that of x(i+1) and x(i-1))
Has more values above it than within it (the sum of the heights of bins x(i+1) to x(end) is greater than the height of bin x(i)):
This is how it is done:
[h,x] = histcounts(figmat); % get the distribution of intensities
d = diff(fliplr(h)); % the difference in bin height, going from large x to small x
band_min_ind = find(cumsum(d)>size(figmat,2) & d<0, 1); % first bin that fits the conditions
flp_val = fliplr(x); % the values of x from large to small
band_min = flp_val(band_min_ind); % the value of x that fits the conditions
Now we continue as before: mask all the unwanted values and interpolate the line:
mA = figmat>band_min; % mask all values below the top-value mode
[y1,x1] = find(mA,1); % find the first nonzero row
[y2,x2] = find(mA,1,'last'); % find the last nonzero row
m = (y1-y2)/(x1-x2); % the line slope
n = y1-m*x1; % the intercept
f_line = @(x) m.*x+n; % the line function
If we plot it, we can see the line running where the band for detection was. Next, we can make this line thicker for a better representation of the band:
thick = max(sum(mA)); % maximum thickness of the band
tmp = (1:thick)-ceil(thick/2); % helper vector for expanding
rows = bsxfun(@plus,tmp.',floor(f_line(1:size(figmat,2)))); % all the rows for each column
rows(rows<1) = 1; % make sure we do not go out of range
rows(rows>size(figmat,1)) = size(figmat,1); % make sure we do not go out of range
inds = sub2ind(size(figmat),rows,repmat(1:size(figmat,2),thick,1)); % convert to linear indices
mA(inds) = true; % add the interpolation to the mask
result = figmat.*mA; % apply the mask to figmat
Finally, we can plot that result after masking, excluding the unwanted areas:
imagesc(result(any(result,2),:))

Smooth coloring algorithm for the Mandelbrot set

I know there are a lot of questions already answered about this; however, mine varies slightly. We implement the smooth coloring algorithm, as I understand it, as:
mu = 1 + n + math.log2(math.log2(z)) / math.log2(2)
where n is the escape iteration, 2 is the power z is raised to, and, if I'm not mistaken, z is the modulus of the complex number at that escape iteration. We then use this renormalized escape value in our linear interpolation between colors to produce a smoothly banded Mandelbrot set. I've seen answers to other questions about this where the value is run through an HSB-to-RGB conversion, but I still fail to understand how that would provide a smooth gradient of colors and how to implement it in Python.
However, whenever I attempted to implement this it produced floating-point RGB values, and there isn't an image format that I know of, besides .tiff, that supports those; if we round off to integers we still get unsmooth banding. So how is this supposed to produce a smoothly banded image if we cannot directly use the RGB values it produces? Example code of what I tried is below. Since I don't fully understand how to implement this, I made an attempt at a solution that somewhat produces smooth banding: the image is banded between two colors, blue for the full set and a progressively whiter color the further we zoom in on the set, to the point where at a certain depth everything just appears blurred. Since I'm using tkinter I had to convert the RGB values to hex to be able to draw them to the canvas.
I'm computing the set recursively, and in my other function (not posted below) I set the window width and height, iterate over these for the pixels of the tkinter window, and compute the recursion in the inner loop.
def linear_interp(self, color_1, color_2, i):
    r = (color_1[0] * (1 - i)) + (color_2[0] * i)
    g = (color_1[1] * (1 - i)) + (color_2[1] * i)
    b = (color_1[2] * (1 - i)) + (color_2[2] * i)
    rgb_list = [r, g, b]
    for value in rgb_list:
        if value > MAX_COLOR:
            rgb_list[rgb_list.index(value)] = MAX_COLOR
        if value < 0:
            rgb_list[rgb_list.index(value)] = abs(value)
    return (int(rgb_list[0]), int(rgb_list[1]),
            int(rgb_list[2]))

def rgb_to_hex(self, color):
    return "#%02x%02x%02x" % color

def mandel(self, x, y, z, iteration):
    bmin = 100
    bmax = 255
    power_z = 2
    mod_z = math.sqrt((z.real * z.real) + (z.imag * z.imag))
    # If it's not in the set or we have reached the maximum depth
    if abs(z) >= float(power_z) or iteration == DEPTH:
        z = z
        if iteration > 255:
            factor = (iteration / DEPTH) * 255
        else:
            factor = iteration
        logs = math.log2(math.log2(abs(z) + 1) / math.log2(power_z))
        r = g = math.floor(factor + 5 - logs)
        b = bmin + (bmax - bmin) * r / 255
        rgb = (abs(r), abs(g), abs(round(b)))
        self.canvas.create_line(x, y, x + 1, y + 1,
                                fill=self.rgb_to_hex(rgb))
    else:
        z = (z * z) + self.c
        self.mandel(x, y, z, iteration + 1)
    return z
The difference between adjacent colors #000000, #010000, ..., #FE0000, #FF0000 is so small that you obtain a smooth gradient from black to red. Hence, simply round your values: suppose the smoothed values from your coloring function range from 0 (inclusive) to 1 (exclusive); then you can simply use
int(value * 256)

How to draw repeating straight lines with specific radius and angle in matlab?

Suppose I would like to draw an image like the following:
where the pixel values are 0 for black and 1 for white.
These lines are drawn with a specific radius and angle.
Now I create an 80 x 160 matrix:
texturematrix = zeros(80,160);
Then I want to change particular elements to 1 according to the line conditions, but how do I repeat them, spaced a specific distance apart from each other, efficiently?
Thanks a lot everyone!
This might not be what you are looking for, but generating such an image could be done by plotting a set of lines, as follows:
% grid sizes
m = 6;
n = 5;
% line length and angle
len = 1;
theta = .1*pi;
[a,b] = meshgrid(1:m,1:n);
x = reshape([a(:),a(:)+len*cos(theta),nan(numel(a),1)]',[],1);
y = reshape([b(:),b(:)+len*sin(theta),nan(numel(b),1)]',[],1);
h = figure();
plot(x,y,'k', 'LineWidth', 2);
But this is a plot, not yet a texture matrix. So we capture the figure and convert it into a matrix of the desired size:
set(gca, 'position',[0 0 1 1], 'units','normalized', 'YTick',[], 'XTick',[]); % let the axes fill the figure, hide the ticks
frame = frame2im(getframe(h)); % capture the figure as an image
im = imresize(frame,[80 160]); % resize to the desired texture size
M = ~(im(2:end,2:end,1)==255); % 1 where a line was drawn, 0 elsewhere
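To check the result, the matrix can be displayed directly as a binary image:
figure;
imshow(M); % white pixels mark the drawn lines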

Algorithm for fitting points to a grid

I have a list of points in 2D space that form an (imperfect) grid:
x x x x
x x x x
x
x x x
x x x x
What's the best way to fit these to a rigid grid (i.e. create a two-dimensional array and work out where each point fits in that array)?
There are no holes in the grid, but I don't know in advance what its dimensions are.
EDIT: The grid is not necessarily regular (the spacing between rows/columns may be uneven).
A little bit of an image processing approach:
If you think of what you have as a binary image where each X is 1 and the rest is 0, you can sum up the rows and columns and run a peak-finding algorithm on those sums; the peaks correspond to the x and y lines of the grid. Apply some smoothing technique to the signals first (e.g. lowess).
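A minimal sketch of that idea in MATLAB (the point coordinates below are made-up toy data; movmean stands in for the smoothing, and the peak test is a crude strictly-greater-than-both-neighbours check):
pts = [ 9 10; 10 19; 11 30;                      % made-up, slightly jittered 3-by-3 grid
       19 11; 20 20; 21 29;
       29  9; 30 21; 31 31];
img = zeros(max(pts(:,2))+5, max(pts(:,1))+5);   % binary image of the points
img(sub2ind(size(img), pts(:,2), pts(:,1))) = 1;
colSum = movmean(sum(img, 1), 3);                % smoothed column sums
rowSum = movmean(sum(img, 2).', 3);              % smoothed row sums
isPeak = @(s) find(s(2:end-1) > s(1:end-2) & s(2:end-1) > s(3:end)) + 1;
gridX = isPeak(colSum)                           % estimated x-positions of the grid columns
gridY = isPeak(rowSum)                           % estimated y-positions of the grid rows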
I'm sure you get the idea :-)
Good luck
The best I could come up with is a brute-force solution that calculates the grid dimensions that minimize the error in the square of the Euclidean distance between the point and its nearest grid intersection.
This assumes that the number of points p is exactly equal to the number of columns times the number of rows, and that each grid intersection has exactly one point on it. It also assumes that the minimum x/y value for any point is zero. If the minimum is greater than zero, just subtract the minimum x value from each point's x coordinate and the minimum y value from each point's y coordinate.
The idea is to create all of the possible grid dimensions given the number of points. In the example above with 16 points, we would make grids with dimensions 1x16, 2x8, 4x4, 8x2 and 16x1. For each of these grids we calculate where the grid intersections would lie by dividing the maximum width of the points by the number of columns minus 1, and the maximum height of the points by the number of rows minus 1. Then we fit each point to its closest grid intersection and find the error (square of the distance) between the point and the intersection. (Note that this only works if each point is closer to its intended grid intersection than to any other intersection.)
After summing the errors for each grid configuration individually (e.g. getting one error value for the 1x16 configuration, another for the 2x8 configuration and so on), we select the configuration with the lowest error.
Initialization:
P is the set of points such that P[i][0] is the x-coordinate and
P[i][1] is the y-coordinate
Let p = |P| or the number of points in P
Let max_x = the maximum x-coordinate in P
Let max_y = the maximum y-coordinate in P
(minimum values are assumed to be zero)
Initialize min_error_dist = +infinity
Initialize min_error_cols = -1
Algorithm:
for (col_count = 1; col_count <= p; col_count++) {
// only compute for integer # of rows and cols
if ((p % col_count) == 0) {
row_count = p/col_count;
// Compute the width of the columns and height of the rows
// If the number of columns is 1, let the column width be max_x
// (and similarly for rows)
if (col_count > 1) col_width = max_x/(col_count-1);
else col_width=max_x;
if (row_count > 1) row_height = max_y/(row_count-1);
else row_height=max_y;
// reset the error for the new configuration
error_dist = 0.0;
for (i = 0; i < p; i++) {
// For the current point, normalize the x- and y-coordinates
// so that it's in the range 0..(col_count-1)
// and 0..(row_count-1)
normalized_x = P[i][0]/col_width;
normalized_y = P[i][1]/row_height;
// Error is the sum of the squares of the distances between
// the current point and the nearest grid point
// (in both the x and y direction)
error_dist += (normalized_x - round(normalized_x))^2 +
(normalized_y - round(normalized_y))^2;
}
if (error_dist < min_error_dist) {
min_error_dist = error_dist;
min_error_cols = col_count;
}
}
}
return min_error_cols;
Once you've got the number of columns (and thus the number of rows) you can recompute the normalized values for each point and round them to get the grid intersection they belong to.
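For example, a small sketch of that final step in MATLAB (assuming, as in the pseudocode, that P is a p-by-2 matrix of points and that both min_error_cols and the resulting row count are greater than 1):
col_width  = max(P(:,1)) / (min_error_cols - 1);    % column spacing, as in the pseudocode
row_height = max(P(:,2)) / (p/min_error_cols - 1);  % row spacing
grid_col = round(P(:,1) / col_width) + 1;           % 1-based column index of each point
grid_row = round(P(:,2) / row_height) + 1;          % 1-based row index of each point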
In the end I used this algorithm, inspired by beaker's:
Calculate all the possible dimensions of the grid, given the total number of points
For each possible dimension, fit the points to that dimension and calculate the variance in alignment:
Order the points by x-value
Group the points into columns: the first r points form the first column, where r is the number of rows
Within each column, order the points by y-value to determine which row they're in
For each row/column, calculate the range of y-values/x-values
The variance in alignment is the maximum range found
Choose the dimension with the least variance in alignment
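A rough MATLAB sketch of this selection procedure (pts and its toy coordinates are illustrative; the original post gives no reference implementation):
pts = [ 9 10; 19 11; 29  9; 10 19; 20 20; 30 21; 11 30; 21 29; 31 31];
N = size(pts, 1);
bestScore = inf;  bestCols = NaN;
for nCols = 1:N
    if mod(N, nCols) ~= 0, continue; end          % only whole-number row counts
    nRows = N / nCols;
    [~, xOrder] = sort(pts(:,1));                 % order the points by x-value
    colIdx = reshape(xOrder, nRows, nCols);       % first nRows points -> column 1, etc.
    colSpread = 0;
    rowY = zeros(nRows, nCols);
    for c = 1:nCols
        colPts = pts(colIdx(:,c), :);
        colSpread = max(colSpread, max(colPts(:,1)) - min(colPts(:,1)));
        rowY(:,c) = sort(colPts(:,2));            % y-values sorted into rows
    end
    rowSpread = max(max(rowY, [], 2) - min(rowY, [], 2));
    score = max(colSpread, rowSpread);            % the "variance in alignment"
    if score < bestScore
        bestScore = score;
        bestCols = nCols;
    end
end
fprintf('best fit: %d columns x %d rows\n', bestCols, N/bestCols);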
I wrote this algorithm that accounts for missing coordinates as well as coordinates with errors.
Python Code
# Input [x, y] coordinates of a 'sparse' grid with errors
xys = [[103,101],
[198,103],
[300, 99],
[ 97,205],
[304,202],
[102,295],
[200,303],
[104,405],
[205,394],
[298,401]]
def row_col_avgs(num_list, ratio):
    # Finds the average of each row and column. Coordinates are
    # assigned to a row and column by specifying an error ratio.
    last_num = 0
    sum_nums = 0
    count_nums = 0
    avgs = []
    num_list.sort()
    for num in num_list:
        if num > (1 + ratio) * last_num and count_nums != 0:
            avgs.append(int(round(sum_nums/count_nums, 0)))
            sum_nums = num
            count_nums = 1
        else:
            sum_nums = sum_nums + num
            count_nums = count_nums + 1
        last_num = num
    avgs.append(int(round(sum_nums/count_nums, 0)))
    return avgs
# Split coordinates into two lists of x's and y's
xs, ys = map(list, zip(*xys))
# Find averages of each row and column within a specified error.
x_avgs = row_col_avgs(xs, 0.1)
y_avgs = row_col_avgs(ys, 0.1)
# Return Completed Averaged Grid
avg_grid = []
for y_avg in y_avgs:
    avg_row = []
    for x_avg in x_avgs:
        avg_row.append([int(x_avg), int(y_avg)])
    avg_grid.append(avg_row)
print(avg_grid)
Code Output
[[[102, 101], [201, 101], [301, 101]],
[[102, 204], [201, 204], [301, 204]],
[[102, 299], [201, 299], [301, 299]],
[[102, 400], [201, 400], [301, 400]]]
I am also looking for another solution using linear algebra. See my question here.

Inexplicable results after using ind2sub in Matlab

I am having some problems in MATLAB that I don't understand. The following piece of code analyses a collection of images and should return a coherent image (and always did).
But since I put an if-condition in the second for-loop (for optimisation purposes), it returns an interlaced image.
I don't understand why, and am getting ready to throw my computer out the window. I suspect it has something to do with ind2sub, but as far as I can see everything is working just fine. Does anyone know why it's doing this?
function imageMedoid(imageList, resizeFolder, outputFolder, x, y)
% local variables
medoidImage = zeros([1, y*x, 3]);
alphaImage = zeros([y x]);
medoidContainer = zeros([y*x, length(imageList), 3]);
% loop through all images in the resizeFolder
for i=1:length(imageList)
    % get filename and load image and alpha channel
    fname = imageList(i).name;
    [container, ~, alpha] = imread([resizeFolder fname]);
    % convert alpha channel to zeros and ones, add to alphaImage
    alphaImage = alphaImage + (double(alpha) / 255);
    % add (r,g,b) values to medoidContainer and reshape to single line
    medoidContainer(:, i, :) = reshape(im2double(container), [y*x 3]);
end
% loop through every pixel
for i=1:(y * x)
    % convert i to coordinates for alphaImage
    [xCoord, yCoord] = ind2sub([x y],i);
    if alphaImage(yCoord, xCoord) == 0
        % write default value to medoidImage if alpha is zero
        medoidImage(1, i, 1:3) = 0;
    else
        % calculate distances between all values for current pixel
        distances = pdist(squeeze(medoidContainer(i,:,1:3)));
        % convert found distances to matrix of distances
        distanceMatrix = squareform(distances);
        % find index of image with the medoid value
        [~, j] = min(mean(distanceMatrix,2));
        % write found medoid value to medoidImage
        medoidImage(1, i, 1:3) = medoidContainer(i, j, 1:3);
    end
end
% replace values larger than one (in alpha channel) by one
alphaImage(alphaImage > 1) = 1;
% reshape image to original proportions
medoidImage = reshape(medoidImage, y, x, 3);
% save medoid image
imwrite(medoidImage, [outputFolder 'medoid_modified.png'], 'Alpha', alphaImage);
end
I didn't include the whole code, just this function (for brevity's sake); if anyone needs more for a better understanding of it, please let me know and I'll include it.
When you call ind2sub, you give it the size [x y], but the actual size of alphaImage is [y x], so you are not indexing the correct location with xCoord and yCoord.
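One way to write the fix described above, keeping the rest of the loop unchanged:
[yCoord, xCoord] = ind2sub([y x], i); % alphaImage is y-by-x, so pass the size in that order
if alphaImage(yCoord, xCoord) == 0
    medoidImage(1, i, 1:3) = 0;
else
    % ... medoid computation unchanged
end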
