I have a picture of a handwritten letter (say, the letter "y"). Keeping only the first of the three color values (since it is a grayscale image), I get a 111x81 matrix which I call aLetter. I can see this image (please ignore the title) using:
colormap gray; image(aLetter,'CDataMapping','scaled')
What I want is to remove the white space around this letter and somehow average the remaining pixels so that I have an 8x8 matrix (let's call it simpleALetter). Now if I use:
colormap gray; image(simpleALetter,'CDataMapping','scaled')
I should see a pixellated version of the letter:
Any advice on how to do this would be greatly appreciated!
You need several steps to achieve what you want (updated in the light of @rwong's observation that I had white and black flipped…):
Find the approximate 'bounding box' of the letter:
make sure that "text" is the highest value in the image
set things that are "not text" to zero - anything below a threshold
sum along row and column, find non-zero pixels
upsample the image in the bounding box to a multiple of 8
downsample to 8x8
Here is how you might do that in your situation:
aLetter = max(aLetter(:)) - aLetter; % invert image: now white = close to zero
aLetter = aLetter - min(aLetter(:)); % make the smallest value zero
maxA = max(aLetter(:));
aLetter(aLetter < 0.1 * maxA) = 0; % thresholding; play with this to set "white" to zero
% find the bounding box:
colSums = sum(aLetter, 1); % sum down each column: one value per column
rowSums = sum(aLetter, 2); % sum across each row: one value per row
nonzeroH = find(colSums);  % columns that contain text
nonzeroV = find(rowSums);  % rows that contain text
smallerLetter = aLetter(nonzeroV(1):nonzeroV(end), nonzeroH(1):nonzeroH(end));
% now we have the box, but it's not 8x8 yet. Resampling:
sz = size(smallerLetter);
% first upsample in both X and Y by a factor 8:
bigLetter = repmat(reshape(smallerLetter, [1 sz(1) 1 sz(2)]), [8 1 8 1]);
% then reshape and sum so you end up with 8x8 in the final matrix:
letter8 = squeeze(sum(sum(reshape(bigLetter, [sz(1) 8 sz(2) 8]), 3), 1));
% finally, flip it back "the right way" black is black and white is white:
letter8 = 255 - (letter8 * 255 / max(letter8(:)));
You can do this with explicit for loops, but it would be much slower.
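For reference, a minimal sketch of what that loop version might look like for the final block-sum step, assuming the same sz and bigLetter as above (bigImg is just a hypothetical 2-D view of the upsampled image):
bigImg = reshape(bigLetter, 8*sz(1), 8*sz(2)); % 2-D view: each pixel replicated 8x8
letter8loop = zeros(8);
for r = 1:8
    for c = 1:8
        block = bigImg((r-1)*sz(1)+1 : r*sz(1), (c-1)*sz(2)+1 : c*sz(2));
        letter8loop(r, c) = sum(block(:)); % sum one sz(1)-by-sz(2) block per output pixel
    end
end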
You can also use some of the blockproc functions in MATLAB, but I am using FreeMat tonight and it doesn't have those… Neither does it have any Image Processing Toolbox functions, so this is "hard core".
As for picking a good threshold: if you know that more than 90% of your image is "white", you can determine the threshold dynamically by sorting the pixels and reading off the appropriate percentile. Otherwise, as the comment in the code says, "play with it" until you find something that works in your situation.
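For example, a minimal sketch of that dynamic approach, assuming at least 90% of the (already inverted) image is background; this would replace the fixed 0.1 * maxA threshold above:
sortedVals = sort(aLetter(:));
thresh = sortedVals(round(0.9 * numel(sortedVals))); % ~90% of pixels fall at or below this value
aLetter(aLetter <= thresh) = 0;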
Related
I have an image, size 213 x 145 pixels. I want to resize it to 128 x 128 pixels for example. I've already tried the code below:
i = imread ('alif1.png');
I = imresize (i, [128 128], 'bilinear');
OR
i = imread ('alif1.png');
I = imresize (i, [128 128], 'lanczos3');
It gave me a square image, but the image became disproportionate: the aspect ratio was not preserved.
I want to resize the image to a square shape without distorting or stretching it, instead padding or cropping the white background. I still can't figure out the right code. I hope someone can help.
Any help will be very much appreciated :)
I = imread('alifi.png');
Crop the image, specifying the crop rectangle:
I2 = imcrop(I,[75 68 128 128]);
Size and position of the crop rectangle, specified as a four-element position vector of the form [xmin ymin width height].
For more details, see the MATLAB imcrop documentation and related blog posts.
If you want to resize (not crop) the image and keep the aspect ratio (so you don't lose any part of the image AND it doesn't get distorted), you can first add margins to make the image square.
You can achieve this using the function padarray, or just by creating a new image of zeros and then placing your image at the appropriate coordinates (see the sketch after this explanation).
Once your image is squared, you can resize it to 128x128 using imresize.
In order to add margins, you will have to decide where to add them (top & bottom OR left & right).
Also, since padarray adds the same amount of margin on both sides, you have to check whether the number of pixels you need is even. If it's odd, first append an extra row (or column) of zeros to your image.
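Here is a minimal sketch of the zeros-based alternative mentioned above (im is your grayscale image; computing the offsets directly sidesteps the even/odd issue):
[nrows, ncols] = size(im);
n = max(nrows, ncols);              % side length of the square canvas
squared = zeros(n, n, class(im));   % for a white background, use 255*ones(...) instead
rowOffset = floor((n - nrows) / 2);
colOffset = floor((n - ncols) / 2);
squared(rowOffset+1 : rowOffset+nrows, colOffset+1 : colOffset+ncols) = im;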
So basically you have three options:
Make the image square by not preserving the aspect ratio (which is what you already tried).
Crop the image as suggested by @ShvetChakra and @bla (but you will lose some image info).
Add margins to the image and resize (but you will end up with a square image with margins).
Magic doesn't exist so "you must choose, but choose wisely"
(Quote from Indiana Jones and the Last Crusade).
EDIT:
% Example with a 5x2 image, so an extra column will be added
% in order to use padarray.
im = [1 2; 3 4; 5 6; 7 8; 9 10];
nrows = size(im, 1);
ncols = size(im, 2);
d = abs(ncols - nrows);  % difference between ncols and nrows
if (mod(d, 2) == 1)      % if the difference is an odd number
    if (ncols > nrows)   % we add a row at the end
        im = [im; zeros(1, ncols)];
        nrows = nrows + 1;
    else                 % we add a column at the end
        im = [im zeros(nrows, 1)];
        ncols = ncols + 1;
    end
end
if ncols > nrows
    im = padarray(im, [(ncols - nrows)/2 0]);
else
    im = padarray(im, [0 (nrows - ncols)/2]);
end
% Here im is a 5x5 matrix, not perfectly centered
% because we added an odd number of columns: 3
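Once the image is square, the final resize mentioned earlier is a one-liner (bilinear chosen here to match the question; any imresize method works):
im128 = imresize(im, [128 128], 'bilinear');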
I am trying to come up with an algorithm to determine the dominant color in an image (either taken from a device's camera or by selecting an existing photo in the photo library). I have written an iOS 8 application in Swift that can grab the RGB value of each pixel in the image, but I don't really know what to do from there.
For pixels that have a distinct dominant color, say RGB(230, 15, 30), it's pretty easy to determine the dominant color. However, I don't really know what to do for pixels that have RGB values where 2 of the 3 values are similar, say RGB(200, 215, 30).
My original thought was to keep 3 counters (one for each color channel) and add each pixel's corresponding RGB values to those counters. At the end I would divide each counter by the total number of pixels, and the max of the 3 values would be the dominant color. However, like I mentioned before, when the results are close to each other I can't say that one color necessarily dominates the others.
Just looking for some thoughts and suggestions.
I ran into this problem a few weeks ago, and having read many posts about it, I found the best method to be the hierarchical quantization presented in this post: http://aishack.in/tutorials/dominant-color/. I have also implemented it in Python: https://github.com/wenmin-wu/dominant-colors-py . You can install it with pip (pip install dominantcolors) and use it as follows:
from dominantcolors import get_image_dominant_colors
dominant_colors = get_image_dominant_colors(image_path='/path/to/image_path', num_colors=3)
An idea:
The first step is to reduce the number of colors, for example with "Color Quantization using K-Means". In the example from the link, the number of colors was reduced from 96K to 64.
The second step is to calculate the ratio (share of pixels) for every color and pick the one with the biggest value.
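As a rough illustration of that two-step idea, here is a minimal sketch in MATLAB (assuming the Statistics Toolbox kmeans is available; img is an RGB image, and the 64 clusters follow the linked example):
pixels = double(reshape(img, [], 3));   % one row per pixel
[idx, centers] = kmeans(pixels, 64);    % step 1: quantize to 64 colors
counts = accumarray(idx, 1);            % step 2: pixel count per quantized color
[~, biggest] = max(counts);
dominantRGB = centers(biggest, :);      % dominant color as an RGB triple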
You can check my hobby project to find the dominant color in a UIImage: https://github.com/ruuki/ColorFinder
What it basically does is create clusters of the colors of the image and return the most dominant one in a completion block. You can tweak the threshold parameters within the source code. Hope it helps.
I had a similar task; here is my Python code:
import picamera
import picamera.array
import numpy as np
from math import sqrt, atan2, degrees

DEBUG = False  # set True to print intermediate values

def get_colour_name(rgb):
    rgb = rgb / 255.0
    # hue angle from opponent-style colour coordinates
    alpha = (2 * rgb[0] - rgb[1] - rgb[2]) / 2
    beta = sqrt(3) / 2 * (rgb[1] - rgb[2])
    hue = int(degrees(atan2(beta, alpha)))
    std = np.std(rgb)
    mean = np.mean(rgb)
    if hue < 0:
        hue = hue + 360
    if std < 0.055:  # channels nearly equal: a shade of grey
        if mean > 0.85:
            colour = "white"
        elif mean < 0.15:
            colour = "black"
        else:
            colour = "grey"
    elif (hue > 50) and (hue <= 160):
        colour = "green"
    elif (hue > 160) and (hue <= 250):
        colour = "blue"
    else:
        colour = "red"
    if DEBUG:
        print(rgb, hue, std, mean, colour)
    return str(int(hue)) + ": " + colour

def scan_colour():
    with picamera.PiCamera() as camera:
        with picamera.array.PiRGBArray(camera) as stream:
            camera.start_preview()
            camera.resolution = (100, 100)
            for foo in camera.capture_continuous(stream, 'rgb', use_video_port=False, resize=None, splitter_port=0, burst=True):
                stream.truncate()
                stream.seek(0)
                RGBavg = stream.array.mean(axis=0).mean(axis=0)  # mean colour of the frame
                colour = get_colour_name(RGBavg)
                print(colour)

scan_colour()
The idea is to compute the mean colour of all pixels and determine the colour name from the hue angle. To detect greyscale results, I check whether the colour lies near the grey axis of the colour space (i.e. a low standard deviation across the channels).
I am interested in adding a single Gaussian-shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian-shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data available here which can be used with this script to calculate NDVI
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\';
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce an hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have also not specified what this is. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally a good rule to choose the standard deviation of your mask according to the 3-sigma rule. As such, once you set hsize, you can calculate sigma to be:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. After that, invoke fspecial as I did above. It's generally a good idea to make hsize an odd integer. The reason is that when we finally place this in your image, the syntax will allow your mask to be placed symmetrically. I'll talk about this when we get to the last question.
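For instance, with a hypothetical mask size of 25 pixels (a value you would tune to the size of your tree):
hsize = 25;                 % chosen purely for illustration
sigma = (hsize - 1) / 6;    % the 3-sigma rule gives sigma = 4
mask = fspecial('gaussian', hsize, sigma);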
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%// Scale by 45 so the range goes between 0 and 45;
%// cast to uint8 to make this compatible with your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible for placement in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
This should hopefully and adequately answer your questions. Good luck!
I have an image and I want to import this image into MATLAB. I am using the following code. The problem I have is that when I convert the image to grayscale, everything changes and the converted image is not similar to the original one. In other words, I want to keep the values (or, let's say, the image) as they are in the original image. Is there any way to do this?
I = imread('myimage.png');
figure, imagesc(I), axis equal tight xy
I2 = rgb2gray(I);
figure, imagesc(I2), axis equal tight xy
Your original image is already using a jet colormap. The problem is, when you convert it to grayscale, you lose some crucial information. See the image below.
In the original image you have a heatmap. Blue areas generally indicate "low value", whereas red areas indicate "high value". But when converted to grayscale, both areas indicate low value, as they approach dark pixels (see the arrows).
A possible solution is this:
You take every pixel of your image, find the nearest (closest) color value in the jet colormap, and use its index as a gray value.
I will show you first the final code and the results. The explanation goes below:
I = im2double(imread('myimage.png'));
map = jet(256);
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
Igray = zeros(size(I, 1), size(I, 2), 'uint8');
for ii = 1:size(Irgb, 1)
[~, idx] = min(sum((bsxfun(@minus, Irgb(ii, :), map)) .^ 2, 2));
Igray(ii) = idx - 1;
end
clear Irgb;
subplot(2,1,1), imagesc(I), axis equal tight xy
subplot(2,1,2), imagesc(Igray), axis equal tight xy
Result:
>> whos I Igray
Name Size Bytes Class Attributes
I 110x339x3 894960 double
Igray 110x339 37290 uint8
Explanation:
First, you get the jet colormap, like this:
map = jet(256);
It will return a 256x3 colormap with the possible colors of the jet palette, where each row is an RGB color. map(1,:) would be kind of a dark blue, and map(256,:) would be kind of a dark red, as expected.
Then, you do this:
Irgb = reshape(I, size(I, 1) * size(I, 2), 3);
... to turn your 110x339x3 image into a 37290x3 matrix, where each row is an RGB pixel.
Now, for each pixel, you take the Euclidean distance of that pixel to the map pixels. You take the index of the nearest one and use it as a gray value. The minus one (-1) is because the index is in the range 1..256, but a gray value is in the range 0..255.
Note: the Euclidean distance takes a square root at the end, but since we are just trying to find the closest value, there is no need to do so.
EDIT:
Here is a 10x faster version of the code:
I = im2double(imread('myimage.png'));
map = jet(256);
[C, ~, IC] = unique(reshape(I, size(I, 1) * size(I, 2), 3), 'rows');
equiv = zeros(size(C, 1), 1, 'uint8');
for ii = 1:numel(equiv)
[~, idx] = min(sum((bsxfun(@minus, C(ii, :), map)) .^ 2, 2));
equiv(ii) = idx - 1;
end
Irgb = reshape(equiv(IC), size(I, 1), size(I, 2));
Irgb = Irgb(end:-1:1,:);
clear equiv C IC;
It runs faster because it exploits the fact that the colors on your image are restricted to the colors in the jet palette. Then, it counts the unique colors and only match them to the palette values. With fewer pixels to match, the algorithm runs much faster. Here are the times:
Before:
Elapsed time is 0.619049 seconds.
After:
Elapsed time is 0.061778 seconds.
In the second image, you're using the default colormap, i.e. jet. If you want grayscale, then try using colormap(gray).
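For example, reusing the display code from the question:
figure, imagesc(I2), axis equal tight xy, colormap(gray)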
A biologist friend of mine asked me if I could help him make a program to count the squama (is this the right translation?) of lizards.
He sent me some images and I tried some things in MATLAB. For some images it's much harder than for others, for example when there are darker (black) regions. At least with my method. I'm sure I can get some useful help here. How should I improve this? Have I taken the right approach?
These are some of the images.
I got the best results by following Image Processing and Counting using MATLAB. It basically converts the image to grayscale and then thresholds it to black and white. But I did add a bit of erosion.
Here's the code:
img0=imread('C:...\pic.png');
img1=rgb2gray(img0);
%The output image BW replaces all pixels in the input image with luminance greater than level with the value 1 (white) and replaces all other pixels with the value 0 (black). Specify level in the range [0,1].
img2=im2bw(img1,0.65);%(img1,graythresh(img1));
imshow(img2)
figure;
%erode
se = strel('line',6,0);
img2 = imerode(img2,se);
se = strel('line',6,90);
img2 = imerode(img2,se);
imshow(img2)
figure;
imshow(img1, 'InitialMag', 'fit')
% Make a truecolor all-green image. I use this later to overlay it on top of the original image to show which elements were counted (with green)
green = cat(3, zeros(size(img1)),ones(size(img1)), zeros(size(img1)));
hold on
h = imshow(green);
hold off
%counts the elements now defined by black spots on the image
[B,L,N,A] = bwboundaries(img2);
%imshow(img2); hold on;
set(h, 'AlphaData', img2)
text(10,10,strcat('\color{green}Objects Found:',num2str(length(B))))
figure;
%this produces a new image showing each counted element and its count id on top of it.
imshow(img2); hold on;
colors=['b' 'g' 'r' 'c' 'm' 'y'];
for k=1:length(B),
boundary = B{k};
cidx = mod(k,length(colors))+1;
plot(boundary(:,2), boundary(:,1), colors(cidx),'LineWidth',2);
%randomize text position for better visibility
rndRow = ceil(length(boundary)/(mod(rand*k,7)+1));
col = boundary(rndRow,2); row = boundary(rndRow,1);
h = text(col+1, row-1, num2str(L(row,col)));
set(h,'Color',colors(cidx),'FontSize',14,'FontWeight','bold');
end
figure;
spy(A);
And these are some of the results. In the top-left corner you can see how many were counted.
Also, I think it's useful to have the counted elements marked in green so at least the user can know which ones have to be counted manually.
There is one route you should consider: watershed segmentation. Here is a quick and dirty example with your first image (it assumes you have the Image Processing Toolbox):
raw=rgb2gray(imread('lCeL8.jpg'));
Icomp = imcomplement(raw);
I3 = imhmin(Icomp,20);
L = watershed(I3);
%%
imagesc(L);
axis image
Result shown with a colormap:
You can then count the cells as follows:
count = numel(unique(L)); % note: unique(L) also includes label 0 (ridge lines) and the background basin
One of the advantages is that it can be directly fed to regionprops and give you all the nice details about the individual 'squama':
r=regionprops(L, 'All');
imshow(raw);
for k=2:numel(r)
if r(k).Area>100 % I chose 100 to filter out the objects with a small area.
rectangle('Position',r(k).BoundingBox, 'LineWidth',1, 'EdgeColor','b', 'Curvature', [1 1]);
end
end
Which you could use to monitor over/under segmentation:
Note: special thanks to @jucestain for helping with the proper access to the fields in the r structure here.