I am currently trying to build a site that will contain several images with patterns and shapes (let's say a few squares and circles of various colors and shapes in each picture). I am aiming to give users a way to upload their own image of a pattern and do a reverse image search to check whether a similar pattern image already exists on my site. Is there any way to implement this, either with custom code or by using a third-party API/widget?
Hi Ashish, below is MATLAB code for a function that generates a signature of a binary object's boundary. The signature is nearly size-independent, so you can use this concept for matching a shape across different scales.
function sig = signature(bw, prec)
% SIGNATURE  Boundary signature of a binary object, roughly scale-invariant.
%   bw   - binary image containing the object
%   prec - approximate number of boundary samples to keep
boundaries = bwboundaries(bw);
xy = boundaries{1};            % boundary of the first object
x = xy(:,1);
y = xy(:,2);
len = length(x);
step = ceil(len/prec);         % subsampling step along the boundary
indexes = 1:step:len;
xnew = x(indexes);
ynew = y(indexes);
cx = round(mean(xnew));        % centroid of the subsampled boundary
cy = round(mean(ynew));
xn = abs(xnew - cx);
yn = abs(ynew - cy);
sig = xn.^2 + yn.^2;           % squared distance of each sample from the centroid
sig = sig/max(sig);            % normalise so the signature is nearly size-independent
The following example shows how to use the signature function:
clc
clear all
close all
folder = 'E:\GoogleDrive\Mathworks\irisDEt\shapes';
im1 = imread([folder,'\3.png']);
scales = [1,2,3,4];
gray1 = im2bw(im1);                    % binarise the test image
for i = 1:length(scales)
    im = imresize(gray1, scales(i));   % rescale the shape
    sig = signature(im, 25);           % compute its signature at this scale
    figure, plot(sig)
    fra = getframe();
    img = frame2im(fra);               % capture the plot as an image
    imwrite(img, [folder, '\', num2str(i), '.png'])
end
Below are the test image and its signatures as the image size changes; the signatures look similar in shape. All of the above signatures were generated by the code given above.
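To use this for your reverse image search, you would compare the signature of the uploaded shape against the signatures stored for the images on your site. A minimal sketch of one way to do that (the query mask queryBW, the cell array storedSigs, and the distance threshold are assumptions, not part of the code above):
sigQuery = signature(queryBW, 25);            % queryBW: binary mask of the uploaded shape (assumed)
bestDist = inf;
bestIdx = 0;
for k = 1:numel(storedSigs)                   % storedSigs: cell array of precomputed signatures (assumed)
    sigRef = storedSigs{k};
    n = min(numel(sigQuery), numel(sigRef));  % signatures can differ slightly in length
    d = norm(sigQuery(1:n) - sigRef(1:n)) / n;
    if d < bestDist
        bestDist = d;
        bestIdx = k;
    end
end
isMatch = bestDist < 0.05;                    % threshold is an assumption; tune it on your data
Note that this simple distance is sensitive to where the boundary tracing starts, so in practice you may also want to compare circularly shifted versions of the signature.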
I want to train an image classifier using the Inception model.
Now, I have a dish called chicken rice.
Suppose I want to create a rice class and a chicken meat class.
So can I set the ground-truth output probability to [0.5, 0.5, 0, 0, 0, ...]?
In other words, if the target image contains content from two classes, what should I do to make the labels reasonable?
Has anybody tried this?
I have tried training on the images separately, and Google did this too.
import sys
import numpy as np
import h5py

keycnt = 0
imagcnt = 0
TestNumber_byclass = np.zeros([keycount], np.int32)   # keycount: number of classes (defined elsewhere)
for key in TestKeys:
    TestNumber_byclass[keycnt] = len(json_data_test[key])
    for imagedata in json_data_test[key]:
        imgdata = tf_resize_images(imagdir + imagedata + '.jpg')
        imgdata = np.array(imgdata, dtype=np.uint8)
        # make image center at 0 in the range of (-1,1]
        # imgdata = (imgdata - mean - 128) / 128
        h5f = h5py.File(h5filedir_test + str(imagcnt) + ".h5", "w")
        h5f.create_dataset('image', data=imgdata)
        h5f.create_dataset('label', data=keycnt)
        h5f.create_dataset('name', data=key)
        h5f.close()
        imagcnt = imagcnt + 1
    keycnt = keycnt + 1
    message = '\r[%d/%d] progress...' % (keycnt, keycount)
    sys.stdout.write(message)
    sys.stdout.flush()
Many thanks.
What you're trying to do is multi-label classification, where any M out of N classes can be present in one image. This is usually done by setting a class's target to 1 if the object appears in the image and to 0 if it does not.
The really important piece of information is that the last activation function needs to be a sigmoid instead of a softmax. That way you decouple the confidence for each class from the other classes, and the sum of the outputs can lie anywhere between 0 and N.
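A minimal sketch of that idea in Keras (the class count, feature size, and this particular head are assumptions for illustration, not your Inception retraining setup):
import numpy as np
import tensorflow as tf

num_classes = 5      # assumption: number of ingredient classes
feature_dim = 2048   # assumption: size of the bottleneck features fed to the head

# Sigmoid head with binary cross-entropy, so each class gets an independent 0/1 target
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(feature_dim,)),
    tf.keras.layers.Dense(num_classes, activation='sigmoid'),   # sigmoid, not softmax
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# An image of chicken rice then gets a multi-hot target rather than [0.5, 0.5, 0, ...]:
y = np.array([[1, 1, 0, 0, 0]], dtype=np.float32)   # rice and chicken both present
# model.train_on_batch(features, y)   # features: a (1, feature_dim) array of bottleneck activations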
I want to do multi-modality image registration (MRI/CT), but I do not have pre-aligned images, and the results I get with SimpleITK are very bad. Even if I try to align them first, the results are still very bad.
What can I do to fix this? My registration code is as follows:
import SimpleITK as sitk

def fusion(ct, mr):
    fixed = sitk.GetImageFromArray(ct, isVector=True)
    moving = sitk.GetImageFromArray(mr, isVector=True)
    numberOfBins = 24
    samplingPercentage = 0.10
    R = sitk.ImageRegistrationMethod()
    R.SetMetricAsMattesMutualInformation(numberOfBins)
    R.SetMetricSamplingPercentage(samplingPercentage, sitk.sitkWallClock)
    R.SetMetricSamplingStrategy(R.RANDOM)
    R.SetOptimizerAsRegularStepGradientDescent(1.0, .001, 200)
    R.SetInitialTransform(sitk.TranslationTransform(fixed.GetDimension()))
    R.SetInterpolator(sitk.sitkLinear)
    # R.AddCommand(sitk.sitkIterationEvent, lambda: command_iteration(R))
    outTx = R.Execute(fixed, moving)

    def get_result():
        resampler = sitk.ResampleImageFilter()
        resampler.SetReferenceImage(fixed)
        resampler.SetInterpolator(sitk.sitkLinear)
        resampler.SetDefaultPixelValue(100)
        resampler.SetTransform(outTx)
        out = resampler.Execute(moving)
        simg1 = sitk.Cast(sitk.RescaleIntensity(fixed), sitk.sitkUInt8)
        simg2 = sitk.Cast(sitk.RescaleIntensity(out), sitk.sitkUInt8)
        cimg = sitk.Compose(simg1, simg2, simg1 // 2. + simg2 // 2.)
        # sitk.Show(cimg, "ImageRegi
        return sitk.GetArrayFromImage(cimg)

    return get_result()
I think you forgot to do an initial alignment of your images before running the registration. For example:
trans = sitk.CenteredTransformInitializer(fixed, moving, sitk.Euler3DTransform(), sitk.CenteredTransformInitializerFilter.GEOMETRY)
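As a sketch, that initial transform can then be handed to the registration method in place of the plain translation (this assumes fixed and moving are 3D scalar volumes, so the Euler3DTransform applies):
R.SetInitialTransform(trans, inPlace=False)   # start the optimisation from the centered transform
outTx = R.Execute(fixed, moving)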
I have created a GUI in MATLAB which has three axes, and I would like to switch each plot between logarithmic and linear scale by right-clicking on the particular figure and selecting the scale option (1. logarithmic, 2. linear). I tried using uicontextmenu, but I didn't get the result.
firstcycle_mass = handles.Data1{1,1}(1:6392,1);
firstcycle_ioncurrent = handles.Data1{1,2}(1:6392,1);
replace_with_dot_firstcycle_mass = strrep(firstcycle_mass, ',','.');
X1 = str2double(replace_with_dot_firstcycle_mass);
replace_with_dot_firstcycle_ioncurrent = strrep(firstcycle_ioncurrent, ',','.');
Y1 = str2double(replace_with_dot_firstcycle_ioncurrent);
axes(handles.axes1);
plotline= plot(X1,Y1,'r');
xlabel('Mass [amu]');
ylabel('Ion current [A]');
c=uicontextmenu;
plotline.UIContextMenu = c;
% Create menu items for the uicontextmenu
m1 = uimenu(c,'Label','logarithmic','Callback',@setlinestyle);
m2 = uimenu(c,'Label','linear','Callback',@setlinestyle);
function setlinestyle(source,callbackdata)
switch source.Label
case 'logarithmic'
set(gca,'yscale','log')
case 'linear'
set(gca,'yscale','linear')
end
end
I have written this code to help me compare different image histograms; however, when I run it a figure window pops up. I can't see anywhere in the code where I have called imshow, and I am really confused. Can anyone see why? Thanks.
%ensure we start with an empty workspace
clear
myPath = 'C:\coursework\';
number_of_desired_results = 5; %top n results to return
images_path = strcat(myPath, 'fruitnveg');
images_file_names = dir(fullfile(images_path, '*.png'));
images = cell(length(images_file_names), 3);
number_of_images = length(images);
%search domain construction
%loop through all images and store them
disp('Starting construction of search domain...');
for i = 1:length(images)
imageFile = strcat(images_path, '\', images_file_names(i).name);
%store the image data
images{i, 1} = imread(imageFile);
%store histogram of image
images{i, 2} = imhist(rgb2ind(images{i, 1}, colormap(colorcube(256))));
%store name of image
images{i, 3} = images_file_names(i).name;
disp(strcat({'Loaded image '}, num2str(i)));
end
disp('Construction of search domain done');
%load the three example images
RGB1 = imread('C:\coursework\examples\salmon.jpg');
X1 = rgb2ind(RGB1,colormap(colorcube(256)));
example1 = imhist(X1);
RGB2 = imread('C:\coursework\examples\eggs.jpg');
X2 = rgb2ind(RGB2,colormap(colorcube(256)));
example2 = imhist(X2);
RGB3 = imread('C:\coursework\examples\steak.jpg');
X3 = rgb2ind(RGB3,colormap(colorcube(256)));
example3 = imhist(X3);
disp('three examples loaded');
disp('compare examples to loaded fruit images');
results = cell(length(images), 2);
results2 = cell(length(images), 2);
results3 = cell(length(images), 2);
for i = 1:length(images)
results{i,1} = images{i,3};
results{i,2} = hi(example1,images{i, 2}); %hi() is a separately defined histogram comparison function
end
results = flipdim(sortrows(results,2),1);
for i = 1:length(images)
results2{i,1} = images{i,3};
results2{i,2} = hi(example2,images{i, 2});
end
results2 = flipdim(sortrows(results2,2),1);
for i = 1:length(images)
results3{i,1} = images{i,3};
results3{i,2} = hi(example3,images{i, 2});
end
results3 = flipdim(sortrows(results3,2),1);
The colormap function sets the current figure's colormap; if there is no figure, one is created.
The second parameter of imhist should be the number of bins used in the histogram, not the colormap.
Run your code in the Matlab debugger, step through it line by line, and see when the figure window pops up. That'll tell you what's creating it.
Etienne's answer is right for why you're getting a figure, but I'd just like to add that colormap is unnecessary in this code:
images{i, 2} = imhist(rgb2ind(images{i, 1}, colormap(colorcube(256))));
All you need is:
images{i, 2} = imhist(rgb2ind(images{i, 1}, colorcube(256)));
The second input of rgb2ind should be a colormap, yes. But the output of colorcube is a colormap already. Unless you've got an existing figure and you either want to set the colormap of it or retrieve the colormap it is currently using, the actual function colormap is not necessary.
Other than opening an unnecessary figure, the output of your existing code won't be wrong, as I think in this situation colormap just returns, as its output argument, the colormap it was given as an input argument. For example, if you want to set the current figure's colormap to one of the built-in maps and return the actual colormap:
cmap = colormap('bone');
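By contrast, calling the colormap-generating function directly never touches a figure, which is all the histogram code needs (a quick illustration):
cmap = colorcube(256);   % just a 256-by-3 matrix of RGB values; no figure is created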
I'm writing a piece of code that has to transform an RGB image into a normalized rgb space. I've got it working with a for loop, but it runs too slowly and I need to evaluate lots of images, so I'm trying to vectorize the whole function to speed it up. What I have at the moment is the following:
R = im(:,:,1);
G = im(:,:,2);
B = im(:,:,3);
r=reshape(R,[],1);
g=reshape(G,[],1);
b=reshape(B,[],1);
clear R G B;
VNormalizedRed = r(:)/(r(:)+g(:)+b(:));
VNormalizedGreen = g(:)/(r(:)+g(:)+b(:));
VNormalizedBlue = b(:)/(r(:)+g(:)+b(:));
NormalizedRed = reshape(VNormalizedRed,height,width);
NormalizedGreen = reshape(VNormalizedGreen,height,width);
NormalizedBlue = reshape(VNormalizedBlue,height,width);
The main problem is that when it reaches VNormalizedRed = r(:)/(r(:)+g(:)+b(:)); it throws an out-of-memory error (which is really strange, because I had just freed three vectors of the same size). Where is the error? (solved)
Is it possible to do the same process in a more efficient way?
Edit:
After using Martin's suggestions I found that the reshape function was not necessary; the same can be done with simpler code:
R = im(:,:,1);
G = im(:,:,2);
B = im(:,:,3);
NormalizedRed = R(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
NormalizedGreen = G(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
NormalizedBlue = B(:,:)./sqrt(R(:,:).^2+G(:,:).^2+B(:,:).^2);
norm(:,:,1) = NormalizedRed(:,:);
norm(:,:,2) = NormalizedGreen(:,:);
norm(:,:,3) = NormalizedBlue(:,:);
I believe you want
VNormalizedRed = r(:)./(r(:)+g(:)+b(:));
Note the dot in front of the /, which specifies an element-by-element division. Without the dot you are solving a system of equations, which is likely not what you want to do, and which probably also explains the high memory consumption you're seeing.
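As a tiny illustration of the difference (the values are arbitrary):
a = [2 4 6];
b = [1 2 3];
a ./ b   % element-by-element division: [2 2 2]
a / b    % mrdivide: solves x*b = a in a least-squares sense, giving the scalar 2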
Your entire first snippet can be rewritten in one vectorized line:
im_normalized = bsxfun(@rdivide, im, sum(im,3,'native'));
And your second, slightly modified version as:
im_normalized = bsxfun(@rdivide, im, sqrt(sum(im.^2,3,'native')));
By the way, you should be aware of the data type used for the image, otherwise you can get unexpected results (due to integer division, for example). Therefore I would convert the image to double before performing the normalization calculations:
im = im2double(im);
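Putting the two together, a minimal sketch of the whole normalization (the file name is only a placeholder):
im = im2double(imread('example.png'));             % 'example.png' is a placeholder path
im_normalized = bsxfun(@rdivide, im, sum(im, 3));  % with double data the 'native' flag is no longer needed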