Matlab camera oscilloscope - image

I am currently trying to simulate an oscilloscope plugged into the output of a camera, in the context of digital film-making.
Here is my code:
clear all;
close all;
clc;
A = imread('06.tif');
[l,c,d] = size(A);
n=256;
B = zeros(n,c);
for i = 1:c
for j = 1:l
t = A(j,i);
B(t+1,i) = B(t+1,i) + 1;
end
end
B = B/0.45;
B = imresize(B,[l c]);
B = (B/255);
C = zeros(n,c);
for i = 1:c
for j = 1:l
t = 0.2126*A(j,i,1)+0.7152*A(j,i,2)+0.0723*A(j,i,3); % here is the supposed issue
C(t+1,i) = C(t+1,i) + 1;
end
end
C = C/0.45;
C = imresize(C,[l c]);
C = (C/255);
figure(1),imshow(B);
figure(2),imshow(C);
The problem is that I am getting breaks (gaps) in the second image, and unfortunately that is the one I want as output. My guess is that the issue lies in the linear combination done in the second for loop, but I cannot pin it down. I have tried both TIFF and JPEG input, and different data formats such as uint8 in MATLAB, but nothing helps.
Thank you for your attention; I am available for any questions.
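A likely culprit (an assumption, not a confirmed diagnosis): A is uint8, so each term such as 0.2126*A(j,i,1) is rounded to an integer and the sum can saturate at 255, which leaves empty rows ("breaks") in the resulting histogram image. A minimal sketch of a possible fix, casting to double before the weighted sum and rounding only once:
Ad = double(A);                 % avoid per-term uint8 rounding/saturation
C  = zeros(n,c);
for i = 1:c
    for j = 1:l
        % 0.0722 is the standard Rec. 709 blue coefficient; 0.0723 in the
        % original looks like a typo, but use whichever values you intend.
        t = round(0.2126*Ad(j,i,1) + 0.7152*Ad(j,i,2) + 0.0722*Ad(j,i,3));
        C(t+1,i) = C(t+1,i) + 1;
    end
end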

Related

Experimenting inverting pictures with Octave

This is my first time using Octave, experimenting with inverting an image. My file name is LinearAlgebraLab1.m, and after I run the file with Octave I get the error "error: no such file, '/home/LinearAlgebraLab1.m'".
However, before this, I was getting an error that my .jpg file couldn't be found. What should I change so that Octave runs my script without any errors?
%% import image
C = imread('MonaLisa2.jpg');
%% set slopes and intercepts for color transformation
redSlope = 1;
redIntercept = -80;
greenSlope = -.75;
greenIntercept = 150;
blueSlope = -.50;
blueIntercept = 200;
%% store RGB channels from image separately
R = C(:,:,1);
G = C(:,:,2);
B = C(:,:,3);
C2 = C;
S=size(C);
m=S(1,1);
n=S(1,2);
%h=S(1,3);
%% change red channel
M = R;
%%M2 = redSlope*cast(M,'double') + redIntercept*ones(786,579);
M2 = redSlope*cast(M,'double') + redIntercept*ones(m,n);
C2(:,:,1) = M2;
%% change green channel
M = G;
M2 = greenSlope*cast(M,'double') + greenIntercept*ones(m,n);
C2(:,:,2) = M2;
%% change blue channel
M = B;
M2 = blueSlope*cast(M,'double') + blueIntercept*ones(m,n);
C2(:,:,3) = M2;
%% visualize new image
image(C2)
axis equal tight off
set(gca,'position',[0 0 1 1],'units','normalized')
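The "no such file" error usually just means Octave's working directory is not where the script and image live. A minimal sketch, assuming a hypothetical folder /home/user/octave-work that contains both LinearAlgebraLab1.m and MonaLisa2.jpg (adjust the path for your machine):
cd('/home/user/octave-work');        % make that folder the current directory
run('LinearAlgebraLab1.m');          % now relative file names resolve there
% alternatively, keep the working directory as-is and give imread a full path:
% C = imread('/home/user/octave-work/MonaLisa2.jpg');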

a program to apply the following transformation function to a grayscale image

I want to apply the following transformation function to a grayscale image. I know how to apply the piecewise-linear stretch shown in my code below; my question is how to write a program for the transformation function in the figure.
Here is my code so far:
clear;
pollen = imread('Fig3.10(b).jpg');
u = double(pollen);
[nx ny] = size(u)
nshades = 256;
r1 = 80; s1 = 10; % Transformation by piecewise linear function.
r2 = 140; s2 = 245;
for i = 1:nx
for j = 1:ny
if (u(i,j)< r1)
uspread(i,j) = ((s1-0)/(r1-0))*u(i,j)
end
if ((u(i,j)>=r1) & (u(i,j)<= r2))
uspread(i,j) = ((s2 - s1)/(r2 - r1))*(u(i,j) - r1)+ s1;
end
if (u(i,j)>r2)
uspread(i,j) = ((255 - s2)/(255 - r2))*(u(i,j) - r2) + s2;
end
end
end
hist= zeros(nshades,1);
for i=1:nx
for j=1:ny
for k=0:nshades-1
if uspread(i,j)==k
hist(k+1)=hist(k+1)+1;
end
end
end
end
plot(hist);
pollenspreadmat = uint8(uspread);
imwrite(pollenspreadmat, 'pollenspread.jpg');
Thanks in advance
The figure says that any intensities between A and B should be set to C. All you have to do is modify your two for loops so that, for any values between A and B, the output location is set to C. I'll also assume the range is inclusive. You can simply remove the first and last if conditions and keep the middle one:
for i = 1:nx
for j = 1:ny
if ((u(i,j)>=r1) && (u(i,j)<= r2))
uspread(i,j) = C;
end
end
end
C is a constant that you would set yourself. For segmentation this value is usually set high, to distinguish the foreground from the background. You have a uint8 image here, so C = 255; would work.
However, I would recommend a more vectorized solution: avoid the for loops and use logical indexing instead:
uspread = u;
uspread(u >= r1 & u <= r2) = C;
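As a side note that goes beyond the original answer: the histogram triple loop in the question can be vectorized in the same spirit, and naming a variable hist shadows MATLAB's built-in hist function, so a different name is safer. A rough sketch, assuming uspread holds (or is rounded to) integer shades in 0..nshades-1:
counts = histc(round(uspread(:)), 0:nshades-1);   % one count per shade value
plot(counts);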

LAB image classification using matlab

I am trying to implement this algorithm, but I am a little confused about the classification based on the L, A and B planes. The algorithm outline is as follows:
1. Convert the RGB image to a LAB image.
2. Compute the mean values of the pixels in the L, A and B planes of the image separately.
3. If mean(A) + mean(B) ≤ 256:
   3.1. Classify the pixels with a value in L ≤ (mean(L) - standard deviation(L)/3) as shadow pixels and the others as non-shadow pixels.
   Else, classify the pixels with lower values in both the L and B planes as shadow pixels and the others as non-shadow pixels.
I am very confused about how to classify pixels as shadow or non-shadow based on L.
This is the code which has the algorithm implemented:
clear;
I = imread('flower.jpg');
rgb = imresize(I,[256,256]);
mlab = makecform('srgb2lab');
lab = applycform(rgb,mlab);
S_bin = im2bw(rgb);
S = S_bin + (S_bin == 0);
NS = S;
for i=1:255
for j=1:255
A = lab(j,i,2);
B = lab(j,i,3);
L = lab(j,i,1);
Lmean = mean(mean(L));
Amean = mean2(mean2(A));
Bmean = mean2(mean2(B));
if (Amean + Bmean) <= 256
Lstd = std2(L);
std = (Lmean-(Lstd/3));
Lmean = mean(mean(L));
if L <= std
S(j,i,1) = L;
S(j,i,2) = L;
S(j,i,3) = L;
else
S = lab;
end
else
if L < B
S(j,i,1) = L;
S(j,i,2) = L;
S(j,i,3) = L;
else
S(j,i,1) = B;
S(j,i,2) = B;
S(j,i,3) = B;
end
end
end
end
S = lab2rgb(S);
S=uint8(round(S*255));
figure(3);imshow(S);
First of all: inside your loop, L, A and B are scalars, just one single value each, so taking means or standard deviations of them will not work.
Assign the full L, A and B planes first, then take the means and standard deviations, and then build your image:
I = imread('flower.jpg');
rgb = imresize(I,[256,256]);
mlab = makecform('srgb2lab');
lab = applycform(rgb,mlab);
S_bin = im2bw(rgb);
S = S_bin + (S_bin == 0);
NS = S;
L = squeeze(lab(:,:,1));
A = squeeze(lab(:,:,2));
B = squeeze(lab(:,:,3));
Lmean = mean(L(:));
Amean = mean(A(:));
Bmean = mean(B(:));
Lstd = std(double(L(:)));   % std needs single or double input
Astd = std(double(A(:)));
Bstd = std(double(B(:)));
if (Amean + Bmean) <= 256
    tmp = L <= (Lmean - (Lstd/3));   % logical mask of the shadow pixels
    S = lab;                         % non-shadow pixels keep their lab values
    for k = 1:3
        plane = S(:,:,k);
        plane(tmp) = L(tmp);         % write L into all three planes at shadow pixels
        S(:,:,k) = plane;
    end
else
    tmp = L < B;                     % "shadow" wherever L is lower than B
    plane = B;
    plane(tmp) = L(tmp);             % L where L < B, B everywhere else
    S = cat(3, plane, plane, plane);
end
S = lab2rgb(S);
S=uint8(round(S*255));
figure(3);imshow(S);
This way you will essentially get a single-valued (grey) result, built from either L or B, but since your original code did the same, I have left that logic in place.
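If what you ultimately need is just the shadow classification, here is a compact sketch of one reading of the algorithm that produces a logical shadow mask; the handling of "lower values in both L and B" is my interpretation, not something spelled out in the question or the answer above:
L = double(lab(:,:,1));
A = double(lab(:,:,2));
B = double(lab(:,:,3));
if mean(A(:)) + mean(B(:)) <= 256
    shadowMask = L <= (mean(L(:)) - std(L(:))/3);       % step 3.1 of the outline
else
    shadowMask = (L < mean(L(:))) & (B < mean(B(:)));   % "lower values in both L and B"
end
figure; imshow(shadowMask);   % white = shadow pixels, black = non-shadow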

Get better performance for converting matrix to vector

When working with images, they usually have 3 layers (RGB). In order to do some computation, I need to convert each layer of the image into a vector.
I1 = ones(70,50,3); % the first image
I2 = 0.4 * ones(70,50,3); % the second image
for dd = 1:3
ILayer1 = I1(:,:,dd);
ILayerLinear1 = ILayer1(:);
ILayer2 = I2(:,:,dd);
ILayerLinear2 = ILayer2(:);
comp = ILayerLinear1 * ILayerLinear1.';
end
Here I have replaced the main computation with a very simple one, but that is not the point.
Is there a better way to avoid repeating the matrix-to-vector conversion, or to do it more efficiently? It may happen many times throughout the code.
Update:
I can also define a function, as follows, that takes an image and returns the vectors, but it still does not really improve the code.
function V = I2V(I)
[r,c,d] = size(I);
V = zeros(d,r*c);
for dd = 1:d
layer = I(:,:,dd);
V(dd,:) = layer(:);
end
end
I'm not sure about the outer product, but here's everything else.
I1 = reshape(1:70*50*3, 70,50,3);
I2 = 0.4*reshape(1:70*50*3, 70,50,3);
i1 = reshape(I1, [], 3);
i2 = reshape(I2, [], 3);
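As a follow-up on the outer-product part (this is one reading of the question's placeholder computation, not something claimed in the answer): with the channels laid out as columns of i1, the loop body becomes a plain column-times-row product, and reshape itself does not copy the image data, so the conversion is cheap.
for dd = 1:3
    comp = i1(:,dd) * i1(:,dd).';   % 3500-by-3500 outer product of layer dd with itself
end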

I want to correct this code for images; what changes do I need to make?

Currently I am doing face recognition: I have to determine whether the face in a test image is in the training database or not, i.e. decide yes or no.
Yes means show the matched image, and no means print the message NO IMAGE IN DATABASE. I have a program that currently finds the correct image when a match exists, but when there is no matching image in the database it still shows some other image that does not match. In that case it should print NO IMAGE IN DATABASE.
How do I do that?
The test and training image data are available at this link:
http://www.fileconvoy.com/dfl.php?id=g6e59fe8105a6e6389994740914b7b2fc99eb3e445
My program consists of four .m files, all given below. Only the first one needs to be run; the remaining three are functions.
clear all
clc
close all
TrainDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select training database path' );
TestDatabasePath = uigetdir('D:\Program Files\MATLAB\R2006a\work', 'Select test database path');
prompt = {'Enter test image name (a number between 1 to 10):'};
dlg_title = 'Input of PCA-Based Face Recognition System';
num_lines= 1;
def = {'1'};
TestImage = inputdlg(prompt,dlg_title,num_lines,def);
TestImage = strcat(TestDatabasePath,'\',char(TestImage),'.jpg');
im = imread(TestImage);
T = CreateDatabase(TrainDatabasePath);
[m, A, Eigenfaces] = EigenfaceCore(T);
OutputName = Recognition(TestImage, m, A, Eigenfaces);
SelectedImage = strcat(TrainDatabasePath,'\',OutputName);
SelectedImage = imread(SelectedImage);
imshow(im)
title('Test Image');
figure,imshow(SelectedImage);
title('Equivalent Image');
str = strcat('Matched image is : ',OutputName);
disp(str)
function T = CreateDatabase(TrainDatabasePath)
TrainFiles = dir(TrainDatabasePath);
Train_Number = 0;
for i = 1:size(TrainFiles,1)
if not(strcmp(TrainFiles(i).name,'.') | strcmp(TrainFiles(i).name,'..') | strcmp(TrainFiles(i).name,'Thumbs.db'))
Train_Number = Train_Number + 1; % Number of all images in the training database
end
end
T = [];
for i = 1 : Train_Number
str = int2str(i);
str = strcat('\',str,'.jpg');
str = strcat(TrainDatabasePath,str);
img = imread(str);
img = rgb2gray(img);
[irow icol] = size(img);
temp = reshape(img',irow*icol,1); % Reshaping 2D images into 1D image vectors
T = [T temp]; % 'T' grows after each turn
end
function [m, A, Eigenfaces] = EigenfaceCore(T)
m = mean(T,2); % Computing the average face image m = (1/P)*sum(Tj's) (j = 1 : P)
Train_Number = size(T,2);
A = [];
for i = 1 : Train_Number
temp = double(T(:,i)) - m; % Computing the difference image Ai = Ti - m
A = [A temp]; % Merging all centered images
end
L = A'*A; % L is the surrogate of covariance matrix C=A*A'.
[V D] = eig(L); % Diagonal elements of D are the eigenvalues for both L=A'*A and C=A*A'.
L_eig_vec = [];
for i = 1 : size(V,2)
if( D(i,i)>1 )
L_eig_vec = [L_eig_vec V(:,i)];
end
end
Eigenfaces = A * L_eig_vec; % A: centered image vectors
function OutputName = Recognition(TestImage, m, A, Eigenfaces)
ProjectedImages = [];
Train_Number = size(Eigenfaces,2);
for i = 1 : Train_Number
temp = Eigenfaces'*A(:,i); % Projection of centered images into facespace
ProjectedImages = [ProjectedImages temp];
end
InputImage = imread(TestImage);
temp = InputImage(:,:,1);
[irow icol] = size(temp);
InImage = reshape(temp',irow*icol,1);
Difference = double(InImage)-m; % Centered test image
ProjectedTestImage = Eigenfaces'*Difference; % Test image feature vector
Euc_dist = [];
for i = 1 : Train_Number
q = ProjectedImages(:,i);
temp = ( norm( ProjectedTestImage - q ) )^2;
Euc_dist = [Euc_dist temp];
end
[Euc_dist_min , Recognized_index] = min(Euc_dist);
OutputName = strcat(int2str(Recognized_index),'.jpg');
So, how do I generate an error message when no image matches?
At the moment, your application finds the most similar image (you appear to be using Euclidean distance as your measure of similarity) and returns it. There is no concept of whether the image actually "matches" or not.
Define a threshold on similarity, and then determine whether your most similar image meets that threshold. If it does, return it; otherwise, display an error message.
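A minimal sketch of that idea, dropped into the end of Recognition; the threshold value below is a placeholder that has to be tuned on your own training data, not a known-good number:
threshold = 1e6;                 % placeholder value, tune this empirically
[Euc_dist_min, Recognized_index] = min(Euc_dist);
if Euc_dist_min > threshold
    OutputName = '';             % no training image is similar enough
    disp('NO IMAGE IN DATABASE');
else
    OutputName = strcat(int2str(Recognized_index), '.jpg');
end
The calling script would then also need to check isempty(OutputName) before trying to imread and imshow the matched image.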
