Octave imwrite loses grayscale image

I am using Octave 3.6.4 to process an image and store it afterwards. The image I read is grayscale, and after my calculations the matrix should still be grayscale. However, when I open the stored image, the gray pixels are gone: everything is either black or white, and the formerly gray pixels are essentially all white.
Here is the processing code:
function aufgabe13()
    [img, map, alpha] = imread("Buche.png");
    [imax, jmax] = size(img);
    a = 0.7;
    M = 255 * ones(imax, jmax + round(imax * a));
    for i = 1:imax
        begin = round((imax - i) * a);
        M(i, begin+1 : begin+jmax) = img(i,:);
    end
    imwrite(M, 'BucheScherung.png', 'png');
end
So what am I doing wrong?

The reason is that M is a double matrix, and for double matrices the values are expected to lie in [0,1] when they represent an image. Because the values in your image are in [0,255] when read in (type uint8), most of them exceed 1 and are therefore displayed as white. What you should do is convert the image to double precision normalized to [0,1], then proceed as before. This can be done with the im2double function.
In other words, do this:
function aufgabe13()
    [img, map, alpha] = imread("Buche.png");
    img = im2double(img); % Edit
    [imax, jmax] = size(img);
    a = 0.7;
    M = ones(imax, jmax + round(imax * a)); % Edit
    for i = 1:imax
        begin = round((imax - i) * a);
        M(i, begin+1 : begin+jmax) = img(i,:);
    end
    imwrite(M, 'BucheScherung.png', 'png');
end
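Alternatively, you can stay in uint8 and skip the normalization entirely, since imwrite writes uint8 matrices with the [0,255] convention directly. A minimal sketch of that variant (untested, assuming img is read in as uint8):

function aufgabe13()
    [img, map, alpha] = imread("Buche.png");
    [imax, jmax] = size(img);
    a = 0.7;
    M = 255 * ones(imax, jmax + round(imax * a), "uint8"); % white uint8 canvas
    for i = 1:imax
        begin = round((imax - i) * a);
        M(i, begin+1 : begin+jmax) = img(i,:);
    end
    imwrite(M, 'BucheScherung.png', 'png');
end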

Related

Interp2 of image with transformed coordinates

I have two greyscale images that I am trying to align using a scalar scaling s (1x1), a rotation matrix R (2x2) and a translation vector t (2x1). I can calculate image1's transformed coordinates as
y = s*R*x + t;
The resulting images are shown below:
The first image is image1 before transformation.
The second image is image1 (red), with the attempted interp2 interpolation shown on top of image2 (green).
The third image is the result of manually inserting the pixel values from image1 into an empty array (of the same size as image2) using the transformed coordinates.
From this we can see that the coordinate transformation must have been successful, as the images are aligned, although not perfectly (which is to be expected, since only 2 coordinate pairs were used in calculating s, R and t).
How come interp2 is not producing a result more similar to when I manually insert pixel values?
The code for doing this is included below:
Interpolation code
function [transformed_image] = interpolate_image(im_r,im_t,s,R,t)
    [m,n] = size(im_t);
    % doesn't help if I use get_grid (which the manual code below uses) here
    [~, grid_xr, grid_yr] = get_ipgrid(im_r);
    [x_t, grid_xt, grid_yt] = get_ipgrid(im_t);
    y = s*R*x_t + t;
    yx = reshape(y(1,:), m, n);
    yy = reshape(y(2,:), m, n);
    transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end
function [x, grid_x, grid_y] = get_ipgrid(image)
    [m,n] = size(image);
    [grid_x,grid_y] = meshgrid(1:n,1:m);
    x = [reshape(grid_x, 1, []); reshape(grid_y, 1, [])]; % x is [2 x m*n] coordinate pairs
end
The manual code
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
    [m,n] = size(im_t);
    [x_t, grid_xt, grid_yt] = get_grid(im_t);
    y = s*R*x_t + t;
    ymat = reshape(y',m,n,2);
    yx = ymat(:,:,1);
    yy = ymat(:,:,2);
    transformed_image = zeros(m,n);
    for i = 1:m
        for j = 1:n
            % make sure coordinates are inside
            if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
                transformed_image(round(yx(i,j)),round(yy(i,j))) = im_r(i,j);
            end
        end
    end
end

function [x, grid_x, grid_y] = get_grid(image)
    [m,n] = size(image);
    [grid_y,grid_x] = meshgrid(1:n,1:m);
    x = [grid_x(:) grid_y(:)]'; % x is [2 x m*n] coordinate pairs
end
Can anyone see what I'm doing wrong with interp2? I feel like I have tried everything.

Turns out I got interpolation all wrong.
In my question I calculate the coordinates of im1 in im2.
However, the way interpolation works is that I need to calculate the coordinates of im2 in im1, so that I can map the image as shown below.
This means that I also calculated the wrong s, R and t, since they were used to transform im1 -> im2, whereas I needed im2 -> im1 (this is also called the inverse transform). Below is the manual code, which is basically the same as interp2 with nearest-neighbour interpolation:
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
    [m,n] = size(im_t);
    [x_t, grid_xt, grid_yt] = get_grid(im_t);
    y = s*R*x_t + t;
    ymat = reshape(y',m,n,2);
    yx = ymat(:,:,1);
    yy = ymat(:,:,2);
    transformed_image = zeros(m,n);
    for i = 1:m
        for j = 1:n
            % make sure coordinates are inside
            if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
                transformed_image(i,j) = im_r(round(yx(i,j)),round(yy(i,j)));
            end
        end
    end
end
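For completeness, here is a hedged sketch (untested) of how the same inverse mapping could be done with interp2 itself, reusing get_ipgrid from the question. Since y = s*R*x + t, the inverse is x = (1/s)*R'*(y - t) when R is a pure rotation (so R' = inv(R)): take the output grid of im_t, map it back into im_r, and sample there.

function [transformed_image] = interpolate_image_inv(im_r, im_t, s, R, t)
    % sketch: sample im_r at the back-projected coordinates of im_t's grid
    [m, n] = size(im_t);
    [x_t, ~, ~] = get_ipgrid(im_t);    % 2 x (m*n) output coordinates
    x_r = (1/s) * R' * (x_t - t);      % inverse of y = s*R*x + t (R orthogonal)
    yx = reshape(x_r(1,:), m, n);
    yy = reshape(x_r(2,:), m, n);
    [~, grid_xr, grid_yr] = get_ipgrid(im_r);
    transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end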

2D Image transformations by pixel

I am trying to apply translation, Euclidean, similarity, affine and projective transformations to an image, pixel by pixel. The input to my program is the image name and the transformation matrix.
This is my code
function imagetrans(name, m)
    Image = imread(name);
    [rows, cols] = size(Image);
    newImage(1:rows,1:cols) = 1;
    for row = 1 : rows
        for col = 1 : cols
            if(Image(row,col) == 0)
                point = [row;col;1];
                answer = m * point;
                answer = int8(answer);
                newx = answer(1,1);
                newy = answer(2,1);
                newImage(newx,newy) = 0;
            end
        end
    end
    imshow(newImage);
end
This is the image
Right now I am testing just a translation matrix.
matrix =
1 0 7
0 1 2
0 0 1
When I pass the image and matrix through the function, my result is just a little black line.
What am I doing wrong?
Using the MATLAB debugger, I noticed that you are casting to int8, which is too small to represent all indices. You should use int32/int64 or uint32/uint64 instead, i.e.
answer = int8(answer);
should be
answer = uint32(answer);
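You can see the saturation directly in the command window (values illustrative):

int8(200)   % ans = 127 -- int8 saturates at its maximum, so large indices collapse
uint32(200) % ans = 200 -- wide enough for pixel indices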
Please try to use the MATLAB debugger before asking "why does it not work?".

How can I change the outer limits of a Circular mask to a different colour

I have the following function, which successfully creates a grey circular mask over the input image, so that the new image is a grey border around a circular image. Example: grey circular mask.
All I want to do is make the mask a very specific green, but I haven't been successful.
Here is the code:
function [newIm] = myCircularMask(im)
    % Setting variables
    rad = size(im,1)/2.1; % radius of the circle window
    im = double(im);
    [rows, cols, planes] = size(im);
    newIm = zeros(rows, cols, planes);
    % Generating hard-edged circular mask with 1 inside and 0 outside
    M = rows;
    [X,Y] = meshgrid(-M/2:1:(M-1)/2, -M/2:1:(M-1)/2);
    mask = double(zeros(M,M));
    mask(X.^2 + Y.^2 < rad^2) = 1;
    % Soften edge of mask
    gauss = fspecial('gaussian',[12 12],0.1);
    mask = conv2(mask,gauss,'same');
    % Multiply image by mask, i.e. x1 inside, x0 outside
    for k = 1:planes
        newIm(:,:,k) = im(:,:,k).*mask;
    end
    % Make mask either 0 inside or -127 outside
    mask = (abs(mask-1)*127);
    % Now add mask to image
    for k = 1:planes
        newIm(:,:,k) = newIm(:,:,k)+mask;
    end
    newIm = floor(newIm)/255;
The type of green I would like to use is of RGB values [59 178 74].
I'm a beginner with MATLAB, so any help would be greatly appreciated.
Cheers!
Steve
After masking your image, create a color version of your mask:
% test with simple mask
mask = ones(10,10);
mask(5:7,5:7)=0;
% invert mask, multiply with rgb-values, make rgb-matrix:
r_green=59/255; g_green=178/255; b_green=74/255;
invmask=(1-mask); % use mask with ones/zeroes
rgbmask=cat(3,invmask*r_green,invmask*g_green,invmask*b_green);
Add this to your masked image.
Edit:
function [newIm] = myCircularMask(im)
    % Setting variables
    rad = size(im,1)/2.1; % radius of the circle window
    im = double(im);
    [rows, cols, planes] = size(im);
    newIm = zeros(rows, cols, planes);
    % Generating hard-edged circular mask with 1 inside and 0 outside
    M = rows;
    [X,Y] = meshgrid(-M/2:1:(M-1)/2, -M/2:1:(M-1)/2);
    mask = double(zeros(M,M));
    mask(X.^2 + Y.^2 < rad^2) = 1;
    % Soften edge of mask
    gauss = fspecial('gaussian',[12 12],0.1);
    mask = conv2(mask,gauss,'same');
    % Multiply image by mask, i.e. x1 inside, x0 outside
    for k = 1:planes
        newIm(:,:,k) = im(:,:,k).*mask;
    end
    % Here follows the new code:
    % invert mask, multiply with RGB values, make RGB matrix:
    r_green = 59/255; g_green = 178/255; b_green = 74/255;
    invmask = (1-mask); % use mask with ones/zeroes
    rgbmask = cat(3, invmask*r_green, invmask*g_green, invmask*b_green);
    newIm = newIm + rgbmask;
Note that I haven't been able to test my suggestion, so there might be errors.
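For example, a hypothetical call (the file name is just an assumption):

im = imread('photo.jpg'); % any RGB image
newIm = myCircularMask(im);
imshow(newIm);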

How to extract the color shade from a sample image and use it to recolor another image?

I have a sample image and a target image. I want to transfer the color shades of the sample image to the target image. Please tell me how to extract the color from the sample image.
Here are the images:
input source image:
input map for desired output image
output image
You can use a technique called "Histogram matching" (another description)
Basically, you use the histogram of your source image as a goal and transform the values of each input-map pixel to get the output histogram as close to the source as possible. You do this for each RGB channel of the image.
Here is my python code for that:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, normed=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    im3 = np.interp(imsrc[:,:,d].flatten(), cdftint, bins[:-1])
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))
imsave("histnormresult.jpg", imres)
The output for your samples will look like this:
You could also try doing the same in HSV colorspace; it might give better results.
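For MATLAB users, roughly the same per-channel matching is available out of the box. A hedged sketch (the file names are assumptions; imhistmatch requires the Image Processing Toolbox, R2012b or later, and which image plays which role depends on the direction you want):

src = imread('source.jpg');       % image whose color distribution we want
tgt = imread('tint_target.jpg');  % image to be re-tinted
res = imhistmatch(tgt, src);      % match each channel's histogram to src
imwrite(res, 'histnormresult.jpg');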
I think the hardest part is to determine the dominant color of the first image. Just looking at it, with all the highlights and shadows, the best overall color will be the one that has the highest combination of brightness and saturation. I start with a blurred image to reduce the effects of noise and other anomalies, then convert each pixel to the HSV color space for the brightness and saturation measurement. Here's how it looks in Python with PIL and colorsys:
import colorsys
from PIL import ImageFilter

# im1: the PIL image loaded earlier
blurred = im1.filter(ImageFilter.BLUR)
ld = blurred.load()
max_hsv = (0, 0, 0)
for y in range(blurred.size[1]):
    for x in range(blurred.size[0]):
        r, g, b = tuple(c / 255. for c in ld[x, y])
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s + v > max_hsv[1] + max_hsv[2]:
            max_hsv = h, s, v
r, g, b = tuple(int(c * 255) for c in colorsys.hsv_to_rgb(*max_hsv))
For your image I get a color of (210, 61, 74) which looks like:
From that point it's just a matter of transferring the hue and saturation to the other image.
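That last step could look roughly like this in MATLAB (a hedged sketch: h and s are the dominant hue and saturation found above, as scalars in [0,1], and 'target.jpg' is a hypothetical file):

img = im2double(imread('target.jpg'));
hsv = rgb2hsv(img);
hsv(:,:,1) = h;       % impose the dominant hue
hsv(:,:,2) = s;       % impose the dominant saturation
out = hsv2rgb(hsv);   % keep the original brightness channel (V)
imshow(out);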
The histogram matching solutions above did not work for me. Here is my own, based on OpenCV:
import cv2
import numpy as np

def match_image_histograms(image, reference):
    chans1 = cv2.split(image)
    chans2 = cv2.split(reference)
    new_chans = []
    for ch1, ch2 in zip(chans1, chans2):
        hist1 = cv2.calcHist([ch1], [0], None, [256], [0, 256])
        hist1 /= hist1.sum()
        hist2 = cv2.calcHist([ch2], [0], None, [256], [0, 256])
        hist2 /= hist2.sum()
        lut = np.searchsorted(hist1.cumsum(), hist2.cumsum())
        # cv2.LUT needs a 256-entry 8-bit table; searchsorted returns int64
        lut = np.clip(lut, 0, 255).astype('uint8')
        new_chans.append(cv2.LUT(ch1, lut))
    return cv2.merge(new_chans).astype('uint8')
The approach:
1. Obtain the average color from the color map, ignoring saturated white/black colors.
2. Convert the light map to grayscale.
3. Change the dynamic range of the light map to match your desired output. I use the maximum dynamic range; you could instead compute the range of the color map and apply it to the light map.
4. Multiply the light map by the average color.
This is what it looks like:
And this is the C++ source code:
//picture pic0,pic1,pic2;
// pic0 - source color
// pic1 - source light map
// pic2 - output
int x,y,rr,gg,bb,i,i0,i1;
double r,g,b,a;

// init output as source light map in grayscale i=r+g+b
pic2=pic1;
pic2.rgb2i();

// change light map dynamic range to maximum
i0=pic2.p[0][0].dd; // min
i1=pic2.p[0][0].dd; // max
for (y=0;y<pic2.ys;y++)
    for (x=0;x<pic2.xs;x++)
    {
        i=pic2.p[y][x].dd;
        if (i0>i) i0=i;
        if (i1<i) i1=i;
    }
for (y=0;y<pic2.ys;y++)
    for (x=0;x<pic2.xs;x++)
    {
        i=pic2.p[y][x].dd;
        i=(i-i0)*767/(i1-i0);
        pic2.p[y][x].dd=i;
    }

// extract average color from color map (normalized to unit vector)
for (r=0.0,g=0.0,b=0.0,y=0;y<pic0.ys;y++)
    for (x=0;x<pic0.xs;x++)
    {
        rr=BYTE(pic0.p[y][x].db[picture::_r]);
        gg=BYTE(pic0.p[y][x].db[picture::_g]);
        bb=BYTE(pic0.p[y][x].db[picture::_b]);
        i=rr+gg+bb;
        if (i<400) // ignore saturated colors (whiteish), 3*255=white
         if (i>16) // ignore too dark colors (blackish), 0=black
         {
            r+=rr;
            g+=gg;
            b+=bb;
         }
    }
a=1.0/sqrt((r*r)+(g*g)+(b*b)); r*=a; g*=a; b*=a;

// recolor output
for (y=0;y<pic2.ys;y++)
    for (x=0;x<pic2.xs;x++)
    {
        a=DWORD(pic2.p[y][x].dd);
        rr=r*a; if (rr>255) rr=255; pic2.p[y][x].db[picture::_r]=BYTE(rr);
        gg=g*a; if (gg>255) gg=255; pic2.p[y][x].db[picture::_g]=BYTE(gg);
        bb=b*a; if (bb>255) bb=255; pic2.p[y][x].db[picture::_b]=BYTE(bb);
    }
I am using my own picture class, so here are the relevant members:
xs,ys - size of image in pixels
p[y][x].dd - pixel at (x,y) position, as a 32-bit integer
p[y][x].db[4] - pixel access by color bands (r,g,b,a)
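For readers without this picture class, the same steps might look roughly like this in MATLAB/Octave (an untested sketch; the file names are assumptions, and the thresholds mirror the C++ constants rescaled to [0,1]):

cmap = im2double(imread('colormap.jpg'));    % hypothetical color map image
lmap = im2double(imread('lightmap.jpg'));    % hypothetical light map image
g = mean(lmap, 3);                                    % grayscale light map
g = (g - min(g(:))) / (max(g(:)) - min(g(:)));        % stretch to full dynamic range
px = reshape(cmap, [], 3);
keep = sum(px,2) > 16/255 & sum(px,2) < 400/255;      % drop near-black and near-white
avg = mean(px(keep,:), 1);
avg = avg / norm(avg);                                % normalize to a unit vector
out = min(3 * cat(3, g*avg(1), g*avg(2), g*avg(3)), 1); % recolor and clamp
imshow(out);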
[notes]
If this does not meet your needs, then please clarify and add more images, because your current example is really not self-explanatory.
Regarding the previous answer, one thing to be careful with:
once the CDF reaches its maximum (1, or 255 after the uint8 scaling in the code), the interpolation is misled and matches your values wrongly. To avoid this, you should give the interpolation function only the meaningful part of the CDF (before it reaches its maximum) and the corresponding bins. Here is the adapted answer:
from scipy.misc import imsave, imread
import numpy as np

imsrc = imread("source.jpg")
imtint = imread("tint_target.jpg")
nbr_bins = 255
imres = imsrc.copy()
for d in range(3):
    imhist, bins = np.histogram(imsrc[:,:,d].flatten(), nbr_bins, normed=True)
    tinthist, bins = np.histogram(imtint[:,:,d].flatten(), nbr_bins, normed=True)
    cdfsrc = imhist.cumsum()  # cumulative distribution function
    cdfsrc = (255 * cdfsrc / cdfsrc[-1]).astype(np.uint8)  # normalize
    cdftint = tinthist.cumsum()  # cumulative distribution function
    cdftint = (255 * cdftint / cdftint[-1]).astype(np.uint8)  # normalize
    im2 = np.interp(imsrc[:,:,d].flatten(), bins[:-1], cdfsrc)
    # use only the part of the CDF before it saturates (255 after the scaling above)
    if (cdftint == 255).sum() > 0:
        idx_max = np.where(cdftint == 255)[0][0]
        im3 = np.interp(im2, cdftint[:idx_max+1], bins[:idx_max+1])
    else:
        im3 = np.interp(im2, cdftint, bins[:-1])
    imres[:,:,d] = im3.reshape((imsrc.shape[0], imsrc.shape[1]))  # as before
imsave("histnormresult.jpg", imres)
Enjoy!

Matlab - displaying a background image

I'm trying to find the median values of the R, G & B channels of each pixel for every 10th image in a set of 100, to find the background image. My values all seem correct, but when I try to display the background at the end of my code it's always white. Please help.
%// list all the files in some folder
folder = '~/V&R/1/';
filelist = dir(folder);
images = zeros(480,640,3,100);
% images = [];
%// the first two in filelist are . and ..
count = 1;
for i = 3:size(filelist,1)
    if filelist(i).isdir ~= true
        fname = filelist(i).name;
        %// if file extension is jpg
        if strcmp( fname(size(fname,2)-3:size(fname,2)), '.jpg' ) == 1
            tmp = imread([folder fname]);
            images(:,:,:,count) = tmp;
            count = count + 1;
        end
    end
end

background = zeros(480,640,3);
for j = 1:480
    for i = 1:640
        tmpR = zeros(1,10);
        tmpG = zeros(1,10);
        tmpB = zeros(1,10);
        for k = 1:10
            tmpR(k) = images(j,i,1,k*10);
            tmpG(k) = images(j,i,2,k*10);
            tmpB(k) = images(j,i,3,k*10);
        end
        background(j,i,1) = floor(median(tmpR));
        background(j,i,2) = floor(median(tmpG));
        background(j,i,3) = floor(median(tmpB));
    end
end
imshow(background)
thanks
The first step is to vectorize your code. Instead of the following block of code:
background = zeros(480,640,3);
for j = 1:480
    for i = 1:640
        tmpR = zeros(1,10);
        tmpG = zeros(1,10);
        tmpB = zeros(1,10);
        for k = 1:10
            tmpR(k) = images(j,i,1,k*10);
            tmpG(k) = images(j,i,2,k*10);
            tmpB(k) = images(j,i,3,k*10);
        end
        background(j,i,1) = floor(median(tmpR));
        background(j,i,2) = floor(median(tmpG));
        background(j,i,3) = floor(median(tmpB));
    end
end
write:
subimages = images(:, :, :, 1:10:end);
background = median(subimages, 4);
Now, as said before, use imshow with the [] option to show your image:
imshow(background, []);
If you still see a white image, then you are probably dealing with a matrix of double values that are not between [0, 1]. Images in MATLAB are usually of class double or single with values between 0 and 1, or of class uint8 or uint16 with values between 0 and 255 or 0 and 65535, respectively. If your values are between 0 and 255 but class(subimages) returns double or single, do the following before using imshow():
subimages = uint8(subimages);
Try
imshow(background,[])
When using imshow, MATLAB needs to set a display range. For single or double grayscale images, the default display range is [0 1]. This means that any value larger than 1 will be represented as white. You can fix this by setting your own display range manually, say
imshow(background,[0 100])
or you can let MATLAB calculate a new display range by doing
imshow(background,[])
which is the same as
imshow(background,[min(background(:)) max(background(:))])
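A quick way to convince yourself (values are illustrative):

A = 100 * rand(64);  % doubles well outside [0 1]
imshow(A)            % nearly everything is >1, so the display is all white
imshow(A, [])        % rescaled to [min(A(:)) max(A(:))]: structure is visible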
You can rewrite your code as:
%# get filenames of all JPG images in some folder
folder = '~/V&R/1/';
filelist = dir( fullfile(folder,'*.jpg') );
filelist = strcat(folder, filesep, {filelist.name});

%# read files, and store as 'double' images in a 4D matrix
images = zeros(480,640,3, numel(filelist), 'double');
for i = 1:numel(filelist)
    images(:,:,:,i) = im2double( imread(filelist{i}) );
end

%# estimate background using median
subimages = images(:,:,:,1:10:end);
background = median(subimages, 4);
imshow(background)
