I have a matrix filled with 0's and 1's that represents an image. I want a GUI that lets me arbitrarily draw points or lines on the image, essentially giving me Microsoft Paint-like drawing capabilities.
Thanks for all your help.
As I commented, you can use ginput.
Here is a short program you can test.
fh = figure;
imageh = imshow(false(50));
% Create a button in the figure.
uicontrol('Parent',fh,'Style','pushbutton','String','paint','Callback',{@paintButtonCallback, imageh});
% button callback function
function paintButtonCallback(~,~,imageh)
[x,y] = ginput(1);
% round the values so they can be used for indexing.
x = round(x);
y = round(y);
% make sure the values do not go outside the image.
s = size(imageh.CData);
if x > s(2) || y > s(1) || x < 1 || y < 1
return
end
% make the selected pixel white.
imageh.CData(y,x) = true;
end
Update
I'm not sure if there is any existing toolbox that would allow you to edit images as conveniently as you can in MS Paint. However, it is possible to code it yourself.
To draw a line you can use ginput(2) to take two points and then fill in the pixels between them. Note that the findLine function below isn't perfect.
[x,y] = ginput(2);
% find all pixels on the line xy
ind = findLine(size(imageh.CData),x,y);
% make the selected pixel white.
imageh.CData(ind) = true;
function ind = findLine(s,x,y)
% Find all pixels that lie between points defined by [x(1),y(1)] and [x(2),y(2)].
supersampling = 1.2;
[x,y,~] = improfile(s,round(x),round(y),max([diff(x);diff(y)])*supersampling);
ind = sub2ind(s,round(x),round(y));
end
If you have the Image Processing Toolbox, you can use drawline, which gives a better drawing experience, and you can get the pixels on the line using the createMask function:
h = drawline;
ind = h.createMask;
drawfreehand may also be relevant:
h = drawfreehand;
x = h.Position(:,1);
y = h.Position(:,2);
You can delete the object drawn on the image with delete(h) if you no longer need it. See the MATLAB documentation for more functions of this kind.
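For example, here is a minimal sketch (assuming the Image Processing Toolbox, R2018b or newer) that uses drawcircle and createMask to burn a filled disk into the binary image:
% Minimal sketch; assumes Image Processing Toolbox (R2018b or newer).
fh = figure;
imageh = imshow(false(50));
h = drawcircle;              % drag on the image to define the circle
mask = createMask(h);        % logical mask of pixels inside the circle
delete(h);                   % remove the ROI graphics object
imageh.CData(mask) = true;   % paint the selected pixels white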
It is also tedious to click the paint button every time you want to paint a point. To avoid this, you can use the ButtonDownFcn of the image. The paint button toggles the ButtonDownFcn between a meaningful callback and an empty value, depending on the current state:
function paintButtonCallback(obj,~,imageh)
if isempty(obj.Tag)
imageh.ButtonDownFcn = @paintMode;
obj.Tag = 'on';
else
imageh.ButtonDownFcn = '';
obj.Tag = '';
end
And the meaningful callback paintMode:
function paintMode(~,~)
[x,y] = ginput(1);
% round the values so they can be used for indexing.
x = round(x);
y = round(y);
% make sure the values do not go outside the image.
s = size(imageh.CData);
if x > s(2) || y > s(1) || x < 1 || y < 1
return
end
% make the selected pixel white.
imageh.CData(y,x) = true;
end
The full demo code:
fh = figure;
imageh = imshow(false(20));
% Create buttons in the figure.
uicontrol('Parent',fh,'Style','pushbutton','String','paint','Callback',{@paintButtonCallback, imageh});
bh = uicontrol('Parent',fh,'Style','pushbutton','String','line','Callback',{@lineButtonCallback, imageh});
bh.Position(2) = 50;
bh2 = uicontrol('Parent',fh,'Style','pushbutton','String','line2','Callback',{@line2ButtonCallback, imageh});
bh2.Position(2) = 80;
bh3 = uicontrol('Parent',fh,'Style','pushbutton','String','free','Callback',{@freeButtonCallback, imageh});
bh3.Position(2) = 110;
% button callback function
function paintButtonCallback(obj,~,imageh)
if isempty(obj.Tag)
imageh.ButtonDownFcn = @paintMode;
obj.Tag = 'on';
else
imageh.ButtonDownFcn = '';
obj.Tag = '';
end
function paintMode(~,~)
[x,y] = ginput(1);
% round the values so they can be used for indexing.
x = round(x);
y = round(y);
% make sure the values do not go outside the image.
s = size(imageh.CData);
if x > s(2) || y > s(1) || x < 1 || y < 1
return
end
% make the selected pixel white.
imageh.CData(y,x) = true;
end
end
% button callback function
function lineButtonCallback(~,~,imageh)
% take two points at a time
[x,y] = ginput(2);
% make sure the values do not go outside the image.
s = size(imageh.CData);
if any(x > s(2)+0.5 | y > s(1)+0.5 | x < 0.5 | y < 0.5) || (diff(x) == 0 && diff(y) == 0)
return
end
% find all pixels on the line xy
ind = findLine(size(imageh.CData),x,y);
% make the selected pixel white.
imageh.CData(ind) = true;
end
function ind = findLine(s,x,y)
% Find all pixels that lie between points defined by [x(1),y(1)] and [x(2),y(2)].
supersampling = 1.2;
[x,y,~] = improfile(s,round(x),round(y),max([diff(x);diff(y)])*supersampling);
ind = sub2ind(s,round(x),round(y));
end
% button callback function
function line2ButtonCallback(~,~,imageh)
% take two points at a time
h = drawline;
ind = h.createMask;
delete(h);
% make the selected pixel white.
imageh.CData(ind) = true;
end
% button callback function
function freeButtonCallback(~,~,imageh)
% take two points at a time
h = drawfreehand;
x = h.Position(:,1);
y = h.Position(:,2);
delete(h);
ind = sub2ind(size(imageh.CData),round(y),round(x));
% make the selected pixel white.
imageh.CData(ind) = true;
end
I have 2 greyscale images that I am trying to align using a scalar scaling s, a [2,2] rotation matrix R and a [2,1] translation vector t. I can calculate image1's transformed coordinates as
y = s*R*x + t;
Below the resulting images are shown.
The first image is image1 before transformation.
The second image is image1 (red), after attempted interpolation with interp2, shown on top of image2 (green).
The third image is the result of manually inserting the pixel values from image1 into an empty array (the same size as image2) using the transformed coordinates.
From this we can see that the coordinate transformation must have been successful, as the images are aligned, although not perfectly (which is to be expected, since only 2 coordinate pairs were used in calculating s, R and t).
Why is interp2 not producing a result more similar to the one I get when I manually insert pixel values?
Below the code for doing this is included:
Interpolation code
function [transformed_image] = interpolate_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
% doesn't help if i use get_grid that the other function is using here
[~, grid_xr, grid_yr] = get_ipgrid(im_r);
[x_t, grid_xt, grid_yt] = get_ipgrid(im_t);
y = s*R*x_t + t;
yx = reshape(y(1,:), m,n);
yy = reshape(y(2,:), m,n);
transformed_image = interp2(grid_xr, grid_yr, im_r, yx, yy, 'nearest');
end
function [x, grid_x, grid_y] = get_ipgrid(image)
[m,n] = size(image);
[grid_x,grid_y] = meshgrid(1:n,1:m);
x = [reshape(grid_x, 1, []); reshape(grid_y, 1, [])]; % X is [2xM*N] coordinate pairs
end
The manual code
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
transformed_image(round(yx(i,j)),round(yy(i,j))) = im_r(i,j);
end
end
end
end
function [x, grid_x, grid_y] = get_grid(image)
[m,n] = size(image);
[grid_y,grid_x] = meshgrid(1:n,1:m);
x = [grid_x(:) grid_y(:)]'; % X is [2xM*N] coordinate pairs
end
Can anyone see what I'm doing wrong with interp2? I feel like I have tried everything.
Turns out I got interpolation all wrong.
In my question I calculate the coordinates of im1 in im2.
However, the way interpolation works is that I need to calculate the coordinates of im2 in im1, so that I can map the image as shown below.
This means that I also calculated the wrong s, R and t, since they transform im1 -> im2, whereas I needed im2 -> im1 (this is also called the inverse transform). Below is the manual code, which is basically the same as interp2 with nearest-neighbour interpolation:
function [transformed_image] = transform_image(im_r,im_t,s,R,t)
[m,n] = size(im_t);
[x_t, grid_xt, grid_yt] = get_grid(im_t);
y = s*R*x_t + t;
ymat = reshape(y',m,n,2);
yx = ymat(:,:,1);
yy = ymat(:,:,2);
transformed_image = zeros(m,n);
for i = 1:m
for j = 1:n
% make sure coordinates are inside
if (yx(i,j) < m & yy(i,j) < n & yx(i,j) > 0.5 & yy(i,j) > 0.5)
transformed_image(i,j) = im_r(round(yx(i,j)),round(yy(i,j)));
end
end
end
end
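As a side note, if the forward parameters are already known, here is a minimal sketch of deriving the inverse transform from them (assuming R is a pure rotation, so its inverse is its transpose; the variable names are only illustrative):
% Forward transform: y = s*R*x + t, with R a 2x2 rotation matrix.
% Solving for x gives the inverse transform x = s_inv*R_inv*y + t_inv:
s_inv = 1/s;               % inverse scale
R_inv = R';                % inverse of a rotation is its transpose
t_inv = -s_inv*(R_inv*t);  % inverse translation
% s_inv, R_inv and t_inv map im2 coordinates back into im1, which is
% what the interpolation needs when resampling im1 onto im2's grid.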
I'm trying to write a MATLAB script that does the following:
Given: pixel coordinates (x,y) for a .jpg image
Goal: Check, within a 5-pixel radius of the given coordinates, if there is a pixel of a certain value.
For example, let's say I'm given the coordinates (100,100), then I want to check the neighborhood of (100,100) within my image for any pixels that are black (0,0,0). So perhaps, pixel (103, 100) and (104,100) might have the value (0,0,0).
Current code:
x_coord = uint32(coord(:,1));
y_coord = uint32(coord(:,2));
count = 0;
for i = 1:length(x_coord)
%(img(x,y) returns pixel value at that (x,y)
%Note 0 = black. Indicating that, at that position, the image is just
% black
if img(x_coord(i),y_coord(i)) == 0
count = count + 1;
end
end
It currently only checks the exact location, not a local neighborhood. How could I extend this?
EDIT: Also note that as long as there is at least one pixel in the neighborhood with the value, I increment count. I'm not trying to count how many pixels in the neighborhood have that value, just to find evidence of at least one pixel that has it.
EDIT:
Even though I am unable to identify an error with the code, I am not able to get the exact results I want. Here is the code I am using.
val = 0; %pixel value to check
N = 50; % neighbourhood radius
%2D grid of coordinates surrounding center coordinate
[R, C] = ndgrid(1 : size(img, 1), 1 : size(img, 2));
for kk = 1 : size(coord, 1)
r = coord(kk, 1); c = coord(kk, 2); % Get pixel locations
% mask of valid locations within the neighbourhood (avoid boundary problems)
mask = (R - r).^2 + (C - c).^2 <= N*N;
pix = img(mask); % Get the valid pixels
valid = any(pix(:) ~= val);
% Add either 0 or 1 depending if we have found any matching pixels
if(valid == 1)
img = insertMarker(img, [r c], 'x', 'color', 'red', 'size', 10);
imwrite(img, images(i).name,'tiff');
end
count = count + valid;
end
An easier way to do this is to use indexing to grab a neighbourhood, then check whether any of the pixels in that neighbourhood have the value you're looking for by calling any on a flattened version of the neighbourhood. The trick to grabbing the right neighbourhood is to first generate a 2D grid of coordinates spanning the entire image, then use the equation of a circle centred on each coordinate of interest and determine the locations that satisfy the following equation:
(x - a)^2 + (y - b)^2 <= N^2
N is the radius of the observation window, (a, b) is a coordinate of interest while (x, y) is a coordinate in the image. Use meshgrid to generate the coordinates.
You would use the above equation to create a logical mask, index into your image to pull out the locations that fall inside the mask, and check how many pixels match the one you want. An added benefit of this approach is that you are never subject to out-of-bounds errors: because you pre-generate the list of all valid coordinates in your image, the mask confines you to the image boundaries, so you never have to check for out-of-bounds conditions, even when you specify search coordinates that lie outside the image.
Specifically, assuming your image is stored in img, you would do:
count = 0; % Remembers total count of pixels matching a value
val = 0; % Value to match
N = 50; % Radius of neighbourhood
% Generate 2D grid of coordinates
[x, y] = meshgrid(1 : size(img, 2), 1 : size(img, 1));
% For each coordinate to check...
for kk = 1 : size(coord, 1)
a = coord(kk, 1); b = coord(kk, 2); % Get the pixel locations
mask = (x - a).^2 + (y - b).^2 <= N*N; % Get a mask of valid locations
% within the neighbourhood
pix = img(mask); % Get the valid pixels
count = count + any(pix(:) == val); % Add either 0 or 1 depending if
% we have found any matching pixels
end
The proposed solution:
fc = repmat(-5:5,11,1);
I = (fc.^2+fc'.^2)<=25;
fc_x = fc(I);
fc_y = fc'; fc_y = fc_y(I);
for i = 1:length(x_coord)
x_toCheck = fc_x + x_coord(i);
y_toCheck = fc_y + y_coord(i);
I = x_toCheck>0 & x_toCheck<=yourImageWidth;
I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
x_toCheck = x_toCheck(logical(I));
y_toCheck = y_toCheck(logical(I));
% linear indexing picks each (x,y) pair individually
count = count + sum(img(sub2ind(size(img), x_toCheck, y_toCheck)) == 0);
end
If your img function can only check one pixel at a time, just add a for loop:
for i = 1:length(x_coord)
x_toCheck = fc_x + x_coord(i);
y_toCheck = fc_y + y_coord(i);
I = x_toCheck>0 & x_toCheck<=yourImageWidth;
I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
x_toCheck = x_toCheck(logical(I));
y_toCheck = y_toCheck(logical(I));
for j = 1:length(x_toCheck)
count = count + (img(x_toCheck(j),y_toCheck(j)) == 0);
end
end
Step-by-step:
You first need to get all the coordinates within a 5-pixel radius of the given coordinate.
We start by building a square of 11 pixels in length/width.
fc = repmat(-5:5,11,1);
fc_x = fc;
fc_y = fc';
plot(fc_x,fc_y,'.');
We now need to build a filter to get rid of those points outside the 5-pixel radius.
I = (fc.^2+fc'.^2)<=25;
Apply the filter, so we can get a circle of 5-pixel radius.
fc_x = fc_x(I);
fc_y = fc_y(I);
Next translate the centre of the circle to the given coordinate:
x_toCheck = fc_x + x_coord(i);
y_toCheck = fc_y + y_coord(i);
You need to check whether part of the circle is outside the range of your image:
I = x_toCheck>0 & x_toCheck<=yourImageWidth;
I = I.*(y_toCheck>0 & y_toCheck<=yourImageHeight);
x_toCheck = x_toCheck(logical(I));
y_toCheck = y_toCheck(logical(I));
Finally count the pixels (using sub2ind so that each (x,y) pair indexes a single pixel rather than a rectangular block):
count = sum(img(sub2ind(size(img), x_toCheck, y_toCheck)) == 0);
I am currently doing a project on the morphology of filamentous fungi during batch fermentation (yes, I am not a software engineer: biotech), where I am taking pictures of the morphology in a petri dish. I am developing a "fast" method to describe the pellets (small aggregates of fungi) that occur during the fermentation. To do this I am writing code in MATLAB.
Depending on the color of the pellets (light or dark), the pictures are taken on different backgrounds, black or white. I invert the picture if the mean gray value is below 70 to distinguish between the backgrounds.
Pictures:
White background
Dark background
I have several problems:
Detecting the edge of the petri dish so it won't be regarded as an object (currently done with the edge(...,'log') function). The edge is detected, but I miss some parts, probably because of the lower light at the top.
Proper thresholding inside the dish.
Detection of pellets: right now it is done by a combination of running through each color channel, but it might be better done with some blob detection?
Does anybody have any input?
My code is as following:
close all
clear all
clc
%Empty arrays to hold data
metricD=[];
areaD=[];
perimeterD=[];
% Specify the folder where the files live.
myFolder = pwd;
% Check to make sure that folder actually exists. Warn user if it doesn't.
if ~isdir(myFolder)
errorMessage = sprintf('Error: The following folder does not exist:\n%s', myFolder);
uiwait(warndlg(errorMessage));
return;
end
% Get a list of all files in the folder with the desired file name pattern.
filePattern = fullfile(myFolder, '*.jpg'); % Change to whatever pattern you need.
theFiles = dir(filePattern);
% Show debugging plots
plotFig = 0;
% parameters that can be tuned
% minimum number of color channels a spore must be visible in
% e.g. set to 1 for image "P. f Def C.tif"
labelcutOff = 1;
% remove areas larger than
removeLargerthan = 500000;
for k = 1 : length(theFiles)
baseFileName = theFiles(k).name;
fullFileName = fullfile(myFolder, baseFileName);
%% reading as an image array with im
I = imread(fullFileName);
% convert to grayscale
Ig = rgb2gray(I);
if plotFig
figure;imagesc(I)
figure;imagesc(Ig)
end
mm=mean(mean(Ig));
if mm < 70
I=imcomplement(I);
Ig = imcomplement(Ig);
end
% BLOB DETECTION
% h = fspecial('log', [15 15], 2);
% imLOG = imfilter(Ig, h);
% figure;imagesc(imLOG)
%% find petridish by edges and binary operations
% HACK - NOT HOW IT SHOULD BE DONE
Ig = wiener2(Ig,[5 5]);
imEdge = edge(Ig,'log');
circle = bwareaopen(imEdge,50);
circle = imclose(circle,strel('disk',30));
circle = bwareaopen(circle,8000);
% circle = imfill(circle,'holes');
circle = bwconvhull(circle);
circle = imerode(circle,strel('disk',150));
if plotFig
figure;imagesc(circle)
end
%% Get thresholds inside dish using otsu on each channel
imR = double(I(:,:,1)) .* circle;
imG = double(I(:,:,2)) .* circle;
imB = double(I(:,:,3)) .* circle;
thresR = graythresh(uint8(imR(circle))) *max(imR(circle));
thresG = graythresh(uint8(imG(circle))) *max(imG(circle));
thresB = graythresh(uint8(imB(circle))) *max(imB(circle));
if plotFig
figure;imagesc(imR)
figure;imagesc(imG)
figure;imagesc(imB)
end
%% classify inside dish
% check if it should be smaller or larger than
if sum(imR(circle) < thresR) > sum(imR(circle) > thresR)
labelR = imR > thresR;
else
labelR = imR < thresR;
end
if sum(imG(circle) < thresG) > sum(imG(circle) > thresG)
labelG = imG > thresG;
else
labelG = imG < thresG;
end
if sum(imB(circle) < thresB) > sum(imB(circle) > thresB)
labelB = imB > thresB;
else
labelB = imB < thresB;
end
if plotFig
figure;imagesc(labelR)
figure;imagesc(labelG)
figure;imagesc(labelB)
end
labels = (labelR + labelG + labelB) .* circle;
labels(labels < labelcutOff) = 0;
labels = imfill(labels,'holes');
labels = bwareaopen(labels,30);
if plotFig
figure;imagesc(labels)
end
%% clean up labels
labelBig = bwareaopen(labels,removeLargerthan);
labels = labels - labelBig;
if plotFig
figure;imagesc(labels)
end
BN = labels;
%% old script
stats = regionprops(BN,'Basic');
obj2 = numel(stats);
[B,L] = bwboundaries(BN,'holes');
figure
% imshow(label2rgb(L, @jet, [.5 .5 .5]))
imshow(I)
hold on
title(baseFileName)
for j = 1:length(B)
boundary = B{j};
plot(boundary(:,2), boundary(:,1),'w','LineWidth',2)
end
%region stats
stats = regionprops(L,'Area','Centroid');
%Threshold for printing in end
threshold = 0.2;
%Conversion factor pixel to cm
conversionF=9/2125;
% loop over the boundaries
for j = 1:length(B)
% obtain (X,Y) boundary coordinates corresponding to label 'j'
boundary = B{j};
% compute a simple estimate of the object's perimeter
delta_sq = diff(boundary).^2;
perimeter = sum(sqrt(sum(delta_sq,2)));
perimeterD(j,k)=perimeter*conversionF;
% obtain the area calculation corresponding to label 'k'
area = stats(j).Area;
areaD(j,k)=area*conversionF^2;
% compute the roundness metric
metric = 4*pi*area/perimeter^2;
metricD(j,k)=metric;
% display the results
metric_string = sprintf('%d. %2.2f', j,metric);
text(boundary(1,2)-50,boundary(1,1)+23,metric_string,'Color','k',...
'FontSize',14,'FontWeight','bold');
end
drawnow; % Force display to update immediately.
end
%Calculating stats
areaM=mean(areaD);
pM=mean(perimeterD);
metricD(metricD==Inf)=0;
mM=mean(metricD);
Hint:
A morphological top-hat filter followed by binarization (with a constant threshold?) can be a good start, and filtering on blob size will do a reasonable cleanup.
For the dish edge, try a circular Hough transform.
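A minimal sketch of that idea, assuming a grayscale working image with bright pellets on a darker background; the structuring-element size, the threshold and the radius range are placeholder values that would need tuning, and the file name is made up:
% Rough sketch; all numeric parameters are placeholders to tune.
I = rgb2gray(imread('dish.jpg'));   % hypothetical file name
% Top-hat filtering suppresses the uneven background illumination.
Itop = imtophat(I, strel('disk', 40));
% Binarize with a constant (normalized) threshold, then drop tiny specks.
bw = imbinarize(Itop, 50/255);
bw = bwareaopen(bw, 30);
% Find the petri dish rim with a circular Hough transform so it can be
% masked out before measuring the pellets.
[centers, radii] = imfindcircles(I, [800 1200], 'ObjectPolarity', 'bright');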
I have a cube map texture which defines a surrounding; however, I need to pass it to a program that only works with latitude/longitude maps. I am really at a loss on how to do the conversion. Any help here?
In other words, I need to come from here:
To this (I think that image has an additional -90° rotation about the x axis):
Update: I got the official names of the projections. By the way, I found the opposite projection here.
A general procedure for projecting raster images like this is:
for each pixel of the destination image:
calculate the corresponding unit vector in 3-dimensional space
calculate the x,y coordinate for that vector in the source image
sample the source image at that coordinate and assign the value to the destination pixel
The last step is simply interpolation. We will focus on the other two steps.
The unit vector for a given latitude and longitude is (+z towards the north pole, +x towards the prime meridian):
x = cos(lat)*cos(lon)
y = cos(lat)*sin(lon)
z = sin(lat)
Assume the cube is +/- 1 unit around the origin (i.e. 2x2x2 overall size).
Once we have the unit vector, we can find the face of the cube it's on by looking at the element with the largest absolute value. For example, if our unit vector was <0.2099, -0.7289, 0.6516>, then the y element has the largest absolute value. It's negative, so the point will be found on the -y face of the cube. Normalize the other two coordinates by dividing by the y magnitude to get the location within that face. So, the point will be at x=0.2879, z=0.8939 on the -y face.
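As an illustration of those two steps, here is a small sketch (in MATLAB, since the answers below use it) that maps a single latitude/longitude pair to a cube face and the in-face coordinates; the variable names are only for this example:
% Sketch: map one (lat, lon) pair, in radians, to a cube face and to
% the coordinates within that face.
lat = 0.71; lon = -1.29;                 % example direction
% Unit vector (+z towards the north pole, +x towards the prime meridian).
v = [cos(lat)*cos(lon); cos(lat)*sin(lon); sin(lat)];
% The face is given by the component with the largest absolute value.
[~, axisIdx] = max(abs(v));              % 1 = x, 2 = y, 3 = z
faceSign = sign(v(axisIdx));             % +1 or -1 face along that axis
% Divide the remaining components by that magnitude to get the position
% within the face, in the range [-1, 1].
inFace = v(setdiff(1:3, axisIdx)) / abs(v(axisIdx));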
I'd like to share my MATLAB implementation of this conversion. I also borrowed from the OpenGL 4.1 specification, Chapter 3.8.10 (found here), as well as Paul Bourke's website (found here). Make sure you look under the subheading: Converting to and from 6 cubic environment maps and a spherical map.
I also used Sambatyon's post above as inspiration. It started off as a port from Python over to MATLAB, but I made the code completely vectorized (i.e. no for loops). I also take the cubic image and split it up into 6 separate images, as the application I'm building has the cubic image in this format. There is no error checking in the code, and it assumes that all of the cubic images are of the same size (n x n). It also assumes that the images are in RGB format. If you'd like to do this for a monochromatic image, simply comment out those lines of code that require access to more than one channel. Here we go!
function [out] = cubic2equi(top, bottom, left, right, front, back)
% Height and width of equirectangular image
height = size(top, 1);
width = 2*height;
% Flags to denote what side of the cube we are facing
% Z-axis is coming out towards you
% X-axis is going out to the right
% Y-axis is going upwards
% Assuming that the front of the cube is towards the
% negative X-axis
FACE_Z_POS = 1; % Left
FACE_Z_NEG = 2; % Right
FACE_Y_POS = 3; % Top
FACE_Y_NEG = 4; % Bottom
FACE_X_NEG = 5; % Front
FACE_X_POS = 6; % Back
% Place in a cell array
stackedImages{FACE_Z_POS} = left;
stackedImages{FACE_Z_NEG} = right;
stackedImages{FACE_Y_POS} = top;
stackedImages{FACE_Y_NEG} = bottom;
stackedImages{FACE_X_NEG} = front;
stackedImages{FACE_X_POS} = back;
% Place in 3 3D matrices - Each matrix corresponds to a colour channel
imagesRed = uint8(zeros(height, height, 6));
imagesGreen = uint8(zeros(height, height, 6));
imagesBlue = uint8(zeros(height, height, 6));
% Place each channel into their corresponding matrices
for i = 1 : 6
im = stackedImages{i};
imagesRed(:,:,i) = im(:,:,1);
imagesGreen(:,:,i) = im(:,:,2);
imagesBlue(:,:,i) = im(:,:,3);
end
% For each co-ordinate in the normalized image...
[X, Y] = meshgrid(1:width, 1:height);
% Obtain the spherical co-ordinates
Y = 2*Y/height - 1;
X = 2*X/width - 1;
sphereTheta = X*pi;
spherePhi = (pi/2)*Y;
texX = cos(spherePhi).*cos(sphereTheta);
texY = sin(spherePhi);
texZ = cos(spherePhi).*sin(sphereTheta);
% Figure out which face we are facing for each co-ordinate
% First figure out the greatest absolute magnitude for each point
comp = cat(3, texX, texY, texZ);
[~,ind] = max(abs(comp), [], 3);
maxVal = zeros(size(ind));
% Copy those values - signs and all
maxVal(ind == 1) = texX(ind == 1);
maxVal(ind == 2) = texY(ind == 2);
maxVal(ind == 3) = texZ(ind == 3);
% Set each location in our equirectangular image, figure out which
% side we are facing
getFace = -1*ones(size(maxVal));
% Back
ind = abs(maxVal - texX) < 0.00001 & texX < 0;
getFace(ind) = FACE_X_POS;
% Front
ind = abs(maxVal - texX) < 0.00001 & texX >= 0;
getFace(ind) = FACE_X_NEG;
% Top
ind = abs(maxVal - texY) < 0.00001 & texY < 0;
getFace(ind) = FACE_Y_POS;
% Bottom
ind = abs(maxVal - texY) < 0.00001 & texY >= 0;
getFace(ind) = FACE_Y_NEG;
% Left
ind = abs(maxVal - texZ) < 0.00001 & texZ < 0;
getFace(ind) = FACE_Z_POS;
% Right
ind = abs(maxVal - texZ) < 0.00001 & texZ >= 0;
getFace(ind) = FACE_Z_NEG;
% Determine the co-ordinates along which image to sample
% based on which side we are facing
rawX = -1*ones(size(maxVal));
rawY = rawX;
rawZ = rawX;
% Back
ind = getFace == FACE_X_POS;
rawX(ind) = -texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);
% Front
ind = getFace == FACE_X_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texX(ind);
% Top
ind = getFace == FACE_Y_POS;
rawX(ind) = texZ(ind);
rawY(ind) = texX(ind);
rawZ(ind) = texY(ind);
% Bottom
ind = getFace == FACE_Y_NEG;
rawX(ind) = texZ(ind);
rawY(ind) = -texX(ind);
rawZ(ind) = texY(ind);
% Left
ind = getFace == FACE_Z_POS;
rawX(ind) = texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);
% Right
ind = getFace == FACE_Z_NEG;
rawX(ind) = -texX(ind);
rawY(ind) = texY(ind);
rawZ(ind) = texZ(ind);
% Concatenate all for later
rawCoords = cat(3, rawX, rawY, rawZ);
% Finally determine co-ordinates (normalized)
cubeCoordsX = ((rawCoords(:,:,1) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoordsY = ((rawCoords(:,:,2) ./ abs(rawCoords(:,:,3))) + 1) / 2;
cubeCoords = cat(3, cubeCoordsX, cubeCoordsY);
% Now obtain where we need to sample the image
normalizedX = round(cubeCoords(:,:,1) * height);
normalizedY = round(cubeCoords(:,:,2) * height);
% Just in case.... cap between [1, height] to ensure
% no out of bounds behaviour
normalizedX(normalizedX < 1) = 1;
normalizedX(normalizedX > height) = height;
normalizedY(normalizedY < 1) = 1;
normalizedY(normalizedY > height) = height;
% Place into a stacked matrix
normalizedCoords = cat(3, normalizedX, normalizedY);
% Output image allocation
out = uint8(zeros([size(maxVal) 3]));
% Obtain column-major indices on where to sample from the
% input images
% getFace will contain which image we need to sample from
% based on the co-ordinates within the equirectangular image
ind = sub2ind([height height 6], normalizedCoords(:,:,2), ...
normalizedCoords(:,:,1), getFace);
% Do this for each channel
out(:,:,1) = imagesRed(ind);
out(:,:,2) = imagesGreen(ind);
out(:,:,3) = imagesBlue(ind);
I've also made the code publicly available through github and you can go here for it. Included is the main conversion script, a test script to show its use and a sample set of 6 cubic images pulled from Paul Bourke's website. I hope this is useful!
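As a usage sketch (the face image file names here are made up; substitute your own six cube-face images):
% Usage sketch; the face image file names are placeholders.
top    = imread('top.png');    bottom = imread('bottom.png');
left   = imread('left.png');   right  = imread('right.png');
front  = imread('front.png');  back   = imread('back.png');
out = cubic2equi(top, bottom, left, right, front, back);
imshow(out);
imwrite(out, 'equirectangular.png');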
Project changed name to libcube2cyl. Same goodness, better working examples both in C and C++.
Now also available in C.
I happened to solve the exact same problem as you described.
I wrote this tiny C++ lib called "Cube2Cyl", you can find the detailed explanation of algorithm here: Cube2Cyl
Please find the source code from github: Cube2Cyl
It is released under MIT licence, use it for free!
So, I found a solution by mixing this article on spherical coordinates from Wikipedia with Section 3.8.10 of the OpenGL 4.1 specification (plus some hacks to make it work). Assuming the cubic image has height h_o and width w_o, the equirectangular image will have height h = w_o / 3 and width w = 2 * h. Now, for each pixel (x, y) with 0 <= x <= w and 0 <= y <= h in the equirectangular projection, we want to find the corresponding pixel in the cubic projection. I solve it using the following Python code (I hope I didn't make mistakes while translating it from C):
import math

# from wikipedia
def spherical_coordinates(x, y):
    return (math.pi*((y/h) - 0.5), 2*math.pi*x/(2*h), 1.0)

# from wikipedia
def texture_coordinates(theta, phi, rho):
    return (rho * math.sin(theta) * math.cos(phi),
            rho * math.sin(theta) * math.sin(phi),
            rho * math.cos(theta))

FACE_X_POS = 0
FACE_X_NEG = 1
FACE_Y_POS = 2
FACE_Y_NEG = 3
FACE_Z_POS = 4
FACE_Z_NEG = 5

# from opengl specification
def get_face(x, y, z):
    # compare against the largest absolute component
    largest_magnitude = max(abs(x), abs(y), abs(z))
    if largest_magnitude - abs(x) < 0.00001:
        return FACE_X_POS if x < 0 else FACE_X_NEG
    elif largest_magnitude - abs(y) < 0.00001:
        return FACE_Y_POS if y < 0 else FACE_Y_NEG
    elif largest_magnitude - abs(z) < 0.00001:
        return FACE_Z_POS if z < 0 else FACE_Z_NEG

# from opengl specification
def raw_face_coordinates(face, x, y, z):
    if face == FACE_X_POS:
        return (-z, -y, x)
    elif face == FACE_X_NEG:
        return (-z, y, -x)
    elif face == FACE_Y_POS:
        return (-x, -z, -y)
    elif face == FACE_Y_NEG:
        return (-x, z, -y)
    elif face == FACE_Z_POS:
        return (-x, y, -z)
    elif face == FACE_Z_NEG:
        return (-x, -y, z)

# computes the topmost leftmost coordinate of the face in the cube map
def face_origin_coordinates(face):
    if face == FACE_X_POS:
        return (2*h, h)
    elif face == FACE_X_NEG:
        return (0, 2*h)
    elif face == FACE_Y_POS:
        return (h, h)
    elif face == FACE_Y_NEG:
        return (h, 3*h)
    elif face == FACE_Z_POS:
        return (h, 0)
    elif face == FACE_Z_NEG:
        return (h, 2*h)

# from opengl specification
def raw_coordinates(xc, yc, ma):
    return ((xc/abs(ma) + 1) / 2, (yc/abs(ma) + 1) / 2)

def normalized_coordinates(face, x, y):
    face_coords = face_origin_coordinates(face)
    normalized_x = int(math.floor(x * h + 0.5))
    normalized_y = int(math.floor(y * h + 0.5))
    # eliminates black pixels
    if normalized_x == h:
        normalized_x -= 1
    if normalized_y == h:
        normalized_y -= 1
    return (face_coords[0] + normalized_x, face_coords[1] + normalized_y)

def find_corresponding_pixel(x, y):
    spherical = spherical_coordinates(x, y)
    texture_coords = texture_coordinates(spherical[0], spherical[1], spherical[2])
    face = get_face(texture_coords[0], texture_coords[1], texture_coords[2])
    raw_face_coords = raw_face_coordinates(face, texture_coords[0], texture_coords[1], texture_coords[2])
    cube_coords = raw_coordinates(raw_face_coords[0], raw_face_coords[1], raw_face_coords[2])
    # this fixes some faces being rotated 90°
    if face in [FACE_X_NEG, FACE_X_POS]:
        cube_coords = (cube_coords[1], cube_coords[0])
    return normalized_coordinates(face, cube_coords[0], cube_coords[1])
At the end, we just call find_corresponding_pixel for each pixel in the equirectangular projection.
I think from your algorithm in Python you might have inverted x and y in the calculation of theta and phi.
def spherical_coordinates(x, y):
return (math.pi*((y/h) - 0.5), 2*math.pi*x/(2*h), 1.0)
from Paul Bourke's website here
theta = x * pi
phi = y * pi / 2
and in your code you are using y in the theta calculation and x in the phi calculation.
Correct me if I am wrong.
Edited Post
These are the new functions that I have created using your template to help me out.
What is happening is that, although the code runs fine, the 'check for cursor' boxes are not on top of the axes; in fact they are very far off. I used disp(axPos) once with set(hAx, 'Units','pixels') and once with that line commented out. It displayed:
1.
169.0000 71.0000 126.0000 51.0000
94.0000 122.0000 126.0000 51.0000
19.0000 71.0000 126.0000 51.0000
94.0000 20.0000 126.0000 51.0000
These are the GUIDE coordinates of the axes, but not the coordinates that are being displayed for 'CurrentPoint'.
2.
33.6000 5.3846 25.2000 3.9231
18.6000 9.3077 25.2000 3.9231
3.6000 5.3846 25.2000 3.9231
18.6000 1.4615 25.2000 3.9231
I do not know where these come from; they are closer to where they need to be, but they are ~60 pixels to the left.
Here is the code:
function HVACSM_OpeningFcn(hObject, eventdata, handles, varargin)
% This function has no output args, see OutputFcn.
% hObject handle to figure
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
% varargin command line arguments to HVACSM (see VARARGIN)
% Choose default command line output for HVACSM
handles.output = hObject;
NUM = 4;
imgOff1 = imread('cond.png');
imgOn1 = imread('condH.png');
imgOff2 = imread('comp.png');
imgOn2 = imread('compH.png');
imgOff3 = imread('evap.png');
imgOn3 = imread('evapH.png');
imgOff4 = imread('exp.png');
imgOn4 = imread('expH.png');
imgOff = cell(1,NUM);
imgOff{1} = imgOff1;
imgOff{2} = imgOff2;
imgOff{3} = imgOff3;
imgOff{4} = imgOff4;
imgOn = cell(1,NUM);
imgOn{1} = imgOn1;
imgOn{2} = imgOn2;
imgOn{3} = imgOn3;
imgOn{4} = imgOn4;
%# setup axes
hAx = zeros(1,NUM);
hImg = zeros(1,NUM);
hAx = [handles.axes1 handles.axes2 handles.axes3 handles.axes4];
hImg(1) = imagesc(imgOff{1}, 'Parent',hAx(1));
hImg(2) = imagesc(imgOff{2}, 'Parent',hAx(2));
hImg(3) = imagesc(imgOff{3}, 'Parent',hAx(3));
hImg(4) = imagesc(imgOff{4}, 'Parent',hAx(4));
set(hAx, 'XTick',[], 'YTick',[],'Box', 'on')
%# get corner-points of each axis
set(hAx, 'Units','pixels')
axPos = cell2mat( get(hAx,'Position') );
disp(axPos)
p = zeros(5,2,NUM);
for k=1:NUM
p(:,:,k) = bsxfun(@plus, axPos(k,1:2), ...
[0 0; axPos(k,3) 0; axPos(k,3:4); 0 axPos(k,4); 0 0]);
end
handles.p = p;
handles.hAx = hAx;
handles.hImg = hImg;
handles.imgOff = imgOff;
handles.imgOn = imgOn;
% Update handles structure
guidata(hObject, handles);
function figure1_WindowButtonMotionFcn(hObject, eventdata, handles)
% hObject handle to figure1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
pos = get(hObject,'CurrentPoint');%CurrentPoint
posx = pos(1); posy = pos(2);
%Display to check to see if the position is working
posx = num2str(posx);
posy = num2str(posy);
set(handles.Xpos,'String',posx);
set(handles.Ypos,'String',posy);
p = handles.p;
hImg = handles.hImg;
hAx = handles.hAx;
imgOff = handles.imgOff;
imgOn = handles.imgOn;
%# for each axis, determine if we are inside it
for i=1:numel(hImg)
if inpolygon(pos(1),pos(2), p(:,1,i),p(:,2,i))
set(hImg(i), 'CData',imgOn{i})
set(hAx(i), 'LineWidth',3, 'XColor','r', 'YColor','r')
else
set(hImg(i), 'CData',imgOff{i})
set(hAx(i), 'LineWidth',1, 'XColor','k', 'YColor','k')
end
end
These are the hitboxes with the original code set(hAx, 'Units','pixels').
Note: the others are way off the screen to the top right, or would be.
These are the hitboxes with the altered code set(hAx, 'Units','characters').
Note: this is exactly the same thing that happens when that line is commented out.
Tested Aug 2
Addendum Original Post
I am getting this error after running my GUI: [Fatal Error] :-1:-1: Premature end of file.
It happens during this block of code:
function figure1_WindowButtonMotionFcn(hObject, eventdata, handles)
% hObject handle to figure1 (see GCBO)
% eventdata reserved - to be defined in a future version of MATLAB
% handles structure with handles and user data (see GUIDATA)
pos = get(hObject,'CurrentPoint');
%CurrentPoint
posx = pos(1); posy = pos(2);
%Display to check to see if the position is working
posx = num2str(posx); posy = num2str(posy);
set(handles.Xpos,'String',posx); set(handles.Ypos,'String',posy);
%If the mouse is over a box, then update the image
if ((115 < pos(1)) && (pos(1) < 125) && (7 < pos(2)) && (pos(2) < 11))
axes(handles.axes1);
imshow('condH.png')
else
axes(handles.axes1);
imshow('cond.png')
end
if ((90 < pos(1)) && (pos(1) < 100) && (7 < pos(2)) && (pos(2) < 11))
axes(handles.axes2);
imshow('compH.png')
else
axes(handles.axes2);
imshow('comp.png')
end
if ((80 < pos(1)) && (pos(1) < 90) && (7 < pos(2)) && (pos(2) < 11))
axes(handles.axes3);
imshow('evapH.png')
else
axes(handles.axes3);
imshow('evap.png')
end
if ((90 < pos(1)) && (pos(1) < 100) && (2 < pos(2)) && (pos(2) < 5))
axes(handles.axes4);
imshow('expH.png')
else
axes(handles.axes4);
imshow('exp.png')
end
Normally the GUI runs fine until I trigger one of the if statements by mousing over the predetermined box. Then the GUI stops responding and will not open again until MATLAB is restarted.
It is difficult to tell why MATLAB is crashing without seeing your entire code. For this reason, I wrote the short example below. It illustrates how I would write a GUI that simulates the rollover effect using the WindowButtonMotionFcn callback:
function testRolloverGUI()
%# prepare rollover image
imgOff = imread('coins.png');
imgOn = imcomplement(imgOff);
%# setup figure
hFig = figure('Resize','off', 'MenuBar','none', 'Color','w');
set(hFig, 'WindowButtonMotionFcn',@figWindowButtonMotionFcn);
hTxt = uicontrol('Style','text', 'String','(0,0)');
%# setup axes
NUM = 4;
hAx = zeros(1,NUM);
hImg = zeros(1,NUM);
for k=1:NUM
hAx(k) = subplot(2,2,k);
hImg(k) = imagesc(imgOff, 'Parent',hAx(k));
end
colormap(gray)
set(hAx, 'XTick',[], 'YTick',[], 'Box','on')
%# get corner-points of each axis
set(hAx, 'Units','pixels')
axPos = cell2mat( get(hAx,'Position') );
p = zeros(5,2,NUM);
for k=1:NUM
p(:,:,k) = bsxfun(@plus, axPos(k,1:2), ...
[0 0; axPos(k,3) 0; axPos(k,3:4); 0 axPos(k,4); 0 0]);
end
%# callback function
function figWindowButtonMotionFcn(hObj,ev)
%# get mouse current position
pos = get(hObj, 'CurrentPoint');
set(hTxt, 'String',sprintf('(%g,%g)',pos))
%# for each axis, determine if we are inside it
for i=1:numel(hImg)
if inpolygon(pos(1),pos(2), p(:,1,i),p(:,2,i))
set(hImg(i), 'CData',imgOn)
set(hAx(i), 'LineWidth',3, 'XColor','r', 'YColor','r')
else
set(hImg(i), 'CData',imgOff)
set(hAx(i), 'LineWidth',1, 'XColor','k', 'YColor','k')
end
end
end
end
EDIT#2
In response to your comments, I recreated the example in GUIDE. Here are the main parts:
%# --- Executes just before rollover is made visible.
function rollover_OpeningFcn(hObject, eventdata, handles, varargin)
%# Choose default command line output for rollover
handles.output = hObject;
%# allocate
NUM = 4;
imgOff = cell(1,NUM);
imgOn = cell(1,NUM);
hImg = zeros(1,NUM);
%# read images
imgOff{1} = imread('coins.png');
imgOn{1} = imcomplement(imread('coins.png'));
imgOff{2} = imread('coins.png');
imgOn{2} = imcomplement(imread('coins.png'));
imgOff{3} = imread('coins.png');
imgOn{3} = imcomplement(imread('coins.png'));
imgOff{4} = imread('coins.png');
imgOn{4} = imcomplement(imread('coins.png'));
%# setup axes
hAx = [handles.axes1 handles.axes2 handles.axes3 handles.axes4];
for i=1:NUM
hImg(i) = imagesc(imgOff{i}, 'Parent',hAx(i));
end
colormap(hObject, 'gray')
set(hAx, 'XTick',[], 'YTick',[], 'Box','on')
%# make sure axes units match that of the figure
set(hAx, 'Units',get(hObject, 'Units'))
%# check axes parent container (figure or panel)
hAxParents = cell2mat( get(hAx,'Parent') );
idx = ismember(get(hAxParents,'Type'), 'uipanel');
ppos = cell2mat( get(hAxParents(idx), 'Position') );
%# adjust position relative to parent container
axPos = cell2mat( get(hAx,'Position') );
axPos(idx,1:2) = axPos(idx,1:2) + ppos(:,1:2);
%# compute corner-points of each axis
p = zeros(5,2,NUM);
for k=1:NUM
p(:,:,k) = bsxfun(@plus, axPos(k,1:2), ...
[0 0; axPos(k,3) 0; axPos(k,3:4); 0 axPos(k,4); 0 0]);
end
%# store in handles structure
handles.p = p;
handles.hAx = hAx;
handles.hImg = hImg;
handles.imgOff = imgOff;
handles.imgOn = imgOn;
%# Update handles structure
guidata(hObject, handles);
%# --- Executes on mouse motion over figure - except title and menu.
function figure1_WindowButtonMotionFcn(hObject, eventdata, handles)
%# CurrentPoint
pos = get(hObject,'CurrentPoint');
set(handles.text1,'String',sprintf('(%g,%g)',pos));
%# for each axis, determine if we are inside it
for i=1:numel(handles.hImg)
if inpolygon(pos(1),pos(2), handles.p(:,1,i),handles.p(:,2,i))
set(handles.hImg(i), 'CData',handles.imgOn{i})
set(handles.hAx(i), 'LineWidth',3, 'XColor','r', 'YColor','r')
else
set(handles.hImg(i), 'CData',handles.imgOff{i})
set(handles.hAx(i), 'LineWidth',1, 'XColor','k', 'YColor','k')
end
end
The GUI has basically the same components as before, except that the axes are contained inside a uipanel (similar to the screenshot of your GUI).
A few things to note:
Since our goal is to compare the figure's CurrentPoint to the axes position, it is important that they have the same Units as that of the figure, thus: set(hAx, 'Units',get(hObject, 'Units'))
According to the documentation, an axes object's Position property is relative to its parent container, and because the four axes are inside a panel, we need to adjust their positions accordingly: axPos(idx,1:2) = axPos(idx,1:2) + ppos(:,1:2);