Matrix empty for a heatmap (EnrichedHeatmap)

library(GenomicRanges)
library(EnrichedHeatmap)
library(TxDb.Celegans.UCSC.ce11.ensGene)

grl <- promoters(TxDb.Celegans.UCSC.ce11.ensGene)
intestine_gr <- makeGRangesFromDataFrame(intestine, keep.extra.columns = TRUE)
mat1 <- normalizeToMatrix(intestine_gr, grl, extend = 5000, mean_mode = "w0", w = 50)
Printing mat1 shows:

Normalize intestine_gr to grl:
  Upstream 5000 bp (100 windows)
  Downstream 5000 bp (100 windows)
  Include target regions (44 windows)
  61451 target regions
I am trying to build a matrix for plotting an EnrichedHeatmap, but with normalizeToMatrix I get a matrix containing only zeros (when I plot the EnrichedHeatmap it is white), and I don't understand what the problem is. Can someone help me?

Related

How to connect disconnected points in an image with MATLAB?

This is a more specific problem with a complex setup. I have calculated the distance from every 1-pixel to its nearest 0-pixel in the raw image and transformed it into an image whose local maxima are shown below:
I used the following code to extract the local maxima from this transformed matrix:
loc_max = imregionalmax(loc,4);
figure;imshow(loc_max)
loc_max contains disconnected dots at the local-maxima points. I have tried a bunch of ways to connect them, but none works as intended yet.
Here is one of my ideas:
[yy, xx] = find(loc_max == 1);
d = [];
dmin = [];
idx = [];
for i = 1:size(yy,1)
    for j = 1:size(xx,1)
        if i ~= j
            % distance between the current 1-pixel and every other local-maximum pixel in loc_max
            d(j) = sqrt(sum(bsxfun(@minus, [yy(j) xx(j)], [yy(i) xx(i)]).^2, 2));
        end
    end
    % minimum distance from the current 1-pixel to all the others
    [dmin(i), idx(i)] = min(d(:));
end
I tried to find the 1-pixels in loc_max nearest to the current 1-pixel and then connect them. But this isn't the solution yet, because a point won't connect onward to the next 1-pixel if the previous pixel is already the nearest one to the current point.
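As an aside, the double loop above can be vectorized. A sketch using pdist2, assuming the Statistics and Machine Learning Toolbox is available:
pts = [yy xx];                % coordinates of all local-maxima pixels
D = pdist2(pts, pts);         % all pairwise Euclidean distances
D(1:size(D,1)+1:end) = Inf;   % mask out the self-distances on the diagonal
[dmin, idx] = min(D, [], 2);  % nearest other maximum for every pixel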
In addition, I want to preserve the pixel information of those 0-pixels along the connected line between two disconnected 1-pixels. I want to be able to reconstruct the whole image out of this simplified skeleton later.
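For that reconstruction, one hedged sketch: if loc still holds the distance-transform values, the original shape is approximately the union of disks centred on the skeleton pixels, with the stored distances as radii:
[sy, sx] = find(loc_max);                           % skeleton pixel coordinates
r = loc(loc_max);                                   % stored distances at those pixels
[XX, YY] = meshgrid(1:size(loc,2), 1:size(loc,1));
rec = false(size(loc));
for k = 1:numel(sx)
    % union of disks: every pixel within r(k) of skeleton pixel k
    rec = rec | ((XX - sx(k)).^2 + (YY - sy(k)).^2 <= r(k)^2);
end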
Any help would be greatly appreciated!
I have tried erode and dilate (like Ander Biguri advised).
I tried setting the angles of the kernels to be correct.
Check the following:
%Fix the input (remove JPEG artifacts and remove white margins)
%(This is not part of the solution - just a little cleanup).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
I = imread('https://i.stack.imgur.com/4L3fP.jpg'); %Read image from imgur.
I = im2bw(I);
[y0, x0] = find(I == 0, 1);         %First 0-pixel (column-major order)
[y1, x1] = find(I == 0, 1, 'last'); %Last 0-pixel
I = I(y0:y1, x0:x1);
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
se0 = strel('disk', 3);
J = imdilate(I, se0); %Dilate - make lines fat
se1 = strel('line', 30, 150); %150 degrees.
se2 = imdilate(se1.getnhood, se0); %Create 150 degrees fat line
J = imclose(J, se2); %Close - (dilate and erode)
se1 = strel('line', 30, 140); %140 degrees.
se2 = imdilate(se1.getnhood, se0); %Create 140 degrees fat line
J = imclose(J, se2);
se1 = strel('line', 80, 60); %60 degrees.
se2 = imdilate(se1.getnhood, se0); %Create 60 degrees fat line
J = imclose(J, se2);
se4 = strel('disk', 2);
J = imerode(J, se4); %Erode - make lines a little thinner.
figure; imshow(J);
Result:
Do you think it's good enough?
Ander Biguri gets the credit for the following solution:
J = bwmorph(J,'skel',Inf);

Speeding up vectorized eye-tracking algorithm in numpy

I'm trying to implement Fabian Timm's eye-tracking algorithm (paper: http://www.inb.uni-luebeck.de/publikationen/pdfs/TiBa11b.pdf; write-up: http://thume.ca/projects/2012/11/04/simple-accurate-eye-center-tracking-in-opencv/) in NumPy and OpenCV, and I've hit a snag. I think I've vectorized my implementation decently, but it's still not fast enough to run in real time, and it doesn't detect pupils as accurately as I had hoped. This is my first time using NumPy, so I'm not sure what I've done wrong.
import numpy as np
import cv2
from math import floor

def find_pupil(eye):
    eye_len = np.arange(eye.shape[0])
    xx, yy = np.meshgrid(eye_len, eye_len)  # coordinates
    XX, YY = np.meshgrid(xx.ravel(), yy.ravel())  # all distance vectors
    Dx, Dy = [YY - XX, YY - XX]  # y2-y1, x2-x1 -- simpler this way because YY = XX.T
    Dlen = np.sqrt(Dx**2 + Dy**2)
    Dx, Dy = [Dx/Dlen, Dy/Dlen]  # normalized
    Gx, Gy = np.gradient(eye)
    Gmagn = np.sqrt(Gx**2 + Gy**2)
    Gx, Gy = [Gx/Gmagn, Gy/Gmagn]  # normalized
    GX, GY = np.meshgrid(Gx.ravel(), Gy.ravel())
    X = (GX*Dx + GY*Dy)**2
    # invert and blur the eye image for use as the weight w
    eye = cv2.bitwise_not(cv2.GaussianBlur(eye, (5, 5), 0.005*eye.shape[1]))
    eyem = np.repeat(eye.ravel()[np.newaxis, :], eye.size, 0)
    C = (np.nansum(eyem*X, axis=0) / eye.size).reshape(eye.shape)
    return np.unravel_index(C.argmax(), C.shape)
and the rest of the code:
def find_eyes(face):
    left_x, left_y = [int(floor(0.5 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    right_x, right_y = [int(floor(0.1 * face.shape[0])), int(floor(0.2 * face.shape[1]))]
    area = int(floor(0.2 * face.shape[0]))
    left_eye = (left_x, left_y, area, area)
    right_eye = (right_x, right_y, area, area)
    return [left_eye, right_eye]

faceCascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
video_capture = cv2.VideoCapture(0)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.1,
        minNeighbors=5,
        minSize=(30, 30),
        flags=cv2.CASCADE_SCALE_IMAGE
    )
    # Draw a rectangle around the faces
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x+w, y+h), (0, 255, 0), 2)
        roi_gray = gray[y:y+h, x:x+w]
        roi_color = frame[y:y+h, x:x+w]
        eyes = find_eyes(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            eye_gray = roi_gray[ey:ey+eh, ex:ex+ew]
            eye_color = roi_color[ey:ey+eh, ex:ex+ew]
            cv2.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (255, 0, 0), 2)
            px, py = find_pupil(eye_gray)
            cv2.rectangle(eye_color, (px, py), (px+1, py+1), (255, 0, 0), 2)
    # Display the resulting frame
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()
Many of the operations that replicate elements before doing math on them can instead be performed on the fly by creating singleton dimensions and letting NumPy broadcasting do the expansion. This has two benefits: it saves workspace memory, and it boosts performance. Also, at the end, the nansum calculation can be replaced with a simplified version. With all of that philosophy in mind, here's one modified approach -
def find_pupil_v2(face, x, y, w, h):
    eye = face[x:x+w, y:y+h]
    eye_len = np.arange(eye.shape[0])
    N = eye_len.size**2
    eye_len_diff = eye_len[:, None] - eye_len
    Dlen = np.sqrt(2*((eye_len_diff)**2))
    Dxy0 = eye_len_diff/Dlen
    Gx0, Gy0 = np.gradient(eye)
    Gmagn = np.sqrt(Gx0**2 + Gy0**2)
    Gx, Gy = [Gx0/Gmagn, Gy0/Gmagn]  # normalized
    B0 = Gy[:, :, None]*Dxy0[:, None, :]
    C0 = Gx[:, None, :]*Dxy0
    X = ((C0.transpose(1, 0, 2)[:, None, :, :] + B0[:, :, None, :]).reshape(N, N))**2
    eye1 = cv2.bitwise_not(cv2.GaussianBlur(eye, (5, 5), 0.005*eye.shape[1]))
    C = (np.nansum(X, 0)*eye1.ravel()/eye1.size).reshape(eye1.shape)
    return np.unravel_index(C.argmax(), C.shape)
There's one repeat still left in it, at Dxy. It might be possible to avoid that step and feed Dxy0 directly into the step that uses Dxy to give us X, but I haven't worked through it. Everything else is converted to broadcasting!
Runtime test and output verification -
In [539]: # Inputs with random elements
...: face = np.random.randint(0,10,(256,256)).astype('uint8')
...: x = 40
...: y = 60
...: w = 64
...: h = 64
...:
In [540]: find_pupil(face,x,y,w,h)
Out[540]: (32, 63)
In [541]: find_pupil_v2(face,x,y,w,h)
Out[541]: (32, 63)
In [542]: %timeit find_pupil(face,x,y,w,h)
1 loops, best of 3: 4.15 s per loop
In [543]: %timeit find_pupil_v2(face,x,y,w,h)
1 loops, best of 3: 529 ms per loop
It seems we are getting close to 8x speedup!

How does this MATLAB code make an image into a binary image?

v = videoinput('winvideo', 1, 'YUY2_320x240');
s = serial('COM1', 'BaudRate', 9600);
fopen(s);
while(1)
    h = getsnapshot(v);
    rgb = ycbcr2rgb(h);
    for i = 1:240
        for j = 1:320
            if rgb(i,j,1) > 140 && rgb(i,j,2) < 100 % use your own conditions
                bm(i, j) = 1;
            else
                bm(i, j) = 0;
            end
        end
    end
This is code I got from my senior for image processing in MATLAB. The code above converts the image to a binary image, but I didn't understand the expression rgb(i,j,1) > 140. How is the value 140 chosen, and what does rgb(i,j,1) mean?
You have an RGB image rgb whose third dimension holds the R, G and B color planes. Thus, rgb(i,j,1) is the red value at row i, column j.
The expression rgb(i,j,1) > 140 tests whether this red value is greater than 140. The value 140 appears to be ad hoc, picked for this specific task.
The code is extremely inefficient as there is no need for a loop:
bm = rgb(:,:,1)>140 & rgb(:,:,2)<100;
Note the change from && to the element-wise operator &. Here I'm assuming that the size of rgb is 240x320x3.
Edit: The threshold values you choose completely depend on the task, but a common approach to automatic thresholding is Otsu's method, graythresh. You can apply it to a single color plane to get a threshold:
redThresh = graythresh(rgb(:,:,1)) * 255;
Note that graythresh returns a value on [0,1], so you have to scale that by the data range.
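For instance, a hedged sketch that derives both thresholds from the image instead of hardcoding 140 and 100:
% sketch: derive per-channel thresholds with Otsu instead of hardcoding them
redThresh   = graythresh(rgb(:,:,1)) * 255;  % graythresh returns a value on [0,1]
greenThresh = graythresh(rgb(:,:,2)) * 255;
bm = rgb(:,:,1) > redThresh & rgb(:,:,2) < greenThresh;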

Separating Background and Foreground

I am new to MATLAB and to image processing as well. I am working on separating the background and foreground in images like this one.
I have hundreds of images like this, found here. By trial and error I found a threshold (in RGB space): where the background is, the red layer is always less than 150 and the green and blue layers are greater than 150.
So if my RGB image is I and my r, g and b layers are
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
By finding the coordinates where the red value is below 150 and the green and blue values are above it, I can get the coordinates of the background, like
[r1 c1] = find(redMatrix < 150);
[r2 c2] = find(greenMatrix > 150);
[r3 c3] = find(blueMatrix > 150);
Now I get the coordinates of thousands of pixels in r1, c1, r2, c2, r3 and c3.
My questions:
How to find common values, like the coordinates of the pixels where red is less than 150 and green and blue are greater than 150?
I could iterate over every coordinate in r1 and c1 and check whether it also occurs in r2, c2 and r3, c3 to confirm it is a common point, but that would be very expensive.
Can this be achieved without a loop?
Suppose I somehow came up with common points [commonR commonC], where commonR and commonC are both of order 5000 x 1. To access such a background pixel of image I, I have to take an element of commonR, then the matching element of commonC, and then index image I like
I(commonR(i,1),commonC(i,1))
That is expensive too. So again my question is: can this be done without a loop?
Any help would be appreciated.
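For the indexing part, a minimal sketch: sub2ind converts row/column pairs to linear indices, so the pixels can be accessed in one shot:
idx = sub2ind([size(I,1) size(I,2)], commonR, commonC); % linear indices into one channel
red = I(:,:,1);
backgroundRed = red(idx);   % all background red values at once, no loop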
I got a solution from @Science_Fiction's answer. Just elaborating on his/her answer, I used
mask = I(:,:,1) < 150 & I(:,:,2) > 150 & I(:,:,3) > 150;
No loop is needed. You could do it like this:
I = imread('image.jpg');
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
J(:,:,1) = redMatrix < 150;
J(:,:,2) = greenMatrix > 150;
J(:,:,3) = blueMatrix > 150;
J = 255 * uint8(J);
imshow(J);
A greyscale image would also suffice to separate the background.
K = ((redMatrix < 150) + (greenMatrix > 150) + (blueMatrix > 150))/3;
imshow(K);
EDIT
I had another look, also using the other images you linked to.
Given the variance in background colors, I thought you would get better results deriving a threshold value from the image histogram instead of hardcoding it.
Occasionally this algorithm is a little too rigorous, e.g. erasing part of the clothes together with the background. But I think over 90% of the images are separated pretty well, which is more robust than anything you could hope to achieve with a fixed threshold.
close all;
path = 'C:\path\to\CUHK_training_cropped_photos\photos';
files = dir(path);
bins = 16;
for f = 3:numel(files)
    fprintf('%i/%i\n', f, numel(files));
    file = files(f);
    if isempty(strfind(file.name, 'jpg'))
        continue
    end
    I = imread([path filesep file.name]);
    % Take the histogram of the blue channel
    B = I(:,:,3);
    h = imhist(B, bins);
    h2 = h(bins/2:end);
    % Find the most common bin in the *upper half* of the histogram
    m = bins/2 + find(h2 == max(h2));
    % Set the threshold value somewhat below the value corresponding to that bin
    thr = m/bins - .25;
    BW = im2bw(B, thr);
    % Pad with ones to ensure background connectivity
    BW = padarray(BW, [1 1], 1);
    % Find connected regions in BW image
    CC = bwconncomp(BW);
    L = labelmatrix(CC);
    % Crop back again
    L = L(2:end-1,2:end-1);
    % Set the largest region in the original image to white
    for c = 1:3
        channel = I(:,:,c);
        channel(L==1) = 255;
        I(:,:,c) = channel;
    end
    % Show the results with a pause every 16 images
    subplot(4,4,mod(f-3,16)+1);
    imshow(I);
    title(sprintf('Img %i, thr %.3f', f, thr));
    if mod(f-3,16)+1 == 16
        pause
        clf
    end
end
pause
close all;
Results:
Your approach seems basic but decent. Since for this particular image the background is composed mainly of blue, you can be crude and do:
mask = img(:,:,3) > 150;
This sets the pixels where the blue value is greater than 150 to 1 (true) and all others to 0 (false). You will have a black and white image though.
imshow(mask);
To add colour back
mask3d(:,:,1) = mask;
mask3d(:,:,2) = mask;
mask3d(:,:,3) = mask;
img(mask3d) = 255;
imshow(img);
This should hopefully give you the colour image of the face with a pure white background. All this requires some trial and error.
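As an aside, the three mask3d assignments above can be collapsed with repmat; a small sketch of the same idea:
mask3d = repmat(mask, [1 1 3]);   % replicate the 2-D mask across all three channels
img(mask3d) = 255;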

How to get intermediate colors from one to another? [duplicate]

Possible duplicate: android color between two colors, based on percentage?
How to find all the colors between two colors?
At the beginning we have two colors in RGB and a number N of intermediate colors to produce between them. The method must return an array with the required colors. I strongly need help with an algorithm.
Suppose we have two colors, (R1,G1,B1) and (R2,G2,B2), and N intermediate colors:
for i from 1 to N:
    Ri = R1 + (R2 - R1) * i / N
    Gi = G1 + (G2 - G1) * i / N
    Bi = B1 + (B2 - B1) * i / N
    AddToArray(Ri, Gi, Bi)
Is that what you are looking for?
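For concreteness, a vectorized MATLAB sketch of the same linear interpolation (assuming c1 and c2 are 1x3 RGB triplets in [0, 255]):
t = (1:N)' / N;                        % interpolation fractions
colors = round((1 - t) * c1 + t * c2); % N-by-3 matrix, one intermediate color per row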
PS: I would recommend using the HSL color space instead of the RGB if you want to have a more natural color gradient.
Let your current cR, cG and cB values be 0% and the target R, G and B values be 100%; then iterate i = 1 to 100, with each iteration producing cRGB + i * (RGB - cRGB) / 100. You don't have to use 100 intermediate colors; you can use N of them.
function getIntermediateColors(currentColor, desiredColor, N) { // named so the snippet is valid; Color is assumed to be your own constructor
    var colors = [],
        cR = currentColor.R,
        cG = currentColor.G,
        cB = currentColor.B,
        dR = desiredColor.R - cR,
        dG = desiredColor.G - cG,
        dB = desiredColor.B - cB;
    for (var i = 1; i <= N; i++) {
        colors.push(new Color(cR + i * dR / N, cG + i * dG / N, cB + i * dB / N));
    }
    return colors;
}
However, that won't give you very good intermediate colors. The first thing you should do is convert your colors into HSV or similar colorspace where intensity is separate from hue and saturation. That will give you much better intermediate colors. http://en.wikipedia.org/wiki/HSL_and_HSV
To do that, first convert your colors to HSV and run the same algorithm as above, but with H, S and V instead of R, G and B. Keep in mind that S and V have a minimum of 0 and a maximum of 1, while H is expressed in degrees between 0 and 360. You may also have to adjust H so the interpolation goes from the current color to the destination color as quickly as possible: e.g. if cH = 10 and dH = 50, going 10 -> 50 is shortest, but if cH = 10 and dH = 350, going 10 -> -10 (the same as 350 degrees) is shorter.
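A one-line sketch of that shortest-path hue adjustment, in MATLAB like the sketches above (dH is the raw hue difference in degrees):
dH = mod(dH + 180, 360) - 180;   % wraps dH into (-180, 180], e.g. 340 becomes -20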
