Extracting many regions of interest (ROIs) from thousands of images

I have a large set of microscopy images, and each image has several hundred spots (ROIs). These spots are fixed in space. I want to extract each spot from each image and save them to the workspace so that I can analyze them further.
I have written the code myself and it works correctly, but it is too slow: it takes around 250 seconds to read out all the spots from every image.
The core of my code looks like this:
for s=1:NumberImages
    im1=imread(fn(s,1).name);
    im=im1-medfilt2(im1,[15,15]);
    for i=1:length(p_g_x)
        GreenROI(i,s)=double(sum(sum(im(round(p_g_y(i))+(-2:2), round(p_g_x(i))+(-2:2)))));
        RedROI(i,s)  =double(sum(sum(im(round(p_r_y(i))+(-2:2), round(p_r_x(i))+(-2:2)))));
    end
end
As you can see from the code, I am extracting 5x5 regions. The length of p_g_x is between 500 and 700.
Thanks for your input. I used the profile viewer to figure out which function exactly is taking the most time: it is the median filter, which accounts for ~90% of the run time.
Any suggestion to speed it up would be greatly appreciated.
thanks
Mahipal

Use Matlab's profiling tools!
profile on % Starts the profiler
% Run some code now.
profile viewer % Shows you how often each function was called, and
% where most time was spent. Try to start with the slowest part.
profile off % Stop the profiler when you are done measuring.
Pre-allocate
Preallocate the output arrays because you know their size; this makes things much faster. (Matlab told you this already!)
GreenROI = zeros(length(p_g_x), NumberImages); % And the same for RedROI.
Use convolution
Read about Matlab's conv2 function.
for s=1:NumberImages
    im=imread(fn(s,1).name);
    im=double(im-medfilt2(im,[15,15])); % conv2 needs single/double input
    % Pre-compute the sums first. This will only be faster for large p_g_x.
    roi_image = conv2(im, ones(5,5));
    for i=1:length(p_g_x)
        GreenROI(i,s)=roi_image(round(p_g_y(i)), round(p_g_x(i))); % You might have to offset the indices by 2, because of the convolution. Check that results are the same.
        RedROI(i,s)  =roi_image(round(p_r_y(i)), round(p_r_x(i)));
    end
end
Matlab-ize the code
Now that you've used convolution to get an image of sums over 5x5 windows (or you could have used @Shai's accumarray approach, which amounts to the same thing), you can speed things up further by not iterating over each element of p_g_x but using it as a vector straight away.
I leave that as an exercise for the reader (as a hint: convert p_g_x and p_g_y to linear indices using sub2ind); a sketch of the idea follows.
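As a rough illustration only (it assumes roi_image was computed with conv2(im, ones(5,5), 'same') so that no index offset is needed, and that p_g_x, p_g_y, p_r_x and p_r_y are column vectors), the inner loop could be replaced by vectorized indexing:
green_idx = sub2ind(size(roi_image), round(p_g_y), round(p_g_x));
red_idx   = sub2ind(size(roi_image), round(p_r_y), round(p_r_x));
GreenROI(:,s) = roi_image(green_idx);
RedROI(:,s)   = roi_image(red_idx);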
Update
Our answers, mine included, showed how premature optimisation is a bad thing. Without measuring, I assumed that your loop would take most of the time, but when you measured it (thanks!) it turned out that was not the problem. The bottleneck is medfilt2, the median filter, which takes 90% of the time. So you should address this first. (Note that on my computer your original code is fast enough for my taste, but it is still the median filter taking up most of the time.)
Looking at what the median filter operation does might help us figure out how to make it faster. Here is an example image: on the left is the original image, in the middle the median-filtered version, and on the right the result of the subtraction.
To me the result looks awfully similar to an edge detection result. (Mathematically this is no surprise.)
I would suggest you start experimenting with various edge detectors. Have a look at Canny and Sobel, or just use conv2(image, kernel_x) where kernel_x = [1, 2, 1; 0, 0, 0; -1, -2, -1], and the same but transposed for a kernel_y. You can find various edge detection options in edge(im, option). I tried all options from {'sobel', 'canny', 'roberts', 'prewitt'}. Except for Canny, they all take about the same time as your median filter method: Canny is 4x slower, the rest (including the original) take 7.x seconds. All of this without a GPU. imgradient took 9 seconds.
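As a hedged sketch of what that substitution could look like (whether a gradient image is an acceptable replacement for the median-filter background subtraction depends on how your spots look, so verify the ROI sums against your original results):
kernel_x = [1, 2, 1; 0, 0, 0; -1, -2, -1];   % Sobel kernel in x
kernel_y = kernel_x.';                       % and its transpose for y
gx = conv2(double(im1), kernel_x, 'same');
gy = conv2(double(im1), kernel_y, 'same');
im = sqrt(gx.^2 + gy.^2);                    % gradient magnitude, used in place of im1 - medfilt2(im1,[15,15])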
So from this I would say that you can't get any faster on the CPU. If you have a GPU and it works with Matlab, you could speed it up: load your images as gpuArrays. There is an example in the medfilt2 documentation. You can still make minor speed-ups, but they can only amount to about a 10% increase, so they are hardly worthwhile.
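For reference, a minimal sketch of the gpuArray route (it assumes the Parallel Computing Toolbox and a supported GPU; see the medfilt2 documentation for the exact supported syntaxes):
im1 = gpuArray(imread(fn(s,1).name));   % move the image to the GPU
im  = im1 - medfilt2(im1, [15 15]);     % medfilt2 runs on the GPU for gpuArray inputs
im  = gather(im);                       % bring the result back to the CPU for the ROI sums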

A few things you should do
Pre-allocate as suggested by Didac Perez.
Use the profiler to see what exactly takes long in your code: is it the median filter? Is it the indexing?
Assuming all images are of the same size, you can use accumarray with fixed index masks (subs) to quickly sum the values:
subs_g = zeros( h, w ); %// allocate mask for green
subs_r = zeros( h, w ); %// allocate mask for red
subs_g( sub2ind( [h w], round(p_g_y), round(p_g_x) ) ) = 1:numel(p_g_x); %// index each green region
subs_g = conv2( subs_g, ones(5), 'same' );
subs_r( sub2ind( [h w], round(p_r_y), round(p_r_x) ) ) = 1:numel(p_r_x); %// index each red region
subs_r = conv2( subs_r, ones(5), 'same' );
sel_g = subs_g > 0;
sel_r = subs_r > 0;
subs_g = subs_g(sel_g);
subs_r = subs_r(sel_r);
Once these masks are fixed, you can process all images:
%// pre-allocation goes here - I'll leave it to you
for s=1:NumberImages
    im1=imread(fn(s,1).name);
    im=double( im1-medfilt2(im1,[15,15]) );
    GreenROI(:,s) = accumarray( subs_g, im( sel_g ) ); %// summing all the green ROIs
    RedROI(:,s)   = accumarray( subs_r, im( sel_r ) ); %// summing all the red ROIs
end

First, preallocate your GreenROI and RedROI arrays, since you already know their final size. Right now you are resizing them again and again in each iteration.
Secondly, I recommend using tic and toc to investigate where the problem is; they will give you useful timings.
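For example, a minimal sketch of timing just the median-filter step inside your loop (imFiltered is only an illustrative variable name):
tic;                                 % start a timer
imFiltered = medfilt2(im1, [15 15]); % the step under test
toc;                                 % prints the elapsed time for this step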

Vectorized code that operates on each image -
%// Pre-compute green and red indices to be used across all the images
im1 = imread(fn(1,1).name); %// read one image up front just to get its size
r1 = round(bsxfun(@plus,permute(p_g_y,[3 2 1]),[-2:2]'));
c1 = round(bsxfun(@plus,permute(p_g_x,[3 2 1]),[-2:2]));
green_ind = reshape(bsxfun(@plus,(c1-1)*size(im1,1),r1),[],numel(p_g_x));
r2 = round(bsxfun(@plus,permute(p_r_y,[3 2 1]),[-2:2]'));
c2 = round(bsxfun(@plus,permute(p_r_x,[3 2 1]),[-2:2]));
red_ind = reshape(bsxfun(@plus,(c2-1)*size(im1,1),r2),[],numel(p_r_x));
for s=1:NumberImages
    im1=imread(fn(s,1).name);
    im=double(im1-medfilt2(im1,[15,15]));
    GreenROI(:,s)=sum(im(green_ind)).'; %// sum over each 5x5 window
    RedROI(:,s)  =sum(im(red_ind)).';
end

Related

How to generate and concatenate spectrograms efficiently

I am working on a signal-processing problem. I have a dataset of more than 2000 EEG signals. Each EEG signal is represented by a 2D NumPy array (19 x 30000); each row of the array is one channel of the signal. What I have to do is compute the spectrograms of these individual channels (rows) and concatenate them vertically. Here is the code I wrote so far.
import numpy as np
import matplotlib.pyplot as plt
import cv2

raw = np.load('class_1_ar/'+filename)
images = []
for i in range(19):
    print(i, end=" ")
    spec, freq, t, im = plt.specgram(raw[i], Fs=100, NFFT=100, noverlap=50)
    plt.axis('off')
    figure = plt.gcf()
    figure.set_size_inches(12, 1)
    figure.canvas.draw()
    img = np.array(figure.canvas.buffer_rgba())
    img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)
    b = figure.axes[0].get_window_extent()
    img = np.array(figure.canvas.buffer_rgba())
    img = img[int(b.y0):int(b.y1), int(b.x0):int(b.x1), :]
    img = cv2.cvtColor(img, cv2.COLOR_RGBA2BGRA)
    images.append(img)
base = cv2.vconcat(images)
cv2.imwrite('class_1_sp/'+filename[:-4]+'.png', base)
c -= 1
print(c)
And here is my output:
However, the process is taking too much time. It took almost 8 hours to process the first 200 samples.
My question is, What can I do to make it faster?
Like others have said, the overhead of going through matplotlib is likely slowing things down. It would be better to just compute (and not plot) the spectrogram with scipy.signal.spectrogram. This function directly returns the spectrogram as a 2D NumPy array, so you don't have the roundabout step of getting it out of the canvas. Note that this does mean you'll have to map the spectrogram output to pixel intensities yourself. In doing that, beware that scipy.signal.spectrogram returns the spectrogram as powers, not decibels, so you probably want to apply 10*np.log10(Sxx) to the result (see also scipy.signal.spectrogram compared to matplotlib.pyplot.specgram).
Plotting aside, the bottleneck in computing a spectrogram is the FFTs. Instead of using a transform size of 100 samples, 128 or some other power of 2 is more efficient. With scipy.signal.spectrogram this is done by setting nfft=128. Note that you can set nperseg=100 and nfft=128 so that 100 samples are still used for each segment, but they are zero-padded to 128 before the FFT. One other thought: if raw is 64-bit float, it may help to cast it to 32-bit: raw = np.load(...).astype(np.float32).
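Putting those suggestions together, a minimal sketch (illustrative only: the small epsilon in the log, the dB scaling and the uint8 mapping are assumptions you may want to adjust, and np.concatenate stands in for the cv2.vconcat step):
import numpy as np
from scipy.signal import spectrogram

raw = np.load('class_1_ar/' + filename).astype(np.float32)  # 19 x 30000 array

channel_images = []
for channel in raw:
    # 100-sample segments with 50-sample overlap, zero-padded to 128 points per FFT
    f, t, Sxx = spectrogram(channel, fs=100, nperseg=100, noverlap=50, nfft=128)
    channel_images.append(10 * np.log10(Sxx + 1e-12))  # power -> decibels

stacked = np.concatenate(channel_images, axis=0)  # stack the 19 channels vertically
# map to 0-255 so the result can be written out as an image
img = np.uint8(255 * (stacked - stacked.min()) / (stacked.max() - stacked.min()))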

Median of each pixel of a set of images

I would like to calculate the median of each pixel in a set of images, i.e. a "video". However, when MATLAB starts calculating this, it takes a very long time and eventually fails with an index error. Why?
This is the code:
V = VideoReader('hall_monitor.avi');
info = get(V);
M = info.Width;
N = info.Height;
nb_frames_bk = 5;
v_pixel = zeros([nb_frames_bk 3]);
IB=zeros([M N 3],'double');
for i=1:M
    for j=1:N
        for k=1:nb_frames_bk
            frm=read(V,k);
            v_pixel(k,:)=frm(i,j,:);
        end
        IB(i,j,:)=median(v_pixel(:,:));
    end
end
IB=uint8(IB);
imshow(IB);
This code can benefit from a lot of refactoring. For one thing, you are re-reading frames when you can just read them once, store them and use them after you're done.
Secondly, iterating over all pixels to compute your median is going to be very slow. From what it looks like in your code, for each spatial position over the first nb_frames_bk frames, you collect all of the RGB values within these frames and calculate the median RGB value.
Also, as a minor note, you are getting an "index exceeds matrix dimensions" error because you defined the output matrix the wrong way around: you defined it as M x N, with M being the width and N being the height, and these need to be swapped. Remember that matrices are defined with height first and width second. However, this becomes irrelevant with what I'm going to suggest for implementing this properly.
Instead of reading the frames one at a time, specify a range of frames. This way, you will get a 4D matrix where the first three dimensions reference an image and the fourth dimension represents the frame number. You can then take the median along the fourth dimension to find the median RGB value over all frames.
In other words, simply do this:
V = VideoReader('hall_monitor.avi');
nb_frames_bk = 5;
frms = read(V, [1 nb_frames_bk]);
IB = median(frms, 4);
imshow(IB);
This is much better, to the point, and guaranteed to be faster. You also don't need to obtain the width and height of each frame, since we are no longer looping over each pixel.

Color Segmentation: A better cluster-analysis to find K

I know there have been many questions such as this and some solutions to them, but I'm hoping there's another way.
GOAL: The final goal is to cluster colors given an image, then allow the user to change those colors. The user does not need to enter any k. The algorithm determines K.
METHOD: Currently, I'm using the silhouette score metric (http://scikit-learn.sourceforge.net/dev/modules/generated/sklearn.metrics.silhouette_score.html#sklearn.metrics.silhouette_score). I'm using MiniBatchKMeans to cluster the image and then calculate the silhouette_score within a range of k (4-8). The code would be:
# silhouetteCoeff determination
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import silhouette_score

def silhouetteCoeff(z):
    max_silhouette = 0
    max_k = 0
    for i in range(4, 17):
        clt = MiniBatchKMeans(n_clusters=i, random_state=42)
        clt.fit(z)
        silhouette_avg = silhouette_score(z, clt.labels_, sample_size=250, random_state=42)
        print("k: ", i, " silhouette avg: ", silhouette_avg)
        if silhouette_avg == 1.0:
            max_k = i
            break
        elif silhouette_avg > max_silhouette:
            max_silhouette = silhouette_avg
            max_k = i
    print("Max silhouette: ", max_silhouette)
    print("Max k: ", max_k)
    return int(max_k)
Even if I color-quantize the image beforehand (to 16 colors), the function still takes a good 6-8 seconds to run (assuming an image size of 400x400).
My question is: is there any better or faster way to find k? I've tried the elbow method too, but I still have to calculate the SSE there. From testing on some images, I've found that a good average is k = 8, but on a more color-intensive image the algorithm misses some colors.
Measure your bottleneck!
Silhouette is O(n²), so it will most likely be the bottleneck of your approach. Also, there are much faster k-means variants than the one in sklearn, so there is a lot of potential to make things faster.
MiniBatch k-means won't even converge; it only approximates the result. As far as I can tell, it only makes sense if you can't afford to keep all the data in memory.
Reducing the color palette to just 16 colors beforehand apparently does not help at all.
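To see which step actually dominates, a quick timing sketch using the same calls as in the question (z is the pixel array from the question):
import time
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import silhouette_score

t0 = time.perf_counter()
clt = MiniBatchKMeans(n_clusters=8, random_state=42).fit(z)
t1 = time.perf_counter()
score = silhouette_score(z, clt.labels_, sample_size=250, random_state=42)
t2 = time.perf_counter()
print(f"fit: {t1 - t0:.2f} s, silhouette: {t2 - t1:.2f} s")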

Remove unwanted regions from an image in MATLAB

I have an image that includes an object and some unwanted regions (small dots). I want to remove them. I have used a morphological operator, for example 'close', to remove them, but the result is not perfect. Is there a better way to remove them more cleanly? You can download an example image at raw image.
This is my code
load Image.mat %load Img value
Img= bwmorph(Img,'close');
imshow(Img);
You might prefer a faster and vectorized approach using bsxfun along with the information obtained from bwlabel itself.
Note: bsxfun is memory-intensive, but that's precisely what makes it faster. Therefore, watch out for the size of B1 in the code below. This method will get slower once it reaches the memory constraints set by the system, but until then it provides a good speedup over the regionprops method.
Code
[L,num] = bwlabel( Img );
counts = sum(bsxfun(@eq,L(:),1:num));
B1 = bsxfun(@eq,L,permute(find(counts>threshold),[1 3 2]));
NewImg = sum(B1,3)>0;
EDIT 1: A few benchmarks comparing the bsxfun and regionprops approaches are discussed next.
Case 1
Benchmark Code
Img = imread('coins.png');%%// This one is chosen as it is available in MATLAB image library
Img = im2bw(Img,0.4); %%// 0.4 seemed good to make enough blobs for this image
lb = bwlabel( Img );
threshold = 2000;
disp('--- With regionprops method:');
tic,out1 = regionprops_method1(Img,lb,threshold);toc
clear out1
disp('---- With bsxfun method:');
tic,out2 = bsxfun_method1(Img,lb,threshold);toc
%%// For demo: show that we have rejected enough unwanted blobs
figure,
subplot(211),imshow(Img);
subplot(212),imshow(out2);
Benchmark Results
--- With regionprops method:
Elapsed time is 0.108301 seconds.
---- With bsxfun method:
Elapsed time is 0.006021 seconds.
Case 2
Benchmark Code (Only the changes from Case 1 are listed)
Img = imread('snowflakes.png');%%// This one is chosen as it is available in MATLAB image library
Img = im2bw(Img,0.2); %%// 0.2 seemed good to make enough blobs for this image
threshold = 20;
Benchmark Results
--- With regionprops method:
Elapsed time is 0.116706 seconds.
---- With bsxfun method:
Elapsed time is 0.012406 seconds.
As pointed out earlier, I have tested with other, bigger images containing a lot of unwanted blobs, for which the bsxfun method doesn't provide any improvement over the regionprops method. Since no such bigger images are available in the MATLAB image library, they couldn't be shown here. To sum up, use either of these two approaches depending on the characteristics of your input. It would be interesting to see how the two approaches perform on your images.
You can use regionprops and bwlabel to select all regions that are smaller than a certain area (i.e. number of pixels):
lb = bwlabel( Img );
st = regionprops( lb, 'Area', 'PixelIdxList' );
toRemove = [st.Area] < threshold; % fix your threshold here
newImg = Img;
newImg( vertcat( st(toRemove).PixelIdxList ) ) = 0; % remove

Taking too long to complete an operation and using lots of physical memory

I have this piece of code:
function Plot2DScatter(img1,img2)
    n = size(img1,1);
    m = size(img2,1);
    axis([0 280 0 280])
    hold on
    for i=1:n
        for j=1:m
            x = img1(i,j);
            y = img2(i,j);
            plot(x,y);
        end
    end
end
It's a function that will be used in a GUI. img1 and img2 are two 2048x2048 image matrices,
so the loop body is executed 4,194,304 times.
My problem is that it takes too much time for the system to complete the operation (about 45 minutes) and CPU usage is really high. When it is done, so much physical memory (RAM) is used (about 45 percent) that the computer hangs.
Is there anything I can do to decrease the load on the system and perform the operation faster?
In Matlab you should try to avoid for loops whenever possible and use matrix expressions instead. What you are trying to do can be done like this:
plot(img1(:),img2(:))
img1(:) and img2(:) convert the images into vectors, which can be used directly as input to the plot function. For your purpose it might be even better to use the scatter function, which plots your data as circles directly. That is:
function Plot2DScatter(img1,img2)
    scatter(img1(:),img2(:))
    axis([0 280 0 280]) % note: with the axis statement afterwards
                        % you do not need 'hold on'
end
