Normalized Graph Cuts Image Segmentation

I'm implementing the normalized graph-cuts algorithm in MATLAB. Can someone please explain how to proceed after bi-partitioning with the second smallest eigenvector? Now I have 2 segments; what is the meaning of "recursively bi-partitioning the segmented parts"?

Recursively bi-partitioning means that you need to write a recursive function that bi-partitions a segment on each call.
You have an image I, and you partition it into two segments I1 and I2. Then you bi-partition each of those, which you can call
I11, I12
and
I21, I22.
Bi-partitioning each of those segments again gives
I111, I112
and
I121, I122,
and
I211, I212,
and
I221, I222,
and you continue in this way...
I assume you use the MATLAB function eigs to solve the generalized eigenvalue problem with the 'sm' option:
[eigvectors, eigvalues] = eigs(L, D, 6, 'sm')
where L = D - W is the Laplacian matrix, D is the diagonal degree matrix (D(i,i) is the sum of row i of W), and W is your weight matrix.
The number 6 means that you are asking for 6 eigenvectors, and 'sm' means you want the ones with the smallest-magnitude eigenvalues.
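To make the recursion concrete, here is a minimal MATLAB sketch. The stopping rule (an ncut threshold maxNcut and a minimum segment size minSize) and the zero split point are illustrative assumptions you would tune; Shi and Malik also search over several split points rather than always splitting at zero.
% Save as recursive_bipartition.m; example call for a whole image:
%   segments = recursive_bipartition(W, (1:size(W, 1))', 0.2, 50);
function segments = recursive_bipartition(W, idx, maxNcut, minSize)
    % W: n-by-n weight matrix for the whole image
    % idx: indices (into W) of the pixels in the current segment
    [part1, part2, ncutValue] = bipartition(W, idx);
    % Stop recursing when the best cut is no longer good enough,
    % or when a part would become too small
    if ncutValue > maxNcut || numel(part1) < minSize || numel(part2) < minSize
        segments = {idx};   % this segment is a leaf: keep it whole
        return
    end
    segments = [recursive_bipartition(W, part1, maxNcut, minSize), ...
                recursive_bipartition(W, part2, maxNcut, minSize)];
end

function [part1, part2, ncutValue] = bipartition(W, idx)
    Wsub = W(idx, idx);
    d = sum(Wsub, 2);
    D = diag(d);
    [V, ~] = eigs(D - Wsub, D, 2, 'sm');  % two smallest generalized eigenpairs
    f = V(:, 2);                          % second smallest eigenvector
    inA = f > 0;                          % simplest split: threshold at zero
    part1 = idx(inA);
    part2 = idx(~inA);
    cutAB = sum(sum(Wsub(inA, ~inA)));    % total weight crossing the cut
    % Ncut(A, B) = cut(A, B)/assoc(A, V) + cut(A, B)/assoc(B, V)
    ncutValue = cutAB / sum(d(inA)) + cutAB / sum(d(~inA));
end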
My Master's thesis in AI was about improving segmentation using normalized cuts; feel free to ask any questions.

Related

Cannon's Algorithm for Matrix Multiplication with small number of processors

While researching Cannon's algorithm, the examples are always of the same kind. For example, if matrices A and B are 3x3, there are always 9 processors in the examples. Each processor is responsible for its cell and adds the value to the sum. The instructions also always contain the same expression: "Process P(i, j) initially stores A(i, j) and B(i, j) and computes block C(i, j) of the result matrix."
I understand this case. (My example images showed the initial matrices and the layout of the processors.)
As in the example above, the number of processors is always chosen so that each one deals with only a 1x1 part of the matrix. I wonder what the situation would be if the number of processors were instead chosen so that each processor dealt with a larger part.
For example, what would happen if 4 processors were used to multiply 4x4 matrices? As I understood from the instructions, each process would take a 2x2 part of the A and B matrices. In other words, the 1st process would keep the elements of the A and B matrices at indices (0,0), (0,1), (1,0), and (1,1), and the second process would keep indices (0,2), (0,3), (1,2), and (1,3).
What would changing the number of processes change about the communication pattern or the number of steps required to complete the algorithm? For example, would you have to do more shifts in each step?
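For intuition about the blocked case, here is a sequential MATLAB sketch of Cannon's algorithm on a q x q process grid, where each process holds an (n/q) x (n/q) block; all variable names are mine, purely for illustration. The initial alignment shifts block row i of A left by i and block column j of B up by j, and then there are only q multiply-and-shift rounds, but each message is a whole block:
% Sequential simulation of blocked Cannon's algorithm (names illustrative)
n = 4;  q = 2;  b = n / q;          % 4x4 matrices on a 2x2 process grid
A = reshape(1:n*n, n, n);
B = A';
Ablk = mat2cell(A, b*ones(1, q), b*ones(1, q));  % Ablk{i,j} lives on P(i,j)
Bblk = mat2cell(B, b*ones(1, q), b*ones(1, q));
Cblk = cellfun(@(x) zeros(b), Ablk, 'UniformOutput', false);
% Initial alignment: block row i of A shifts left by i-1,
% block column j of B shifts up by j-1 (1-based indices)
for i = 1:q, Ablk(i, :) = circshift(Ablk(i, :), [0, -(i-1)]); end
for j = 1:q, Bblk(:, j) = circshift(Bblk(:, j), [-(j-1), 0]); end
% q multiply-and-shift rounds (instead of n rounds with 1x1 blocks)
for step = 1:q
    for i = 1:q
        for j = 1:q
            Cblk{i, j} = Cblk{i, j} + Ablk{i, j} * Bblk{i, j};  % local block product
        end
    end
    Ablk = circshift(Ablk, [0, -1]);  % every process passes its A block left
    Bblk = circshift(Bblk, [-1, 0]);  % and its B block up
end
C = cell2mat(Cblk);
disp(norm(C - A*B))   % ~0: the blocked schedule computes the same product
So with 4 processes on a 4x4 problem you need sqrt(p) = 2 shift rounds, versus 3 rounds in the 3x3/9-process examples: fewer but larger messages, and each round does a full block multiply instead of a scalar multiply-add.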

Technique to reduce dense point cloud in 3D

I have a point cloud consisting of more than 100,000 points, and I have to reduce this dense point cloud.
My point cloud is sorted with respect to the z axis.
I used simple arithmetic: if the selected point has x = 3, y = 4, z = 5, I compare it against the remaining points with a criterion like (x - x(i) == 0.0001f); if it matches, I try another one until the end of the point cloud, and keep the most recently updated one. This reduces the point cloud, but the results are not up to my expectations.
So, is there any technique to reduce a dense point cloud?
I should be writing this as a comment but don't have enough rep.
You can do a singular value decomposition. Take your data matrix X and compute its SVD. Plot the singular values you obtain and see which of them carry a high weight; selecting those gives you a suitable rank r for the matrix. You then reconstruct your original matrix as X' = U * Sig * V', where each factor is truncated to rank r.
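As a hedged sketch of what that looks like in MATLAB, assuming the cloud is stored as an N-by-3 matrix X with one point per row (the rank r you keep is whatever the singular value plot suggests):
X = rand(100000, 3);          % stand-in for your N-by-3 point cloud
[U, S, V] = svd(X, 'econ');   % economy SVD: U is N-by-3, S and V are 3-by-3
semilogy(diag(S), 'o');       % inspect which singular values dominate
r = 2;                        % example: keep the r highest-weighted components
Xr = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';   % rank-r reconstruction of X
Note that this gives a low-rank approximation of the coordinates rather than fewer points; if the goal is literally fewer points, voxel-grid downsampling (snap points to a regular grid and keep one representative per occupied cell) is a common alternative.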

How to reduce a matrix rank using some zeros?

I'm working with matrices of rank > 1. Is it possible to reduce the rank of a matrix to rank 1 by substituting zeros for some of its values?
Rank in a matrix refers to how many of the column vectors are independent and non-zero (or row vectors, but I was taught to always use column vectors). So, if you're willing to lose a lot of the information about the transformation your matrix defines, you could create a matrix that is just the first non-zero column of your matrix, with everything else set to zero. That is guaranteed to be rank 1.
However, that loses a whole lot of information about the transformation. Perhaps a more useful thing to do would be to project your matrix onto a space of size 1x1. There are ways to do this that create an injection from your matrix to the new space, guaranteeing that no two matrices produce an equivalent result. The first one that comes to mind is:
Let A be an n x m matrix with non-negative integer entries.
Let P_i be the i-th prime number, and write a_1, ..., a_(n*m) for the entries of A in row-major order.
Let F(A) = product over i = 1 to n*m of P_i ^ a_i (a Gödel-style encoding; a product rather than a sum is what makes distinct matrices map to distinct numbers, by unique factorization).
While this generates a single number, you can think of a single number as a 1 x 1 matrix, which, if non-zero, has rank 1.
All that being said, rank 1 matrices are kinda boring and you can do cooler stuff with matrices if you keep it at rank != 1. In particular, if you have an n x n matrix with rank n, a whole world of possibility opens up. It really depends on what you want to use these matrices for.
You might want to look at the singular value decomposition, which can be used to write your matrix as a sum of weighted outer products (see here). Choosing only the highest-weighted component of this sum will give you the closest rank-1 approximation to the decomposed matrix.
Most common linear algebra libraries (Eigen, OpenCV, NumPy) have an SVD implementation.
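A minimal MATLAB sketch of that rank-1 truncation (by the Eckart-Young theorem, the highest-weighted outer product is the closest rank-1 matrix in both the 2-norm and the Frobenius norm):
A = rand(5, 4);                    % any matrix with rank > 1
[U, S, V] = svd(A);
A1 = S(1,1) * U(:,1) * V(:,1)';    % highest-weighted outer product
disp(rank(A1))                     % 1
disp(norm(A - A1))                 % error equals the second singular value S(2,2)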

Minimizing a function of vectors

I need to minimize the following sum:
minimize sum over i = 1 to n of fi(v(i), v(i - 1), tangent(i))
v and tangent are vectors.
fi takes the 3 vectors as arguments and returns the cost associated with them. For this function, v(i - 1) is the vector chosen in the previous iteration, and tangent(i) is also known. fi calculates the cost of choosing a vector v(i), given the other two vectors v(i - 1) and tangent(i). The vectors v(0) and v(n) are known, and the tangent(i) values are known in advance for all i = 0 to n.
My task is to determine all such v(i)s such that the total cost of the function values for i = 1 to n is minimized.
Can you please give me any ideas to solve this?
So far I can think of Branch and Bound or dynamic programming methods.
Thanks!
I think this is a problem in mathematical optimisation, with an objective function built up of dot products and arc cosines, subject to the constraint that your vectors should be unit vectors. You could enforce this either with Lagrange multipliers, or by including a normalising step in the arc cosine: if Ti is a unit vector, then for Vi calculate cos^-1(Ti.Vi / sqrt(Vi.Vi)). I would have a go at using a conjugate gradient optimiser for this, or perhaps even Newton's method, with starting point Vi = Ti.
I would hope that this would be reasonably tractable, because each Vi is only related to its neighbouring Vi. You might even get somewhere by repeatedly adjusting each Vi in isolation, one by one, to optimise the objective function. It might be worth just seeing what happens if you repeatedly set Vi to the average of Ti, Vi+1, and Vi-1, and then scale Vi back to a unit vector, as in the sketch below.
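Here is a minimal MATLAB sketch of that last averaging heuristic, assuming V and T are n-by-3 arrays of unit row vectors and that V(1,:) and V(n,:) stay fixed as the known endpoints; the number of sweeps is arbitrary, and in practice you would watch the objective for convergence:
n = 10;
T = randn(n, 3);
T = T ./ sqrt(sum(T.^2, 2));          % known unit tangents (rows normalised)
V = T;                                % starting point: Vi = Ti
for sweep = 1:100                     % fixed-point sweeps over interior vectors
    for i = 2:n-1
        v = T(i,:) + V(i-1,:) + V(i+1,:);   % average of the three neighbours
        V(i,:) = v / norm(v);               % rescale back to a unit vector
    end
end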

permuting the rows and columns of a matrix for clustering [closed]

I have a distance matrix that is 1000x1000 in dimension and symmetric, with 0s along the diagonal. I want to form groupings of distances (clusters) by simultaneously reordering the rows and columns of the matrix. This is like reordering a matrix before you visualize its clusters with a heatmap. I feel like this should be an easy problem, but I am not having much luck finding code that does the permutations online. Can anyone help?
Here is one approach that came to mind:
1. "Sparsify" the matrix so that only "sufficiently close" neighbors have a nonzero value in the matrix.
2. Use a Cuthill-McKee algorithm to compress the bandwidth of the sparse matrix.
3. Do a symmetric reordering of the original matrix using the results from Step 2.
Example
I will use Octave (everything I am doing should also work in MATLAB) since it has a Reverse Cuthill-McKee (RCM) implementation built in.
First, we need to generate a distance matrix. This function creates a random set of points and their distance matrix:
function [x, y, A] = make_rand_dist_matrix(n)
    % n random points in the unit square and their pairwise distance matrix
    x = rand(n, 1);
    y = rand(n, 1);
    A = sqrt((repmat(x, 1, n) - repmat(x', n, 1)).^2 + ...
             (repmat(y, 1, n) - repmat(y', n, 1)).^2);
end
Let's use that to generate and visualize a 100-point example.
[x, y, A] = make_rand_dist_matrix(100);
surf(A);
Viewing the surface plot from above gives the image below (yours will be different, of course, since the points are random).
Warm colors represent greater distances than cool colors. Row (or column, if you prefer) i in the matrix contains the distances between point i and all points. The distance between point i and point j is in entry A(i, j). Our goal is to reorder the matrix so that the row corresponding to point i is near rows corresponding to points a short distance from i.
A simple way to sparsify A is to make all entries greater than some threshold zero, and that is what is done below, although more sophisticated approaches may prove more effective.
B = A < 0.2; % sparsify A -- only values less than 0.2 are nonzeros in B
p = symrcm(B); % compute reordering by Reverse Cuthill-McKee
surf(A(p, p)); % visualize reordered distance matrix
The matrix is now ordered in a way that brings nearby points closer together in the matrix. This result is not optimal, of course. Sparse matrix bandwidth compression is computed using heuristics, and RCM is a very simple approach. As I mentioned above, more sophisticated approaches for producing the sparse matrix may give better results, and different algorithms may also yield better results for the problem.
Just for Fun
Another way to look at what happened is to plot the points and connect a pair of points if their corresponding rows in the matrix are adjacent. Your goal is to have the lines connecting pairs of points that are near each other. For a more dramatic effect, we use a larger set of points than above.
[x, y, A] = make_rand_dist_matrix(2000);
plot(x, y); % plot the points in their initial, random order
Clearly, connections are all over the place and are occurring over a wide variety of distances.
B = A < 0.2; % sparsify A
p = symrcm(B);
plot(x(p), y(p)) % plot the reordered points
After reordering, the connections tend to be over much smaller distances and much more orderly.
Two MATLAB functions do this: symrcm and symamd.
Note that there is no unique solution to this problem. Clustering is another approach.
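If you want to try the clustering route instead, here is a hedged sketch that orders the rows by a hierarchical clustering of the distances (this assumes the Statistics toolbox in MATLAB, or the statistics package in Octave):
d = squareform(A);              % condense the symmetric distance matrix
Z = linkage(d, 'average');      % average-linkage hierarchical clustering
[~, ~, p] = dendrogram(Z, 0);   % leaf order of the full dendrogram
surf(A(p, p));                  % distance matrix reordered by cluster structure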
