Algorithm for grouping elements from a multi-dimensional matrix

Hi fellows
I have a question in regards to an algorithm problem I am trying to solve.
The input data to the algorithm is a matrix that can be described as follows:
The matrix has N dimensions, like the one in the attached picture (which has 2 dimensions for example).
There are a number of elements (denoted as c1, .. c5 in the attached sample matrix) that are mapped into the cells of the matrix.
An element can occur in one or more cells in the matrix.
The algorithm should work as follows:
Pick (and then remove) elements from the matrix and group them into buckets of E elements, so that:
the elements in each bucket don't have any common values on any of the dimensions
each element is assigned only into one bucket
For example, referring to the sample matrix attached, if we selected E to be 2, then a possible grouping could be: {c1, c2}; {c3, c4}; and {c5} by itself. Check: c1 and c2 (and likewise c3 and c4) don't have any common column or row values in the matrix.
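To make the constraint concrete, here is a small Python sketch of the check and of a naive greedy packing I have in mind; the placement of c1..c5 is made up since I can't reproduce the picture here, and I'm not claiming this greedy pass is efficient or optimal (that is exactly what I'm asking about).

def conflicts(cells_a, cells_b):
    """True if two elements share a value on any dimension."""
    dims = len(next(iter(cells_a)))
    return any({c[d] for c in cells_a} & {c[d] for c in cells_b} for d in range(dims))

def greedy_buckets(elements, E):
    """elements: dict mapping element name -> set of coordinate tuples (its cells).
    Returns buckets of at most E elements whose members pairwise share no
    coordinate value on any dimension (first-fit greedy, not optimal)."""
    remaining = dict(elements)
    buckets = []
    while remaining:
        bucket = []
        for name, cells in list(remaining.items()):
            if len(bucket) == E:
                break
            if all(not conflicts(cells, remaining[b]) for b in bucket):
                bucket.append(name)
        for name in bucket:
            del remaining[name]
        buckets.append(bucket)
    return buckets

# A made-up placement of c1..c5 in a 4x4 matrix, cells given as (row, column).
sample = {
    "c1": {(0, 0), (1, 1)},
    "c2": {(2, 2), (3, 3)},
    "c3": {(0, 2)},
    "c4": {(1, 3)},
    "c5": {(0, 3), (2, 0)},
}
print(greedy_buckets(sample, 2))   # [['c1', 'c2'], ['c3', 'c4'], ['c5']]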
The problem has been doing my head in for some time now and I thought it may be worth asking if some whiz here has a solution.
Thanks

Related

How does Principal Component Initialization work for determining the weights of the map vectors in Self-Organizing Maps?

I have studied basic SOM initialization and was looking to understand exactly how this process, PCI, works for initializing the weight vectors on the map. My understanding is that, for a two-dimensional map, this initialization method looks at the eigenvectors for the two largest eigenvalues of the data matrix and then uses the subspace spanned by these eigenvectors to initialize the map. Does that mean that, in order to get the initial map weights, this method takes random linear combinations of the two largest eigenvectors to generate the map weights? Is there a pattern?
For example, for 40 input data vectors on the map, does the lininit initialization method take combinations a1*[e1] + a2*[e2] where [e1] and [e2] are the two largest eigenvectors and a1 and a2 are random integers ranging from -3 to 3? Or is there a different mechanism? I was looking to make sure I knew exactly how lininit takes the two largest eigenvectors of the input data matrix and uses them to construct the initial weight vectors for the map.
The SOM creates a map that preserves the neighbourhood relationship between nearby nodes. Random initialisation does not help this process, since the nodes start in random positions. Therefore, the idea of using PCA initialisation is just a shortcut to get the map closer to its final state. This saves a lot of computation.
So how does this work? The first two principal components (PCs) are used. Set the initial weights as a linear combination of the PCs. Rather than using random a1 and a2, the weights are set in a range that corresponds to the scale of the principal components.
For example, for a 5x3 map, a1 and a2 can both be in the range (-1, 1) with the relevant number of elements. In other words, for the 5x3 map, a1 = [-1.0 -0.5 0.0 0.5 1.0] and a2 = [-1.0 0.0 1.0], with 5 nodes and 3 nodes, respectively.
Then set the weight of each node. For a rectangular SOM, each node has indices [m, n]; use the values of a1[m] and a2[n]. Thus, for all m = [1 2 3 4 5] and n = [1 2 3]:
weight[m, n] = a1[m] * e1 + a2[n] * e2
That is how to initialize the weights using the principal components. This makes the initial state globally ordered, so now the SOM algorithm is used to create the local ordering.
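If it helps, here is a minimal numpy sketch of that initialization, following the formula above. Scaling each eigenvector by the square root of its eigenvalue and adding back the data mean are my own choices for putting the weights "in a range that corresponds to the scale of the principal components"; actual lininit implementations (e.g. in the SOM Toolbox) may differ in such details.

import numpy as np

def lininit(data, map_rows=5, map_cols=3):
    """PCA-style ('linear') initialization of a rectangular SOM.
    data: (n_samples, n_features) array.
    Returns weights of shape (map_rows, map_cols, n_features)."""
    # Principal components of the (centered) input data.
    mean = data.mean(axis=0)
    cov = np.cov(data - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalue order
    e1 = eigvecs[:, -1] * np.sqrt(eigvals[-1])      # largest PC, scaled to the data spread
    e2 = eigvecs[:, -2] * np.sqrt(eigvals[-2])      # second largest PC, scaled
    # Evenly spaced coefficients in [-1, 1], as in the 5x3 example above.
    a1 = np.linspace(-1.0, 1.0, map_rows)           # e.g. [-1, -0.5, 0, 0.5, 1]
    a2 = np.linspace(-1.0, 1.0, map_cols)           # e.g. [-1, 0, 1]
    weights = np.empty((map_rows, map_cols, data.shape[1]))
    for m in range(map_rows):
        for n in range(map_cols):
            # weight[m, n] = a1[m] * e1 + a2[n] * e2, offset by the data mean
            weights[m, n] = mean + a1[m] * e1 + a2[n] * e2
    return weights

# Usage: 40 random 4-dimensional input vectors, 5x3 map.
rng = np.random.default_rng(0)
w = lininit(rng.normal(size=(40, 4)))
print(w.shape)   # (5, 3, 4)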
The Principal Component part of the name is a reference to https://en.wikipedia.org/wiki/Principal_component_analysis.
Here is the idea. You start with data points described by vectors of many underlying factors. But those factors may be correlated in your data. So, for example, if you're measuring height, weight, blood pressure, etc., you expect that tall people will weigh more. What you want to do is replace this with vectors of factors that are not correlated with each other in your data.
So your principal component is a vector of length 1 which is as strongly correlated as possible with the variation in your dataset.
Your secondary component is the vector of length 1 at right angles to the first which is as strongly correlated as possible with the rest of the variation in your data set.
Your tertiary component is the vector of length 1 at right angles to the first two which is as strongly correlated as possible with the rest of the variation in your data set.
And so on.
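A short numpy illustration of that construction, on made-up height/weight data: the components come out as unit vectors, mutually at right angles, each explaining a decreasing share of the variance.

import numpy as np

rng = np.random.default_rng(1)
# Correlated data, e.g. height/weight-like measurements (made up).
height = rng.normal(170, 10, 500)
weight = 0.9 * (height - 170) + rng.normal(70, 5, 500)
data = np.column_stack([height, weight])

cov = np.cov(data - data.mean(axis=0), rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                  # largest variance first
components = eigvecs[:, order].T                   # rows are PC1, PC2, ...

print(np.linalg.norm(components, axis=1))          # each component has length 1
print(components[0] @ components[1])               # ~0: at right angles
print(eigvals[order] / eigvals.sum())              # share of the variance per component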
In practice you may start with many factors, but most of the information is captured in just the first few. For example in the results of intelligence testing the first component is IQ and the second is the difference between how you are at verbal and quantitative reasoning.
How this applies to SOM initialization is that a simple linear model built from the principal components is a pretty good guess at the answer you're looking for, so starting there reduces how much work you have to do to finish getting the answer.

Can Knapsack algorithms be implemented in Column major form?

I searched for Knapsack algorithm on the net, and in all the implementations, I saw that the 2D array is of the form:
int K[n+1][W+1];
where n is the number of elements and W is the maximum weight which can be accommodated in the Knapsack.
This array was filled in a bottom up manner, in a row major format. Can it even be done in a column major format?
Roughly the only requirement on the order in which the array is filled is that, if a <= b and c <= d, then the (a,c) cell is not filled after the (b,d) cell. This follows from tracing the data dependencies of the dynamic program. Row-major, column-major, and many other fill orders are possible.
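To make that concrete, here is a Python sketch (not the C array from the question, just the same recurrence) in which the table is filled column by column; the dependency argument above is exactly why this order works.

def knapsack_column_major(values, weights, W):
    """0/1 knapsack DP, filling K[i][w] column by column (outer loop over
    weight w, inner loop over item count i) instead of the usual row-major
    order.  K[i][w] still only reads K[i-1][w] and K[i-1][w - weights[i-1]],
    both of which are already filled when this order is used."""
    n = len(values)
    K = [[0] * (W + 1) for _ in range(n + 1)]
    for w in range(W + 1):            # columns: capacity 0..W
        for i in range(1, n + 1):     # rows: first i items
            K[i][w] = K[i - 1][w]
            if weights[i - 1] <= w:
                K[i][w] = max(K[i][w],
                              K[i - 1][w - weights[i - 1]] + values[i - 1])
    return K[n][W]

print(knapsack_column_major([60, 100, 120], [10, 20, 30], 50))   # 220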

Savitzky–Golay filter for 2D images

I would like to ask about Savitzky–Golay filter on 2D-images.
What are the best coefficients and order to choose for finding local details in the image?
Moreover, if someone has an explanation of the coefficients and the orders for 2D images, that would be perfect.
Thanks in advance
Please check out this website:
https://en.wikipedia.org/wiki/Savitzky%E2%80%93Golay_filter#Two-dimensional_convolution_coefficients
UPDATE (thank you for the suggestion, @Rasclatt): the relevant section is reproduced here:
Two-dimensional smoothing and differentiation can also be applied to tables of data values, such as intensity values in a photographic image which is composed of a rectangular grid of pixels.[16] [17] The trick is to transform part of the table into a row by a simple ordering of the indices of the pixels. Whereas the one-dimensional filter coefficients are found by fitting a polynomial in the subsidiary variable, z to a set of m data points, the two-dimensional coefficients are found by fitting a polynomial in subsidiary variables v and w to a set of m × m data points. The following example, for a bicubic polynomial and m = 5, illustrates the process, which parallels the process for the one dimensional case, above.[18]
The square of 25 data values, d1 − d25, becomes a vector when the rows are placed one after another. The Jacobian has 10 columns, one for each of the parameters a00 − a03, and 25 rows, one for each pair of v and w values. Each row has the form of the monomials v^i w^j with i + j <= 3, i.e. (1, v, v^2, v^3, w, vw, v^2 w, w^2, v w^2, w^3). The convolution coefficients are calculated as C = (J^T J)^-1 J^T. The first row of C contains 25 convolution coefficients, which can be multiplied with the 25 data values to provide a smoothed value for the central data point (13) of the 25.
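If you want to see those coefficients computed, here is a small numpy sketch of the least-squares construction quoted above, for the bicubic fit on a 5x5 window (the function name and defaults are mine). It only produces the smoothing (a00) coefficients; derivative filters would come from other rows of C, and the window size and polynomial order are yours to choose.

import numpy as np

def sg2d_smoothing_coeffs(window=5, degree=3):
    """Savitzky-Golay 2-D smoothing coefficients for a square window,
    following the construction quoted above: fit sum_{i+j<=degree} a_ij * v^i * w^j
    to the window, compute C = (J^T J)^{-1} J^T, and the row of C belonging
    to a00 gives the smoothed value at the central pixel."""
    half = window // 2
    offsets = np.arange(-half, half + 1)
    # Exponent pairs (i, j) with i + j <= degree; (0, 0) comes first so that
    # row 0 of C corresponds to the parameter a00.
    exps = [(i, j) for i in range(degree + 1)
                   for j in range(degree + 1 - i)]
    rows = []
    for v in offsets:              # one row of J per (v, w) pair in the window
        for w in offsets:
            rows.append([v**i * w**j for (i, j) in exps])
    J = np.array(rows, dtype=float)                 # 25 x 10 for window=5, degree=3
    C = np.linalg.solve(J.T @ J, J.T)               # (J^T J)^{-1} J^T
    return C[0].reshape(window, window)             # 25 smoothing coefficients

coeffs = sg2d_smoothing_coeffs()
print(coeffs.round(4))
print(coeffs.sum())    # ~1.0, as expected for a smoothing kernel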
Check out the links below, which use SURE (Stein's unbiased risk estimator) to minimize the mean squared error between your estimate and the image. This method is useful for denoising and data smoothing.
This one is about optimizing the parameters of a 1-D Savitzky–Golay filter (helpful for understanding the 2-D part):
https://ieeexplore.ieee.org/abstract/document/6331560/?part=1
This one is about optimizing the parameters of a 2-D Savitzky–Golay filter:
https://ieeexplore.ieee.org/document/6738095/

Efficient way of finding the min value in each given region

Given an m×n real matrix G, we first define two real-valued functions and, using them, a value m(X) for each matrix X; per the title, m(X) is the minimum value in X.
Now, given r regions of G, denoted G_1, ..., G_r (here, a region of G is a submatrix of G formed from some columns and some rows of G), our problem is to compute m(G_1), ..., m(G_r) with as few operations as possible. Are there any methods, like building a hash table or sorting, to get the results faster? Thanks!
========================
For example, if G={{1,2,3},{4,5,6},{7,8,9}}, then
G_1 could be {{1,2},{7,8}}
G_2 could be {{1,3},{4,6},{7,9}}
G_3 could be {{5,6},{8,9}}
=======================
Currently, for each G_i we need m×n comparisons to compute m(G_i). Thus, for m(G_1), ..., m(G_r) there would be r×m×n comparisons. However, I notice that G_i and G_j may overlap, so there may be some other approach that is more efficient. Any suggestions would be highly appreciated!
Depending on how many times the min/max type data is needed, you could consider a structure that holds min/max information in between the matrix values, i.e. in the interstices between values. Thus, for your example G = {{1,2,3},{4,5,6},{7,8,9}}, we would define a relationship matrix R of size (m·n) × (m·n), with values drawn from the set C = {-1 = less than, 0 = equals, 1 = greater than}.
For each element n, R holds nine relationship values (n,1), (n,2), ..., (n,9), each a member of C (note that (n,n) is defined and equals 0). Thus R[4,:] = (1,1,1,0,-1,-1,-1,-1,-1). Now consider any of your subsets G_1, ...: knowing the positions of a subset's members gives you offsets into R, which resolve to indexes into each row R[n,:] and return the desired relationship information directly, without value comparisons.
You, of course, will have to decide whether the overhead in space and calculation to build R exceeds the cost of just computing what you need each time it's needed. Certain optimizations are available, including the realization that R is mirrored (with opposite sign) across the major diagonal, and that you could fold "equals" into, say, "less than", so that C has only two values. Depending on the original matrix G, other optimizations can be had if it is known that a row or column is sorted.
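Here is a small numpy sketch of that idea for the 3x3 example; note it is 0-indexed, so R[4, :] below describes the element whose value is 5, and whether precomputing R actually pays off depends, as said above, on how often the regions are queried.

import numpy as np

G = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Flatten G row by row and precompute the relationship matrix R once:
# R[p, q] = -1, 0 or 1 according to whether element p is less than, equal
# to, or greater than element q (the (m*n) x (m*n) matrix described above,
# here with 0-based indices).
flat = G.ravel()
R = np.sign(flat[:, None] - flat[None, :]).astype(np.int8)

def region_min(rows, cols):
    """m(G_i) for the region picked out by the given row and column indices,
    answered with lookups into R rather than fresh value comparisons."""
    idx = [r * G.shape[1] + c for r in rows for c in cols]   # flat positions
    best = idx[0]
    for p in idx[1:]:
        if R[p, best] < 0:       # element p is smaller than the current best
            best = p
    return flat[best]

print(region_min([0, 2], [0, 1]))      # G_1 = {{1,2},{7,8}} -> 1
print(region_min([1, 2], [1, 2]))      # G_3 = {{5,6},{8,9}} -> 5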
And since some computers (mainframes, supercomputers, etc) store data into RAM in column-major order, store your dataset so that it fills in with the rows and columns transposed thus allowing column-to-column type operations (vector calculations) to actually favor the columns. Check your architecture.

Find the "largest" dense sub matrix in a large sparse matrix

Given a large sparse matrix (say 10k+ by 1M+), I need to find a subset, not necessarily contiguous, of the rows and columns that forms a dense matrix (all non-zero elements). I want this submatrix to be as large as possible (not the largest sum, but the largest number of elements) within some aspect ratio constraints.
Are there any known exact or approximate solutions to this problem?
A quick scan on Google seems to give a lot of close-but-not-exactly results. What terms should I be looking for?
edit: Just to clarify; the submatrix need not be contiguous. In fact the row and column order is completely arbitrary, so adjacency is completely irrelevant.
A thought based on Chad Okere's idea (a rough code sketch follows these steps):
Order the rows from largest count to smallest count (not necessary but might help perf)
Select two rows that have a "large" overlap
Add all other rows that won't reduce the overlap
Record that set
Add whatever row reduces the overlap by the least
Repeat at #3 until the result gets too small
Start over at #2 with a different starting pair
Continue until you decide the result is good enough
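Here is a rough Python sketch of those steps, assuming a 0/1 matrix given as a list of lists. It tries every starting pair instead of only pairs with a "large" overlap, and it uses a minimum column count as the "too small" cut-off, so treat it as an illustration of the idea rather than a tuned implementation.

from itertools import combinations

def dense_submatrix_overlap(matrix, min_cols=2):
    """Heuristic sketch of the row-overlap idea above.
    matrix: list of lists of 0/1.  Returns (row_indices, column_indices)
    of the best all-nonzero submatrix found, scored by element count."""
    # Step 1: rows as sets of non-zero columns, largest first.
    row_sets = {i: {j for j, v in enumerate(row) if v} for i, row in enumerate(matrix)}
    order = sorted(row_sets, key=lambda i: len(row_sets[i]), reverse=True)
    best = (set(), set())

    # Steps 2/7: try every starting pair (a real implementation would
    # restrict this to pairs with a "large" overlap).
    for a, b in combinations(order, 2):
        overlap = row_sets[a] & row_sets[b]
        if len(overlap) < min_cols:
            continue
        rows = {a, b}
        # Step 3: add all rows that don't reduce the overlap at all.
        for r in order:
            if r not in rows and overlap <= row_sets[r]:
                rows.add(r)
        # Steps 4-6: record the set, then keep adding the row that costs the
        # least overlap, until no row keeps at least min_cols columns.
        while True:
            if len(rows) * len(overlap) > len(best[0]) * len(best[1]):
                best = (set(rows), set(overlap))
            candidates = [r for r in order if r not in rows
                          and len(overlap & row_sets[r]) >= min_cols]
            if not candidates:
                break
            r = max(candidates, key=lambda r: len(overlap & row_sets[r]))
            rows.add(r)
            overlap &= row_sets[r]
    return best

M = [[1, 1, 0, 0, 1, 0, 1],
     [1, 1, 1, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 0, 1]]
print(dense_submatrix_overlap(M))   # ({0, 1, 2}, {1, 4, 6})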
I assume you want something like this. You have a matrix like
1100101
1110101
0100101
You want columns 1,2,5,7 and rows 1 and 2, right? That submatrix would be 4x2 with 8 elements. Or you could go with columns 2,5,7 and rows 1,2,3, which would be a 3x3 matrix.
If you want an 'approximate' method, you could start with a single non-zero element, then go on to find another non-zero element and add it to your list of rows and columns. At some point you'll run into a non-zero element whose row and column, if added to your collection, would leave the collection no longer entirely non-zero.
So for the above matrix, if you added 1,1 and 2,2 you would have rows 1,2 and columns 1,2 in your collection. If you tried to add 3,7 it would cause a problem, because 3,1 is zero. So you couldn't add it. You could add 2,5 and 2,7 though, creating the 4x2 submatrix.
You would basically iterate until you can't find any more new rows and columns to add. That would get you to a local optimum. You could store the result and start again with another start point (perhaps one that didn't fit into your current solution).
Then just stop when you can't find any more after a while.
That, obviously, would take a long time, but I don't know if you'll be able to do it any more quickly.
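A quick Python sketch of that greedy growth, with a handful of random restarts standing in for "start again with another start point" (the function name and restart count are just illustrative):

import random

def grow_dense_submatrix(matrix, restarts=20, seed=0):
    """Sketch of the grow-from-a-seed idea above: start from a non-zero
    element, keep adding non-zero elements whose row and column can join
    the collection without introducing a zero, restart a few times, and
    keep the biggest all-nonzero submatrix found."""
    rng = random.Random(seed)
    nonzero = [(i, j) for i, row in enumerate(matrix)
                      for j, v in enumerate(row) if v]
    best_rows, best_cols = set(), set()
    for _ in range(restarts):
        i0, j0 = rng.choice(nonzero)
        rows, cols = {i0}, {j0}
        for i, j in rng.sample(nonzero, len(nonzero)):
            new_rows, new_cols = rows | {i}, cols | {j}
            # Accept (i, j) only if the enlarged row/column set stays dense.
            if all(matrix[r][c] for r in new_rows for c in new_cols):
                rows, cols = new_rows, new_cols
        if len(rows) * len(cols) > len(best_rows) * len(best_cols):
            best_rows, best_cols = rows, cols
    return best_rows, best_cols

M = [[1, 1, 0, 0, 1, 0, 1],
     [1, 1, 1, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 0, 1]]
print(grow_dense_submatrix(M))   # the largest all-nonzero block found across the restarts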
I know you aren't working on this anymore, but I thought someone might have the same question as me in the future.
So, after realizing this is an NP-hard problem (by reduction from MAX-CLIQUE) I decided to come up with a heuristic that has worked well for me so far:
Given an N x M binary/boolean matrix, find a large dense submatrix:
Part I: Generate reasonable candidate submatrices
Consider each of the N rows to be a M-dimensional binary vector, v_i, where i=1 to N
Compute a distance matrix for the N vectors using the Hamming distance
Use the UPGMA (Unweighted Pair Group Method with Arithmetic Mean) algorithm to cluster vectors
Initially, each of the v_i vectors is a singleton cluster. Step 3 above (clustering) gives the order that the vectors should be combined into submatrices. So each internal node in the hierarchical clustering tree is a candidate submatrix.
Part II: Score and rank candidate submatrices
For each submatrix, calculate D, the number of elements in the dense subset of the vectors for the submatrix by eliminating any column with one or more zeros.
Select the submatrix that maximizes D
I also had some considerations regarding the minimum number of rows that needed to be preserved from the initial full matrix, and I would discard any candidate submatrix that did not meet this criterion before selecting the submatrix with the maximum D value.
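For anyone who wants to try it, here is a compact Python/scipy sketch of that pipeline; average linkage on Hamming distances is exactly UPGMA, the min_rows argument stands in for the minimum-row criterion just mentioned, and scoring every internal node of the tree is the simplest way I know to enumerate the candidate submatrices. The function name and defaults are mine.

import numpy as np
from scipy.cluster.hierarchy import linkage

def upgma_dense_submatrix(matrix, min_rows=2):
    """Sketch of the heuristic above: UPGMA-cluster the rows by Hamming
    distance, treat every internal node of the tree as a candidate row set,
    keep only all-ones columns for each candidate, and return the candidate
    with the largest element count D (subject to a minimum row count)."""
    X = np.asarray(matrix, dtype=bool)
    n = X.shape[0]
    # 'average' linkage on Hamming distances is UPGMA.
    Z = linkage(X, method='average', metric='hamming')
    # Leaves of each internal node, built bottom-up from the linkage table.
    members = {i: {i} for i in range(n)}
    best_D, best = -1, None
    for k, (a, b, _, _) in enumerate(Z):
        rows = members[int(a)] | members[int(b)]
        members[n + k] = rows
        if len(rows) < min_rows:
            continue
        dense_cols = np.where(X[sorted(rows)].all(axis=0))[0]
        D = len(rows) * len(dense_cols)
        if D > best_D:
            best_D, best = D, (sorted(rows), dense_cols.tolist())
    return best

M = [[1, 1, 0, 0, 1, 0, 1],
     [1, 1, 1, 0, 1, 0, 1],
     [0, 1, 0, 0, 1, 0, 1]]
print(upgma_dense_submatrix(M))   # ([0, 1, 2], [1, 4, 6])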
Is this a Netflix problem?
MATLAB or some other sparse matrix libraries might have ways to handle it.
Is your intent to write your own?
Maybe the 1D approach for each row would help you. The algorithm might look like this:
Loop over each row
Find the index of the first non-zero element
Find the index of the non-zero row element with the largest span between non-zero columns in each row and store both.
Sort the rows from largest to smallest span between non-zero columns.
At this point I start getting fuzzy (sorry, not an algorithm designer). I'd try looping over each row, lining up the indexes of the starting point, looking for the maximum non-zero run of column indexes that I could.
You don't specify whether or not the dense matrix has to be square. I'll assume not.
I don't know how efficient this is or what its Big-O behavior would be. But it's a brute force method to start with.
EDIT: This is NOT the same as the problem below. My bad...
But based on the last comment below, it might be equivalent to the following:
Find the furthest vertically separated pair of zero points that have no zero point between them.
Find the furthest horizontally separated pair of zero points that have no zeros between them ?
Then the horizontal region you're looking for is the rectangle that fits between these two pairs of points?
This exact problem is discussed in a gem of a book called "Programming Pearls" by Jon Bentley, and, as I recall, although there is a solution in one dimension, there is no easy answer for the 2-d or higher dimensional variants ...
The 1-D problem is, effectively: find the largest sum of a contiguous subset of a sequence of numbers.
Iterate through the elements, keeping track of a running total starting from some previous element, and of the maximum subtotal seen so far (along with the start and end elements that generated it). At each element, if the running subtotal is greater than the maximum seen so far, the maximum and its end element are updated. If the running total goes below zero, the start element is reset to the current element and the running total is reset to zero.
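In code, that 1-D scan is essentially Kadane's algorithm; a Python sketch:

def max_subarray(xs):
    """Kadane's algorithm, as described above: one pass, tracking the best
    running total ending at the current element and the best seen overall,
    together with the start/end indices that produced it."""
    best_sum, best_start, best_end = xs[0], 0, 0
    run_sum, run_start = xs[0], 0
    for i in range(1, len(xs)):
        if run_sum < 0:                 # running total went negative: restart here
            run_sum, run_start = xs[i], i
        else:
            run_sum += xs[i]
        if run_sum > best_sum:          # new best contiguous subset
            best_sum, best_start, best_end = run_sum, run_start, i
    return best_sum, best_start, best_end

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))   # (6, 3, 6) -> [4, -1, 2, 1]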
The 2-D problem came from an attempt at a visual image-processing algorithm: within a stream of brightness values representing pixels in a 2-color image, find the "brightest" rectangular area in the image, i.e. the contained 2-D sub-matrix with the highest sum of brightness values, where "brightness" was measured as the difference between a pixel's brightness value and the overall average brightness of the entire image (so many elements had negative values).
EDIT: To look up the 1-D solution I dredged up my copy of the 2nd edition of this book, and in it, Jon Bentley says "The 2-D version remains unsolved as this edition goes to print..." which was in 1999.
