Prove all diagonal elements of a symmetric matrix are different

Given a 15*15 symmetric matrix, with each row containing all the numbers from 1 to 15 and each column containing all the numbers from 1 to 15, how do you go about proving that all the diagonal elements will be different?
I tried to prove that no two diagonal elements can be the same, but couldn't come up with anything solid. I even tried it for a 5*5 matrix, but couldn't find anything to prove it.
Any help would be appreciated!

This is a problem about symmetric Latin squares. The first observation (which requires a short proof) is that each of the numbers 1 to 15 occurs an even number of times in the off-diagonal positions: by symmetry, an occurrence at position (i, j) with i ≠ j is mirrored by one at (j, i), so off-diagonal occurrences pair up. Since each number occurs 15 times in total and 15 is odd, each number must occur at least once in a diagonal position. But there are only 15 diagonal positions, and so each number must occur exactly once on the diagonal.
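This parity argument can be checked numerically. A minimal sketch, using the standard symmetric Latin square with entry ((i + j) mod n) + 1 as a test case (that construction is an assumption for illustration, not part of the answer):

```python
from collections import Counter

def diagonal_is_permutation(square):
    """Check the parity argument on a symmetric Latin square over {1..n}."""
    n = len(square)
    # Off-diagonal occurrences pair up by symmetry, so every count is even.
    off = Counter(square[i][j] for i in range(n) for j in range(n) if i != j)
    assert all(c % 2 == 0 for c in off.values())
    # Hence, for odd n, the diagonal must contain each symbol exactly once.
    return sorted(square[i][i] for i in range(n)) == list(range(1, n + 1))

# A symmetric Latin square on {1..15}: entry ((i + j) mod 15) + 1.
n = 15
square = [[(i + j) % n + 1 for j in range(n)] for i in range(n)]
print(diagonal_is_permutation(square))  # True
```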

If by 'prove' you mean demonstrate for a particular matrix, see below. If by 'prove' you mean mathematically prove, well, all diagonal matrices are symmetric matrices, and a diagonal matrix isn't required to have unique elements, so not all symmetric matrices have unique elements on the diagonal.
One way to test a particular matrix is to make a new array containing all the diagonal elements, eliminate duplicates in that array, and test the length. Another is to take each diagonal element and compare it against the diagonal elements with a higher index. Here's the latter as pseudocode using 0-based arrays:
unique = TRUE
for i = 0 to 14 {
    value = matrix[i][i]
    for j = i+1 to 14    // doesn't loop if i+1 > 14
        if (value == matrix[j][j])
            unique = FALSE
}
ADDED: The OP points out I missed the restriction on the contents of each row and column! For a symmetric NxN matrix, the antidiagonal is palindromic: the entry at (i, N+1-i) equals the entry at (N+1-i, i), so antidiagonal entries pair up around the centre. If N is odd, a single element lies on both the diagonal and the antidiagonal (and of course, if N is even, no element lies on both). Combine that with the requirement that each row and each column contains all N values, and you can argue that the diagonal element must be different for each row. This isn't formal, but I hope it helps.

We can assume the given matrix is m * m, and we fill it with m distinct numbers: N1, N2, ..., Nm.
Because each number must show up in each column and each row exactly once, each number shows up m times in the matrix.
Because the matrix is symmetric, every off-diagonal occurrence above the diagonal is mirrored by one below it. So if a number shows up x times above the diagonal, it shows up x times below the diagonal, i.e. 2 * x times off the diagonal in total, which is always even.
Therefore, if the given m is odd, each number must show up an odd number of times, hence at least once, on the diagonal; with only m diagonal positions for m numbers, each shows up there exactly once. If the given m is even, a number need not show up on the diagonal at all, because 2 * x is already even.

Related

Find positions bigger than all their adjacent positions in an NxN matrix

What would be an efficient way to solve the following problem?
Given an NxN matrix of natural numbers, return ALL positions (matrix indexes, i and j) whose number is bigger than all their adjacent positions (horizontal and vertical only, if position is out of the matrix it counts as -1).
In other words, matrix[i][j] has to be bigger than all of the following:
matrix[i+1][j], matrix[i-1][j], matrix[i][j+1], matrix[i][j-1]
If matrix[i][j] fulfilled the condition then (i,j) is a valid position.
Is there any way to solve this without checking every single cell and making all 4 comparisons?
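For comparison, the check-every-cell baseline the question hopes to beat can be sketched like this (the function name is illustrative; the -1 convention for out-of-range neighbours follows the problem statement):

```python
def peak_positions(matrix):
    n = len(matrix)

    def get(i, j):
        # Positions outside the matrix count as -1.
        return matrix[i][j] if 0 <= i < n and 0 <= j < n else -1

    result = []
    for i in range(n):
        for j in range(n):
            neighbours = [get(i + 1, j), get(i - 1, j), get(i, j + 1), get(i, j - 1)]
            if all(matrix[i][j] > v for v in neighbours):
                result.append((i, j))
    return result

print(peak_positions([[1, 2, 3],
                      [4, 5, 6],
                      [7, 8, 9]]))  # [(2, 2)]
```

This does 4 comparisons per cell, O(N^2) total. Note that since every other cell can be a peak in the worst case (e.g. a checkerboard of 2s and 1s), the output alone can have Θ(N^2) entries, which suggests the full scan is hard to beat when all positions are required.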

Why does multiplying a column-stochastic matrix by a vector that sums to one result in a vector that again sums to one?

Suppose I have an nxn column-stochastic matrix. If I multiply it by a vector of length n whose elements sum to one, I get a resultant vector of length n that again sums to one. Why does this happen? What if I give the vector of length n a sum of 0.8 or 1.2?
Edit:
What happens if one of the columns of the matrix doesn't add up to 1?
Consider matrix B=4x4 and vector A=4x1. Now when we are calculating the output vector sum it can be broken as
a1*(b11+b21+b31+b41)+a2*(b12+b22+b32+b42)+a3*(b13+b23+b33+b43)+a4*(b14+b24+b34+b44)
Now since all columns sum to 1 since it's column stochastic the
sum=a1*1+a2*1+a3*1+a4*1=1
since the vector summed to 1. Now if one of the columns does not sum to 1, that column's contribution is scaled by its actual sum rather than by 1, weighted by the corresponding entry of vector A. For example, if
b13+b23+b33+b43=0.8
then
sum=a1*1+a2*1+a3*(0.8)+a4*1=a1*1+a2*1+a3*1+a4*1 -0.2*a3
So there is a leak of 0.2*a3 from the original sum of 1.
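The column-sum bookkeeping above can be replayed in a few lines of code (the 2x2 matrix below is a made-up column-stochastic example):

```python
def mat_vec(B, a):
    # y_i = sum_j B[i][j] * a[j]
    return [sum(B[i][j] * a[j] for j in range(len(a))) for i in range(len(B))]

# Each column of B sums to 1, so B is column stochastic.
B = [[0.5, 0.2],
     [0.5, 0.8]]
a = [0.3, 0.7]                # sums to 1
print(sum(mat_vec(B, a)))     # 1.0 (up to floating-point rounding)

# Scale the input so it sums to 0.8: the output sum scales the same way,
# because sum_i y_i = sum_j (sum of column j) * a_j = sum_j a_j here.
a2 = [0.8 * v for v in a]
print(sum(mat_vec(B, a2)))    # 0.8 (up to rounding)
```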

Possibility of making all diagonal elements of a square matrix 1, if the matrix has only 0s and 1s

Let M be an n x n matrix with each entry equal to either 0 or 1. Let m[i][j]
denote the entry in row i and column j. A diagonal entry is one of the
form m[i][i] for some i. Swapping rows i and j of the matrix M denotes the following action:
we swap the values m[i][k] and m[j][k] for k = 1, 2, ..., n. Swapping two columns
is defined analogously. We say that M is rearrangeable if it is possible to swap some of the pairs of rows and some of the pairs of columns (in any sequence) so that,
after all the swapping, all the diagonal entries of M are equal to 1.
(a) Give an example of a matrix M that is not rearrangeable, but for
which at least one entry in each row and each column is equal to 1.
(b) Give a polynomial-time algorithm that determines whether a matrix
M with 0-1 entries is rearrangeable.
I tried a lot but could not reach any conclusion. Please suggest an algorithm for this.
I think this post is on topic here because I think the answer is the assignment problem: http://en.wikipedia.org/wiki/Assignment_problem. Consider the job of putting a 1 in column i, for each i. Each row can do some subset of those jobs. If you can find an assignment such that a different row is capable of putting a 1 in each column, then you can make the diagonal all 1s by rearranging the rows so that row i puts a 1 in column i.
Suppose that there is an assignment that solves the problem. Paint the cells that hold the 1s for the solution red. Notice that permuting rows leaves a single red cell in each row and in each column. Similarly permuting columns leaves a single red cell in each row and each column. Therefore no matter how much you permute rows and columns I can restore the diagonal by permuting rows. Therefore if there is any solution which places 1s on all the diagonals, no matter how much you try to disguise it by permuting both rows and columns I can restore a diagonal by permuting only rows. Therefore the assignment algorithm fails to solve this problem exactly when there is no solution, for example if the only 1s are in the top row and the leftmost column.
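One way to turn this into the polynomial-time algorithm part (b) asks for is a standard augmenting-path bipartite matching. The sketch below is an illustrative implementation, not code from the answer:

```python
def rearrangeable(M):
    # Row i can put a 1 in column j exactly when M[i][j] == 1, so M is
    # rearrangeable iff there is a perfect matching from rows to columns.
    n = len(M)
    match = [-1] * n  # match[j] = row currently assigned to column j

    def augment(i, seen):
        for j in range(n):
            if M[i][j] == 1 and not seen[j]:
                seen[j] = True
                if match[j] == -1 or augment(match[j], seen):
                    match[j] = i
                    return True
        return False

    return all(augment(i, [False] * n) for i in range(n))

# Part (a)-style example: every row and column has a 1, yet columns 1 and 2
# can only be served by row 0, so no rearrangement works.
M = [[0, 1, 1],
     [1, 0, 0],
     [1, 0, 0]]
print(rearrangeable(M))  # False
```

Each augmenting search is O(n^2), so the whole test runs in O(n^3), comfortably polynomial.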

Finding a square of side length R in the 2D plane

I was at a high-frequency trading firm interview, and they asked me:
Find a square whose side length is R, given n points in the 2D plane.
Conditions:
--sides parallel to the axes
--it contains at least 5 of the n points
--running complexity does not depend on R
They told me to give them an O(n) algorithm.
Interesting problem, thanks for posting! Here's my solution. It feels a bit inelegant but I think it meets the problem definition:
Inputs: R, P = {(x_0, y_0), (x_1, y_1), ..., (x_N-1, y_N-1)}
Output: (u,v) such that the square with corners (u,v) and (u+R, v+R) contains at least 5 points from P, or NULL if no such (u,v) exist
Constraint: asymptotic run time should be O(n)
Consider tiling the plane with RxR squares. Construct a sparse matrix, B defined as
B[i][j] = {(x,y) in P | floor(x/R) = i and floor(y/R) = j}
As you are constructing B, if you find an entry that contains at least five elements stop and output (u,v) = (i*R, j*R) for i,j of the matrix entry containing five points.
If the construction of B did not yield a solution then either there is no solution or else the square with side length R does not line up with our tiling. To test for this second case we will consider points from four adjacent tiles.
Iterate the non-empty entries in B. For each non-empty entry B[i][j], consider the collection of points contained in the tile represented by the entry itself and in the tiles above and to the right. These are the points in entries: B[i][j], B[i+1][j], B[i][j+1], B[i+1][j+1]. There can be no more than 16 points in this collection, since each entry must have fewer than 5. Examine this collection and test if there are 5 points among the points in this collection satisfying the problem criteria; if so stop and output the solution. (I could specify this algorithm in more detail, but since (a) such an algorithm clearly exists, and (b) its asymptotic runtime is O(1), I won't go into that detail).
If after iterating the entries in B no solution is found then output NULL.
The construction of B involves just a single pass over P and hence is O(N). B has no more than N elements, so iterating it is O(N). The algorithm for each element in B considers no more than 16 points and hence does not depend on N and is O(1), so the overall solution meets the O(N) target.
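The tiling approach above can be sketched in code. The final O(1) sub-test (left unspecified in the answer) is brute-forced here by trying candidate corners taken from the points' own coordinates, since a valid square can always be slid until its left and bottom edges touch contained points; treating square boundaries as inclusive is an assumption:

```python
import math
from collections import defaultdict

def find_square(R, points):
    # Bucket points into an R x R tiling of the plane.
    B = defaultdict(list)
    for (x, y) in points:
        key = (math.floor(x / R), math.floor(y / R))
        B[key].append((x, y))
        if len(B[key]) >= 5:
            return (key[0] * R, key[1] * R)  # the tile itself works

    # Otherwise any valid square straddles a 2x2 block of tiles; check
    # every block that contains a non-empty tile.
    for (i, j) in list(B):
        for bi, bj in [(i, j), (i - 1, j), (i, j - 1), (i - 1, j - 1)]:
            cand = [p for di in (0, 1) for dj in (0, 1)
                    for p in B.get((bi + di, bj + dj), [])]
            # <= 16 candidates (each tile holds < 5), so this is O(1).
            for (u, _) in cand:
                for (_, v) in cand:
                    inside = sum(1 for (x, y) in cand
                                 if u <= x <= u + R and v <= y <= v + R)
                    if inside >= 5:
                        return (u, v)
    return None

print(find_square(1.0, [(0.9, 0.9), (1.1, 0.9), (0.9, 1.1),
                        (1.1, 1.1), (1.0, 1.0)]))  # (0.9, 0.9)
```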
Run through the set once, keeping the 5 largest x values in a (sorted) local array. Maintaining the sorted local array is O(N) (constant-time work performed at most N times).
Define xMax and xMin as the x-coordinates of the points with the largest and 5th-largest x values respectively (i.e. a[0] and a[4]).
Sort a[] again on Y value, and set yMin and yMax as above, again in constant time.
Define deltaX = xMax- xMin, and deltaY as yMax - yMin, and R = largest of deltaX and deltaY.
The square of side length R located with upper-right at (xMax,yMax) meets the criteria.
Observation if R is fixed in advance:
O(N) complexity means no sort is allowed except on a fixed number of points, as only a Radix sort would meet the criteria and it requires a constraint on the values of xMax-xMin and of yMax-yMin, which was not provided.
Perhaps the trick is to start with the point furthest down and left, and move up and right. The lower-left-most point can be determined in a single pass of the input.
Moving up and right in steps and counting points in the square requires sorting the points on X and Y in advance, which, to be done in O(N) time, requires that the Radix-sort constraint be met.

Find the count of values greater than x in an N-dimensional matrix, where each value is the sum of its indices

We are given an N-dimensional matrix of order [m][m][m]...n times, where each position contains the sum of its indices.
For example, in a 6x6 matrix A, the value at position A[3][4] will be 7.
We have to find the total count of elements greater than x. For a 2-dimensional matrix we have the following approach:
If we know one index, say [i][j] with i + j = x, then we can trace out a diagonal by repeatedly doing [i++][j--] or [i--][j++], with the constraint that i and j always stay within the bounds of the matrix.
For example, in the two-dimensional matrix A[6][6], for the value at A[3][4] (x = 7), the diagonal can be traced via:
A[1][6] -> A[2][5] -> A[3][4] -> A[4][3] -> A[5][2] -> A[6][1]
Here we have converted our problem into another problem: count the elements below the diagonal, including the diagonal itself.
We can count these in O(m) time instead of spending O(m^2), where 2 is the order of the matrix.
But if we consider an N-dimensional matrix, how will we do it? In an N-dimensional matrix, if we know the index of a location
where the sum of indices is x, say A[i1][i2][i3][i4]...[in],
then there may be multiple diagonals satisfying that condition: after doing i1-- we could increment any of {i2, i3, i4, ..., in}.
So the approach used above for the 2-dimensional matrix becomes useless here, because there only the two variable quantities i1 and i2 were present.
Please help me find a solution.
For 2D: the count of the elements below the diagonal is a triangular number.
For 3D: the count of the elements below the diagonal plane is a tetrahedral number.
Note that the Kth tetrahedral number is the sum of the first K triangular numbers.
For nD: it is the n-simplex number (which is the sum of the first (n-1)-simplex numbers).
The value of the Kth n-simplex number is
S(k, n) = k * (k+1) * (k+2) * ... * (k + n - 1) / n! = BinomialCoefficient(k + n - 1, n)
Edit: this method works "as is" only for values of X below the main anti-diagonal (hyper)plane.
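This closed form can be sanity-checked against a brute-force count. The sketch below assumes 1-based indices and counts tuples with index sum <= k, matching the question's 6x6 example; as noted, the formula only holds while k stays below the main anti-diagonal, i.e. k <= m + n - 1:

```python
from itertools import product
from math import comb

def count_leq(m, n, k):
    # Number of 1-based index tuples in [1, m]^n with index sum <= k.
    # Summing C(s - 1, n - 1) over s = n..k telescopes to C(k, n); the
    # upper bound m never binds while k <= m + n - 1.
    return comb(k, n)

def count_leq_brute(m, n, k):
    return sum(1 for t in product(range(1, m + 1), repeat=n) if sum(t) <= k)

for m, n, k in [(6, 2, 7), (5, 3, 6), (4, 4, 7)]:
    print(count_leq(m, n, k), count_leq_brute(m, n, k))
```

The count of elements greater than x is then m^n minus this cumulative count.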
Generating-function approach:
Suppose we have the polynomial
A(s) = 1 + s + s^2 + s^3 + ... + s^m
Then its nth power
B(s) = A(s)^n has an important property: the coefficient of the kth power of s is the number of ways to compose k from n summands (each between 0 and m). So the sum of the coefficients of s^0 through s^k gives us the count of the elements on or below the kth diagonal.
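These coefficient sums can be computed directly by repeated polynomial multiplication; a minimal sketch with coefficients stored as plain lists (function and variable names are illustrative):

```python
def diag_count(m, n, k):
    a = [1] * (m + 1)  # coefficients of A(s) = 1 + s + ... + s^m
    b = [1]            # B(s) starts as the constant polynomial 1
    for _ in range(n):
        c = [0] * (len(b) + m)  # multiply B(s) by A(s)
        for i, bi in enumerate(b):
            for j, aj in enumerate(a):
                c[i + j] += bi * aj
        b = c
    # Sum of coefficients of s^0..s^k: tuples in [0, m]^n with sum <= k.
    return sum(b[:k + 1])

print(diag_count(5, 2, 3))  # 10: pairs (i, j) in [0,5]^2 with i + j <= 3
```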
For a 2-dimensional matrix, you converted the problem into another problem: count the elements below the diagonal, including the diagonal.
Try to visualize it for a 3-D matrix. In the case of a 3-dimensional matrix, the problem reduces to another problem: count the elements below the diagonal plane, including the plane itself.
