Algorithm for partial symmetry of a matrix

I'm trying to calculate, as a percentage, how symmetric a given matrix is.
The "traditional" way of testing symmetry would be: the input is an arbitrary square matrix M of size N×N, and the algorithm's output must be true (= symmetric) if M[i,j] = M[j,i] for all j≠i, false otherwise.
What would be an adequate way of calculating a percentage instead, rather than just saying symmetric or asymmetric? Maybe counting the cells where M[i,j] = M[j,i] and dividing by the overall number of cells (i,j)?
So, for example, if I have the following matrices:
        1 1 1           1 1 1
    A = 2 2 2       B = 2 2 2
        1 1 2           3 4 5
then I need to know that A is "more symmetric" than B, even though both are asymmetric.

You should first define a per-cell symmetry distance metric. It should be zero if the symmetric cells are the same, or some other number if they aren't.
For example:
s(i,j):= (m(i,j)==m(j,i) ? 0:1) // returns 0/1 if the symmetric cell is/isn't the same
or
s(i,j):= |m(i,j)-m(j,i)| // returns the absolute difference between the two cells
Then just sum the distances for all the cells:
int SymmetricDistance(matrix) {
    int dist = 0;
    for (int i = 0; i < matrix.Width; i++)
        for (int j = i; j < matrix.Width; j++) // check that the matrix is square first
            dist = dist + s(i, j);
    return dist;
}
now you can say matrix A is "more symmetric" than matrix B iff
SymmetricDistance(A) < SymmetricDistance(B)

I agree with @Sten Petrov's answer in general. However, if you are looking for a percentage of symmetry specifically:
First, find the total number of pairs of elements which could be symmetric in an NxN matrix.
You could find this by splitting the matrix along the diagonal and counting the number of elements. Since adding 1 to N increases the total number of pairs by N, a general rule for finding the total pairs is to sum the numbers from 1 to N. However, rather than looping just use the sum formula:
Total Possible = N * (N + 1) / 2
The matrix is perfectly symmetrical iff all the pairs are symmetrical. Therefore, percentage of symmetry can be defined as the fraction of symmetric pairs to total possible pairs.
Symmetry = Symmetric Pairs / Total Pairs
Pseudo-code:
int matchingPairs = 0;
int N = matrix.Width;
int possiblePairs = N * (N + 1) / 2;
for (int i = 0; i < N; ++i) {
    for (int j = 0; j <= i; ++j) {
        matchingPairs += (matrix[i][j] == matrix[j][i]) ? 1 : 0;
    }
}
// cast to avoid integer division truncating the result to 0 or 1
float percentSymmetric = (float) matchingPairs / possiblePairs;
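For a quick sanity check, here is a minimal Python sketch of the same counting scheme (assuming the matrix is a list of equal-length rows), applied to the asker's A and B:

def percent_symmetric(matrix):
    # fraction of pairs (i, j), j <= i, with matrix[i][j] == matrix[j][i];
    # diagonal cells count as matching pairs, as in the derivation above
    n = len(matrix)
    possible_pairs = n * (n + 1) // 2
    matching_pairs = sum(1 for i in range(n) for j in range(i + 1)
                         if matrix[i][j] == matrix[j][i])
    return matching_pairs / possible_pairs

A = [[1, 1, 1], [2, 2, 2], [1, 1, 2]]
B = [[1, 1, 1], [2, 2, 2], [3, 4, 5]]
print(percent_symmetric(A))  # 0.666..., 4 of 6 pairs match
print(percent_symmetric(B))  # 0.5, 3 of 6 pairs match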

Related

Could anyone tell me a better solution for this problem? I could only think of a brute-force way, which is O(n^2).

Recently I was attempting the following problem:
Given an array of integers, arr.
Find sum of floor of (arr[i]/arr[j]) for all pairs of indices (i,j).
e.g.
Input: arr[]={1,2,3,4,5}
Output: Sum=27.
Explanation:
(1/1)+(1/5)+(1/4)+(1/2)+(1/3) = 1+0+0+0+0 = 1
(5/1)+(5/5)+(5/4)+(5/2)+(5/3) = 5+1+1+2+1 = 10
(4/1)+(4/5)+(4/4)+(4/2)+(4/3) = 4+0+1+2+1 = 8
(2/1)+(2/5)+(2/4)+(2/2)+(2/3) = 2+0+0+1+0 = 3
(3/1)+(3/5)+(3/4)+(3/2)+(3/3) = 3+0+0+1+1 = 5
I could only think of the naive O(n^2) solution. Is there a better approach?
Thanks in advance.
A possibility resides in "quickly" skipping the elements that are the same integer multiple of a given element (after rounding).
For the given example, the vertical bars below delimit runs of equal ratios (the lower triangle is all zeroes and ignored; I show the elements on the left and the ratios on the right):
1 -> 2 | 3 | 4 | 5 ≡ 2 | 3 | 4 | 5
2 -> 3 | 4 5 ≡ 1 | 2 2
3 -> 4 5 ≡ 1 1
4 -> 5 ≡ 1
For bigger arrays, the constant runs can be longer.
So the algorithm principle is
sort all elements increasingly;
process the elements from smallest to largest;
for a given element, find the index of the first double and count the number of skipped elements;
from there, find the index of the first triple and count twice the number of skipped elements;
continue with higher multiples until you exhaust the tail of the array.
A critical operation is to "find the next multiple". This should be done by an exponential search followed by a dichotomic search, so that the number of operations remains logarithmic in the number of elements to skip (a pure dichotomic search would be logarithmic in the total number of remaining elements). Hence the cost of a search will be proportional to the sum of the logarithms of the distances between the multiples.
Hopefully, this sum will be smaller than the sum of the distances themselves, though in the worst case the complexity remains O(N). In the best case, O(log(N)).
A global analysis is difficult and in theory the worst-case complexity remains O(N²); but in practice it could go down to O(N log N), because the worst case would require that the elements grow faster than a geometric progression of common ratio 2.
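To make the principle concrete, here is a rough Python sketch of the run-counting idea. It uses a plain binary search (bisect) to find each run boundary, rather than the exponential-then-dichotomic search recommended above, so it is a simplification of the tuned version, not the full algorithm:

import bisect

def floor_ratio_sum(arr):
    # sum of floor(arr[i] / arr[j]) over all ordered pairs (i, j)
    a = sorted(arr)
    n = len(a)
    total = 0
    for v in a:
        m = 1
        lo = bisect.bisect_left(a, m * v)
        while lo < n:
            # every element in [m*v, (m+1)*v) has ratio exactly m
            hi = bisect.bisect_left(a, (m + 1) * v)
            total += m * (hi - lo)
            lo = hi
            m += 1
    return total

print(floor_ratio_sum([1, 2, 3, 4, 5]))  # 27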
Addendum:
If the array contains numerous repeated values, it can be beneficial to compress it by storing a repetition count and a single instance of every value. This can be done after sorting.
int[] arr = { 1, 2, 3, 4, 5 };
int result = 0;
int BuffSize = arr.Max() * 2;
// b[i] = how many array elements have value >= i (suffix counts)
int[] b = new int[BuffSize + 1];
// count[v] = how many times the value v occurs in the array
int[] count = new int[BuffSize];
for (int i = 0; i < arr.Length; ++i)
    count[arr[i]]++;
for (int i = BuffSize - 1; i >= 1; i--)
{
    b[i] = b[i + 1] + count[i];
}
for (int i = 1; i < BuffSize; i++)
{
    if (count[i] == 0)
    {
        continue;
    }
    // every element in [j, j + i) divided by i gives the same quotient mul
    for (int j = i, mul = 1; j < BuffSize; j += i, mul++)
    {
        result += (b[j] - b[Math.Min(BuffSize - 1, j + i)]) * mul * count[i];
    }
}
This code takes advantage of knowing the differences between successive values ahead of time, and only processes the remaining portion of the array rather than redundantly processing the entire thing n^2 times.
I believe it has a worst-case runtime of O(n*sqrt(n)*log(n)).

Find minimum sum that cannot be formed

Given the positive integers from 1 to N, where N can go up to 10^9. Some K of these integers are missing; K can be at most 10^5. I need to find the minimum sum that can't be formed from the remaining N-K elements in an efficient way.
Example: say we have N=5, which means we have {1,2,3,4,5}; let K=2 with missing elements {3,5}. The remaining array is now {1,2,4}, and the minimum sum that can't be formed from these remaining elements is 8, because:
1=1
2=2
3=1+2
4=4
5=1+4
6=2+4
7=1+2+4
So how do I find this un-summable minimum?
I know how to find it if I can store all the remaining elements, by this approach:
We can use something similar to Sieve of Eratosthenes, used to find primes. Same idea, but with different rules for a different purpose.
Store the numbers from 0 to the sum of all the numbers, and cross off 0.
Then take numbers, one at a time, without replacement.
When we take the number Y, then cross off every number that is Y plus some previously-crossed off number.
When we have done this for every number that is remaining, the smallest un-crossed-off number is our answer.
However, its space requirement is high. Can there be a better and faster way to do this?
Here's an O(sort(K))-time algorithm.
Let 1 ≤ x_1 ≤ x_2 ≤ … ≤ x_m be the integers not missing from the set. For all i from 0 to m, let y_i = x_1 + x_2 + … + x_i be the partial sum of the first i terms. If it exists, let j be the least index such that y_j + 1 < x_{j+1}; otherwise, let j = m. It is possible to show via induction that the minimum sum that cannot be made is y_j + 1 (the hypothesis is that, for all i from 0 to j, the numbers x_1, x_2, …, x_i can make all of the sums from 0 to y_i and no others).
To handle the fact that the missing numbers are the ones specified, there is an optimization that handles several consecutive numbers in constant time. I'll leave it as an exercise (see the sketch below for the basic criterion).
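A minimal Python sketch of the basic criterion, without the run-compression optimization (so it is linear in the numbers actually present, given sorted input):

def min_unmakeable(xs):
    # xs: the integers present, in increasing order
    reach = 0  # invariant: every sum in [0, reach] is makeable
    for x in xs:
        if x > reach + 1:
            break  # reach + 1 cannot be made; later numbers are even bigger
        reach += x
    return reach + 1

print(min_unmakeable([1, 2, 4]))  # 8, matching the example above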
Let X be a bitvector initialized to zero, in which bit v-1 stands for the value v. For each number N_i you set X = (X | (X << N_i)) | (1 << (N_i - 1)). (I.e. you can make N_i itself, and you can increase any value you could make previously by N_i.)
This will set a '1' for every value you can make.
Running time is linear in N, and bitvector operations are fast.
process 1: X = 00000001
process 2: X = (00000001 | 00000001 << 2) | (00000010) = 00000111
process 4: X = (00000111 | 00000111 << 4) | (00001000) = 01111111
First number you can't make is 8.
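Python integers can serve directly as arbitrary-width bitvectors, so the idea transcribes almost literally (bit v-1 stands for the value v, matching the worked example above):

def min_unmakeable_bitvector(nums):
    X = 0
    for v in nums:
        X = (X | (X << v)) | (1 << (v - 1))  # extend all sums by v, and add v itself
    s = 1
    while X & (1 << (s - 1)):  # find the lowest value whose bit is unset
        s += 1
    return s

print(min_unmakeable_bitvector([1, 2, 4]))  # 8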
Here is my O(K lg K) approach. I didn't test it very much (laziness, plus possible overflow), sorry about that. If it works for you, I can explain the idea:
#include <algorithm>
#include <cstdlib>
#include <iostream>
using namespace std;

const int MAXK = 100003;
int n, k;
int a[MAXK];

long long sum(long long a, long long b) { // sum of integers from a to b (0 if empty)
    return max(0ll, b * (b + 1) / 2 - a * (a - 1) / 2);
}

void answer(long long ans) {
    cout << ans << endl;
    exit(0);
}

int main()
{
    cin >> n >> k;
    for (int i = 1; i <= k; ++i) {
        cin >> a[i];
    }
    a[0] = 0;       // sentinels: treat 0 and n+1 as missing, so every run of
    a[k+1] = n+1;   // existing numbers is an interval between two missing ones
    sort(a, a+k+2);
    long long ans = 0;
    for (int i = 1; i <= k+1; ++i) {
        // interval of existing numbers [lo, hi]
        int lo = a[i-1] + 1;
        int hi = a[i] - 1;
        // only lo can break the invariant: if lo > ans + 1, then ans + 1 is unmakeable
        if (lo <= hi && lo > ans + 1)
            break;
        ans += sum(lo, hi);
    }
    answer(ans + 1);
}
EDIT: well, thank God, @DavidEisenstat wrote a description of the approach I used in his answer, so I don't have to write it up. Basically, what he mentions as an exercise is not adding the "existing numbers" one by one, but all at the same time. Before this, you just need to check whether one of them breaks the invariant, which can be done using binary search. Hope it helped.
EDIT2: as @DavidEisenstat pointed out in the comments, the binary search is not needed, since only the first number in every interval of existing numbers can break the invariant. I modified the code accordingly.

What is the probability that the array will remain the same?

This question was asked in a Microsoft interview. Very much curious to know why these people ask such strange questions on probability?
Given a rand(N), a random generator which generates random number from 0 to N-1.
int A[N]; // An array of size N
for(i = 0; i < N; i++)
{
int m = rand(N);
int n = rand(N);
swap(A[m],A[n]);
}
EDIT: Note that the seed is not fixed.
What is the probability that array A remains the same?
Assume that the array contains unique elements.
Well I had a little fun with this one. The first thing I thought of when I first read the problem was group theory (the symmetric group Sn, in particular). The for loop simply builds a permutation σ in Sn by composing transpositions (i.e. swaps) on each iteration. My math is not all that spectacular and I'm a little rusty, so if my notation is off bear with me.
Overview
Let A be the event that our array is unchanged after permutation. We are ultimately asked to find the probability of event A, Pr(A).
My solution attempts to follow the following procedure:
Consider all possible permutations (i.e. reorderings of our array)
Partition these permutations into disjoint sets based on the number of so-called identity transpositions they contain. This helps reduce the problem to even permutations only.
Determine the probability of obtaining the identity permutation given that the permutation is even (and of a particular length).
Sum these probabilities to obtain the overall probability the array is unchanged.
1) Possible Outcomes
Notice that each iteration of the for loop creates a swap (or transposition) that results in one of two things (but never both):
Two elements are swapped.
An element is swapped with itself. For our intents and purposes, the array is unchanged.
We label the second case. Let's define an identity transposition as follows:
An identity transposition occurs when a number is swapped with itself.
That is, when n == m in the above for loop.
For any given run of the listed code, we compose N transpositions. There can be 0, 1, 2, ... , N of the identity transpositions appearing in this "chain".
For example, consider an N = 3 case:
Given our input [0, 1, 2].
Swap (0 1) and get [1, 0, 2].
Swap (1 1) and get [1, 0, 2]. ** Here is an identity **
Swap (2 2) and get [1, 0, 2]. ** And another **
Note that there is an odd number of non-identity transpositions (1) and the array is changed.
2) Partitioning Based On the Number of Identity Transpositions
Let K_i be the event that i identity transpositions appear in a given permutation. Note this forms an exhaustive partition of all possible outcomes:
No permutation can have two different quantities of identity transpositions simultaneously, and
All possible permutations must have between 0 and N identity transpositions.
Thus we can apply the Law of Total Probability:
Pr(A) = Pr(A | K_0) Pr(K_0) + Pr(A | K_1) Pr(K_1) + ... + Pr(A | K_N) Pr(K_N)
Now we can finally take advantage of the partition. Note that when the number of non-identity transpositions is odd, there is no way the array can go unchanged*. Thus the sum reduces to the terms where N - i is even:
Pr(A) = sum over i with N - i even of Pr(A | K_i) Pr(K_i)
*From group theory, a permutation is even or odd but never both. Therefore an odd permutation cannot be the identity permutation (since the identity permutation is even).
3) Determining Probabilities
So we now must determine two probabilities for N - i even: Pr(K_i) and Pr(A | K_i).
The First Term
The first term, Pr(K_i), represents the probability of obtaining a permutation with i identity transpositions. This turns out to be binomial, since for each iteration of the for loop:
The outcome is independent of the results before it, and
The probability of creating an identity transposition is the same, namely 1/N.
Thus for N trials, the probability of obtaining i identity transpositions is:
Pr(K_i) = C(N, i) * (1/N)^i * (1 - 1/N)^(N-i)
The Second Term
So if you've made it this far, we have reduced the problem to finding Pr(A | K_i) for N - i even. This represents the probability of obtaining an identity permutation given that i of the transpositions are identities. I use a naive counting approach to determine the number of ways of achieving the identity permutation over the number of possible permutations.
First consider the transpositions (n, m) and (m, n) equivalent. Then, let M be the number of distinct non-identity transpositions possible, M = N(N-1)/2 (so M = 6 for N = 4). We will use this quantity frequently.
The goal here is to determine the number of ways a collections of transpositions can be combined to form the identity permutation. I will try to construct the general solution along side an example of N = 4.
Let's consider the N = 4 case with all identity transpositions (i.e. i = N = 4). Let X represent an identity transposition. For each X, there are N possibilities (they are: n = m = 0, 1, 2, ... , N - 1). Thus there are N^i = 4^4 possibilities for achieving the identity permutation. For completeness, we add the binomial coefficient, C(N, i), to consider ordering of the identity transpositions (here it just equals 1). I've tried to depict this below with the physical layout of elements above and the number of possibilities below:
I = _X_ _X_ _X_ _X_
N * N * N * N * C(4, 4) => N^N * C(N, N) possibilities
Now without explicitly substituting N = 4 and i = 4, we can look at the general case. Combining the above with the denominator found previously, we find:
Pr(A | K_N) = 1
This is intuitive. In fact, any value other than 1 should probably alarm you. Think about it: we are given the situation in which all N transpositions are said to be identities. What's the probability that the array is unchanged in this situation? Clearly, 1.
Now, again for N = 4, let's consider 2 identity transpositions (i.e. i = N - 2 = 2). As a convention, we will place the two identities at the end (and account for ordering later). We know now that we need to pick two transpositions which, when composed, will become the identity permutation. Let's place any element in the first location, call it t1. As stated above, there are M possibilities supposing t1 is not an identity (it can't be as we have already placed two).
I = _t1_ ___ _X_ _X_
M * ? * N * N
The only element left that could possibly go in the second spot is the inverse of t1, which is in fact t1 (and this is the only one, by uniqueness of inverses). We again include the binomial coefficient: in this case we have 4 open locations and we are looking to place 2 identity transpositions. How many ways can we do that? 4 choose 2.
I = _t1_ _t1_ _X_ _X_
M * 1 * N * N * C(4, 2) => C(N, N-2) * M * N^(N-2) possibilities
Again looking at the general case, this all corresponds to:
Pr(A | K_{N-2}) = M / M^2 = 1/M
(for N = 4 this is 6/36, matching the table below).
Finally we do the N = 4 case with no identity transpositions (i.e. i = N - 4 = 0). Since there are a lot of possibilities, it starts to get tricky and we must be careful not to double count. We start similarly by placing a single element in the first spot and working out possible combinations. Take the easiest first: the same transposition 4 times.
I = _t1_ _t1_ _t1_ _t1_
M * 1 * 1 * 1 => M possibilities
Let's now consider two unique elements t1 and t2. There are M possibilities for t1 and only M-1 possibilities for t2 (since t2 cannot be equal to t1). If we exhaust all arrangements, we are left with the following patterns:
I = _t1_ _t1_ _t2_ _t2_
M * 1 * M-1 * 1 => M * (M - 1) possibilities (1)st
= _t1_ _t2_ _t1_ _t2_
M * M-1 * 1 * 1 => M * (M - 1) possibilities (2)nd
= _t1_ _t2_ _t2_ _t1_
M * M-1 * 1 * 1 => M * (M - 1) possibilities (3)rd
Now let's consider three unique elements, t1, t2, t3. Let's place t1 first and then t2. As usual, we have:
I = _t1_ _t2_ ___ ___
M * ? * ? * ?
We can't say yet how many possible t2s there can be; we will see why in a minute.
We now place t1 in the third spot. Notice that t1 must go there, since if it were to go in the last spot, we would just be recreating the (3)rd arrangement above. Double counting is bad! This leaves the third unique element t3 for the final position.
I = _t1_ _t2_ _t1_ _t3_
M * ? * 1 * ?
So why did we have to take a minute to consider the number of t2s more closely? The transpositions t1 and t2 cannot be disjoint permutations (i.e. they must share one (and only one since they also cannot be equal) of their n or m). The reason for this is because if they were disjoint, we could swap the order of permutations. This means we would be double counting the (1)st arrangement.
Say t1 = (n, m). t2 must be of the form (n, x) or (y, m) for some x and y in order to be non-disjoint. Note that x may not be n or m, and y may not be n or m. Thus, the number of possible transpositions that t2 could be is actually 2 * (N - 2).
So, coming back to our layout:
I = _t1_ _t2_ _t1_ _t3_
M * 2(N-2) * 1 * ?
Now t3 must be the inverse of the composition of t1 t2 t1. Let's do it out manually:
(n, m)(n, x)(n, m) = (m, x)
Thus t3 must be (m, x). Note this is not disjoint to t1 and not equal to either t1 or t2 so there is no double counting for this case.
I = _t1_ _t2_ _t1_ _t3_
M * 2(N-2) * 1 * 1 => M * 2(N - 2) possibilities
Finally, putting all of these together:
Pr(A | K_0) = (M + 3M(M-1) + 2M(N-2)) / M^4   for N = 4,
which is (6 + 90 + 24) / 1296 = 120/1296, matching the table below.
4) Putting it all together
So that's it. Work backwards, substituting what we found into the original summation given in step 2. I computed the answer to the N = 4 case below. It matches the empirical number found in another answer very closely!
N = 4, M = 6

          | Pr(K_i) | Pr(A | K_i) | Product |
 ---------|---------|-------------|---------|
   i = 0  |  0.316  | 120 / 1296  |  0.029  |
   i = 2  |  0.211  |   6 / 36    |  0.035  |
   i = 4  |  0.004  |    1 / 1    |  0.004  |
 ---------|---------|-------------|---------|
                    |    Sum:     |  0.068  |
Correctness
It would be cool if there was a result in group theory to apply here-- and maybe there is! It would certainly help make all this tedious counting go away completely (and shorten the problem to something much more elegant). I stopped working at N = 4. For N > 5, what is given only gives an approximation (how good, I'm not sure). It is pretty clear why that is if you think about it: for example, given N = 8 transpositions, there are clearly ways of creating the identity with four unique elements which are not accounted for above. The number of ways becomes seemingly more difficult to count as the permutation gets longer (as far as I can tell...).
Anyway, I definitely couldn't do something like this within the scope of an interview. I would get as far as the denominator step if I was lucky. Beyond that, it seems pretty nasty.
Very much curious to know why these people ask such strange questions on probability?
Questions like this are asked because they allow the interviewer to gain insight into the interviewee's
ability to read code (very simple code but at least something)
ability to analyse an algorithm and identify its execution path
skill at applying logic to find possible outcomes and edge cases
reasoning and problem solving skills as they work through the problem
communication and work skills - do they ask questions, or work in isolation based on information at hand
... and so on. The key to having a question that exposes these attributes of the interviewee is to have a piece of code that is deceptively simple. This shakes out the imposters: the non-coder is stuck; the arrogant jump to the wrong conclusion; the lazy or sub-par computer scientist finds a simple solution and stops looking. Often, as they say, it's not whether you get the right answer but whether you impress with your thought process.
I'll attempt to answer the question, too. In an interview I'd explain myself rather than provide a one-line written answer - this is because even if my 'answer' is wrong, I am able to demonstrate logical thinking.
A will remain the same - i.e. elements in the same positions - when
m == n in every iteration (so that every element only swaps with itself); or
any element that is swapped is swapped back to its original position
The first case is the 'simple' case that duedl0r gives, the case that the array isn't altered. This might be the answer, because
what is the probability that array A remains the same?
if the array changes at i = 1 and then reverts back at i = 2, it's in the original state but it didn't 'remain the same' - it was changed, and then changed back. That might be a smartass technicality.
Then considering the chance of elements being swapped and swapped back - I think that calculation is above my head in an interview. The obvious consideration is that it does not need to be a swap followed by the same swap in reverse: there could just as easily be a 'circular' swap between three elements, swapping 1 and 2, then 2 and 3, then 1 and 3, and finally 2 and 3. And continuing, there could be swaps between 4, 5 or more items that are 'circular' like this.
In fact, rather than considering the cases where the array is unchanged, it may be simpler to consider the cases where it is changed. Consider whether this problem can be mapped onto a known structure like Pascal's triangle.
This is a hard problem. I agree that it's too hard to solve in an interview, but that doesn't mean it is too hard to ask in an interview. The poor candidate won't have an answer, the average candidate will guess the obvious answer, and the good candidate will explain why the problem is too hard to answer.
I consider this an 'open-ended' question that gives the interviewer insight into the candidate. For this reason, even though it's too hard to solve during an interview, it is a good question to ask during an interview. There's more to asking a question than just checking whether the answer is right or wrong.
Below is C code to count the number of values of the 2N-tuple of indices that rand can produce and calculate the probability. Starting with N = 0, it shows counts of 1, 1, 8, 135, 4480, 189125, and 12450816, with probabilities of 1, 1, .5, .185185, .0683594, .0193664, and .00571983. The counts do not appear in the Encyclopedia of Integer Sequences, so either my program has a bug or this is a very obscure problem. If so, the problem is not intended to be solved by a job applicant but to expose some of their thought processes and how they deal with frustration. I would not regard it as a good interview problem.
#include <inttypes.h>
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#define swap(a, b) do { int t = (a); (a) = (b); (b) = t; } while (0)
static uint64_t count(int n)
{
// Initialize count of how many times the original order is the result.
uint64_t c = 0;
// Allocate space for selectors and initialize them to zero.
int *r = calloc(2*n, sizeof *r);
// Allocate space for array to be swapped.
int *A = malloc(n * sizeof *A);
if (!A || !r)
{
fprintf(stderr, "Out of memory.\n");
exit(EXIT_FAILURE);
}
// Iterate through all values of selectors.
while (1)
{
// Initialize A to show original order.
for (int i = 0; i < n; ++i)
A[i] = i;
// Test current selector values by executing the swap sequence.
for (int i = 0; i < 2*n; i += 2)
{
int m = r[i+0];
int n = r[i+1];
swap(A[m], A[n]);
}
// If array is in original order, increment counter.
++c; // Assume all elements are in place.
for (int i = 0; i < n; ++i)
if (A[i] != i)
{
// If any element is out of place, cancel assumption and exit.
--c;
break;
}
// Increment the selectors, odometer style.
int i;
for (i = 0; i < 2*n; ++i)
// Stop when a selector increases without wrapping.
if (++r[i] < n)
break;
else
// Wrap this selector to zero and continue.
r[i] = 0;
// Exit the routine when the last selector wraps.
if (2*n <= i)
{
free(A);
free(r);
return c;
}
}
}
int main(void)
{
for (int n = 0; n < 7; ++n)
{
uint64_t c = count(n);
printf("N = %d: %" PRId64 " times, %g probabilty.\n",
n, c, c/pow(n, 2*n));
}
return 0;
}
The behaviour of the algorithm can be modelled as a Markov chain over the symmetric group SN.
Basics
The N elements of the array A can be arranged in N! different permutations. Let us number these permutations from 1 to N!, e.g. by lexicographic ordering. So the state of the array A at any time in the algorithm can be fully characterized by the permutation number.
For example, for N = 3, one possible numbering of all 3! = 6 permutations might be:
a b c
a c b
b a c
b c a
c a b
c b a
State transition probabilities
In each step of the algorithm, the state of A either stays the same or transitions from one of these permutations to another. We are now interested in the probabilities of these state changes. Let us call Pr(i → j) the probability that the state changes from permutation i to permutation j in a single loop iteration.
As we pick m and n uniformly and independently from the range [0, N-1], there are N² possible outcomes, each of which is equally likely.
Identity
For N of these outcomes m = n holds, so there is no change in the permutation. Therefore,
Pr(i → i) = N / N² = 1/N.
Transpositions
The remaining N² - N cases are transpositions, i.e. two elements exchange their positions and therefore the permutation changes. Suppose one of these transpositions exchanges the elements at positions x and y. There are two cases how this transposition can be generated by the algorithm: either m = x, n = y or m = y, n = x. Thus, the probability for each transposition is 2 / N².
How does this relate to our permutations? Let us call two permutations i and j neighbors if and only if there is a transposition which transforms i into j (and vice versa). We then can conclude:
Pr(i → j) = 2 / N² if i and j are neighbors, and Pr(i → j) = 0 otherwise (for i ≠ j).
Transition matrix
We can arrange the probabilities Pr(i → j) in a transition matrix P ∈ [0,1]^(N!×N!). We define
p_ij = Pr(i → j),
where p_ij is the entry in the i-th row and j-th column of P. Note that
Pr(i → j) = Pr(j → i),
which means P is symmetric.
The key point now is the observation of what happens when we multiply P by itself. Take any element p^(2)_ij of P²:
p^(2)_ij = sum over k of p_ik * p_kj = sum over k of Pr(i → k) * Pr(k → j)
The product Pr(i → k) · Pr(k → j) is the probability that starting at permutation i we transition into permutation k in one step, and transition into permutation j after another subsequent step. Summing over all in-between permutations k therefore gives us the total probability of transitioning from i to j in 2 steps.
This argument can be extended to higher powers of P. A special consequence is the following:
p^(N)_ii is the probability of returning back to permutation i after N steps, assuming we started at permutation i.
Example
Let's play this through with N = 3. We already have a numbering for the permutations. With that numbering (and pulling out a common factor of 1/9; diagonal entries are 1/3, neighbor entries 2/9), the transition matrix is:

        | 3 2 2 0 0 2 |
        | 2 3 0 2 2 0 |
P = 1/9 | 2 0 3 2 2 0 |
        | 0 2 2 3 0 2 |
        | 0 2 2 0 3 2 |
        | 2 0 0 2 2 3 |

Multiplying P by itself gives P², and one more multiplication gives P³.
Any element of the main diagonal of P³ gives us the wanted probability, which is 15/81 or 5/27.
Discussion
While this approach is mathematically sound and can be applied to any value of N, it is not very practical in this form. The transition matrix P has N!² entries, which becomes huge very fast. Even for N = 10 the size of the matrix already exceeds 13 trillion entries. A naive implementation of this algorithm therefore appears to be infeasible.
However, in comparison to other proposals, this approach is very structured and doesn't require complex derivations beyond figuring out which permutations are neighbors. My hope is that this structuredness can be exploited to find a much simpler computation.
For example, one could exploit the fact that all diagonal elements of any power of P are equal. Assuming we can easily calculate the trace of PN, the solution is then simply tr(PN) / N!. The trace of PN is equal to the sum of the N-th powers of its eigenvalues. So if we had an efficient algorithm to compute the eigenvalues of P, we would be set. I haven't explored this further than calculating the eigenvalues up to N = 5, however.
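As a sanity check of the whole construction, here is a small Python/NumPy sketch that builds P by brute force for small N (directly from the N² equally likely (m, n) draws, so the neighbor bookkeeping is implicit) and reads the answer off the diagonal of P^N:

import itertools
import numpy as np

def probability_unchanged(N):
    perms = list(itertools.permutations(range(N)))
    index = {p: i for i, p in enumerate(perms)}
    P = np.zeros((len(perms), len(perms)))
    for i, p in enumerate(perms):
        # each of the N^2 equally likely (m, n) draws swaps positions m and n
        for m in range(N):
            for n in range(N):
                q = list(p)
                q[m], q[n] = q[n], q[m]
                P[i, index[tuple(q)]] += 1.0 / N**2
    return np.linalg.matrix_power(P, N)[0, 0]  # all diagonal entries are equal

print(probability_unchanged(3))  # 0.185185... = 5/27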
It's easy to observe the bounds 1/n^n <= p <= 1/n.
Here is an incomplete idea of showing an inverse-exponential upper bound.
You're drawing numbers from {1, 2, .., n} 2n times. If any of them is unique (occurs exactly once), the array will definitely be changed, as that element has gone away and cannot return to its original place.
The probability that a fixed number is unique is 2n * 1/n * (1 - 1/n)^(2n-1) = 2 * (1 - 1/n)^(2n-1), which is asymptotically 2/e², bounded away from 0. [2n because you choose the try on which you get it, 1/n that you get it on that try, (1 - 1/n)^(2n-1) that you do not get it on the other tries.]
If the events were independent, you'd get that the chance that all numbers are non-unique is (1 - 2/e²)^n, which would mean p <= O((1 - 2/e²)^n). Unfortunately, they are not independent. I feel that the bound can be shown with more sophisticated analysis. The keyword is the "balls and bins" problem.
One simplistic bound is
p >= 1 / N^N
since one possible way the array stays the same is if m = n in every iteration, and m equals n with probability 1/N.
It's certainly higher than that. The question is by how much..
Second thought: one could also argue that if you shuffle an array randomly, every permutation has equal probability. Since there are N! permutations, the probability of getting just one (the one we had at the beginning) is
p = 1 / N!
which is a bit better than the previous result.
As discussed, the algorithm is biased. This means not every permutation has the same probability, so 1 / N! is not quite exact. You have to find out what the distribution of the permutations is.
FYI, not sure the bound above (1/n^2) holds:
N=5 -> 0.019648 < 1/25
N=6 -> 0.005716 < 1/36
Sampling code:
import random

def sample(times, n):
    count = 0
    for i in range(times):
        count += p(n)
    return count * 1.0 / times

def p(n):
    # run the N random swaps once; return whether the array is unchanged
    perm = list(range(n))
    for i in range(n):
        a = random.randrange(n)
        b = random.randrange(n)
        perm[a], perm[b] = perm[b], perm[a]
    return perm == list(range(n))

print(sample(500000, 5))
Everyone assumes that A[i] == i, which was not explicitly stated. I'm going to make this assumption too, but note that the probability depends on the contents. For example, if A[i] = 0 for all i, then the probability = 1 for all N.
Here's how to do it. Let P(n,i) be the probability that the resulting array
differs by exactly i transpositions from the original array.
We want to know P(n,0). It's true that:
P(n,0) =
1/n * P(n-1,0) + 1/n^2 * P(n-1,1) =
1/n * P(n-1,0) + 1/n^2 * (1-1/(n-1)) * P(n-2,0)
Explanation: we can get the original array in two ways, either by making a "neutral" transposition in an array that's already good, or by reverting the only "bad" transposition. To get an array with only one "bad" transposition, we can take an array with 0 bad transpositions and make one transposition that is not neutral.
EDIT: -2 instead of -1 in P(n-1,0)
It's not a full solution, but it's something at least.
Take a particular set of swaps that have no effect. We know that it must have been the case that its swaps ended up forming a bunch of loops of different sizes, using a total of n swaps. (For the purposes of this, a swap with no effect can be considered a loop of size 1)
Perhaps we can
1) Break them down into groups based on what the sizes of the loops are
2) Calculate the number of ways to get each group.
(The main problem is that there are a ton of different groups, but I'm not sure how you'd actually calculate this if you don't take into account the different groupings.)
Interesting question.
I think the answer is 1/N, but I don't have any proof. When I find a proof, I will edit my answer.
What I got until now:
If m == n, you won't change the array.
The probability of getting m == n is 1/N, because there are N^2 options and only N of them are suitable ((i,i) for every 0 <= i <= N-1).
Thus, we get N/N^2 = 1/N.
Denote by P_k the probability that after k iterations of swaps, the array of size N remains the same.
P_1 = 1/N (as we saw above).
P_2 = (1/N)*P_1 + ((N-1)/N)*(2/N^2) = 1/N^2 + 2(N-1)/N^3.
Explanation for P_2:
We want to calculate the probability that after 2 iterations, the array with
N elements won't change. We have 2 options:
- in the 2nd iteration we got m == n (probability 1/N)
- in the 2nd iteration we got m != n (probability (N-1)/N)
If m == n, we need the array to be unchanged after the 1st iteration: that is P_1.
If m != n, we need the 1st iteration to have chosen the same m and n
(order is not important), so we get 2/N^2.
Because those events are independent we get P_2 = (1/N)*P_1 + ((N-1)/N)*(2/N^2).
In general: P_k = (1/N)*P_{k-1} + ((N-1)/N)*X (the first term for m == n, the second for m != n).
I have to think more about what X equals. (X is just a placeholder for the real formula, not a constant or anything else.)
Example for N = 2.
All possible swaps:
(1 1 | 1 1),(1 1 | 1 2),(1 1 | 2 1),(1 1 | 2 2),(1 2 | 1 1),(1 2 | 1 2)
(1 2 | 2 1),(1 2 | 2 2),(2 1 | 1 1),(2 1 | 1 2),(2 1 | 2 1),(2 1 | 2 2)
(2 2 | 1 1),(2 2 | 1 2),(2 2 | 2 1),(2 2 | 2 2).
Total = 16. Exactly 8 of them leave the array the same.
Thus, for N = 2, the answer is 1/2.
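That hand count is easy to verify, and to extend to N = 3, with a brute-force enumeration of all N^(2N) equally likely index sequences; a Python sketch:

from itertools import product

def exact_probability(N):
    same = 0
    for seq in product(range(N), repeat=2 * N):  # all (m, n) choices for N swaps
        A = list(range(N))
        for t in range(N):
            m, n = seq[2 * t], seq[2 * t + 1]
            A[m], A[n] = A[n], A[m]
        same += (A == list(range(N)))
    return same / N ** (2 * N)

print(exact_probability(2))  # 0.5 (8 of 16)
print(exact_probability(3))  # 0.185185... (135 of 729)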
EDIT :
I want to introduce another approach:
We can classify swaps into three groups: constructive swaps, destructive swaps and harmless swaps.
A constructive swap is defined to be a swap that causes at least one element to move to its correct place.
A destructive swap is defined to be a swap that causes at least one element to move away from its correct position.
A harmless swap is defined to be a swap that does not belong to the other groups.
It is easy to see that these three groups partition all possible swaps (pairwise intersections are empty).
Now the claim I want to prove:
The array will remain the same if and only if
the number of destructive swaps equals the number of constructive swaps over the iterations.
If someone has a counter-example, please write it down as a comment.
If this claim is correct, we can take all combinations and sum them:
0 harmless swaps, 1 harmless swap, .., N harmless swaps.
For each possible number k of harmless swaps, we check whether N-k is even; if not, we skip it. If yes, we take (N-k)/2 destructive and (N-k)/2 constructive swaps, and just look at all the possibilities.
I would model the problem as a multigraph where nodes are elements of the array and each swap adds an undirected(!) edge between them. Then look for loops somehow (if all nodes are part of a loop => original).
Really need to get back to work! :(
Well, from a mathematical perspective:
To have the array elements swapped in place every time, the rand(N) function must generate the same number twice for int m and int n. The probability that rand(N) generates the same number twice is 1/N.
And we have rand(N) called N times inside the for loop, so we have a probability of 1/(N^2).
Naive implementation in C#.
The idea is to create all the possible permutations of the initial array and enumerate them.
Then we build a matrix of possible transitions between states. Raising this matrix to the N-th power gives a matrix showing how many ways lead from permutation #i to permutation #j in N steps. Element [0,0] shows how many ways lead back to the initial state. The sum of the elements of row #0 shows the total number of different ways. Dividing the former by the latter gives the probability.
In fact, the total number of outcomes is N^(2N).
Output:
For N=1 probability is 1 (1 / 1)
For N=2 probability is 0.5 (8 / 16)
For N=3 probability is 0.1851851851851851851851851852 (135 / 729)
For N=4 probability is 0.068359375 (4480 / 65536)
For N=5 probability is 0.0193664 (189125 / 9765625)
For N=6 probability is 0.0057198259072973293366526105 (12450816 / 2176782336)
class Program
{
static void Main(string[] args)
{
for (int i = 1; i < 7; i++)
{
MainClass mc = new MainClass(i);
mc.Run();
}
}
}
class MainClass
{
int N;
int M;
List<int> comb;
List<int> lastItemIdx;
public List<List<int>> combinations;
int[,] matrix;
public MainClass(int n)
{
N = n;
comb = new List<int>();
lastItemIdx = new List<int>();
for (int i = 0; i < n; i++)
{
comb.Add(-1);
lastItemIdx.Add(-1);
}
combinations = new List<List<int>>();
}
public void Run()
{
GenerateAllCombinations();
GenerateMatrix();
int[,] m2 = matrix;
for (int i = 0; i < N - 1; i++)
{
m2 = Multiply(m2, matrix);
}
decimal same = m2[0, 0];
decimal total = 0;
for (int i = 0; i < M; i++)
{
total += m2[0, i];
}
Console.WriteLine("For N={0} probability is {1} ({2} / {3})", N, same / total, same, total);
}
private int[,] Multiply(int[,] m2, int[,] m1)
{
int[,] ret = new int[M, M];
for (int ii = 0; ii < M; ii++)
{
for (int jj = 0; jj < M; jj++)
{
int sum = 0;
for (int k = 0; k < M; k++)
{
sum += m2[ii, k] * m1[k, jj];
}
ret[ii, jj] = sum;
}
}
return ret;
}
private void GenerateMatrix()
{
M = combinations.Count;
matrix = new int[M, M];
for (int i = 0; i < M; i++)
{
matrix[i, i] = N;
for (int j = i + 1; j < M; j++)
{
if (2 == Difference(i, j))
{
matrix[i, j] = 2;
matrix[j, i] = 2;
}
else
{
matrix[i, j] = 0;
}
}
}
}
private int Difference(int x, int y)
{
int ret = 0;
for (int i = 0; i < N; i++)
{
if (combinations[x][i] != combinations[y][i])
{
ret++;
}
if (ret > 2)
{
return int.MaxValue;
}
}
return ret;
}
private void GenerateAllCombinations()
{
int placeAt = 0;
bool doRun = true;
while (doRun)
{
doRun = false;
bool created = false;
for (int i = placeAt; i < N; i++)
{
for (int j = lastItemIdx[i] + 1; j < N; j++)
{
lastItemIdx[i] = j; // remember the test
if (comb.Contains(j))
{
continue; // tail items should be nulled && their lastItemIdx set to -1
}
// success
placeAt = i;
comb[i] = j;
created = true;
break;
}
if (comb[i] == -1)
{
created = false;
break;
}
}
if (created)
{
combinations.Add(new List<int>(comb));
}
// rollback
bool canGenerate = false;
for (int k = placeAt + 1; k < N; k++)
{
lastItemIdx[k] = -1;
}
for (int k = placeAt; k >= 0; k--)
{
placeAt = k;
comb[k] = -1;
if (lastItemIdx[k] == N - 1)
{
lastItemIdx[k] = -1;
continue;
}
canGenerate = true;
break;
}
doRun = canGenerate;
}
}
}
Start with the probability that m == n on one iteration: P(m == n) = 1/N. Requiring that on all N iterations gives 1/N^N for that case. But then you have to consider the values getting swapped back. So I think the answer is (text editor got me) 1/N^N.
Question: what is the probability that array A remains the same?
Condition: Assume that the array contains unique elements.
I tried the solution in Java.
Random swapping happens on a primitive int array. In Java, method parameters are always passed by value, so what happens inside the swap method does not matter: the element values a[m] and a[n] (see swap(a[m], a[n]) in the code below) are passed, not the array itself.
So the answer is that the array will remain the same, regardless of the condition mentioned above. See the Java code sample below:
import java.util.Random;
public class ArrayTrick {
int a[] = new int[10];
Random random = new Random();
public void swap(int i, int j) {
    // i and j are local copies (pass-by-value): the array is never touched
    int temp = i;
    i = j;
    j = temp;
}
public void fillArray() {
System.out.println("Filling array: ");
for (int index = 0; index < a.length; index++) {
a[index] = random.nextInt(a.length);
}
}
public void swapArray() {
System.out.println("Swapping array: ");
for (int index = 0; index < a.length; index++) {
int m = random.nextInt(a.length);
int n = random.nextInt(a.length);
swap(a[m], a[n]);
}
}
public void printArray() {
System.out.println("Printing array: ");
for (int index = 0; index < a.length; index++) {
System.out.print(" " + a[index]);
}
System.out.println();
}
public static void main(String[] args) {
ArrayTrick at = new ArrayTrick();
at.fillArray();
at.printArray();
at.swapArray();
at.printArray();
}
}
Sample output:
Filling array:
Printing array:
3 1 1 4 9 7 9 5 9 5
Swapping array:
Printing array:
3 1 1 4 9 7 9 5 9 5

Double Squares: counting numbers which are sums of two perfect squares

Source: Facebook Hacker Cup Qualification Round 2011
A double-square number is an integer X which can be expressed as the sum of two perfect squares. For example, 10 is a double-square because 10 = 3² + 1². Given X, how can we determine the number of ways in which it can be written as the sum of two squares? For example, 10 can only be written as 3² + 1² (we don't count 1² + 3² as being different). On the other hand, 25 can be written as 5² + 0² or as 4² + 3².
You need to solve this problem for 0 ≤ X ≤ 2,147,483,647.
Examples:
10 => 1
25 => 2
3 => 0
0 => 1
1 => 1
Factor the number n, and check if it has a prime factor p with odd valuation such that p ≡ 3 (mod 4). It has one if and only if n is not a sum of two squares.
The number of solutions has a closed form expression involving the number of divisors of n. See this, Theorem 3 for a precise statement.
Here is my simple answer in O(sqrt(n)) complexity
x^2 + y^2 = n
x^2 = n-y^2
x = sqrt(n - y^2)
x should be an integer, so (n - y^2) should be a perfect square. Loop over y in [0, sqrt(n)] and check whether (n - y^2) is a perfect square or not.
Pseudocode:
count = 0
for y in range(0, sqrt(n)):        # inclusive upper bound
    if isPerfectSquare(n - y^2):
        count++
return (count + 1) / 2             # integer division; the +1 rounds up for the
                                   # x == y representation, which is found only once
Here's a much simpler solution:
create list of squares in the given range (that's 46340 values for the example given)
for each square value x
if list contains a value y such that x + y = target value (i.e. does [target - x] exist in list)
output √x, √y as solution (roots can be stored in a std::map lookup created in the first step)
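A minimal Python sketch of this lookup approach (using a set for the membership test, and the constraint a <= b to avoid double counting):

import math

def double_square_count(x):
    # count representations x = a^2 + b^2 with 0 <= a <= b
    squares = set()
    r = 0
    while r * r <= x:
        squares.add(r * r)
        r += 1
    count = 0
    a = 0
    while 2 * a * a <= x:          # enforce a <= b
        if (x - a * a) in squares:
            count += 1
        a += 1
    return count

print(double_square_count(10), double_square_count(25), double_square_count(0))  # 1 2 1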
Looping through all pairs (a, b) is infeasible given the constraints on X. There is a faster way though!
For fixed a, we can work out b: b = √(X - a²). b won't always be an integer though, so we have to check this. Due to precision issues, perform the check with a small tolerance: if b is x.99999, we can be fairly certain it's an integer. So we loop through all possible values of a and count all cases where b is an integer. We need to be careful not to double-count, so we place the constraint that a <= b. For X = a² + b², a will be at most √(X/2) with this constraint.
Here is an implementation of this algorithm in C++:
// EPS is a small tolerance, e.g. 1e-9; X holds the input value
int count = 0;
// add EPS to avoid flooring x.99999 to x
for (int a = 0; a <= sqrt(X/2) + EPS; a++) {
    int b2 = X - a*a; // b^2
    int b = (int) (sqrt(b2) + EPS);
    if (fabs(b - sqrt(b2)) < EPS) // check b is an integer
        count++;
}
cout << count << endl;
See it on ideone with sample input
Here's a version which is trivially O(sqrt(N)) and avoids all loop-internal branches.
Start by generating all squares up to the limit, easily done without any multiplications, then initialize an l and an r index.
In each iteration you calculate the sum, then update the two indices and the count based on comparisons with the target value. This is sqrt(N) iterations to generate the table and at most sqrt(N) iterations of the search loop. Estimated running time with a reasonable compiler is at most 10 clock cycles per iteration, so for a maximum input value of 2^31 (sqrt(N) ~= 46341) this should correspond to less than 500K clock cycles, a fraction of a millisecond:
unsigned countPairs(unsigned n)
{
unsigned sq = 0, i;
unsigned square[65536];
for (i = 0; sq <= n; i++) {
square[i] = sq;
sq += i+i+1;
}
unsigned l = 0, r = i-1, count = 0;
do {
unsigned sum = square[l] + square[r];
l += sum <= n; // Increment l if the sum is <= N
count += sum == n; // Increment the count if a match
r -= sum >= n; // Decrement r if the sum is >= N
} while (l <= r);
return count;
}
A good compiler can note that the three compares at the end are all using the same operands so it only needs a single CMP opcode followed by three different conditional move operations (CMOVcc).
I was in a hurry, so I solved it using a rather brute-force approach (very similar to marcog's) in Python 2.6.
import math

def is_perfect_square(x):
    rt = int(math.sqrt(x))
    return rt*rt == x

def double_squares(n):
    rng = int(math.sqrt(n))
    ways = 0
    for i in xrange(rng+1):
        if is_perfect_square(n - i*i):
            ways += 1
    if ways % 2 == 0:
        ways = ways // 2
    else:
        ways = ways // 2 + 1
    return ways

Note: ways will be odd exactly when n is twice a perfect square (the x == y representation is found only once by the loop).
The number of solutions (x, y) of
x^2 + y^2 = n
over the integers is exactly 4 times (the number of divisors of n congruent to 1 mod 4 minus the number of divisors congruent to 3 mod 4).
Similar identities exist also for the problems
x^2 + 2y^2 = n
and
x^2 + y^2 + z^2 + w^2 = n.
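A naive Python check of the two-squares identity above (brute-force divisor enumeration, fine only for small n; note the count is over ordered pairs with signs):

def r2(n):
    # 4 * (#divisors of n that are 1 mod 4  -  #divisors that are 3 mod 4)
    d1 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 1)
    d3 = sum(1 for d in range(1, n + 1) if n % d == 0 and d % 4 == 3)
    return 4 * (d1 - d3)

print(r2(25))  # 12: (+-5,0), (0,+-5), (+-3,+-4), (+-4,+-3)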

Complexity of sampling from mixture model

I have a model where state j among M states is chosen with probability p_j. The probability could be any real number. This specifies a mixture model over the M states. I can access p_j for all j in constant time.
I want to make a large number (N) of random samples. The most obvious algorithm is
1) Compute the cumulative probability distribution P_j = p_1 + p_2 + ... + p_j. O(M)
2) For each sample choose a random float x in [0,1]. O(N)
3) For each sample choose j such that P_{j-1} < x <= P_j (taking P_0 = 0 and P_M = 1), by binary search. O(N log(M))
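For concreteness, a small Python sketch of this baseline (bisect provides the O(log M) lookup of step 3; the weights need not be normalized):

import bisect, random
from itertools import accumulate

def sample_states(p, N):
    cdf = list(accumulate(p))   # step 1: cumulative sums, O(M)
    total = cdf[-1]
    # steps 2-3: N uniform draws, each located in the CDF by binary search
    return [bisect.bisect_left(cdf, random.random() * total) for _ in range(N)]

print(sample_states([0.2, 0.5, 0.3], 10))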
So the asymptotic complexity is O(Nlog(M)). The factor of N is obviously unavoidable, but I am wondering about log(M). Is it possible to beat this factor in a realistic implementation?
I think you can do better using something like the following algorithm, or any other reasonable Multinomial distribution sampler,
// Normalize p_j by the total mass P_M (the last cumulative sum)
for j = 1 to M
    p_hat[j] = p[j] / P_M
// Place the draws from the mixture model in this array
draws = [];
// Sample until we have N iid samples
cdf = 1.0;
for ( j = 1, remaining = N; j <= M && remaining > 0; j++ )
{
// p_hat[j] is the probability of sampling item j and there
// are `remaining` items left to sample. This is just
// `remaining` Bernoulli trials, so draw from a
// Binomial(remaining, p_hat[j] / cdf) distribution to get the
// number of items
//
// Dividing by cdf (the probability mass not yet consumed) ensures
// that *something* is sampled at the end, because when j = M,
// p_hat[M] / cdf = p_hat[M] / p_hat[M] = 1.0
items = Binomial.sample( remaining, p_hat[j] / cdf );
remaining -= items;
cdf -= p_hat[j];
for ( k = 0; k < items; k++ )
draws.push( sample_from_mixture_component( j ))
}
This should take close to O(N) time but it does depend on how efficient your Binomial distribution and mixture model component samplers are.
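NumPy ships this splitting directly as a multinomial draw, so the loop above collapses to a couple of lines. A sketch, where component_samplers is an assumed list of callables, one per state, each returning an array of c draws from that component:

import numpy as np

rng = np.random.default_rng()

def sample_mixture(p, component_samplers, N):
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                   # normalize the weights
    counts = rng.multinomial(N, p)    # how many samples each state gets
    draws = []
    for j, c in enumerate(counts):
        if c > 0:
            draws.extend(component_samplers[j](c))
    rng.shuffle(draws)                # optional: randomize the order
    return draws

# example: a two-component Gaussian mixture
samplers = [lambda c: rng.normal(-2.0, 1.0, c), lambda c: rng.normal(3.0, 0.5, c)]
print(len(sample_mixture([0.3, 0.7], samplers, 1000)))  # 1000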
