Understanding Recursion / how are subproblems combined (Max-Subarray Algorithm) - algorithm

I'm having some problems understanding divide and conquer algorithms. I've read that in order to apply recursion successfully you need to have a "recursive leap of faith" and you shouldn't bother with the details of every step, but I'm not really satisfied with just accepting that recursion works if all the conditions are fulfilled, because it seems like magic to me at the moment and I want to understand why it works.
So I'm given the following recursive algorithm of finding a maximum subarray in pseudocode:
Find-Maximum-Subarray(A, low, high)
  if high == low
    return (low, high, A[low])
  else
    mid = floor((low + high) / 2)
    (left-low, left-high, left-sum) = Find-Maximum-Subarray(A, low, mid)
    (right-low, right-high, right-sum) = Find-Maximum-Subarray(A, mid + 1, high)
    (cross-low, cross-high, cross-sum) = Find-Max-Crossing-Subarray(A, low, mid, high)
    if left-sum >= right-sum and left-sum >= cross-sum
      return (left-low, left-high, left-sum)
    else if right-sum >= left-sum and right-sum >= cross-sum
      return (right-low, right-high, right-sum)
    else
      return (cross-low, cross-high, cross-sum)
where the Find-Max-Crossing-Subarray algorithm is given by the following pseudocode:
Find-Max-Crossing-Subarray(A, low, mid, high)
  left-sum = -INF
  sum = 0
  for i = mid down to low
    sum = sum + A[i]
    if sum > left-sum
      left-sum = sum
      max-left = i
  right-sum = -INF
  sum = 0
  for j = mid + 1 to high
    sum = sum + A[j]
    if sum > right-sum
      right-sum = sum
      max-right = j
  return (max-left, max-right, left-sum + right-sum)
Now when I try to apply this algorithm to an example, I'm having a hard time understanding all the steps.
The array is "broken down" (using the indices, without actually changing the array itself) until high equals low. I thinks this corresponds to the first call, so Find-Maximum-Subarray is first called for all the terms on the left of the array, until high==low==1. Then (low, high, A[low]) is returned which would be (1, 1, A[1]) in this case. Now I don't understand how those values are processed in the remainder of the call.
Furthermore I don't understand how the algorithm actually compares subarrays of lengths > 1. Can anybody explain to me how the algorithm continues once one of the calls of the function has bottomed out, please?

In short:
Let A be an array of length n. You want to compute the max subarray of A, so you call Find-Maximum-Subarray(A, 0, n-1). Now try to make the problem easier:
Case high == low:
In this case the array has only one element, so the solution is trivial.
Case high != low:
In this case the solution is too hard to find directly, so try to make the problem smaller. What happens if we cut the array A into two arrays B1 and B2 of half the length? Now there are only 3 new cases:
a) the max subarray of A is also a subarray of B1 but not of B2
b) the max subarray of A is also a subarray of B2 but not of B1
c) the max subarray of A overlaps with both B1 and B2
So you compute the max subarrays of B1 and B2 separately, look for an overlapping (crossing) solution, and finally take the largest of the three.
The trick is that you can now do the same thing with B1 and B2.
Example:
A =[-1, 2, -1, 1]
Call Find-Maximum-Subarray(A, 0, 3);
  - Call Find-Maximum-Subarray(A, 0, 1); -> returns (1, 1, 2) (because 2 > 1 > -1, see the subcalls)
    - Call Find-Maximum-Subarray(A, 0, 0); -> returns (0, 0, -1)
    - Call Find-Maximum-Subarray(A, 1, 1); -> returns (1, 1, 2)
    - Call Find-Max-Crossing-Subarray(A, 0, 0, 1); -> returns (0, 1, 1)
  - Call Find-Maximum-Subarray(A, 2, 3); -> returns (3, 3, 1) (because 1 > 0 > -1, see the subcalls)
    - Call Find-Maximum-Subarray(A, 2, 2); -> returns (2, 2, -1)
    - Call Find-Maximum-Subarray(A, 3, 3); -> returns (3, 3, 1)
    - Call Find-Max-Crossing-Subarray(A, 2, 2, 3); -> returns (2, 3, 0)
  - Call Find-Max-Crossing-Subarray(A, 0, 1, 3); -> returns (1, 3, 2)
    - Here you have to take at least the elements A[1] and A[2], with a sum of 1,
      but if you also take A[3] = 1 the sum becomes 2. Taking A[0] does not help,
      because A[0] is negative.
  - Now you only have to check which subarray has the largest sum. In this case there are
    two with the same sum: A[1..1] and A[1..3], both summing to 2. Return either one.
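If it helps to step through that trace, here is a minimal Python sketch of the pseudocode above (0-based indices, lower-cased function names are mine); the "combine" step is just the three-way comparison at the end of find_maximum_subarray:

import math

def find_max_crossing_subarray(A, low, mid, high):
    # Best sum of a subarray A[i..mid] that ends at mid.
    left_sum, s, max_left = -math.inf, 0, mid
    for i in range(mid, low - 1, -1):
        s += A[i]
        if s > left_sum:
            left_sum, max_left = s, i
    # Best sum of a subarray A[mid+1..j] that starts at mid+1.
    right_sum, s, max_right = -math.inf, 0, mid + 1
    for j in range(mid + 1, high + 1):
        s += A[j]
        if s > right_sum:
            right_sum, max_right = s, j
    return max_left, max_right, left_sum + right_sum

def find_maximum_subarray(A, low, high):
    if low == high:                       # base case: a single element
        return low, high, A[low]
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    # Combine step: keep whichever of the three candidates has the largest sum.
    return max(left, right, cross, key=lambda t: t[2])

A = [-1, 2, -1, 1]
print(find_maximum_subarray(A, 0, len(A) - 1))  # (1, 1, 2); (1, 3, 2) also has sum 2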

Related

How to divide a number into multiple parts(not equal) so that there sum is equal to input?

I want to divide a number, e.g. an input of 40, into 30 token numbers randomly selected from a range, such that their sum equals the input number, i.e. 40.
Edit:
The max of the range should be 40% and the minimum should be 0.
example:
range = (0,4)
1+1+0+1+1+0+3+0+3+0+0+2+0+4+4+1+1+0+1+1+0+3+0+4+0+2+2+0+4+1 = 40.
Actually, the real-world scenario is that I have a sum of product-user expressions which I need to populate randomly into a record set for each day of last month. I am using PHP but I am unable to work out an algorithm for this situation.
A simple approach exploits the "trial and error" method. It is suitable for reasonably small input values.
Note - it might run for a long time when n is close to p*maxx. If such a case is possible, it is wiser to distribute "holes" rather than "ones" (the second snippet below).
import random

def randparts(n, p, maxx):
    lst = [0] * p
    while n > 0:
        r = random.randrange(p)
        if lst[r] < maxx:
            n -= 1
            lst[r] += 1
    return lst

print(randparts(20, 10, 4))
>>> [2, 0, 3, 2, 4, 2, 1, 3, 0, 3]
def randparts(n, p, maxx):
    if p * maxx >= n * 2:
        lst = [0] * p
        while n > 0:
            r = random.randrange(p)
            if lst[r] < maxx:
                n -= 1
                lst[r] += 1
    else:
        lst = [maxx] * p
        n = maxx * p - n
        while n > 0:
            r = random.randrange(p)
            if lst[r] > 0:
                n -= 1
                lst[r] -= 1
    return lst

print(randparts(16, 10, 4))
print(randparts(32, 10, 4))
>> [2, 0, 0, 3, 4, 0, 0, 3, 2, 2]
>> [3, 4, 4, 4, 4, 0, 3, 3, 4, 3]
Since you mentioned that it is for 'a record set for each day in last month', I assume that the number of tokens could also be 28, or 31, and since you said 'randomly', here is what I would do:
1. create a function that takes in:
a. The number to sum to (40 in your example).
b. The maximum number of a single token (4 in your example).
c. The number of tokens (30 in your example).
2. Within the function, create an array the size of the number of tokens (28, 30, 31, or whatever)
3. Initialize all elements of the array to zero.
4. Check to make sure that it is possible to achieve the sum given the maximum single token value and number of tokens.
5. While I need to increment a token (sum > 0):
a. Select a random token.
b. Determine if the value of the token can be incremented without going over the max single token value.
c. If it can, then increment the token value and decrement the sum.
d. If the token cannot be incremented, then go back to 5a.
6. Return the array of tokens, or however you want them back (you didn't specify).
Here is an example in c#:
public int[] SegmentSum(int sum, int maxPart, int parts)
{
    if (sum < 0 || maxPart < 0 || parts < 0 || parts * maxPart < sum)
        throw new ArgumentOutOfRangeException();

    Random rnd = new Random();
    int[] tokens = Enumerable.Repeat(0, parts).ToArray();
    while (sum > 0)
    {
        int token = rnd.Next(parts);
        if (tokens[token] < maxPart)
        {
            tokens[token]++;
            sum--;
        }
    }
    return tokens;
}
Hope this helps you.

Minimum common remainder of division

I have n pairs of numbers: ( p[1], s[1] ), ( p[2], s[2] ), ... , ( p[n], s[n] )
Where p[i] is integer greater than 1; s[i] is integer : 0 <= s[i] < p[i]
Is there any way to determine the minimum positive integer a, such that for each pair:
( s[i] + a ) mod p[i] != 0
Is there anything better than brute force?
It is possible to do better than brute force. Brute force would be O(A·n), where A is the minimum valid value for a that we are looking for.
The approach described below uses a min-heap and achieves O(n·log(n) + A·log(n)) time complexity.
First, notice that replacing a with a value of the form (p[i] - s[i]) + k * p[i] leads to a remainder equal to zero in the ith pair, for any non-negative integer k. Thus, the numbers of that form are invalid values of a (the solution that we are looking for is different from all of them).
The proposed algorithm is an efficient way to generate the numbers of that form (for all i and k), i.e. the invalid values for a, in increasing order. As soon as the current value differs from the previous one by more than 1, it means that there was a valid a in-between.
The pseudocode below details this approach.
1. construct a min-heap from all the following pairs (p[i] - s[i], p[i]),
where the heap comparator is based on the first element of the pairs.
2. a0 = -1; maxA = lcm(p[i])
3. Repeat
3a. Retrieve and remove the root of the heap, (a, p[i]).
3b. If a - a0 > 1 then the result is a0 + 1. Exit.
3c. if a is at least maxA, then no solution exists. Exit.
3d. Insert into the heap the value (a + p[i], p[i]).
3e. a0 = a
Remark: it is possible for such an a to not exist. If a valid a is not found below LCM(p[1], p[2], ... p[n]), then it is guaranteed that no valid a exists.
I'll show below an example of how this algorithm works.
Consider the following (p, s) pairs: { (2, 1), (5, 3) }.
The first pair indicates that a should avoid values like 1, 3, 5, 7, ..., whereas the second pair indicates that we should avoid values like 2, 7, 12, 17, ... .
The min-heap initially contains the first element of each sequence (step 1 of the pseudocode) -- shown in brackets below:
[1], 3, 5, 7, ...
[2], 7, 12, 17, ...
We retrieve and remove the head of the heap, i.e. the minimum of the two bracketed values, which is 1. We add into the heap the next element from that sequence, so the heap now contains the elements 2 and 3:
1, [3], 5, 7, ...
[2], 7, 12, 17, ...
We again retrieve the head of the heap, this time the value 2, and add the next element of that sequence into the heap:
1, [3], 5, 7, ...
2, [7], 12, 17, ...
The algorithm continues: we next retrieve the value 3 and add 5 into the heap:
1, 3, [5], 7, ...
2, [7], 12, 17, ...
Finally, now we retrieve value 5. At this point we realize that the value 4 is not among the invalid values for a, thus that is the solution that we are looking for.
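Below is a rough Python sketch of this heap-based enumeration (the function name and the use of heapq are mine). Since the question asks for the minimum positive a, the scan starts from prev = 0 rather than the pseudocode's a0 = -1 (which would also admit a = 0):

import heapq
from math import gcd
from functools import reduce

def min_valid_a(pairs):
    """pairs = [(p, s), ...]; smallest positive a with (s + a) % p != 0 for every pair,
       or None if no such a exists below lcm(p[i])."""
    max_a = reduce(lambda x, y: x * y // gcd(x, y), (p for p, _ in pairs), 1)  # lcm of the p[i]
    # First invalid value contributed by each pair: a == (p - s) mod p.
    heap = [((p - s) % p, p) for p, s in pairs]
    heapq.heapify(heap)
    prev = 0
    while heap:
        a, p = heapq.heappop(heap)
        if a - prev > 1:           # a gap between consecutive invalid values
            return prev + 1
        if a >= max_a:             # a full period scanned: no valid a exists
            return None
        heapq.heappush(heap, (a + p, p))
        prev = a
    return prev + 1

print(min_valid_a([(2, 1), (5, 3)]))  # 4, as in the walkthrough above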
I can think of two different solutions. First:
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
for a = 0 to p_max:
    zero_found = false;
    for i = 0 to n:
        if (s[i] + a) mod p[i] == 0:
            zero_found = true;
            break;
    if !zero_found:
        return a;
return -1;
I suppose this is the one you call "brute force". Notice that p_max represents Least Common Multiple of p[i]s - 1 (solution is either in the closed interval [0, p_max], or it does not exist). Complexity of this solution is O(n * p_max) in the worst case (plus the running time for calculating lcm!). There is a better solution regarding the time complexity, but it uses an additional binary array - classical time-space tradeoff. Its idea is similar to the Sieve of Eratosthenes, but for remainders instead of primes :)
p_max = lcm(p[0], p[1], ..., p[n]) - 1;
int remainders[p_max + 1] = {0};
for i = 0 to n:
    int rem = s[i] - p[i];
    while rem >= -p_max:
        remainders[-rem] = 1;
        rem -= p[i];
for i = 0 to p_max:
    if !remainders[i]:
        return i;
return -1;
Explanation of the algorithm: first, we create an array remainders that will indicate whether a certain negative remainder exists in the whole set. What is a negative remainder? It's simple: notice that 6 = 2 mod 4 is equivalent to 6 = -2 mod 4. If remainders[i] == 1, it means that if we add i to one of the s[j], we get a multiple of p[j] (i.e. 0 mod p[j], which is what we want to avoid). The array is populated with all possible negative remainders, up to -p_max. Now all we have to do is search for the first i such that remainders[i] == 0 and return it, if it exists - notice that the solution does not have to exist. In the problem text you indicated that you are searching for the minimum positive integer; I don't see why zero would not fit (if all s[i] are positive). However, if that is a strong requirement, just change the final loop to start from 1 instead of 0, and increment p_max.
The complexity of this algorithm is n + sum(p_max / p[i]) = n + p_max * sum(1 / p[i]), where i goes from 0 to n. Since all the p[i] are at least 2, that is asymptotically better than the brute-force solution.
An example for better understanding: suppose that the input is (5,4), (5,1), (2,0). p_max is lcm(5,5,2) - 1 = 10 - 1 = 9, so we create array with 10 elements, initially filled with zeros. Now let's proceed pair by pair:
from the first pair, we have remainders[1] = 1 and remainders[6] = 1
second pair gives remainders[4] = 1 and remainders[9] = 1
last pair gives remainders[0] = 1, remainders[2] = 1, remainders[4] = 1, remainders[6] = 1 and remainders[8] = 1.
Therefore, the first index with a zero value in the array is 3, which is the desired solution.
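Here is a small Python sketch of that sieve (names are mine). It marks the invalid residues directly as a = (p - s) mod p, which also covers pairs with s = 0 such as (2, 0) in the example:

from math import gcd
from functools import reduce

def min_valid_a_sieve(pairs):
    """pairs = [(p, s), ...]; smallest a in [0, lcm-1] with (s + a) % p != 0 for all pairs."""
    p_max = reduce(lambda x, y: x * y // gcd(x, y), (p for p, _ in pairs), 1) - 1
    invalid = [False] * (p_max + 1)
    for p, s in pairs:
        a = (p - s) % p              # first a that makes (s + a) divisible by p
        while a <= p_max:
            invalid[a] = True
            a += p
    for a, bad in enumerate(invalid):
        if not bad:
            return a
    return None                      # every residue is excluded: no solution

print(min_valid_a_sieve([(5, 4), (5, 1), (2, 0)]))  # 3, matching the example above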

Number of unique sequences of 3 digits (-1,0,1) given a length that matches a sum

Say you have a vertical game board of length n (being the number of spaces). And you have a three-sided die that has the options: go forward one, stay and go back one. If you go below or above the number of board game spaces it is an invalid game. The only valid move once you reach the end of the board is "stay". Given an exact number of die rolls t, is it possible to algorithmically work out the number of unique dice rolls that result in a winning game?
So far I've tried producing a list of every possible combination of (-1,0,1) for the given number of die rolls and sorting through the list to see if any add up to the length of the board and also meet all the requirements for being a valid game. But this is impractical for dice rolls above 20.
For example:
t=1, n=2; Output=1
t=3, n=2; Output=3
You can use a dynamic programming approach. The sketch of a recurrence is:
M(0, 1) = 1
M(t, n) = M(t-1, n-1) + M(t-1, n) + M(t-1, n+1)
Of course you have to consider the border cases (like going off the board, or not being allowed to leave the last square once it is reached, but it's easy to code that).
Here's some Python code:
def solve(N, T):
    M, M2 = [0]*N, [0]*N
    M[0] = 1
    for i in xrange(T):
        M, M2 = M2, M
        for j in xrange(N):
            M[j] = (j>0 and M2[j-1]) + M2[j] + (j+1<N-1 and M2[j+1])
    return M[N-1]

print solve(3, 2)  #1
print solve(2, 1)  #1
print solve(2, 3)  #3
print solve(5, 20) #19535230
Bonus: fancy "one-liner" with a list comprehension and reduce:
def solve(N, T):
    return reduce(
        lambda M, _: [(j>0 and M[j-1]) + M[j] + (j<N-2 and M[j+1]) for j in xrange(N)],
        xrange(T), [1]+[0]*N)[-1]
Let M[i, j] be an N by N matrix with M[i, j] = 1 if |i-j| <= 1 and 0 otherwise (and the special case for the "stay" rule of M[N, N-1] = 0)
This matrix counts paths of length 1 from position i to position j.
To find paths of length t, simply raise M to the t'th power. This can be performed efficiently by linear algebra packages.
The solution can be read off: M^t[1, N].
For example, computing paths of length 20 on a board of size 5 in an interactive Python session:
>>> import numpy
>>> M = numpy.matrix('1 1 0 0 0;1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1; 0 0 0 0 1')
>>> M
matrix([[1, 1, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 1]])
>>> M ** 20
matrix([[31628466, 51170460, 51163695, 31617520, 19535230],
[51170460, 82792161, 82787980, 51163695, 31617520],
[51163695, 82787980, 82792161, 51170460, 31628465],
[31617520, 51163695, 51170460, 31628466, 19552940],
[ 0, 0, 0, 0, 1]])
So there's M^20[1, 5], or 19535230 paths of length 20 from start to finish on a board of size 5.
Try a backtracking algorithm. Recursively "dive down" to depth t and only continue with die values that could still result in a valid state, probably by passing a "remaining budget" around. See the sketch below.
For example, with n=10 and t=20: when you have reached depth 10 of 20 and your budget is still 10 (the forward and backward steps so far seem to have cancelled out), the next recursion steps towards depth t would discontinue the 0 and -1 possibilities, because they could no longer result in a valid state at the end.
A backtracking algorithm for this case is still very heavy (exponential), but better than first blowing up a bubble with all possibilities and then filtering.
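A rough Python sketch of that pruning idea (names are mine): a branch is cut as soon as the remaining rolls cannot possibly bring the token to the last square. Only the small examples from the question are run, since the search is still exponential:

def count_winning_games(n, t):
    """Count roll sequences of length t that end exactly on square n
       (positions 1..n; once on square n the only legal move is 'stay')."""
    last = n

    def walk(pos, rolls_left):
        # Prune: even moving +1 on every remaining roll cannot reach the end.
        if pos + rolls_left < last:
            return 0
        if rolls_left == 0:
            return 1 if pos == last else 0
        if pos == last:                  # must stay once the end is reached
            return walk(pos, rolls_left - 1)
        total = 0
        for step in (-1, 0, 1):
            nxt = pos + step
            if 1 <= nxt <= last:         # falling off the board is invalid
                total += walk(nxt, rolls_left - 1)
        return total

    return walk(1, t)

print(count_winning_games(2, 1))  # 1
print(count_winning_games(2, 3))  # 3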
Since zeros can be added anywhere, we'll multiply those possibilities by the different arrangements of (-1)'s:
X (space 1) X (space 2) X (space 3) X (space 4) X
(-1)'s can only appear in spaces 1,2 or 3, not in space 4. I got help with the mathematical recurrence that counts the number of ways to place minus ones without skipping backwards.
JavaScript code:
function C(n,k){ if(k==0||n==k) return 1; var p=n; for(var i=2;i<=k;i++) p*=(n+1-i)/i; return p; }

function sumCoefficients(arr, cs){
    var s = 0, i = -1;
    while (arr[++i]){
        s += cs[i] * arr[i];
    }
    return s;
}

function f(n,t){
    var numMinusOnes = (t - (n-1)) >> 1,
        result = C(t, n-1),
        numPlaces = n - 2,
        cs = [];
    for (var i=1; numPlaces-i >= i-1; i++){
        cs.push(-Math.pow(-1,i) * C(numPlaces + 1 - i, i));
    }
    var As = new Array(cs.length),
        An;
    As[0] = 1;
    for (var m=1; m<=numMinusOnes; m++){
        var zeros = t - (n-1) - 2*m;
        An = sumCoefficients(As, cs);
        As.unshift(An);
        As.pop();
        result += An * C(zeros + 2*m + n-1, zeros);
    }
    return result;
}
Output:
console.log(f(5,20))
19535230

Algorithm to generate all unique permutations of fixed-length integer partitions?

I'm searching for an algorithm that generates all permutations of fixed-length partitions of an integer. Order does not matter.
For example, for n=4 and length L=3:
[(0, 2, 2), (2, 0, 2), (2, 2, 0),
(2, 1, 1), (1, 2, 1), (1, 1, 2),
(0, 1, 3), (0, 3, 1), (3, 0, 1), (3, 1, 0), (1, 3, 0), (1, 0, 3),
(0, 0, 4), (4, 0, 0), (0, 4, 0)]
I bumbled about with integer partitions + permutations for partitions whose length is less than L; but that was too slow because I got the same partition multiple times (because [0, 0, 1] may be a permutation of [0, 0, 1] ;-)
Any help appreciated, and no, this isn't homework -- personal interest :-)
Okay. First, forget about the permutations and just generate the partitions of length L (as suggested by @Svein Bringsli). Note that for each partition, you may impose an ordering on the elements, such as non-increasing order. Now just "count", maintaining your ordering. For n = 4, k = 3:
(4, 0, 0)
(3, 1, 0)
(2, 2, 0)
(2, 1, 1)
So, how to implement this? It looks like: while subtracting 1 from position i and adding it to the next position maintains our order, subtract 1 from position i, add 1 to position i + 1, and move to the next position. If we're in the last position, step back.
Here's a little python which does just that:
def partition_helper(l, i, result):
    if i == len(l) - 1:
        return
    while l[i] - 1 >= l[i + 1] + 1:
        l[i] -= 1
        l[i + 1] += 1
        result.append(list(l))
        partition_helper(l, i + 1, result)

def partition(n, k):
    l = [n] + [0] * (k - 1)
    result = [list(l)]
    partition_helper(l, 0, result)
    return result
Now you have a list of lists (really a list of multisets), and generating all permutations of each multiset of the list gives you your solution. I won't go into that, there's a recursive algorithm which basically says, for each position, choose each unique element in the multiset and append the permutations of the multiset resulting from removing that element from the multiset.
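A small Python sketch of that recursive multiset-permutation idea (the helper name is mine), which can be combined with the partition(n, k) code above:

def multiset_permutations(ms):
    """All distinct orderings of the multiset `ms` (a list, possibly with repeats)."""
    if not ms:
        return [[]]
    perms = []
    for x in sorted(set(ms)):          # each *unique* element leads a permutation once
        rest = list(ms)
        rest.remove(x)                 # remove one occurrence of x
        for tail in multiset_permutations(rest):
            perms.append([x] + tail)
    return perms

print(multiset_permutations([2, 2, 0]))  # [[0, 2, 2], [2, 0, 2], [2, 2, 0]]

# Combined with the partition generator above:
# for part in partition(4, 3):
#     for perm in multiset_permutations(part):
#         print(perm)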
Given that you ask this out of interest, you would probably be interested in an authoritative answer! It can be found in "7.2.1.2 - Generating all permutations" of Knuth's The Art of Computer Programming (subvolume 4A).
Also, 3 concrete algorithms can be found here.
As noted by @pbarranis, the code by @rlibby does not include all lists when n equals k. Below is Python code which does include all lists. This code is non-recursive, which may be more efficient with respect to memory usage.
def successor(n, l):
    idx = [j for j in range(len(l)) if l[j] < l[0]-1]
    if not idx:
        return False
    i = idx[0]
    l[1:i+1] = [l[i]+1] * len(l[1:i+1])
    l[0] = n - sum(l[1:])
    return True

def partitions(n, k):
    l = [0]*k
    l[0] = n
    results = []
    results.append(list(l))
    while successor(n, l):
        results.append(list(l))
    return results
The lists are created in colexicographic order (algorithm and more description here).
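For example, running the code above for n = 4, k = 3 should give:

print(partitions(4, 3))
# [[4, 0, 0], [3, 1, 0], [2, 2, 0], [2, 1, 1]]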
I found that using a recursive function was not good for larger lengths and integers because it chews up too much RAM, and using a generator / resumable-function (that 'yields' values) was too slow and required a large library to make it cross-platform.
So here's a non-recursive solution in C++ that produces the partitions in sorted order (which is ideal for permutations too). I've found this to be over 10 times faster than seemingly clever and concise recursive solutions I tried for partition lengths of 4 or greater, but for lengths of 1-3 the performance is not necessarily better (and I don't care about short lengths because they're fast with either approach).
// Inputs
unsigned short myInt = 10;
unsigned short len = 3;

// Partition variables.
vector<unsigned short> partition(len);
unsigned short last = len - 1;
unsigned short penult = last - 1;
short cur = penult; // Can dip into negative value when len is 1 or 2. Can be changed to unsigned if len is always >=3.
unsigned short sum = 0;

// Prefill partition with 0.
fill(partition.begin(), partition.end(), 0);

do {
    // Calculate remainder.
    partition[last] = max(0, myInt - sum); // Would only need "myInt - sum" if partition vector contains signed ints.

    /*
     *
     * DO SOMETHING WITH "partition" HERE.
     *
     */

    if (partition[cur + 1] <= partition[cur] + 1) {
        do {
            cur--;
        } while (
            cur > 0 &&
            accumulate(partition.cbegin(), partition.cbegin() + cur, 0) + (len - cur) * (partition[cur] + 1) > myInt
        );

        // Escape if seeked behind too far.
        // I think this if-statement is only useful when len is 1 or 2, can probably be removed if len is always >=3.
        if (cur < 0) {
            break;
        }

        // Increment the new cur position.
        sum++;
        partition[cur]++;

        // The value in each position must be at least as large as the
        // value in the previous position.
        for (unsigned short i = cur + 1; i < last; ++i) {
            sum = sum - partition[i] + partition[i - 1];
            partition[i] = partition[i - 1];
        }

        // Reset cur for next time.
        cur = penult;
    }
    else {
        sum++;
        partition[penult]++;
    }
} while (myInt - sum >= partition[penult]);
Where I've written DO SOMETHING WITH "partition" HERE is where you would actually consume the value. (On the last iteration the code will continue to execute the remainder of the loop, but I found this to be better than constantly checking for exit conditions - it's optimised for larger operations.) With the example inputs above (myInt = 10, len = 3) the partitions produced are:
0,0,10
0,1,9
0,2,8
0,3,7
0,4,6
0,5,5
1,1,8
1,2,7
1,3,6
1,4,5
2,2,6
2,3,5
2,4,4
3,3,4
Oh I've used "unsigned short" because I know my length and integer won't exceed certain limits, change that if it's not suitable for you :) Check the comments; one variable there (cur) had to be signed to handle lengths of 1 or 2 and there's a corresponding if-statement that goes with that, and I've also noted in a comment that if your partition vector has signed ints there is another line that can be simplified.
To get all the compositions, in C++ I would use this simple permutation strategy which thankfully does not produce any duplicates:
do {
    // Your code goes here.
} while (next_permutation(partition.begin(), partition.end()));
Nest that in the DO SOMETHING WITH "partition" HERE spot, and you're good to go.
An alternative to finding the compositions (based on the Java code here https://www.nayuki.io/page/next-lexicographical-permutation-algorithm) is as follows. I've found this to perform better than next_permutation().
// Process lexicographic permutations of partition (compositions).
composition = partition;
do {
    // Your code goes here.

    // Find longest non-increasing suffix
    i = last;
    while (i > 0 && composition[i - 1] >= composition[i]) {
        --i;
    }
    // Now i is the head index of the suffix

    // Are we at the last permutation already?
    if (i <= 0) {
        break;
    }

    // Let array[i - 1] be the pivot
    // Find rightmost element that exceeds the pivot
    j = last;
    while (composition[j] <= composition[i - 1])
        --j;
    // Now the value array[j] will become the new pivot
    // Assertion: j >= i

    // Swap the pivot with j
    temp = composition[i - 1];
    composition[i - 1] = composition[j];
    composition[j] = temp;

    // Reverse the suffix
    j = last;
    while (i < j) {
        temp = composition[i];
        composition[i] = composition[j];
        composition[j] = temp;
        ++i;
        --j;
    }
} while (true);
You'll notice some undeclared variables there; just declare them earlier in the code, before all your do-loops: i, j, pos, and temp (unsigned shorts), and composition (same type and length as partition). You can reuse the declaration of i for its use in a for-loop in the partitions code snippet. Also note that variables like last are used which assume this code is nested within the partitions code given earlier.
Again "Your code goes here" is where you consume the composition for your own purposes.
For reference here are my headers.
#include <vector> // for std::vector
#include <numeric> // for std::accumulate
#include <algorithm> // for std::next_permutation and std::max
using namespace std;
Despite the massive increase in speed using these approaches, for any sizeable integers and partition lengths this will still make you mad at your CPU :)
Like I mentioned above, I couldn't get @rlibby's code to work for my needs, and I needed code where n = l, so just a subset of your need. Here's my code below, in C#. I know it's not perfectly an answer to the question above, but I believe you'd only have to modify the first method to make it work for different values of l; basically add the same code @rlibby did, making the array of length l instead of length n.
public static List<int[]> GetPartitionPermutations(int n)
{
    int[] l = new int[n];
    var results = new List<int[]>();
    GeneratePermutations(l, n, n, 0, results);
    return results;
}

private static void GeneratePermutations(int[] l, int n, int nMax, int i, List<int[]> results)
{
    if (n == 0)
    {
        for (; i < l.Length; ++i)
        {
            l[i] = 0;
        }
        results.Add(l.ToArray());
        return;
    }
    for (int cnt = Math.Min(nMax, n); cnt > 0; --cnt)
    {
        l[i] = cnt;
        GeneratePermutations(l, (n - cnt), cnt, i + 1, results);
    }
}
A lot of searching led to this question. Here is an answer that includes the permutations:
#!/usr/bin/python
from itertools import combinations_with_replacement as cr

def all_partitions(n, k):
    """
    Return all possible combinations that add up to n
    i.e. divide n objects in k DISTINCT boxes in all possible ways
    """
    all_part = []
    for div in cr(range(n+1), k-1):
        counts = [div[0]]
        for i in range(1, k-1):
            counts.append(div[i] - div[i-1])
        counts.append(n - div[-1])
        all_part.append(counts)
    return all_part
For instance, all_partitions(4, 3) as asked by the OP gives:
[[0, 0, 4],
[0, 1, 3],
[0, 2, 2],
[0, 3, 1],
[0, 4, 0],
[1, 0, 3],
[1, 1, 2],
[1, 2, 1],
[1, 3, 0],
[2, 0, 2],
[2, 1, 1],
[2, 2, 0],
[3, 0, 1],
[3, 1, 0],
[4, 0, 0]]

Algorithm to find two repeated numbers in an array, without sorting

There is an array of size n (numbers are between 0 and n - 3) and only 2 numbers are repeated. Elements are placed randomly in the array.
E.g. in {2, 3, 6, 1, 5, 4, 0, 3, 5} n=9, and repeated numbers are 3 and 5.
What is the best way to find the repeated numbers?
P.S. [You should not use sorting]
There is an O(n) solution if you know what the possible domain of the input is. For example, if your input array contains numbers between 0 and 100, consider the following code.

bool flags[100];
for(int i = 0; i < 100; i++)
    flags[i] = false;

for(int i = 0; i < input_size; i++)
    if(flags[input_array[i]])
        return input_array[i];
    else
        flags[input_array[i]] = true;
Of course there is the additional memory but this is the fastest.
OK, seems I just can't give it a rest :)
Simplest solution
int A[N] = {...};

int signed_1(n) { return n%2<1 ? +n : -n; } // 0,-1,+2,-3,+4,-5,+6,-7,...
int signed_2(n) { return n%4<2 ? +n : -n; } // 0,+1,-2,-3,+4,+5,-6,-7,...

long S1 = 0; // or int64, or long long, or some user-defined class
long S2 = 0; // so that it has enough bits to contain sum without overflow

for (int i=0; i<N-2; ++i)
{
    S1 += signed_1(A[i]) - signed_1(i);
    S2 += signed_2(A[i]) - signed_2(i);
}
for (int i=N-2; i<N; ++i)
{
    S1 += signed_1(A[i]);
    S2 += signed_2(A[i]);
}
S1 = abs(S1);
S2 = abs(S2);
assert(S1 != S2); // this algorithm fails in this case
p = (S1+S2)/2;
q = abs(S1-S2)/2;
One sum (S1 or S2) contains p and q with the same sign; in the other sum they appear with opposite signs; all other terms are eliminated.
S1 and S2 must have enough bits to accommodate the sums; the algorithm cannot tolerate overflow because of the abs() calls.
If abs(S1) == abs(S2) then the algorithm fails, though this value will still be the difference between p and q (i.e. abs(p - q) == abs(S1)).
Previous solution
I doubt somebody will ever encounter such a problem in the field ;)
and I guess I know the teacher's expectation:
Let's take the array {0, 1, 2, ..., n-2, n-1}.
The given one can be produced by replacing the last two elements, n-2 and n-1, with the unknown p and q (order aside),
so the sum of the elements will be (n-1)n/2 + p + q - (n-2) - (n-1),
and the sum of the squares (n-1)n(2n-1)/6 + p^2 + q^2 - (n-2)^2 - (n-1)^2.
Simple math remains:
(1) p + q = S1
(2) p^2 + q^2 = S2
Surely you won't solve it the way math classes teach you to solve quadratic equations.
First, calculate everything modulo 2^32, that is, allow for overflow.
Then check pairs {p, q}: {0, S1}, {1, S1-1}, ... against expression (2) to find candidates (there might be more than 2 due to the modulo arithmetic and squaring).
And finally check whether the candidates found really are present in the array twice.
You know that your array contains every number from 0 to n-3, plus the two repeating ones (p & q). For simplicity, let's ignore the 0-case for now.
You can calculate the sum and the product over the array, resulting in:
1 + 2 + ... + (n-3) + p + q = p + q + (n-3)(n-2)/2
So if you subtract (n-3)(n-2)/2 from the sum of the whole array, you get
sum(Array) - (n-3)(n-2)/2 = x = p + q
Now do the same for the product:
1 * 2 * ... * (n-3) * p * q = (n-3)! * p * q
prod(Array) / (n-3)! = y = p * q
You now have these two terms:
x = p + q
y = p * q
It follows that p and q are the roots of z^2 - x*z + y = 0, so solving that quadratic gives you p and q.
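A tiny Python sketch of that last step (the helper name is mine; it assumes x and y have already been computed, and remember that the product trick as stated breaks down if the array contains 0, which this answer sets aside):

import math

def recover_from_sum_and_product(x, y):
    """Given x = p + q and y = p * q, return (p, q) as the roots of z^2 - x*z + y = 0."""
    d = math.isqrt(x * x - 4 * y)  # the discriminant is a perfect square here
    return (x - d) // 2, (x + d) // 2

print(recover_from_sum_and_product(3 + 5, 3 * 5))  # (3, 5)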
Insert each element into a set/hashtable, first checking whether it is already in it.
You might be able to take advantage of the fact that sum(array) = (n-2)*(n-3)/2 + the two repeated numbers.
Edit: As others have noted, combined with the sum of squares you can use this; I was just a little slow in figuring it out.
Check this old but good paper on the topic:
Finding Repeated Elements (PDF)
Some answers to the question: Algorithm to determine if array contains n…n+m? contain as a subproblem solutions which you can adopt for your purpose.
For example, here's a relevant part from my answer:
bool has_duplicates(int* a, int m, int n)
{
    /** O(m) in time, O(1) in space (for 'typeof(m) == typeof(*a) == int')

        Whether a[] array has duplicates.

        precondition: all values are in [n, n+m) range.
        feature: It marks visited items using a sign bit.
    */
    assert((INT_MIN - (INT_MIN - 1)) == 1); // check n == INT_MIN
    for (int *p = a; p != &a[m]; ++p) {
        *p -= (n - 1); // [n, n+m) -> [1, m+1)
        assert(*p > 0);
    }

    // determine: are there duplicates
    bool has_dups = false;
    for (int i = 0; i < m; ++i) {
        const int j = abs(a[i]) - 1;
        assert(j >= 0);
        assert(j < m);
        if (a[j] > 0)
            a[j] *= -1; // mark
        else { // already seen
            has_dups = true;
            break;
        }
    }

    // restore the array
    for (int *p = a; p != &a[m]; ++p) {
        if (*p < 0)
            *p *= -1; // unmark
        // [1, m+1) -> [n, n+m)
        *p += (n - 1);
    }
    return has_dups;
}
The program leaves the array unchanged (the array must be writable, but its values are restored on exit).
It works for array sizes up to INT_MAX (typically 2147483647).
Suppose the array is
a[0], a[1], a[2], ..., a[n-1]
sumA = a[0] + a[1] + ... + a[n-1]
sumASquare = a[0]*a[0] + a[1]*a[1] + ... + a[n-1]*a[n-1]
sumFirstN = N*(N+1)/2 where N = n-3, so
sumFirstN = (n-3)(n-2)/2
similarly
sumFirstNSquare = N*(N+1)*(2*N+1)/6 = (n-3)(n-2)(2n-5)/6
Suppose repeated elements are = X and Y
so X + Y = sumA - sumFirstN;
X*X + Y*Y = sumASquare - sumFirstNSquare;
So on solving this quadratic we can get value of X and Y.
Time Complexity = O(n)
space complexity = O(1)
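A minimal Python sketch of this approach (the helper name is mine), checked against the example array from the question:

import math

def find_two_repeated(arr):
    n = len(arr)
    N = n - 3                                     # values are 0 .. n-3
    s = sum(arr) - N * (N + 1) // 2               # s  = X + Y
    sq = sum(v * v for v in arr) - N * (N + 1) * (2 * N + 1) // 6  # sq = X^2 + Y^2
    # X and Y are the roots of z^2 - s*z + (s^2 - sq)/2 = 0
    prod = (s * s - sq) // 2
    d = math.isqrt(s * s - 4 * prod)
    return (s - d) // 2, (s + d) // 2

print(find_two_repeated([2, 3, 6, 1, 5, 4, 0, 3, 5]))  # (3, 5)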
I know the question is very old but I suddenly hit it and I think I have an interesting answer to it.
We know this is a brainteaser, and a trivial solution (i.e. HashMap, sort, etc.), no matter how good it is, would be boring.
As the numbers are integers, they have a constant bit size (i.e. 32). Let us assume we are working with 4-bit integers right now. We look for A and B, which are the duplicate numbers.
We need 4 buckets, one per bit. Each bucket contains the numbers whose specific bit is 1. For example bucket 1 gets 2, 3, 6, 7, ...:
Bucket 0 : Sum ( x where: x & 2 power 0 != 0 )
...
Bucket i : Sum ( x where: x & 2 power i != 0 )
We know what the sum of each bucket would be if there were no duplicates. I consider this prior knowledge.
Once the above buckets are generated, some of them will have a sum larger than expected. Constructing the number from those over-full buckets gives us (A OR B, for your information).
We can calculate (A XOR B) as follows:
A XOR B = Array[0] XOR Array[1] XOR ... XOR Array[n-1] XOR 0 XOR 1 XOR ... XOR (n-3)
Now going back to the buckets, we know exactly which buckets hold both of our numbers and which ones hold only one of them (from the XOR bits).
For a bucket that holds only one of the numbers we can extract it as num = (sum - expected sum of the bucket). We only need to recover one of the duplicate numbers this way, so as long as A XOR B has at least one set bit, we have the answer.
But what if A XOR B is zero?
Well, this case is only possible if both duplicate numbers are the same number, and then that number is the answer (A OR B).
Sorting the array would seem to be the best solution. A simple sort would then make the search trivial and would take a whole lot less time/space.
Otherwise, if you know the domain of the numbers, create an array with that many buckets in it and increment each as you go through the array. Something like this:

int count[10] = {0};
for (int i = 0; i < arraylen; i++) {
    count[array[i]]++;
}

Then just search your count array for any entries greater than 1. Those are the items with duplicates. It only requires one pass across the original array and one pass across the count array.
Here's an implementation in Python of @eugensk00's answer (one of its revisions) that doesn't use modular arithmetic. It is a single-pass algorithm, O(log(n)) in space. If fixed-width (e.g. 32-bit) integers are used then it requires only two fixed-width numbers (e.g. for 32-bit: one 64-bit number and one 128-bit number). It can handle arbitrarily large integer sequences (it reads one integer at a time, therefore the whole sequence doesn't need to be in memory).
def two_repeated(iterable):
    s1, s2 = 0, 0
    for i, j in enumerate(iterable):
        s1 += j - i      # number_of_digits(s1) ~ 2 * number_of_digits(i)
        s2 += j*j - i*i  # number_of_digits(s2) ~ 4 * number_of_digits(i)
    s1 += (i - 1) + i
    s2 += (i - 1)**2 + i**2

    p = (s1 - int((2*s2 - s1**2)**.5)) // 2
    # `Decimal().sqrt()` could replace `int()**.5` for really large integers
    # or any function to compute integer square root
    return p, s1 - p
Example:
>>> two_repeated([2, 3, 6, 1, 5, 4, 0, 3, 5])
(3, 5)
A more verbose version of the above code follows with explanation:
def two_repeated_seq(arr):
    """Return the only two duplicates from `arr`.

    >>> two_repeated_seq([2, 3, 6, 1, 5, 4, 0, 3, 5])
    (3, 5)
    """
    n = len(arr)
    assert all(0 <= i < n - 2 for i in arr)  # all in range [0, n-2)
    assert len(set(arr)) == (n - 2)          # number of unique items

    s1 = (n-2) + (n-1)        # s1 and s2 have ~ 2*(k+1) and 4*(k+1) digits
    s2 = (n-2)**2 + (n-1)**2  # where k is a number of digits in `max(arr)`
    for i, j in enumerate(arr):
        s1 += j - i
        s2 += j*j - i*i
    """
    s1 = (n-2) + (n-1) + sum(arr) - sum(range(n))
       = sum(arr) - sum(range(n-2))
       = sum(range(n-2)) + p + q - sum(range(n-2))
       = p + q
    """
    assert s1 == (sum(arr) - sum(range(n-2)))
    """
    s2 = (n-2)**2 + (n-1)**2 + sum(i*i for i in arr) - sum(i*i for i in range(n))
       = sum(i*i for i in arr) - sum(i*i for i in range(n-2))
       = p*p + q*q
    """
    assert s2 == (sum(i*i for i in arr) - sum(i*i for i in range(n-2)))
    """
    s1 = p+q
    -> s1**2 = (p+q)**2
    -> s1**2 = p*p + 2*p*q + q*q
    -> s1**2 - (p*p + q*q) = 2*p*q
    s2 = p*p + q*q
    -> p*q = (s1**2 - s2)/2

    Let C = p*q = (s1**2 - s2)/2 and B = p+q = s1 then from Viete theorem follows
    that p and q are roots of x**2 - B*x + C = 0
    -> p = (B + sqrtD) / 2
    -> q = (B - sqrtD) / 2
       where sqrtD = sqrt(B**2 - 4*C)
    -> p = (s1 + sqrt(2*s2 - s1**2))/2
    """
    sqrtD = (2*s2 - s1**2)**.5
    assert int(sqrtD)**2 == (2*s2 - s1**2)  # perfect square
    sqrtD = int(sqrtD)
    assert (s1 - sqrtD) % 2 == 0  # even
    p = (s1 - sqrtD) // 2
    q = s1 - p
    assert q == ((s1 + sqrtD) // 2)
    assert sqrtD == (q - p)
    return p, q
NOTE: calculating integer square root of a number (~ N**4) makes the above algorithm non-linear.
Since a range is specified, you can perform radix sort. This would sort your array in O(n). Searching for duplicates in a sorted array is then O(n)
You can use a simple nested for loop:

int[] numArray = new int[] { 1, 2, 3, 4, 5, 7, 8, 3, 7 };
for (int i = 0; i < numArray.Length; i++)
{
    for (int j = i + 1; j < numArray.Length; j++)
    {
        if (numArray[i] == numArray[j])
        {
            //DO SOMETHING
        }
    }
}
Or you can filter the array and use a recursive function if you want to get the count of occurrences:

int[] array = { 1, 2, 3, 4, 5, 4, 4, 1, 8, 9, 23, 4, 6, 8, 9, 1, 4 };
int[] myNewArray = null;
int a = 1;

void GetDuplicates(int[] array)
{
    if (array == null || array.Length == 0)  // nothing left to count
        return;
    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            if (array[i] == array[j])
            {
                a += 1;
            }
        }
        Console.WriteLine(" {0} occurred {1} time/s", array[i], a);
        IEnumerable<int> num = from n in array where n != array[i] select n;
        myNewArray = null;
        a = 1;
        myNewArray = num.ToArray();
        break;
    }
    GetDuplicates(myNewArray);
}
Answer to #18:
You are taking an array of 9 elements starting from 0, so the max element will be 6. Take the sum of the numbers 0 to 6 and the sum of the array elements, and compute their difference (say d). This is p + q. Now take the XOR of the numbers 0 to 6 (say x1) and the XOR of the array elements (say x2). x2 is the XOR of all elements from 0 to 6 except the two repeated elements, since those cancel each other out. Now, for i = 0 to 6: for each element of the array, say p is that element a[i]; you can then compute q by subtracting this element from d. Take the XOR of p and q, XOR it with x2, and check whether the result equals x1. Doing this for all elements, you will find the elements for which this condition holds, and you are done in O(n). Keep coding!
check this out ...
O(n) time and O(1) space complexity
for(i = 0; i < n; i++)
    xor = xor ^ arr[i];
for(i = 1; i <= n-3; i++)
    xor = xor ^ i;

So in the given example you will get the xor of 3 and 5.

xor = xor & -xor; // Isolate the lowest set bit

for(i = 0; i < n; i++)
{
    if(arr[i] & xor)
        x = x ^ arr[i];
    else
        y = y ^ arr[i];
}
for(i = 1; i <= n-3; i++)
{
    if(i & xor)
        x = x ^ i;
    else
        y = y ^ i;
}
x and y are your answers
For each number: check if it exists in the rest of the array.
Without sorting you're going to have to keep track of the numbers you've already visited.
In pseudocode this would basically be (done this way so I'm not just giving you the answer):

for each number in the list
    if number not already in unique numbers list
        add it to the unique numbers list
    else
        return that number as it is a duplicate
    end if
end for each
How about this:
for (i=0; i<n-1; i++) {
    for (j=i+1; j<n; j++) {
        if (a[i] == a[j]) {
            printf("%d appears more than once\n", a[i]);
            break;
        }
    }
}
Sure it's not the fastest, but it's simple and easy to understand, and requires
no additional memory. If n is a small number like 9, or 100, then it may well be the "best". (i.e. "Best" could mean different things: fastest to execute, smallest memory footprint, most maintainable, least cost to develop etc..)
In C:

int arr[] = {2, 3, 6, 1, 5, 4, 0, 3, 5};
int num = 0, i;

for (i = 0; i < 9; i++)   /* XOR of all 9 array elements */
    num = num ^ arr[i];
for (i = 0; i <= 6; i++)  /* XOR with the values 0 .. n-3 */
    num = num ^ i;

Since x^x = 0, the numbers that appear an even number of times are neutralized. Let's call the two remaining (repeated) numbers a and b. We are left with a^b. We know a^b != 0, since a != b. Choose any set bit of a^b and use it as a mask, i.e. choose x as a power of 2 so that x & (a^b) is nonzero.
Now split the list into two sublists -- one sublist contains all numbers y with y & x == 0, and the rest go in the other sublist. By the way we chose x, we know that a and b land in different sublists. So we can now apply the same method used above to each sublist independently, and discover what a and b are.
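A compact Python sketch of that whole procedure (the helper name is mine), run on the example array from the question:

def two_repeated_xor(arr):
    n = len(arr)
    axb = 0
    for v in arr:              # XOR of all array elements ...
        axb ^= v
    for v in range(n - 2):     # ... and of 0 .. n-3: singles cancel, leaving a ^ b
        axb ^= v
    mask = axb & -axb          # lowest set bit of a ^ b: a and b differ in this bit
    a = b = 0
    for v in list(arr) + list(range(n - 2)):
        if v & mask:
            a ^= v             # sublist where the chosen bit is 1
        else:
            b ^= v             # sublist where the chosen bit is 0
    return a, b

print(two_repeated_xor([2, 3, 6, 1, 5, 4, 0, 3, 5]))  # (3, 5)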
I have written a small program which finds the elements that are not repeated; just go through it and let me know your opinion. At the moment I assume the number of elements is even, but it can easily be extended to odd counts as well.
My idea is to first sort the numbers and then apply my algorithm; quicksort can be used to sort the elements.
Let's take an input array as below:
int arr[] = {1,1,2,10,3,3,4,5,5,6,6};
The numbers 2, 10 and 4 are not repeated, but they are in sorted order; if not sorted, use quicksort to sort them first.
Let's apply my program to this:
#include <cstdio>
#include <vector>
using namespace std;

int main()
{
    //int arr[] = {2, 9, 6, 1, 1, 4, 2, 3, 5};
    int arr[] = {1,1,2,10,3,3,4,5,5,6,6};
    int i = 0;
    vector<int> vec;
    int var = arr[0];
    for(i = 1; i < sizeof(arr)/sizeof(arr[0]); i += 2)
    {
        var = var ^ arr[i];
        if(var != 0)
        {
            //put in vector
            var = arr[i-1];
            vec.push_back(var);
            i = i-1;
        }
        var = arr[i+1];
    }
    for(int i = 0; i < vec.size(); i++)
        printf("value not repeated = %d\n", vec[i]);
}
This gives the output:
value not repeated= 2
value not repeated= 10
value not repeated= 4
It's simple and very straightforward, just use XOR (this assumes the array is sorted, so equal elements are adjacent):

for(i = 0; i < n-1; i++) {
    if(!(arr[i] ^ arr[i+1]))
        printf("Found Repeated number %5d", arr[i]);
}
Here is an algorithm that uses order statistics and runs in O(n).
You can solve this by repeatedly calling SELECT with the median as parameter.
You also rely on the fact that After a call to SELECT,
the elements that are less than or equal to the median are moved to the left of the median.
Call SELECT on A with the median as the parameter.
If the median value is floor(n/2) then the repeated values are to the right of the median. So you continue with the right half of the array.
Otherwise, a repeated value is to the left of the median, so you continue with the left half of the array.
You continue this way recursively.
For example:
When A={2, 3, 6, 1, 5, 4, 0, 3, 5} n=9, then the median should be the value 4.
After the first call to SELECT
A={3, 2, 0, 1, <3>, 4, 5, 6, 5} The median value is smaller than 4 so we continue with the left half.
A={3, 2, 0, 1, 3}
After the second call to SELECT
A={1, 0, <2>, 3, 3} then the median should be 2 and it is so we continue with the right half.
A={3, 3}, found.
This algorithm runs in O(n+n/2+n/4+...)=O(n).
What about using the https://en.wikipedia.org/wiki/HyperLogLog?
Redis does http://redis.io/topics/data-types-intro#hyperloglogs
A HyperLogLog is a probabilistic data structure used in order to count unique things (technically this is referred to estimating the cardinality of a set). Usually counting unique items requires using an amount of memory proportional to the number of items you want to count, because you need to remember the elements you have already seen in the past in order to avoid counting them multiple times. However there is a set of algorithms that trade memory for precision: you end with an estimated measure with a standard error, in the case of the Redis implementation, which is less than 1%. The magic of this algorithm is that you no longer need to use an amount of memory proportional to the number of items counted, and instead can use a constant amount of memory! 12k bytes in the worst case, or a lot less if your HyperLogLog (We'll just call them HLL from now) has seen very few elements.
Using a nested for loop, and assuming the question is to find the numbers that occur exactly twice in the array:

def repeated(ar, n):
    count = 0
    for i in range(n):
        for j in range(i+1, n):
            if ar[i] == ar[j]:
                count += 1
        if count == 1:
            count = 0
            print("repeated:", ar[i])

arr = [2, 3, 6, 1, 5, 4, 0, 3, 5]
n = len(arr)
repeated(arr, n)
Why should we try doing maths (especially solving quadratic equations)? These are costly operations. The best way to solve this would be to construct a bitmap of n-2 bits (one bit per possible value 0..n-3), i.e. ((n-2) + 7) / 8 bytes. Better to calloc this memory, so every single bit is initialized to 0. Then traverse the list and set the corresponding bit to 1 for each value encountered; if the bit is already set to 1 for that number, then that is a repeated number.
This can also be extended to find out whether there is any missing number in the array.
This solution is O(n) in time complexity.
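A short Python sketch of that bitmap idea (using a bytearray in place of calloc; the function name is mine):

def repeated_via_bitmap(arr):
    n = len(arr)
    nbits = n - 2                         # possible values are 0 .. n-3
    bitmap = bytearray((nbits + 7) // 8)  # all bits start at 0, like calloc
    repeats = []
    for v in arr:
        byte, bit = v // 8, 1 << (v % 8)
        if bitmap[byte] & bit:            # bit already set: v is a repeated value
            repeats.append(v)
        else:
            bitmap[byte] |= bit
    return repeats

print(repeated_via_bitmap([2, 3, 6, 1, 5, 4, 0, 3, 5]))  # [3, 5]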
