What is the best algo for this array minimization problem? - algorithm

I have an array 'A' of size n containing numbers {a1, a2, …, an}, not necessarily distinct.
I have to create another array B = {b1, b2, …, bn} whose elements are distinct, such that
sum(|ai - bi|) for i = 1 to n is minimized.
Basically I want to minimize the sum of |ai - bi| over all i.
What is the best algorithm for this?
I tried a greedy approach. Pseudocode:
for i = 0 to n-1 {
    if (a[i] not in b) {
        b[i] = a[i];
    } else {
        cnt = 1
        assigned = false
        do {
            if (a[i]-cnt not in b) {
                b[i] = a[i]-cnt;
                assigned = true
            } else if (a[i]+cnt not in b) {
                b[i] = a[i]+cnt;
                assigned = true
            } else {
                cnt++
            }
        } while (assigned == false)
    } // else
}     // for loop
Note:
'n' is an input variable.
The goal is to minimize the sum of |ai - bi| over all i.

I came up with an O(N log N) solution. It's based on sorting the input sequence and greedily expanding the available numbers around each duplicate.
Code implementation in Python:
def get_closest_distinct_tuple(X: list):
    X = sorted(X, reverse=True)
    hmap = {}        # next offset to try for each value already seen
    used_set = set()
    Y = []
    for x in X:
        if x not in used_set:
            Y.append(x)
            hmap[x] = 1
            used_set.add(x)
        else:
            Y.append(x + hmap[x])
            used_set.add(x + hmap[x])
            # alternate the offset sign and grow it: +1, -1, +2, -2, ...
            hmap[x] = 1 - hmap[x] if hmap[x] < 0 else -hmap[x]
    dist = sum(abs(X[i] - Y[i]) for i in range(len(X)))
    return dist, Y

print(get_closest_distinct_tuple([20, 1, 1, 1, 1, 1, 1]))
Output:
Dist: 9
Y = [20, 1, 2, 0, 3, -1, 4]
I couldn't really find a way to prove that this is optimal.
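For what it's worth, one way to sanity-check the greedy result on small inputs is to solve the problem exactly as a minimum-cost assignment of positions to distinct integer values. Below is a sketch, assuming SciPy is available and assuming that some optimal B can be drawn from integers within n of the input range (a plausible assumption for a checker, not something proved here); the function name is mine.

import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_min_distance(a):
    n = len(a)
    # candidate values for B: integers in a window around the input range (assumed wide enough)
    candidates = list(range(min(a) - n, max(a) + n + 1))
    # cost[i][j] = |a[i] - candidates[j]|
    cost = np.abs(np.array(a)[:, None] - np.array(candidates)[None, :])
    rows, cols = linear_sum_assignment(cost)  # one distinct candidate per position
    b = [candidates[j] for j in cols]
    return int(cost[rows, cols].sum()), b

print(exact_min_distance([20, 1, 1, 1, 1, 1, 1]))  # should report distance 9, matching the greedy result

This is far slower than O(N log N), so it is only useful for comparing against the greedy output on small test cases.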

Related

In an array of randomly generated booleans, change 'k' falses to trues to create the largest continuous chain of trues

I was asked in an interview the following question:
In an array of randomly generated booleans, such as : T F T T F F F T F F F F T
Write an algorithm to determine which false values to change to true, to maximize the largest continuous chunk of trues. In the above example, suppose that k = 3. One of the solutions would be:
T F T T T* T* T* T F F F F T
Where T* denotes a value that has been changed.
Besides simple brute force, one of the methods I came up with was to find the largest continuous chunk of false values and compare it with k. If it is smaller, replace the entire chunk with true and continue with the 'k' that remains. However, it turns out this method doesn't always give the correct answer.
Another, more complicated method I thought of is this: for every chunk of falses between chunks of trues, compute how big a chunk can be built by flipping the falses in between. Then it comes down to selecting the best combination of chunks to flip.
What is the optimal algorithm for this problem?
Many thanks.
Find the largest range that contains k false values. You can do this in linear time by keeping a running window.
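A minimal sketch of that running-window idea in Python (the function and variable names are mine): grow the window to the right, shrink it from the left whenever it holds more than k false values, and read the flip positions off the best window.

def longest_run_with_k_flips(arr, k):
    best_len, best_start = 0, 0
    left = 0
    falses_inside = 0
    for right, value in enumerate(arr):
        if not value:
            falses_inside += 1
        while falses_inside > k:      # too many falses: advance the left edge
            if not arr[left]:
                falses_inside -= 1
            left += 1
        if right - left + 1 > best_len:
            best_len, best_start = right - left + 1, left
    flips = [i for i in range(best_start, best_start + best_len) if not arr[i]]
    return best_len, flips

# The example from the question (T F T T F F F T F F F F T), k = 3:
arr = [True, False, True, True, False, False, False, True,
       False, False, False, False, True]
print(longest_run_with_k_flips(arr, 3))  # (6, [1, 4, 5]): an equally good choice to the one shown in the question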
You really can do this with a sliding window. I actually think that even though conceptually it's not a difficult problem, it is tricky to get the indexing right for the edge cases especially with the pressure of an interview.
Here's one way to do it:
Set two index variables to zero (start and end). Scan ahead, incrementing end to right before the (k+1)th 'F' (or the end of the array), putting the indexes of the 'F's in an array. This is your initial best guess and the locations of the 'F's.
Increment end to the next 'F', and move start to the next index in your array of F locations. Test whether the new window is longer and repeat. You can keep track of the best start, which will be the initial 'F' you'll need to change.
It's a little easier to show an example than explain, but it's basically a moving window while keeping track of the best run and best initial 'F' to change. Here's a quick and dirty JS implementation:
function findBestFlips(k, arr) {
    let start, end, max, best_start_index, n;
    start = end = max = best_start_index = n = 0;
    let fs = [];
    for (end = 0; end <= arr.length; end++) {
        if (arr[end] == 0) {
            fs.push(end)
            if (fs.length <= k + 1) {
                max = end; // set initial max from start of array to right before the (k+1)th false value
                continue   // fill up fs with k+1 values if possible
            }
            if (max < (end - (fs[start] + 1))) {
                max = end - (fs[start] + 1)
                best_start_index = start + 1
            }
            start++
        }
    }
    /* The loop above may have stopped with more 'T's at the end of the array;
       extend the window to the end of the array */
    if (max < arr.length - (fs[start] + 1)) {
        max = arr.length - (fs[start] + 1)
        best_start_index = start + 1
    }
    /* fs holds the index of every false value;
       fs[best_start_index] through fs[best_start_index + k - 1] are the indexes we need to change
       to get the best run */
    if (fs.length <= k) max = arr.length
    return {
        flip_indexes: fs.slice(best_start_index, k + best_start_index),
        best_run: max
    }
}
let arr = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
let k = 3;
console.log(findBestFlips(k, arr))
// edge cases
arr = [1, 0, 1, 1, 1, 1]
k = 3;
console.log(findBestFlips(k, arr))
arr = [0, 0, 0]
k = 3;
console.log(findBestFlips(k, arr))
arr = [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0]
k = 3;
console.log(findBestFlips(k, arr))

Man move from source to destination with constraints

Consider this Cartesian grid, where each cell represents a weight.
[3, 2, 1, 4, 2
1, 3, 3, 2, 2
S, 3, 4, 1, D
3, 1, 2, 4, 3
4, 2, 3, 1, 4]
A man is standing at source 'S' and he has to reach destination 'D' at minimum cost. The constraints are:
If the man moves from one cell to another cell where both cells have the same weight, the cost of the move is 1.
If the man moves from a cell with weight n to a cell with weight m where n != m, the cost of the move is abs(n-m)*10 + 1. For example, stepping from a cell of weight 3 onto a cell of weight 1 costs |3-1|*10 + 1 = 21.
Last but not least, the man can only move up, down, left and right. No diagonal moves.
Which data structure and algorithm are best suited for this problem? I thought of representing it as a graph and using one of the greedy approaches, but could not arrive at a clean solution.
I would use A* to solve the problem. The remaining distance can be estimated by dx + dy + 10 * dValue plus the distance travelled so far (the path cannot be shorter than that; see the example at the bottom). The idea of A* is to always expand the node with the lowest estimated distance; as soon as you find the destination node you are finished. This works as long as the estimate never over-estimates the distance. Here is an implementation in JS (fiddle):
function solve(matrix, sRow, sCol, eRow, eCol) {
    if (sRow == eRow && sCol == eCol)
        return 0;
    let n = matrix.length, m = matrix[0].length;
    let d = [], dirs = [[-1, 0], [0, 1], [1, 0], [0, -1]];
    for (let i = 0; i < n; i++) {
        d.push([]);
        for (let j = 0; j < m; j++)
            d[i].push(1000000000);
    }
    let list = [[sRow, sCol, 0]];
    d[sRow][sCol] = 0;
    for (;;) {
        let pos = list.pop();
        for (let i = 0; i < dirs.length; i++) {
            let r = pos[0] + dirs[i][0], c = pos[1] + dirs[i][1];
            if (r >= 0 && r < n && c >= 0 && c < m) {
                let v = d[pos[0]][pos[1]] + 1 + 10 * Math.abs(matrix[pos[0]][pos[1]] - matrix[r][c]);
                if (r == eRow && c == eCol)
                    return v;
                if (v < d[r][c]) {
                    d[r][c] = v;
                    list.push([r, c, v + Math.abs(r - eRow) + Math.abs(c - eCol) + 10 * Math.abs(matrix[r][c] - matrix[eRow][eCol])]);
                }
            }
        }
        // keep the node with the lowest estimate at the end of the list, so pop() retrieves it next
        list.sort(function(a, b) {
            if (a[2] > b[2])
                return -1;
            if (a[2] < b[2])
                return 1;
            return 0;
        });
    }
}
The answer for the example is 46, and only 8 nodes get expanded!
Estimation example, from (0,0) to D:
distance from S to (0,0) is 22
dx = abs(0 - 4) = 4
dy = abs(0 - 2) = 2
dValue = abs(3 - 1) = 2
estimation = distance + dx + dy + 10 * dValue = 22 + 4 + 2 + 10 * 2 = 48
Note: the implementation uses rows and columns instead of x and y, so they are swapped; it doesn't really matter, it just has to be consistent.
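For reference, here is a sketch of the same search in Python, using a heap instead of re-sorting the list on every step. It assumes the grid is a 2D list of integers (i.e. the S and D cells have been given numeric weights) and that start/goal are (row, col) tuples; the names are illustrative, not from the original fiddle.

import heapq

def min_cost(grid, start, goal):
    n, m = len(grid), len(grid[0])
    gr, gc = goal

    def heuristic(r, c):
        # admissible: every move costs at least 1, and the total weight change
        # along any path to the goal is at least |grid[r][c] - grid[gr][gc]|
        return abs(r - gr) + abs(c - gc) + 10 * abs(grid[r][c] - grid[gr][gc])

    sr, sc = start
    best = {(sr, sc): 0}
    heap = [(heuristic(sr, sc), 0, sr, sc)]
    while heap:
        _, dist, r, c = heapq.heappop(heap)
        if (r, c) == (gr, gc):
            return dist
        if dist > best.get((r, c), float("inf")):
            continue  # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m:
                nd = dist + 1 + 10 * abs(grid[r][c] - grid[nr][nc])
                if nd < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nd
                    heapq.heappush(heap, (nd + heuristic(nr, nc), nd, nr, nc))
    return None  # destination not reachable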
Although not explicitly stated, the problem formulation seems to involve only positive node weights, which means that a shortest path will have no repetition of nodes. As the cost does not depend on the nodes alone, the answer argues that approaches like the Bellman-Ford algorithm or Dijkstra's algorithm are not suitable.
That being said, the path can apparently be found recursively using depth-first search, where nodes currently on the stack may not be visited again. Every time the destination is reached, the current path (which is contained in the stack at that moment) and its associated cost, maintained in an auxiliary variable, can be evaluated against the best previously found path. On termination, a path with minimum cost will have been stored.

Number of unique sequences of 3 digits (-1,0,1) given a length that matches a sum

Say you have a vertical game board of length n (n being the number of spaces), and a three-sided die with the options: go forward one, stay, and go back one. If you go below or above the number of board spaces, it is an invalid game. The only valid move once you reach the end of the board is "stay". Given an exact number of die rolls t, is it possible to algorithmically work out the number of unique roll sequences that result in a winning game?
So far I've tried producing a list of every possible combination of (-1, 0, 1) for the given number of die rolls and going through the list to see which ones add up to the length of the board and also meet all the requirements for a valid game. But this is impractical for more than about 20 die rolls.
For example:
t=1, n=2; Output=1
t=3, n=2; Output=3
You can use a dynamic programming approach. A sketch of the recurrence, where M(t, n) is the number of ways to be on space n after t rolls:
M(0, 1) = 1
M(t, n) = M(t-1, n-1) + M(t-1, n) + M(t-1, n+1)
Of course you have to handle the border cases (like going off the board, or not being allowed to leave the end of the board, but it's easy to code that).
Here's some Python code:
def solve(N, T):
    M, M2 = [0]*N, [0]*N
    M[0] = 1
    for i in range(T):
        M, M2 = M2, M
        for j in range(N):
            # arrive from the left, stay, or come back from the right
            # (coming back from the last space is not allowed: j+1 < N-1)
            M[j] = (j > 0 and M2[j-1]) + M2[j] + (j+1 < N-1 and M2[j+1])
    return M[N-1]

print(solve(3, 2))   # 1
print(solve(2, 1))   # 1
print(solve(2, 3))   # 3
print(solve(5, 20))  # 19535230
Bonus: fancy "one-liner" with a list comprehension and reduce
from functools import reduce

def solve(N, T):
    return reduce(
        lambda M, _: [(j > 0 and M[j-1]) + M[j] + (j < N-2 and M[j+1]) for j in range(N)],
        range(T), [1] + [0]*N)[-1]
Let M be an N by N matrix with M[i, j] = 1 if |i-j| <= 1 and 0 otherwise (with the special case M[N, N-1] = 0 for the "stay"-only rule at the end).
This matrix counts paths of length 1 from position i to position j.
To count paths of length t, simply raise M to the t-th power. This can be done efficiently by linear algebra packages.
The solution can be read off as M^t[1, N].
For example, computing paths of length 20 on a board of size 5 in an interactive Python session:
>>> import numpy
>>> M = numpy.matrix('1 1 0 0 0;1 1 1 0 0; 0 1 1 1 0; 0 0 1 1 1; 0 0 0 0 1')
>>> M
matrix([[1, 1, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 1, 1],
[0, 0, 0, 0, 1]])
>>> M ** 20
matrix([[31628466, 51170460, 51163695, 31617520, 19535230],
[51170460, 82792161, 82787980, 51163695, 31617520],
[51163695, 82787980, 82792161, 51170460, 31628465],
[31617520, 51163695, 51170460, 31628466, 19552940],
[ 0, 0, 0, 0, 1]])
So there are M^20[1, 5] = 19535230 paths of length 20 from start to finish on a board of size 5.
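If you want the same computation for an arbitrary board size without typing the matrix by hand, here is a small sketch (assuming NumPy; note that int64 will overflow for large t, in which case an exact big-integer matrix product would be needed; the function name is mine):

import numpy as np

def count_paths(n, t):
    M = np.zeros((n, n), dtype=np.int64)
    for i in range(n):
        for j in range(n):
            if abs(i - j) <= 1:
                M[i, j] = 1
    M[n-1, :] = 0
    M[n-1, n-1] = 1  # once at the last space, "stay" is the only move
    return int(np.linalg.matrix_power(M, t)[0, n-1])

print(count_paths(5, 20))  # should reproduce the 19535230 from the session above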
Try a backtracking algorithm: recursively "dive down" to depth t and only continue with die values that could still result in a valid state, probably by passing a "remaining budget" around (see the sketch below).
For example, with n=10 and t=20, when you have reached depth 10 of 20 and your budget is still 10 (the forward and backward steps so far have cancelled out), the next recursion steps would have to discontinue the 0 and -1 possibilities, because they could no longer lead to a valid state at the end.
A backtracking algorithm for this case is still heavy (exponential), but better than first blowing up a bubble with all possibilities and then filtering.
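A small sketch of that backtracking idea in Python (names are mine): recurse over the remaining rolls and prune any branch that can no longer reach the last space in time.

def count_winning_rolls(n, t):
    last = n - 1

    def go(pos, rolls_left):
        if last - pos > rolls_left:      # cannot reach the end any more: prune
            return 0
        if rolls_left == 0:
            return 1 if pos == last else 0
        if pos == last:                  # only "stay" is allowed at the end
            return go(pos, rolls_left - 1)
        total = 0
        for step in (-1, 0, 1):
            if 0 <= pos + step <= last:  # moves off the board make the game invalid
                total += go(pos + step, rolls_left - 1)
        return total

    return go(0, t)

# count_winning_rolls(2, 1) == 1 and count_winning_rolls(2, 3) == 3, matching the examples above.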
Since zeros can be added anywhere, we'll multiply those possibilities by the different arrangements of (-1)'s:
X (space 1) X (space 2) X (space 3) X (space 4) X
(-1)'s can only appear in spaces 1,2 or 3, not in space 4. I got help with the mathematical recurrence that counts the number of ways to place minus ones without skipping backwards.
JavaScript code:
function C(n, k) {                       // binomial coefficient
    if (k == 0 || n == k) return 1;
    var p = n;
    for (var i = 2; i <= k; i++) p *= (n + 1 - i) / i;
    return p;
}

function sumCoefficients(arr, cs) {
    var s = 0, i = -1;
    while (arr[++i]) {
        s += cs[i] * arr[i];
    }
    return s;
}

function f(n, t) {
    var numMinusOnes = (t - (n-1)) >> 1,
        result = C(t, n-1),
        numPlaces = n - 2,
        cs = [];
    for (var i = 1; numPlaces - i >= i - 1; i++) {
        cs.push(-Math.pow(-1, i) * C(numPlaces + 1 - i, i));
    }
    var As = new Array(cs.length),
        An;
    As[0] = 1;
    for (var m = 1; m <= numMinusOnes; m++) {
        var zeros = t - (n-1) - 2*m;
        An = sumCoefficients(As, cs);
        As.unshift(An);
        As.pop();
        result += An * C(zeros + 2*m + n-1, zeros);
    }
    return result;
}
Output:
console.log(f(5,20))
19535230

Find subarray with given sum

I am trying to implement a functional-style solution for finding a subarray with a given sum.
The code I wrote is not very functional in style. Can someone help me make it more functional?
Problem: Given an unsorted array of non-negative integers, find a contiguous subarray which adds up to a given number.
Input: arr[] = {1, 4, 20, 3, 10, 5}, sum = 33
Output: Sum found between indexes 2 and 4
Input: arr[] = {1, 4, 0, 0, 3, 10, 5}, sum = 7
Output: Sum found between indexes 1 and 4
I could solve this problem with a brute-force approach, but I am looking for a more effective functional solution.
val sumList = list.foldLeft(List(0), 0)((l, r) => (l._1 :+ (l._2 + r), l._2 + r))._1.drop(1)

//Brute force approach
sumList.zipWithIndex.combinations(2).toList.collectFirst({
  case i if i(1)._1 - i(0)._1 == sum => i
}) match {
  case Some(List(x, y)) => println("elements which form the given sum are => " + list.drop(x._2 + 1).take(y._2 - x._2))
  case _ => println("couldn't find elements which satisfy the given condition")
}
Algorithm: Initialize a variable curr_sum to the first element. curr_sum indicates the sum of the current subarray. Start from the second element and add elements one by one to curr_sum. If curr_sum becomes equal to sum, print the solution. If curr_sum exceeds sum, remove trailing elements while curr_sum is greater than sum.
val list: List[Int] = List(1, 4, 20, 3, 10, 5)
val sum = 33

val (totalSum, start, end, isSumFound) = list.zipWithIndex.drop(1).foldLeft(list.head, 0, 1, false)((l, r) =>
  if (!l._4) {
    val tempSum = l._1 + r._1
    if (tempSum == sum) {
      (sum, l._2, r._2, true)
    } else if (tempSum > sum) {
      var (curSum, curIndex) = (tempSum, l._2)
      while (curSum > sum && curIndex < list.length - 1) {
        curSum = curSum - list(curIndex)
        curIndex = l._2 + 1
      }
      (curSum, curIndex, r._2, curSum == sum)
    } else {
      (tempSum, l._2, r._2, false)
    }
  } else
    l
)

if (isSumFound || totalSum == sum) {
  println("elements which form the given sum are => " + list.drop(start + 1).take(end - start))
} else {
  println("couldn't find elements which satisfy the given condition")
}
val list: List[Int] = List(1, 4, 20, 3, 10, 5)
val sum = 33

A method to return an iterator of sublists, first the ones that start with the first element, then those starting with the second...

def subLists[T](xs: List[T]): Iterator[List[T]] =
  if (xs == Nil) Iterator.empty
  else xs.inits ++ subLists(xs.tail)

Find the first list with the correct sum:

val ol = subLists(list).collectFirst { case x if x.sum == sum => x }

Then find the index again, and print the result:

ol match {
  case None => println("No such subsequence")
  case Some(l) => val i = list.indexOfSlice(l)
    println("Sequence of sum " + sum +
      " found between " + i +
      " and " + (i + l.length - 1))
}
//> Sequence of sum 33 found between 2 and 4

(you could keep track of the index associated with each sublist when building the iterator, but that seems more trouble than it is worth, and reduces the general usefulness of subLists)
EDIT: Here's a version of the code you posted that's more "functional". But I think my first version is clearer: it's simpler to separate the concern of generating the sequences from checking their sums.
val sumList = list.scanLeft(0){_ + _}

val is = for {i <- 1 to list.length
              j <- 0 to i
              if sumList(i) - sumList(j) == sum}
         yield (j, i - 1)

is match {
  case Seq() => println("No such subsequence")
  case (start, end) +: _ =>
    println("Sequence of sum " + sum +
      " found between " + start + " and " + end)
}
//> Sequence of sum 33 found between 2 and 4
EDIT2: And here's an O(N) one. It is "functional" in that there are no mutable variables, but it's less clear than the others, in my opinion. It's a bit clearer if you just print the results as they are found (no need to carry the rs part of the accumulator between iterations), but that side-effecting way seems less functional, so I return a list of solutions.
val sums = list.scanLeft(0)(_ + _) zipWithIndex

sums.drop(1).foldLeft((sums, List[(Int, Int)]())) {
  case ((leftTotal, rs), total) =>
    val newL = leftTotal.dropWhile(total._1 - _._1 > sum)
    if (total._1 - newL.head._1 == sum)
      (newL, (newL.head._2, total._2 - 1) :: rs)
    else (newL, rs)
}._2
//> res0: List[(Int, Int)] = List((2,4))
O(N) because we pass the shortened newL as the next iteration's leftTotal, so dropWhile only ever goes through the list once. This one relies on the integers being non-negative (so adding another element cannot reduce the total); the others work with negative integers too.

Algorithm to find two repeated numbers in an array, without sorting

There is an array of size n (the numbers are between 0 and n-3) and exactly 2 numbers are repeated. The elements are placed randomly in the array.
E.g. in {2, 3, 6, 1, 5, 4, 0, 3, 5} n=9, and repeated numbers are 3 and 5.
What is the best way to find the repeated numbers?
P.S. [You should not use sorting]
There is an O(n) solution if you know the possible domain of the input. For example, if your input array contains numbers between 0 and 100, consider the following code.
bool flags[100];
for (int i = 0; i < 100; i++)
    flags[i] = false;

for (int i = 0; i < input_size; i++)
    if (flags[input_array[i]])
        return input_array[i];
    else
        flags[input_array[i]] = true;
Of course there is the additional memory, but this is the fastest.
OK, seems I just can't give it a rest :)
Simplest solution
int A[N] = {...};

int signed_1(int n) { return n%2 < 1 ? +n : -n; } // 0,-1,+2,-3,+4,-5,+6,-7,...
int signed_2(int n) { return n%4 < 2 ? +n : -n; } // 0,+1,-2,-3,+4,+5,-6,-7,...

long S1 = 0; // or int64, or long long, or some user-defined class
long S2 = 0; // so that it has enough bits to contain the sum without overflow

for (int i = 0; i < N-2; ++i)
{
    S1 += signed_1(A[i]) - signed_1(i);
    S2 += signed_2(A[i]) - signed_2(i);
}
for (int i = N-2; i < N; ++i)
{
    S1 += signed_1(A[i]);
    S2 += signed_2(A[i]);
}
S1 = abs(S1);
S2 = abs(S2);

assert(S1 != S2); // the algorithm fails in this case

p = (S1+S2)/2;
q = abs(S1-S2)/2;
One sum (S1 or S2) contains p and q with the same sign; in the other sum they have opposite signs; all other members are eliminated.
S1 and S2 must have enough bits to accommodate the sums; the algorithm does not tolerate overflow because of the abs() calls.
If abs(S1) == abs(S2), then the algorithm fails, though this value will still be the difference between p and q (i.e. abs(p - q) == abs(S1)).
Previous solution
I doubt somebody will ever encounter such a problem in the field ;) and I guess I know the teacher's expectation:
Let's take the array {0, 1, 2, ..., n-2, n-1}.
The given one can be produced by replacing the last two elements, n-2 and n-1, with the unknown p and q (in either order),
so the sum of the elements will be (n-1)n/2 + p + q - (n-2) - (n-1),
and the sum of squares will be (n-1)n(2n-1)/6 + p^2 + q^2 - (n-2)^2 - (n-1)^2.
Simple math remains:
(1) p+q = S1
(2) p^2+q^2 = S2
Surely you won't solve it the way math classes teach you to solve quadratic equations.
First, calculate everything modulo 2^32, that is, allow for overflow.
Then check the pairs {p, q}: {0, S1}, {1, S1-1}, ... against expression (2) to find candidates (there might be more than 2 due to the modulo arithmetic and squaring).
And finally check whether the candidates you found are really present in the array twice.
You know that your array contains every number from 0 to n-3, plus the two repeating ones (p & q). For simplicity, let's ignore the 0-case for now (see the sketch below for one way to handle it).
You can calculate the sum and the product over the array, resulting in:
1 + 2 + ... + (n-3) + p + q = p + q + (n-3)(n-2)/2
So if you subtract (n-3)(n-2)/2 from the sum of the whole array, you get
sum(Array) - (n-3)(n-2)/2 = x = p + q
Now do the same for the product:
1 * 2 * ... * (n-3) * p * q = (n-3)! * p * q
prod(Array) / (n-3)! = y = p * q
You now have these two equations:
x = p + q
y = p * q
so p and q are the roots of the quadratic z^2 - x*z + y = 0, which you can solve for p and q.
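A small sketch of this sum/product idea in Python (big integers make the factorial harmless for small n; the shift by 1 is my own way of dealing with the 0-case the answer sets aside, and the function name is mine):

from math import factorial, isqrt

def two_repeats_sum_product(arr):
    n = len(arr)
    vals = [v + 1 for v in arr]                # shift so a 0 in the array doesn't zero the product
    x = sum(vals) - sum(range(1, n - 1))       # = p' + q'   (the shifted repeats)
    prod = 1
    for v in vals:
        prod *= v
    y = prod // factorial(n - 2)               # = p' * q'
    d = isqrt(x * x - 4 * y)                   # p', q' are the roots of z^2 - x*z + y = 0
    return (x - d) // 2 - 1, (x + d) // 2 - 1  # shift back down

print(two_repeats_sum_product([2, 3, 6, 1, 5, 4, 0, 3, 5]))  # (3, 5)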
Insert each element into a set/hashtable, first checking whether it is already in it.
You might be able to take advantage of the fact that sum(array) = (n-2)*(n-3)/2 + the two repeated numbers.
Edit: As others have noted, combined with the sum of squares you can use this; I was just a little slow in figuring it out.
Check this old but good paper on the topic:
Finding Repeated Elements (PDF)
Some answers to the question "Algorithm to determine if array contains n…n+m?" contain, as a subproblem, solutions which you can adapt for your purpose.
For example, here's a relevant part from my answer:
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

bool has_duplicates(int* a, int m, int n)
{
    /** O(m) in time, O(1) in space (for 'typeof(m) == typeof(*a) == int')

        Whether a[] array has duplicates.

        precondition: all values are in [n, n+m) range.

        feature: It marks visited items using a sign bit.
    */
    assert((INT_MIN - (INT_MIN - 1)) == 1); // check n == INT_MIN
    for (int *p = a; p != &a[m]; ++p) {
        *p -= (n - 1); // [n, n+m) -> [1, m+1)
        assert(*p > 0);
    }
    // determine: are there duplicates
    bool has_dups = false;
    for (int i = 0; i < m; ++i) {
        const int j = abs(a[i]) - 1;
        assert(j >= 0);
        assert(j < m);
        if (a[j] > 0)
            a[j] *= -1; // mark
        else { // already seen
            has_dups = true;
            break;
        }
    }
    // restore the array
    for (int *p = a; p != &a[m]; ++p) {
        if (*p < 0)
            *p *= -1; // unmark
        // [1, m+1) -> [n, n+m)
        *p += (n - 1);
    }
    return has_dups;
}
The function leaves the array unchanged (the array must be writable, but its values are restored on exit).
It works for array sizes up to INT_MAX (2,147,483,647 for a 32-bit int).
Suppose the array is
a[0], a[1], a[2], ..., a[n-1]

sumA = a[0] + a[1] + ... + a[n-1]
sumASquare = a[0]*a[0] + a[1]*a[1] + ... + a[n-1]*a[n-1]

sumFirstN = N*(N+1)/2 where N = n-3, so
sumFirstN = (n-3)(n-2)/2
similarly
sumFirstNSquare = N*(N+1)*(2*N+1)/6 = (n-3)(n-2)(2n-5)/6

Suppose the repeated elements are X and Y. Then
X + Y = sumA - sumFirstN
X*X + Y*Y = sumASquare - sumFirstNSquare

Solving this quadratic gives the values of X and Y.
Time complexity = O(n)
Space complexity = O(1)
I know the question is very old but I suddenly hit it and I think I have an interesting answer to it.
We know this is a brainteaser, and a trivial solution (i.e. HashMap, sort, etc.), no matter how good it is, would be boring.
As the numbers are integers, they have a constant bit size (e.g. 32). Let us assume we are working with 4-bit integers for now. We are looking for A and B, the two duplicate numbers.
We need 4 buckets, one per bit. Each bucket sums the numbers whose specific bit is 1. For example, bucket 1 gets 2, 3, 6, 7, ...:
Bucket 0 : Sum ( x where: x & 2^0 != 0 )
...
Bucket i : Sum ( x where: x & 2^i != 0 )
We know what the sum of each bucket would be if there were no duplicates. I consider this prior knowledge.
Once the buckets are generated, a bunch of them will have values larger than expected. Constructing a number from the over-full buckets gives us (A OR B).
We can calculate (A XOR B) as follows:
A XOR B = Array[0] XOR Array[1] XOR ... XOR Array[n-1] XOR 0 XOR 1 XOR ... XOR (n-3)
Now going back to the buckets, we know exactly which buckets contain both our numbers and which ones contain only one of them (from the bits of A XOR B).
For the buckets that contain only one of the numbers we can extract it as num = (bucket sum - expected bucket sum). We only need to find one of the duplicate numbers, so if at least one bit of A XOR B is set, we have the answer.
But what if A XOR B is zero?
Well, this case is only possible if both duplicate numbers are the same number, and then that number is A OR B.
Sorting the array would seem to be the best solution. A simple sort would then make the search trivial and would take a whole lot less time/space.
Otherwise, if you know the domain of the numbers, create an array with that many buckets in it and increment the matching bucket as you go through the array. Something like this:
int count[10] = {0};
for (int i = 0; i < arraylen; i++) {
    count[array[i]]++;
}
Then just search your count array for any entries greater than 1. Those are the items with duplicates. This only requires one pass across the original array and one pass across the count array.
Here's an implementation in Python of #eugensk00's answer (one of its revisions) that doesn't use modular arithmetic. It is a single-pass algorithm, O(log(n)) in space. If fixed-width (e.g. 32-bit) integers are used, then it requires only two fixed-width numbers (e.g. for 32-bit: one 64-bit number and one 128-bit number). It can handle arbitrarily large integer sequences (it reads one integer at a time, so the whole sequence doesn't need to be in memory).
def two_repeated(iterable):
    s1, s2 = 0, 0
    for i, j in enumerate(iterable):
        s1 += j - i       # number_of_digits(s1) ~ 2 * number_of_digits(i)
        s2 += j*j - i*i   # number_of_digits(s2) ~ 4 * number_of_digits(i)
    s1 += (i - 1) + i
    s2 += (i - 1)**2 + i**2
    p = (s1 - int((2*s2 - s1**2)**.5)) // 2
    # `Decimal().sqrt()` could replace `int()**.5` for really large integers
    # or any function to compute an integer square root
    return p, s1 - p
Example:
>>> two_repeated([2, 3, 6, 1, 5, 4, 0, 3, 5])
(3, 5)
A more verbose version of the above code follows with explanation:
def two_repeated_seq(arr):
    """Return the only two duplicates from `arr`.

    >>> two_repeated_seq([2, 3, 6, 1, 5, 4, 0, 3, 5])
    (3, 5)
    """
    n = len(arr)
    assert all(0 <= i < n - 2 for i in arr)  # all in range [0, n-2)
    assert len(set(arr)) == (n - 2)          # number of unique items
    s1 = (n-2) + (n-1)        # s1 and s2 have ~ 2*(k+1) and 4*(k+1) digits
    s2 = (n-2)**2 + (n-1)**2  # where k is the number of digits in `max(arr)`
    for i, j in enumerate(arr):
        s1 += j - i
        s2 += j*j - i*i
    """
    s1 = (n-2) + (n-1) + sum(arr) - sum(range(n))
       = sum(arr) - sum(range(n-2))
       = sum(range(n-2)) + p + q - sum(range(n-2))
       = p + q
    """
    assert s1 == (sum(arr) - sum(range(n-2)))
    """
    s2 = (n-2)**2 + (n-1)**2 + sum(i*i for i in arr) - sum(i*i for i in range(n))
       = sum(i*i for i in arr) - sum(i*i for i in range(n-2))
       = p*p + q*q
    """
    assert s2 == (sum(i*i for i in arr) - sum(i*i for i in range(n-2)))
    """
    s1 = p+q
    -> s1**2 = (p+q)**2
    -> s1**2 = p*p + 2*p*q + q*q
    -> s1**2 - (p*p + q*q) = 2*p*q
    s2 = p*p + q*q
    -> p*q = (s1**2 - s2)/2

    Let C = p*q = (s1**2 - s2)/2 and B = p+q = s1, then from Viete's theorem it follows
    that p and q are roots of x**2 - B*x + C = 0
    -> p = (B + sqrtD) / 2
    -> q = (B - sqrtD) / 2
       where sqrtD = sqrt(B**2 - 4*C)
    -> p = (s1 + sqrt(2*s2 - s1**2))/2
    """
    sqrtD = (2*s2 - s1**2)**.5
    assert int(sqrtD)**2 == (2*s2 - s1**2)  # perfect square
    sqrtD = int(sqrtD)
    assert (s1 - sqrtD) % 2 == 0            # even
    p = (s1 - sqrtD) // 2
    q = s1 - p
    assert q == ((s1 + sqrtD) // 2)
    assert sqrtD == (q - p)
    return p, q
NOTE: calculating integer square root of a number (~ N**4) makes the above algorithm non-linear.
Since a range is specified, you can perform a radix sort. This would sort your array in O(n). Searching for duplicates in a sorted array is then O(n) as well.
You can use a simple nested for loop:
int[] numArray = new int[] { 1, 2, 3, 4, 5, 7, 8, 3, 7 };
for (int i = 0; i < numArray.Length; i++)
{
    for (int j = i + 1; j < numArray.Length; j++)
    {
        if (numArray[i] == numArray[j])
        {
            //DO SOMETHING
        }
    }
}
Or you can filter the array and use a recursive function if you want to get the count of occurrences:
int[] array = { 1, 2, 3, 4, 5, 4, 4, 1, 8, 9, 23, 4, 6, 8, 9, 1, 4 };
int[] myNewArray = null;
int a = 1;

void GetDuplicates(int[] array)
{
    if (array == null || array.Length == 0) return; // nothing left to count

    for (int i = 0; i < array.Length; i++)
    {
        for (int j = i + 1; j < array.Length; j++)
        {
            if (array[i] == array[j])
            {
                a += 1;
            }
        }
        Console.WriteLine(" {0} occurred {1} time/s", array[i], a);
        // drop every occurrence of array[i] and start over with the rest
        IEnumerable<int> num = from n in array where n != array[i] select n;
        myNewArray = null;
        a = 1;
        myNewArray = num.ToArray();
        break;
    }
    GetDuplicates(myNewArray);
}
answer to 18..
You have an array of 9 elements and the elements start from 0, so the maximum element in your array will be 6. Take the sum of the numbers from 0 to 6 and the sum of the array elements, and compute their difference (say d); this is p + q. Now take the XOR of the numbers from 0 to 6 (say x1) and the XOR of the array elements (say x2). x2 is the XOR of all numbers from 0 to 6 except the two repeated elements, since those cancel each other out. Now, for each element a[i] of the array, treat it as a candidate p and compute q = d - p. XOR p and q together, XOR that with x2, and check whether the result equals x1. Doing this for all elements gives you the elements for which this condition holds, and you are done in O(n). Keep coding!
check this out ...
O(n) time and O(1) space complexity
int xor = 0, x = 0, y = 0;

for (i = 0; i < n; i++)
    xor = xor ^ arr[i];
for (i = 1; i <= n-3; i++)
    xor = xor ^ i;

So in the given example you will get the XOR of 3 and 5.

xor = xor & -xor;   // isolate the lowest set bit

for (i = 0; i < n; i++)
{
    if (arr[i] & xor)
        x = x ^ arr[i];
    else
        y = y ^ arr[i];
}
for (i = 1; i <= n-3; i++)
{
    if (i & xor)
        x = x ^ i;
    else
        y = y ^ i;
}

x and y are your answers.
For each number: check if it exists in the rest of the array.
Without sorting you're going to have a keep track of numbers you've already visited.
in psuedocode this would basically be (done this way so I'm not just giving you the answer):
for each number in the list
if number not already in unique numbers list
add it to the unique numbers list
else
return that number as it is a duplicate
end if
end for each
How about this:
for (i=0; i<n-1; i++) {
for (j=i+1; j<n; j++) {
if (a[i] == a[j]) {
printf("%d appears more than once\n",a[i]);
break;
}
}
}
Sure it's not the fastest, but it's simple and easy to understand, and requires
no additional memory. If n is a small number like 9, or 100, then it may well be the "best". (i.e. "Best" could mean different things: fastest to execute, smallest memory footprint, most maintainable, least cost to develop etc..)
In c:
int arr[] = {2, 3, 6, 1, 5, 4, 0, 3, 5};
int num = 0, i;
for (i=0; i < 8; i++)
num = num ^ arr[i] ^i;
Since x^x=0, the numbers that are repeated odd number of times are neutralized. Let's call the unique numbers a and b.We are left with a^b. We know a^b != 0, since a != b. Choose any 1 bit of a^b, and use that as a mask ie.choose x as a power of 2 so that x & (a^b) is nonzero.
Now split the list into two sublists -- one sublist contains all numbers y with y&x == 0, and the rest go in the other sublist. By the way we chose x, we know that the pairs of a and b are in different buckets. So we can now apply the same method used above to each bucket independently, and discover what a and b are.
I have written a small program which finds the elements that are not repeated. Go through it and let me know your opinion. At the moment I assume the number of elements is even, but it can easily be extended for an odd number of elements too.
The idea is to first sort the numbers and then apply my algorithm. Quick sort can be used to sort the elements.
Let's take an input array as below:
int arr[] = {1,1,2,10,3,3,4,5,5,6,6};
The numbers 2, 10 and 4 are not repeated, and they are in sorted order; if not sorted, use quick sort to sort them first.
Let's apply my program to this:
#include <cstdio>
#include <vector>

using namespace std;

int main()
{
    //int arr[] = {2, 9, 6, 1, 1, 4, 2, 3, 5};
    int arr[] = {1,1,2,10,3,3,4,5,5,6,6};
    int len = sizeof(arr)/sizeof(arr[0]);
    int i = 0;
    vector<int> vec;
    int var = arr[0];

    for(i = 1; i < len; i += 2)
    {
        var = var ^ arr[i];
        if(var != 0)
        {
            // arr[i-1] has no partner: remember it and re-align the pairing
            var = arr[i-1];
            vec.push_back(var);
            i = i - 1;
        }
        if (i + 1 < len)   // guard the read past the end of the array
            var = arr[i+1];
    }

    for(int i = 0; i < vec.size(); i++)
        printf("value not repeated = %d\n", vec[i]);

    return 0;
}
This gives the output:
value not repeated = 2
value not repeated = 10
value not repeated = 4
It's simple and very straightforward: just use XOR.
for (i = 0; i < n-1; i++) {
    if (!(arr[i] ^ arr[i+1]))
        printf("Found repeated number %5d\n", arr[i]);
}
Here is an algorithm that uses order statistics and runs in O(n).
You can solve this by repeatedly calling SELECT with the median as parameter.
You also rely on the fact that after a call to SELECT, the elements that are less than or equal to the median are moved to the left of the median.
Call SELECT on A with the median as the parameter.
If the median value is floor(n/2), then the repeated values are to the right of the median, so you continue with the right half of the array.
Otherwise, a repeated value is to the left of the median, so you continue with the left half of the array.
You continue this way recursively.
For example:
When A={2, 3, 6, 1, 5, 4, 0, 3, 5} n=9, then the median should be the value 4.
After the first call to SELECT
A = {3, 2, 0, 1, <3>, 4, 5, 6, 5}. The median value is smaller than 4, so we continue with the left half.
A={3, 2, 0, 1, 3}
After the second call to SELECT
A = {1, 0, <2>, 3, 3}. The median should be 2, and it is, so we continue with the right half.
A={3, 3}, found.
This algorithm runs in O(n+n/2+n/4+...)=O(n).
What about using a HyperLogLog (https://en.wikipedia.org/wiki/HyperLogLog)?
Redis has one: http://redis.io/topics/data-types-intro#hyperloglogs
A HyperLogLog is a probabilistic data structure used in order to count unique things (technically this is referred to estimating the cardinality of a set). Usually counting unique items requires using an amount of memory proportional to the number of items you want to count, because you need to remember the elements you have already seen in the past in order to avoid counting them multiple times. However there is a set of algorithms that trade memory for precision: you end with an estimated measure with a standard error, in the case of the Redis implementation, which is less than 1%. The magic of this algorithm is that you no longer need to use an amount of memory proportional to the number of items counted, and instead can use a constant amount of memory! 12k bytes in the worst case, or a lot less if your HyperLogLog (We'll just call them HLL from now) has seen very few elements.
Using a nested for loop, and assuming the question is to find the numbers that occur exactly twice in the array:
def repeated(ar, n):
    count = 0
    for i in range(n):
        for j in range(i+1, n):
            if ar[i] == ar[j]:
                count += 1
        if count == 1:
            count = 0
            print("repeated:", ar[i])

arr = [2, 3, 6, 1, 5, 4, 0, 3, 5]
n = len(arr)
repeated(arr, n)
Why do maths (especially solving quadratic equations) when these are costly operations? The best way to solve this is to construct a bitmap with one bit per possible value (values 0 through n-3, so n-2 bits, i.e. ((n-2) + 7) / 8 bytes). It is better to calloc this memory so every single bit is initialized to 0. Then traverse the list and set the corresponding bit to 1 when a number is encountered; if the bit is already set to 1 for that number, then that number is repeated (see the sketch below).
This can be extended to find out whether there is any missing number in the array or not.
This solution is O(n) in time complexity.
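A quick sketch of the bitmap idea in Python (a bytearray stands in for the calloc'ed buffer; values are assumed to lie in [0, n-3] as the question states, and the function name is mine):

def repeated_via_bitmap(arr):
    n = len(arr)
    bits = bytearray((n - 2 + 7) // 8)   # one bit per possible value, all initialized to 0
    repeats = []
    for v in arr:
        byte, mask = v // 8, 1 << (v % 8)
        if bits[byte] & mask:            # bit already set: v is a repeat
            repeats.append(v)
        else:
            bits[byte] |= mask
    return repeats

print(repeated_via_bitmap([2, 3, 6, 1, 5, 4, 0, 3, 5]))  # [3, 5]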
