I am given a finite, non-decreasing array. I can add zero or more to each individual element of the array. What is the minimum number of additions required to ensure that the gcd of the array is strictly greater than 1? The non-decreasing condition must still hold afterwards.
Well the trivial thing to do is:
Check whether the gcd() of the array is actually 1. If it's bigger, you're done already.
Otherwise, round each odd number up to the next even number. This will not break the order of the elements (i.e. the non-decreasing property will continue to hold once you are done with the entire array in this way).
Et Voilà: the gcd() of the array is now at least 2.
Number of additions performed:
Best case scenario: zero additions.
Worst case scenario: length(array) additions.
Average case, based on a 50% chance of each element being odd: roughly 0.5 * length(array) additions in expectation.
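A minimal sketch of this in Python (the function name and return format are my own):

```python
from math import gcd
from functools import reduce

def min_additions_for_gcd_gt_1(arr):
    """Round every odd element up to the next even number and count how many
    elements had to change.  The array stays non-decreasing because an odd
    element x <= an even element y implies x < y, hence x + 1 <= y."""
    if reduce(gcd, arr) > 1:
        return 0, list(arr)              # gcd already > 1, nothing to do
    fixed = [x + (x % 2) for x in arr]   # odd -> next even, even unchanged
    additions = sum(1 for x in arr if x % 2)
    return additions, fixed

print(min_additions_for_gcd_gt_1([3, 4, 7, 10]))   # (2, [4, 4, 8, 10])
```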
I have a question where I have an array that either contains only 0s or is half 0s and half 1s. I need to suggest a randomized algorithm that decides whether the array has any 1s in it.
It needs to have a 75% chance of success, so it may fail on at most 1/4 of the inputs, and it needs to run in O(1).
The size of the array can be any n.
I would really appreciate some help or ideas on this question.
Assume n is even (otherwise you need to be more specific about what "half 0 and half 1" means).
Look at two (different) random elements in the array, and guess that the array is all zeroes if they're both zero. (If either of your elements is a one, you know that the array can't be all zeros, so must be half-and-half).
You always guess right when the array is all zeros. When the array is half 1s and both of your selected elements are 0, you guess wrong. Assuming the array is half 1s, that happens with probability (n/2)/n * (n/2 - 1)/(n - 1) = (n - 2)/(4(n - 1)). This is always less than 25%, so even in the worst case you succeed with probability at least 75%, as required.
I note that there's not really any choice about what algorithm to use here -- O(1) means you can only look at a bounded number of elements, and the only ambiguous case is when you see all zeros. The only questions are how many elements to look at and what success probability that gives.
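A minimal sketch of that two-sample test in Python (names are mine, and it assumes n >= 2):

```python
import random

def probably_all_zeros(arr):
    """Sample two distinct random positions and guess 'all zeros' only if
    both samples are 0; any 1 proves the array is half-and-half."""
    i, j = random.sample(range(len(arr)), 2)
    return arr[i] == 0 and arr[j] == 0
```

Answering "contains 1s" whenever this returns False is always correct; returning True on a half-and-half array is the only failure mode, and that happens with probability below 1/4 as computed above.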
Suppose two arrays A and B are given. A consists of integers and B consists of 0s and 1s.
Now an operation is given: you can choose any two adjacent bits in array B and toggle both of them (for example, 00->11, 01->10, 10->01, 11->00), and you can perform this operation any number of times.
The output should be the maximum possible value of the sum A[0]*B[0] + A[1]*B[1] + ... + A[N-1]*B[N-1].
During the interview, my approach to this problem was to get the maximum number of 1s into array B in order to maximize the sum.
To do that, I first calculated the total number of 1s in B in O(n) time. Let count = number of 1s = x.
Then I started traversing the array and toggled a pair only when it increased the count or when the elements of array A made it worthwhile (for example: let B[i]=0 and B[i+1]=1 with A[i]=51 and A[i+1]=50;
I would toggle B[i], B[i+1] because A[i] > A[i+1]).
But the interviewer was not quite satisfied with my approach and asked me to come up with a less time-complex algorithm.
Can anyone suggest a better approach with lower time complexity?
You can create any B-vector with an even number of flipped bits just by repeatedly flipping the first bit that is in the wrong state.
So, pick all the positive numbers in A. If the count of picked numbers has a different parity than the number of 1s in B, either drop the smallest positive number or additionally pick the non-positive number closest to 0, whichever costs less. If you can't pick any positive number at all, because B has an odd number of 1s and A is all negative, just pick the negative number closest to 0.
Then turn on all the bits corresponding to the numbers you chose, and turn off the other ones.
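A hedged sketch of how the resulting sum could be computed in Python, with the parity fix-up taking whichever of the two repairs costs less (the function name and structure are my own):

```python
def max_dot_product(A, B):
    """Flipping two adjacent bits never changes the parity of the number of
    1s in B, and (as argued above) any B with the right parity is reachable.
    So take every positive A[i], then fix the parity as cheaply as possible."""
    target_parity = sum(B) % 2
    positives = [x for x in A if x > 0]
    best = sum(positives)
    if len(positives) % 2 == target_parity:
        return best
    # Parity is off: either drop the smallest positive, or also take the
    # non-positive value closest to zero, whichever loses less.
    options = []
    if positives:
        options.append(best - min(positives))
    non_pos = [x for x in A if x <= 0]
    if non_pos:
        options.append(best + max(non_pos))
    return max(options)

print(max_dot_product([5, 10, -1], [1, 0, 0]))   # 14: take 5, 10 and -1
```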
I have a list which contains random numbers, each >= 0. Now I have to divide the list into 2 equal parts (assume the list contains an even number of elements) such that all the numbers in the first list are less than or equal to the numbers in the second list. This can easily be done by any sorting mechanism in O(n log n). But I don't need the data in the two equal-length lists to be sorted. The only condition is that all elements in the first list <= all elements in the second list.
So is there a way or hack to reduce the complexity, since we don't require sorted data here?
If the problem is actually solvable (the data is right), you can find the median using a selection algorithm. Once you have it, create two equally sized arrays and iterate over the original list element by element, putting each element into one of the new lists depending on whether it's bigger or smaller than the median. This should run in linear time.
#Edit: as gen-y-s pointed out, if you write the selection algorithm yourself or use a proper library, it may already partition the input list, so there is no need for the second pass.
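Here is a rough Python sketch of that quickselect-style idea, partitioning around random pivots instead of sorting (expected linear time; the function name is mine):

```python
import random

def split_half(nums):
    """Split nums (even length) into (low, high) so that every element of
    low is <= every element of high, without a full O(n log n) sort."""
    k = len(nums) // 2               # target size of the low half
    low, high = [], []
    rest = list(nums)
    while rest:
        pivot = random.choice(rest)
        less    = [x for x in rest if x < pivot]
        equal   = [x for x in rest if x == pivot]
        greater = [x for x in rest if x > pivot]
        if len(low) + len(less) >= k:
            high += equal + greater          # everything >= pivot is safe on the right
            rest = less
        elif len(low) + len(less) + len(equal) >= k:
            need = k - len(low) - len(less)  # pivot copies the low half still needs
            low  += less + equal[:need]
            high += equal[need:] + greater
            rest = []
        else:
            low  += less + equal             # everything <= pivot fits on the left
            rest = greater
    return low, high

print(split_half([7, 1, 5, 3, 9, 9]))   # e.g. ([1, 3, 5], [9, 9, 7]); order within halves is arbitrary
```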
I was solving problems from the Codeforces practice problem archive.
I am not able to find an efficient solution.
How should I solve the following problem?
I can only think of a brute-force solution.
Polycarpus has an array, consisting of n integers a1, a2, ..., an. Polycarpus likes it when numbers in an array match. That's why he wants the array to have as many equal numbers as possible. For that Polycarpus performs the following operation multiple times:
he chooses two elements of the array ai, aj (i ≠ j);
he simultaneously increases number ai by 1 and decreases number aj by 1, that is, executes ai = ai + 1 and aj = aj - 1.
The given operation changes exactly two distinct array elements. Polycarpus can apply the described operation an infinite number of times.
Now he wants to know what maximum number of equal array elements he can get if he performs an arbitrary number of such operation. Help Polycarpus.
Input
The first line contains an integer n (1 ≤ n ≤ 10^5) — the array size. The second line contains space-separated integers a1, a2, ..., an (|ai| ≤ 10^4) — the original array.
Output
Print a single integer — the maximum number of equal array elements he can get if he performs an arbitrary number of the given operation.
Sample test(s)
input
2
2 1
output
1
input
3
1 4 1
output
3
Find the sum of all the elements.
If sum % n == 0, the answer is n; otherwise it is n-1.
EDIT: Explanation:
First of all, it is very easy to spot that the answer is at least n-1. It cannot be less.
Proof: Choose any number that you wish to make your target, and take the last index n. Now make a1 = target by applying the operation to a1 and an, then similarly to a2 and an, and so on. So all numbers except the last one become equal to the target.
Now we need to see that if sum % n == 0 then all n elements are possible. Clearly you can choose your target as the mean of all the numbers. You can apply the operation by choosing one index with value less than the mean and another with value greater than the mean, and make one of them (possibly both) equal to the mean.
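That observation makes the solution a one-liner; a quick Python sketch checked against the two samples above:

```python
def max_equal_elements(a):
    """The operation preserves the sum, so all n elements can be made equal
    iff sum(a) is divisible by n; otherwise n-1 is always achievable."""
    return len(a) if sum(a) % len(a) == 0 else len(a) - 1

print(max_equal_elements([2, 1]))     # 1
print(max_equal_elements([1, 4, 1]))  # 3
```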
Suppose two Strings are given:
String s1= "MARTHA"
String s2= "MARHTA"
Here we exchange the positions of T and H. I am interested in writing code which counts how many such swaps are necessary to transform one String into the other.
There are several edit distance algorithms; the given Wikipedia link has links to a few.
Assuming that the distance counts only swaps, here is an idea based on permutations that runs in linear time.
The first step of the algorithm is ensuring that the two strings are really equivalent in their character contents. This can be done in linear time using a hash table (or a fixed array that covers all the alphabet). If they are not, then s2 can't be considered a permutation of s1, and the "swap count" is irrelevant.
The second step counts the minimum number of swaps required to transform s2 into s1. This can be done by inspecting the permutation p that corresponds to the transformation from s1 to s2. For example, if s1="abcde" and s2="badce", then p=(2,1,4,3,5), meaning that position 1 contains element #2, position 2 contains element #1, etc. This permutation can be broken up into permutation cycles in linear time. The cycles in the example are (2,1), (4,3) and (5). The minimum swap count is the total count of the swaps required per cycle. A cycle of length k requires k-1 swaps to "fix" it. Therefore, the number of swaps is N-C, where N is the string length and C is the number of cycles. In our example, the result is 2 (swap 1,2 and then 3,4).
Now, there are two problems here, and I think I'm too tired to solve them right now :)
1) My solution assumes that no character is repeated, which is not always the case. Some adjustment is needed to calculate the swap count correctly.
2) My formula #MinSwaps=N-C needs a proof... I didn't find one on the web.
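For what it's worth, a small Python sketch of the cycle-counting idea under the no-repeated-characters assumption (names are mine):

```python
from collections import Counter

def min_swaps(s1, s2):
    """Minimum swaps turning s2 into s1, assuming both strings contain the
    same characters with no repeats (caveat 1 above).  The answer is N - C,
    where C is the number of cycles in the permutation."""
    if Counter(s1) != Counter(s2):
        raise ValueError("not permutations of each other")
    pos_in_s2 = {c: i for i, c in enumerate(s2)}
    perm = [pos_in_s2[c] for c in s1]      # where each character of s1 sits in s2
    seen, cycles = [False] * len(s1), 0
    for i in range(len(perm)):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:             # walk the whole cycle once
                seen[j] = True
                j = perm[j]
    return len(s1) - cycles

print(min_swaps("abcde", "badce"))   # 2, matching the example above
```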
Your problem is not so easy, since before counting the swaps you need to ensure that every swap reduces the "distance" (in equality) between the two strings. You are not just looking for a count, but for the smallest count (or at least I suppose so), otherwise there exist infinitely many ways to swap one string into another.
You should first check which characters are already in place; then, for every character that is not, look for a pair that can be swapped so that the distance between the strings is reduced. Iterate until the process is finished.
If you don't want to actually perform the swaps but just count them, use a bit array with a 1 for every well-placed character and a 0 otherwise. You are finished when every bit is 1.
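A rough Python sketch of that greedy process (names are mine; it assumes the two strings contain the same characters, and with repeated characters it is a heuristic rather than a guaranteed minimum):

```python
def greedy_swap_count(s1, s2):
    """Repeatedly pick a swap in s2 that reduces the number of misplaced
    characters, preferring swaps that fix two positions at once.
    Assumes s1 and s2 are anagrams of each other."""
    s2 = list(s2)
    swaps = 0
    for i, want in enumerate(s1):
        if s2[i] == want:
            continue                       # already well placed
        # candidates that would put the wanted character at position i
        candidates = [j for j in range(i + 1, len(s2)) if s2[j] == want]
        # prefer a j whose swap also fixes position j
        j = next((j for j in candidates if s1[j] == s2[i]), candidates[0])
        s2[i], s2[j] = s2[j], s2[i]
        swaps += 1
    return swaps

print(greedy_swap_count("MARTHA", "MARHTA"))   # 1
```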