Say i have the following code
def func(A, n):
    for i = 0 to n-1:
        for k = i+1 to n-1:
            for l = k+1 to n-1:
                if A[i] + A[k] + A[l] = 0:
                    return True
A is an array, and n denotes the length of A.
As I read it, the code checks if any 3 consecutive integers in A sum up to 0. I see the time complexity as
T(n) = (n-2)(n-1)(n-2)+O(1) => O(n^3)
Is this correct, or am I missing something? I have a hard time finding reading material about this (and I own CLRS).
You have the functionality wrong: it checks to see whether any three elements add up to 0. To improve execution time, it considers them only in index order: i < k < l.
You are correct about the complexity. Although each loop takes a short-cut, that short-cut is merely a scalar divisor on the number of iterations. Each loop is still O(n).
As for the coding, you already have most of it done -- and Stack Overflow is not a coding service. Give it your best shot; if that doesn't work and you're stuck, post another question.
If you really want to teach yourself a new technique, look up Python's itertools module. You can use it to generate all the triples as combinations, and then merely check sum(triple) == 0 in each case. In fact, you can use the built-in any function to check whether any one triple sums to 0, which could reduce your function body to a single line of Python code.
I'll leave that research to you. You'll learn other neat stuff on the way.
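For reference, such a check might look roughly like this (a sketch of my own, still leaving the details for you to explore):

from itertools import combinations

def has_zero_triple(A):
    # any() stops at the first triple (with indices i < k < l by construction) that sums to 0
    return any(sum(triple) == 0 for triple in combinations(A, 3))

print(has_zero_triple([3, -1, -2, 7]))  # True: 3 + (-1) + (-2) == 0
print(has_zero_triple([1, 2, 3]))       # False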
Addition for OP's comment.
Let's set n to 4 and look at what happens:
i = 0
for k = 1 to 3
... three iterations of the k loop
i = 1
for k = 2 to 3
... two iterations of the k loop
i = 2
for k = 3 to 3
... one iteration of the k loop
The number of k-loop iterations is the "triangle" number 3 + 2 + 1 = 6, i.e. the number of pairs i < k. In general that is T(n) = n(n-1)/2.
Now propagate the same logic to the l loop. Summing over all (i, k) pairs gives the third-order "pyramid" (tetrahedral) count P(n) = n(n-1)(n-2)/6, the number of triples i < k < l.
For n = 4 that is 4 iterations of the innermost body; for n = 5 it is 10. When you multiply this out, you get a straightforward cubic polynomial in n, hence O(n^3).
Here is the sequence for n=5:
0 1 2
0 1 3
0 1 4
change k
0 2 3
0 2 4
change k
0 3 4
change k
change k
change i
1 2 3
1 2 4
change k
1 3 4
change k
change k
change i
2 3 4
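As a sanity check (my own snippet), you can count those (i, k, l) triples directly and compare against n(n-1)(n-2)/6:

from itertools import combinations

for n in (4, 5):
    triples = list(combinations(range(n), 3))
    print(n, len(triples), n * (n - 1) * (n - 2) // 6)
# prints: 4 4 4, then 5 10 10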
BTW, l is a bad variable name, easily confused with 1.
I found this problem in a hiring contest (which is over now). Here it is:
You are given two natural numbers N and X. You are required to create an array of N natural numbers such that the bitwise XOR of these numbers is equal to X and the sum of all the numbers in the array is as small as possible.
If there exist multiple such arrays, print the smallest one, where
Array A < Array B if A[i] < B[i] for some index i and A[j] = B[j] for all indices j < i.
Sample input: N=3, X=2
Sample output: 1 1 2
Explanation: we have to print 3 natural numbers whose XOR is 2 with the minimum possible sum, thus the answer is [1, 1, 2].
My approach:
If N is odd, I put N-1 ones in the array (so that their XOR is zero) and then put X.
If N is even, I again put N-1 ones and then put X-1 (if X is odd) or X+1 (if X is even).
But this algorithm failed most of the test cases. For example, when N=4 and X=6 my output is
1 1 1 7, but it should be 1 1 2 4.
Does anyone know how to make the array sum minimal?
In order to have the minimum sum, you need to make sure that you are not cancelling bits of X only to recreate them again, because that would increase the sum. For this, you have to create the bits of X one by one, ideally from the end of the array. So, for your example of N=4 and X=6 we have (I use ^ to denote XOR):
X = 6 = 110 (binary) = 2 + 4. Note that 2^4 = 6 as well, because these numbers don't share any common bits. So the output is 1 1 2 4.
So, we start by creating the most significant bits of X from the end of the output array. Then we also have to handle the corner cases for different values of N. Let's go through a number of examples to make the idea clear:
A) X=14, N=5:
X=1110=8+4+2. So, the array is 1 1 2 4 8.
B) X=14, N=6:
X=8+4+2. The array should be 1 1 1 1 2 12.
C) X=15, N=6:
X=8+4+2+1. The array should be 1 1 1 2 4 8.
D) X=15, N=5:
The array should be 1 1 1 2 12.
E) X=14, N=2:
The array should be 2 12. Because 12 = 4^8
So we go as follows. We compute the number of set bits (powers of 2) in X. Let this number be k.
Case 1 - If k >= n (example E): we pick the smallest powers from left to right and merge the remaining ones into the last position of the array.
Case 2 - If k < n (examples A, B, C, D): we compute h = n - k. If h is odd, we set h = n - k + 1. We start by putting h 1's at the beginning of the array. The number of places left is now at most k, so we follow the idea of Case 1 for the remaining positions. Note that in Case 2, instead of adding an odd number of 1's we add an even number of 1's (so they cancel out) and then do some merging at the end. This guarantees that the array is the smallest it can be.
Keep in mind that the key point is to minimize the sum of the array.
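A rough Python sketch of this construction (my own code and naming; degenerate inputs such as X = 0, or cases where the padding leaves no room for the bits, still need the corner-case handling mentioned above):

def min_sum_xor_array(n, x):
    bits = [1 << b for b in range(x.bit_length()) if (x >> b) & 1]  # set bits of x, smallest first
    k = len(bits)
    if k >= n:
        # Case 1: place the n-1 smallest bits, merge the rest into the last slot
        return bits[:n - 1] + [sum(bits[n - 1:])]
    # Case 2: pad with an even number of 1s, then place/merge the bits
    pad = n - k
    if pad % 2 == 1:
        pad += 1                  # keep the padding 1s cancelling each other out
    slots = n - pad               # positions left for the k set bits
    return [1] * pad + bits[:slots - 1] + [sum(bits[slots - 1:])]

print(min_sum_xor_array(5, 14))  # [1, 1, 2, 4, 8]
print(min_sum_xor_array(6, 14))  # [1, 1, 1, 1, 2, 12]
print(min_sum_xor_array(2, 14))  # [2, 12]
print(min_sum_xor_array(4, 6))   # [1, 1, 2, 4] (the OP's failing case)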
First calculate the set bits of X. If the count of set bits is greater than or equal to N, then split X into N integers based on those set bits, like
X = 15, N = 2
the set bits in 15 are 4, so the solution is 1 14
if N = 3 the solution is 1 2 12
This minimizes the array sum too.
In the other case, where the count of set bits is less than N,
calculate difference = N - setbits(X).
If the difference is even, then add ones as needed and apply the above algorithm; all the ones will cancel out.
If the difference is odd, then add ones as well, but now you have to take care of that one extra 1 in the answer array.
Check for the corner cases too.
I implemented the shuffling algorithm as:
import random

a = list(range(1, n+1))  # a contains the elements 1 to n
for i in range(n):
    j = random.randint(0, n-1)
    a[i], a[j] = a[j], a[i]
This algorithm is biased. I just wanted to know, for any n (n ≤ 17), whether it is possible to find which permutation has the highest probability of occurring and which has the lowest probability out of all n! possible permutations. If yes, what are those permutations?
For example n=3:
a = [1,2,3]
There are 3^3 = 27 possible shuffles.
Number of occurrences of each permutation:
1 2 3 = 4
3 1 2 = 4
3 2 1 = 4
1 3 2 = 5
2 1 3 = 5
2 3 1 = 5
P.S. I am not so good with maths.
This is not a proof by any means, but you can quickly come up with the distribution of placement probabilities by running the biased algorithm a million times. It will look like the placement-probability matrix pictured on Wikipedia (the naive-shuffle bias figure; not reproduced here).
An unbiased distribution would have 14.3% (1/7) in every field.
To get the most likely permutation, I think it's safe to just pick the highest percentage for each index. This means it's most likely that the entire array is moved down by one and the first element becomes the last.
Edit: I ran some simulations and this result is most likely wrong. I'll leave this answer up until I can come up with something better.
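For small n you can also settle this exactly rather than by simulation: all n^n equally likely sequences of random choices can be enumerated. A quick sketch of my own (exhaustive, so only practical for small n):

from itertools import product
from collections import Counter

def naive_shuffle_counts(n):
    # count how many of the n**n equally likely choice sequences
    # produce each permutation under the biased swap-with-random-index shuffle
    counts = Counter()
    for choices in product(range(n), repeat=n):
        a = list(range(1, n + 1))
        for i, j in enumerate(choices):
            a[i], a[j] = a[j], a[i]
        counts[tuple(a)] += 1
    return counts

for perm, count in sorted(naive_shuffle_counts(3).items()):
    print(perm, count)
# (1, 2, 3) 4, (1, 3, 2) 5, (2, 1, 3) 5, (2, 3, 1) 5, (3, 1, 2) 4, (3, 2, 1) 4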
Given a set of consecutive numbers from 1 to n, I'm trying to find the number of subsets that do not contain consecutive numbers.
E.g., for the set [1, 2, 3], some possible subsets are [1, 2] and [1, 3]. The former would not be counted while the latter would be, since 1 and 3 are not consecutive numbers.
Here is what I have:
def f(n)
  consecutives = Array(1..n)
  stop = (n / 2.0).round
  (1..stop).flat_map { |x|
    consecutives.combination(x).select { |combo|
      consecutive = false
      combo.each_cons(2) do |l, r|
        consecutive = l.next == r
        break if consecutive
      end
      combo.length == 1 || !consecutive
    }
  }.size
end
It works, but I need it to work faster, under 12 seconds for n <= 75. How do I optimize this method so I can handle high n values no sweat?
I looked at:
Check if array is an ordered subset
How do I return a group of sequential numbers that might exist in an array?
Check if an array is subset of another array in Ruby
and some others. I can't seem to find an answer.
The suggested duplicate is Count the total number of subsets that don't have consecutive elements, although that question is slightly different, as I was asking for this optimization in Ruby and I do not want the empty subset in my answer. That question would have been very helpful had I found it initially, though! But SergGr's answer is exactly what I was looking for.
Although #user3150716's idea is correct, the details are wrong. In particular, you can see that for n = 3 there are 4 subsets: [1], [2], [3], [1,3], while his formula gives only 3. That is because he missed the subset consisting of just [i], and that error accumulates for larger n. Also, I think it is easier to reason about if you start from 1 rather than n. So the correct formulas would be
f(1) = 1
f(2) = 2
f(n) = f(n-1) + f(n-2) + 1
Those formulas are easy to code using a simple loop in constant space and O(n) speed:
def f(n)
  return 1 if n == 1
  return 2 if n == 2
  # calculate
  #   f(n) = f(n-1) + f(n-2) + 1
  # using a simple loop
  v2 = 1
  v1 = 2
  i = 3
  while i <= n do
    i += 1
    v1, v2 = v1 + v2 + 1, v1
  end
  v1
end
You can see this online together with the original code here
This should be pretty fast for any n <= 75. For much larger n you might need some additional tricks, like noticing that f(n) is actually one less than a Fibonacci number:
f(n) = Fib(n+2) - 1
and there is a closed-form formula for Fibonacci numbers that, in theory, can be computed faster for big n.
Let the number of subsets of {i, ..., n} with no consecutive numbers be f(i). Then f(i) is the sum of:
1) f(i+1), the number of such subsets without i in them.
2) f(i+2) + 1, the number of such subsets with i in them (hence leaving i+1 out of the subset).
So,
f(i)=f(i+1)+f(i+2)+1
f(n)=1
f(n-1)=2
f(1) will be your answer.
You can solve it using matrix exponentiation (http://zobayer.blogspot.in/2010/11/matrix-exponentiation.html) in O(log n) time.
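For completeness, here is a rough Python sketch of that O(log n) idea (my own code, not from the linked post), using the observation above that f(n) = Fib(n+2) - 1:

def mat_mult(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def mat_pow(m, e):
    result = [[1, 0], [0, 1]]  # identity matrix
    while e > 0:
        if e & 1:
            result = mat_mult(result, m)
        m = mat_mult(m, m)
        e >>= 1
    return result

def fib(n):
    # [[1,1],[1,0]]**n = [[Fib(n+1), Fib(n)], [Fib(n), Fib(n-1)]]
    return mat_pow([[1, 1], [1, 0]], n)[0][1]

def f(n):
    return fib(n + 2) - 1

print(f(3), f(4))  # 4 7, matching the recurrence above; f(75) takes only O(log n) matrix multiplications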
Suppose I have two places: -, -. Each of these places has a max limit; say the first place has a max limit of 3 and the second place has a max limit of 7.
I have two other numbers: totalmaxlimit and totalminlimit.
Ex: totalmaxlimit = 6
totalminlimit = 3
I want to write code that fills the above two places with all possible combinations such that the sum of the two places is greater than or equal to 3 and less than or equal to 6.
Example:
3 0
3 1
2 0
2 1
2 4
Also,
2 6 would be a wrong result because the sum is greater than totalmaxlimit.
4 2 is also wrong, as the first place has a max limit of 3.
Code in any language is fine. Thanks in advance.
Let's assume that:
1) The place is given by A, B coordinates.
2) You have a totalMin (m) and a totalMax (M).
3) The rules are that A, B, and A+B should be >= m and <= M.
4) The number of possible values is M - m + 1 (e.g., if M = 10 and m = 0, we have 11 valid values).
You can count permutations using the formula P = n! / (n-k)!, where n is the number of values and k is how many of them you pick.
So, for example, if m = 0 and M = 6:
Permutations = (0,6), (1,5), (2,4), etc...
Basically you have the sums (X_n, X_M) where, as n grows, M decreases: M = M - n.
I hope this helps for now, but I can provide a more 'professional' formula if you like. The language could be Python because I think it would be easier. But first you need to get the algorithm; turning it into code is trivial.
Here's the code:
def permutations(total_min, total_max, first_max, second_max):
    # enumerate both places from 0 up to their own max limits,
    # keeping only pairs whose sum lies in [total_min, total_max]
    results = []
    for i in range(first_max + 1):
        for j in range(second_max + 1):
            if total_min <= i + j <= total_max:
                results.append([i, j])
    print(results)

permutations(3, 6, 3, 7)  # the question's example: place limits 3 and 7, sum between 3 and 6
I'm looking for an algorithm that generates all permutations of a set. To make it easier, the set is always [0, 1..n]. There are many ways to do this and it's not particularly hard.
What I also need is the number of inversions of each permutation.
What is the fastest (in terms of time complexity) algorithm that does this?
I was hoping that there's a way to generate those permutations that produces the number of inversions as a side-effect without adding to the complexity.
The algorithm should generate lists, not arrays, but I'll accept array based ones if it makes a big enough difference in terms of speed.
Plus points (...there are no points...) if it's functional and is implemented in a pure language.
There is the Steinhaus–Johnson–Trotter algorithm, which makes it easy to keep an inversion count during permutation generation. Excerpt from the Wikipedia article:
Thus, from the single permutation on one element,
1
one may place the number 2 in each possible position in descending
order to form a list of two permutations on two elements,
1 2
2 1
Then, one may place the number 3 in each of three different positions
for these two permutations, in descending order for the first
permutation 1 2, and then in ascending order for the permutation 2 1:
1 2 3
1 3 2
3 1 2
3 2 1
2 3 1
2 1 3
At every step of the recursion we insert the biggest number into a list of smaller numbers. It is obvious that this insertion adds M new inversions, where M is the insertion position (counting from the right). For example, if we have the list 3 1 2 (2 inversions) and we insert 4:
3 1 2 4 //position 0, 2 + 0 = 2 inversions
3 1 4 2 //position 1, 2 + 1 = 3 inversions
3 4 1 2 //position 2, 2 + 2 = 4 inversions
4 3 1 2 //position 3, 2 + 3 = 5 inversions
pseudocode:
function Generate(List, Count)
    N = List.Length
    if N = N_Max then
        Output(List, 'InvCount = ': Count)
    else
        for Position = 0 to N do
            // insert the next value (N + 1) at index N - Position,
            // which adds Position new inversions
            Generate(List.Insert(N + 1, N - Position), Count + Position)
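A possible Python rendering of that recursion (a sketch of my own; it yields each permutation of 1..n_max together with its inversion count, although it does not reproduce the alternating left/right Steinhaus–Johnson–Trotter visiting order):

def permutations_with_inversions(n_max):
    def generate(perm, inversions):
        n = len(perm)
        if n == n_max:
            yield tuple(perm), inversions
            return
        new = n + 1                   # next (biggest) value to insert
        for pos in range(n + 1):      # pos = number of smaller elements to the right of `new`
            candidate = perm[:n - pos] + [new] + perm[n - pos:]
            yield from generate(candidate, inversions + pos)
    yield from generate([], 0)

for perm, inv in permutations_with_inversions(3):
    print(perm, inv)  # (1, 2, 3) 0 ... (3, 2, 1) 3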
P.S. A recursive method is not mandatory here, but I suspect it is natural for functional folks.
P.P.S. If you are worried about inserting into lists, consider the "Even's speedup" section, which uses only exchanges of neighbouring elements; every exchange increments or decrements the inversion count by 1.
Here is an algorithm that does the task, is amortized O(1) per permutation, and generates an array of (linked list, inversion count) tuples whose lists share as much memory as they reasonably can.
I'll implement all except the linked list bit in untested Python. Though Python would be a bad language for a real implementation.
def permutations(sorted_list):
    answer = []

    def add_permutations(reversed_sublist, tail_node, inversions):
        if 0 == len(reversed_sublist):
            answer.append((tail_node, inversions))
        else:
            for idx, val in enumerate(reversed_sublist):
                add_permutations(
                    [x for x in reversed_sublist if x != val],
                    ListNode(val, tail_node),  # ListNode/EmptyListNode are the linked-list bit left out
                    inversions + idx
                )

    add_permutations(list(reversed(sorted_list)), EmptyListNode(), 0)
    return answer
You might wonder at my claim of amortized O(1) work given all of this copying. That's because when m elements are left we do O(m) work and then amortize it over the m! permutations beneath that call. Summed over the higher levels, this converges to a constant cost per bottom-level call, and we need one bottom-level call per permutation.
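The linked-list bit that the answer leaves out could look something like this (one possible minimal version; the class names are just the ones assumed in the code above):

class EmptyListNode:
    def to_list(self):
        return []

class ListNode:
    def __init__(self, value, tail):
        self.value = value
        self.tail = tail  # tails are shared between sibling permutations
    def to_list(self):
        return [self.value] + self.tail.to_list()

# usage with the permutations() function above:
# for node, inversions in permutations([1, 2, 3]):
#     print(node.to_list(), inversions)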