Why is the time complexity of this permutation function O(n!)? - algorithm

Consider the following code.
public class Permutations {
    static int count = 0;

    static void permutations(String str, String prefix) {
        if (str.length() == 0) {
            System.out.println(prefix);
        } else {
            for (int i = 0; i < str.length(); i++) {
                count++;
                String rem = str.substring(0, i) + str.substring(i + 1);
                permutations(rem, prefix + str.charAt(i));
            }
        }
    }

    public static void main(String[] args) {
        permutations("abc", "");
        System.out.println(count);
    }
}
The logic, as I understand it, is this: the function considers each character of the string as a possible prefix and permutes the remaining n-1 characters.
By that logic the recurrence relation comes out to be
T(n) = n( c1 + T(n-1) ) // ignoring the print time
which is obviously O(n!). But when I used a count variable to see whether the algorithm really grows in the order of n!, I found different results.
For a 2-length string, count++ (inside the for loop) runs 4 times; for a 3-length string the value of count comes to 15, and for 4- and 5-length strings it is 64 and 325.
That means it grows worse than n!. So why is it said that this (and similar algorithms which generate permutations) are O(n!) in terms of run time?

People say this algorithm is O(n!) because there are n! permutations, but what you are counting here are (in a sense) function calls - and there are more function calls than n!:
When str.length() == n, you do n calls;
For each of these n calls with str.length() == n - 1, you do n - 1 calls;
For each of these n * (n - 1) calls with str.length() == n - 2 you do n - 2 calls;
...
You do n!/k! calls with an input str of length k¹, and since the length goes from n down to 0, the total number of calls is:
sum_{k = 0 .. n} (n!/k!) = n! * sum_{k = 0 .. n} (1/k!)
But as you may know:
sum_{k = 0 .. +oo} (1/k!) = e^1 = e
So basically, this sum is always less than the constant e (and greater than 1), so you can say that the number of calls is O(e·n!), which is O(n!).
Runtime complexity often differs from theoretical complexity. In theoretical complexity, people want to know the number of permutations because the algorithm is probably going to check each of these permutations (so there are effectively n! checks done), but in reality much more is going on.
¹ This formula will actually give you one more than the values you got, since your counter does not account for the initial function call.
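
As a quick sanity check of that formula (my own sketch, not part of the answer; the helper names total_calls and measured_count are mine), here is a small Python script that reproduces the question's counter and compares it with sum_{k = 0 .. n} (n!/k!):

from math import factorial

def total_calls(n):
    # sum over k = 0..n of n!/k! -- counts every call to permutations(),
    # including the initial one made from main()
    return sum(factorial(n) // factorial(k) for k in range(n + 1))

def measured_count(n):
    # reproduces the question's counter: count++ runs once per recursive call,
    # so the very first call is not counted
    count = 0
    def permutations(s):
        nonlocal count
        for i in range(len(s)):
            count += 1
            permutations(s[:i] + s[i + 1:])
    permutations("x" * n)
    return count

for n in range(2, 6):
    print(n, measured_count(n), total_calls(n), factorial(n))
# prints 4/5, 15/16, 64/65, 325/326 next to n! = 2, 6, 24, 120:
# the formula is exactly one more than the measured count, and both stay below 3 * n!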

This answer is for people like me who don't remember that e = 1/0! + 1/1! + 1/2! + 1/3! + ...
I can explain using a simple example. Say we want all the permutations of "abc":
    /     |     \        <--- for first position, there are 3 choices
  / \    / \    / \      <--- for second position, there are 2 choices
 /   \  /   \  /   \     <--- for third position, there is only 1 choice
Above is the recursion tree, and we know that there are 3! leaf nodes, which represent all possible permutations of "abc" (and are also where we perform an action on the result, i.e. print()). But since you are counting all function calls, we need to know how many tree nodes there are in total (leaf + internal).
If it were a complete binary tree, we know there would be 2^n leaf nodes... how many internal nodes?
x = |__________leaf_____________|------------------------|
let this represent 2^n leaf nodes; |----| represents the max number of
nodes in the levels above. Since each node has 1 parent, the 2nd last level
cannot have more nodes than the leaf level
since it's binary, we know second last level = (1/2)leaf
x = |__________leaf_____________|____2nd_____|-----------|
same for the third last level... which is (1/2)2nd
x = |__________leaf_____________|____2nd_____|__3rd_|----|
x can be used to represent the total number of tree nodes, and since we keep cutting the remaining |-----| part in half, we know that total <= 2*leaf
Now for the permutation tree:
x = |____leaf____|------------|
let this represent n! leaf nodes
since each node in its second last level has only 1 branch, we know second last level = leaf
x = |____leaf____|____2nd_____|-------------|
but third last level has 2 branches for each node, thus = (1/2)second
x = |____leaf____|____2nd_____|_3rd_|-------|
fourth last level has 3 branches for each node, thus = (1/3)third
x = |____leaf____|____2nd_____|_3rd_|_4|--| |
| | means we will no longer consider it
Here we see that total < 3*leaf, which is as expected (e ≈ 2.718).
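
If you prefer numbers to pictures, here is a small Python check of that level-by-level count (my own snippet, not the answerer's): depth d of the permutation tree has n!/(n-d)! nodes, and the total stays below e * n!.

from math import factorial

def level_sizes(n):
    # number of nodes at depth d of the permutation recursion tree:
    # n * (n-1) * ... * (n-d+1) = n! / (n-d)!
    return [factorial(n) // factorial(n - d) for d in range(n + 1)]

for n in (3, 5, 8):
    sizes = level_sizes(n)
    print(n, sizes[-1], sum(sizes), sum(sizes) / factorial(n))
# the last column (total nodes / leaf nodes) approaches e ~ 2.718 and never reaches 3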

Related

Search a word in a matrix runtime complexity

Trying to analyze the runtime complexity of the following algorithm:
Problem: We have an m * n array A consisting of lowercase letters and a target string s. The goal is to examine whether the target string appears in A or not.
algorithm:
for(int i = 0; i < m; i++){
    for(int j = 0; j < n; j++){
        if(A[i][j] is equal to the starting character in s) search(i, j, s)
    }
}

boolean search(int i, int j, target s){
    if(the current position relative to s is the length of s) then we find the target
    looping through the four possible directions starting from i, j:
        {p,q} = {i+1, j} or {i-1, j} or {i, j+1} or {i, j-1}, if the coordinate is never visited before
            search(p, q, target s)
}
One runtime complexity analysis that I read is the following:
At each position in the array A, we are first presented with 4 possible directions to explore. After the first round, we are only given 3 possible choices because we can never go back. So the worst-case runtime complexity is O(m * n * 3**len(s)).
However, I disagree with this analysis: even though we are only presented with 3 possible choices each round, we still need to spend one operation to check whether that direction has been visited before. For instance, in java you probably just use a boolean array to track whether one spot has been visited before, so in order to know whether a spot has been visited or not, one needs a conditional check, and that costs one operation. The analysis I mentioned does not seem to take this into account.
What should be the runtime complexity?
update:
Let us suppose that the length of the target string is l and the runtime complexity at a given position in the matrix is T(l). Then we have:
T(l) = 4T(l-1) + 4 = 4(3T(l-2) + 4) + 4 = 4(3(3T(l-3) + 4) + 4) + 4 = ... = 4 * 3**(l-1) + 4 + 4*4 + 4*3*4 + ...
The +4 comes from the fact that we loop through four directions in each round, besides the function recursively calling itself three times.
What should be the runtime complexity?
The mentioned analysis is correct and the complexity is indeed O(m * n * 3**len(s)).
For instance, in java you probably just use a boolean array to track whether one spot has been visited before, so in order to know whether a spot has been visited or not, one needs a conditional check, and that costs one operation.
That is correct and does not contradict the analysis.
The worst case we can construct is a matrix filled with only the letter a and a string aaaa....aaaax (many letters a and one x at the end). If m, n and len(s) are large enough, almost every call of the search function will generate 3 recursive calls of itself. Each of those calls will generate another 3 calls (which gives us 9 calls of depth 2 in total), each of them will generate another 3 calls (which gives us 27 calls of depth 3 in total), and so on. Checking the current string character, the conditional checks, and spawning a recursive call are all O(1), so the complexity of the whole search function is O(3**len(s)).
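
To see that growth concretely, here is a rough Python sketch (my own construction, not from the answer) that counts search calls on exactly that worst case, an all-'a' board with target 'aaa...ax':

def count_calls(m, n, word):
    # board filled with 'a'; word ends with 'x', so every search runs deep and still fails
    board = [['a'] * n for _ in range(m)]
    calls = 0
    visited = set()

    def search(r, c, i):
        nonlocal calls
        calls += 1
        if i == len(word):
            return True
        if (r < 0 or c < 0 or r >= m or c >= n
                or (r, c) in visited or board[r][c] != word[i]):
            return False
        visited.add((r, c))
        found = (search(r + 1, c, i + 1) or search(r - 1, c, i + 1) or
                 search(r, c + 1, i + 1) or search(r, c - 1, i + 1))
        visited.remove((r, c))
        return found

    for r in range(m):
        for c in range(n):
            if board[r][c] == word[0]:
                search(r, c, 0)
    return calls

for length in range(2, 7):
    word = 'a' * (length - 1) + 'x'
    print(length, count_calls(5, 5, word), 25 * 3 ** length)
# the measured count grows roughly by a factor of 3 per extra character,
# staying below the m * n * 3**len(s) bound used in the analysis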
The solution is brute force. We have to touch each point on the board. That makes O(m*n) operations.
Now for each point, we have to run dfs() to check if the word exists. So we get
O(m * n * timeComplexityOf dfs)
This is a DFS written in Python; examine the time complexity:
def dfs(r, c, i):
    # O(1)
    if i == len(word):
        return True
    # O(1)
    # The path set is implemented as a hash table,
    # so the time complexity of a lookup in it is O(1).
    if r < 0 or c < 0 or r >= ROWS or c >= COLS or word[i] != board[r][c] or (r, c) in path:
        return False
    # O(1)
    path.add((r, c))
    # O(1)
    res = (dfs(r + 1, c, i + 1) or
           dfs(r - 1, c, i + 1) or
           dfs(r, c + 1, i + 1) or
           dfs(r, c - 1, i + 1))
    # O(1)
    path.remove((r, c))
    return res
Since dfs recursively calls itself, think about how many dfs calls will be on the call stack; in the worst case it will be the length of the word. That's why
O(m * n * word.length)

What is the time complexity of this BFS algorithm?

I looked at LeetCode question 279. Perfect Squares:
Given an integer n, return the least number of perfect square numbers that sum to n.
A perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself. For example, 1, 4, 9, and 16 are perfect squares while 3 and 11 are not.
Example 1:
Input: n = 12
Output: 3
Explanation: 12 = 4 + 4 + 4.
I solved it using the following algorithm:
def numSquares(n):
    squares = [i**2 for i in range(1, int(n**0.5)+1)]
    step = 1
    queue = {n}
    while queue:
        tempQueue = set()
        for node in queue:
            for square in squares:
                if node-square == 0:
                    return step
                if node < square:
                    break
                tempQueue.add(node-square)
        queue = tempQueue
        step += 1
It basically tries to go from the goal number to 0 by subtracting each possible number, which are: [1, 4, 9, ..., sqrt(n)], and then does the same work for each of the numbers obtained.
Question
What is the time complexity of this algorithm? The branching in every level is sqrt(n) times, but some branches are destined to end early... which makes me wonder how to derive the time complexity.
If you think about what you're doing, you can imagine that you're doing a breadth-first search over a graph with n + 1 nodes (all the natural numbers between 0 and n, inclusive) and some number of edges m, which we'll determine later on. Your graph is essentially represented as an adjacency list, since at each point you iterate over all the outgoing edges (squares less than or equal to your number) and stop as soon as you consider a square that's too large. As a result, the runtime will be O(n + m), and all we have to do now is work out what m is.
(There's another cost here in computing all the square roots up to and including n, but that takes time O(n^(1/2)), which is dominated by the O(n) term.)
If you think about it, the number of outgoing edges from each number k will be given by the number of perfect squares less than or equal to k. That value is equal to ⌊√k⌋ (check this for a few examples - it works!). This means that the total number of edges is upper-bounded by
√0 + √1 + √2 + ... + √n
We can show that this sum is Θ(n^(3/2)). First, we'll upper-bound this sum at O(n^(3/2)), which we can do by noting that
√0 + √1 + √2 + ... + √n
≤ √n + √n + √n + ... + √n   (n+1 times)
= (n + 1)√n
= O(n^(3/2)).
To lower-bound this at Ω(n^(3/2)), notice that
√0 + √1 + √2 + ... + √n
≥ √(n/2) + √(n/2 + 1) + ... + √n   (drop the first half of the terms)
≥ √(n/2) + √(n/2) + ... + √(n/2)
= (n/2)√(n/2)
= Ω(n^(3/2)).
So overall, the number of edges is Θ(n^(3/2)), so using a regular analysis of breadth-first search we can see that the runtime will be O(n^(3/2)).
This bound is likely not tight, because this assumes that you visit every single node and every single edge, which isn't going to happen. However, I'm not sure how to tighten things much beyond this.
As a note - this would be a great place to use A* search instead of breadth-first search, since you can fairly easily come up with heuristics to underestimate the remaining total distance (say, take the number and divide it by the largest perfect square less than it). That would cause the search to focus on extremely promising paths that jump rapidly toward 0 before less-good paths, like, say, always taking steps of size one.
Hope this helps!
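
If you want to experiment with that A* suggestion, here is a rough sketch (my own code and heuristic choice, not the poster's): the heuristic divides the remaining value by the largest perfect square not exceeding it, which never overestimates the number of squares still needed.

import heapq
from math import isqrt

def num_squares_astar(n):
    # state = remaining value, g = number of squares used so far,
    # h(x) = ceil(x / largest square <= x): an admissible estimate of what is left
    def h(x):
        if x == 0:
            return 0
        s = isqrt(x) ** 2
        return -(-x // s)          # ceiling division

    best = {n: 0}
    heap = [(h(n), 0, n)]
    while heap:
        f, g, x = heapq.heappop(heap)
        if x == 0:
            return g
        if g > best.get(x, g):
            continue               # stale queue entry
        for i in range(isqrt(x), 0, -1):
            y = x - i * i
            if g + 1 < best.get(y, float('inf')):
                best[y] = g + 1
                heapq.heappush(heap, (g + 1 + h(y), g + 1, y))

print(num_squares_astar(12))   # 3  (4 + 4 + 4)
print(num_squares_astar(13))   # 2  (4 + 9)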
Some observations:
The number of squares up to n is √n (floored to the nearest integer)
After the first iteration of the while loop, tempQueue will have √n entries
tempQueue can never have more than n entries, since all these values are positive, less than n and unique.
Every natural number can be written as the sum of four integer squares. So that means your BFS algorithm's while loop will iterate at the most 4 times. If the return statement did not get executed during any of the first 3 iterations, it is guaranteed it will in the 4th.
Every statement (except for the initialisation of squares) runs in constant time, even the call to .add().
The initialisation of squares has a list comprehension loop that has √n iterations, and range runs in constant time, so that initialisation has a time complexity of O(√n).
Now we can set a ceiling to the number of times the if node-square == 0 statement is executed (or any other statement in the innermost loop's body):
1⋅√n + √n⋅√n + n⋅√n + n⋅√n
Each of the 4 terms corresponds to an iteration of the while loop. The left factor of each product corresponds to the maximum size of queue in that particular iteration, and the factor at the right corresponds to the size of squares (always the same). This simplifies to:
√n + n + 2n^(3/2)
In terms of time complexity this is:
O(n^(3/2))
This is the worst case time complexity. When the while loop only has to iterate twice, it is O(n), and when only once (when n is a square), it is O(√n).
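
A quick empirical check of the four-squares observation above (my own snippet, a lightly renamed copy of the question's function): the step count returned never exceeds 4 over a whole range of inputs.

def num_squares(n):
    # same algorithm as in the question, reproduced here for the check
    squares = [i ** 2 for i in range(1, int(n ** 0.5) + 1)]
    step, queue = 1, {n}
    while queue:
        temp_queue = set()
        for node in queue:
            for square in squares:
                if node - square == 0:
                    return step
                if node < square:
                    break
                temp_queue.add(node - square)
        queue = temp_queue
        step += 1

print(max(num_squares(n) for n in range(1, 1001)))   # prints 4 (e.g. n = 7 needs 4 + 1 + 1 + 1)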

Time complexity analysis of function with recursion inside loop

I am trying to analyze the time complexity of the function below. This function is used to check if a string is made up of other strings.
set<string> s; // s has been initialized and stores all the strings

bool fun(string word) {
    int len = word.size();
    // something else that can also return true or false with O(1) complexity
    for (int i = 1; i <= len; ++i) {
        string prefix = word.substr(0, i);
        string suffix = word.substr(i);
        if (prefix in s && fun(suffix))
            return true;
        else
            return false;
    }
}
I think the time complexity is O(n) where n is the length of word (am I right?). But as the recursion is inside the loop, I don't know how to prove it.
Edit:
This code is not correct C++ (e.g., prefix in s). I just want to show the idea of this function, and to know how to analyze its time complexity.
The way to analyze this is by developing a recursion relationship based on the length of the input and the (unknown) probability that a prefix is in s. Let's assume that the probability of a prefix being in s is given by some function pr(L) of the length L of the prefix. Let the complexity (number of operations) be given by T(len).
If len == 0 (word is the empty string), then T = 1. (The function is missing a final return statement after the loop, but we're assuming that the actual code is only a sketch of the idea, not what's actually executing).
For each loop iteration, denote the loop body complexity by T(len; i). If the prefix is not in s, then the body has constant complexity (T(len; i) = 1). This event has probability 1 - pr(i).
If the prefix is in s, then the function returns true or false according to the recursive call to fun(suffix), which has complexity T(len - i). This event has probability pr(i).
So for each value of i, the loop body complexity is:
T(len; i) = 1 * (1 - pr(i)) + T(len - i) * pr(i)
Finally (and this depends on the intended logic, not the posted code), we have
T(len) = sum i=1...len(T(len; i))
For simplicity, let's treat pr(i) as a constant function with value 0.5. Then the recursive relationship for T(len) is (up to a constant factor, which is unimportant for O() calculations):
T(len) = sum i=1...len(1 + T(len - i)) = len + sum i=0...len-1(T(i))
As noted above, the boundary condition is T(0) = 1. This can be solved by standard recursive function methods. Let's look at the first few terms:
len    T(len)
0      1
1      1 + 1 = 2
2      2 + (2 + 1) = 5
3      3 + (5 + 2 + 1) = 11
4      4 + (11 + 5 + 2 + 1) = 23
5      5 + (23 + 11 + 5 + 2 + 1) = 47
The pattern is clearly T(len) = 2 * T(len - 1) + 1. This corresponds to exponential complexity:
T(n) = O(2^n)
Of course, this result depends on the assumption we made about pr(i). (For instance, if pr(i) = 0 for all i, then T(n) = O(1). There would also be non-exponential growth if pr(i) had a maximum prefix length—pr(i) = 0 for all i > M for some M.) The assumption that pr(i) is independent of i is probably unrealistic, but this really depends on how s is populated.
Assuming that you've fixed the bugs others have noted, then the i values are the places that the string is being split (each i is the leftmost splitpoint, and then you recurse on everything to the right of i). This means that if you were to unwind the recursion, you are looking at up to n-1 different split points, and asking if each substring is a valid word. Things are ok if the beginning of word doesn't have a lot of elements from your set, since then you can skip the recursion. But in the worst case, prefix in s is always true, and you try every possible subset of the n-1 split points. This gives 2^{n-1} different splitting sets, multiplied by the length of each such set.
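
A small experiment supporting that worst-case argument (my own sketch, assuming the bugs are fixed as described above; the helper count_calls is mine): make every run of a's a dictionary word and end the input with a character that forces the whole search to fail, then count the calls.

def count_calls(word, s):
    # corrected version of fun(): try every split point and recurse on the suffix
    calls = 0
    def fun(w):
        nonlocal calls
        calls += 1
        if not w:
            return True
        for i in range(1, len(w) + 1):
            if w[:i] in s and fun(w[i:]):
                return True
        return False
    fun(word)
    return calls

for n in range(1, 16):
    word = 'a' * n + 'b'                       # the trailing 'b' forces full exploration
    s = {'a' * k for k in range(1, n + 1)}     # every run of a's counts as a word
    print(n, count_calls(word, s))             # 2, 4, 8, 16, ... doubles with each extra character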

Number of Positive Solutions to a1 x1+a2 x2+......+an xn=k (k<=10^18)

The question is: find the number of solutions to a1 x1 + a2 x2 + ... + an xn = k, with the constraints: 1) ai > 0 and ai <= 15; 2) n > 0 and n <= 15; 3) xi >= 0. I was able to formulate a dynamic programming solution, but it runs too long for k > 10^10. Please guide me to a more efficient solution.
The code
int dp[] = new int[16];
dp[0] = 1;
BigInteger seen = new BigInteger("0");
while (true) {
    for (int i = 0; i < arr[0]; i++) {
        if (dp[0] == 0)
            break;
        dp[arr[i+1]] = (dp[arr[i+1]] + dp[0]) % 1000000007;
    }
    for (int i = 1; i < 15; i++)
        dp[i-1] = dp[i];
    seen = seen.add(new BigInteger("1"));
    if (seen.compareTo(n) == 0)
        break;
}
System.out.println(dp[0]);
arr is the array containing the coefficients, and the answer should be taken mod 1000000007, as the number of ways does not fit into an int.
Update for real problem:
The actual problem is much simpler. However, it's hard to be helpful without spoiling it entirely.
Stripping it down to the bare essentials, the problem is
Given k distinct positive integers L1, ... , Lk and a nonnegative integer n, how many different finite sequences (a1, ..., ar) are there such that 1. for all i (1 <= i <= r), ai is one of the Lj, and 2. a1 + ... + ar = n. (In other words, the number of compositions of n using only the given Lj.)
For convenience, you are also told that all the Lj are <= 15 (and hence k <= 15), and n <= 10^18. And, so that the entire computation can be carried out using 64-bit integers (the number of sequences grows exponentially with n, you wouldn't have enough memory to store the exact number for large n), you should only calculate the remainder of the sequence count modulo 1000000007.
To solve such a problem, start by looking at the simplest cases first. The very simplest cases are when only one L is given, then evidently there is one admissible sequence if n is a multiple of L and no admissible sequence if n mod L != 0. That doesn't help yet. So consider the next simplest cases, two L values given. Suppose those are 1 and 2.
0 has one composition, the empty sequence: N(0) = 1
1 has one composition, (1): N(1) = 1
2 has two compositions, (1,1); (2): N(2) = 2
3 has three compositions, (1,1,1);(1,2);(2,1): N(3) = 3
4 has five compositions, (1,1,1,1);(1,1,2);(1,2,1);(2,1,1);(2,2): N(4) = 5
5 has eight compositions, (1,1,1,1,1);(1,1,1,2);(1,1,2,1);(1,2,1,1);(2,1,1,1);(1,2,2);(2,1,2);(2,2,1): N(5) = 8
You may see it now, or need a few more terms, but you'll notice that you get the Fibonacci sequence (shifted by one), N(n) = F(n+1), thus the sequence N(n) satisfies the recurrence relation
N(n) = N(n-1) + N(n-2) (for n >= 2; we have not yet proved that, so far it's a hypothesis based on pattern-spotting). Now, can we see that without calculating many values? Of course, there are two types of admissible sequences, those ending with 1 and those ending with 2. Since that partitioning of the admissible sequences restricts only the last element, the number of ad. seq. summing to n and ending with 1 is N(n-1) and the number of ad. seq. summing to n and ending with 2 is N(n-2).
That reasoning immediately generalises, given L1 < L2 < ... < Lk, for all n >= Lk, we have
N(n) = N(n-L1) + N(n-L2) + ... + N(n-Lk)
with the obvious interpretation if we're only interested in N(n) % m.
Umm, that linear recurrence still leaves calculating N(n) as an O(n) task?
Yes, but researching a few of the mentioned keywords quickly leads to an algorithm needing only O(log n) steps ;)
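
For readers who want to see where those keywords lead, here is a rough sketch (my own code, not part of the original answer; function names are illustrative) of the standard O(log n) technique: exponentiating the companion matrix of the linear recurrence, modulo 1000000007.

MOD = 1000000007

def mat_mult(A, B):
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) % MOD
             for j in range(size)] for i in range(size)]

def mat_pow(M, e):
    # square-and-multiply: O(log e) matrix multiplications
    size = len(M)
    R = [[int(i == j) for j in range(size)] for i in range(size)]
    while e:
        if e & 1:
            R = mat_mult(R, M)
        M = mat_mult(M, M)
        e >>= 1
    return R

def count_compositions(L, n):
    # N(m) = sum over l in L of N(m - l), with N(0) = 1 and N(m) = 0 for m < 0,
    # computed via a d x d companion matrix, d = max(L) (here at most 15)
    d = max(L)
    M = [[0] * d for _ in range(d)]
    for l in L:
        M[0][l - 1] = 1          # first row holds the recurrence coefficients
    for i in range(1, d):
        M[i][i - 1] = 1          # subdiagonal shifts the state vector
    N = [0] * d
    N[0] = 1
    for m in range(1, d):
        N[m] = sum(N[m - l] for l in L if l <= m) % MOD
    if n < d:
        return N[n]
    P = mat_pow(M, n - (d - 1))
    vec = [N[d - 1 - i] for i in range(d)]        # (N(d-1), ..., N(0))
    return sum(P[0][j] * vec[j] for j in range(d)) % MOD

# Example: L = {1, 2} gives the Fibonacci numbers shifted by one, as derived above.
print([count_compositions([1, 2], m) for m in range(7)])   # [1, 1, 2, 3, 5, 8, 13]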
Algorithm for misinterpreted problem, no longer relevant, but may still be interesting:
The question looks a little SPOJish, so I won't give a complete algorithm (at least, not before I've googled around a bit to check if it's a contest question). I hope no restriction has been omitted in the description, such as that permutations of such representations should only contribute one to the count, that would considerably complicate the matter. So I count 1*3 + 2*4 = 11 and 2*4 + 1*3 = 11 as two different solutions.
Some notations first. For m-tuples of numbers, let < | > denote the canonical bilinear pairing, i.e.
<a|x> = a_1*x_1 + ... + a_m*x_m. For a positive integer B, let A_B = {1, 2, ..., B} be the set of positive integers not exceeding B. Let N denote the set of natural numbers, i.e. of nonnegative integers.
For 0 <= m, k and B > 0, let C(B,m,k) = card { (a,x) \in A_B^m × N^m : <a|x> = k }.
Your problem is then to find \sum_{m = 1}^15 C(15,m,k) (modulo 1000000007).
For completeness, let us mention that C(B,0,k) = if k == 0 then 1 else 0, which can be helpful in theoretical considerations. For the case of a positive number of summands, we easily find the recursion formula
C(B,m+1,k) = \sum_{j = 0}^k C(B,1,j) * C(B,m,k-j)
By induction, C(B,m,_) is the convolution¹ of m factors C(B,1,_). Calculating the convolution of two known functions up to k is O(k^2), so if C(B,1,_) is known, that gives an O(n*k^2) algorithm to compute C(B,m,k), 1 <= m <= n. Okay for small k, but our galaxy won't live to see you calculating C(15,15,10^18) that way. So, can we do better? Well, if you're familiar with the Laplace-transformation, you'll know that an analogous transformation will convert the convolution product to a pointwise product, which is much easier to calculate. However, although the transformation is in this case easy to compute, the inverse is not. Any other idea? Why, yes, let's take a closer look at C(B,1,_).
C(B,1,k) = card { a \in A_B : (k/a) is an integer }
In other words, C(B,1,k) is the number of divisors of k not exceeding B. Let us denote that by d_B(k). It is immediately clear that 1 <= d_B(k) <= B. For B = 2, evidently d_2(k) = 1 if k is odd, 2 if k is even. d_3(k) = 3 if and only if k is divisible by 2 and by 3, hence iff k is a multiple of 6, d_3(k) = 2 if and only if one of 2, 3 divides k but not the other, that is, iff k % 6 \in {2,3,4} and finally, d_3(k) = 1 iff neither 2 nor 3 divides k, i.e. iff gcd(k,6) = 1, iff k % 6 \in {1,5}. So we've seen that d_2 is periodic with period 2, d_3 is periodic with period 6. Generally, like reasoning shows that d_B is periodic for all B, and the minimal positive period divides B!.
Given any positive period P of C(B,1,_) = d_B, we can split the sum in the convolution (k = q*P+r, 0 <= r < P):
C(B,m+1, q*P+r) = \sum_{c = 0}^{q-1} (\sum_{j = 0}^{P-1} d_B(j)*C(B,m,(q-c)*P + (r-j)))
+ \sum_{j = 0}^r d_B(j)*C(B,m,r-j)
The functions C(B,m,_) are no longer periodic for m >= 2, but there are simple formulae to obtain C(B,m,q*P+r) from C(B,m,r). Thus, with C(B,1,_) = d_B and C(B,m,_) known up to P, calculating C(B,m+1,_) up to P is an O(P^2) task², getting the data necessary for calculating C(B,m+1,k) for arbitrarily large k, needs m such convolutions, hence that's O(m*P^2).
Then finding C(B,m,k) for 1 <= m <= n and arbitrarily large k is O(n^2*P^2), in time and O(n^2*P) in space.
For B = 15, we have 15! = 1.307674368 * 10^12, so using that for P isn't feasible. Fortunately, the smallest positive period of d_15 is much smaller, so you get something workable. From a rough estimate, I would still expect the calculation of C(15,15,k) to take time more appropriately measured in hours than seconds, but it's an improvement over O(k) which would take years (for k in the region of 10^18).
¹ The convolution used here is (f \ast g)(k) = \sum_{j = 0}^k f(j)*g(k-j).
² Assuming all arithmetic operations are O(1); if, as in the OP, only the residue modulo some M > 0 is desired, that holds if all intermediate calculations are done modulo M.

Time complexity

The problem is finding the majority element in an array.
I understand how this algorithm works, but I don't know why it has O(n log n) time complexity...
a. Both return "no majority." Then neither half of the array has a majority element, and the combined array cannot have a majority element. Therefore, the call returns "no majority."
b. The right side is a majority, and the left isn't. The only possible majority for this level is with the value that formed a majority on the right half, therefore, just compare every element in the combined array and count the number of elements that are equal to this value. If it is a majority element then return that element, else return "no majority."
c. Same as above, but with the left returning a majority, and the right returning "no majority."
d. Both sub-calls return a majority element. Count the number of elements equal to both of the candidates for majority element. If either is a majority element in the combined array, then return it. Otherwise, return "no majority."
The top level simply returns either a majority element or that no majority element exists in the same way.
Therefore, T(1) = 0 and T(n) = 2T(n/2) + 2n = O(nlogn)
I think:
Every recursion compares the majority element to the whole array, which takes 2n.
T(n) = 2T(n/2) + 2n = 2(2T(n/4) + 2n) + 2n = ... = 2^k T(n/2^k) + 2n + 4n + 8n + ... + 2^k n = O(n^2)
T(n) = 2T(n/2) + 2n
The question is how many iterations it takes for n to get to 1.
We divide by 2 in each iteration, so we get the series: n, n/2, n/4, n/8, ..., n/(2^k).
So, let's find the k that brings us to 1 (the last iteration):
n/(2^k) = 1 ... n = 2^k ... k = log(n)
So we get log(n) iterations.
Now, in each iteration we do 2n operations (fewer, actually, because we divide n by 2 each time), but in the worst-case scenario let's say 2n.
So in total, we get log(n) iterations with O(n) operations each: O(n log n)
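
Written out step by step (my addition, not part of the answer), the expansion described above is the following; note that every level contributes 2n in total, which is where the question's 2n + 4n + 8n + ... expansion goes wrong:

T(n) = 2T(n/2) + 2n
     = 4T(n/4) + 2n + 2n        (since 2T(n/2) = 2(2T(n/4) + 2(n/2)) = 4T(n/4) + 2n)
     = 8T(n/8) + 2n + 2n + 2n
     = ...
     = 2^k T(n/2^k) + 2kn

With k = log2(n), this is n*T(1) + 2n*log2(n) = O(n log n).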
I'm not sure if I understand, but couldn't you just create a hash map, walk over the array, incrementing hash[value] at every step, then sort the hash map (m log m time complexity) and compare the top two elements? This would cost you O(n) + O(m log m) + 2 = O(n + m log m), with n the size of the array and m the number of different elements in the vector.
Am I mistaken here? Or ...?
When you do this recursively, you split the array in two at each level, make a call for each half, and then perform one of the tests a to d. Test a requires no looping; the other tests require looping through the entire array. On average you will loop through (0 + 1 + 1 + 1) / 4 = 3/4 of the array at each level of the recursion.
The number of levels in the recursion is based on the size of the array. As you split the array in half at each level, the number of levels will be log2(n).
So, the total work is (n * 3/4) * log2(n). As constants are irrelevant to the time complexity, and all logarithm bases are equivalent up to a constant, the complexity is O(n * log n).
Edit:
If someone is wondering about the algorithm, here's a C# implementation. :)
private int? FindMajority(int[] arr, int start, int len) {
    if (len == 1) return arr[start];
    int len1 = len / 2, len2 = len - len1;
    int? m1 = FindMajority(arr, start, len1);
    int? m2 = FindMajority(arr, start + len1, len2);
    int cnt1 = m1.HasValue ? arr.Skip(start).Take(len).Count(n => n == m1.Value) : 0;
    if (cnt1 * 2 >= len) return m1;
    int cnt2 = m2.HasValue ? arr.Skip(start).Take(len).Count(n => n == m2.Value) : 0;
    if (cnt2 * 2 >= len) return m2;
    return null;
}
This guy has a lot of videos on recurrence relations, and the different techniques you can use to solve them:
https://www.youtube.com/watch?v=TEzbkIggJfo&list=PLj68PAxAKGoyyBwi6qrfcsqE_4trSO1yL
Basically for this problem I would use the Master Theorem:
https://youtu.be/i5kTZof1LRY
T(1) = 0 and T(n) = 2T(n/2) + 2n
Master Theorem ==> T(n) = A·T(n/B) + n^D; in this case A = 2, B = 2, D = 1.
So, since A = B^D, according to the Master Theorem this is O(n log n).
You can also use another method to solve this (below) it would just take a little bit more time:
https://youtu.be/TEzbkIggJfo?list=PLj68PAxAKGoyyBwi6qrfcsqE_4trSO1yL
I hope this helps you out !

Resources