Recursive complexity: is the answer wrong? - algorithm

I have the following code, and I want to find the complexity:
analizz(int n)
c = 1
k = n*n
while k > 1 do k = k - 2
for i = 0 to 1 do
if n >1 then analizz(n/2)
The problem is that, with the code written this way, it looks to me like the for loop is inside the while loop, so the cost should be O(n^2), plus one recursive call if n > 1, i.e. T(n/2).
The answer is supposed to be T(n) = 2T(n/2) + cn^2, but I cannot understand where the 2T(n/2) comes from if there is only one recursive call.
P.S. I don't know which title best describes my problem.

The code is written poorly, but if the answer is correct then the for loop is not inside the while loop, and the if is inside the for loop. The while loop gives the cn^2 term, and the two recursive calls come from the two iterations of the for loop.
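Under that reading (while loop first, then a for loop making two recursive calls), a small instrumented sketch can confirm the shape of the recurrence; the function name and step-counting convention below are illustrative, not from the original:

```python
def analizz_steps(n):
    """Count loop iterations of analizz under the corrected reading:
    the while loop runs first (~n^2/2 steps), then the for loop
    makes TWO recursive calls on n/2."""
    steps = 0
    k = n * n
    while k > 1:              # contributes the cn^2 term
        k -= 2
        steps += 1
    for _ in range(2):        # for i = 0 to 1 -> two iterations
        if n > 1:
            steps += analizz_steps(n // 2)   # the 2T(n/2) part
    return steps

# Solving T(n) = 2T(n/2) + cn^2 (Master theorem, case 3) gives
# Theta(n^2): the per-level costs n^2/2, n^2/4, ... form a
# geometric series, so steps(n)/n^2 approaches a constant.
```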

Related

How will summing a sub array affect time complexity in a nested for loop?

Trying to calculate time complexity of some simple code but I do not know how to calculate time complexity while summing a sub array. The code is as follows:
for i = 1 to n {
    for j = i+1 to n {
        s = sum(A[i...j])
        B[i,j] = s
    }
}
So I know the nested for loops inevitably give us an O(n^2), and I believe the function summing the sub array is also O(n^2). However, I think the time complexity for the whole algorithm is O(n^3). How do I get there with this information? Thank you!
I like to think of for loops as summations. As such, the number of steps (written as a function, T(n)) is:
T(n) = \sum_{i=1}^n numStepsInInnerForLoop
Here, I'm using something written in pseudo-MathJax, and have written the outer for loop as a summation from i=1 to n of the number of steps in the inner for loop (the one from i+1 to n). You can think of this analogously as summing the number of steps in the inner for loop, from i=1 to n. Substituting in numStepsInInnerForLoop results in:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n numStepsOfSumFunction]
This function now represents the number of steps where both for loops have been fleshed out as summations. Assuming that s = sum(A[i...j]) takes j-i+1 steps and B[i,j]=s takes just one step, we can substitute numStepsOfSumFunction with these more useful parameters and the equation now becomes:
T(n) = \sum_{i=1}^n [\sum_{j=i+1}^n (j-i+1 + 1)]
When you solve these summations (using the kind of formulas you see on this summation tutorial page) you'll get a cubic function for T(n) which corresponds to O(n^3).
Your reasoning leads me to believe that you're running this algorithm on an array of size n. If so, then every time you call the sum method in the inner for loop, you're calling this method on a specific range of values (indices i to j). For each iteration of this for loop, this sum method will iterate through 1, 2, 3, ..., then finally n elements in the last iteration as j increases from (i + 1) to n. Note that this is when i = 1. As i increases, it won't necessarily go from 1, 2, 3, ..., to n anymore, since it will technically go up to only n - i elements. Big O, though, describes the worst case, so we use that scenario.
1 + 2 + 3 + ... + n = n(n+1)/2, which is Θ(n^2). The runtime of the sum method depends on the values of i and j; however, when run in the for loop with the given conditions, the total time-complexity of all the calls to sum in one full pass of the inner for loop is O(n^2). And finally, since the inner for loop is executed n times (once per value of i), the total time-complexity for the whole algorithm is O(n^3).
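As a sanity check, here is a small sketch using the same step-counting convention as above (my own names; sum(A[i...j]) is charged j - i + 1 steps and the assignment one step), which shows the roughly cubic growth:

```python
def count_steps(n):
    """Count elementary steps of the nested-loop algorithm,
    charging j - i + 1 steps to s = sum(A[i...j]) and
    one step to B[i,j] = s."""
    steps = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            steps += (j - i + 1) + 1
    return steps

# Doubling n multiplies the count by roughly 8 = 2^3,
# the signature of Theta(n^3) growth.
```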

Trying to figure out time complexities

I am trying to figure out time complexities for the following:
First:
j = 1
while j < n:
    j += log(j + 5)
Would this be log n?
Secondly, a recurrence relation:
T(n) = T(n/2) + T(n/4) + n
I know you can't apply the Master Theorem here, but I am not sure how to find the complexity otherwise. A solution would be nice, but references that help me understand this would be good too.
Next, another recurrence relation:
T(n) = T(n/2) + log(n)
I am fairly certain that Master Theorem can be applied here. Leaving us with:
a = 1, b = 2, f(n) = log(n)
This means we would compare
n^(log_2(1)) to log(n) ==> n^0 to log(n)
Making it Theta(log(n))
Finally
j = 1
while j < n:
    k = j
    while k < n:
        k += sqrt(k)
    j += 0.25*j
I can tell that the outer loop will run 4 times. I am unclear as to the inner loop, however. Would it be log^2(n)·log(log(n)), or am I completely off in my thinking?
I am just studying for a test and am finding the materials at my disposal to be woefully inadequate.
The first is O(n): each iteration adds log(j + 5) ≥ log(6) > 1 to j, so the loop runs at most n times. (Since the increment itself grows like log(j), the exact count is Θ(n / log(n)), but O(n) is a correct upper bound.)
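A quick empirical check of that first loop (names are mine; I'm assuming log means the natural log here):

```python
import math

def first_loop_count(n):
    """Iterations of: j = 1; while j < n: j += log(j + 5)."""
    count, j = 0, 1.0
    while j < n:
        j += math.log(j + 5)
        count += 1
    return count

# The increment is always at least log(6) > 1, so the count is O(n);
# because the increment itself grows like log(j), the observed count
# is closer to n / log(n).
```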
If you expand the recurrence, the second is:
T(n) = 2T(n/4) + T(n/8) + n + n/2 < 3T(n/4) + 3n/2
We can say from master theorem that T(n) = \Theta(n).
The third is not quite right: it is \Theta(log^2(n)), not \Theta(log(n)). Expanding gives T(n) = log(n) + log(n/2) + log(n/4) + ... = \sum_{k=0}^{log n} (log(n) - k) = \Theta(log^2(n)). In Master-theorem terms, f(n) = log(n) = \Theta(n^{log_2 1} log(n)), which falls under the extended case 2 and picks up an extra log factor.
In the fourth snippet, the outer loop satisfies j ← 1.25·j, so it runs log_{1.25}(n) times, not 4. In the worst case we can say the inner loop runs in O(n), hence O(n·log(n)) overall. If you want a tighter complexity analysis, note that each inner pass takes only O(sqrt(n)) steps, since k grows by sqrt(k) each time, which brings the total down to O(sqrt(n)·log(n)).
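To see how loose the O(n·log(n)) bound is, here is an instrumented version of the fourth snippet (function name is mine):

```python
import math

def fourth_loop_count(n):
    """Count inner-loop iterations of the fourth snippet."""
    inner = 0
    j = 1.0
    while j < n:
        k = j
        while k < n:           # ~2*(sqrt(n) - sqrt(j)) iterations
            k += math.sqrt(k)
            inner += 1
        j += 0.25 * j          # j *= 1.25 -> log_{1.25}(n) outer passes
    return inner

# Each outer pass costs O(sqrt(n)) inner iterations, so the total is
# O(sqrt(n) * log(n)), well inside the O(n log n) estimate.
```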

Algorithm complexity Min and max

I have a question. I've got an algorithm:
procedure summation(A[1...n])
    s = 0
    for i = 1 to n do
        j = min{max{i, A[i]}, n^3}
        s = s + j
    return s
And I want to find the minimum and maximum output of this algorithm, expressed in asymptotic θ-notation.
Any ideas on how to do that?
What do I have to look at in an algorithm to understand its complexity?
If you want to know how big O notation and time complexity work, you might want to look at the following post: What is a plain English explanation of "Big O" notation?.
For the pseudo code that you showed, the complexity is O(n), where n is the length of the array.
Often you can determine the complexity just by looking at how many nested loops the algorithm has. Of course this is not always the case, but it can be used as a rule of thumb.
In the following example:
procedure summation(A[B1[1...n],B2[1...n],...,Bn[1...n]])
    s = 0
    for i = 1 to n do
        for j = 1 to m do
            j = min{max{i, A[i,j]}, n^3}
            s = s + j
    return s
the complexity would be O(n·m) (where m is the length of the arrays B).
best or worst case
For the algorithm that you showed there is no best or worst case: it always takes the same time for the same array, and the only influence on the run time is the length of the array.
An example where there could be a best or worst case is the following: say you need to find the location of a specific number in an array.
If your method is to go through the array from start to end, the best case is that the number is at the start; the worst case is that it is at the end.
For a more detailed explanation look at the link.
Cheers.
The best and the worst case are the same because the algorithm will run the "same way" every time no matter the input. So based on that we will calculate the time complexity of the algorithm using math:
T(n) = 1 + \sum_{i=1}^{n}(3+2) + 1
T(n) = 2 + 5 \sum_{i=1}^{n} 1
T(n) = 2 + 5(n-1+1)
T(n) = 5n + 2
The summation of (3+2) is due to the fact that inside the loop we have 5 distinct and measurable actions:
j = min{max{i,A[i]}, n^3} that counts as three actions because we have 2 comparisons and a value assignment to the variable j.
s = s + j that counts as 2 actions because we have one addition and a value assignment to the variable s.
Asymptotically: Θ(n)
How we calculate Θ(n):
We look at the result, 5n + 2, drop the constant factors and lower-order terms, and keep the fastest-growing term, which here is n.
Other examples:
8n^3 + 5n + 2 -> Θ(n^3)
10·log(n) + n^4 + 7 -> Θ(n^4)
More info: http://bigocheatsheet.com/
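For concreteness, here is a direct Python transcription of the summation pseudocode (using the brace-balanced reading j = min{max{i, A[i]}, n^3}, with the 1-based i of the pseudocode mapped onto 0-based indexing). Each iteration does a constant amount of work, so the running time is Θ(n) regardless of the values in A:

```python
def summation(A):
    """Transcription of the pseudocode: constant work per element,
    hence Theta(n) total, independent of the array's contents."""
    n = len(A)
    s = 0
    for i in range(1, n + 1):          # 1-based i, as in the pseudocode
        j = min(max(i, A[i - 1]), n ** 3)
        s += j
    return s
```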

Big O notation on some examples [duplicate]

Could someone help me out with the time complexity analysis on this pseudocode?
I'm looking for the worst-case complexity here, and I can't figure out if it's O(n^4), O(n^5) or something else entirely. If you could go into detail into how you solved it exactly, it would be much appreciated.
sum = 0
for i = 1 to n do
    for j = 1 to i*i do
        if j mod i == 0 then
            for k = 1 to j do
                sum = sum + 1
First loop: O(n).
Second loop: i averages n/2; you could write an exact formula, but it contributes O(n^2) iterations.
The if succeeds only when j is a multiple of i, i.e. on roughly a 1/n fraction of those iterations; when it succeeds, the third loop runs j = O(n^2) times.
So it's O(n · n^2 · (1 + (1/n)·n^2)) = O(n^4). The 1/n factor comes from the fact that the third loop fires only about 1/n of the time inside the second one.
It's all a ballpark estimate with no rigorous proof, but it should be right. You can confirm it by running the code yourself.
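Following that suggestion, a short counting script (names are mine) confirms the estimate: the innermost statement runs about n^4/8 times, and doubling n multiplies the count by roughly 16 = 2^4:

```python
def count_ops(n):
    """Count executions of `sum = sum + 1` in the pseudocode.
    The if succeeds only when j is a multiple of i, and then the
    k-loop contributes exactly j increments."""
    ops = 0
    for i in range(1, n + 1):
        for j in range(1, i * i + 1):
            if j % i == 0:
                ops += j          # k runs from 1 to j
    return ops

# Closed form: sum over i of i * i*(i+1)/2 ~ n^4/8, i.e. O(n^4).
```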

Building a recurrence relation for this code?

I need to build a recurrence relation for the following algorithm (T(n) stands for the number of elemental actions) and find its time complexity:
Alg(n)
{
    if (n < 3) return;
    for i = 1 to n
    {
        for j = i to 2i
        {
            for k = j-i to j-i+100
                write(i, j, k);
        }
    }
    for i = 1 to 7
        Alg(n-2);
}
I came to this Recurrence relation (don't know if it's right):
T(n) = 1, if n < 3
T(n) = 7T(n-2) + 100n^2, otherwise.
I don't know how to get the time complexity, though.
Is my recurrence correct? What's the time complexity of this code?
Let's take a look at the code to see what the recurrence should be.
First, let's look at the loop:
for i = 1 to n
{
    for j = i to 2i
    {
        for k = j-i to j-i+100
            write(i, j, k);
    }
}
How much work does this do? Well, let's begin by simplifying it. Rather than having j count up from i to 2i, let's define a new variable j' that counts up from 0 to i. This means that j' = j - i, and so we get this:
for i = 1 to n
{
    for j' = 0 to i
    {
        for k = j' to j'+100
            write(i, j' + i, k);
    }
}
Ah, that's much better! Now, let's also rewrite k as k', where k' ranges from 0 to 100:
for i = 1 to n
{
    for j' = 0 to i
    {
        for k' = 0 to 100
            write(i, j' + i, k' + j');
    }
}
From this, it's easier to see that this loop has time complexity Θ(n^2), since the innermost loop does O(1) work, and the middle loop body runs 1 + 2 + 3 + 4 + ... + n = Θ(n^2) times. Notice that it's not exactly 100n^2, because the summation isn't exactly n^2, but it is close.
Now, let's look at the recursive part:
for i=1 to 7
Alg(n-2);
For starters, this is just plain silly! There's no reason you'd ever want to do something like this. But, that said, we can say that this is 7 calls to the algorithm on an input of size n - 2.
Accordingly, we get this recurrence relation:
T(n) = 7T(n - 2) + Θ(n^2) [if n ≥ 3]
T(n) = Θ(1) [otherwise]
Now that we have the recurrence, we can start to work out the time complexity. That ends up being a little bit tricky. If you think about how much work we'll end up doing, we'll get that
There's 1 call of size n.
There's 7 calls of size n - 2.
There's 49 calls of size n - 4.
There's 343 calls of size n - 6.
...
There are 7^k calls of size n - 2k
From this, we immediately get a lower bound of Ω(7^{n/2}), since that's the number of calls that will get made. Each call does O(n^2) work, so we get an upper bound of O(n^2 · 7^{n/2}). The true value lies somewhere in there, though I honestly don't know how to figure out what it is. Sorry about that!
Hope this helps!
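For what it's worth, evaluating the recurrence numerically suggests the lower bound is tight: the ratio T(n+2)/T(n) = 7 + n^2/T(n) tends to 7, which indicates T(n) = Θ(7^{n/2}). The polynomial factor washes out because \sum_{m ≥ 0} 7^{-m}(2m)^2 converges to a constant. A memoized sketch (taking the Θ(n^2) term as exactly n^2, an arbitrary choice):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """Numeric evaluation of T(n) = 7*T(n-2) + n^2, with T(n) = 1
    for n < 3 (the Theta(n^2) term is taken as exactly n^2)."""
    if n < 3:
        return 1
    return 7 * T(n - 2) + n * n

# T(n+2)/T(n) approaches 7, consistent with T(n) = Theta(7^(n/2)):
# the n^2 additive term is absorbed by the geometric growth.
```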
A formal method is to do the following:
The prevailing order of growth can often be inferred intuitively from the source code by looking at the recursive calls.
An algorithm that makes 2 recursive calls, each on an input smaller by a constant step, has complexity on the order of 2^n; with 3 such calls it is 3^n, and so on. Here, the 7 calls on input n - 2 give the 7^{n/2} factor.