Time Complexity of the programs - pseudocode

What is the time complexity of this program? I think it may be O(n log n). Please help me with the answer.
def f(n):
    ans = 0
    for i = 1 to n:
        for j = 1 to log(i):
            ans += 1
    print(ans)

Yes, it is O(n log n). The inner loop body runs about log i times for each i, so the total count is log 1 + log 2 + ... + log n = log(n!), which is Θ(n log n) by Stirling's approximation.
Note: it is customary to write (worst-case) complexity as big O. Little o and the Greek letters Ω and Θ are reserved for other kinds of bounds (strict upper, lower, and tight, respectively).
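If you want to check this empirically, here is a runnable Python version of the pseudocode (a minimal sketch: I assume log means the natural logarithm and floor the inner loop bound):

from math import log

def f(n):
    ans = 0
    for i in range(1, n + 1):                 # i = 1 to n
        for j in range(1, int(log(i)) + 1):   # j = 1 to floor(log i)
            ans += 1
    return ans

# ans = sum of floor(log i) for i = 1..n ~ log(n!) = Theta(n log n),
# so the ratio below settles as n grows.
for n in (1000, 10000, 100000):
    print(n, f(n) / (n * log(n)))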

Proving big-O etc. for an algorithm

I'm learning how to prove/disprove big-O, big-Omega, and little-o, and I have the following algorithm f(n). However, I'm unsure how to analyse this f(n), as it has an if statement, which I've never come across before. How can I prove, for example, that this f(n) is O(n^2)?
if n is even
    4 * sum(n/2, n)
else
    (2n - 1) * sum(n-3, n)
where sum(j,k) is a 'partial arithmetic sum' of the integers from j up to k, that is sum(j,k) =
    if j > k
        0
    else
        j + (j+1) + (j+2) + ... + k
e.g. sum(3,4) = 3 + 4 = 7, etc.
Note that sum(j,k) = sum(1,k) - sum(1,j-1).
OK, got it, no worries. I'll try to help you understand this.
Big-O notation is used to define an upper bound on how much time a program will take in terms of its input size.
Let's try to see how much time each statement will take in this function:
f(n) {
    if n is even                 // O(1) .....#1
        4 * sum(n/2, n)          // O(n) .....#2
    else                         // O(1) .....#3
        (2n-1) * sum(n-3, n)     // O(n) .....#4
}
if n is even
This can be done with a check like if ((n % 2) == 0). As you can see, this is a constant-time operation: no loops, just a single computation.
The sum(j, k) function is computed by iterating from j to k whenever j <= k, so it runs (k - j + 1) times, which is at most linear in n here.
So the total complexity will be the complexity of either the if block or the else block.
For analyzing complexity, one takes the worst case:
Complexity of the if block = #1 + #2 = O(1) + O(n) = O(n)
Similarly for the else block = #3 + #4 = O(1) + O(n) = O(n)
Max of both = max(O(n), O(n)) = O(n)
Thus, the overall complexity = O(n)
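To make the accounting concrete, here is a minimal Python sketch under the stated definitions (I rename sum to arith_sum to avoid shadowing Python's built-in, and compute it by the literal loop rather than the closed-form formula):

def arith_sum(j, k):
    # 'Partial arithmetic sum' sum(j, k): j + (j+1) + ... + k, and 0 if j > k.
    # The loop body runs k - j + 1 times (zero times when j > k).
    total = 0
    for v in range(j, k + 1):
        total += v
    return total

def f(n):
    if n % 2 == 0:                                # #1: O(1) parity check
        return 4 * arith_sum(n // 2, n)           # #2: loop of ~n/2 + 1 steps -> O(n)
    else:
        return (2 * n - 1) * arith_sum(n - 3, n)  # #4: loop of only 4 steps

Note that arith_sum(n-3, n) iterates a constant 4 times, so the else branch is in fact O(1); O(n) is still a correct (just loose) upper bound for it, and the overall O(n) bound is driven by the even branch.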

Algorithm complexity Min and max

I have a question. I've got an algorithm:
procedure summation(A[1...n])
    s = 0
    for i = 1 to n do
        j = min{max{i, A[i]}, n^3}
        s = s + j
    return s
And I want to find the min and max (best- and worst-case) running time of this algorithm using the asymptotic notation Θ.
Any ideas on how to do that?
What do I have to look at in an algorithm to understand its complexity?
If you want to know how big-O notation and time complexity work, you might want to look at the following post: What is a plain English explanation of "Big O" notation?.
For the pseudocode that you showed, the complexity is O(n), where n is the length of the array.
Often you can determine the complexity just by looking at how many nested loops the algorithm has. Of course this is not always the case, but it can be used as a rule of thumb.
In the following example:
procedure summation(A[B1[1...n], B2[1...n], ..., Bn[1...n]])
    s = 0
    for i = 1 to n do
        for j = 1 to m do
            t = min{max{i, A[i,j]}, n^3}
            s = s + t
    return s
the complexity would be O(n*m), where m is the length of the arrays B.
best or worst case
For the algorithm that you showed, there is no best or worst case. It always runs in the same time for the same array; the only influence on the run time is the length of the array.
An example where there could be a best and a worst case is the following:
let's say you need to find the location of a specific number in an array.
If your method were to go through the array from start to end, the best case would be that the number is at the start; the worst case would be that the number is at the end.
For a more detailed explanation look at the link.
Cheers.
The best and the worst case are the same because the algorithm will run the "same way" every time no matter the input. So based on that we will calculate the time complexity of the algorithm using math:
T(n) = 1 + sum_{i=1..n} (3 + 2) + 1
     = 2 + sum_{i=1..n} 5
     = 2 + 5 * sum_{i=1..n} 1
     = 2 + 5 * (n - 1 + 1)
     = 5n + 2
The (3 + 2) inside the summation is due to the fact that inside the loop we have 5 distinct and measurable actions:
j = min{max{i, A[i]}, n^3} counts as three actions, because we have 2 comparisons and a value assignment to the variable j.
s = s + j counts as 2 actions, because we have one addition and a value assignment to the variable s.
Asymptotically: Θ(n)
How we calculate Θ(n):
We look at the result, 5n + 2, drop the constant terms and coefficients, and keep the fastest-growing term, which here is n.
Other examples:
8n^3 + 5n + 2 -> Θ(n^3)
10*log n + n^4 + 7 -> Θ(n^4)
More info: http://bigocheatsheet.com/
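To tie the operation count back to runnable code, here is the procedure in Python (a minimal sketch; Python lists are 0-indexed, hence A[i - 1]):

def summation(A):
    n = len(A)
    s = 0                                  # 1 action
    for i in range(1, n + 1):              # body executes n times
        j = min(max(i, A[i - 1]), n ** 3)  # 3 actions: 2 comparisons + 1 assignment
        s = s + j                          # 2 actions: 1 addition + 1 assignment
    return s                               # 1 action

# Total: 2 + 5n actions -> Theta(n), regardless of the array's contents.
print(summation([3, 1, 4, 1, 5]))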

Big O notation estimate

Not sure if this is the right place for this kind of question, but here goes. Given the following code, how many basic operations are there, and how many times is each one performed? What is the big-O notation for this running time? This is in MATLAB, if it matters.
total = 0;
for i = 1:n
for j = 1:n
total = total + j;
end
end
My thinking is that, for each i, the j = 1:n loop runs once. Within the j = 1:n loop there are n calculations. So for the j = 1:n loop it's n^2. This runs n times within the i = 1:n loop, so the total number of calculations is n^3, and the big-O notation is O(n^3). Is this correct?
The short answer is:
O(n^2)
The long (and simplified) answer is:
The big "O" refers to the complexity of an algorithm (in this case, your code). Your question asks "how many" loops or operations are performed, but the "O" notation gives a relative idea of the complexity of an algorithm, thus not an absolute quantity. This would totally be impractical, the idea of the O notation is to generalise a measure of the complexity so that algorithms can be compared relatively to the other, without worrying too much about how many assignments, loops, and so on are performed.
That being said, there are specific guidelines on how to compute the complexity of an algorithm. Generally:
Loops are of complexity O(n), no matter how many iterations they actually perform (remember, this is an abstract measure).
Operations such as assignments, additions, etc. are generally approximated to O(1) (constant complexity), because the time they take is negligible.
There are specific rules for if/then/else constructs, but that would make things more complicated here; I invite you to read some introductory material on algorithm complexity analysis.
Also, be careful: the "n" is not a variable from your code; it is a generic notation used to denote linear complexity in the input size.
Measuring the complexity of an algorithm is a recursive operation: you start with the basic operations and move up to loops etc. So here is a detailed breakdown (I purposely go into too much detail so you get an idea of how it works; in practice you don't have to go to that level of detail):
You start off with the first instruction:
O(total = 0;) = O(1)
because it is an assignment.
Then:
O(total = total + j;) = O(total + j) + O(total = x)
where x is the result of total + j.
= O(1) + O(1)
These are basic operations, thus they have a complexity of 1.
= O(1)
Because "O" is a "greatness" indicator that considers any sum of constants as 1.
Now coming to the loop:
O(
    for i = 1:n                  // O(n)
        for j = 1:n              // O(n)
            total = total + j;   // O(1)
        end
    end
)
=
O(
    n * (
        n * (
            1
        )
    )
)
= O(n * n * 1)
= O(n^2)
If you had two loops in a row (for ... ; for .... ;), the complexity would not be O(2n), but O(n), because again, O generalises.
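For instance (a small Python sketch, since the point is language-independent), two loops in a row do about 2n units of work, and O(2n) = O(n):

def two_loops(n):
    total = 0
    for i in range(n):   # first loop: n iterations
        total += 1
    for j in range(n):   # second loop: another n iterations
        total += 1
    return total         # about 2n operations in total: O(2n) = O(n)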
Hope that helps :)
Your analysis is on the right track, but you're overestimating the cost by a factor of n. In particular, look here:
Within the j = 1:n loop there are n calculations. So for the j = 1:n loop it's n^2.
You are right that the j = 1:n loop does n calculations, but each individual iteration of the loop only does 1 calculation. Since the loop runs n times, the work done is O(n), not O(n^2). If you then repeat the rest of your analysis from that point, you'll end up getting that the total work done is Θ(n^2), a tighter bound than what you had before.
As a note - you can actually speed this up pretty significantly. Notice that the inner loop adds 1 + 2 + 3 + ... + n to the total. We know that 1 + 2 + 3 + ... + n = n(n+1)/2, so you can rewrite the code as
total = 0;
for i = 1:n
total = total + n * (n + 1) / 2;
end
But notice that now you're just adding in n copies of n * (n + 1) / 2, so you can just rewrite this as
total = n * n * (n + 1) / 2
and the whole thing takes time O(1).
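If you want to convince yourself that the three versions agree, here is a quick Python mirror of the code above (a sketch; integer division // is safe because n(n+1) is always even):

def nested(n):
    # Original double loop: Theta(n^2) time.
    total = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            total += j
    return total

def single_loop(n):
    # Inner loop replaced by Gauss's formula: Theta(n) time.
    total = 0
    for i in range(1, n + 1):
        total += n * (n + 1) // 2
    return total

def closed_form(n):
    # Everything folded into one formula: O(1) time.
    return n * n * (n + 1) // 2

for n in (1, 7, 100):
    assert nested(n) == single_loop(n) == closed_form(n)
print("all three agree")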

What is the O notation of this loop?

I understand that this is O(N^2):
Loop from i=1 to N
    Loop from j=1 to N
        Do something with i,j
But what about this?
Loop from i=1 to N
    Loop from j=1 to i
        Do something with i,j
Is it still O(N^2) or O(N log N)? I don't really understand how to tell.
This is also O(N^2).
The total number of inner iterations is N(N+1)/2 ~ O(N^2):
i = 1 then j = 1
i = 2 then j = 1 to 2
i = 3 then j = 1 to 3
i = 4 then j = 1 to 4
...
i = N then j = 1 to N
So the total is 1 + 2 + 3 + 4 + ... + N = (N * (N+1))/2 ~ O(N^2).
For the second loop, the running time will be O((1/2) N^2), which becomes O(N^2), as we don't care about the constant in O notation. Usually a log N algorithm involves dividing the subproblem into half its size in each iteration; take merge sort, for example: in each step it divides the array in half.
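You can verify the N(N+1)/2 count directly with a small Python sketch of the triangular loop:

def triangular_count(N):
    count = 0
    for i in range(1, N + 1):
        for j in range(1, i + 1):   # inner loop runs i times
            count += 1
    return count

for N in (1, 10, 1000):
    assert triangular_count(N) == N * (N + 1) // 2   # exactly N(N+1)/2 steps
print("total iterations grow as N(N+1)/2, i.e. Theta(N^2)")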
Also O(N^2).
You have to look at the worst case of how long your code will run.
The first loop runs from 1 to N.
For each iteration of that loop there is a second loop, which runs from 1 to i.
And we know that i will be N on the last iteration, so in the worst case it runs for O(N*N) steps, which is O(N^2).
We ignore constants in big-O notation.
If these concepts are difficult, try googling some tutorials and examples. All you need is some practice, and you will get it.

For a time complexity analysis I need help with a summation

For an algorithm's time complexity analysis I need to know the result of summing n/i as i runs from 1 to log n. I saw somewhere trustworthy that it is actually a harmonic sum, but I highly doubt it...
The recurrence of the algorithm was originally T(n) = 5T(n/5) + n/log n.
This question was originally found in the Introduction to Algorithms (second edition) book.
Save me! :)
http://faculties.sbu.ac.ir/~tahmasbi/DataStructure/Introduction%20to%20Algorithms-Cormen%20Solution.pdf
On page 58 there's a line that says:
=n*sigma n/i where i goes from 1 to logn
=n*sigma 1/i where i goes from 1 to logn
That's the only part I have a problem with... because all they did was take the n out of the sigma, but where has it gone? Why is it valid to just make it disappear?
As opposed to what they're saying, I think it should be:
=n*sigma n/i where i goes from 1 to logn
=n*n*sigma 1/i where i goes from 1 to logn
=n^2*sigma 1/i where i goes from 1 to logn
It is indeed a harmonic sum:
n/1 + n/2 + ... + n/log n = n * [1 + 1/2 + ... + 1/log n] = n * H(log n), where H(log n) is the (log n)-th harmonic number.
The book uses H(m) = Θ(log m), and because of this, H(log n) = Θ(log log n). So overall we get: T(n) = n * H(log n) = n * Θ(log log n) = Θ(n log log n).
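As a quick numeric sanity check in Python (a sketch: I take the summation's upper bound to be floor(log2 n); a different log base only changes constant factors):

from math import log, log2

def s(n):
    # n/1 + n/2 + ... + n/floor(log2 n) = n * H(floor(log2 n))
    m = int(log2(n))
    return sum(n / i for i in range(1, m + 1))

# Since H(m) = Theta(log m), the ratio s(n) / (n * log(log n)) should stay bounded.
for n in (2**10, 2**20, 2**30):
    print(n, s(n) / (n * log(log(n))))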
