what's the time complexity of the following program [closed] - complexity-theory

What's the time complexity of the following program?
sum=0;
for(i=1;i<=5;i++)
sum=sum+i;
And how would this complexity be expressed in terms of a log? I would highly appreciate it if someone could explain the complexity step by step, and furthermore how to express it in Big-O notation and in terms of log n.
[Edited]
sum=0; //1 time
i=1; //1 time
i<=5; //6 times
i++ //5 times
sum=sum+i;//5 times
Is the time complexity 18? Is that correct?

Preliminaries
Time complexity isn't usually expressed in terms of a specific integer, so the statement "The time complexity of operation X is 18" isn't clear without a unit, e.g., 18 "doodads".
One usually expresses time complexity as a function of the size of the input to some function/operation.
You often want to ignore the specific amount of time a particular operation takes, due to differences in hardware or even differences in constant factors between different languages. For example, summation is still O(n) (in general) in C and in Python (you still have to perform n additions), but differences in constant factors between the two languages will result in C being faster in terms of absolute time the operation takes to halt.
One also usually takes "Big-Oh", e.g. O(f(n)), to describe the "worst-case" running time of an algorithm; strictly speaking it is an upper bound, and there are other symbols used to study stricter upper and lower bounds.
Your question
Instead of summing from 1 to 5, let's look at summing from 1 to n.
The complexity of this is O(n) where n is the number of elements you're summing together.
Each addition (with +) takes constant time, which you're doing n times in this case.
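As a rough sketch (my own generalization of the snippet in the question, not part of the original answer), here is the loop written for an arbitrary n; the loop body executes n times, which is exactly where O(n) comes from:

/* Summing 1..n with a loop: n additions, so the running time grows linearly in n. */
#include <stdio.h>

long sum_to_n(long n) {
    long sum = 0;
    for (long i = 1; i <= n; i++)   /* the loop body executes n times   */
        sum = sum + i;              /* each addition takes constant time */
    return sum;
}

int main(void) {
    printf("%ld\n", sum_to_n(5));   /* prints 15, after 5 additions */
    return 0;
}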
However, this particular operation that you've shown can be accomplished in O(1) (constant time), because the sum of the numbers from 1 to n can be expressed as a single arithmetic operation. I'll leave the details of that up to you to figure out.
As far as expressing this in terms of logarithms: not exactly sure why you'd want to, but here goes:
Because exp(log(n)) is n, you could express it as O(exp(log(n))). Why would you want to do this? O(n) is perfectly understandable without needing to invoke log or exp.

First of all, the loop runs 5 times for 5 inputs, hence it has a time complexity of O(n). I am assuming here that the values of i are the inputs to sum.
Secondly, you can't just state a time complexity "in log terms"; it should always be written in Big-O notation. For example, if you perform a binary search, the worst-case time complexity of that algorithm is O(log n), because you get the result in, say, 3 iterations when the input array has 8 elements.
Complexity = log2(8) = 3
Here your complexity is expressed in terms of a log.
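A minimal sketch of that binary-search claim (my own illustration, not from the answer): it counts loop iterations on a sorted array of 8 elements, and the count grows like log2(n) as the array grows.

#include <stdio.h>

int main(void) {
    int a[8] = {1, 3, 5, 7, 9, 11, 13, 15};
    int key = 13;                        /* an arbitrary key to look up     */
    int lo = 0, hi = 7, iterations = 0;
    while (lo <= hi) {
        iterations++;                    /* each pass halves the range      */
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) break;
        if (a[mid] < key) lo = mid + 1;
        else hi = mid - 1;
    }
    printf("found %d after %d iterations\n", key, iterations);  /* 3 here */
    return 0;
}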

Related

Which big O represents better worst running time? [closed]

I am looking at this question:
Which of the following time complexities represent better worst running time?
(a) O(lg(n!))
(b) O(n)
(c) O(n²)
(d) O(lg(lg(n)))
(e) none of the above
Answer: (d)
According to the options, option (d) should be the best case because it takes the least time,
and option (c) should be the worst case.
Why is the answer option (d) if the question is about the worst case?
Algorithm X can take anywhere between 1 second and 10 seconds. Algorithm Y can take anywhere between 5 seconds and 6 seconds. Which algorithm has better running time, X or Y?
It depends what exact characteristic of the algorithms you are comparing. There are several to consider. Among those are the best case running time, the worst case running time, and the average case running time.
If comparing the best case running time, X is better than Y, because the best time of X is smaller than the best time of Y.
If comparing the worst case running time, Y is better than X, because the worst time of Y is smaller than the worst time of X.
If comparing the average case running time ... that's the control question! What is the answer?
From Wikipedia:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. [...] A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
Let's say we have a function f that has a big O description of O(n²); that function might still behave much better in the best case, and in that case be described with, let's say, O(n).
Worst and best case are only relevant concepts when there are aspects of the input that can vary even when n remains the same. A typical example is a sort function. In that case we take n to be the size of the array to be sorted, but the actual values in that array can still vary. Sometimes those values give us a best case, sometimes a worst case, sometimes something in between...
If all we have is that big O notation, we cannot exclude that there is a worst case for that function where it really meets that asymptotic limit.
Now in that list of options we see what the worst case asymptotic behaviour of those functions would be, and from those we should pick the better one.
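To make the comparison concrete, here is a small sketch (my own, using the identity ln(n!) = lgamma(n+1); link with -lm) that prints the four growth rates from the options for increasing n; lg(lg n) grows far more slowly than the others, which is why (d) is the better worst-case bound:

#include <stdio.h>
#include <math.h>

int main(void) {
    for (double n = 10; n <= 1e6; n *= 10) {
        double lg_lg_n = log2(log2(n));
        double lg_fact = lgamma(n + 1.0) / log(2.0);   /* lg(n!) ~ n lg n */
        printf("n=%10.0f  lg(lg n)=%6.2f  lg(n!)=%12.0f  n=%10.0f  n^2=%14.0f\n",
               n, lg_lg_n, lg_fact, n, n * n);
    }
    return 0;
}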

How can we compare execution time in theory and in practice [closed]

From my understanding, we can only evaluate an algorithm with asymptotic analysis, but when we execute an algorithm, all we get back is an amount of time.
My question is: how can we compare those two?
They are comparable but not in the way you want.
If you have an implementation that you evaluate asymptotically to be, say, O(N^2), and you measure it to run in 60 seconds for an input of N=1000, then if you change the input to N=2000 I would expect the run time to be on the order of 60*(2^2) = 240 seconds, i.e. about 4 minutes (I increased the input by a factor of two, so the run time increases by a factor of two squared).
Now, if you have another algorithm that is also O(N^2), you might observe it to run for N=1000 in 10 seconds (the compiler creates faster instructions, or the CPU is better). Now when you move to N=2000, I would expect the run time to be around 40 seconds (same logic). If you actually measure it, you might still see some differences from the expected value because of system load or optimizations, but they become less significant as N grows.
So you can't really say which algorithm will be faster based on asymptotic complexity alone. The asymptotic complexity guarantees that there will be an input sufficiently large where the lower complexity is going to be faster, but there's no promise what "sufficiently large" means.
Another example is search. You can do linear search in O(N) or binary search in O(log N). If your input is small (< 128 ints), the compiler and processor make linear search faster than binary search. However, grow N to, say, 1 million items and binary search will be much faster than linear search.
As a rule, for large inputs optimize complexity first and for small inputs optimize run-time first. As always if you care about performance do benchmarks.
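A rough benchmarking sketch along those lines (my own example with arbitrary sizes, not code from the answer): it times linear versus binary search on the same sorted array, so you can see measured times line up with O(N) versus O(log N) once N is large.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static int linear_search(const int *a, int n, int key) {
    for (int i = 0; i < n; i++)
        if (a[i] == key) return i;
    return -1;
}

static int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (a[mid] == key) return mid;
        if (a[mid] < key) lo = mid + 1; else hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int n = 1 << 20;                         /* ~1 million sorted ints */
    int *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++) a[i] = i;

    int rounds = 1000, found = 0;
    clock_t t0 = clock();
    for (int r = 0; r < rounds; r++) found += linear_search(a, n, rand() % n) >= 0;
    clock_t t1 = clock();
    for (int r = 0; r < rounds; r++) found += binary_search(a, n, rand() % n) >= 0;
    clock_t t2 = clock();

    printf("linear: %.3fs, binary: %.3fs (found %d keys)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, found);
    free(a);
    return 0;
}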

Best-case running time: upper and lower bound? [closed]

In my assignment I was given some information about an algorithm in the form of statements. One of these statements was: "the best-case running time of Algorithm B is Ω(n^2)".
I was under the impression that the best-case running time of an algorithm is always given as either a lower bound, an upper bound, or a tight bound. I am wondering if an algorithm such as this can also have an upper bound on its best-case running time. If so, what are some examples of algorithms where this occurs?
A case is a class of inputs for which you consider your algorithm's performance. The best case is the class of inputs for which your algorithm's runtime has the most desirable asymptotic bounds. Typically this might mean there is no other class which gives a lower Omega bound. The worst case is the class of inputs for which your algorithm's runtime has the least desirable asymptotic bounds. Typically this might mean there is no other class which gives a higher O bound. The average case has nothing to do with desirability of bounds but looks at the expected value of the runtime given some stated distribution of inputs. Etc. You can define whatever cases you want.
Imagine the following algorithm:
Weird(a[1...n])
    if n is even then
        flip a coin
        if heads then bogosort(a[1...n])
        else if tails then bubble_sort(a[1...n])
    else if n is odd then
        flip a coin
        if heads then merge_sort(a[1...n])
        else if tails then counting_sort(a[1...n])
Given a list of n integers each between 1 and 10, inclusive, this algorithm sorts the list.
In the best case, n is odd. A lower bound on the best case is Omega(n) and an upper bound on the best case is O(n log n).
In the worst case, n is even. A lower bound on the worst case is Omega(n^2) and there is no upper bound on the worst case (since bogosort may never finish, despite that being terribly unlikely).
To define an average case, we would need to define probabilities. Assuming even/odd are equally likely for all n, then there is no upper bound on the average case and a lower bound on the average case is Omega(n^2), same as for the worst case. To get a different bound for the average case, we'd need to define the distribution so that n being even gets increasingly unlikely for larger lists. Maybe the only even-length list you typically pass into this algorithm has length 2, for example.
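For a simpler, concrete illustration of best versus worst case (my own example, not part of the answer above): insertion sort does about n comparisons on already-sorted input (its best case) and about n^2/2 on reverse-sorted input (its worst case), so a "case" really is a class of inputs.

#include <stdio.h>

static long insertion_sort(int a[], int n) {
    long steps = 0;                              /* count element comparisons */
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && (++steps, a[j] > key)) {
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
    return steps;
}

int main(void) {
    enum { N = 1000 };
    int sorted[N], reversed[N];
    for (int i = 0; i < N; i++) { sorted[i] = i; reversed[i] = N - i; }
    printf("best case  (sorted input):   %ld comparisons\n", insertion_sort(sorted, N));
    printf("worst case (reversed input): %ld comparisons\n", insertion_sort(reversed, N));
    return 0;
}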

Quicksort complexities in depth [closed]

So I have an exam coming up, and a big part of it will be the quicksort algorithm. As everyone knows, the best-case and average-case complexity of this algorithm is O(n log n), while the worst case is O(n^2).
As for the worst-case scenario, I know how to explain it: it happens when the selected pivot is the smallest or the biggest value in the array; then we get n quicksort calls, each of whose partition step may take up to n time. Am I right?
Now the best/average case. I've read Cormen's book, and I understood many things thanks to it, but for quicksort it focuses on the mathematical formulas to explain the O(n log n) complexity. I just want to know why it is O(n log n) without going through a full mathematical proof. So far I've only seen the Wikipedia explanation that if we choose a pivot which splits the array roughly in half each time, then we get a call tree of depth log n, but I don't know if that is true, and even if so, why the depth is log n.
I know that there are many materials covering quicksort on the internet, but they only cover the implementation, or just state the complexity without explaining it.
Am I right?
Yes.
we would have a call tree of depth logn but I don't know if that is true
It is.
why is it logn?
Because we partition the array roughly in half at every step, the call graph has depth log n.
Picture the recursion tree and its depth: it is log n, just as a search in a BST costs log n, or as binary search in a sorted array takes log n.
PS: The math tells the truth; invest in understanding it, and you shall become a better computer scientist! =)
In the best case, quicksort splits the current array 50% / 50% (in half) on each partition step, giving a recursion depth of log2(n) (1/.5 = 2); the constant base 2 is ignored, so the complexity is O(n log(n)).
If each partition step produced a 20% / 80% split, then the depth would be governed by the 80% side, giving O(n log1.25(n)) (1/.8 = 1.25); the constant base 1.25 is also ignored, so it's still O(n log(n)), even though it's about 3 times slower than the 50% / 50% case when sorting 1 million elements.
The O(n^2) time complexity occurs when the partition split only produces a linear reduction in partition size with each partition step. The simplest and worst-case example is when only 1 element is removed per partition step.
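To see the depth argument in action, here is a minimal quicksort sketch (my own code, using a Lomuto partition, not taken from the answers) that tracks the maximum recursion depth; on random input the reported depth stays close to a small multiple of log2(n), while each level does O(n) work in total.

#include <stdio.h>
#include <stdlib.h>

static int max_depth = 0;

static int partition(int a[], int lo, int hi) {
    int pivot = a[hi];                   /* last element as pivot */
    int i = lo;
    for (int j = lo; j < hi; j++) {
        if (a[j] < pivot) {
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
            i++;
        }
    }
    int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
    return i;
}

static void quicksort(int a[], int lo, int hi, int depth) {
    if (depth > max_depth) max_depth = depth;
    if (lo >= hi) return;
    int p = partition(a, lo, hi);        /* O(hi - lo) work per call          */
    quicksort(a, lo, p - 1, depth + 1);  /* balanced splits give ~log2(n)     */
    quicksort(a, p + 1, hi, depth + 1);  /* levels, with O(n) work per level  */
}

int main(void) {
    int n = 1 << 16;
    int *a = malloc(n * sizeof *a);
    for (int i = 0; i < n; i++) a[i] = rand();   /* random input: average case */
    quicksort(a, 0, n - 1, 1);
    printf("n = %d, max recursion depth = %d\n", n, max_depth);
    free(a);
    return 0;
}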

Amortized and Average runtime complexity [closed]

This is not homework; I am studying amortized analysis, and something confuses me. I can't fully understand the difference between amortized and average complexity, and I am not sure whether my reasoning below is right. Here is a question:
--
We know that the runtime complexity of a program depends on the combination of program inputs. Suppose the probability that the program has runtime complexity O(n) is p, where p << 1, and in the other cases (i.e. for the (1-p) fraction of cases) the runtime complexity is O(log n). If we run the program with K different input combinations, where K is a very large number, we can say that the amortized and average runtime complexity of this program is:
--
My first question: I have read the question here: Difference between average case and amortized analysis
So I think there is no well-defined answer for the average runtime complexity, because we have no idea what the average input looks like. But the intended answer seems to be p*O(n) + (1-p)*O(log n). Which is correct, and why?
Second, the amortized part. I have read Constant Amortized Time, and we already know that amortized analysis differs from average-case analysis in that probability is not involved; an amortized analysis guarantees the average performance of each operation in the worst case.
Can I just say that the amortized runtime is O(n)? But the given answer is O(pn). I'm a little confused about why probability is involved. Although O(n) = O(pn), I really can't see why p should appear there. If I change the way of thinking and suppose we run the program many times, so that K becomes very big, then the amortized runtime is (K*p*O(n) + K*(1-p)*O(log n)) / K = O(pn). But that seems to be the same idea as the average case.
Sorry for the confusion; please help me, and thanks in advance!
With "average" or "expected" complexity, you are making assumptions about the probability distribution of the problem. If you are unlucky, (or if your problem generator maliciously fails to match your assumption 8^), all your operations will be very expensive, and your program might take a much greater time than you expect.
Amortized complexity is a guarantee on the total cost of any sequence of operations. That means, no matter how malicious your problem generator is, you don't have to worry about a sequence of operations taking a much greater time than you expect.
(Depending on the algorithm, it is not hard to accidentally stumble on the worst case. The classic example is the naive Quicksort, which does very badly on mostly-sorted input, even though the "average" case is fast)
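A standard concrete example of the distinction (my own sketch, not referenced in the answer) is a doubling dynamic array: a single append can cost O(n) when it reallocates, but any sequence of n appends does fewer than 2n element copies in total, so the amortized cost per append is O(1) in the worst case, with no probability distribution involved.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int *data; size_t size, cap; } Vec;

static size_t copies = 0;               /* element copies done by reallocation */

static void vec_push(Vec *v, int x) {
    if (v->size == v->cap) {
        size_t new_cap = v->cap ? v->cap * 2 : 1;   /* doubling growth */
        int *nd = malloc(new_cap * sizeof *nd);
        for (size_t i = 0; i < v->size; i++) { nd[i] = v->data[i]; copies++; }
        free(v->data);
        v->data = nd;
        v->cap = new_cap;
    }
    v->data[v->size++] = x;
}

int main(void) {
    Vec v = {0};
    size_t n = 1000000;
    for (size_t i = 0; i < n; i++) vec_push(&v, (int)i);
    /* copies stays below 2n, so the amortized work per push is bounded by a
       constant even for the worst possible sequence of operations. */
    printf("%zu appends, %zu element copies (< 2n)\n", n, copies);
    free(v.data);
    return 0;
}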
