N*2^N vs N^2 time complexity [closed]

Which time complexity is better, N*(2^N) or N^2, and why?

N*(2^N) is exponential. If you take N=10, for example, you get 10*2^10 = 10240.
N^2 is merely polynomial. If you take N=10, you get 100.
Exponential is worse than polynomial for large N, and even for reasonable N, as in your case. To see it intuitively, imagine growing N by 1. In the polynomial case, the result grows by a factor of ((N+1)/N)^2, which tends to 1: it grows, but not by much. In the exponential case, growing N by 1 slightly more than doubles the result, since N*2^N picks up a factor of 2(N+1)/N.
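To make the gap concrete, here is a minimal sketch (Python; it just tabulates the two functions, nothing here is from the original answer):

```python
# Tabulate N*2^N against N^2 for a few small N to show how fast they diverge.
for n in (1, 5, 10, 15, 20):
    print(f"N={n:2d}   N*2^N = {n * 2**n:>10}   N^2 = {n**2:>4}")
```

Already at N=20 the exponential function is roughly 50,000 times larger.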

Related

Which algorithm would be faster? [closed]

As per Big O notation, if the time complexity of one algorithm is O(2^n) and the other is O(n^1000), which would be the faster one?
A way to recognize the overall behavior in non-obvious cases like this: take the logarithm of both functions.
(Sometimes one can instead take the ratio of the two functions and evaluate its limit for large n, but that approach is not convenient here.)
log(2^n) = n*log(2)
log(n^1000) = 1000*log(n)
The first result is a straight line with positive slope. The second one's plot is a concave curve (negative second derivative), so the first function becomes larger at some sufficiently large n.
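To see where the crossover actually happens, here is a quick numerical check (a Python sketch, not part of the original answers; it compares the logarithms precisely because the functions themselves are astronomically large):

```python
# Find the first n where n*log(2) exceeds 1000*log(n),
# i.e. where 2^n finally overtakes n^1000.
import math

n = 2
while n * math.log(2) <= 1000 * math.log(n):
    n += 1
print(n)  # first n with 2^n > n^1000; this lands around n = 13747
```

So for all practical input sizes below roughly 14,000 the "exponential" algorithm would actually be faster here, but asymptotically O(2^n) always loses.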
O(n^1000) is in the same class as O(n^2) and O(n^777777777), which is polynomial time, whereas O(2^n) is exponential time, which is far slower than polynomial for large n.
https://www.bigocheatsheet.com/

Advantage of using Dinic's O((V^2)E) algorithm over Edmonds-Karp algorithm O(V(E^2)) [closed]

Is there any advantage of using Dinic's O((V^2)E) algorithm over the Edmonds-Karp algorithm, which is O(V(E^2))?
In other words, I want to know how O((V^2)E) is better than O(V(E^2)), from a competitive programming point of view.
Let's say the total number of vertices is n. Usually, the number of edges in a connected graph tends to be between n and n^2.
Most input graphs are not very sparse, so in the majority of cases the number of edges is greater than n (perhaps O(n log n), or in the worst case O(n^2)).
So, in the worst case, O(V^2 * E) is O(n^4), whereas O(V * E^2) is O(n^5). Hence the advantage of an O(V^2 * E) algorithm over an O(V * E^2) one, as the sketch below illustrates.
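A small sketch of that worst case (Python; the values are hypothetical, it simply plugs a dense graph with E = V^2 into both bounds):

```python
# Compare both running-time bounds on a dense graph, where E is about V^2.
for v in (10, 100, 1000):
    e = v * v                   # dense graph: E on the order of V^2
    dinic = v**2 * e            # O(V^2 * E)  ->  O(n^4)
    edmonds_karp = v * e**2     # O(V * E^2)  ->  O(n^5)
    print(f"V={v:4d}  V^2*E={dinic:.1e}  V*E^2={edmonds_karp:.1e}  "
          f"ratio={edmonds_karp // dinic}")
```

The ratio of the two bounds is E/V, so the denser the graph, the bigger Dinic's advantage.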

Algorithm - worst case time complexity considering space complexity [closed]

To solve a particular problem, two algorithms are available:
Algo 1 takes O(n) time and O(n) space in the worst case.
Algo 2 takes O(n log n) time and O(1) space in the worst case.
What is the worst-case time complexity to solve the problem with algos 1 and 2, and why?
If you're asking which worst case is worse: Alg 2's worst-case time is worse, because n log n > n for large n.
Edit (to answer the question raised in comments):
If you're asking which worst case is best: Alg 1's worst-case time is best, again because n log n > n.

Prove n^2 + 5 log(n) = O(n^2) [closed]

I am trying to prove that n^2 + 5 log(n) = O(n^2), O representing big-O notation. I am not great with proofs and any help would be appreciated.
Informally, big-O keeps only the fastest-growing term as n grows arbitrarily large. Since n^2 grows much faster than log(n), the claim should be plausible.
More formally, f(n) = O(g(n)) means there exist constants c > 0 and n0 such that f(n) <= c*g(n) for all n >= n0. It is enough for the ratio f(n)/g(n) to stay bounded; it need not tend to 1 (that stronger condition is asymptotic equivalence, which happens to hold here). Indeed, lim(n->inf) (n^2 + 5 log(n)) / n^2 = 1 + lim(n->inf) 5 log(n) / n^2 = 1, so the ratio is bounded and n^2 + 5 log(n) = O(n^2).
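For a fully formal version, here is a direct proof from the definition (a sketch; the constants c = 6 and n_0 = 1 are just one convenient choice):

```latex
% Direct proof from the definition of big-O.
% For all $n \ge 1$ we have $\log n \le n \le n^2$, hence
\[
  n^2 + 5\log n \;\le\; n^2 + 5n^2 \;=\; 6n^2 .
\]
% So with $c = 6$ and $n_0 = 1$,
\[
  n^2 + 5\log n \;\le\; c\,n^2 \quad \text{for all } n \ge n_0 ,
\]
% which is exactly the definition of $n^2 + 5\log n = O(n^2)$.
```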

Log-log plot/graph of algorithm time complexity [closed]

I just wrote the quicksort and mergesort algorithms, and I want to make a log-log plot of their run time vs. the size of the array to sort.
As I have never done this, my question is: does it matter if I choose arbitrary numbers for the array length (size of input), or should I follow a pattern (something like 10^3, 10^4, 10^5, etc.)?
In general, you need to choose array lengths, for each method, that are large enough to display the expected O(n log n) or O(n^2) type behavior.
If your n is too small, the run time may be dominated by lower-order terms: for example, an algorithm with run time = 1000000*n + n^2 will look like ~O(n) for any n well below 10^6. For most algorithms this small-n behavior means that your log-log plot will initially be curved.
On the other hand, if your n is too large, your algorithm may take too long to complete.
The best compromise may be to start with a small n and time runs for n, 2n, 4n, ..., or n, 3n, 9n, ..., increasing until you can clearly see the log-log plots approaching straight lines, as in the sketch below.
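A minimal timing harness along those lines (a Python sketch; it uses the built-in sorted as a stand-in, so substitute your own quicksort/mergesort, and it assumes matplotlib is installed):

```python
import random
import time

import matplotlib.pyplot as plt

def best_time(sort_fn, n, trials=3):
    """Best-of-`trials` wall-clock time to sort n random floats."""
    best = float("inf")
    for _ in range(trials):
        data = [random.random() for _ in range(n)]
        start = time.perf_counter()
        sort_fn(data)
        best = min(best, time.perf_counter() - start)
    return best

sizes = [1000 * 2**k for k in range(8)]        # geometric sizes: n, 2n, 4n, ...
times = [best_time(sorted, n) for n in sizes]  # swap in your own sort here

plt.loglog(sizes, times, marker="o")
plt.xlabel("input size n")
plt.ylabel("run time (seconds)")
plt.show()
```

On log-log axes an O(n^k) run time approaches a straight line of slope k (and O(n log n) looks like slope just above 1), so the initial curvature from small-n effects is easy to spot.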
