I compared two algorithms, A and B, and found that their time complexities differ, O(n^3) and O(n^4) respectively, but their space complexity is the same, O(n). Is that possible?
That is possible, yes. The time complexity and the space complexity are not necessarily related to each other.
As an example, think of various sorting algorithms, such as the ones in this list: as you can see, both insertion sort and heapsort have a space complexity (the "Memory" column) of O(1), while their time complexities differ.
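To make the point concrete, here is a minimal Python sketch (my own illustration; both functions are made up for this purpose): each uses O(1) extra space, yet one runs in O(n) time and the other in O(n^2).

    # Both functions below use O(1) extra space,
    # but their running times differ: O(n) vs O(n^2).
    def max_element(xs):              # O(n) time, O(1) extra space
        best = xs[0]
        for x in xs[1:]:
            if x > best:
                best = x
        return best

    def has_duplicate(xs):            # O(n^2) time, O(1) extra space
        for i in range(len(xs)):
            for j in range(i + 1, len(xs)):
                if xs[i] == xs[j]:
                    return True
        return False

    print(max_element([3, 1, 4, 1, 5]))    # 5
    print(has_duplicate([3, 1, 4, 1, 5]))  # True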
As per Big O notation, if the time complexity of one algorithm is O(2^n) and that of the other is O(n^1000), which one would be faster?
A way to recognize the overall behavior in non-obvious cases like this: take the logarithm of both functions.
(Sometimes we can instead take the ratio of the two functions and evaluate its limit for large n; here that approach is less convenient.)
log(2^n) = n*log(2)
log(n^1000) = 1000*log(n)
The first result is a straight line with a positive slope. The plot of the second is a concave curve (its second derivative is negative), so the first function becomes larger at some sufficiently big n.
[Plot: the straight line n*log(2) eventually overtakes the concave curve 1000*log(n)]
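As a numerical sanity check (my own sketch, not part of the original answer), you can search for the crossover point where n*log(2) overtakes 1000*log(n):

    # Compare log(2^n) = n*log(2) with log(n^1000) = 1000*log(n)
    # and find where the exponential overtakes the polynomial.
    import math

    n = 2
    while n * math.log(2) <= 1000 * math.log(n):
        n += 1
    print(n)  # around n ~ 13750: from here on, 2^n > n^1000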
O(n^1000) is in the same class as O(n^2) and O(n^777777777), namely polynomial time, whereas O(2^n) is exponential time, which eventually grows far faster than any polynomial (so the algorithm is far slower for large n).
https://www.bigocheatsheet.com/
I was wondering which sorting algorithm among bubble sort, insertion sort, merge sort, quick sort, and selection sort is the worst choice for sorting a list of 100 million elements when computer memory is limited, and why?
Probably selection sort. It is inefficient compared to the other options: its best-case time complexity is Ω(n²), whereas bubble sort and insertion sort can run in Ω(n) in the best case, and quick sort and merge sort in Ω(n log n).
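For illustration, here is a minimal selection sort in Python (my own sketch, not from the original answer); note that it uses O(1) extra space but performs a quadratic number of comparisons even on already-sorted input:

    # The inner loop always scans the whole remaining suffix, so the
    # algorithm does ~n^2/2 comparisons regardless of the input order.
    def selection_sort(xs):
        for i in range(len(xs)):
            m = i
            for j in range(i + 1, len(xs)):   # always runs to the end
                if xs[j] < xs[m]:
                    m = j
            xs[i], xs[m] = xs[m], xs[i]       # swap in place: O(1) space
        return xs

    print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]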
Is there any advantage to using Dinic's O(V^2 * E) algorithm over the Edmonds-Karp algorithm at O(V * E^2)?
In other words, I want to know how O(V^2 * E) is better than O(V * E^2), from a competitive-programming point of view.
Let's say the total number of vertices is n. Usually, the number of edges in a connected graph is between n and n^2.
Most input graphs are not very sparse, so in the large majority of cases the number of edges is greater than n (it might be O(n log n), or in the worst case O(n^2)).
So if you consider the worst case, O(V^2 * E) is O(n^4), whereas O(V * E^2) is O(n^5). That is the advantage of using an O(V^2 * E) algorithm over an O(V * E^2) one.
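To make the gap concrete, here is a back-of-the-envelope sketch in Python (my own illustration; the vertex count is a made-up example) for the dense case E ~ V^2 assumed above:

    # For a dense graph with E ~ V^2, Dinic's bound V^2*E becomes n^4,
    # while Edmonds-Karp's V*E^2 becomes n^5: a factor-of-n gap.
    n = 1000                  # hypothetical vertex count
    E = n * n                 # dense graph: E ~ V^2
    dinic = n**2 * E          # O(V^2 * E) -> n^4
    edmonds_karp = n * E**2   # O(V * E^2) -> n^5
    print(edmonds_karp // dinic)  # 1000, i.e. a factor of n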
To solve a particular problem, two algorithms are available:
Algo 1 takes O(n) time and O(n) space in the worst case.
Algo 2 takes O(n log n) time and O(1) space in the worst case.
What is the worst-case time complexity to solve the problem, considering algorithms 1 and 2, and why?
If you're asking which worst case is worse: Algo 2's worst-case time is worse, because n log n > n.
Edit (to answer the question raised in the comments):
If you're asking which worst case is best: Algo 1's worst-case time is best, again because n log n > n.
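A quick way to see that n log n dominates n is simply to evaluate both for a few sizes; here is a small Python sketch (my own illustration):

    import math

    for n in (10, 1_000, 1_000_000):
        print(n, n * math.log2(n))
    # 10        -> ~33
    # 1000      -> ~9966
    # 1000000   -> ~19931569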
I'm not sure if this is a problem with my understanding, but this aspect of Big O notation seems strange to me. Say you have two algorithms: the first performs n^2 operations and the second performs n^2 - n operations. Because of the dominance of the quadratic term, both algorithms have complexity O(n^2), yet the second algorithm will always be better than the first. That seems weird to me; Big O notation makes it seem like they are the same.
Big O is not about the absolute time it takes to execute your algorithm; it is about how well the algorithm scales when presented with large data sets (large values of n).
When presented with a large data set, the n^2 term quickly overshadows any linear term, so the linear term becomes insignificant.
As n grows towards infinity, n^2 will be much greater than n, so subtracting n makes no significant difference to the outcome.
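A quick numerical illustration (my own sketch): the relative gap between n^2 and n^2 - n is exactly 1/n, which vanishes as n grows:

    # Relative difference between n^2 and n^2 - n is n/n^2 = 1/n.
    for n in (10, 1_000, 1_000_000):
        print(n, (n*n - (n*n - n)) / (n*n))
    # 10       -> 0.1
    # 1000     -> 0.001
    # 1000000  -> 1e-06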