Compare the complexity of two algorithms given steps

Assume you had a data set of size n and two algorithms that processed that data
set in the same way. Algorithm A took 10 steps to process each item in the data set. Algorithm B processed each item in 100 steps. What would the complexity
be of these two algorithms?
I have drawn from the question that algorithm A completes the processing of each item with 1/10th the complexity of algorithm B, and, using the graph provided in the accepted answer to the question "What is a plain English explanation of 'Big O' notation?", I am concluding that algorithm B has a complexity of O(n^2) and algorithm A a complexity of O(n), but I am struggling to draw conclusions beyond that without the implementation.

You need more than one data point before you can start drawing any conclusions about time complexity. The difference between 10 steps and 100 steps for Algorithm A and Algorithm B could be for many different reasons:
Additive constant difference: Algorithm A is always 90 steps faster than Algorithm B, no matter the input. In this case, both algorithms would have the same time complexity.
Scalar multiplicative difference: Algorithm A is always 10 times faster than Algorithm B, no matter the input (see the sketch below). In this case, both algorithms would have the same time complexity.
The case that you brought up, where B is O(n^2) and A is O(n).
Many, many other possibilities.
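To make the second case concrete, here is a minimal hypothetical sketch (the functions are illustrative, not from the question): both are O(n), even though one does ten times the work per item.

    # Hypothetical sketch: both functions are O(n), despite the 10x difference
    # in steps per item, because Big O drops constant factors.
    def algorithm_a(data):
        for item in data:
            for _ in range(10):    # 10 "steps" per item
                pass

    def algorithm_b(data):
        for item in data:
            for _ in range(100):   # 100 "steps" per item
                pass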

Related

Comparing Complexity Of Algorithms

I am currently learning about Big O Notation running times and amortized times.
I have the following question:
Two algorithms based on the principle of divide and conquer are available to solve a problem of size n.
Algorithm 1 divides the problem into 18 smaller problems and requires O(n^2) operations to combine the sub-solutions together.
Algorithm 2 divides the problem into 64 smaller problems and requires O(n) operations to combine the sub-solutions together.
Which algorithm is better and faster (for large n)?
I'm guessing that the second algorithm is better because it requires less time (O(n) is faster than O(n^2)).
Am I correct in my guess?
Does the number of smaller problems play a role in the speed of the algorithm, or does the combining always take constant time?
In this case it's probably not intended to be a trap, but it's good to be careful and some counter-intuitive things can happen. The trap, if it happens, is mostly this: how much smaller do the sub-problems get, compared to how many of them are generated?
For example, it is true for Algorithm 1 here that if the sub-problems are 1/5th of the size of the current problem or smaller (and perhaps they meant they would be 1/18th the size?), then the overall time complexity is in O(n^2). But if the size of the problem only goes down by a factor of 4, we're already up to O(n^2.085), and if the domain is only cut in half (but still into 18 sub-problems) then it goes all the way up to O(n^4.17).
Similarly for Algorithm 2: sure, if it cuts a problem into 64 sub-problems that are each 1/64th of the size, the overall time complexity would be in O(n log n). But if the sub-problems are even a little bit bigger, say 1/63rd of the size, we immediately go up a whole step in the hierarchy to O(n^1.004) - a tiny constant in the exponent still, but no longer loglinear. Make the sub-problems 1/8th of the size and the complexity becomes quadratic, and if we go to a mere halving of the problem size at each step it's all the way up to O(n^6)! On the other hand, if the problems shrink only a little bit faster, say to 1/65th of the size, the complexity immediately stops being loglinear again, but this time in the other direction, becoming O(n).
So it could go either way, depending on how quickly the sub-problems shrink, which is not explicitly mentioned in your problem statement. Hopefully it is clear that merely comparing the "additional processing per step" is not sufficient, not in general anyway. A lot of processing per step is a disadvantage that cannot be overcome, but having only a little processing per step is an advantage that can be easily lost if the "shrinkage factor" is small compared to the "fan-out factor".
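If you want to check those exponents yourself, they all come from computing log_b(a), where a is the number of sub-problems and 1/b is the shrinkage factor. A quick sketch, just reproducing the arithmetic above:

    import math

    # The critical exponent for T(n) = a*T(n/b) + O(n^c): when c < log_b(a),
    # the overall complexity is O(n^(log_b(a))).
    def critical_exponent(a, b):
        return math.log(a, b)

    print(critical_exponent(18, 5))   # ~1.80 < 2, so the O(n^2) combine step dominates
    print(critical_exponent(18, 4))   # ~2.085 -> O(n^2.085)
    print(critical_exponent(18, 2))   # ~4.17  -> O(n^4.17)
    print(critical_exponent(64, 63))  # ~1.004 -> O(n^1.004)
    print(critical_exponent(64, 2))   # 6.0    -> O(n^6)
    print(critical_exponent(64, 65))  # ~0.996 < 1, combine step dominates -> O(n)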
The Master theorem is used for the asymptotic analysis of divide and conquer algorithms, and it provides a way for you to get a direct answer rather than guessing.
T(n) = aT(n/b) + f(n)
where T(n) is the running time on an input of size n, a is the number of subproblems you divide into, b is the factor by which the input size shrinks for each subproblem, and f(n) is the cost of splitting the problem and combining the sub-solutions. From here we find c:
f(n) is O(n^c)
For example, in your question, algorithm 1 has c = 2 and algorithm 2 has c = 1. The value of a is 18 and 64 for algorithms 1 and 2 respectively. The next part is where your problem is missing the appropriate information, since b is not provided. In other words, to get a clear answer, you need to know the factor by which each subproblem shrinks the original input.
if c < log_b(a), then T(n) is O(n^(log_b(a)))
if c = log_b(a), then T(n) is O(n^c log(n))
if c > log_b(a), then T(n) is O(f(n))
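As a sketch, the three cases can be wrapped in a small helper. Note that the b values passed in below are assumptions for illustration, since the original problem statement does not specify how much the subproblems shrink:

    import math

    # Minimal sketch of the Master theorem cases for T(n) = a*T(n/b) + O(n^c).
    def master_theorem(a, b, c):
        crit = math.log(a, b)           # the critical exponent log_b(a)
        if c < crit:
            return "O(n^%.3f)" % crit   # recursion dominates
        elif c == crit:
            return "O(n^%d log n)" % c  # both contribute equally
        else:
            return "O(n^%d)" % c        # combine step dominates

    print(master_theorem(18, 5, 2))   # Algorithm 1, assuming b = 5:  O(n^2)
    print(master_theorem(64, 64, 1))  # Algorithm 2, assuming b = 64: O(n^1 log n)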

How is pre-computation handled by complexity notation?

Suppose I have an algorithm that runs in O(n) for every input of size n, but only after a pre-computation step of O(n^2) for that given size n. Is the algorithm considered O(n) still, with O(n^2) amortized? Or does big O only consider one "run" of the algorithm at size n, and so the pre-computation step is included in the notation, making the true notation O(n+n^2) or O(n^2)?
It's not uncommon to see this accounted for by explicitly separating out the costs into two different pieces. For example, in the range minimum query problem, it's common to see people talk about things like an ⟨O(n^2), O(1)⟩-time solution to the problem, where the O(n^2) denotes the precomputation cost and the O(1) denotes the lookup cost. You also see this with string algorithms sometimes: a suffix tree provides an O(m)-preprocessing-time, O(n+z)-query-time solution to string searching, while Aho-Corasick string matching offers an O(n)-preprocessing-time, O(m+z)-query-time solution.
The reason for doing so is that the tradeoffs involved here really depend on the use case. It lets you quantitatively measure how many queries you're going to have to make before the preprocessing time starts to be worth it.
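As a rough sketch of that calculation (all constants assumed to be 1): with an ⟨O(n^2)-preprocessing, O(1)-query⟩ solution versus answering each of q queries from scratch in O(n), preprocessing pays off once n^2 + q ≤ q·n:

    # Break-even query count for <O(n^2) preprocessing, O(1) query> vs. O(n)
    # per query with no preprocessing (constants assumed to be 1).
    def breakeven_queries(n):
        # n^2 + q*1 <= q*n  =>  q >= n^2 / (n - 1)
        return n * n / (n - 1)

    print(breakeven_queries(1000))  # ~1001: past roughly n queries, preprocessing wins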
People usually care about the total time it takes to get things done when they talk about complexity.
Thus, if getting to the result R requires you to perform steps A and B, then complexity(R) = complexity(A) + complexity(B). This works out to be O(n^2) in your particular example.
You have already noted that for O analysis, the fastest growing term dominates the overall complexity (or in other words, in a pipeline, the slowest module defines the throughput).
However, complexity analysis of A and B will be typically performed in isolation if they are disjoint.
In summary, it's the amount of time taken to get the results that counts, but you can (and usually do) reason about the individual steps independent of one another.
There are cases where you cannot simply take the slowest part of the pipeline. A simple example is BFS, with complexity O(V + E). Since E = O(V^2), it may be tempting to write the complexity of BFS as O(E) (since E > V). However, that would be incorrect, since there can be a graph with no edges! In that case, you still need to iterate over all the vertices.
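For reference, a standard adjacency-list BFS over the whole graph makes the O(V + E) bound concrete:

    from collections import deque

    # Full BFS over a possibly disconnected graph: O(V + E). With no edges at
    # all, the loop over vertices still costs O(V), so O(E) alone would be wrong.
    def bfs_all(adj):
        visited = set()
        order = []
        for s in adj:                     # every vertex considered: O(V)
            if s in visited:
                continue
            visited.add(s)
            queue = deque([s])
            while queue:
                v = queue.popleft()
                order.append(v)
                for w in adj[v]:          # each edge scanned once overall: O(E)
                    if w not in visited:
                        visited.add(w)
                        queue.append(w)
        return order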
The point of O(...) notation is not to measure how fast the algorithm is, because in many specific cases an O(n) algorithm can be significantly slower than, say, an O(n^3) one. (Imagine an algorithm which runs in 10^100 · n steps vs. one which runs in n^3 / 2 steps.) If I tell you that my algorithm runs in O(n^2) time, that alone tells you nothing about how long it will take for n = 1000.
The point of O(...) is to specify how the algorithm behaves when the input size grows. If I tell you that my algorithm runs in O(n^2) time, and it takes 1 second to run for n = 500, then you should expect about 4 seconds for n = 1000, not 1.5 and not 40.
So, to answer your question -- no, the algorithm will not be O(n), it will be O(n^2), because if I double the input size the time will be multiplied by 4, not by 2.
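A tiny sketch of that doubling argument, counting steps of a hypothetical quadratic algorithm rather than timing it:

    # Hypothetical O(n^2) algorithm: doubling n multiplies the step count by ~4.
    def quadratic_steps(n):
        steps = 0
        for i in range(n):
            for j in range(n):
                steps += 1
        return steps

    print(quadratic_steps(1000) / quadratic_steps(500))  # 4.0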

How to calculate "n" for different notations: Big O, Omega, Little o, Little omega and Theta notation

I am studying algorithms, but the calculations to find time complexity are not that easy for me. It is hard to remember when to use log n, n log n, n^2, n^3, 2n, etc. My doubt is about how to consider these functions of the input while computing the complexity. Is there any specific way to calculate it, like "a for loop always takes this much complexity", and so on?
Log(n): when you are using recursion and a tree is generated, think log(n).
I mean, in divide and conquer, when you divide the problem into two halves you are actually generating a recursive tree.
Its complexity is log(n). Why? Because it is a binary tree in nature, and for a binary tree we use log base 2 of n.
Try it yourself: suppose n = 4 elements; then log2(4) = 2, the number of times you can divide it into equal halves.
nLog(n): remember that log(n) was the dividing, down to single elements. After that you start merging the sorted elements back up, and each level of merging takes linear time.
In other words, merging the elements costs n per level, and there are log(n) levels of dividing, so the total complexity is n (merging) × log(n) (dividing), which finally becomes nLog(n). A sketch of merge sort below makes this concrete.
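    # Merge sort: the recursion divides log(n) times; each level merges a total
    # of n elements in linear time, so the whole thing is O(n log n).
    def merge_sort(a):
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])       # dividing: log(n) levels deep
        right = merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):  # merging: O(n) per level
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]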
n^2:
When you see a problem solved with two nested loops, the complexity is n^2.
E.g. matrices / 2-D arrays are processed with two loops, one loop inside the outer loop.
n^3: 3-D arrays, i.e. three nested loops: a loop inside a loop inside a loop.
2n: good that you did not forget to write the "2" with this "n", otherwise I would have forgotten to explain it.
The "2" here is a constant next to "n": just ignore it. Why? Because if you travel to another city by air, you count only the hours taken by the flight, not the time spent reaching the airport. The constant is minor, so we remove it.
And for "n", just remember the word "linear": Big-O(n) is linear complexity. Note that no comparison-based algorithm sorts elements in linear time, i.e. in just one loop (a single array traversal); non-comparison sorts such as counting sort are the exception.
Things to remember:
Linear time: complexity Big-O(n).
Logarithmic and loglinear time: Big-O(log(n)) and Big-O(nlog(n)).
Polynomial time: Big-O(n^2), Big-O(n^3), Big-O(n^4), Big-O(n^5), and so on.
Exponential time: 2^n, n^n; these blow up so fast that for large n they are not practical algorithms at all (bad, bad, bad).
There are many links on how to roughly calculate Big O and its siblings' complexity, but there is no single true formula.
However, there are guidelines to help you calculate complexity, such as those presented below. I suggest reviewing as many different programs and data structures as you can to familiarize yourself with the patterns, and just study, study, study until you get it! There is a pattern, and you will see it the more you study.
Source: http://www.dreamincode.net/forums/topic/125427-determining-big-o-notation/
Nested loops are multiplied together.
Sequential loops are added.
Only the largest term is kept, all others are dropped.
Constants are dropped.
Conditional checks are constant (i.e. 1).
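A small sketch applying those rules (the function and names are illustrative only):

    # Applying the guidelines: sequential loops add, nested loops multiply,
    # conditionals are O(1), and constants and smaller terms are dropped.
    def example(items):
        total = 0
        for x in items:            # sequential loop: O(n)
            total += x
        for x in items:            # two nested loops: O(n) * O(n) = O(n^2)
            for y in items:
                if x < y:          # conditional check: O(1)
                    total += 1
        return total               # O(n) + O(n^2) -> keep the largest term: O(n^2)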

Big O efficiency not always foolproof?

I have been learning Big O efficiency at school as the "go to" method for describing algorithm runtimes as better or worse than others, but what I want to know is: will the algorithm with the better efficiency always outperform the worst of the lot, like bubble sort, in every single situation? Are there any situations where a bubble sort or another O(n^2) algorithm will be better for a task than an algorithm with a lower O() runtime?
Generally, O() notation gives the asymptotic growth of a particular algorithm. That is, the larger category that an algorithm is placed into in terms of asymptotic growth indicates how long the algorithm will take to run as n grows (for some n number of items).
For example, we say that if a given algorithm is O(n), then it "grows linearly", meaning that as n increases, its runtime grows in proportion to n, like any other O(n) algorithm.
That doesn't mean it takes exactly as long as any other algorithm that grows as O(n), because we disregard some things. For example, if the runtime of one algorithm is exactly 12n+65 ms and another algorithm's is 8n+44 ms, then for n=1000, algorithm 1 will take 12065 ms to run and algorithm 2 will take 8044 ms. Clearly algorithm 2 requires less time to run, but they are both O(n).
There are also situations where, for small values of n, an algorithm that is O(n^2) might outperform another algorithm that's O(n), due to constants in the runtime that aren't being considered in the analysis.
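For instance, with assumed step counts of 2·n^2 for the quadratic algorithm and 100·n for the linear one (the constants are made up for illustration), the quadratic algorithm actually wins below the crossover at n = 50:

    # Made-up constants: quadratic = 2*n^2 steps, linear = 100*n steps.
    # They cross at n = 50; below that, the O(n^2) algorithm does fewer steps.
    for n in (10, 100, 1000):
        quadratic = 2 * n * n
        linear = 100 * n
        winner = "quadratic" if quadratic < linear else "linear"
        print(n, quadratic, linear, winner)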
Basically, Big-O notation gives you an estimate of the complexity of the algorithm, and can be used to compare different algorithms. In terms of application, though, you may need to dig deeper to find out which algorithm is best suited for a given project/program.
Big O gives you the worst-case scenario: it assumes the input is arranged in the worst possible way, and it ignores the coefficients. If you use insertion sort on an array that is reverse sorted, it will run in n^2 time. If you use insertion sort on an already-sorted array, it will run in n time. Therefore insertion sort runs faster than many other sorting algorithms on an already sorted list, and slower than most (reasonable) algorithms on a reverse sorted list. (Selection sort, by contrast, is always n^2.)
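For reference, here is an insertion sort sketch that shows why: on already-sorted input the inner loop exits immediately (O(n) total), while on reverse-sorted input every element is shifted all the way left (O(n^2)).

    # Insertion sort: best case O(n) on sorted input (the while loop never
    # runs), worst case O(n^2) on reverse-sorted input (maximal shifting).
    def insertion_sort(a):
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:  # zero iterations if already sorted
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    print(insertion_sort([3, 1, 2]))  # [1, 2, 3]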

Time complexity of one algorithm cascaded into another?

I am working with random forest for a supervised classification problem, and I am using the k-means clustering algorithm to split the data at each node. I am trying to calculate the time complexity of the algorithm. From what I understand, the time complexity of k-means is
O(n · K · I · d)
where
n is the number of points,
K is the number of clusters,
I is the number of iterations, and
d is the number of attributes.
K, I, and d are constants or have an upper bound, and n is much larger than these three, so I suppose the complexity is just O(n).
The random forest, on the other hand, is a divide-and-conquer approach, so for n instances the complexity is O(n · log n), though I am not sure about this; correct me if I am wrong.
To get the complexity of the algorithm, do I just add these two things?
In this case, you don't add the values together. If you have a divide-and-conquer algorithm, the runtime is determined by a combination of
The number of subproblems made per call,
The sizes of those subproblems, and
The amount of work done per problem.
Changing any one of these parameters can wildly impact the overall runtime of the function. If you increase the number of subproblems made per call by even a small amount, the total number of subproblems grows exponentially, which can have a large impact overall. Similarly, if you increase the work done per level, since there are so many subproblems, the runtime can swing wildly. Check out the Master Theorem for an example of how to determine the runtime based on these quantities.
In your case, you are beginning with a divide-and-conquer algorithm where all you know is that the runtime is O(n log n) and are adding in a step that does O(n) work per level. Just knowing this, I don't believe it's possible to determine what the runtime will be. If, on the other hand, you make the assumption that
The algorithm always splits the input into two smaller pieces,
The algorithm recursively processes those two pieces independently, and
The algorithm uses your O(n) algorithm to determine which split to make
Then you can conclude that the runtime is O(n log n), since this is the solution to the recurrence given by the Master Theorem.
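Under those assumptions the recurrence is T(n) = 2·T(n/2) + O(n), and a quick step-count sketch shows the n log n growth directly:

    # T(n) = 2*T(n/2) + n, T(1) = 1: the recurrence from the assumptions above.
    # The Master theorem gives O(n log n); counting steps shows the same growth.
    def steps(n):
        if n <= 1:
            return 1
        return 2 * steps(n // 2) + n

    for n in (1024, 2048):
        print(n, steps(n))  # grows like n * log2(n): doubling n slightly more
                            # than doubles the step count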
Without more information about the internal workings of the algorithm, though, I can't say for certain.
Hope this helps!
