Algorithm Analysis Question

NOTE: I'm an ultra-newbie at algorithm analysis, so don't take any of my statements as absolute truths; anything (or everything) I state could be wrong.
Hi, I'm reading about algorithm analysis and Big-O notation, and I feel puzzled about something.
Suppose that you are asked to print all permutations of a char array; for [a,b,c] they would be ab, ac, ba, bc, ca and cb.
Well one way to do it would be (In Java):
for (int i = 0; i < arr.length; i++)
    for (int q = 0; q < arr.length; q++)
        if (i != q)
            System.out.println(arr[i] + " " + arr[q]);
This algorithm is O(n^2), if I'm correct.
I thought other way of doing it:
for (int i = 0; i < arr.length; i++)
    for (int q = i + 1; q < arr.length; q++)
    {
        System.out.println(arr[i] + " " + arr[q]);
        System.out.println(arr[q] + " " + arr[i]);
    }
Now this algorithm is twice as fast as the original, but unless I'm wrong, in Big-O notation it's also O(n^2).
Is this correct? Probably it isn't, so I'll rephrase: where am I wrong??

You are correct. O-notation gives you an idea of how the algorithm scales, not the absolute speed. If you add more possibilities, both solutions will scale the same way, but one will always be twice as fast as the other.
An O(n) algorithm may also be slower than an O(n^2) algorithm, for sufficiently small 'n'. Imagine your O(n) computation involves taking 5 square roots per element, and your O(n^2) solution is a single comparison per pair. The O(n^2) algorithm will be faster for small sets of data. But when n=1000, and you are doing 5000 square roots against 1000000 comparisons, then the O(n) might start looking better.
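To picture that, here is a minimal sketch (hypothetical workloads of my own, not from the answer) contrasting an O(n) method that does five square roots per element with an O(n^2) method that does one cheap comparison per pair:

static double linearButHeavy(double[] data) {   // O(n), but costly per element
    double acc = 0;
    for (double d : data)
        for (int k = 1; k <= 5; k++)
            acc += Math.sqrt(d + k);            // five square roots per element
    return acc;
}

static long quadraticButLight(double[] data) {  // O(n^2), but cheap per pair
    long count = 0;
    for (int i = 0; i < data.length; i++)
        for (int j = 0; j < data.length; j++)
            if (data[i] < data[j])              // one comparison per pair
                count++;
    return count;
}

For length-10 arrays the quadratic version tends to win; by n = 1000 its 1,000,000 comparisons lose to the linear version's 5,000 square roots.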

I think most people agree the first one is O(n^2): the outer loop runs n times, and the inner loop runs n times for every run of the outer loop, so the run time is O(n * n) = O(n^2).
The second one is also O(n^2). The outer loop runs n times, and the inner loop runs n-1-i times, which averages out to about n/2 iterations per outer iteration. So the run time of this algorithm is O(n * n/2) => O(1/2 * n^2) => O(n^2).
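To make the counting exact, the second version's inner loop body runs a triangular-number total of times (a quick check of the n/2 average above):

\sum_{i=0}^{n-1} (n - 1 - i) = 0 + 1 + \dots + (n - 1) = \frac{n(n-1)}{2} \in O(n^2)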

Big-O notation says nothing about the speed of the algorithm except for how fast it is relative to itself when the size of the input changes.
An algorithm could be O(1) yet take a million years. Another algorithm could be O(n^2) but be faster than an O(n) algorithm for small n.

Ignoring the problem of calling your program's output "permutations":
Big-O notation omits constant coefficients, and 2 is a constant coefficient.
So there is nothing wrong with a program twice as fast as the original having the same O().
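Spelled out against the definition (the constant c below is exactly what the notation absorbs):

\tfrac{1}{2} n^2 \le c \cdot n^2 \quad \text{with } c = \tfrac{1}{2} \text{ for all } n \ge 1, \quad \text{hence } \tfrac{1}{2} n^2 \in O(n^2)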

You are correct. Two algorithms are equivalent in Big-O notation if one of them takes a constant amount of time more ("A takes 5 minutes more than B"), or a constant multiple longer ("A takes 5 times longer than B"), or both ("A takes 2 times B plus an extra 30 milliseconds"), for all sizes of input.
Here is an example that uses a FUNDAMENTALLY different algorithm to do a similar sort of problem. First, the slower version, which looks much like your original example:
boolean arraysHaveAMatch = false;
for (int i = 0; i < arr1.length; i++) {
    for (int j = 0; j < arr2.length; j++) {
        if (arr1[i] == arr2[j]) {
            arraysHaveAMatch = true;
        }
    }
}
That has O(n^2) behavior, just like your original. (Note that the shortcut you discovered of starting the j index from the i index doesn't apply here, since arr1 and arr2 are different arrays and a match could sit at any pair of indices.) Now here is a different approach:
boolean arraysHaveAMatch = false;
Set<Integer> set = new HashSet<Integer>();
for (int i = 0; i < arr1.length; i++) {
    set.add(arr1[i]);
}
for (int j = 0; j < arr2.length; j++) {
    if (set.contains(arr2[j])) {
        arraysHaveAMatch = true;
    }
}
Now, if you try running these, you will probably find that the first version runs FASTER, at least if you try with arrays of length 10, because the second version has to deal with creating the HashSet object and all of its internal data structures, and because it has to calculate a hash code for every integer. HOWEVER, if you try it with arrays of length 10,000,000 you will find a COMPLETELY different story. The first version has to examine about 100,000,000,000,000 pairs of numbers (about N*N); the second version has to perform hash function calculations on about 20,000,000 numbers (about 2*N). In THIS case, you certainly want the second version!!
The basic idea behind Big O calculations is (1) it's reasonably easy to calculate (you don't have to worry about details like how fast your CPU is or what kind of L2 cache it has), and (2) who cares about the small problems... they're fast enough anyway: it's the BIG problems that will kill you! These aren't always the case (sometimes it DOES matter what kind of cache you have, and sometimes it DOES matter how well things perform on small data sets) but they're close enough to true often enough for Big O to be useful.

You're right about them both being big-O n squared, and you actually proved that to be true in your question when you said "Now this algorithm is twice as fast as the original." Twice as fast means multiplied by 1/2, which is a constant, so by definition they're in the same big-O set.

One way of thinking about Big O is to consider how well the different algorithms would fare even in really unfair circumstances. For instance, if one was running on a really powerful supercomputer and the other was running on a wristwatch. If it's possible to choose an N that is so large that even though the worse algorithm is running on a supercomputer, the wristwatch can still finish first, then they have different Big O complexities. If, on the other hand, you can see that the supercomputer will always win, regardless of which algorithm you chose or how big your N was, then both algorithms must, by definition, have the same complexity.
In your algorithms, the faster algorithm was only twice as fast as the first. This is not enough of an advantage for the wristwatch to beat the supercomputer: even if N were very high, 1 million, 1 trillion, or even Graham's number, the wristwatch could never beat the supercomputer with that algorithm. The same would be true if they swapped algorithms. Therefore both algorithms, by definition of Big O, have the same complexity.
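The same idea as a quick inequality (the constants c_1, c_2 and the speed factor K are my own illustrative symbols): if the supercomputer is K times faster but runs the O(n^2) algorithm against the watch's O(n) algorithm, then

\frac{c_2 n^2}{K} > c_1 n \iff n > \frac{c_1 K}{c_2}

so for any fixed speed advantage K there is some n beyond which the watch wins, whereas a mere constant factor of 2 between two n^2 algorithms yields no such crossover.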

Suppose I had an algorithm to do the same thing in O(n) time. Now also suppose I gave you an array of 10000 characters. Your algorithms would take n^2 and (1/2)n^2 steps, which is 100,000,000 and 50,000,000. My algorithm would take 10,000. Clearly that factor of 1/2 isn't making a difference, since mine is so much faster. The n^2 term is said to dominate lesser terms like n, essentially rendering them (and constant factors like 1/2) negligible.

The big-O notation expresses a family of functions, so strictly speaking saying "this thing is O(n²)" means "this thing is in the set O(n²)".
This isn't pedantry; it is the only correct way to understand these things.
O(f) = { g | there exist x_0 and c such that, for all x > x_0, g(x) <= f(x) * c }
Now, suppose that you're counting the steps that your algorithm, in the worst case, performs in terms of the size of the input: call that function f.
If f ∈ O(n²), then you can say that your algorithm has a worst case of O(n²) (but also O(n³) or O(2^n)).
The meaninglessness of the constants follows from the definition (see that c?).

The best way to understand Big-O notation is to get a mathematical grasp of the idea behind it. Look up the dictionary meaning of the word "asymptote":
A line which approaches nearer to some curve than assignable distance, but, though
infinitely extended, would never meet it.
This describes an upper bound on the execution time (imaginary, because the asymptote line meets the curve only at infinity), so whatever you do will stay under that bound.
With this idea in mind, you might also want to look up Big-O, little-o, and Omega notation.

Always keep in mind that Big O notation describes the "worst case" scenario. In your example, the first algorithm has an average case of full outer loop * full inner loop, so it is n^2 of course. Because the second case has one instance where it is almost full outer loop * full inner loop, it has to be lumped into the same pile of n^2, since that is its worst case. From there it only gets better, and its average compared to the first function is much lower. Regardless, as n grows, both functions' running times grow quadratically, and that is all Big O really tells you. The quadratic curves can vary widely, but at the end of the day, they are all of the same type.

Related

Time Complexity (Big O) - Can value of N decides whether the time complexity is O(1) or O(N) when we have 2 nested FOR loops?

Suppose that I have 2 nested for loops, and 1 array of size N as shown in my code below:
int result = 0;
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        result = array[i] + array[j]; // just some funny operation
    }
}
Here are 2 cases:
(1) If the constraint is that N >= 1,000,000 strictly, then we can definitely say that the time complexity is O(N^2). This is true for sure, as we all know.
(2) Now, if the constraint is that N < 25 strictly, then people could probably say that, because N is always so small, the time complexity is estimated to be O(1), since it takes very little time to run and complete these 2 for loops WITH MODERN COMPUTERS. Does that sound right?
Please tell me if the value of N plays a role in deciding the time complexity. If yes, then how big does N need to be in order to play that role (1,000? 5,000? 20,000? 500,000?) In other words, what is the general rule of thumb here?
INTERESTING THEORETICAL QUESTION: If, 15 years from now, computers are so fast that even with N = 25,000,000 these 2 for loops complete in 1 second, can we then say that the time complexity is O(1) even for N = 25,000,000? I suppose the answer would be YES at that time. Do you agree?
tl;dr: No. The value of N has no effect on time complexity. O(1) versus O(N) is a statement about "all N", i.e. about how the amount of computation increases as N increases.
Great question! It reminds me of when I was first trying to understand time complexity. I think many people have to go through a similar journey before it ever starts to make sense so I hope this discussion can help others.
First of all, your "funny operation" is actually funnier than you think, since your entire nested for-loop can be replaced with:
result = array[N - 1] + array[N - 1]; // just some hilarious operation hahaha ha ha
Since result is overwritten each time, only the last iteration affects the outcome. We'll come back to this.
As far as what you're really asking here, the purpose of Big-O is to provide a meaningful way to compare algorithms in a way that is independent of input size and independent of the computer's processing speed. In other words, O(1) versus O(N) has nothing to do with the size of N and nothing to do with how "modern" your computer is. All of that affects the execution time of the algorithm on a particular machine with a particular input, but does not affect the time complexity, i.e. O(1) versus O(N).
It is actually a statement about the algorithm itself, so a math discussion is unavoidable, as dxiv has so graciously alluded to in his comment. Disclaimer: I'm going to omit certain nuances in the math since the critical stuff is already a lot to explain and I'll defer to the mountains of complete explanations elsewhere on the web and textbooks.
Your code is a great example to understand what Big-O does tell us. The way you wrote it, its complexity is O(N^2). That means that no matter what machine or what era you run your code in, if you were to count the number of operations the computer has to do, for each N, and graph it as a function, say f(N), there exists some quadratic function, say g(N)=9999N^2+99999N+999 that is greater than f(N) for all N.
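To see what that f(N) looks like for this exact code, here is a small sketch (the helper name is mine) that counts how many times the loop body executes:

// Counts loop-body executions of the j-from-i pattern in the question.
// The total is N + (N-1) + ... + 1 = N*(N+1)/2, which sits below the
// quadratic g(N) = N^2 for every N >= 1.
static long countOps(int N) {
    long ops = 0;
    for (int i = 0; i < N; i++)
        for (int j = i; j < N; j++)
            ops++;
    return ops;
}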
But wait, if we just need to find big enough coefficients in order for g(N) to be an upper bound, can't we just claim that the algorithm is O(N) and find some g(N)=aN+b with gigantic enough coefficients that it's an upper bound of f(N)? THE ANSWER TO THIS IS THE MOST IMPORTANT MATH OBSERVATION YOU NEED TO UNDERSTAND TO REALLY UNDERSTAND BIG-O NOTATION. Spoiler alert: the answer is no.
For visuals, try this graph on Desmos, where you can adjust the coefficients: https://www.desmos.com/calculator/3ppk6shwem
No matter what coefficients you choose, a function of the form aN^2+bN+c will ALWAYS eventually outgrow a function of the form aN+b (both having positive a). You can push a line as high as you want like g(N)=99999N+99999, but even the function f(N)=0.01N^2+0.01N+0.01 crosses that line and grows past it after N=9999900. There is no linear function that is an upper bound to a quadratic. Similarly, there is no constant function that is an upper bound to a linear function or quadratic function. Yet, we can find a quadratic upper bound to this f(N) such as h(N)=0.01N^2+0.01N+0.02, so f(N) is in O(N^2). This observation is what allows us to just say O(1) and O(N^2) without having to distinguish between O(1), O(3), O(999), O(4N+3), O(23N+2), O(34N^2+4+e^N), etc. By using phrases like "there exists a function such that" we can brush all the constant coefficients under the rug.
So having a quadratic upper bound, aka being in O(N^2), means that the function f(N) is no bigger than quadratic, and in this case happens to be exactly quadratic. It sounds like this just comes down to comparing the degrees of polynomials, so why not just say that the algorithm is a degree-2 algorithm? Why do we need this super abstract "there exists an upper bound function such that bla bla bla..."? Because this is the generalization necessary for Big-O to account for non-polynomial functions, some common ones being log N, N log N, and e^N.
For example if the number of operations required by your algorithm is given by f(N)=floor(50+50*sin(N)), we would say that it's O(1) because there is a constant function, e.g. g(N)=101 that is an upper bound to f(N). In this example, you have some bizarre algorithm with oscillating execution times, but you can convey to someone else how much it doesn't slow down for large inputs by simply saying that it's O(1). Neat. Plus we have a way to meaningfully say that this algorithm with trigonometric execution time is more efficient than one with linear complexity O(N). Neat. Notice how it doesn't matter how fast the computer is because we're not measuring in seconds, we're measuring in operations. So you can evaluate the algorithm by hand on paper and it's still O(1) even if it takes you all day.
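A sketch of such a bizarre algorithm (hypothetical, just to mirror that f(N)): the work oscillates with N but never exceeds 100 steps, so the constant g(N) = 101 bounds it and the method is O(1):

static int oscillatingWork(int N) {
    int steps = (int) Math.floor(50 + 50 * Math.sin(N)); // always between 0 and 100
    int acc = 0;
    for (int k = 0; k < steps; k++)
        acc++;                                           // at most 100 iterations for any N
    return acc;
}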
As for the example in your question, we know it's O(N^2) because there are aN^2+bN+c operations involved for some a, b, c. It can't be O(1), or even O(N), because no matter what aN+b you pick, I can find a large enough input size N such that your algorithm requires more than aN+b operations. On any computer, in any time zone, with any chance of rain outside. Nothing physical affects O(1) versus O(N) versus O(N^2). What changes it to O(1) is changing the algorithm itself to the one-liner that I provided above, where you just add two numbers and spit out the result no matter what N is. Let's say for N=10 it takes 4 operations to do both array lookups, the addition, and the variable assignment. If you run it again on the same machine with N=10000000, it's still doing the same 4 operations. The amount of operations required by the algorithm doesn't grow with N. That's why the algorithm is O(1).
It's why problems like finding an O(N log N) algorithm to sort an array are math problems and not nanotechnology problems. Big-O doesn't even assume you have a computer with electronics.
Hopefully this rant gives you a hint as to what you don't understand so you can do more effective studying for a complete understanding. There's no way to cover everything needed in one post here. It was some good soul-searching for me, so thanks.

Big O Notation - Growth Rate

I am trying to understand if my reasoning is correct:
If I am given the following snippet of code and asked to find its Big O:
for (int i = 3; i < 1000; i++)
    sum++;
I want to say O(n), because we are dealing with one for loop and a sum++ that is iterated, say, n times. But looking at this, I realise we are not dealing with n at all, as we are given the number of times this loop iterates. Still, in my mind it would be wrong to say that this has a Big O of O(1), because the growth is linear and not constant and depends on the size of this loop (although the loop bound is constant). Would I be correct in saying that this is O(n)?
Also, another one that has me thinking around which has a similar setup:
for (int i = 0; i < n * n * n; i++)
    for (int j = 0; j < i; j++)
        sum++;
Now here again I know that when dealing with a nested loop containing an outer and an inner loop, we use the multiplication rule to derive our Big O. Let's assume that the inner loop was in fact j < n; then I would say that the Big O of this snippet of code is O(n^4). But as it isn't, and the second loop runs its iterations off i and not n, would it be correct to call this O(n^3)?
I think what is throwing me is where 'n' is not appearing and we're given a constant or another variable, and all of a sudden I'm assuming n must not be considered for that section of code. However, the other part of my reasoning tells me that despite not seeing an 'n' I should still treat the code as though there were an n, as the growth rate would be the same regardless of the variable?
It works best if you consider the code to always be within a function, where the function's arguments are used to calculate complexity. Thus:
// this is O(1), since it always takes the same time
void doSomething() {
    for (int i = 3; i < 1000; i++)
        sum++;
}
And
// this is O(n^6), since it only takes one argument
// and if you plot it, the curve matches t = k * n^6
void doSomethingElse(int n) {
    for (int i = 0; i < n * n * n; i++)
        for (int j = 0; j < i; j++)
            sum++;
}
In the end, the whole point of big-O is to say what the run-times (or memory footprints; but if you don't say anything, you are referring to run-times) look like as the problem size increases. It matters not what happens on the inside (although you can use that to estimate complexity); what really matters is what you would measure on the outside.
Looking closer at your second snippet, it's O(n^6) because:
the outer loop runs exactly n^3 times; the inner loop runs, on average, n^3 / 2 times.
Therefore, the inner sum++ runs n^3 * k * n^3 times (with k a constant). In big-O notation, that's O(n^6).
The first is either O(1) or simply a wrong question, just as you understand it.
The second is O(n^6). Try to imagine the size of the inner loop. On the first iteration, it will be 1. On the second, 2. On the i-th, it will be i, and on the last, it will be n*n*n. So on average it will be n*n*n/2, but that's O(n*n*n). That, times the outer O(n^3), is O(n^6) overall.
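As a sanity check, the exact number of times sum++ runs is a triangular number in n^3:

\sum_{i=0}^{n^3 - 1} i = \frac{n^3 (n^3 - 1)}{2} \in \Theta(n^6)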
Although the calculation of O() for your question, by others, may be correct, here is a little more insight that should help delineate the conceptual outlook for this whole asymptotic analysis story.
I think what is throwing me is where 'n' is not appearing and we're given a constant or another variable and all of a sudden I'm assuming n must not be considered for that section of code.
The simplest way to understand this one is to identify if the execution of a line of code is affected by/related to the current value of n.
Had the inner loop been, let's say, j < 10 instead of j < i, the complexity would have well been O(n^3).
Why is any constant considered O(1)?
This may sound a little counter-intuitive at first; however, here is a small conceptual summary to clear the air.
Let us say that your first loop runs 1,000 times. Now change it to run 1,000,000 times and notice that, hey, it doesn't take the same time anymore.
Fair enough! Even though it may now take your computer a few seconds more to run the same piece of code, the time complexity still remains O(1), because the running time does not depend on the input at all.
What this practically means is that you can actually calculate the time it takes your computer to execute that piece of code, and it will remain constant forever (for the same configuration).
Big-O is actually a function of the input size, not a measure of the discrete value itself (time/space).
I hope the above explanation also helps clarify why we ignore the constants in O() notation.
Why is this Big-O thing so generalized, and why is it used in the first place?
I thought of including this extra info as I myself had this question in mind when learning the topic for the first time.
Asymptotic time complexity is an a priori analysis of an algorithm, used to understand its worst (Big-O) behavior (in time/space) regardless of the size of the input.
E.g. your second snippet cannot perform worse than O(n^6).
It is generalized because from one computer to another only the constants change, not the Big-O.
With more experience, you will realize that practically, you want your algorithm's time complexity to be as asymptotically small as possible. Up to polynomial time, it is fine. But for large inputs, today's computers start coughing if you try to run an algorithm of exponential complexity, of the order O(k^n) or O(n^n), e.g. the Travelling Salesman and other NP-complete/NP-hard problems.
Hope this adds to the info. :)

Why would an O(n^2) algorithm run quicker than a O(n) algorithm on the same input?

Two algorithms, say A and B, are written to solve the same problem.
Algorithm A is O(n).
Algorithm B is O(n^2).
You expect algorithm A to perform better.
However, when you run a specific example on the same machine, algorithm B runs quicker.
Give reasons to explain how such a thing can happen.
Algorithm A, for example, can run in time 10000000*n, which is O(n).
If algorithm B runs in n*n, which is O(n^2), then A will be slower for every n < 10000000.
O(n) and O(n^2) are asymptotic runtimes that describe the behavior as n -> infinity.
EDIT - EXAMPLE
Suppose you have the two following functions:
boolean flag;

void algoA(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < 1000000; j++)
            flag = !flag;
}

void algoB(int n) {
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            flag = !flag;
}
algoA performs n*1000000 flag flip operations, so it is O(n), whereas algoB performs n^2 flag flip operations, so it is O(n^2).
Just solve the inequality 1000000n > n^2 and you'll find that it holds for n < 1000000. That is, over that range the O(n) method will be slower.
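A quick way to see that crossover without timing anything is to count the predicted flag flips from the two snippets above (a throwaway sketch, not a benchmark):

// algoA performs 1,000,000 * n flips; algoB performs n * n flips.
// algoB stays ahead until n reaches 1,000,000.
public static void main(String[] args) {
    for (long n : new long[]{1_000, 100_000, 1_000_000, 10_000_000}) {
        System.out.printf("n=%,d: algoA=%,d flips, algoB=%,d flips%n",
                n, 1_000_000L * n, n * n);
    }
}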
Knowing the algorithms would help give a more exact answer.
But for the general case, I could think of a few relevant factors:
Hardware related
e.g. if the slower algorithm makes good use of caching & locality or similar low-level mechanisms (see Quicksort's performance compared to other theoretically faster sorting algorithms). Timsort is worth reading about as well, as an example where an "efficient" algorithm is used to break the problem up into smaller input sets and a "simpler", theoretically "higher complexity" algorithm is used on those sets, because it's faster.
Properties of the input set
e.g. if the input size is small, the efficiency will not come through in a test; also, for example with sorting again, if the input is mostly pre-sorted vs completely random, you will see different results. Many different inputs should be used in a test of this type for an accurate result. Using just one example is simply not enough, as the input can be engineered to favor one algorithm instead of another.
Specific implementation of either algorithm
e.g. there's a long way to go from the theoretical description of an algorithm to implementation; poor use of data structures, recursion, memory management etc. can seriously affect performance
Big-O notation says nothing about the speed itself, only about how the speed will change when you change n.
If both algorithms take the same time for a single iteration, @Itay's example is also correct.
While all of the answers so far seem correct... none of them feel really "right" in the context of a CS class. In a computational complexity course you want to be precise and use definitions. I'll outline a lot of the nuances of this question and of computational complexity in general. By the end, we'll conclude why Itay's solution at the top is probably what you should've written. My main issue with Itay's solution is that it lacks definitions which are key to writing a good proof for a CS class. Note that my definitions may differ slightly from your class' so feel free to substitute in whatever you want.
When we say "an algorithm is O(n)" we actually mean "this algorithm is in the set O(n)". And the set O(n) contains all algorithms whose worst-case running time f(n) satisfies f(n) <= c*n + c_0 for some constants c and c_0 with c, c_0 > 0.
Now we want to prove your claim. First of all, the way you stated the problem, it has a trivial solution. That's because our asymptotic bounds are "worst-case". For many "slow" algorithms there is some input that it runs remarkably quickly. For instance, insertion sort is linear if the input is already sorted! So take insertion sort (O(n)) and merge sort (O(nlog(n))) and notice that the insertion sort will run faster if you pass in a sorted array! Boom, proof done.
But I am assuming that your exam meant something more like "show why a linear algorithm might run faster than a quadratic algorithm in the worst-case." As Alex noted above, this is an open ended question. The crux of the issue is that runtime analysis makes assumptions that certain operations are O(1) (e.g. for some problem you might assume that multiplication is O(1) even though it becomes quadratically slower for large numbers (one might argue that the numbers for a given problem are bounded to be 100-bits so it's still "constant time")). Since your class is probably focusing specifically on computational complexity then they probably want you to gloss over this issue. So we'll prove the claim assuming that our O(1) assumptions are right, and so there aren't details like "caching makes this algorithm way faster than the other one".
So now we have one algorithm which runs in f(n), which is O(n), and some other algorithm which runs in g(n), which is O(n^2). We want to use the definitions above to show that for some n we can have g(n) < f(n). The trick is that our assumptions have not fixed c, c_0, c', c_0'. As Itay mentions, we can choose values for those constants such that g(n) < f(n) for many n. And the rest of the proof is what he wrote above (e.g. let c, c_0 be the constants for f(n) and say they are both 100, while c', c_0' are the constants for g(n) and they are both 1. Then g(n) < f(n) => n^2 + 1 < 100n + 100 => n^2 - 100n - 99 < 0 => (solve to get the actual bounds for n)).
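Solving that quadratic (with these illustrative constants) pins down the crossover:

n^2 - 100n - 99 < 0 \iff n < 50 + \sqrt{2599} \approx 100.98

so with those constants the O(n^2) algorithm beats the O(n) one for every input size n <= 100.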
It depends on the scenario. There are 3 types of scenario: 1. best case, 2. average case, 3. worst case. If you know sorting techniques, the same thing happens there. For more information see the following link:
http://en.wikipedia.org/wiki/Sorting_algorithm
Please correct me if I am wrong.

Analysis of algorithms

Why do we always consider large values of input in the analysis of algorithms, e.g. in Big-O notation?
The point of Big-O notation is precisely to work out how the running time (or space) varies as the size of input increases - in other words, how well it scales.
If you're only interested in small inputs, you shouldn't use Big-O analysis... aside from anything else, there are often approaches which scale really badly but work very well for small inputs.
Because worst-case performance is usually more of a problem than best-case performance. If your worst-case performance is acceptable, your algorithm will run fine.
Analysis of algorithms does not just mean running them on a computer to see which one is faster. Rather, it is being able to look at an algorithm and determine how it would perform: as the number of items (N) changes, what effect does that have on the number of operations needed to execute (time)? This method of classification is referred to as Big-O notation.
Programmers use Big-O to get a rough estimate of "how many seconds" and "how much memory" various algorithms use for "large" inputs
It's because of the definition of Big-O notation. O(f(n)) is a bound on g(n), the run time (or space) on a list of size n: there is some value n0 such that, for all n > n0, g(n) is less than G*f(n), where G is an arbitrary constant.
What that means is that after your input grows past a certain size, the function will not scale beyond some multiple of that bound. So, if f(x) = x (i.e. O(n)) and I double the input size (n2 = 2 * n1), the function I'm computing will take no more than double the amount of time. Now, note that if O(n) is true, so is O(n^2): if my function will never do worse than double, it will never do worse than square either. In practice, the lowest-order function known is usually given.
Big O says nothing about how well an algorithm will scale. "How well" is relative. It is a general way to quantify how an algorithm will scale, but the fitness or lack of fitness for any specific purpose is not part of the notation.
Suppose we want to check whether a number is prime or not. Ram and Shyam came up with the following solutions.
Ram's solution
boolean isPrime(long n) {
    for (long i = 2; i <= n - 1; i++)
        if (n % i == 0)
            return false;
    return true;
}
Now we know that the above algorithm will run n-2 times.
Shyam's solution
boolean isPrime(long n) {
    for (long i = 2; i <= Math.sqrt(n); i++)
        if (n % i == 0)
            return false;
    return true;
}
The above algorithm will run sqrt(n) - 1 times.
Assuming that in both algorithms each iteration takes unit time (1 ms), then:
if n = 101
1st algorithm: time taken is 99 ms, which is less than the blink of an eye
2nd algorithm: around 9 ms, which again is not noticeable
if n = 10000000019
1st algorithm: time taken is about 115 days, which is a third of a year
2nd algorithm: around 1.66 minutes, which is equivalent to sipping a cup of coffee
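A quick check of those figures (1 ms per iteration, as assumed above; the variable names are mine):

long n = 10_000_000_019L;
long ramMs = n - 2;                               // first algorithm: ~10^10 ms
long shyamMs = (long) Math.sqrt((double) n) - 1;  // second algorithm: ~10^5 ms
System.out.println(ramMs / 86_400_000.0 + " days");   // ~115.7 days
System.out.println(shyamMs / 60_000.0 + " minutes");  // ~1.67 minutes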
I think nothing more needs to be said now :D

Big Oh Notation and Calculating the Running Time for a Triple-Nested For-Loop

In Computer Science, it is very important for Computer Scientists to know how to calculate the running times of algorithms in order to optimize code. For you Computer Scientists, I pose a question.
I understand that, in terms of n, a double-nested for-loop typically has a running time of n^2 and a triple-nested for-loop typically has a running time of n^3.
However, for a case where the code looks like this, would the running time be n^4?
x = 0;
for (a = 0; a < n; a++)
    for (b = 0; b < 2 * a; b++)
        for (c = 0; c < b * b; c++)
            x++;
I simplified the running time of each loop to be virtually (n+1) for the first loop, (2n+1) for the second loop, and (2n)^2+1 for the third loop. Assuming the terms are multiplied together, and we extract the highest-order term to find the Big O, would the running time be n^4, or would it still follow the usual n^3?
I would appreciate any input. Thank you very much in advance.
You are correct, n * 2n * 4n^2 = O(n^4).
The triple nested loop only means there will be three numbers to multiply to determine the final Big O - each multiplicand itself is dependent on how much "processing" each loop does though.
In your case the first loop does O(n) operations, the second one O(2n) = O(n), and the inner loop does O(n^2) operations, so overall it's O(n * n * n^2) = O(n^4).
Formally, using sigma notation, you can obtain this (the sum the three loops compute):
\sum_{a=0}^{n-1} \sum_{b=0}^{2a-1} \sum_{c=0}^{b^2-1} 1 = \sum_{a=0}^{n-1} \sum_{b=0}^{2a-1} b^2, which is on the order of n^4.
Could this be a question for Mathematics?
My gut feeling, like BrokenGlass's, is that it is O(n^4).
EDIT: Sum of squares and sum of cubes give a pretty good understanding of what is involved. The answer is a resounding O(n^4): the count is sum(a=0 to n) of (sum(b=0 to 2a) of b^2). The inner sum is proportional to a^3, therefore the outer sum is proportional to n^4.
Pity, I thought you might get away with some log instead of n^4. Never mind.
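For reference, the two standard closed forms that answer leans on:

\sum_{b=0}^{2a} b^2 = \frac{2a(2a+1)(4a+1)}{6} \sim \frac{8a^3}{3}, \qquad \sum_{a=0}^{n} a^3 = \left( \frac{n(n+1)}{2} \right)^2 \sim \frac{n^4}{4}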
