I am given this algorithm (pseudo code, not in any specific language):
foo1(l, m, n) {
    for (i = 1; i <= l; i++) {
        for (j = 1; j <= m; j++) {
            for (k = 1; k <= n; k++) {
                // Constant-time inner loop body
            }
        }
    }
}
I am trying to find the number of times the inner loop runs with respect to l, m, and n, and to come up with a function for it. I am also trying to figure out the big-O notation for the algorithm.
Looking at the algorithm, I was thinking that the inner loop would run l*m*n times. I came up with this because, for example, if l, m, and n were 3, 6, and 9 respectively, then the inner loop would run 3*6*9 times. So the function that returns the number of times the inner loop runs would be something like:
f = l*m*n
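To sanity-check this, I can count the iterations directly (a minimal Java sketch, assuming inclusive loop bounds as in my example):

public class IterationCounter {
    // Count how many times the constant-time inner body executes.
    static long countIterations(int l, int m, int n) {
        long count = 0;
        for (int i = 1; i <= l; i++)
            for (int j = 1; j <= m; j++)
                for (int k = 1; k <= n; k++)
                    count++; // stands in for the constant-time body
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(3, 6, 9)); // prints 162 = 3*6*9
    }
}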
Now, the big-O is where I am struggling (not necessarily with this problem specifically), but I wanted to get some further insight into how to tackle big-O problems and determine the right big-O for an algorithm.
For this specific case, I was thinking that the big-O would be n^3, but that is really just a guess. How do I go about figuring out what the big-O actually is for this problem, and more generally for other algorithms I may encounter?
You are on the right track to understanding big-O. The pseudocode above indeed has complexity O(l*m*n). Since you are probably looking for references, I suggest you have a look at the following excellent post on Stack Overflow itself:
Plain English explanation of Big-O
In my opinion, this is one of the best guides to get you started with Big-O concepts.
If you want to delve further into the details, follow this MIT lecture. It will give you a thorough tour of Big-O concepts. I think these two references will clear up a lot of your concepts and will definitely help you build a solid understanding.
Happy Learning!
Related
Hello StackOverflow community!
I have had this question in my mind for many days and have finally decided to get it sorted out. Given an algorithm, or say a function which implements some non-standard algorithm in your daily coding activity, how do you go about analyzing its run time complexity?
OK, let me be more specific. Suppose you are solving this problem:
Given an NxN matrix consisting of positive integers, find the longest increasing sequence in it. You may only traverse in the up, down, left, or right directions, but not diagonally.
E.g., if the matrix is
[ [9,9,4],
  [6,6,8],
  [2,1,1] ]
the algorithm must return 4
(the sequence being 1 -> 2 -> 6 -> 9).
So yeah, it looks like I have to use DFS. I get this part. I did my Algorithms course back at uni and can work my way around such questions. So I come up with this solution, say:
class Solution
{
    // Length of the longest increasing path that starts at cell (i, j).
    public int longestIncreasingPathStarting(int[][] matrix, int i, int j)
    {
        int localMax = 1;
        int[][] offsets = {{0,1}, {0,-1}, {1,0}, {-1,0}};
        for (int[] offset : offsets)
        {
            int x = i + offset[0];
            int y = j + offset[1];
            // Skip neighbours that are out of bounds or not strictly larger.
            if (x < 0 || x >= matrix.length || y < 0 || y >= matrix[i].length || matrix[x][y] <= matrix[i][j])
                continue;
            localMax = Math.max(localMax, 1 + longestIncreasingPathStarting(matrix, x, y));
        }
        return localMax;
    }

    // Try every cell as a starting point and keep the best result.
    public int longestIncreasingPath(int[][] matrix)
    {
        if (matrix.length == 0)
            return 0;
        int maxLen = 0;
        for (int i = 0; i < matrix.length; ++i)
        {
            for (int j = 0; j < matrix[i].length; ++j)
            {
                maxLen = Math.max(maxLen, longestIncreasingPathStarting(matrix, i, j));
            }
        }
        return maxLen;
    }
}
Inefficient, I know, but I wrote it this way on purpose! Anyway, my question is: how do you go about analyzing the run time of the longestIncreasingPath(matrix) function?
I can understand the analysis they teach us in an Algorithms course, you know, the standard MergeSort and QuickSort analysis and so on, but unfortunately, and I hate to say this, that did not prepare me to apply it in my day-to-day coding job. I want to do it now, and hence would like to start by analyzing such functions.
Can someone help me out here and describe the steps one would take to analyze the runtime of the above function? That would greatly help me. Thanks in advance. Cheers!
For day-to-day work, eyeballing things usually works well.
In this case you will try to go in every direction recursively. So a really bad example comes to mind, like [[1,2,3], [2,3,4], [3,4,5]], where you have two options from most cells. I happen to know that this will take O((2n)! / (n!*n!)) steps, but another good guess would be O(2^N). Now that you have an example where you know, or can more easily compute, the complexity, the overall complexity has to be at least that.
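To see roughly where that count comes from: in such a grid the recursion ends up tracing every monotone right/down path between opposite corners, and for an n x n grid there are

$$\binom{2n-2}{n-1} = \frac{(2n-2)!}{(n-1)!\,(n-1)!}$$

such paths, which matches the (2n)!/(n!*n!) figure up to polynomial factors.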
Usually it doesn't really matter which one it is exactly, since for both O(N!) and O(2^N) the run time grows very fast, and the algorithm will only finish quickly for N up to around 10-20, maybe a bit more if you are willing to wait. You would not run this algorithm for N ~ 1000; you would need something polynomial. So a rough estimate that you have an exponential solution is enough to make a decision.
So, in general, to get an idea of the complexity, try to relate your solution to other algorithms where you already know the complexity, or figure out a worst-case scenario for the algorithm where the complexity is easier to judge. Even if you are slightly off, it might still help you make a decision.
If you need to compare algorithms of more similar complexity (e.g. O(N log N) vs. O(N^2) for N ~ 100), you should implement both and benchmark, since the constant factor might be the leading contributor to the run time.
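For example, a minimal benchmarking sketch in Java (solveNLogN and solveQuadratic are hypothetical stand-ins for your two candidate implementations):

public class LoopBenchmark {
    static long sink; // prevents the JIT from optimizing the work away

    // Hypothetical candidates; substitute your real implementations.
    static void solveNLogN(int[] a) {
        int[] copy = a.clone();
        java.util.Arrays.sort(copy); // O(N log N) stand-in
        sink += copy[0];
    }

    static void solveQuadratic(int[] a) {
        for (int x : a)
            for (int y : a)
                sink += x + y; // O(N^2) stand-in
    }

    public static void main(String[] args) {
        int[] input = new java.util.Random(42).ints(100).toArray();
        // Crude one-shot timing; for serious comparisons use a harness
        // such as JMH, since JIT warm-up distorts single measurements.
        long t0 = System.nanoTime();
        solveNLogN(input);
        long t1 = System.nanoTime();
        solveQuadratic(input);
        long t2 = System.nanoTime();
        System.out.printf("O(N log N): %d us, O(N^2): %d us%n",
                (t1 - t0) / 1_000, (t2 - t1) / 1_000);
    }
}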
This question already has answers here:
Big O, how do you calculate/approximate it?
This is likely ground that has been covered, but I have yet to find an explanation that I am able to understand. It is likely that I will soon feel embarrassed.
For instance, I am trying to find the order of magnitude, in Big-O notation, of the following:
count = 0;
for (i = 1; i <= N; i++)
    count++;
Where do I begin to find what defines the magnitude? I'm relatively bad at mathematics and, even though I've tried a few resources, I have yet to find something that explains how a piece of code is translated into an algebraic equation. Frankly, I can't even hazard a guess as to the Big-O efficiency of this loop.
These notations (big O, big omega, theta) simply describe how "difficult" (or complex) an algorithm becomes, asymptotically, as the input gets bigger and bigger.
For big O, given two functions f(x) and g(x) where f(x) = O(g(x)), you can find a constant c and a point x0 from which c*g(x) is always at least as big as f(x). That is why the definition says "asymptotically": the two functions may behave arbitrarily at the beginning (for example, f(x) > g(x) for the first few x), but from a single point onward, c*g(x) stays on top (c*g(x) >= f(x)). So you are interested in the behavior in the long run, not just for small inputs. Big-O notation is sometimes called an upper bound, because it describes the worst possible scenario: the algorithm will never be asymptotically more difficult than that function.
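Written out formally, that is:

$$f(x) = O(g(x)) \iff \exists\, c > 0,\ \exists\, x_0 \ \text{such that}\ \forall x > x_0:\ f(x) \le c \cdot g(x)$$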
That is the "mathematical" part. When it comes to practice you usually ask: How many times the algorithm will have to process something? How many operations will be done?
For your simple loop, it is easy because as your N will grow, the complexity of algorithm will grow linearly (as simple linear function), so the complexity is O(N). For N=10 you will have to do 10 operations, for N=100 => 100 operations, for N=1000 => 1000 operations... So the growth is truly linear.
I'll provide a few more examples:
for (int i = 0; i < N; i++) {
    if (i == randomNumber()) {
        // do something...
    }
}
Here it seems the complexity will be lower, because I added a condition to the loop, so there is a chance the number of "do something" operations will be lower. But we don't know how many times the condition will pass; it may pass every time. So using big-O (the worst case), we again have to say that the complexity is O(N).
Another example:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        // do something
    }
}
Here, as N gets bigger and bigger, the number of operations grows more rapidly. N=10 means you will have to do 10x10 operations, N=100 => 100x100 operations, N=1000 => 1000x1000 operations. You can see the growth is no longer linear; it is N x N, so we have O(N*N), i.e. O(N^2).
For the last example I will use the idea of a full binary tree. I hope you know what a binary tree is. If you have a reference to the root and you want to traverse the tree down to the left-most leaf (from top to bottom), how many operations will you have to do if the tree has N nodes? The algorithm would be something like:
Node actual = root;
while (actual.left != null) {
    actual = actual.left;
}
// actual now holds the left-most leaf
How many operations (how many loop iterations) will you have to do? Well, that depends on the depth of the tree, right? And how is the depth of a full binary tree defined? It is roughly log(N), with logarithm base 2. So here the complexity is O(log(N)). Generally we don't care about the base of the logarithm; what we care about is the shape of the function (linear, quadratic, logarithmic, ...).
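To see why: a full binary tree of depth d has

$$N = 2^{d+1} - 1 \quad\Longrightarrow\quad d = \log_2(N+1) - 1 = O(\log N)$$

nodes, so the number of steps down to a leaf grows with log(N).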
Your example is of the order
O(N)
where N = the number of elements, and a comparable computation is performed on each, thus:
for (int i = 0; i < N; i++) {
    // some process performed N times
}
The big-O notation is probably easier than you think; in everyday code you will find examples of O(N) in loops, list iterations, searches, and any other process that does work once per element of a set. It is the abstraction that is unfamiliar at first: O(N) means "some unit of work, repeated N times". That "something" can be an incrementing counter, as in your example, or a lengthy, resource-intensive computation. Most of the time in algorithm design the big-O, or complexity, matters more than the unit of work, and this becomes especially relevant as N grows large. The description "limiting" or "asymptotic" is mathematically significant: it means that an algorithm of lesser complexity will always beat one of greater complexity, no matter how heavy the unit of work, given that N is large enough ("as N grows").
Another example, to understand the general idea:
for (int i = 0; i < N; i++) {
    for (int j = 0; j < N; j++) {
        // process here N x N times
    }
}
Here the complexity is
O(N^2)
For example, if N=10, the second "algorithm" will take 10 times longer than the first, because 10x10 = 100 (ten times larger). If you consider what happens when N equals, say, a million or a billion, you should be able to work out that it will again take that much longer. So if you can find a way to do something in O(N) that a supercomputer does in O(N^2), you should be able to beat it with your old 386, a pocket watch, or some other old tool.
for (int i = 0; i < n; ++i)
{
    for (int x = i; x < n; ++x)
    {
        // work...
    }
}
What is the big-O notation for this type of algorithm? Also, please explain how you came up with the solution.
Also, sorry for the vague title, but I didn't know the name of this type of algorithm.
Here is what I tried. If n is:
1, there will be 1 execution of the work.
2, there will be 3 executions of the work.
3, there will be 6 executions of the work.
4, there will be 10 executions of the work.
5, there will be 15 executions of the work.
People in the comments say it is n^2, but the numbers I'm getting don't match that: 5^2 is 25, not 15.
Big-O notation is derived by calculating the time complexity; you must take into account the amount of work your algorithm is doing.
Please see below my answer, in which I derive the big O. It is written using LaTeX, which is a nice tool for writing equations.
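A sketch of the derivation, with c standing for the constant amount of work done inside the inner loop:

$$\begin{aligned}
T(n) &= \sum_{i=0}^{n-1} \sum_{x=i}^{n-1} c &\quad (1)\\
     &= \sum_{i=0}^{n-1} c\,(n - i) &\quad (2)\\
     &= c\,n^2 - c \sum_{i=0}^{n-1} i \;=\; c\,n^2 - c\,\frac{n(n-1)}{2} &\quad (3)\\
     &= \frac{c}{2}\,n^2 + \frac{c}{2}\,n \;=\; O(n^2) &\quad (4)
\end{aligned}$$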
Notes
The giant E-like symbol is called a Sigma (Σ). It is a mathematical symbol used when writing up algorithms to denote a loop: think of the bottom term as your i = 0 and the top term as your i < n.
The (n-i) term represents the work of the inner loop. To calculate it, we break the equation into two separate Sigmas, as the sum over i takes a little more work to derive.
Notice how the inner loop does not run n times but n-i times. Also, on line (3), to work out the sum over i we use the summation laws (Law 6, maybe?).
To get n^2, we eliminate constants from the equation as well as terms that do not dominate the growth of the function.
I was trying to find the time complexity of a for loop. Below are the loop details.
for (int i = N; i > 1; i = i/2)
{
    for (int k = 0; k < i; k++) {
        sum++;
    }
}
Below is my attempt at the problem. Please correct me if I am going wrong.
The inner loop will execute N + N/2 + N/4 + N/8 + ... times.
These terms form a GP with t_n = a*r^(n-1). Substituting t_n = 1, a = N and r = 1/2:
1 = N*(1/2)^(n-1)
therefore
1/(2N) = (1/2)^n
So the total inner-loop count is the sum of this GP: S_n = a*(1 - r^n)/(1 - r).
Substituting a = N, r = 1/2 and (1/2)^n = 1/(2N), we get
S_n = N*(1 - 1/(2N))/(1 - 1/2)
therefore S_n = 2N - 1
I am not sure if the complexity is O(N).
Please help.
Thanks.
Below is the formal way (Sigma notation) to infer the order of growth of your algorithm (corroborated by experiments using the MinGW 2.95 C compiler).
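In outline: the outer loop halves i each round, so it executes about log2(N) times, and the inner loop performs i iterations per round, giving

$$T(N) = \sum_{k=0}^{\lfloor\log_2 N\rfloor - 1} \frac{N}{2^k} \;\le\; N \sum_{k=0}^{\infty} \frac{1}{2^k} \;=\; 2N \;=\; O(N)$$

which agrees with the S_n = 2N - 1 computed in the question.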
What is the running time of the following algorithm in big-O?
for (int i = 1; i <= n; i++) {
    for (int j = i; j <= n; j++) {
        for (int k = j; k <= n; k++) {
            for (int l = k; l <= n; l++) {
                ...
            }
        }
    }
}
This algorithm seems to me to be n^4, from a theoretical perspective of course (without any compiler considerations).
N^4. The fractional constant factor doesn't count.
A formula for such a collection of dependent loops can be inferred as follows.
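Counting the non-decreasing index tuples 1 <= i_1 <= i_2 <= ... <= i_r <= n (a standard stars-and-bars count, with one tuple per execution of the innermost body) gives:

$$T(n) \;=\; c \cdot \binom{n + r - 1}{r} \;=\; \frac{c}{r!}\;n(n+1)\cdots(n+r-1) \;=\; O(n^r)$$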
Where c (constant) is the number of operations inside the innermost loop, n is the number of elements, and r is the number of nested loops.
In the question's case:
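With r = 4 this becomes:

$$T(n) \;=\; c \cdot \binom{n+3}{4} \;=\; \frac{c\,n(n+1)(n+2)(n+3)}{24} \;=\; O(n^4)$$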
Another methodology, exact but more tedious, is to use Sigma notation:
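Writing one Sigma per loop and collapsing the sums from the inside out (they telescope into the same binomial coefficient):

$$\sum_{i=1}^{n} \sum_{j=i}^{n} \sum_{k=j}^{n} \sum_{l=k}^{n} c \;=\; c \cdot \binom{n+3}{4} \;=\; O(n^4)$$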
O(N^4) is the cost.
Each nested for loop contributes a factor of N,
so essentially N * N * N * N = N^4.
CS610, Algorithm Development, NJIT.
My graduate coursework is actually coming in handy.
My answer is O(N^4), because there are four for loops, and it is easy to judge the running time of this algorithm. Thanks!