time and space complexity - algorithm

I have a doubt related to time and space complexity in the following 2 cases:
Case I:
Recursion: factorial calculation.
int fact(int n)
{
    if (n == 0)
        return 1;
    else
        return (n * fact(n - 1));
}
Here, how come the time complexity becomes 2*n and the space complexity is proportional to n?
and
Case II:
Iterative:
int fact(int n)
{
    int i, result = 1;
    if (n == 0)
        result = 1;
    else
    {
        for (i = 1; i <= n; i++)
            result *= i;
    }
    return (result);
}
Here the time complexity is proportional to n and the space complexity is constant.
This always remains confusing to me.

If my reasoning is faulty, somebody please correct me :)
For the space complexity, here's my explanation:
For the recursive solution, there will be n recursive calls, so there will be n stack frames used, one for each call. Hence the O(n) space. This is not the case for the iterative solution - there's just one stack frame, and you're not even using an array; there is only one variable. So the space complexity is O(1).
For the time complexity of the iterative solution, you have n multiplications in the for loop, so a loose bound will be O(n). Every other operation can be assumed to be unit time or constant time, with no bearing on the overall efficiency of the algorithm. For the recursive solution, I am not so sure. If there were two recursive calls at each step, you could think of the entire call tree as a balanced binary tree, and the total number of nodes would be 2*n - 1, but in this case I am not so sure.
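If it helps to make the counts concrete, here is a minimal Python sketch (the instrumentation and the names fact_recursive / fact_iterative are made up for illustration, not part of the original question) that counts the multiplications and the recursion depth:

def fact_recursive(n, depth=1, stats=None):
    # each call occupies one stack frame, so the maximum depth grows with n
    stats["max_depth"] = max(stats["max_depth"], depth)
    if n == 0:
        return 1
    stats["multiplications"] += 1
    return n * fact_recursive(n - 1, depth + 1, stats)

def fact_iterative(n):
    result, multiplications = 1, 0
    for i in range(1, n + 1):   # n iterations, one multiplication each
        result *= i
        multiplications += 1
    return result, multiplications

stats = {"max_depth": 0, "multiplications": 0}
print(fact_recursive(10, stats=stats), stats)  # 3628800 {'max_depth': 11, 'multiplications': 10}
print(fact_iterative(10))                      # (3628800, 10), but with a fixed number of variables

The recursive version does about n multiplications plus n calls and n returns (which is where a 2*n-style count comes from) and needs roughly n stack frames; the iterative version does the same n multiplications with a constant number of variables.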

From: https://cs.nyu.edu/~acase/fa14/CS2/module_extras.php
Space Complexity
Below we're going to compare three different calls to both the iterative and recursive factorial functions and see how memory gets used. Keep in mind that every variable we declare has to have space reserved in memory to store its data. So the space complexity of an algorithm, in its simplest form, is the number of variables in use. In this simplest situation we can calculate approximate space complexity using this equation:
space complexity = number of function calls * number of variables per call
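Applying that (approximate) equation to the two factorials above: the recursive version makes about n + 1 calls, each holding one variable (its parameter n), so roughly n + 1 memory cells, i.e. space proportional to n; the iterative version makes a single call holding three variables (n, i, result), so about 3 cells no matter how large n gets, i.e. constant space.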

Time Complexity: The number of (machine) instructions which a program executes during its running time is called its time complexity in computer science.
Space Complexity: This is essentially the number of memory cells which an algorithm needs.
Case 1: The program calculates the factorial recursively, so there are the direct calls going down to the base case and then the backtracking (the returns) coming back up; that is why the time complexity becomes 2*n.
Talking about the space complexity, there will be n stack frames allocated during the execution of the program, so it is n.
Case 2: This case is pretty simple: here you have n iterations inside the for loop, so the time complexity is n.
This is not the case for the iterative solution - there's just one stack frame, and you're not even using an array; there is only one variable. So the space complexity is O(1).

Related

Describing space complexity of algorithms

I am currently a CS major, and I am practising different algorithmic questions. I make sure I try to analyse time and space complexity for every question.
However, I have a doubt:
If there are two steps (steps which call recursive functions for varying size of OP) in the algorithm, i.e.
int a = findAns(arr1);
int b = findAns(arr2);
return max(a, b);
Would the worst-case time complexity of this be O(N1) + O(N2), or simply O(max(N1, N2))? I ask because at any one time we would be calling the function with only a single input array.
While calculating the worst-case space complexity, if it comes out to be O(N) + O(logN), then since N > logN, would we discard the O(logN) term? Or, since logN also depends on N, would we still say the worst-case space complexity is O(N) only?

Determine running time for multiple calls of a function

Assume I have a function f(K) that runs in amortised logarithmic time in K, but linear worst case time. What is the running time of:
for i in range(N): f(N) (Choose the smallest correct estimate)
A. Linearithmic in N
B. Amortised linear in N
C. Quadratic in N
D. Impossible to say from the information given
Let's say f(N) just prints "Hello World", so it doesn't depend on how big the parameter is. Can we say the total running time is amortised linear in N?
This kinda looks like a test question, so instead of just saying the answer, allow me to explain what each of these algorithmic complexity concepts mean.
Let's start with the claim that a function f(n) runs in constant time. I am aware it's not even mentioned in the question, but it's really the basis for understanding all other algorithmic complexities. If some function runs in constant time, it means that its runtime does not depend on its input parameters. Note that it could be as simple as print Hello World or return n, or as complex as finding the first 1,000,000,000 prime numbers (which obviously takes a while, but takes the same amount of time on every invocation). However, this last example is more of an abuse of the mathematical notation; usually constant-time functions are fast.
Now, what does it mean if a function f(n) runs in amortized constant time? It means that if you call the function once, there is no guarantee on how fast it will finish; but if you call it n times, the sum of time spent will be O(n) (and thus, each invocation on average took O(1)). Here is a lengthier explanation from another StackOverflow answer. I can't think of any extremely simple functions that run in amortized constant time (but not constant time), but here is one non-trivial example:
called = 0
next_heavy = 1

def f(n):
    # n is not used here; the cost depends only on how many times f has been called
    global called, next_heavy
    called += 1
    if called == next_heavy:
        for i in range(called):
            print(i)
        next_heavy *= 2
On the 512-th call, this function would print 512 numbers; however, before that it only printed a total of 511, so its total number of prints is 2*n - 1, which is O(n). (Why 511? Because the sum of the powers of two from 1 up to 2^k equals 2^(k+1) - 1.)
Note that every constant time function is also an amortized constant time function, because it takes O(n) time for n calls. So non-amortized complexity is a bit stricter than amortized complexity.
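If you want to convince yourself of the bound, here is a small sketch (the helper check_amortized below is hypothetical; it just counts the work instead of printing) that simulates n calls to a function like the one above and totals the work done by the "heavy" calls:

def check_amortized(n):
    work, calls, next_heavy = 0, 0, 1
    for _ in range(n):
        calls += 1
        if calls == next_heavy:
            work += calls      # the heavy call does 'calls' units of work
            next_heavy *= 2
    return work                # always at most 2*n - 1

for n in (10, 100, 1000):
    print(n, check_amortized(n))   # 10 -> 15, 100 -> 127, 1000 -> 1023, all within 2*n - 1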
Now your question mentions a function with amortized logarithmic time, which similarly to above means that after n calls to this function, the total runtime is O(nlogn) (and average runtime per one call is O(logn)). And then, per question, this function is called in a loop from 1 to N, and we just said that by definition those N calls together would run in O(NlogN). This is linearithmic.
As for the second part of your question, can you deduce what's the total running time of the loop based on our previous observations?

Why does the following code only have space complexity O(N)? [duplicate]

This is taken straight from Cracking the Coding Interview by Gayle Laakmann McDowell.
She lists the following code:
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n-1) + f(n-1);
}
She has an articulate explanation about why the time complexity is O(2^n), but why is the space complexity here only O(N)?
The space complexity is taking in account the call stack usage. The function will call itself O(N) times before returning, so the call stack will be O(N) deep.
Update (thanks #MatthewWetmore):
To clarify, the two recursive calls in the f(n-1) + f(n-1); expression are not executed in parallel. First one call is completed consuming the linear stack usage, and then the second one, consuming the same stack size. So no doubling of the space is happening, which is different from the running-time consumption (which is obviously accumulated each call).
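A quick way to see this is to instrument the function and record how deep the recursion ever gets. Here is a rough Python sketch (the counters are added purely for illustration):

calls = 0
max_depth = 0

def f(n, depth=1):
    global calls, max_depth
    calls += 1
    max_depth = max(max_depth, depth)
    if n <= 1:
        return 1
    return f(n - 1, depth + 1) + f(n - 1, depth + 1)

f(10)
print(calls)      # 1023 calls in total, i.e. roughly 2^n work
print(max_depth)  # but never more than 10 frames on the stack at once, i.e. O(N) space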
It uses that much space on the system stack for the function calls.
For example, F(5) will call F(4), which will call F(3), and so on:
F(5) -> F(4) -> F(3) -> F(2) -> F(1) -> the return values then close the functions in reverse sequence.
So the space for this calculation is allocated on the stack: F(5) needs 5 stack frames, so for F(n) the space will be O(N).
The O notation is an asymptotic notation. This means the notation states that the time to complete the algorithm will never surpass a certain limit; the time can go below that limit, but that is not specified. The running time is bounded.
The time complexity of an algorithm depends on the size of the input. This can be {1}, an input of one number, or {1,2}, an input double the size of the previous input. That is what the n in O(n) means. It means the algorithm has linear complexity: the time to complete the algorithm is n * some constant, or C * n.
You may ask yourself how we determine C. Well, that is why we use asymptotic notation: we can omit the exact value of C. This is useful because an algorithm can have a C that varies depending on the order of the input or some other condition. You might see best-case and worst-case scenarios where the best complexity is Cn and the worst is Cnlogn; in this case we say the algorithm has O(nlogn) complexity, because we take the worst-case scenario as the boundary. We still omit the C, and we omit the best-case Cn. You might consider reading an algorithms book for more information, like The Art of Computer Programming, Volume 1, or Algorithms by Robert Sedgewick. The book by Sedgewick uses Java and is very accessible.
Space complexity works the same way, but this time we are talking about memory space. There is a technique called dynamic programming that stores previous results so you can use them later as solutions to subproblems. That way we skip the time it takes to recompute the same subproblem many times; however, the amount of memory needed to solve the problem increases. The classic example is the change-making problem: you store many solutions, and the space they take in system memory increases. The problem has linear space complexity because the amount of memory needed to solve it behaves as a function Cn.
As an example, this function uses n, which can be said to be one space in memory, and the constant 1 is another, so we can say the space in memory would be 1 + n. In this case we say the linear function has a displacement a: the linear function is Cn + a, where C = 1 and a = 1. Still a linear function. The n comes from the recursion, as we will need n spaces in memory to store the n numbers as the recursion goes all the way down to 1. For example, let's say n = 3.
The first time, we have 3 in memory; as the recursion goes down we store 2 in memory and, last, 1. We could say 1 is already in memory or not, since the space where the 1 resides might be on the stack or on the heap. That is why we use O notation: this depends on the architecture and implementation. So while sometimes the complexity can be 1 + n, it can also be n, but by saying O(n) we cover both cases.
Also if you have doubts about the line:
f(n-1) + f(n-1);
Depending on the architecture you may have a stack, or you could send the values to main memory (no example of this architecture comes to mind) and have a garbage collector clean the references as they go out of scope. It will still be linear space, as you store 3 * 2, 2 * 2, 1 * 2; in this case it will be 2n + 1 or 2n. Still linear. But I doubt you will see anything like this; most likely you will have a stack, and as soon as the recursion returns it will start popping all the variables from the stack, using at most 1 + n or n + 1.

Determining time complexity of an algorithm

Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i=0, i<n, i++)
        for (j=i+1, j<n, j++)
            if (A[i]+A[j]=k)
                return true
    return false
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times. Thus, you should not use (outer iterations)*(inner iterations) to compute the complexity.
The important thing to observe is, that the inner loop's runtime changes with each iteration of the outer loop.
It is correct that the first time the loop runs, it will do n-1 iterations. But after that, the number of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss' trick (second formula) to sum this series to get n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound cannot get any tighter than O(n²). As you can see, there is no need for guessing.
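As a quick sanity check, take n = 5: the inner comparison runs 4 + 3 + 2 + 1 = 10 times, and n(n-1)/2 = 5*4/2 = 10, so the formula agrees with counting by hand.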
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step. This implies the algorithm's best case is O(1).
Yes. In the worst case, your algorithm is O(n²).
Your algorithm is O(n²) because every instance of the input needs time complexity O(n²).
Your algorithm is Ω(1) because there exists one instance of the input that only needs time complexity Ω(1).
The following appears in Chapter 3, Growth of Functions, of Introduction to Algorithms, co-authored by Cormen, Leiserson, Rivest, and Stein.
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements is equal to k, this algorithm would take only one addition and one comparison before returning true.
Therefore, this input costs constant time and makes the running time of this algorithm Ω(1).
No matter what the input is, this algorithm would take at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n²).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n²).
We could also say that the worst-case running time of this algorithm is Θ(n²).
You are right but let me explain a bit:
This is because the first for loop runs n times, and the for loop within this loop also runs n times.
Actually, the second loop will run for (n-i-1) times, but in terms of complexity it'll be taken as n only. (updated based on phant0m's comment)
So, in the worst-case scenario, it'll run for n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it runs for 1 * 1 * 1 * 1 times, which is O(1), i.e. constant.
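If you want to verify the count empirically, here is a rough Python translation of the pseudocode above (the comparison counter is added only for illustration, and array_sum is just an assumed name):

def array_sum(a, k):
    comparisons = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):      # the inner loop shrinks as i grows
            comparisons += 1
            if a[i] + a[j] == k:
                return True, comparisons
    return False, comparisons

print(array_sum([1, 2, 3, 4, 5], 100))  # (False, 10): worst case, 5*4/2 = 10 comparisons
print(array_sum([1, 2, 3, 4, 5], 3))    # (True, 1): best case, a single comparison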

Where to find what O(n^2) and O(n) etc means?

I've been noticing answers on stack overflow that use terms like these, but I don't know what they mean. What are they called and is there a good resource that can explain them in simple terms?
That notation is called Big O notation, and it is used as a shorthand to express algorithmic complexity (basically how long a given algorithm will take to run as the input size (n) grows).
Generally speaking, you'll run into the following major types of algorithms:
O(1) - Constant - The length of time that this algorithm takes to complete is not dependent on the number of items that the algorithm has to process.
O(log n) - Logarithmic - The length of time that this algorithm takes to complete is dependent on the number of items that the algorithm has to process, but as the input size becomes larger, less additional time is needed for each new input.
O(n) - Linear - The length of time that this algorithm takes to complete is directly dependent on the number of items that the algorithm has to process. As the input size grows, the time it takes grows in equal amounts.
O(n^2) - Quadratic - As the input size grows, the time it takes to process the input grows by a larger and larger amount, meaning that large input sizes become prohibitively difficult to solve.
O(2^n) - Exponential - The most complicated types of problems. Time to process increases based on input size to an extreme degree.
Generally you can get a rough gauge of the complexity of an algorithm by looking at how it's structured. For example, looking at the following method:
int sum(int[] x) {
    int sum = 0;
    for (int i = 0; i < x.length; i++) {
        sum += x[i];
    }
    return sum;
}
There are a few things that have to be done here:
Initialize a variable called sum
Initialize a variable called i
For each iteration of i: Add x[i] to sum, add 1 to i, check if i is less than x.length
Return sum
There are a few operations that run in constant time here (the first two and the last), since the size of x wouldn't affect how long they take to run. As well, there are some operations that run in linear time (since they are run once for each entry in x). With Big O notation, the algorithm is simplified to its most complex part, so this sum algorithm would run in O(n).
Read about Computational Complexity first, then try some books about algorithms like Introduction to Algorithms.
From Wikipedia page:
Big O notation characterizes functions according to their growth rates
If you don't want to drill into details, you can very often approximate algorithm complexity by analyzing its code:
void simpleFunction(arg); // O(1) - if the number of instructions in the function is constant and doesn't depend on the input size
for (int i=0;i<n;i++) {simpleFunction(element[i]);} // O(n)
for (int i=0;i<n;i++) { // this one runs O(n^2)
for (int j=0;j<n;j++) {
simpleFunction(element[i]);
}
}
for (int i=1;i<n;i*=2) { // O(lg n) - note i must start at 1, not 0, or i*=2 would never advance the loop
simpleFunction(element[i]);
}
Sometimes it is not so simple to estimate the Big O complexity of a function/algorithm; in such cases amortized analysis is used. The code above should serve only as a quickstart.
This is called the Big O notation and is used to quantify the complexity of algorithms.
O(1) means the algorithm takes a constant time no matter how much data there is to process.
O(n) means the algorithm speed grows in a linear way with the amount of data.
and so on...
So the lower the power of n in the O notation, the better your algorithm is at solving the problem. The best case is O(1) (the power of n is 0, i.e. constant time). But many problems have an inherent complexity, so you won't find such an ideal algorithm in most cases.
The answers are good so far. The main term to web search on is "Big O notation".
The basic idea behind the math of "someformula is O(someterm)" is that, as your variable goes to infinity, "someterm" is the part of the formula that dominates.
For example, assume you have 0.05*x^3 + 300*x^2 + 200000000*x + 10. For very low sizes of x (x==1 or x==2), that 200000000*x will be by far the biggest part. At that point a plot of the formula will look linear. As you go along, at some point the 300*x^2 part will be bigger. However, if you keep making x even bigger, as big as you care to, the 0.05*x^3 part will be the largest, and will eventually totally outstrip the other parts of the formula. That is where it becomes clear from a graph that you are looking at a cubed function. So we would say that the formula is O(x^3).
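To see that numerically, here is a tiny Python sketch (illustrative only) that evaluates each term of that formula at a few values of x:

def terms(x):
    return 0.05 * x**3, 300 * x**2, 200000000 * x

for x in (100, 100000, 10000000):
    print(x, terms(x))
# At x = 100 the 200000000*x term is by far the largest; by x = 100000 the 0.05*x^3
# term has overtaken it, and at x = 10000000 it dwarfs the other terms completely.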
