Would variables declared inside of a for loop that loops N times make the space complexity O(N) even though those variables fall out of scope each time the loop repeats?
for (var i = 0; i < N; i++) {
    var num = i + 5;
}
No. The variables go out of scope at the end of every iteration, so they are destroyed.
As a result the space complexity remains constant, i.e. O(1).
One (fixed-size) variable that you change n times (which could include deallocating and reallocating it) is still just one variable, thus O(1) space.
But this may possibly be somewhat language-dependent - if some language (or compiler) decides to keep all of those earlier declarations of the variable in memory, that's going to be O(n), not O(1).
Consider, for example, two ways of doing this in C++:
for (int i = 0; i < N; i++) {
    int num = i + 5;
}

for (int i = 0; i < N; i++) {
    int* num = new int(i + 5);
}
In the former case, the variable can be reused and it will be O(1).
When you use new, that memory is not automatically freed, so every iteration in the latter case allocates new memory instead of reusing the old (technically the pointer gets reused, but what it pointed to remains allocated). That version therefore uses O(n) space. Doing this is a terrible idea and creates a memory leak, but it is certainly possible.
(I'm not sure exactly what the C++ standard requires compilers to do in each case; this is mostly meant to show that this type of in-loop declaration is not necessarily always O(1).)
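For completeness, here is a minimal sketch (my addition, not part of the original answer) showing that freeing the allocation on every iteration brings the latter case back down to O(1) space:

for (int i = 0; i < N; i++) {
    int* num = new int(i + 5);  // allocate on the heap
    // ... use *num ...
    delete num;                 // free it before the next iteration, so space stays O(1)
}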
No, it remains O(1) as explained below:
for (var i = 0; i < N; i++) {
    var num = i + 5; // allocate space for var `num`
} // release space acquired by `num`
So I've just started learning algorithms and data structures, and I've read about Big O and how it describes the complexity of algorithms based on how the number of operations required scales.
But what actually counts as an operation? In this bubble sort, does each iteration of the for loop count as an operation, or only when an if statement is triggered, or all of them?
And since there are so many different algorithms of all kinds, how do you immediately identify what would count as an "operation" happening in the algorithm's code?
function bubbleSort(array) {
    for (let i = 0; i < array.length; i++) {
        for (let j = 0; j < array.length; j++) {
            if (array[j + 1] < array[j]) {
                let tmp = array[j]
                array[j] = array[j + 1]
                array[j + 1] = tmp
            }
        }
    }
    return array
}
You can count anything as an operation that will execute within a constant amount of time, independent of input. In other words, operations that have a constant time complexity.
If we assume your input consists of fixed-size integers (like 32-bit or 64-bit), then all of the following can be considered such elementary operations:
i++
j < array.length
array[j + 1] < array[j]
let tmp = array[j]
...
But that also means you can take several of such operations together and still consider them an elementary operation. So this is also an elementary operation:
if (array[j + 1] < array[j]) {
    let tmp = array[j]
    array[j] = array[j + 1]
    array[j + 1] = tmp
}
So, don't concentrate on breaking down operations into smaller operations, and those again into even smaller operations, when you are already certain that the larger operation is O(1).
Usually it doesn't matter much what exactly you count as a single operation. This is one of the reasons we don't actually count the exact number of operations, but instead use asymptotic notations (big O and big Theta).
However, sometimes you are interested in only one kind of operation. A common example is algorithms that use IO. Since IO is significantly more time-consuming than anything happening on the CPU, you often just "count" the number of IO operations instead. In these cases, you often care about the exact number of times an IO occurs, and can't use only asymptotic notations.
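To make the "count only one kind of operation" idea concrete, here is a sketch of my own (in C++ rather than the asker's JavaScript) of the bubble sort instrumented to count comparisons only, ignoring swaps and loop bookkeeping:

#include <cstdio>
#include <vector>
using namespace std;

// Count only one kind of operation: element comparisons.
long bubbleSortCountingComparisons(vector<int>& a) {
    long comparisons = 0;
    for (size_t i = 0; i < a.size(); i++) {
        for (size_t j = 0; j + 1 < a.size(); j++) {
            comparisons++;                  // the operation we chose to count
            if (a[j + 1] < a[j]) {
                int tmp = a[j];             // the swap is *not* counted here
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
    return comparisons;
}

int main() {
    vector<int> v = {5, 1, 4, 2, 8};
    printf("comparisons: %ld\n", bubbleSortCountingComparisons(v)); // 5 * 4 = 20
}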
When you have nested for-loops where the number of iterations of the inner one changes each time, what is the easiest approach to analyzing the total runtime? It's hard for me to conceptualize how to factor in the changing bound, since I've only ever analyzed nested loops where the max was N, which led to a pretty simple O(n^2) runtime. Should I set up a summation and use that?
For example:
int val = 1;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < val; j++) {
        val++;
    }
}
My intuition tells me this is 2^n, but I have no practical way of really proving that.
In general, to find the time complexity of loops, you need to find how many times they execute, as a function of the input. Sometimes it is straightforward, sometimes it is not. You may end up with complex mathematical expressions, and in some cases you may not be able to decide at all.
As for your example: your outer loop, on its own, would run exactly n times. Your inner loop, however, checks its loop condition j < val, which is true the first time because j = 0 and val = 1. Then each iteration increments both j and val by 1, so at every test val = j + 1, and j < val always holds. The inner loop is therefore infinite, and your program thus runs in O(∞).
(As a side note: in practice, depending on the implementation language, val may eventually overflow and become smaller than j, which will cause the loop to finish. In that case, the running time depends only on the integer size you are using.)
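To see the invariant concretely, here is a small C++ trace of my own (capped so the sketch actually terminates) of the inner loop's test: both j and val advance by one per iteration, so val == j + 1 at every test and the condition never fails:

#include <cstdio>

int main() {
    int val = 1;
    for (int j = 0; j < val; j++) {
        printf("test: j=%d, val=%d -> j < val holds\n", j, val);
        val++;
        if (j >= 5) break; // cap added so this sketch terminates
    }
    return 0;
}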
I found pretty good replies to the same question! However, I want the time-complexity equation of the following piece of code:
sum = 0;                            c1 = 11
for (i = 0; i < N; i++)             c2 = 6
    for (j = 0; j < N; j++)         c2 = 6
        for (j = 0; j < N; j++)     c2 = 7
            sum += arr[i][j]        c3 = 2
Each statement has a cost associated with it, but I require the complete time-complexity equation and its answer.
Regards
The comments section got quite long so I am going to write up an answer summarizing everything.
Measuring Time Complexity
In Computer Science, we measure time complexity by the number of steps/iterations your algorithm takes to evaluate.
So if you have a simple array of length n and you go through this array only once, say to print all the elements, we say that this algorithm is O(n), because the time it takes to run grows proportionally to the size of the array you have, thus n.
You can think of Big-O O(..) as a higher-order function that compares other functions. If we say f(x) = O(n), it means that your function grows at most as fast as y = n, thus linearly. This means that if you were to plot these functions on a graph, there would be a point x = c after which the graph of n (scaled by a suitable constant) will always be on top of f(x) for any x > c. Big-O signifies an upper bound of a function in terms of another function.
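For reference, the standard formal definition behind that picture (a textbook fact, not from the original answer): f(x) = O(g(x)) if and only if there exist constants c > 0 and x0 such that f(x) <= c * g(x) for all x > x0. The scaling constant c is why factors like 1/2 or 3 disappear in O notation.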
So let's look at your original question and what it means to be constant time. Say we have this function
def printFirst5(arr: Array[Int]): Unit = {
    for (i <- 0 until 5) {
        println(arr(i))
    }
}
This is what we call a constant-time algorithm. Can you see why? Because no matter what array you pass into this (as long as it has at least 5 elements), it will only go through the first 5 elements. You can pass it an array of length 100000000000, or an array of length 10; it doesn't matter. In each case it will only look at the first 5 elements, meaning this function printFirst5 will never go above the line y = 5 in terms of time complexity. These kinds of functions are denoted O(1), for constant time.
Now, finally, let's look at your edited version. (I am not sure what you are trying to do in your example because it is syntactically wrong, so I will write my own example.)
def groupAllBy3(array: Array[Int]): Unit = {
    for (i <- 0 until array.length) {
        for (j <- 0 until array.length) {
            for (k <- 0 until array.length) {
                println(s"${array(i)}, ${array(j)}, ${array(k)}")
            }
        }
    }
}
This function's time complexity is O(N^3). Why? Let's take a look.
The innermost loop will go through N elements for every j.
How many j's are there? Well, there will be N j's for every i.
How many i's are there? N of them.
So in total we get numberof-i * numberof-j * numberof-k = N * N * N = O(N^3)
Just to make sure you understand this correctly, let's take a look at another example. What would happen if these loops weren't nested? If we had:
def printAllx3(array: Array[Int]): Unit = {
    for (i <- 0 until array.length) {
        println(array(i))
    }
    for (j <- 0 until array.length) {
        println(array(j))
    }
    for (k <- 0 until array.length) {
        println(array(k))
    }
}
What is the case here?
The first loop goes through N elements, the second loop goes through N elements, the third loop goes through N elements. But they don't depend on each other in terms of iterations, so we get N + N + N = 3N = O(N).
Do you see the difference?
With all due respect, I believe you are missing some of the fundamentals of what time complexity is and how we measure it. There is only so much I can explain here; I highly recommend you do some reading on the subject and ask further questions about anything you don't understand.
Hope this helps
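For completeness, here is the kind of full cost equation the asker seems to want, under the assumption that the intended code is the standard doubly nested sum over an N x N array (the per-statement costs c1 = 11, c2 = 6, c3 = 2 are taken from the question's annotations):

sum = 0;                        runs 1 time                    cost c1
for (i = 0; i < N; i++)         test runs N + 1 times          cost c2 each
    for (j = 0; j < N; j++)     test runs N * (N + 1) times    cost c2 each
        sum += arr[i][j];       runs N^2 times                 cost c3 each

T(N) = c1 + c2 * (N + 1) + c2 * N * (N + 1) + c3 * N^2
     = (c2 + c3) * N^2 + 2 * c2 * N + (c1 + c2)
     = 8N^2 + 12N + 17
     = O(N^2)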
By a popular definition, a loop or recursion that runs a constant number of times is also considered O(1).
For example the following loop is O(1)
// Here c is a constant
for (int i = 1; i <= c; i++) {
// some O(1) expressions
}
The time complexity of a loop is considered O(n) if the loop variable is incremented/decremented by a constant amount.
For example, the following loop has O(n) time complexity.
// Here c is a positive integer constant
for (int i = 1; i <= n; i += c) {
    // some O(1) expressions
}
I got a little confused with the following example. Here, let's take c = 5; according to the O(1) definition, the code below becomes O(1):
for (int i = 0; i < 5; i++) {
    cout << "Hello" << endl;
}
Function 1:
for (int i = 0; i < len(array); i += 2) {
    if (key == array[i])
        cout << "Element found";
}
Function 2:
for (int i = 0; i < len(array); i++) {
    if (key == array[i])
        cout << "Element found";
}
But when we compare the above 2 examples, will they both become O(n), or is the first function O(1) by that definition? What exactly does a loop running a constant number of times mean?
Assuming that len(array) is the n we're talking about [*], both your functions are O(n).
Function 2 will execute the if n times (once for each element of the array), making it obviously O(n).
Function 1, on the other hand, will execute the if n/2 times (once for every other element in the array), leading to a run time of O(n/2), and since constant factors (1/2 in this case) are omitted in O notation, you again end up with O(n).
[*] For the sake of completeness: if your array were of a fixed size, i.e. len(array) were a constant, then both functions would be O(1).
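As a quick sanity check, here is a sketch of my own (with an assumed n = 1000) that counts the loop iterations of both functions, showing the factor of 2 that O notation discards:

#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> arr(1000);  // assumed input size n = 1000
    long count1 = 0, count2 = 0;

    // Function 1: stride of 2 -> about n/2 iterations.
    for (size_t i = 0; i < arr.size(); i += 2) count1++;

    // Function 2: stride of 1 -> n iterations.
    for (size_t i = 0; i < arr.size(); i++) count2++;

    cout << "stride 2: " << count1 << " iterations" << endl; // 500
    cout << "stride 1: " << count2 << " iterations" << endl; // 1000
    // Both counts grow linearly with n, so both functions are O(n).
    return 0;
}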
"Loop running a costant number of times" means the loop runs a number of times that is limited from above by a constant, i.e. a given number that is indipendent from the input of your program.
In both function 1 and function 2 (unless the lengths of the arrays are fixed, or you can prove they will never be greater than a specific constant, independently of the input), the if will be executed a number of times that depends on the size of the input, so the time complexity can't be O(1).
"Time Complexity of a loop is considered as O(n) if the loop variables is incremented / decremented by a constant amount" is a misleading definition
Given a is much larger than b, would
for (i = 0; i < a; i++)
    for (k = 0; k < b; k++)
be faster than
for (i = 0; i < b; i++)
    for (k = 0; k < a; k++)
It feels to me the former would be faster but I cannot seem to get my head around this.
Well, it really depends on what you're doing. It's hard to do runtime analysis without knowing what's being done. That being said, if you're using this code to traverse a large array, it's more important to go through each column in each row rather than vice versa.
[0][1][2]
[3][4][5]
[6][7][8]
is really [0][1][2][3][4][5][6][7][8] in memory.
Your computer's cache provides a greater advantage when memory access is close together, and going sequentially though memory rather than skipping through rows provide much more locality.
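As a sketch of what that looks like in code (my example; the sizes are assumed), the two traversal orders below touch exactly the same elements, but the first walks memory sequentially while the second jumps a whole row's worth of elements on every step:

#include <vector>
using namespace std;

int main() {
    const int rows = 1000, cols = 1000;
    vector<int> grid(rows * cols); // stored row by row, like the diagram above
    long sum = 0;

    // Row-major traversal: consecutive addresses, cache-friendly.
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            sum += grid[r * cols + c];

    // Column-major traversal: jumps `cols` elements per step, poor locality.
    for (int c = 0; c < cols; c++)
        for (int r = 0; r < rows; r++)
            sum += grid[r * cols + c];

    return sum == 0 ? 0 : 1; // use `sum` so the loops aren't optimized away
}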
Starting a loop takes effort; there's the loop variable itself, plus all the variables declared within the loop, which are allocated memory and pushed onto the stack.
This means the fewer times you enter a loop the better, so loop over the smaller range in the outer loop.
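A minimal sketch of that point (my example; the values of a and b are assumed): both orderings do a * b units of inner work, but the number of times the inner loop has to be set up differs:

#include <cstdio>

int main() {
    const long a = 1000000, b = 10; // a much larger than b
    long entries = 0;

    // Larger range outside: the inner loop is entered a times.
    for (long i = 0; i < a; i++) {
        entries++;
        for (long k = 0; k < b; k++) { }
    }
    printf("large outer: %ld inner-loop entries\n", entries);

    entries = 0;
    // Smaller range outside: the inner loop is entered only b times.
    for (long i = 0; i < b; i++) {
        entries++;
        for (long k = 0; k < a; k++) { }
    }
    printf("small outer: %ld inner-loop entries\n", entries);
    return 0;
}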