Number of steps in a nested loop - algorithm

I am trying to calculate the number of steps executed for the following nested loop, specifically for asymptotic growth. Based on the number of steps I will derive the Big O for this algorithm.
def get_multiples(list):
    multiple = []
    for o in list:
        for i in list:
            multiple.append(o * i)
    return multiple
The way I have calculated it is as follows (the list consists of a large number of elements, n):
Assignment statement (no. of steps = 1):
multiple = []
Nested loops:
for o in list:
    for i in list:
        multiple.append(o * i)
In the outer loop the variable o is assigned n times. Each time the outer loop executes, first the variable i is assigned n times, then the variables are multiplied n times, and finally the list is appended to n times. Therefore the no. of steps = n*(n + n + n) = 3n^2.
Return statement (No. of steps = 1):
return multiple
Therefore the total no. of steps = 3n^2 + 2.
However, the correct answer is 3n^2 + n + 2. Apparently the execution of the outer loop takes an additional n steps which is not required for the inner loop.
Can somebody explain what I missed?
It does not make a difference to the complexity, since it will still be O(n^2).

I think the correct way to count the steps in the nested loop is as follows:
The variable o is assigned n times.
The variable i is assigned n^2 times, o*i is calculated n^2 times, and the append function is called n^2 times.
Therefore n + n^2 + n^2 + n^2 = 3n^2 + n.
Add that to the rest and you get 3n^2 + n + 2.

def get_multiples(list):
    multiple = []                 # 1 STEP
    for o in list:                # executed n times, so n STEPS
        for i in list:            # executed n times for each value of o, so n*n STEPS
            multiple.append(o*i)  # 1 STEP to multiply and 1 STEP to append, repeated for each pair (o, i), so 2*n*n STEPS
    return multiple               # 1 STEP
Adding the above: 1 + n + n^2 + 2n^2 + 1 = 3n^2 + n + 2
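As a sanity check, the count can be verified empirically with a hypothetical instrumented version of the function that increments a counter for exactly the operations tallied above:

```python
def get_multiples_counted(lst):
    """Like get_multiples, but counts the assumed unit-cost steps."""
    steps = 1                  # multiple = [] : 1 step
    multiple = []
    for o in lst:              # n assignments of o
        steps += 1
        for i in lst:          # n assignments of i per outer iteration
            steps += 1
            multiple.append(o * i)
            steps += 2         # 1 step to multiply + 1 step to append
    steps += 1                 # return: 1 step
    return multiple, steps
```

Running this for several values of n confirms the 3n^2 + n + 2 total.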


How do I write a pseudo code if a recurrence is given

T(n) = 4T(n/4) + log(4n) is the recurrence provided, and I was wondering how to write pseudocode based on it.
This says: make four recursive calls, and have each recursive call do an amount of work equal to log(4n) (rounded down).
Note that log(4n) = log 4 + log n = 2 + log n (using base-2 logarithms).
Something like this gets pretty close:
function foo(arr[1...n])
    if n <= 1 then return 1
    c = 1
    while c < n do
        c *= 4
    return c + foo(arr[1...n/4]) + foo(arr[n/4+1...n/2]) + foo(arr[n/2+1...3n/4]) + foo(arr[3n/4+1...n])
The recurrence here is T(n) = 4T(n/4) + log(4n) + 15, if I counted correctly, making some assumptions about operations taking the same amount of time to run.
We can bring that 15 down by turning the algorithm more and more silly. Store n/4 in a variable, call foo on the first 1/4 of arr four times, but only return c; this brings the 15 down to 1, I think. To get rid of that 1 operation, start c at 4 instead, killing one loop iteration (2 ops) and add one op back at the end, like return c + 1 instead of return c.
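For concreteness, here is a hypothetical Python translation of that pseudocode (a sketch assuming n is a power of 4, so the four quarters divide evenly):

```python
def foo(arr):
    n = len(arr)
    if n <= 1:
        return 1
    c = 1
    while c < n:      # multiplies by 4 each time: about log4(n) iterations of work
        c *= 4
    q = n // 4        # store n/4 in a variable, as suggested above
    return (c + foo(arr[:q]) + foo(arr[q:2 * q])
              + foo(arr[2 * q:3 * q]) + foo(arr[3 * q:]))
```

Each call does a logarithmic amount of loop work and then recurses on the four quarters, matching T(n) = 4T(n/4) + O(log n).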

Program and run time determination

Given the following pseudo code
D(A) // A is an array of numbers
    U_Size = 1
    for i = 2 to length(A)
        U = ?
        for j = 1 to U_Size
            if A[j] = A[i]
                then U = FALSE
                     j = U_Size
        if U = TRUE
            then U_Size = U_Size + 1
                 A[U_Size] = A[i]
    return U_Size
What would be the best option to replace the "?" in line 4 (U = ?)?
And what exactly does this program do? - Answered
How should I determine the run-time & space complexity of this program?
MY ANSWER: In line 4, I initialized U -> U = TRUE, and figured that the program arranges all of the distinct elements of the array at the beginning of the array and returns the number of distinct elements.
The question that remains unanswered is: How should I determine the run-time & space complexity of this program (2)?
If you are not familiar with Big O notation, I would suggest reading about it, because in computer science we use it to represent time and space complexity.
Let the length of the input array be n.
The time complexity of this code will be O(n^2) in the worst case, and the worst case occurs for an array of all distinct numbers. For such an array, the condition if A[j] = A[i] is always false, so the inner loop over j runs from 1 to U_Size for every i, and U_Size increases by 1 every time. If it is still not clear, you can work it out with some math for an array of all distinct numbers.
For i = 2 and U_Size = 1, the inner loop of j runs from 1 to 1, i.e. 1 time.
For i = 3 and U_Size = 2, the inner loop of j runs from 1 to 2, i.e. 2 times.
For i = 4 and U_Size = 3, the inner loop of j runs from 1 to 3, i.e. 3 times.
For i = 5 and U_Size = 4, the inner loop of j runs from 1 to 4, i.e. 4 times.
...
For i = n (the length of array A) and U_Size = n-1, the inner loop of j runs from 1 to n-1, i.e. n-1 times.
So, if you sum up the running times for all the iterations of i,
1 + 2 + 3 + 4 + ... + (n-1), you get n*(n-1)/2, which is O(n^2).
Now, space complexity. The code reuses the array A itself for the assignment after the j loop, so if you do not count the input array when reporting space, the space complexity is O(1).
If you store the unique elements in a separate array instead of assigning A[U_Size] = A[i], or if you count the input array itself, then the space complexity is O(n).
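Putting this together, here is a Python sketch of D(A) with U initialized to TRUE, as in the answer above (0-based indexing, so the pseudocode's i = 2..length(A) becomes range(1, len(A))):

```python
def D(A):
    u_size = 1
    for i in range(1, len(A)):
        u = True
        for j in range(u_size):     # scan the prefix of distinct elements
            if A[j] == A[i]:
                u = False
                break               # the pseudocode's j = U_Size exits the loop
        if u:
            A[u_size] = A[i]        # move the new distinct element to the front
            u_size += 1
    return u_size
```

For an array of all distinct elements the break never fires, so the inner loop does 1 + 2 + ... + (n-1) comparisons, matching the worst-case analysis.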

Triple Nested For-loop with two independent outer loops and one dependent inner loop

I have the following sequence of loops.
TripleLoop(int n)
    for i <- 1 to n
        for j <- 1 to n
            for k <- j to n
                do num <- j + i
    return num
I know the two outer loops run "n" times.
Thus, we have n * n = n^2 for the two outer loops.
However, the third inner loop depends on the variable "j".
How do I begin to solve these types of nested dependent for-loops?
I'm not sure if I should multiply, or add the third inner loop to the two outer loops.
Can someone help me out here?
Well, the inner loop (the one with k as iterator) is executed n-j+1 times, since it starts at j and ends at n.
The total number of steps the middle for loop thus performs is the sum of steps per iteration for j, so that means that the total number of times we run the body of the inner for loop is:
  n
 ___
 \                  n * (n + 1)
 /   (n - j + 1) = -------------
 ---                     2
 j=1
so after one iteration of the outer loop (the one with i as iterator), we have n*(n+1)/2 steps.
Since the outer loop runs n times, and the number of steps in its body does not depend on the value of i itself, our algorithm will thus run the body of the inner loop a total of n * n*(n+1)/2 times.
If we consider the num <- j + i part to run in constant time (strictly speaking, summing huge numbers cannot be done in constant time), then this is thus an O(n^3) algorithm.
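The n * n*(n+1)/2 total can be checked empirically by counting how many times the innermost body runs (a quick sketch, not part of the original question):

```python
def triple_loop_count(n):
    count = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for k in range(j, n + 1):   # n - j + 1 iterations
                count += 1              # stands in for num <- j + i
    return count
```

The count agrees with n * n*(n+1)/2 for every n, confirming the O(n^3) bound.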

How to find the running time of a specific procedure?

For each of the procedures below, let T(n) be the running time. Find the order of T(n)
(i.e., find f(n) such that T(n) ∈ Θ(f(n))).
Procedure Fum(int n):
    for i from 1 to n do
        y ← 1/i
        x ← i
        while x > 0 do
            x ← x − y
        end while
    end for
I know how to find run times of simple functions but since this is a nested loop where the inner loop depends on a variable from the outer loop, I'm having trouble.
It should be 1 + 4 + 9 + ... + n^2 = n(n+1)(2n+1)/6, or simply O(n^3), for this case.
For each step of the for-loop, the while-loop runs i^2 times. Given x = i and y = 1/i, it takes i^2 iterations (since x = y * i^2) for x to reach x <= 0 by the decrement step x = x - y.
For i = 1, 2, ..., n, summing these up you get 1 + 4 + 9 + ... + n^2 = n(n+1)(2n+1)/6.
First, let's consider the runtime of the inner loop:
We want to figure out how many times the inner loop runs, in terms of i.
That is, we want to solve for f(i) in x - f(i)*y = 0. If we substitute x = i and y = 1/i, we get f(i) = i^2.
We know the outer loop will run exactly n times, so the total number of times the inner loop runs is:
1 + 4 + 9 + ... + n^2
This sum is equal to n(n+1)(2n+1)/6, which is O(n^3).
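The i^2 count can be verified with a small experiment. This sketch uses Fraction rather than floats, since repeatedly subtracting 1/i in floating point would accumulate rounding error and could change the iteration count:

```python
from fractions import Fraction

def fum_inner_steps(n):
    total = 0
    for i in range(1, n + 1):
        y = Fraction(1, i)
        x = Fraction(i)
        while x > 0:       # x / y = i^2, so this runs exactly i^2 times
            x -= y
            total += 1
    return total
```

The total matches n(n+1)(2n+1)/6 exactly.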

Complexity of a recursive function with a loop

I have a recursive function working on a list. The function contains a loop in which it calls itself, and it ends by calling another function g. Its structure is similar to the following; to simplify the issue, we can assume that l is always a list without duplicate elements.
let rec f l =
  match l with
  | [] -> g ()
  | _ ->
      List.fold_left
        (fun acc x ->
          let lr = List.filter (fun a -> a <> x) l in
          acc + f lr)
        1 l
I am not sure how to express the complexity of this function in terms of List.length l and the complexity of g.
I think it is proportional to the complexity of g times the factorial of List.length l; could anyone confirm?
Since you assume that the list l does not contain any duplicates, what this function does is compute all sublists that have one less element than the original list and call itself recursively on all of them. So, the number of times g is called when starting with a list of size n is C(n) = n · C(n-1) with C(0) = 1, i.e. C(n) = n!.
Now, let's consider everything else the function has to do. The amount of work at each step of the recursion includes :
For each element in the original list, constructing a new list with one less element. This is a total amount of work equal to n^2.
Once the result of the recursive call is known, adding it to an accumulator. This is a total amount of work equal to n (this part can be ignored, since the filter is more costly).
So, since we know how many times each recursive step will be called (based on our previous analysis), the total amount of non-g-related work is: t(n) = n^2 + n·(n-1)^2 + n·(n-1)·(n-2)^2 + ... + n!
This formula looks like a pain, but in fact t(n)/n! has a finite non-zero limit as n increases (it is the sum of the (k+1)/k! with 0 < k < n), and so t(n) = Θ(n!).
Okay, I don't mean to seem mistrustful. This really does look like functional programming homework, because it's not very practical code.
Let F(n) be the number of comparisons plus the number of additions for an input of length n, and let G be the run time of g. Since g doesn't take any operands, G is constant; we are just counting the number of times it's called.
The fold will execute its function n times. Each execution calls filter, which does n comparisons and removes exactly one element from its input, then recursively calls f on this shortened list and does one addition. So the total cost is
F(n) = n * (n + F(n - 1) + 1)    [ if n > 0 ]
     = G                         [ otherwise ]
The first case expands to
F(n) = n * F(n - 1) + n^2 + n
Unrolling this gives F(n) = Θ(n!) + n! * G, i.e. O(n! * G), as you proposed.
I hope this is helpful.
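To double-check the n! claim, here is a small Python counterpart of the OCaml code, with g replaced by a hypothetical stub that simply counts how often it is called:

```python
import math

calls = 0

def g():
    global calls
    calls += 1
    return 0          # stand-in for whatever g really computes

def f(l):
    if not l:
        return g()
    # fold_left with accumulator 1: add f of each one-element-smaller sublist
    return 1 + sum(f([a for a in l if a != x]) for x in l)

f([1, 2, 3, 4])
```

After running f on a 4-element list of distinct values, calls equals 4! = 24, matching C(n) = n · C(n-1).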
