Calculating complexities of both time and space

void foo2(int n) // function definition
{
    int x = 0;                  // O(1)
    while (n > 0)               // O(log(n))
    {
        int i;                  // another O(log(n)) because it's inside the loop
        for (i = 0; i < n; i++) // O((n+1)*log(n)): this loop happens inside the while loop
        {
            x += i;             // O(n*log(n)): happens inside the while and for loops
        }
        n /= 2;                 // O(log(n)) because it's inside the while loop and outside the for loop
    }
}
I am trying to compute the complexity of the code above. After my analysis, my answer is that the time complexity is O(n*log(n)). Can you check whether my steps are correct, and if they are not, tell me what's wrong? Also, since there are no arrays or strings here, is the space complexity O(1)?
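One way to sanity-check an analysis like this is to count the innermost operations empirically. Here is a minimal Java sketch of my own (the class and method names are made up, not from the question); the printed ratio shows how the count actually scales with n:

// Empirical check: count how many times the innermost statement runs.
public class FooCount {
    static long count(int n) {
        long ops = 0;
        int x = 0;
        while (n > 0) {
            for (int i = 0; i < n; i++) {
                x += i;
                ops++;          // one unit of innermost work
            }
            n /= 2;             // n halves on every pass of the while loop
        }
        return ops;
    }

    public static void main(String[] args) {
        for (int n = 1_000; n <= 1_000_000; n *= 10) {
            long ops = count(n);
            System.out.printf("n=%d ops=%d ops/n=%.2f%n", n, ops, (double) ops / n);
        }
    }
}

Comparing ops against both n and n*log(n) as n grows makes it easy to see which bound the innermost work actually tracks.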

Related

Time complexity of an algorithm - embedded loop

I'm trying to work out the complexity of this algorithm, but I'm not sure. Here is the code:
For (i = 0, i < N, i++) {
    For (j = 0, j < i/2, j++) {
        S.O.P ("");
    }
}
The S.O.P stands for the instructions given to the CPU.
I don't know what S.O.P. stands for (in Java it is often shorthand for System.out.println), but for the sake of the question I'll suppose it takes a fixed amount of time: O(1).
Therefore, the only thing left is to work out how long the for loops take to run.
for (i = 0; i < N; i++) {        // runs N times
    for (j = 0; j < i/2; j++) {  // on the i-th pass, this runs i/2 times
        S.O.P("");               // fixed time, O(1)
    }
}
Given that, we can calculate:
T(n) = (sum for i from 0 to n-1 of i/2) = (1/2)*(n*(n-1)/2) = n*(n-1)/4
T(n) = (n*(n-1)/4) * O(1) = O(n^2/4) = O(n^2)
So the final time complexity is O(n^2).
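A quick way to confirm the closed form is to count the inner calls directly. This is a sketch of my own (assuming, as above, that S.O.P is a constant-time print); the count agrees with n(n-1)/4 up to the rounding introduced by integer division:

// Counts how many times the inner statement executes and compares to n(n-1)/4.
public class SopCount {
    public static void main(String[] args) {
        for (int n = 10; n <= 10_000; n *= 10) {
            long ops = 0;
            for (int i = 0; i < n; i++) {
                for (int j = 0; j < i / 2; j++) {
                    ops++;       // stands in for the O(1) S.O.P call
                }
            }
            System.out.printf("n=%d ops=%d n(n-1)/4=%d%n", n, ops, (long) n * (n - 1) / 4);
        }
    }
}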

What would be the tight asymptotic runtime (Big Theta) for these algorithms?

Question 1
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i/2) {
        for (int j = 0; j < i*2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
The first loop will run log(n) times.
The second loop will run log(n) times.
The upper bound is O(log^2(n)). What would be Big Θ?
Question 2
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The upper bound for this is O(n). I am not quite sure about the Big Θ.
It would be great if someone could clarify these. Thank you.
Let's get clear on the definitions of the notations first.
Big O: it denotes an upper bound on the algorithm's running time.
Big Theta: it denotes a tight bound, i.e. the running time is bounded both above and below, up to constant factors.
For your first question:
public void guessWhat1(int N) {
    for (int i = N; i > 0; i = i/2) {
        for (int j = 0; j < i*2; j += 1) {
            System.out.println("Hello World");
        }
    }
}
For i=N the inner loop runs 2N times, for i=N/2 it runs N times, for i=N/4 it runs N/2 times, and so on.
So the total work is
2N + N + N/2 + N/4 + ... + 1
= 2N * (1 + 1/2 + 1/4 + ... + 1/(2N))
< 2N * 2 = 4N,
since the geometric series sums to less than 2.
So the complexity is O(N), and in Theta notation it is the same: Θ(N), because the loops do the same amount of work for every input of size N.
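Here is a sketch of my own (not from the answer) that counts the inner-loop iterations and compares them against the 4N bound:

// Counts the println-equivalents in guessWhat1 and compares against 4N.
public class GuessWhat1Count {
    static long count(int N) {
        long ops = 0;
        for (int i = N; i > 0; i = i / 2) {
            for (int j = 0; j < i * 2; j += 1) {
                ops++;           // stands in for the O(1) println
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        for (int N = 1_000; N <= 1_000_000; N *= 10) {
            System.out.printf("N=%d ops=%d 4N=%d%n", N, count(N), 4L * N);
        }
    }
}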
For your second question:
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The while loop runs O(sqrt(N)) times: after k iterations, s = 1 + 2 + ... + (k+1) = (k+1)(k+2)/2, so the loop stops once roughly k^2/2 exceeds N, i.e. after about sqrt(2N) iterations. As above, the Theta bound matches the big O bound, so it is Θ(sqrt(N)).
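A small sketch of my own that counts the loop's iterations and compares them to sqrt(2N):

// Counts the iterations of guessWhat2's while loop and compares to sqrt(2N).
public class GuessWhat2Count {
    public static void main(String[] args) {
        for (int N = 100; N <= 100_000_000; N *= 100) {
            int i = 1, s = 1;
            long iters = 0;
            while (s <= N) {
                i += 1;
                s = s + i;
                iters++;         // one pass of the while loop
            }
            System.out.printf("N=%d iters=%d sqrt(2N)=%.1f%n", N, iters, Math.sqrt(2.0 * N));
        }
    }
}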
The Theta bound differs from big O when different inputs of the same size take different amounts of time. Take insertion sort as an example (https://en.wikipedia.org/wiki/Insertion_sort), where N is the size of the input array. If the input array is already sorted it takes linear time, but if the input array is reverse sorted it takes N^2 time to sort.
So for insertion sort the time complexity is O(N^2):
it is Θ(N) in the best case and Θ(N^2) in the worst case.

Worst Case Big O runtime

Can someone help me find the worst case big-O runtime of the following algorithm in terms of n?
// precondition: A contains only positive numbers
public int[] four(int A[])
{
    int n = A.length;
    int[] B = new int[n];
    int max;
    for (int k = n-1; k >= 0; k--) {
        max = findMax(A); // call to findMax above
        B[k] = A[max];
        A[max] = -1;
    }
    return B;
}
// precondition: A contains only positive numbers
public int[] four(int A[])
{
    int n = A.length;                 // constant operation
    int[] B = new int[n];             // constant operation
    int max;                          // constant operation
    for (int k = n-1; k >= 0; k--) {  // iterates n times
        max = findMax(A);             // call to findMax above; takes O(n), since it scans all of A
        B[k] = A[max];                // constant operation
        A[max] = -1;                  // constant operation
    }
    return B;                         // constant operation
}
This entire operation takes O(n^2) time: the loop runs n times, and each iteration calls findMax(), which is assumed to be the usual linear scan costing O(n), regardless of how A's elements are ordered.
This is essentially selection sort using two arrays.
The complexity of your code depends on the complexity of findMax().
As your algorithm counts down once from n-1 to 0, the time complexity is O(n⋅f(n)), where f(n) is the complexity of findMax().
A is assumed to be unsorted, since you are sorting it, so findMax() is probably a linear search with a complexity of O(n).
So the overall complexity would be O(n²).
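For reference, the findMax() referred to above is not shown in the excerpt. A typical linear-scan version that returns the index of the maximum (an assumption consistent with how four() uses the result, not the poster's actual code) would look like this:

// Hypothetical findMax: returns the INDEX of the largest element of A.
// A linear scan like this is Theta(n) no matter how A is ordered.
public int findMax(int A[]) {
    int maxIdx = 0;
    for (int i = 1; i < A.length; i++) {
        if (A[i] > A[maxIdx]) {
            maxIdx = i;
        }
    }
    return maxIdx;
}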

Why is this time complexity O(n)?

Why does the below function have a time complexity of O(n)? I can't figure it out for the life of me.
void setUpperTriangular(int intMatrix[0,…,n-1][0,…,n-1]) {
    for (int i = 1; i < n; i++) {
        for (int j = 0; j < i; j++) {
            intMatrix[i][j] = 0;
        }
    }
}
I keep getting the final time complexity as O(n^2) because:
i: executes n times {            // time complexity = n*(n*1)
    j: executes n times {        // time complexity = n*1
        intMatrix[i][j] = 0;     // time complexity = 1
    }
}
The code writes to 0 + 1 + 2 + ... + (n-1) = n(n-1)/2 locations (half of a square matrix), so its time complexity is O(n^2).
This is the same as insertion sort's nested loop; the time complexity of insertion sort is O(n^2).
So, the CS department head explained it a different way. He said that since the second loop doesn't iterate n times, it iterates n! times, so technically it is O(n).
It can at most be considered O(n·m), which finally comes down to O(n·n), i.e. O(n^2).
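Counting the assignments directly makes the n(n-1)/2 figure concrete. A sketch of my own:

// Counts assignments made by setUpperTriangular's loops, compares to n(n-1)/2.
public class TriangularCount {
    public static void main(String[] args) {
        for (int n = 10; n <= 10_000; n *= 10) {
            long ops = 0;
            for (int i = 1; i < n; i++) {
                for (int j = 0; j < i; j++) {
                    ops++;       // stands in for intMatrix[i][j] = 0
                }
            }
            System.out.printf("n=%d ops=%d n(n-1)/2=%d%n", n, ops, (long) n * (n - 1) / 2);
        }
    }
}

Because 1 + 2 + ... + (n-1) is exactly n(n-1)/2, the two printed numbers match, and n(n-1)/2 grows as Θ(n^2), not Θ(n).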

The relationship between linear recursion, binary recursion and runtime

Say we have a method that takes n steps, but that calls itself linearly, in the worst case n times. In such a case, would the Big O be n*n? Is a recursive call generally n^2, similarly to two for loops?
Now take a binary recursion algorithm such as binary Fibonacci. One call of that algorithm takes n steps, but let's say it can recurse up to n times. Would the runtime of that algorithm be 2^n?
Let f() be a function which calls itself n times. Consider the C code representing the function f():
#include <stdio.h>

void f(int n)
{
    int i;
    if (n == 0)
    {
        printf("\n Recursion stopped");
    }
    else
    {
        for (i = 0; i < n; i++)   /* prints Hello n times per call */
        {
            printf("\n Hello");
        }
        f(n - 1);
    }
}
For n = 5, the message Hello will be printed 15 times.
For n = 10, the message Hello will be printed 55 times.
In general, the message will be printed n*(n+1)/2 times.
Thus the complexity of the function f() is O(n^2). Remember, each call of f() does O(n) work, and f() is recursively called n times. Such a function has the same complexity as the following nested loops, provided the inner loop body contains only constant-time operations (addition, subtraction, etc.):
for (i = 0; i <= n; i++)
{
    for (j = i; j <= n; j++)
    {
        /* some constant-time operation */
    }
}
For a binary recursion the time complexity is O(2^n).
A binary recursive function calls itself twice per invocation.
The following function g() is an example of binary recursion (a Fibonacci-style binary recursion):
#include <stdio.h>

void g(int n)
{
    if (n <= 1)                 /* base cases n = 0 and n = 1 */
    {
        printf("\n Recursion stopped");
    }
    else
    {
        printf("\n Hello");
        g(n - 1);
        g(n - 2);
    }
}
The recurrence relation is g(n) = g(n-1) + g(n-2), for n > 1.
Solving it gives an upper bound of O(2^n).
More precisely, the number of calls g() makes is Θ(φ^n),
where φ is the golden ratio, φ = (1 + √5)/2 ≈ 1.618.
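A Java sketch of my own (not from the answer) that counts the total number of calls g(n) makes; the ratio between consecutive counts approaches φ:

// Counts the calls made by the binary recursion g(n); the ratio of successive
// counts approaches the golden ratio, about 1.618.
public class BinaryRecursionCount {
    static long calls(int n) {
        if (n <= 1) return 1;               // base case: one call, no recursion
        return 1 + calls(n - 1) + calls(n - 2);
    }

    public static void main(String[] args) {
        long prev = calls(1);
        for (int n = 2; n <= 30; n++) {
            long c = calls(n);
            System.out.printf("n=%d calls=%d ratio=%.4f%n", n, c, (double) c / prev);
            prev = c;
        }
    }
}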
