Time complexity for the following code

int i = 1, s = 1;
while (s <= n)
{
    i++;
    s = s + i;
}
The time complexity of this is O(sqrt(n)), but I do not understand how, since the series grows like 1 + 2 + ... + k. Please help.

s(k) = 1 + 2 + 3 + ... + k = k * (k + 1) / 2
For s(k) >= n you need at least k steps. Setting n = k * (k + 1) / 2 and solving for k gives k = -1/2 + sqrt(1 + 8n)/2.
You ignore constants and coefficients, and O(-1/2 + sqrt(1 + 8n)/2) = O(sqrt(n)).

Let the loop execute x times. The loop keeps running as long as s <= n.
We have:
After the 1st iteration:
s = 1 + 2
After the 2nd iteration:
s = 1 + 2 + 3
As this goes on for x iterations, we finally have
1 + 2 + ... + x <= n
=> x * (x + 1) / 2 <= n
=> x^2 = O(n)
=> x = O(sqrt(n))

This computes the sum s(k)=1+2+...+k and stops when s(k) > n.
Since s(k)=k*(k+1)/2, the number of iterations required for s(k) to exceed n is O(sqrt(n)).
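To see this concretely, here is a minimal C++ sketch (my addition, not from the original post) that counts the iterations of the loop from the question and compares them with sqrt(2n):

#include <cmath>
#include <cstdio>

int main() {
    for (long long n = 100; n <= 100000000; n *= 100) {
        long long i = 1, s = 1, steps = 0;
        while (s <= n) {   // the loop from the question
            i++;
            s = s + i;
            steps++;
        }
        // steps stays close to sqrt(2 * n)
        std::printf("n = %lld  steps = %lld  sqrt(2n) = %.1f\n",
                    n, steps, std::sqrt(2.0 * n));
    }
    return 0;
}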

Related

How do you find the time complexity in terms of theta for the following question?

Question from my university exam, 2020
The above link has the question I'm referring to in detail (it's an image).
Okay, first let's assume the for loop executes k times. The value of j after the kth iteration is
j = 2(1) + 2(2) + 2(3) + ... + 2(k)
j = k^2 + k
But you know j <= n, so solving for k you get k <= (-1 + sqrt(1 + 4n)) / 2.
So you can say it has a time complexity of O(sqrt(n)).
You can write an equation to find the number of loop iterations:
2 * 1 + 2 * 2 + ... + 2 * k = n
where k is the number of iterations of the loop while j <= n.
Hence,
2 (1 + 2 + ... + k) = n  =>  2 (k(k + 1)/2) = n
=> k(k + 1) = n  =>  k = Theta(sqrt(n))
Therefore, as the other operations are assumed to be Theta(1), the total time complexity of the algorithm is Theta(sqrt(n)).
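For reference, a loop of roughly this shape would produce that value of j (this is an assumption on my part, since the exam question itself is only available as an image):

#include <cstdio>

int main() {
    int n = 100;                 // example input
    int j = 0, iterations = 0;
    for (int i = 1; j <= n; i++) {
        j += 2 * i;              // after k iterations, j = 2(1 + 2 + ... + k) = k^2 + k
        iterations++;
    }
    std::printf("iterations = %d (about sqrt(n))\n", iterations);
    return 0;
}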

Finding sum in less than O(N)

Question:
In less than O(n), find a number K in the sequence 1, 2, 3, ..., N such that the sum of 1, 2, 3, ..., K is exactly half of the sum of 1, 2, 3, ..., N.
Maths:
I know that the sum of the sequence 1,2,3....N is N(N+1)/2.
Therefore our task is to find K such that:
K(K+1) = 1/2 * (N)(N+1)/2 if such a K exists.
Pseudo-code:
sum1 = n(n+1)/2
sum2 = 0
for (i = 1; i < n; i++)
{
    sum2 += i;
    if (sum2 == sum1)
    {
        index = i
        break;
    }
}
Problem: The solution is O(n), but I need something better than O(n), such as O(log(n)).
You're close with your equation, but you dropped the divide by 2 from the K side. You actually want
K * (K + 1) / 2 = N * (N + 1) / (2 * 2)
Or
2 * K * (K + 1) = N * (N + 1)
Plugging that into Wolfram Alpha gives the real solutions:
K = 1/2 * (-sqrt(2N^2 + 2N + 1) - 1)
K = 1/2 * (sqrt(2N^2 + 2N + 1) - 1)
Since you probably don't want the negative value, the second equation is what you're looking for. That should be an O(1) solution.
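As a rough illustration (my own sketch, not code from the answer), you can evaluate that formula and then verify the candidate with integer arithmetic, since floating point can be off by one for large N:

#include <cmath>
#include <cstdio>

// Returns K such that 2*K*(K+1) == N*(N+1), or -1 if no such K exists.
long long find_k(long long n) {
    long long k = (long long)((std::sqrt(2.0 * n * n + 2.0 * n + 1.0) - 1.0) / 2.0);
    // Check k and its neighbours to compensate for rounding errors.
    for (long long c = k - 1; c <= k + 1; ++c)
        if (c > 0 && 2 * c * (c + 1) == n * (n + 1))
            return c;
    return -1;
}

int main() {
    std::printf("%lld\n", find_k(3));    // prints 2
    std::printf("%lld\n", find_k(20));   // prints 14
    return 0;
}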
The other answers show the analytical solutions of the equation
k * (k + 1) = n * (n + 1) / 2, where n is given.
The OP needs k to be a whole number, though, and such a value may not exist for every chosen n.
We can adapt Newton's method to solve this equation using only integer arithmetic.
sum_n = n * (n + 1) / 2
k = n
repeat indefinitely                        // It usually needs only a few iterations; it's O(log(n))
    f_k = k * (k + 1)
    if f_k == sum_n
        k is the solution, exit
    if f_k < sum_n
        there's no k, exit
    k_n = (f_k - sum_n) / (2 * k + 1)      // Newton step: f(k) / f'(k)
    if k_n == 0
        k_n = 1                            // Avoid infinite loop
    k = k - k_n
Here is a C++ implementation.
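Since that implementation isn't reproduced here, the following is only a sketch of how the pseudocode above could translate to C++ (my assumption, not the linked code):

#include <cstdio>

// Integer Newton iteration for k*(k+1) == n*(n+1)/2; returns -1 if no solution.
long long solve(long long n) {
    long long sum_n = n * (n + 1) / 2;
    long long k = n;
    while (true) {
        long long f_k = k * (k + 1);
        if (f_k == sum_n) return k;                     // exact solution found
        if (f_k < sum_n) return -1;                     // overshot: no integer solution
        long long step = (f_k - sum_n) / (2 * k + 1);   // f(k) / f'(k)
        if (step == 0) step = 1;                        // avoid an infinite loop
        k -= step;
    }
}

int main() {
    std::printf("%lld\n", solve(20));   // prints 14
    return 0;
}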
We can find all the pairs (n, k) that satisfy the equation for 0 < k < n ≤ N by adapting the algorithm posted in the question.
n = 1                        // This algorithm compares 2 * k * (k + 1) and n * (n + 1)
sum_n = 1                    // It finds all the pairs (n, k) where 0 < n ≤ N in O(N)
k = 1
sum_2k = 2
while n <= N                 // Note that n / k → sqrt(2) when n → ∞
    while sum_n < sum_2k
        n = n + 1            // This inner loop requires a couple of iterations,
        sum_n = sum_n + n    // at most.
    if ( sum_n == sum_2k )
        print n and k
    k = k + 1
    sum_2k = sum_2k + 2 * k
Here is an implementation in C++ that can find the first pairs with N < 200,000,000:
           N            K           K * (K + 1)
-----------------------------------------------
           3            2                     6
          20           14                   210
         119           84                  7140
         696          492                242556
        4059         2870               8239770
       23660        16730             279909630
      137903        97512            9508687656
      803760       568344          323015470680
     4684659      3312554        10973017315470
    27304196     19306982       372759573255306
   159140519    112529340     12662852473364940
Of course it becomes impractical for too large values and eventually overflows.
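For reference, here is a rough C++ rendering of that scan (my own translation of the pseudocode above, not the linked implementation):

#include <cstdio>

int main() {
    const long long N = 1000000;       // search limit (adjust as needed)
    long long n = 1, sum_n = 1;        // sum_n  = n * (n + 1) / 2
    long long k = 1, sum_2k = 2;       // sum_2k = k * (k + 1)
    while (n <= N) {
        while (sum_n < sum_2k) {       // advance n until the sums can match
            n = n + 1;
            sum_n = sum_n + n;
        }
        if (sum_n == sum_2k)
            std::printf("n = %lld, k = %lld\n", n, k);
        k = k + 1;
        sum_2k = sum_2k + 2 * k;
    }
    return 0;
}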
Besides, there's a far better way to find all those pairs (have you noticed the patterns in the sequences of the last digits?).
We can start by manipulating this Diophantine equation:
2k(k + 1) = n(n + 1)
introducing u = n + 1  →  n = u - 1
            v = k + 1      k = v - 1
2(v - 1)v = (u - 1)u
2(v^2 - v) = u^2 - u
2(4v^2 - 4v) = 4u^2 - 4u
2(4v^2 - 4v) + 2 = 4u^2 - 4u + 2
2(4v^2 - 4v + 1) = (4u^2 - 4u + 1) + 1
2(2v - 1)^2 = (2u - 1)^2 + 1
substituting x = 2u - 1  →  u = (x + 1)/2
             y = 2v - 1      v = (y + 1)/2
2y^2 = x^2 + 1
x^2 - 2y^2 = -1
Which is the negative Pell's equation for 2.
It's easy to find its fundamental solution by inspection, x_1 = 1 and y_1 = 1. That corresponds to n = k = 0, a solution of the original Diophantine equation, but not of the original problem (I'm ignoring the sums of 0 terms).
Once that is known, we can calculate all the other solutions with two simple recurrence relations
x_(i+1) = x_i + 2 y_i
y_(i+1) = y_i + x_i
Note that we need to "skip" the even y's, as they would lead to non-integer solutions. So we can directly use these
x_(i+2) = 3 x_i + 4 y_i   →   u_(i+1) = 3 u_i + 4 v_i - 3   →   n_(i+1) = 3 n_i + 4 k_i + 3
y_(i+2) = 2 x_i + 3 y_i   →   v_(i+1) = 2 u_i + 3 v_i - 2   →   k_(i+1) = 2 n_i + 3 k_i + 2
Summing up:
                    n                        k
-----------------------------------------------
3* 0 + 4* 0 + 3 =   3    2* 0 + 3* 0 + 2 =  2
3* 3 + 4* 2 + 3 =  20    2* 3 + 3* 2 + 2 = 14
3*20 + 4*14 + 3 = 119    2*20 + 3*14 + 2 = 84
...
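As a small illustration (my addition, not part of the answer), those recurrences can be iterated directly:

#include <cstdio>

int main() {
    long long n = 0, k = 0;                          // trivial solution, skipped below
    for (int i = 0; i < 10; ++i) {
        long long next_n = 3 * n + 4 * k + 3;
        long long next_k = 2 * n + 3 * k + 2;
        n = next_n;
        k = next_k;
        std::printf("n = %lld, k = %lld\n", n, k);   // (3, 2), (20, 14), (119, 84), ...
    }
    return 0;
}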
It seems that the problem is asking to solve the Diophantine equation
2K(K + 1) = N(N + 1).
By inspection, K = 2, N = 3 is a solution!
Note that technically this is an O(1) problem, because N has a finite value and does not vary (and if no solution exists, the dependency on N is even meaningless).
The condition you have is that the sum of 1..N is twice the sum of 1..K.
So you have N(N + 1) = 2K(K + 1), or K^2 + K - (N^2 + N) / 2 = 0,
which means K = (-1 +/- sqrt(1 + 2(N^2 + N))) / 2,
which is O(1).

What is the time complexity of the given snippet?

for i = 1 to n do
    for j = 1 to i do
        for k = 1 to j do
What is its time complexity in terms of 'n'?
The inner-most loop will obviously run j times. Assuming that it contains operations worth 1 time unit, this will be:
T_inner(j) = j
The middle loop will run i times, i.e.
T_middle(i) = Sum {j from 1 to i} T_inner(j)
= Sum {j from 1 to i} j
= i/2 * (1 + i)
Finally:
T_outer(n) = Sum {i from 1 to n} T_middle(i)
= Sum {i from 1 to n} (i/2 * (1 + i))
= 1/6 * n * (1 + n) * (2 + n)
= 1/6 n^3 + 1/2 n^2 + 1/3 n
And this is obviously O(n^3).
Note: This only counts the operations in the innermost block. It neglects the operations necessary to perform the loops themselves, but if you include those, you will see that the time complexity is the same.
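As a quick sanity check (my addition, not part of the original answer), you can count the innermost iterations and compare them with n(n+1)(n+2)/6:

#include <cstdio>

int main() {
    for (long long n = 10; n <= 1000; n *= 10) {
        long long count = 0;
        for (long long i = 1; i <= n; ++i)
            for (long long j = 1; j <= i; ++j)
                for (long long k = 1; k <= j; ++k)
                    ++count;                           // one unit of work
        long long formula = n * (n + 1) * (n + 2) / 6;
        std::printf("n = %lld: count = %lld, formula = %lld\n", n, count, formula);
    }
    return 0;
}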

Big O runtime for this algorithm?

Here's the pseudocode:
Baz(A) {
    big = −∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
}
So line 3 (the for loop over i) will be O(n), n being the length of the array A.
I'm not sure what line 4 (the for loop over j) would be... I know it decreases by 1 each time it runs, because i will increase.
And I can't get line 6 (the for loop over k) without getting line 4...
All help is appreciated, thanks in advance.
Let us first understand how the first two for loops work:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
The first for loop will run from 1 to n (the length of array A), and the second for loop depends on the value of i. So when i = 1 the second for loop runs n times; when i increments to 2, the second for loop runs (n - 1) times; and so on, down to 1.
So, in total, the second for loop runs:
n + (n - 1) + (n - 2) + (n - 3) + ... + 1 times.
You can use the formula sum(1 to N) = N * (N + 1) / 2, which gives (N^2 + N) / 2. So the Big Oh for these two loops is
O(n^2) (Big Oh of n squared)
Now let us consider the third loop as well.
Your third for loop looks like this:
for k = j to j + i - 1
But this effectively means:
for k = 0 to i - 1 (you are just shifting the range of values by adding/subtracting j, but the number of times the loop runs does not change, since the difference stays the same)
So the third loop runs i = 1 time for each of the first n iterations of the second loop, then i = 2 times for each of the next (n - 1) iterations, and so on.
So you get:
1*n + 2*(n - 1) + 3*(n - 2) + 4*(n - 3) + ... + n*1
= Sum {i from 1 to n} i * (n - i + 1)
= n * (n + 1) * (n + 2) / 6
= O(N^3)
So your time complexity will be O(n^3) (Big Oh of n cubed).
Hope this helps!
Methodically, you can follow the steps using Sigma Notation:
Baz(A):
    big = −∞
    for i = 1 to length(A)
        for j = 1 to length(A) - i + 1
            sum = 0
            for k = j to j + i - 1
                sum = sum + A(k)
            if sum > big
                big = sum
    return big
For Big-O, you need to look at the worst-case scenario.
Also, the easiest way to find the Big-O is to look at the most important parts of the algorithm; these are the loops or recursion.
So we have this part of the algorithm, consisting of the loops:
for i = 1 to length(A)
    for j = 1 to length(A) - i + 1
        for k = j to j + i - 1
            sum = sum + A(k)
We have,
SUM { SUM { i } for j = 1 to n-i+1 } for i = 1 to n
= 1/6 n (n+1) (n+2)
= (1/6 n^2 + 1/6 n) (n + 2)
= 1/6 n^3 + 2/6 n^2 + 1/6 n^2 + 2/6 n
= 1/6 n^3 + 3/6 n^2 + 2/6 n
= 1/6 n^3 + 1/2 n^2 + 1/3 n
T(n) ~ O(n^3)
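To make the count concrete, here is a small C++ sketch (my addition, not from the answers) that runs Baz on a dummy array and counts the innermost additions, which should match n(n+1)(n+2)/6:

#include <cstdio>
#include <limits>
#include <vector>

int main() {
    int n = 50;
    std::vector<long long> A(n + 1, 1);                  // 1-indexed, dummy values
    long long big = std::numeric_limits<long long>::min();
    long long ops = 0;
    for (int i = 1; i <= n; ++i)
        for (int j = 1; j <= n - i + 1; ++j) {
            long long sum = 0;
            for (int k = j; k <= j + i - 1; ++k) {
                sum += A[k];                             // innermost operation
                ++ops;
            }
            if (sum > big) big = sum;
        }
    std::printf("big = %lld, ops = %lld, n(n+1)(n+2)/6 = %lld\n",
                big, ops, (long long)n * (n + 1) * (n + 2) / 6);
    return 0;
}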

What is the running time for this function?

I have 3 questions about this function.
Sum = 0
MyFunction (N)
    M = 1,000,000
    If (N > 1)
        For I = 1 to M do
            Sum = 0
            J = 1
            Do
                Sum = Sum + J
                J = J + 2
            While J < N
        End For
        If (MyFunction(N / 2) % 3 == 0)
            Return (2 * MyFunction(N / 2))
        Else
            Return (4 * MyFunction(N / 2))
        End If
    Else
        Return 1
    End If
End MyFunction
The first question is: What's the complexity of the non-recursive part of the code?
I think the non-recursive part is this loop:
For I = 1 to M do
    Sum = 0
    J = 1
    Do
        Sum = Sum + J
        J = J + 2
    While J < N
End For
My answer is M * log(n), but my slides say it's not M * log(n)!
I need an explanation for this.
The second question is: What's the correct recurrence for the code of MyFunction above?
When I saw these lines of code:
If (MyFunction(N / 2) % 3 == 0)
    Return (2 * MyFunction(N / 2))
Else
    Return (4 * MyFunction(N / 2))
End If
I thought it's T(n) = T(n/2) + Theta(non-recursive part),
because the if will execute only one of the 2 calls.
Again, this answer is wrong.
The third question is: What's the complexity of MyFunction?
My answer, based on the first 2 questions, is T(n) = T(n/2) + M * lg n,
and the total running time is M * lg n.
Let's look at this one piece at a time.
First, here's the non-recursive part of the code:
For I = 1 to M do
    Sum = 0
    J = 1
    Do
        Sum = Sum + J
        J = J + 2
    While J < N
End For
The outer loop will run Θ(M) times. Since M is a fixed constant (one million), the loop will run Θ(1) times.
Inside the loop, the inner while loop will run Θ(N) times, since on each iteration J increases by two and stops as soon as J meets or exceeds N. Therefore, the total work done by this loop nest is Θ(N): Θ(N) work Θ(1) times.
Now, let's look at this part:
If (MyFunction(N / 2) % 3 == 0)
Return (2 * MyFunction(N / 2))
Else
Return (4 * MyFunction(N / 2))
End If
The if statement will make one recursive call on an input of size N / 2, and then depending on the result there will always be a second recursive call of size N / 2 (since you're not caching the result).
This gives the following recurrence relation for the runtime:
T(n) = 2T(n / 2) + Θ(n)
Using the Master Theorem, this solves to Θ(n log n).
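To see where the recurrence comes from, here is a rough C++ rendering of the pseudocode (my own sketch; the integer types and the small test value are assumptions), which makes the two recursive calls per level explicit:

#include <cstdio>

long long Sum = 0;   // global, as in the pseudocode

long long MyFunction(long long N) {
    const long long M = 1000000;                 // fixed constant
    if (N > 1) {
        for (long long I = 1; I <= M; ++I) {     // Theta(1) iterations, since M is constant
            Sum = 0;
            long long J = 1;
            do {                                 // Theta(N) iterations: J grows by 2 until it reaches N
                Sum = Sum + J;
                J = J + 2;
            } while (J < N);
        }
        if (MyFunction(N / 2) % 3 == 0)          // first recursive call on N / 2
            return 2 * MyFunction(N / 2);        // second recursive call on N / 2
        else
            return 4 * MyFunction(N / 2);        // second recursive call on N / 2
        // => T(N) = 2 T(N / 2) + Theta(N)
    } else {
        return 1;
    }
}

int main() {
    std::printf("%lld\n", MyFunction(8));        // small example; prints 64
    return 0;
}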
Hope this helps!
