Find the time complexity of the algorithm?

I think it is O(log(log n)) because the loop repeats log(log n) times...
j = 1;
i = 2;
while (i <= n) {
    B[j] = A[i];
    j = j + 1;
    i = i * i;
}

You are right, it is O(lg(lg n)), where lg stands for the base-2 logarithm.
The reason is that the sequence of values of i follows the rule i = prev(i) * prev(i), which produces 2, 2^2, 2^4, 2^8, ... at steps 1, 2, 3, 4, .... In other words, the value of i after k iterations is 2^{2^k}.
Thus, the loop stops as soon as 2^{2^k} > n, i.e. as soon as k > lg(lg(n)). (Just take lg twice on both sides of the inequality; it remains valid because lg is an increasing function.)
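As a quick sanity check, here is a minimal C++ sketch (the function name countIterations and the test values are my own, not from the question) that counts how many times the loop body runs and prints lg(lg n) next to it:

#include <cmath>
#include <cstdio>

// Counts how many times the squaring loop body executes for a given n.
long long countIterations(long long n) {
    long long count = 0;
    for (long long i = 2; i <= n; i *= i) {
        ++count;
    }
    return count;
}

int main() {
    for (long long n : {16LL, 256LL, 65536LL, 1000000LL}) {
        // std::log2(std::log2(n)) is the predicted growth rate lg(lg n).
        std::printf("n = %lld: %lld iterations, lg(lg n) = %.2f\n",
                    n, countIterations(n), std::log2(std::log2((double)n)));
    }
    return 0;
}

The counted values track lg(lg n) up to a small additive constant, which is exactly what O(lg(lg n)) predicts.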

Related

Summation function: prove it is big oh and big theta

I don't get how to solve this summation problem: proving that the function is big oh of n^4 and big omega of n^4.
The problem is this:
f(n) = Σ_{i=1}^{n} Σ_{j=1}^{i} Σ_{k=1}^{j} k
I wrote the code in C++ for what I think the summation is saying.
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= j; k++)
            ; // something O(1)
I know that I need to prove it is big oh and big omega of n^4
Your code does not reflect the sum: the innermost part of the summation formula is k, while your code assumes a constant for the inner part ("something O(1)"). The code should be:
sum = 0;
for (int i = 1; i <= n; i++)
    for (int j = 1; j <= i; j++)
        for (int k = 1; k <= j; k++)
            for (int m = 1; m <= k; m++)
                sum++;
The innermost loop looks like overkill, because it could be replaced with
sum += k;
...but writing it this way lets you translate the problem into: how many times is sum++ executed?
Imagine you have the values 1, 2, ..., n, and you must pick four numbers from them (picking the same number again is allowed), where the order of picking does not matter. Then you can pick:
1, 1, 1, 1
2, 1, 1, 1
2, 2, 1, 1
2, 2, 2, 1
2, 2, 2, 2
3, 1, 1, 1
3, 2, 1, 1
...
...etc. You would not count {1, 2, 1, 1} as that is one you already counted with {2, 1, 1, 1} -- order is not distinguished. So we only count where the chosen numbers are in a non-increasing order.
Now notice how the four nested loops in this (corrected) code do exactly that: they iterate such combinations, avoiding to count a set twice (by keeping i >= j >= k >= m).
So given that the inner task has constant time complexity, this problem boils down to: how many such combinations exist?
This is a combination with repetition, denoted C((n, m)); in our case m = 4, so we count the number of 4-multisubsets, C((n, 4)) ("n multichoose 4"). This number is equal to
n(n+1)(n+2)(n+3)/4!
This is evidently O(n^4).
There is no way there can be fewer (or more) executions of the inner part of the nested loops, so this is also a lower bound: Ω(n^4).
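If you want to verify the formula, here is a minimal C++ sketch (the helper names countExecutions and multichoose4 are mine, for illustration) comparing the brute-force count of sum++ executions with n(n+1)(n+2)(n+3)/4!:

#include <cstdio>

// Brute-force count of how many times sum++ runs in the four nested loops.
long long countExecutions(int n) {
    long long sum = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= i; j++)
            for (int k = 1; k <= j; k++)
                for (int m = 1; m <= k; m++)
                    sum++;
    return sum;
}

// Closed form: n multichoose 4 = n(n+1)(n+2)(n+3)/24.
long long multichoose4(long long n) {
    return n * (n + 1) * (n + 2) * (n + 3) / 24;
}

int main() {
    for (int n : {1, 5, 10, 50}) {
        std::printf("n = %2d: counted = %lld, formula = %lld\n",
                    n, countExecutions(n), multichoose4(n));
    }
    return 0;
}

For example, n = 2 gives 5 both ways: {1,1,1,1}, {2,1,1,1}, {2,2,1,1}, {2,2,2,1}, {2,2,2,2}.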

Time complexity of nested while with changing condition

I'm trying to work out the complexity of this loop:
for (int i = 0; i < n; i++) {
    c = i;
    while (c > 1) {
        // O(1) work
        c = c / 2;
    }
}
As the while loop's starting value changes on every pass of the for loop, I don't know how to calculate the resulting series.
I mean, if the loop were
for (int i = 0; i < n; i++) {
    c = n;
    while (c > 1) {
        // O(1) work
        c = c / 2;
    }
}
I know the while loop has a complexity of O(log n) and it repeats n times, so the complexity would be O(n log n).
The problem I have with the previous loop is "c = i". As c = i, the first time (c = 0) the while loop runs 0 times; when c = 1 it runs 0 times again; when c = 2 it runs 1 time; and the series continues 0, 0, 1, 1, 2, 2, 2, 2, 3, ... (the number of while iterations on each pass of the for loop, roughly log2(i)).
So the O(log n) part does not repeat n full times; it repeats a number of times I can't pin down, so I don't know how to solve it.
This needs a bit of math. Given that, for positive a and b:
log(a) + log(b) = log(ab)
here you have
log(1) + log(2) + ... + log(n) = log(1 * 2 * ... * n) = log(n!)
There is a mathematical approximation for log(n!) (Stirling's approximation), namely
log(n!) ~ n log(n) - n + 1
which reveals that O(log(n!)) = O(n log(n)).
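As an illustration (the counting code below is my own sketch, not part of the original answer), you can count the total number of while-loop iterations and compare it against n log2(n):

#include <cmath>
#include <cstdio>

// Total number of while-loop iterations over all i in [0, n).
long long totalIterations(int n) {
    long long total = 0;
    for (int i = 0; i < n; i++) {
        int c = i;
        while (c > 1) {
            total++;  // stands in for the O(1) body
            c = c / 2;
        }
    }
    return total;
}

int main() {
    for (int n : {10, 100, 1000, 100000}) {
        // n * log2(n) is the predicted O(n log n) growth rate.
        std::printf("n = %6d: iterations = %lld, n*log2(n) = %.0f\n",
                    n, totalIterations(n), n * std::log2((double)n));
    }
    return 0;
}

The measured totals stay within a constant factor of n log2(n), matching the O(log(n!)) = O(n log(n)) argument above.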

Understanding the steps in this sum of subarrays algorithm and its run time

I've been staring at this for a while and it's not sinking in. I think I understand at a basic level what's going on. E.g. A = {1, 2, 3, 4}
Sum = A[0] + (A[0] + A[1]) + (A[0] + A[1] + A[2]) + (A[0] + A[1] + A[2] + A[3])
However, I'm not able to understand the steps via the explanation/notation below - or at least, it's a little fuzzy. Could someone please explain the steps/walk through what's happening.
Example 1.4 (Sums of subarrays). The problem is to compute, for each subarray a[j..j+m−1] of size m in an array a of size n, the partial sum of its elements s[j] = Σ_{k=0}^{m−1} a[j+k]; j = 0, ..., n−m. The total number of these subarrays is n−m+1.
At first glance, we need to compute n−m+1 sums, each of m items, so that the running time is proportional to m(n−m+1). If m is fixed, the time still depends linearly on n. But if m grows with n as a fraction of n, such as m = n/2, then T(n) = c(n/2)(n/2 + 1) = 0.25cn^2 + 0.5cn. The relative weight of the linear part, 0.5cn, decreases quickly with respect to the quadratic one as n increases.
Well, the explanation you provided does not seem to match your understanding of the problem. I think your Example 1.4 is really about the following.
A = {1, 2, 3, 4}, m = 3.
Sum = (A[0] + A[1] + A[2]) + (A[1] + A[2] + A[3]).
Here you have n-m+1 (4-3+1 = 2) subsums of m (3) elements each. The described algorithm can be performed in code like this:
function SumOfSubarrays(A, n, m) {
    sum = 0;
    // loop over subarrays
    for (j = 0; j <= n - m; j++) {
        // loop over elements of each subarray
        for (k = 0; k <= m - 1; k++) {
            sum += A[j + k];
        }
    }
    return sum;
}
For fixed m, the time complexity of this algorithm depends linearly on n. But, as Example 1.4 says, if m grows as a fraction of n, the time complexity becomes quadratic.
In total you need m(n−m+1) operations: (n−m+1) iterations of the outer loop, as that is the number of subarrays, and m iterations of the inner loop, as that is the number of elements in each subarray. If m depends on n you have, for example:
m = 0.5 * n
m(n - m + 1) = 0.5n(n - 0.5n + 1) = 0.5n(0.5n + 1) = 0.25n^2 + 0.5n
where the quadratic part grows faster than the linear one.
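As a quick empirical check (the helper countOperations is my own, for illustration), the operation count m(n−m+1) matches what the nested loops actually do:

#include <cstdio>

// Counts the number of additions performed by the nested loops.
long long countOperations(int n, int m) {
    long long ops = 0;
    for (int j = 0; j <= n - m; j++)
        for (int k = 0; k <= m - 1; k++)
            ops++;
    return ops;
}

int main() {
    for (int n : {8, 100, 1000}) {
        int m = n / 2;  // m grows as a fraction of n
        long long predicted = (long long)m * (n - m + 1);
        std::printf("n = %4d, m = %3d: counted = %lld, m(n-m+1) = %lld\n",
                    n, m, countOperations(n, m), predicted);
    }
    return 0;
}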

Is this loop O(nlog(n))?

I have a nested for loop that I am trying to analyze the efficiency of. The loop looks like this:
int n = 1000;
for (int i = 0; i < n; i++) {
    for (int j = 0; j < i; j++) {
        System.out.print("*");
    }
}
I don't believe that this algorithm is O(n^2), because the inner loop does not run n times; it only runs i times. However, it certainly is not O(n). So I hypothesize that it must be between the two efficiencies, which gives O(n log(n)). Is this accurate, or is it really an O(n^2) algorithm and I'm misunderstanding the effect the inner loop has on the efficiency?
Your algorithm's inner statement will run a triangular number of times:
n * (n + 1) / 2
In the above case, n = 999, because the first j loop doesn't run:
(999 * 1000) / 2 = 499500
That is lower than n^2, but it is still O(n^2), because n * (n + 1) / 2 = n^2 / 2 + n / 2. When n is large, you can ignore n / 2 compared to n^2 / 2, and you can also ignore the constant factor 1/2.
I kind of get your doubts, but try to think of it this way: what value will i have in the worst case? The answer is n-1, right? Since complexity is evaluated for the worst case, it turns out to be O(n^2), as n * (n-1) ~ n^2.
The number of iterations is Σ_{i=0}^{n-1} Σ_{j=0}^{i-1} 1. The inner sum is obviously equal to i, and Σ_{i=0}^{n-1} i = n * (n-1) / 2 = O(n^2) is well known.
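If you want to see this empirically, here is a small C++ sketch (my own illustration) that counts the inner statement's executions instead of printing:

#include <cstdio>

int main() {
    int n = 1000;
    long long stars = 0;
    // Same loop structure as the question, counting instead of printing.
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < i; j++) {
            stars++;
        }
    }
    // n * (n - 1) / 2 = 999 * 1000 / 2 = 499500
    std::printf("stars = %lld, n(n-1)/2 = %lld\n",
                stars, (long long)n * (n - 1) / 2);
    return 0;
}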

Asymptotic analysis

I'm having trouble understanding how to make this into a formula.
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= N; j += i) {
    }
}
I realize what happens: each time i increases, j takes a bigger step, so the inner loop runs fewer times.
For i = 1 you get j = 1, 2, 3, ..., 100 (taking N = 100)
For i = 2 you get j = 1, 3, 5, ..., 99
I'm not sure how to think about this in terms of Big-Theta.
The number of values j takes is N, N/2, N/3, N/4, ..., N/N (my conclusion)
What is the best way to think of this as a function of N?
So your question actually reduces to: "What is the tight bound for the harmonic series 1/1 + 1/2 + 1/3 + ... + 1/N?" The answer is log N (you can treat the discrete sum as a continuous one and notice that the integral of 1/x is log x).
This harmonic series is the formula for the whole algorithm (as you correctly concluded).
So, your sum:
N + N/2 + N/3 + ... + N/N = N * (1 + 1/2 + 1/3 + ... + 1/N) = Theta(N * log N)
So the tight bound for the algorithm is Theta(N log N).
See the rigorous mathematical proof here (the "Integral Test" and "Rate of Divergence" parts).
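As a rough empirical check (the counting loop is my own sketch, not part of the answer), you can compare the actual number of inner iterations with N log N:

#include <cmath>
#include <cstdio>

int main() {
    for (int N : {100, 10000, 1000000}) {
        long long count = 0;
        // Same structure as the question's loops, counting inner iterations.
        for (int i = 1; i <= N; i++)
            for (int j = 1; j <= N; j += i)
                count++;
        // N * ln(N) is the predicted Theta(N log N) growth rate.
        std::printf("N = %7d: iterations = %lld, N*ln(N) = %.0f\n",
                    N, count, N * std::log((double)N));
    }
    return 0;
}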
Well, you can methodically use Sigma notation:
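In outline (a sketch reconstructed from the surrounding context, not the original author's working), the derivation is:
Σ_{i=1}^{N} Σ_{j ∈ {1, 1+i, 1+2i, ...}, j ≤ N} 1 = Σ_{i=1}^{N} ⌈N/i⌉ ≤ Σ_{i=1}^{N} (N/i + 1) = N * H_N + N = Θ(N log N)
where H_N = 1 + 1/2 + ... + 1/N = Θ(log N) is the N-th harmonic number.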
