So I have a loop embedded inside a loop here:
int a, b, n;
for (a = 1; a <= n; a++) {
    for (b = 0; b < n; b += a)
        cout << "hey" << endl;
}
n is a power of 2
I'm trying to understand how to calculate the time complexity of this; however, I'm having trouble figuring out the Big-Theta notation for it.
I know that the outer loop runs in O(n) time, but I'm not sure about the inner loop, due to the b += a. I know that if I had the time for both loops, I could multiply them to get the Big-Theta time of the function, but I'm not sure what the inner loop is running at.
When I plug in sample n's (i.e. 2, 4, 8, 16), the inner loop runs 3, 9, 24, 61 times in total, respectively. I don't see how these values correlate.
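Here is the minimal sketch I used to get those counts (the total counter is just instrumentation standing in for the cout):

#include <iostream>
using namespace std;

int main() {
    // Count the total inner-loop iterations for sample powers of 2.
    for (int n = 2; n <= 16; n *= 2) {
        long long total = 0;
        for (int a = 1; a <= n; a++)
            for (int b = 0; b < n; b += a)
                total++;                // one "hey" would be printed here
        cout << "n = " << n << ": " << total << endl;  // 3, 9, 24, 61
    }
}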
Edit:
OK, I see what you are saying, but I'm trying to compare it to this function. What would the time for this function be? Then I can compare the speed of the two:
int a, b, n;
int z = 1;
for (a = 0; a < n; a++) {
    for (b = 0; b < n; b = b + z)
        cout << "hey" << endl;
    z = z * 2;
}
You can see that, for each value of a, the inner loop runs the number of times that a fits into n, i.e. the biggest integer k that satisfies

$$(k - 1)\,a < n$$

It corresponds to a ceiling function: the inner loop runs ⌈n/a⌉ times. So the total number of iterations of the inner loop is

$$\sum_{a=1}^{n} \left\lceil \frac{n}{a} \right\rceil \;\le\; n + \sum_{a=1}^{n} \left\lfloor \frac{n}{a} \right\rfloor$$

The second sum is the Divisor summatory function, which can be written

$$D(n) = \sum_{a=1}^{n} \left\lfloor \frac{n}{a} \right\rfloor = n \ln n + (2\gamma - 1)\,n + O(\sqrt{n})$$

So the time complexity of the whole code is O(n log n).
EDIT
About the second piece of code you posted, the calculations are simpler. The inner loop runs ⌈n/z⌉ times, with z = 2^a on the a-th pass, so if n = 2^k the total number of iterations is

$$\sum_{a=0}^{n-1} \left\lceil \frac{n}{2^a} \right\rceil = \sum_{a=0}^{k} 2^{\,k-a} + (n - 1 - k) = (2n - 1) + (n - 1 - \log_2 n) = 3n - 2 - \log_2 n$$
So the second one is faster, since it's O(n).
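If you want to compare them empirically, here is a minimal sketch that counts both iteration totals (count1 and count2 are instrumentation counters I'm adding; the doubling of z is capped to avoid integer overflow, which doesn't change the counts because once z > n the inner loop runs exactly once either way):

#include <iostream>
using namespace std;

int main() {
    for (int n = 2; n <= (1 << 16); n *= 2) {
        long long count1 = 0, count2 = 0;
        // First snippet: the inner step grows with the outer counter a.
        for (int a = 1; a <= n; a++)
            for (int b = 0; b < n; b += a)
                count1++;
        // Second snippet: the inner step z doubles after every outer pass.
        long long z = 1;
        for (int a = 0; a < n; a++) {
            for (long long b = 0; b < n; b += z)
                count2++;
            if (z <= n)
                z *= 2;  // cap: once z > n the inner loop runs once anyway
        }
        cout << n << ": " << count1 << " (~ n log n)  vs  "
             << count2 << " (~ 3n)" << endl;
    }
}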
So many loops! I'm stuck on counting how many times the last loop runs.
I also don't know how to simplify summations to get big Theta. Please, somebody help me out!
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has two stages:
1. You guess the answer.
2. You prove it.
In easy problems, step 1 is easy and then you skip step 2 or explain it away as "obvious". This problem is a bit more tricky, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck at your proof.
The outer loop goes from n down to 1, so the number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so it runs from 1 to n:
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        perform j steps
    }
}
The run-time of this new middle loop, including the inner loop, is 1 + 2 + 4 + ... + n, or approximately 2n, which is O(n). Together with the outer loop, you get O(n²). This is my guess.
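That estimate is just a geometric series (assuming, for simplicity, that n is a power of 2):

$$1 + 2 + 4 + \dots + n = \sum_{t=0}^{\log_2 n} 2^t = 2n - 1 = O(n)$$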
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique - edit the program in 2 different ways, one which makes its run-time smaller and one which makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has the same O-rate.
Here is a "smaller" or "faster" version of the code:
do n/2 iterations; set i = n/2 for each of them {
    do just one iteration, where you set j = i {
        perform j steps
    }
}
This code is faster because each loop does less work. It does something like n²/4 iterations.
Here is a "greater" or "slower" version of the code:
do n iterations; set i = n for each of them {
    for (int j = 1; j <= 2 * n; j *= 2) {
        perform j steps
    }
}
I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1+2+4+...+n+2n, which is something like 4n. So the number of iterations for the whole program is something like 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as "upper bound", while sometimes it means "upper or lower bound, depending on context". In my answer O means "both upper and lower bound".
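If you want to check the sandwich empirically, here is a minimal sketch (the driver loop is my addition) that runs fun(n) from the question and prints the ratio sum/n², which stays between the two constants:

#include <iostream>
using namespace std;

// The function from the question, with the missing semicolons added.
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--)
        for (int j = i; j < n; j *= 2)
            for (int k = 0; k < j; k++)
                sum += 1;
    return sum;
}

int main() {
    for (int n = 64; n <= 4096; n *= 2)
        cout << "n = " << n << ", fun(n)/n^2 = "
             << double(fun(n)) / n / n << endl;
    // The ratio settles between 1/4 and 4, consistent with
    // n^2/4 <= runtime <= 4n^2.
}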
These programs do the calculation $\sum_{i=0}^{n-1} a_i x^i$, where n = a.length.
I am trying to figure out big-O calculations. I have done a lot of study, but I am having a problem getting this down. I understand that big O is the worst-case scenario, or an upper bound. From what I can figure, program one has two for loops: one that runs for the length of the array, and the other that runs up to the value of the first loop's counter. I think that if both ran the full length of the array, then it would be quadratic, O(N^2). Since the second loop only runs the full length of the array once, I am thinking O(N log N).
The second program has only one for loop, so it would be O(N).
Am I close? If not, please explain to me how I would calculate this. Since this is in the homework, I am going to have to be able to figure out something like this on the test.
Program 1
// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;
        }
        result += a[i] * b;
    }
    return result;
}
Program 2
// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). The inner loop runs 0, 1, 2, 3, ..., N - 1 times as i increases, and this sum is N(N-1)/2, which is O(N^2).
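Written out as a summation over the inner-loop iterations:

$$\sum_{i=0}^{N-1} i = \frac{N(N-1)}{2} = \Theta(N^2)$$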
The second one is O(N). It is simply iterating through the length of the array.
The complexity of a program is basically the number of instructions executed.
When we talk about an upper bound, it means we are considering the worst case, which is what every programmer should take into consideration.
Let n = a.length;
Now, coming back to your question: you are saying that the time complexity of the first program should be O(n log n), which is wrong. When i = a.length - 1, the inner loop iterates from j = 0 up to j = i - 1, i.e. nearly the full length of the array, not just once. Hence the complexity is O(n^2).
You are correct in judging the time complexity of the second program, which is O(n).
This is my first question on Stack Overflow. I've been solving some exercises from "Algorithm Design" by Goodrich and Tamassia. However, I'm quite clueless about this problem: unsure where to start and how to proceed. Any advice would be great. Here's the problem:
Boolean matrices are matrices such that each entry is 0 or 1, and matrix multiplication is performed by using AND for * and OR for +. Suppose we are given two N×N random Boolean matrices A and B, so that the probability that any entry in either is 1 is 1/k. Show that if k is a constant, then there is an algorithm for multiplying A and B whose expected running time is O(n^2). What if k is n?
Matrix multiplication using the standard iterative approach is O(n³), because you have to iterate over n rows and n columns, and for each element do a vector multiply of one of the rows and one of the columns, which takes n multiplies and n - 1 additions.
Pseudocode to multiply matrix a by matrix b and store the result in matrix c:
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        int sum = 0;
        for (m = 0; m < n; m++)
        {
            sum += a[i][m] * b[m][j];
        }
        c[i][j] = sum;
    }
}
For a Boolean matrix, as specified in the problem, AND is used in place of multiplication and OR in place of addition, so it becomes this:
for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        boolean value = false;
        for (m = 0; m < n; m++)
        {
            value = value || (a[i][m] && b[m][j]);
            if (value)
                break;  // early out
        }
        c[i][j] = value;
    }
}
The thing to notice here is that once our boolean "sum" is true, we can stop calculating and early out of the innermost loop, because ORing any subsequent values with true is going to produce true, so we can immediately know that the final result will be true.
For any constant k, the number of operations before we can do this early out (assuming the values are random) is going to depend on k and will not increase with n. At each iteration there is a (1/k)² chance that the loop will terminate, because we need two 1s for that to happen and the chance of each entry being a 1 is 1/k. The number of iterations before terminating follows a geometric distribution where p is (1/k)², and the expected number of "trials" (iterations) before "success" (breaking out of the loop) doesn't depend on n (except as an upper bound for the number of trials), so the innermost loop runs in constant time (on average) for a given k, making the overall algorithm O(n²). The geometric distribution formula should give you some insight about what happens if k = n. Note that in the formula given on Wikipedia, k is the number of trials.
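Spelling that hint out (my reading of it, using the standard expected value of a geometric distribution with success probability p):

$$E[\text{trials}] = \frac{1}{p} = \frac{1}{(1/k)^2} = k^2$$

For constant k this is O(1) expected work in the innermost loop, giving O(n²) overall. For k = n, the expected wait is n² trials, but the innermost loop is bounded by n iterations, so the early out almost never fires and the expected running time degrades back toward O(n³).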
I have been practicing analyzing algorithms lately. I feel like I have a pretty good understanding of analyzing non-recursive algorithms, and I have just begun to work toward a full understanding of recursive algorithms as well, although I have not had a formal check on my methods or on whether what I have been doing is indeed correct.
Would it be too much to ask for someone to check a few algorithms that I have implemented and analyzed, and to see if my understanding is along the right lines or if I am completely off?
Here they are:
1)
sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < i*i; j++) {
        if (j % i == 0) {
            for (k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
My analysis of this one was O(n^5), due to:

$$\sum_{i=0}^{n} \sum_{j=0}^{i^2} \sum_{k=0}^{j} 1$$

which evaluated to:

$$\frac{1}{2}\left(\frac{n^5}{5} + \frac{n^4}{2} + \frac{n^3}{3} - \frac{n}{30}\right) + \frac{1}{2}\left(\frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}\right) + \frac{1}{2}\left(\frac{n^3}{3} + \frac{n^2}{2} + \frac{n}{6}\right) + n + 1$$

Hence it is O(n^5).
Is this correct as an evaluation of the loops as a triple summation? I have assumed that the if statement will always pass, for worst-case complexity. Is this a correct assumption for the worst case?
2)
void tonyblair(int n, int a) {
    if (a < 12) {
        for (int i = 0; i < n; i++) {
            System.out.println("*");
        }
        tonyblair(n - 1, a);
    } else {
        for (int k = 0; k < 3000; k++) {
            for (int j = 0; j < n * k; j++) {
                System.out.println("#");
            }
        }
    }
}
My analysis of this algorithm is O(infinity), due to the infinite recursion in the if statement when its condition is true, which would be the worst case. Although, for pure analysis, I also analyzed what happens if this were not true and the if branch did not run. I then got a complexity of O(nk) due to:

$$\sum_{k=0}^{3000} \sum_{j=0}^{nk} 1$$

which then evaluated to nk(3001) + 3001. Hence it is O(nk), where k is not discarded, because it controls the number of iterations of the loop.
Number 1
I can't tell how you've derived your formula. Usually adding terms happens when there are multiple steps in an algorithm, such as precomputing data and then looking up values from the data. Instead, nested for loops implies multiplication. Also, the worst case is the best case for this snippet of code, because given a value of n, sum will be the same at the end.
To find the complexity, we want to find the number of times that the inner loop is evaluated. Summations are often easy to solve if they go from 1 to n, so I'm going to drop the 0s from them later on. If i is 0, the middle loop won't run, and if j is 0, the inner loop won't run. We can rewrite the code equivalently as:
sum = 0;
for (i = 1; i < n; i++)
{
    for (j = 1; j < i*i; j++)
    {
        if (j % i == 0)
        {
            for (k = 0; k < j; k++)
            {
                sum++;
            }
        }
    }
}
I could make my life harder by forcing the outer loop to start at 2, but I'm not going to. The outer loop now runs from 1 to n - 1. The middle loop runs based on the current value of i, so we need to do a summation:

$$\sum_{i=1}^{n-1} (\text{number of inner-loop executions for this } i)$$

The middle for loop always goes to (i^2 - 1), and j will only be divisible by i a total of (i - 1) times (i, i*2, i*3, ..., i*(i-2), i*(i-1)). With this, we get:

$$\sum_{i=1}^{n-1} \sum_{j=1}^{i-1} (\text{iterations of the inner loop})$$

The inner loop then executes j times. The j in our summation is not the same as the j in the code, though. The j in the summation represents each time the if test passes and the inner loop runs. Each time that happens, the j in the code will be i * (number of passes so far) = i * (the j in the summation). Therefore, we have:

$$\sum_{i=1}^{n-1} \sum_{j=1}^{i-1} i \cdot j$$

We can move the i to in between the two summations, as it is a constant for the inner summation. Then, the formula for the sum of 1 to n is well known: n(n+1)/2. Because the inner summation only goes to i - 1, we must subtract out the last term, i. This gives:

$$\sum_{i=1}^{n-1} i \left( \frac{i(i+1)}{2} - i \right) = \sum_{i=1}^{n-1} \frac{i^3 - i^2}{2}$$

The summations for the sum of squares and the sum of cubes are also well known. Keeping in mind that we are only summing to n - 1 in both cases, we must remember to subtract n^3 and n^2, respectively, and we get:

$$\frac{1}{2} \left( \frac{n^2(n+1)^2}{4} - n^3 \right) - \frac{1}{2} \left( \frac{n(n+1)(2n+1)}{6} - n^2 \right)$$

This is obviously Θ(n^4). If we solve it all the way, we get:

$$\frac{n^4}{8} - \frac{5n^3}{12} + \frac{3n^2}{8} - \frac{n}{12}$$
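As a quick sanity check of that closed form (a sketch I'm adding, not part of the derivation), the loop count and the formula agree exactly:

#include <iostream>
using namespace std;

int main() {
    for (long long n = 2; n <= 64; n *= 2) {
        long long sum = 0;
        // The triple loop (starting at 1, as rewritten above).
        for (long long i = 1; i < n; i++)
            for (long long j = 1; j < i * i; j++)
                if (j % i == 0)
                    for (long long k = 0; k < j; k++)
                        sum++;
        // n^4/8 - 5n^3/12 + 3n^2/8 - n/12, over a common denominator of 24.
        long long formula = (3*n*n*n*n - 10*n*n*n + 9*n*n - 2*n) / 24;
        cout << "n = " << n << ": loops = " << sum
             << ", formula = " << formula << endl;
    }
}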
Number 2
For the last one, it is in fact O(infinity) if a < 12, because of the if statement. Well, technically everything is O(infinity), because big O only provides an upper bound on runtime. If a < 12, it is also Omega(infinity) and Theta(infinity). If only the else branch runs, then we have the summation from 1 to 2999 of k*n:

$$\sum_{k=1}^{2999} k \cdot n = n \sum_{k=1}^{2999} k = n \cdot \frac{2999 \cdot 3000}{2}$$
It's very important to notice that the summation from 1 to 2999 is a constant (it's 4498500). No matter how large a constant is, it's still a constant, and not dependent on n. We will end up throwing it out of the runtime calculations. Sometimes, when a theoretically fast algorithm has a large constant, it is practically slower than other algorithms that are theoretically slow. One example I can think of is Chazelle's linear-time triangulation algorithm. No one has ever implemented it. In any case, we have 4498500 * n, which is:

$$4498500\,n = \Theta(n)$$
Hi, I have two algorithms that need their complexity worked out. I've had a try myself first: O(N^2) and O(N^3). Here they are:
Treat y as though it's declared 'y = new int[N][N]' and B as though 'B = new int[N][N]'....
int x(int[][] y)
{
    int z = 0;
    for (int i = 0; i < y.length; i++)
        z = z + y[i].length;
    return z;
}

int A(int[][] B)
{
    int c = 0;
    for (int i = 0; i < B.length; i++)
        for (int j = 0; j < B[i].length; j++)
            c = c + B[i][j];
    return c;
}
Thanks a lot :)
To calculate the algorithmic complexity, you need to tally up the number of operations performed in the algorithm (big-O notation is concerned with the worst-case scenario).
In the first case, you have a loop that is performed N times (y.length == N). Inside the loop you have one operation (executed on each iteration). This is linear in the size of the input, so x runs in O(N).
Note: calculating y[i].length is a constant-time operation.
In the second case, you have the outer loop that is performed N times (just like in the first case), and in each iteration another loop of the same length (N == B[i].length) is executed. Inside the inner loop you have one operation (executed on each iteration of the inner loop). This is O(N*N) == O(N^2) overall.
Note: calculating B[i][j] is a constant-time operation.
Note: remember that for big O, only the fastest-growing term matters, so additive constants can be ignored (e.g. the initialization of the return value and the return instruction are both operations, but they are constants and are not executed in a loop; the term depending on N grows faster than any constant).
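As a worked tally under these rules (c0, c1, c2 stand for arbitrary constant costs of the individual operations; this is just an illustration of the counting):

$$T_x(N) = c_0 + c_1 N = O(N), \qquad T_A(N) = c_0 + c_1 N + c_2 N^2 = O(N^2)$$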