These programs compute the polynomial sum ∑_{i=0}^{N-1} a_i·x^i.
I am trying to figure out big-O calculations. I have done a lot of study but I am still having trouble getting this down. I understand that big O describes the worst-case scenario, an upper bound. From what I can tell, program one has two for loops: one runs the length of the array, and the inner one runs from zero up to the current index of the outer loop. I think that if both ran the full length of the array it would be quadratic, O(N^2). Since the inner loop only runs part of the way through the array on each pass, I am thinking O(N log N).
The second program has only one for loop, so it would be O(N).
Am I close? If not, please explain how I should calculate this. Since this is homework, I will have to be able to figure out something like this on the test.
Program 1

// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;
        }
        result += a[i] * b;
    }
    return result;
}
Program 2

// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). Over the outer iterations, the inner loop runs 0, 1, 2, ..., N − 1 times; this sum is exactly N(N − 1)/2, which is O(N^2).
The second one is O(N). It is simply iterating through the length of the array.
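If the closed form feels abstract, you can verify it empirically. Here is a small sketch (my own instrumentation, not part of the homework) that counts the passes of program 1's inner loop and compares them with N(N − 1)/2:

```java
// Sketch: count the passes of program 1's inner loop for an array of length n.
public class LoopCount {
    static long innerPasses(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < i; j++) {
                ops++; // one b *= x multiplication per pass
            }
        }
        return ops; // 0 + 1 + ... + (n - 1) = n(n - 1)/2
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 100, 1000}) {
            long expected = (long) n * (n - 1) / 2;
            System.out.println("n=" + n + ": " + innerPasses(n)
                    + " passes, n(n-1)/2 = " + expected);
        }
    }
}
```

For n = 10 this counts 45 passes, matching 10·9/2, and the quadratic growth is visible as n increases.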
The complexity of a program is essentially the number of instructions it executes as a function of the input size. When we talk about an upper bound, we mean the worst case, which is what every programmer should take into consideration.
Let n = a.length;
Now, coming back to your question: you say the time complexity of the first program should be O(n log n), which is wrong. Even when i = a.length − 1, the inner loop iterates from j = 0 up to j = i − 1, so the total work is 0 + 1 + ... + (n − 1), and the complexity is O(n^2).
You are correct in judging the time complexity of the second program: it is O(n).
Related
For the "longest common prefix" problem on Leetcode, the official solution says this approach is O(S), where S is the total number of characters across all strings. https://leetcode.com/problems/longest-common-prefix/solution/
Solution:
public String longestCommonPrefix(String[] strs) {
    if (strs.length == 0) return "";
    String prefix = strs[0];
    for (int i = 1; i < strs.length; i++)
        while (strs[i].indexOf(prefix) != 0) {
            prefix = prefix.substring(0, prefix.length() - 1);
            if (prefix.isEmpty()) return "";
        }
    return prefix;
}
Isn't a while loop inside of a for loop n^2 though? I thought all nested loops were n^2.
For O(n) time complexity we must first define n. Typically it is the length of the input array; here, however, S is defined as the total number of characters, because the length of the input array alone cannot accurately describe the running time. The complexity would be O(n^2) if both the for loop and the while loop iterated over the same array of length n.
Isn't a while loop inside of a for loop n^2 though?
Not generally, but in this case you are right: the algorithm has a worst-case time complexity of O(𝑛²). The article is wrong.
Let's take as input the following two strings (respectively 10 and 20 characters long):
abbbbbbbbb
aaaaaaaaaacccccccccc
The outer loop will only make one iteration.
The inner loop will make 10 iterations, each time performing indexOf and then making the prefix string one character shorter, from its initial size of 10 until it is just "a".
Java's indexOf method has in general a O(𝑛𝑚) time complexity. The first time it executes, it will try to match the prefix string in the second string at index 0, 1, ..., 8, 9, each time failing at the second character comparison (of "b" with another letter). So we have 20 character-to-character comparisons here. The second time it executes, prefix has one less "b", and so there are now 18 comparisons, then 16, then 14, ... 4, and then the last iteration (when prefix is just one character): 1 (success!), making a total of 109 character-to-character comparisons!
Clearly this leads to a quadratic complexity.
The problem is with the indexOf method. The code should use the startsWith method instead, as it makes no sense to look at other indices than index 0. Then the algorithm will have a O(𝑛) complexity. This is easy to see: every character in the strings S2 ... Sn is compared at most once with a character from S1.
So the code should have:
while (!strs[i].startsWith(prefix)) {
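For completeness, here is a sketch of the whole method with that one-line change applied (the rest is unchanged from the Leetcode solution quoted above):

```java
public class Prefix {
    public static String longestCommonPrefix(String[] strs) {
        if (strs.length == 0) return "";
        String prefix = strs[0];
        for (int i = 1; i < strs.length; i++) {
            // startsWith only ever compares at index 0, unlike indexOf,
            // which also tries every later starting position
            while (!strs[i].startsWith(prefix)) {
                prefix = prefix.substring(0, prefix.length() - 1);
                if (prefix.isEmpty()) return "";
            }
        }
        return prefix;
    }

    public static void main(String[] args) {
        String[] strs = {"abbbbbbbbb", "aaaaaaaaaacccccccccc"};
        System.out.println(longestCommonPrefix(strs)); // prints "a"
    }
}
```

On the two example strings from above, the bad indexOf calls disappear: each shrink step now fails after comparing at most two characters at index 0.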
I thought all nested loops were n^2.
No, that is not true in general. Since 𝑛 here is the total number of characters, neither the outer loop nor the inner loop has a number of iterations that corresponds to that 𝑛.
Here are some other examples of nested loops that do not represent a complexity of O(𝑛²)
for (int i = 0; i < n; i++)
    for (int j = 1; j < n; j *= 2)
        //...
This is O(𝑛log𝑛)
Or:
for (int i = 0; i < n; i++)
    for (int j = 0; j < n; j++) {
        if (j % 3 == i % 3) break;
        //...
    }
This is O(𝑛)
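To convince yourself, you can count the iterations of both nested-loop patterns directly. A quick sketch (my own counters added to the loops above):

```java
public class NestedCounts {
    // first pattern: the inner loop doubles j, so it runs ~log2(n) times per outer pass
    static long logLoop(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 1; j < n; j *= 2)
                ops++;
        return ops; // ~ n * log2(n)
    }

    // second pattern: the inner loop breaks after at most 3 passes
    static long breakLoop(int n) {
        long ops = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) {
                ops++;
                if (j % 3 == i % 3) break;
            }
        return ops; // at most 3n
    }

    public static void main(String[] args) {
        System.out.println(logLoop(1024));  // 1024 * 10 = 10240, since log2(1024) = 10
        System.out.println(breakLoop(9));   // (1 + 2 + 3) * 3 = 18
    }
}
```

For n = 1024 the first pattern does 10240 inner iterations, nowhere near n² ≈ 10⁶; the second never exceeds 3n.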
So many loops; I am stuck counting how many times the innermost loop runs. I also don't know how to simplify the summations to get big Theta. Could somebody please help me out?
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has 2 stages:
You guess the answer
You prove it
In easy problems, step 1 is easy and then you skip step 2 or explain it away as "obvious". This problem is a bit more tricky, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck at your proof.
The outer loop goes from n down to 1, so its number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so it runs from 1 to n.
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        perform j steps
    }
}
The run-time of this new middle loop, including the inner loop, is 1+2+4+...+n, or approximately 2*n, which is O(n). Together with outer loop, you get O(n²). This is my guess.
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique - edit the program in 2 different ways, one which makes its run-time smaller and one which makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has the same O-rate.
Here is a "smaller" or "faster" code:
do n/2 iterations; set i=n/2 for each of them {
    do just one iteration, where you set j = i {
        perform j steps
    }
}
This code is faster because each loop does less work. It does something like n²/4 iterations.
Here is a "greater" or "slower" code:
do n iterations; set i=n for each of them {
    for (int j = 1; j <= 2 * n; j *= 2) {
        perform j steps
    }
}
I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1+2+4+...+n+2n, which is something like 4n. So the number of iterations for the whole program is something like 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as "upper bound", while sometimes it means "upper or lower bound, depending on context". In my answer O means "both upper and lower bound".
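You can also check the sandwich numerically. A quick sketch (my own Java translation of the code above, using the returned sum as the count of innermost iterations):

```java
public class FunCheck {
    // direct translation of fun(n): sum counts the innermost iterations
    static long fun(int n) {
        long sum = 0;
        for (int i = n; i > 0; i--)
            for (long j = i; j < n; j *= 2)
                for (long k = 0; k < j; k++)
                    sum += 1;
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 1000}) {
            long s = fun(n);
            // sandwich: n^2/4 <= runtime <= 4n^2
            System.out.println("n=" + n + ": " + s
                    + " (lower " + (long) n * n / 4 + ", upper " + 4L * n * n + ")");
        }
    }
}
```

For every n tried, the count lands between n²/4 and 4n², as the proof predicts.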
I'm having some trouble calculating the bigO of this algorithm:
public void foo(int[] arr){
    int count = 0;
    for(int i = 0; i < arr.length; i++){
        for(int j = i; j > 0; j--){
            count++;
        }
    }
}
I know the first for loop is O(n) time, but I can't figure out what the nested loop is. I was thinking O(log n) but I don't have solid reasoning. I'm sure I'm missing something pretty easy, but some help would be nice.
Let n denote the length of the array.
If you consider the second loop alone, it is just a function of i: since it iterates over all values from i down to 1, its cost is O(i). Since i < n, you can say that it is O(n). However, there is no logarithm involved: in the worst case, i.e. i = n − 1, you perform n − 1 iterations.
As for evaluating the complexity of both loops together, observe that for each value of i, the second loop goes through i iterations, so the total number of iterations is

1 + 2 + ... + (n − 1) = n(n − 1)/2 = (1/2)(n² − n)
which is O(n^2).
If we let c be the number of times count is incremented in the inner loop for a given i, then the total number of increments is ∑_{i=0}^{n-1} i = n(n − 1)/2. As you can see, the total time complexity of the algorithm is O(n^2).
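To double-check the closed form, here is a sketch (my own counter version of the loops above) that returns count and compares it with n(n − 1)/2:

```java
public class FooCount {
    // same loops as foo above, but returning the count
    static int countedFoo(int n) {
        int count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i; j > 0; j--)
                count++;
        return count;
    }

    public static void main(String[] args) {
        int n = 10;
        System.out.println(countedFoo(n) + " == " + n * (n - 1) / 2); // prints 45 == 45
    }
}
```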
I am reading some information on time complexity and I'm quite confused as to how the following time complexities are achieved and if there is a particular set of rules or methods for working this out?
1)
Input: int n
for(int i = 0; i < n; i++){
    print("Hello World, ");
}
for(int j = n; j > 0; j--){
    print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n² + 3n + 2 = O(n²)
For a given algorithm, time complexity, or Big O, is a way to provide a fair estimate of the total number of elementary operations performed by the algorithm as a function of the input size n.
Type-1
Let's say you have an algorithm like this:

a = n + 1;
b = a * n;

There are 2 elementary operations in the above code. No matter how big your n is, the computer will always perform exactly these 2 operations; the algorithm does not depend on the size of the input, so the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
    a = a + i;
}
I hope you can see why the Big-O here is O(n): the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
    print("Hello World, ");
}

//Loop-2
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++) {
        x = x + j;
    }
}
As you can see, loop-1 is O(n) and loop-2 is O(n^2), so it feels like the total complexity should be O(n) + O(n^2). But no: the time complexity of the above code is O(n^2). Why? Because we want a fair estimate of the elementary operations performed for a given input size n, one that is easy for another person to understand. By this logic, O(n) + O(n^2) becomes O(n^2), and O(n^2) + O(n^3) + O(n^4) becomes O(n^4)!
Again, you may ask: how do all the lower powers become so insignificant next to a higher power that we can completely omit them when describing the complexity of our algorithm to another human?
I will show the reason for the case O(n) + O(n^2) = O(n^2). Let's say n = 1000. Then the exact count for the O(n) part is 1,000 operations, and the exact count for the O(n^2) part is 1000 × 1000 = 1,000,000, so the O(n^2) part does 1000 times more work than the O(n) part. Your program will spend most of its execution time in the O(n^2) part, so it is not worth mentioning that your algorithm also has an O(n) part.
PS. Pardon my English :)
In the first example, you iterate n times, twice: the first loop runs i from 0 up to n, and the second runs j from n down to 0. So, to simplify, we can say that it took you 2n steps. When dealing with Big O notation, keep in mind that we care only about the bounds:
As a result, O(2n) = O(n), and more generally O(an + b) = O(n).
Input: int n                  // operation 1
for(int i = 0; i < n; i++){   // operation 2
    print("Hello World, ");   // operation 3
}
for(int j = n; j > 0; j--){   // operation 4
    print("Hello World");     // operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three operations per iteration: checking whether i is less than n, printing "Hello World", and incrementing i.
Inside the second loop, we also have three internal operations.
So, the total number of operations we need is 3n (for the first loop) + 3n (for the second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, the constants a and b have little impact when n is very large. So your time complexity becomes O(6n + 5) = O(n).
You can use the same logic for the second example keeping in mind that two nested loops take n² instead of n.
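The bookkeeping above can be mechanized. Here is a sketch (my own model of the tally, charging 3 operations per loop iteration and 5 one-time operations, per the accounting above) that reproduces 6n + 5:

```java
public class OpCount {
    // models the tally: 3 ops per iteration of each loop, 5 ops outside them
    static long tightCount(int n) {
        long ops = 5;                          // one-time operations outside the loops
        for (int i = 0; i < n; i++) ops += 3;  // check, print, increment
        for (int j = n; j > 0; j--) ops += 3;  // check, print, decrement
        return ops;
    }

    public static void main(String[] args) {
        System.out.println(tightCount(1000)); // prints 6005 = 6*1000 + 5
    }
}
```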
I will slightly modify John's answer. Defining n is one constant operation; defining the integer i and assigning it 0 is 2 constant operations; defining the integer j and assigning it n is another 2 constant operations. Checking the loop conditions for i and j, the increments, and the print statements all depend on n, so the total is 3n + 3n + 5 = 6n + 5. Since no statement can be skipped during execution, the average-case running time equals the worst-case running time, which is O(n).
Hi, I have two algorithms whose complexity I need to work out. I've had a try myself first: O(N^2) and O(N^3). Here they are.
Treat y as though it's declared int[N][N], and B likewise as int[N][N].
int x(int[][] y)
{
    int z = 0;
    for (int i = 0; i < y.length; i++)
        z = z + y[i].length;
    return z;
}

int A(int[][] B)
{
    int c = 0;
    for (int i = 0; i < B.length; i++)
        for (int j = 0; j < B[i].length; j++)
            c = c + B[i][j];
    return c;
}
Thanks a lot :)
To calculate the algorithmic complexity, you need to tally up the number of operations performed by the algorithm (big-O notation is concerned with the worst-case scenario).
In the first case, you have a loop that runs N times (y.length == N). Inside the loop there is one operation, executed on each iteration. This is linear in the input size, so x is O(N).
Note: calculating y[i].length is a constant length operation.
In the second case, the outer loop runs N times (just like in the first case), and each of its iterations executes another loop of the same length (N == B[i].length). Inside the inner loop there is one operation, executed on each inner iteration. Overall this is O(N*N) == O(N^2).
Note: calculating b[i][j] is a constant length operation
Note: remember that for big-O, only the fastest-growing term matters, so additive constants can be ignored (e.g. the initialization of the return value and the return instruction are both operations, but are constants and not executed in a loop; the term depending on N grows faster than constant)
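As a sanity check, the two routines can be instrumented with counters (a quick sketch of my own) to show N versus N·N body executions:

```java
public class TallyOps {
    // first algorithm: the loop body runs once per row
    static long opsX(int n) {
        int[][] y = new int[n][n];
        long ops = 0;
        for (int i = 0; i < y.length; i++)
            ops++;                   // z = z + y[i].length would execute here
        return ops;                  // = N
    }

    // second algorithm: the inner body runs once per cell
    static long opsA(int n) {
        int[][] B = new int[n][n];
        long ops = 0;
        for (int i = 0; i < B.length; i++)
            for (int j = 0; j < B[i].length; j++)
                ops++;               // c = c + B[i][j] would execute here
        return ops;                  // = N * N
    }

    public static void main(String[] args) {
        System.out.println(opsX(50) + " vs " + opsA(50)); // prints 50 vs 2500
    }
}
```

Doubling N doubles the first count but quadruples the second, which is the practical signature of O(N) versus O(N²).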