I have written two different algorithms to calculate Pythagorean Triples:
import java.util.*;

class Untitled {
    public static void main(String[] args) {
        int n = 20;
        System.out.println("--------------------");
        algorithmOne(n);
        System.out.println("--------------------\n");
        algorithmTwo(n);
        System.out.println("--------------------");
    }

    public static void algorithmOne(int n) {
        long startTime = System.nanoTime();
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                for (int k = 1; k <= n; k++) {
                    if (Math.pow(i, 2) + Math.pow(j, 2) == Math.pow(k, 2)) {
                        System.out.println(i + "," + j + "," + k);
                    }
                }
            }
        }
        System.out.println("Run Time: " + (System.nanoTime() - startTime) / 1000000 + " milliseconds");
    }

    public static void algorithmTwo(int n) {
        long startTime = System.nanoTime();
        ArrayList<Integer> squares = new ArrayList<>();
        // O(n)
        for (int i = 1; ; i++) {
            if ((int) Math.sqrt(i) > n) {
                break;
            }
            if (Math.sqrt(i) == (int) Math.sqrt(i)) {
                squares.add(i);
            }
        }
        // O(n^3)
        for (int i = 0; i < squares.size(); i++) {
            for (int j = 0; j < squares.size(); j++) {
                for (int k = 0; k < squares.size(); k++) {
                    if (squares.get(i) + squares.get(j) == squares.get(k)) {
                        System.out.println((int) Math.sqrt(squares.get(i)) + "," + (int) Math.sqrt(squares.get(j)) + "," + (int) Math.sqrt(squares.get(k)));
                    }
                }
            }
        }
        System.out.println("Run Time: " + (System.nanoTime() - startTime) / 1000000 + " milliseconds");
    }
}
I believe both algorithms are O(n^3); however, when I measure how long they take to run, the second algorithm is a lot faster. Using n = 20, algorithm one takes about 60 milliseconds and algorithm two takes about 5 milliseconds. How can these two algorithms have the same time complexity, but one runs faster than the other? I understand that the second algorithm doesn't have to iterate over as many numbers in the triple for loop, but shouldn't that mean its time complexity would be less?
Big-O notation "hides the constant". Two algorithms, one that runs in 5n^3 milliseconds and another that runs in 5000000n^3 milliseconds, would both have complexity O(n^3), but the second one would be a million times slower. This is why big-O notation does not tell the entire story. For example, there are lots of different sorting algorithms that are all O(N log N), yet some of them are faster than others, or faster than others on specific inputs, and so on. There's more to performance than a basic introductory book on algorithms will tell you.
As far as your code goes, it seems that accessing a value within an array is faster than calling Math.pow, so the version that precomputes the squares is faster overall. My guess is that Math.pow in Java is costlier than a simple integer multiplication. I would try replacing it with plain multiplication, i*i + j*j == k*k, and see if there's any significant difference after that. Even if it's still slower than the second algorithm, it's worth knowing that re-calculating a value can be faster than fetching it from memory in some circumstances.
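For instance, a minimal sketch of that change (a hypothetical variant; algorithmOneNoPow is not from the original code):

public static void algorithmOneNoPow(int n) {
    long startTime = System.nanoTime();
    for (int i = 1; i <= n; i++) {
        for (int j = 1; j <= n; j++) {
            for (int k = 1; k <= n; k++) {
                // plain int multiplication instead of the double-valued Math.pow
                if (i * i + j * j == k * k) {
                    System.out.println(i + "," + j + "," + k);
                }
            }
        }
    }
    System.out.println("Run Time: " + (System.nanoTime() - startTime) / 1000000 + " milliseconds");
}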
Another unrelated thing I noticed in your code, is that you assume that the complexity of your square-calculating loop is O(n):
// O(n)
for (int i = 1; ; i++) {
    if ((int) Math.sqrt(i) > n) break;
    ...
}
This isn't true, though. You loop while sqrt(i) <= n, that is, while i <= n*n. Therefore the loop body is executed about n^2 times, giving a complexity of O(n^2) for that loop. The following loop does indeed have O(n) complexity:
// O(n)
for (int i = 1; i <= n; i++) {
    squares.add(i * i);
}
It will run faster, but it won't make much of a difference given the overall O(n^3) complexity of the entire algorithm.
You have to understand complexity first.
Let's take an example.
Suppose you have a bag that can hold 5 kg of apples, no matter how many individual apples that is. It will hold 5 kg whether that's 20 apples or 5 apples.
So, when we talk about time complexity, we are talking about the amount (the kilograms), not the number of apples (that would be more like memory complexity).
That means when you are talking about big-Oh, you have to know what it means.
O(n^3) means the algorithm takes at most that amount of work. It might perform every step, up to n^3 of them, or it might skip some steps, but it will still take at most n^3. I think it is now clear why I keep saying at most.
Your second algorithm skips some steps, but your first algorithm does not. So your second algorithm runs a bit faster, but in the big-Oh sense both can take at most that amount, O(n^3), and the size of the input still matters.
Additional info:
These are called Pythagorean triples, and an O(|result|) solution exists.
You can look it up on Wikipedia.
Also learn about the Stern-Brocot tree, which is the essential part of writing the most optimal version.
To learn the theory, search for rational points on a circle.
Not an easy topic, but it helps if you know a little geometry.
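As a concrete illustration of such a generator (a sketch based on Euclid's classical formula rather than the Stern-Brocot construction; primitiveTriples and gcd are hypothetical helpers): every primitive triple has the form a = m^2 - n^2, b = 2mn, c = m^2 + n^2 for coprime m > n of opposite parity.

// Sketch: print the primitive Pythagorean triples with c <= limit
// using Euclid's formula (m > n >= 1, gcd(m, n) == 1, m - n odd).
static void primitiveTriples(int limit) {
    for (int m = 2; m * m < limit; m++) {
        for (int n = 1; n < m; n++) {
            if ((m - n) % 2 == 1 && gcd(m, n) == 1) {
                int a = m * m - n * n, b = 2 * m * n, c = m * m + n * n;
                if (c <= limit) {
                    System.out.println(a + "," + b + "," + c);
                }
            }
        }
    }
}

static int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
}

All other triples are integer multiples of primitive ones, which is why an enumeration proportional to the output size is possible.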
Problem: Find the best way to cut a rod of length n.
Each cut is of integer length.
Assume that each rod of length i has a price p(i).
Given: a rod of length n, and a list of prices p giving the price of each possible integer length between 0 and n.
Find the best set of cuts to get the maximum price.
You can use any number of cuts, from 0 to n−1.
There is no cost for a cut.
Below I present a naive algorithm for this problem.
CUT-ROD(p, n)
    if (n == 0)
        return 0
    q = -infinity
    for i = 1 to n
        q = max(q, p[i] + CUT-ROD(p, n-1))
    return q
How can I prove that this algorithm is exponential? Step-by-step.
I can see that it is exponential. However, I'm not able to prove it.
Let's translate the code to C++ for clarity:
#include <algorithm>
#include <vector>

std::vector<int> prices; // prices[i] = price of a piece of length i + 1

int cut_rod(int n) {
    if (n == 0) {
        return 0;
    }
    int q = -1;
    int res = cut_rod(n - 1); // cache the recursive call outside the loop
    for (int i = 0; i < n; i++) {
        q = std::max(q, prices[i] + res);
    }
    return q;
}
Note: we are caching the result of cut_rod(n-1) to avoid unnecessarily increasing the complexity of the algorithm. Here, we can see that cut_rod(n) calls cut_rod(n-1), which calls cut_rod(n-2), and so on down to cut_rod(0). For cut_rod(n), the function iterates over the array n times. Therefore the time complexity of the algorithm is O(n + (n-1) + (n-2) + ... + 1) = O(n(n+1)/2) = O(n^2).
EDIT:
If we use the exact same algorithm as the one in the question (without hoisting the recursive call out of the loop), its time complexity is O(n!): cut-rod(n) calls cut-rod(n-1) n times, cut-rod(n-1) calls cut-rod(n-2) n-1 times, and so on. That gives the recurrence T(n) = n·T(n-1) + O(n), so the time complexity is O(n·(n-1)·(n-2)·...·1) = O(n!).
I am unsure whether this counts as a step-by-step solution, but it can be shown easily by induction/substitution. Just assume T(i) = 2^i for all i < n; then we show that it holds for n:
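A sketch of that substitution, assuming the textbook form of the recursion in which the loop calls CUT-ROD(p, n-i), so that every smaller size is recursed on once and T(n) = 1 + Σ_{j=0}^{n-1} T(j):

T(n) = 1 + Σ_{j=0}^{n-1} T(j)
     = 1 + Σ_{j=0}^{n-1} 2^j     (inductive hypothesis T(j) = 2^j)
     = 1 + (2^n - 1)
     = 2^n

Together with the base case T(0) = 1 = 2^0, this shows T(n) = 2^n, i.e. the running time is exponential.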
These programs do the calculation Σ_{i=0}^{N-1} a_i x^i (where N is the length of the array a).
I am trying to figure out big-O calculations. I have done a lot of study, but I am having a problem getting this down. I understand that big O is the worst-case scenario, or an upper bound. From what I can figure, program one has two for loops: one that runs for the length of the array, and an inner one that runs up to the value of the outer loop's index. I think that if both ran the full length of the array it would be quadratic, O(N^2). Since the inner loop only runs the full length of the array once, I am thinking O(N log N).
The second program has only one for loop, so it would be O(N).
Am I close? If not, please explain to me how I would calculate this. Since this is in the homework, I am going to have to be able to figure out something like this on the test.
Program 1
// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;
        }
        result += a[i] * b;
    }
    return result;
}
Program 2
// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). The inner loop runs 0, 1, 2, ..., N-1 times over the successive outer iterations. This sum is approximately N(N-1)/2, which is O(N^2).
The second one is O(N). It simply iterates once through the length of the array.
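Incidentally, the second program is Horner's method. Unrolling it for N = 3 (a worked example, not part of the original question) shows that it evaluates the same polynomial as the first program with only one multiplication per coefficient:

result = ((0 * x + a[2]) * x + a[1]) * x + a[0]
       = a[0] + a[1]*x + a[2]*x^2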
The complexity of a program is basically the number of instructions executed.
When we talk about an upper bound, it means we are considering the worst case, which is what every programmer should take into consideration.
Let n = a.length;
Now, coming back to your question: you say that the time complexity of the first program should be O(n log n), which is wrong. The inner loop iterates from j = 0 up to j = i - 1, so when i = a.length - 1 it runs nearly the full length of the array. Hence the complexity is O(n^2).
You are correct in judging the time complexity of the second program, which is O(n).
This is my question. I've managed to work out an answer for part a, but I'm not really confident about my answer for part b.
In a recent court case, a judge cited a city for contempt and ordered a fine of $2 for the first day.
Each subsequent day, until the city followed the judge’s order, the fine was squared
(i.e., the fine progressed as follows: $2, $4, $16, $256, $65,536,...).
a. What would be the fine on day N?
b. How many days would it take for the fine to reach D dollars (a Big-Oh answer will do)?
Ans a: 2^(2^(n-1))
For answer b, I wrote the following program to find the big-Oh.
int result = 1; // assuming result and res start at 1 (the original fragment leaves them undeclared)
int res = 1;
for (int i = 0; i < n - 1; i++) {
    result = 2 * result;       // result = 2^(n-1)
}
printf("%d\t", result);
for (int j = 0; j < result; j++) {
    res = 2 * res;             // res = 2^result = 2^(2^(n-1))
}
printf("%d\n", res);
I have calculated the big-Oh of the first loop to be a summation over n.
And since the second loop runs 2^(n-1) times, its big-Oh is 2^n, and adding them both gives (2^n) + n.
According to my algorithm, my answer is O(N):

int days = 5;
int fine = 2;
for (int i = 0; i < days - 1; i++)
    fine = fine * fine;
cout << fine;
The first loop runs n-1 times, the second runs 2^(n-1) times. The time complexity is the sum of these, so (n-1) + 2^(n-1) = O(2^n + n) = O(2^n).
The question doesn't seem to be asking for the time complexity, though. It's asking how many days would pass before the fine reaches D dollars. This is the inverse of the answer to a): O(log log D) ($65,536 is reached after log2(log2(65536)) + 1 = 5 days, for example).
You don't really need any software to answer these questions. Big O is a math term that happens to be used in software development.
Let's look at the progression:
2 = 2^1 = 2^(2^0)
4 = 2^2 = 2^(2^1)
16 = 2^4 = 2^(2^2)
256 = 2^8 = 2^(2^3)
65536 = 2^16 = 2^(2^4)
Answer to question a.
The penalty on day n would be 2^(2^(n-1)).
You could program it like this:
pow(2, pow(2, n-1));
Answer to question b.
x = log2 (log2 D) + 1
Or without the "+ 1" if we're not to count the first day.
This will return a positive real number, so you may want to ceil it depending on the requirements.
Now, in big-O notation that is O(log log D), which describes how the number of days grows with D: squaring D only doubles log2 D, which adds just one more day, so the answer grows extremely slowly as D increases.
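As a rough illustration (a hypothetical Java helper, assuming the day-d fine is 2^(2^(d-1)) as derived above; daysToReach is not from the original answer):

// Days until the fine first reaches at least D dollars (for D >= 2).
// Inverts fine(d) = 2^(2^(d-1)), giving d = ceil(log2(log2(D))) + 1.
static int daysToReach(double D) {
    double log2D = Math.log(D) / Math.log(2);      // log2(D)
    double logLog = Math.log(log2D) / Math.log(2); // log2(log2(D))
    return (int) Math.ceil(logLog) + 1;
}

For example, daysToReach(65536) returns 5, matching the progression above ($65,536 is the day-5 fine).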
I wrote the code below to check the number of iterations and swaps required for bubble sort and insertion sort. Even though (referring to the code below) insertion sort did literally half the iterations and the same number of swaps as bubble sort, how come both have the same big-O complexity?
static void bubbleSortExample()
{
    int iterationCount = 0;
    int swaps = 0;
    int[] arr = {2, 6, 1, 4, 8, 7, 10, 3};
    int temp = 0;
    for (int i = 0; i < arr.length; i++)
    {
        iterationCount = iterationCount + 1;
        for (int j = 0; j < arr.length - 1; j++)
        {
            iterationCount = iterationCount + 1;
            if (arr[j] > arr[j + 1])
            {
                swaps = swaps + 1;
                temp = arr[j + 1];
                arr[j + 1] = arr[j];
                arr[j] = temp;
            }
        }
    }
    System.out.println("Bubble Sort----Iteration count are : " + iterationCount + " and swaps are : " + swaps);
}
//Bubble Sort Example Ends
//Insertion Sort Example Starts
static void insertionSortExample()
{
    int iterationCount = 0;
    int swaps = 0;
    int[] arr = {2, 6, 1, 4, 8, 7, 10, 3};
    for (int i = 1; i < arr.length; i++)
    {
        iterationCount = iterationCount + 1;
        int key = arr[i]; // this is the number that needs to be inserted in its correct position
        for (int j = i - 1; j >= 0; j--)
        {
            iterationCount = iterationCount + 1;
            if (key < arr[j])
            {
                swaps = swaps + 1;
                int t = arr[j];
                arr[j] = key;
                arr[j + 1] = t;
            }
        }
    }
    System.out.println("Insertion Sort----Iteration count are : " + iterationCount + " and swaps are : " + swaps);
}
OUTPUT
Bubble Sort----Iteration count are : 64 and swaps are : 9
Insertion Sort----Iteration count are : 35 and swaps are : 9
Whoa! Whoa! Wait. You are confusing two things.
One is running time, which is the actual time a program takes on a particular input.
The second is time complexity, which describes how the running time grows as the input size grows.
A program that is O(N^2) can run much faster in practice than one that is O(N log N). This is because the inputs may be mostly average cases, whereas big-Oh analysis is meant for worst-case analysis. Big-Oh does not tell you the actual running time (which may depend on the nature of the input, best case vs. worst case, and details of the actual implementation). Big-Oh only gives us a guarantee that an algorithm will run no worse than a constant times that function.
So when we say bubble sort or insertion sort is O(N^2), we mean that the running time in the worst-case scenario is no larger than a constant times N^2. Realize that this is indeed the case for both algorithms.
If you are still confused, please feel free to ask.
Keep in mind that the notation just expresses how the algorithm behaves as n grows. A constant factor is always dropped. So it really doesn't state whether an algorithm is fast; it just states by what factor it will take more time to complete as you increase n.
In bubble sort, the i-th pass makes n-i-1 inner iterations, about n^2/2 in total; in insertion sort, the i-th step makes at most i inner iterations, but only about i/2 on average, because you can stop the inner loop early, once you have found the correct position for the current element.
So you have (the sum from 0 to n)/2, which is about n^2/4 in total.
That's why insertion sort is faster than bubble sort. A sketch of that early exit is shown below.
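Something like this (the usual early-exit formulation, not the exact code from the question, which keeps scanning after the insertion point is found):

static void insertionSortEarlyExit(int[] arr) {
    for (int i = 1; i < arr.length; i++) {
        int key = arr[i];
        int j = i - 1;
        // shift larger elements one slot right; stop as soon as arr[j] <= key
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key; // insert key at its correct position
    }
}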
I am reading some information on time complexity, and I'm quite confused as to how the following time complexities are achieved and whether there is a particular set of rules or methods for working this out.
1)
Input: int n
for(int i = 0; i < n; i++){
    print("Hello World, ");
}
for(int j = n; j > 0; j--){
    print("Hello World");
}
Tight: 6n + 5
Big O: O(n)
2)
Input: l = array of comparable items
Output: l = array of sorted items
Sort:
for(int i = 0; i < l.length; i++){
    for(int j = 0; j < l.length; j++){
        if(l[i] > l[j]){
            Swap(l[i], l[j]);
        }
    }
}
return l;
Worst Case Time Complexity: 4n^2 + 3n + 2 = O(n^2)
For a given algorithm, time complexity or Big O is a way to provide a fair estimate of the "total elementary operations performed by the algorithm" in relation to the given input size n.
Type-1
Let's say you have an algo like this:
a=n+1;
b=a*n;
There are 2 elementary operations in the above code. No matter how big your n is, a computer will always perform 2 operations for the above code, as the algorithm does not depend on the size of the input, so the Big-O of the above code is O(1).
Type-2
For this code:
for(int i = 0; i < n; i++){
    a = a + i;
}
I hope you can see why the Big-O here is O(n): the elementary operation count depends directly on the size of n.
Type-3
Now what about this code:
//Loop-1
for(int i = 0; i < n; i++){
    print("Hello World, ");
}

//Loop-2
for(int i = 0; i < n; i++){
    for(int j = 0; j < n; j++) {
        x = x + j;
    }
}
As you can see, loop-1 is O(n) and loop-2 is O(n^2). So it feels like the total complexity should be O(n) + O(n^2). But no: the time complexity of the above code is O(n^2). Why? Because we are trying to get a fair enough count of the elementary operations performed for a given input size n, one that is comparatively easy for another person to understand. With this logic, O(n) + O(n^2) becomes O(n^2), and O(n^2) + O(n^3) + O(n^4) becomes O(n^4)!
Again, you may ask: but how? How do all the lower powers of a Big-O sum become so insignificant next to the highest power that we can completely omit them when describing the complexity of our algorithm to another human?
I will try to show the reason for the case O(n) + O(n^2) = O(n^2).
Let's say n = 1000. Then the exact count for the O(n) part is 1,000 operations, and the exact count for the O(n^2) part is 1000 * 1000 = 1,000,000 operations, so the O(n^2) part is 1000 times bigger than the O(n) part. That means your program will spend most of its execution time in the O(n^2) part, and thus it is not worth mentioning that your algorithm also does some O(n) work.
PS. Pardon my English :)
In the first example, you go through n iterations twice: the first loop runs its index from 0 up to n, and the second from n down to 0. So, to simplify, we can say that it takes 2n steps. When dealing with Big-O notation, you should keep in mind that we care about the bounds:
As a result, O(2n)=O(n)
and O(an+b)=O(n)
Input: int n                    // operation 1
for(int i = 0; i < n; i++){     // operation 2
    print("Hello World, ");     // operation 3
}
for(int j = n; j > 0; j--){     // operation 4
    print("Hello World");       // operation 5
}
As you can see, we have a total of 5 operations outside the loops.
Inside the first loop, we do three internal operations per iteration: checking whether i is less than n, printing "Hello World, ", and incrementing i.
Inside the second loop, we also have three internal operations.
So, the total number of operations that we need is: 3n (for the first loop) + 3n (for the second loop) + 5 (operations outside the loops). As a result, the total number of steps required is 6n + 5 (that is your tight bound).
As I mentioned before, O(an + b) = O(n), because once an algorithm is linear, a and b do not have a great impact when n is very large.
So, your time complexity becomes: O(6n + 5) = O(n).
You can use the same logic for the second example, keeping in mind that two nested loops take n^2 steps instead of n; a rough count is sketched below.
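For instance, using the same accounting as above (a rough count, assuming about three operations per loop header and body as before, not the question's exact constants): the inner loop contributes about 3n operations per outer iteration, the outer loop runs n times and adds about 3n of its own, plus a few constant operations, giving roughly 3n·n + 3n + c operations. That has the form an^2 + bn + c, which is consistent with the stated 4n^2 + 3n + 2 = O(n^2).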
I will slightly modify John's answer. Defining n is one constant operation; defining the integer i and assigning it 0 is 2 constant operations; defining the integer j and assigning it n is another 2 constant operations. Checking the loop conditions for i and j, the increments, and the print statements all depend on n, so the total is 3n + 3n + 5, which equals 6n + 5. Here we cannot skip any of the statements during execution, so the average-case running time will also be the worst-case running time, which is O(n).