Run time calculation

Algorithm Mystery(A[1..N])
for (i <- 1 to N)
{
    for (j <- 1 to (N - i))
    {
        if (A[j] > A[j+1])
        {
            temp <- A[j]
            A[j] <- A[j+1]
            A[j+1] <- temp
        }
    }
}
I would like to know how to calculate the running time of the given code.

The Big-O runtime for this algorithm is O(N^2), since it has nested for loops: the outer loop makes N passes, and on pass i the inner loop makes N - i comparisons, for a total of (N-1) + (N-2) + ... + 1 + 0 = N(N-1)/2 comparisons.
This post has a similar question: Run time of nested loops
Check this documentation to figure out how to calculate it yourself. https://www.learneroo.com/modules/106/nodes/559
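To make that count concrete, here is a small Java sketch (my own addition; the pseudocode above is bubble sort, and the class and method names are mine) that runs the algorithm with a comparison counter:
// Counts the comparisons made by bubble sort: always N(N-1)/2.
public class BubbleCount {
    static long countingBubbleSort(int[] a) {
        long comparisons = 0;
        int n = a.length;
        for (int i = 1; i <= n; i++) {           // outer pass, as in the pseudocode
            for (int j = 0; j < n - i; j++) {    // inner pass shrinks each time
                comparisons++;
                if (a[j] > a[j + 1]) {           // swap adjacent elements
                    int temp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = temp;
                }
            }
        }
        return comparisons;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i;   // reversed input
        System.out.println(countingBubbleSort(a));  // prints 499500 = n*(n-1)/2
    }
}
Note that the comparison count is the same for any input of size N; only the number of swaps varies.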

Related

Analyze the time cost of the following algorithms using Θ notation

So many loops; I'm stuck counting how many times the last loop runs.
I also don't know how to simplify the summations to get big Theta. Please, somebody help me out!
int fun(int n) {
    int sum = 0;
    for (int i = n; i > 0; i--) {
        for (int j = i; j < n; j *= 2) {
            for (int k = 0; k < j; k++) {
                sum += 1;
            }
        }
    }
    return sum;
}
Any problem has 2 stages:
You guess the answer
You prove it
In easy problems, step 1 is easy and then you skip step 2 or explain it away as "obvious". This problem is a bit more tricky, so both steps require some more formal thinking. If you guess incorrectly, you will get stuck at your proof.
The outer loop goes from n down to 1, so the number of iterations is O(n). The middle loop is uncomfortable to analyze because its bounds depend on the current value of i. As we usually do when guessing O-rates, let's just replace its bounds so that it runs from 1 to n.
for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        perform j steps
    }
}
The run-time of this new middle loop, including the inner loop, is 1 + 2 + 4 + ... + n, or approximately 2n, which is O(n). Together with the outer loop, you get O(n²). This is my guess.
I edited the code, so I may have changed the O-rate when I did. So I must now prove that my guess is right.
To prove this, use the "sandwich" technique - edit the program in 2 different ways, one which makes its run-time smaller and one which makes its run-time greater. If you manage to make both new programs have the same O-rate, you will prove that the original code has the same O-rate.
Here is a "smaller" or "faster" code:
do n/2 iterations; set i=n/2 for each of them {
    do just one iteration, where you set j = i {
        perform j steps
    }
}
This code is faster because each loop does less work. It does something like n²/4 iterations.
Here is a "greater" or "slower" code:
do n iterations; set i=n for each of them {
    for (int j = 1; j <= 2 * n; j *= 2) {
        perform j steps
    }
}
I made the upper bound for the middle loop 2n to make sure its last iteration is for j=n or greater.
This code is slower because each loop does more work. The number of iterations of the middle loop (and everything under it) is 1+2+4+...+n+2n, which is something like 4n. So the number of iterations for the whole program is something like 4n².
We got, in a somewhat formal manner:
n²/4 ≤ runtime ≤ 4n²
So runtime = O(n²).
Here I use O where it should be Θ. O is usually defined as "upper bound", while sometimes it means "upper or lower bound, depending on context". In my answer O means "both upper and lower bound".
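As a sanity check on the guess and on the sandwich bounds (my own addition, not part of the original answer), the iterations can simply be counted and compared with n²/4 and 4n²:
// Counts the actual iterations of fun(n) and checks n²/4 ≤ count ≤ 4n².
public class SandwichCheck {
    static long fun(int n) {
        long sum = 0;
        for (int i = n; i > 0; i--)
            for (long j = i; j < n; j *= 2)
                for (long k = 0; k < j; k++)
                    sum += 1;
        return sum;
    }

    public static void main(String[] args) {
        for (int n : new int[]{100, 1000, 10000}) {
            long count = fun(n);
            System.out.printf("n=%d count=%d n^2/4=%d 4n^2=%d%n",
                    n, count, (long) n * n / 4, 4L * n * n);
        }
    }
}
For each n, the printed count falls between the two bounds, as the proof predicts.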

Insertion Sort INVALID DIMENSION error in BASIC

I'm tearing my hair out trying to understand why the following insertion sort program in TI-BASIC works sometimes but gives a dimension error other times.
0→dim(L₁ // clear the list L1
randIntNoRep(1,5,5)→L₁
For(I,2,5)
L₁(I)→K
I-1→J
While J>0 and L₁(J)>K
L₁(J)→L₁(J+1)
J-1→J
End // end while
K→L₁(J+1)
End // end for
Disp L₁
As far as I can tell the code is a faithful implementation of Insertion Sort based on this pseudocode:
for i ← 2 to n
    key ← A[i]
    j ← i − 1
    while j > 0 and A[j] > key do
        A[j + 1] ← A[j]
        j ← j − 1
    A[j + 1] ← key
I've tried stepping through the code manually and it looks like the BASIC version does the same as the pseudocode. What am I missing please?
OK, I see the problem. TI-BASIC doesn't appear to do short-circuit Boolean evaluation: when J reaches 0, the condition L₁(J)>K is still evaluated, so the program tries to access the list at index 0 and fails with INVALID DIMENSION. That also explains why it only fails sometimes: the bad access happens only when the inner While loop runs all the way down to J=0, i.e., when the current key is smaller than every element before it. Refactoring in TI-BASIC is a real pain as there isn't even a break statement.
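For comparison, here is the same insertion sort in Java (a sketch of my own, not from the question), where && does short-circuit, so the guarded list access is safe:
// Insertion sort in Java. Because && short-circuits, a[j - 1] is never
// read once j > 0 fails — exactly the guarantee TI-BASIC does not give.
public class InsertionSort {
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i;
            while (j > 0 && a[j - 1] > key) { // right side skipped when j == 0
                a[j] = a[j - 1];
                j--;
            }
            a[j] = key;
        }
    }

    public static void main(String[] args) {
        int[] a = {3, 1, 5, 2, 4};
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
In TI-BASIC, one workaround is to restructure the loop so the list is only indexed after the J>0 test has already passed (for example with a flag variable), since both operands of "and" are always evaluated.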

Having trouble finding correct Big O for while & if statement

int i = 0;
while (i < N) {
    if (nums[i] != i + 1 && nums[i] != nums[nums[i] - 1]) {
        // swap
        int tmp = nums[i];
        nums[i] = nums[tmp - 1];
        nums[tmp - 1] = tmp;
    } else {
        i++;
    }
}
I am confused about finding the correct Big O for this algorithm.
Even though the while loop only advances i up to N, whenever nums[i] meets the condition of the if statement we repeat the swap without advancing i, until the condition no longer holds.
Can we say the time complexity of this is O(N)?
Or would the worst case be O(N^2)?
I agree the problem isn't well-defined; perhaps the OP didn't include some context. After reading the code over, it's evident that it's a sorting algorithm (a weird one). From the way it accesses array indices, I think the algorithm expects the array nums of size N to be filled with integers from 1...N, not necessarily in order, and possibly with repeats.
Regarding SomeWittyUsername's point, let's assume for the sake of the analysis that the elements don't lead to a crash.
I did a brief annotation of the code.
int i = 0;
while (i < N) {
    // 1) check if nums[i] is in the right place;
    // 2) check whether the potential swap would have no effect because of a repeat
    if (nums[i] != i + 1 && nums[i] != nums[nums[i] - 1]) {
        // Swap nums[i] with nums[nums[i] - 1].
        // Why swap these two values?
        // This effectively places nums[i] where it should be in the array.
        int tmp = nums[i];
        nums[i] = nums[tmp - 1];
        nums[tmp - 1] = tmp;
    } else {
        // The element is in the correct spot
        i++;
    }
}
In the best-case scenario, nums is initially sorted and the algorithm runs in O(N) time.
But since big-O notation is supposed to refer to the worst-case scenario, my best answer is: it's still O(N), just with a higher constant factor. It's not O(N^2), because every swap puts at least one misplaced element into its final position, and an element already in its final position is never swapped out again. So there can be at most N swaps in total, plus at most N increments of i, for at most 2N loop iterations.
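To make that bound concrete, here is a sketch in Java (my own addition; the class and method names are mine) that runs the algorithm on a worst-case permutation and counts the loop iterations:
// Cyclic-sort sketch with an iteration counter (assumes nums holds values 1..N).
public class CyclicSortCount {
    static long sortAndCount(int[] nums) {
        long ops = 0;
        int i = 0, n = nums.length;
        while (i < n) {
            ops++;
            if (nums[i] != i + 1 && nums[i] != nums[nums[i] - 1]) {
                int tmp = nums[i];            // place nums[i] at index nums[i]-1
                nums[i] = nums[tmp - 1];
                nums[tmp - 1] = tmp;
            } else {
                i++;
            }
        }
        return ops;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] nums = new int[n];
        for (int i = 0; i < n; i++) nums[i] = n - i;  // reversed permutation
        System.out.println(sortAndCount(nums) + " iterations for n=" + n);
    }
}
For a permutation of 1..N, the printed count never exceeds 2N, matching the O(N) argument above.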
This is not a well-defined problem: it can crash on innocent-looking inputs.
Crash:
Input = [100, 1] --> attempt to access the array at position 99 on the first iteration.
Another crash:
Input = [2,3,4,5] --> every iteration stays at i == 0, because no element equals 1 (i.e., i + 1). The swaps cycle ever-larger values to the front: [2,3,4,5] -> [3,2,4,5] -> [4,2,3,5] -> [5,2,3,4]. At that point nums[0] == 5, so evaluating nums[nums[0] - 1] reads nums[4], which is out of bounds. This input therefore also ends in a crash after three swaps, rather than in the infinite run one might first expect: the value 5 has no place in an array of size 4.
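Both crashes are easy to reproduce. Here is a small Java harness (my own addition; the class name is hypothetical) that triggers them:
// Demonstrates the two crashing inputs described above.
public class FailureModes {
    static void run(int[] nums) {
        int i = 0, N = nums.length;
        while (i < N) {
            if (nums[i] != i + 1 && nums[i] != nums[nums[i] - 1]) {
                int tmp = nums[i];
                nums[i] = nums[tmp - 1];
                nums[tmp - 1] = tmp;
            } else {
                i++;
            }
        }
    }

    public static void main(String[] args) {
        for (int[] input : new int[][]{{100, 1}, {2, 3, 4, 5}}) {
            try {
                run(input);
                System.out.println("terminated normally");
            } catch (ArrayIndexOutOfBoundsException e) {
                // {100, 1} fails immediately; {2, 3, 4, 5} fails after three swaps
                System.out.println("crashed: " + e.getMessage());
            }
        }
    }
}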

Summation Run Time Analysis

A[] is of size n, B[][] is of size n x n
for i_{1,n} {
    for j_{1,n} {
        if (i <= j) -> B[i,j] = sum of the elements A[i], A[i+1], ..., A[j]
        else -> B[i,j] = 0
    }
}
I understand that the first two for loops are n iterations each.
My question is how to handle the if (i <= j) part. At most it will sum n elements (when i = 1 and j = n). At minimum it will do just one thing, thus 1.
I'm really, really lost.
I'm terrible with this theoretical run-time analysis stuff, but it might help to look at things another way:
for i_{1,n}
{
    for j_{1,n}
    {
        if (j >= i) -> B[i,j] = sum of A[i], A[i+1], ..., A[j]
        else -> B[i,j] = 0
    }
}
Which (I think) happens to be the same as:
for i_{1,n}
{
    for j_{1,n}
    {
        B[i,j] = 0
        if (j >= i)
        {
            for k_{i,j}
                B[i,j] += A[k];
        }
    }
}
That would make it a cubic-complexity algorithm. For a fixed i, the third loop over k executes 0 times while j < i, then 1 time, then 2 times, and so on, up to n - i + 1 times; its upper bound is still n. Together with the two outer loops of n iterations each, the upper-bound complexity is n*n*n, or O(n^3). (Counting exactly, the innermost statement runs Sum over all pairs i <= j of (j - i + 1) = n(n+1)(n+2)/6 times, so the cubic bound is tight.)
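To back this up numerically (my own addition; the class name is mine), the following Java sketch builds B with the triple loop and counts the inner additions, which come out to exactly n(n+1)(n+2)/6, i.e., about n³/6:
// Builds B[i][j] = A[i] + ... + A[j] with the triple loop and counts the
// inner-loop additions: exactly n(n+1)(n+2)/6.
public class SummationCount {
    public static void main(String[] args) {
        int n = 200;
        int[] a = new int[n + 1];                 // 1-based, as in the pseudocode
        for (int i = 1; i <= n; i++) a[i] = i;
        int[][] b = new int[n + 1][n + 1];
        long additions = 0;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                b[i][j] = 0;
                if (i <= j) {
                    for (int k = i; k <= j; k++) {
                        b[i][j] += a[k];
                        additions++;
                    }
                }
            }
        }
        System.out.println(additions);                          // 1353400
        System.out.println((long) n * (n + 1) * (n + 2) / 6);   // 1353400
    }
}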

Proposed analysis of algorithm

I have been practicing analyzing algorithms lately. I feel like I have a pretty good understanding of analyzing non-recursive algorithms, and I have just begun to work toward a full understanding of recursive algorithms as well. However, I have not had a formal check on my methods, so I don't know whether what I have been doing is correct.
Would it be too much to ask if someone could check a few algorithms that I have implemented and analyzed, and see whether my understanding is along the right lines or whether I am completely off?
Here they are:
1)
sum = 0;
for (i = 0; i < n; i++) {
    for (j = 0; j < i*i; j++) {
        if (j % i == 0) {
            for (k = 0; k < j; k++) {
                sum++;
            }
        }
    }
}
My analysis of this one was O(n^5) due to:
Sum(i = 0 to n)[Sum(j = 0 to i^2)[Sum(k = 0 to j) of 1]]
which evaluated to:
(1/2)(n^5/5 + n^4/2 + n^3/3 - n/30) + (1/2)(n^3/3 + n^2/2 + n/6) + (1/2)(n^3/3 + n^2/2 + n/6) + n + 1.
Hence it is O(n^5)
Is this correct as an evaluation of the summations of the loops, i.e., a triple summation? I have assumed that the if statement always passes, for worst-case complexity. Is that a correct assumption for the worst case?
2)
void tonyblair(int n, int a) {
    if (a < 12) {
        for (int i = 0; i < n; i++) {
            System.out.println("*");
        }
        tonyblair(n - 1, a);
    } else {
        for (int k = 0; k < 3000; k++) {
            for (int j = 0; j < n * k; j++) {
                System.out.println("#");
            }
        }
    }
}
My analysis of this algorithm is O(infinity) due to the infinite recursion in the if branch when its condition holds, which would be the worst case. For pure analysis, though, I also considered the case where the condition is false and the if branch does not run. I then got a complexity of O(nk) due to:
Sum(k = 0 to 3000)[Sum(j = 0 to nk) of 1]
which then evaluated to nk(3001) + 3001. Hence it is O(nk), where k is not discarded because it controls the number of iterations of the loop.
Number 1
I can't tell how you've derived your formula. Usually adding terms happens when there are multiple steps in an algorithm, such as precomputing data and then looking up values from the data. Instead, nested for loops implies multiplication. Also, the worst case is the best case for this snippet of code, because given a value of n, sum will be the same at the end.
To find the complexity, we want to find the number of times that the inner loop is evaluated. Summations are often easy to solve if they go from 1 to n, so I'm going to drop the 0s from them later on. If i is 0, the middle loop won't run, and if j is 0, the inner loop won't run. We can rewrite the code equivalently as:
sum = 0;
for (i = 1; i < n; i++)
{
    for (j = 1; j < i*i; j++)
    {
        if (j % i == 0)
        {
            for (k = 0; k < j; k++)
            {
                sum++;
            }
        }
    }
}
I could make my life harder by forcing the outer loop to start at 2, but I'm not going to. The outer loop now runs from 1 to n-1. The middle loop runs based on the current value of i, so we need to do a summation: Sum(i = 1 to n-1)[work done by the middle loop for this i].
The middle for loop always goes to (i^2 - 1), and j will only be divisible by i for a total of (i - 1) times (i, i*2, i*3, ..., i*(i-2), i*(i-1)). With this, we get: Sum(i = 1 to n-1)[Sum(j = 1 to i-1)[work done by the inner loop]].
The inner loop then executes j times. The j in our summation is not the same as the j in the code, though. The j in the summation represents each time the middle loop's condition passes. Each such time, the j in the code equals i * (number of passes so far) = i * (the j in the summation). Therefore, we have: Sum(i = 1 to n-1)[Sum(j = 1 to i-1)[i*j]].
We can move the i in between the two summations, as it is a constant for the inner summation. Then, the formula for the sum of 1 to n is well known: n*(n+1)/2. Because we are only going to i - 1, we must subtract i out. This gives: Sum(i = 1 to n-1)[i * (i*(i+1)/2 - i)] = (1/2) * Sum(i = 1 to n-1)[i^3 - i^2].
The summations for the sum of squares and the sum of cubes are also well known. Keeping in mind that we are only summing to n-1 in both cases, we must remember to subtract n^3 and n^2, respectively, and we get: (1/2) * [(n^2*(n+1)^2/4 - n^3) - (n*(n+1)*(2n+1)/6 - n^2)].
This is obviously n^4. If we solve it all the way, we get: n^4/8 - 5n^3/12 + 3n^2/8 - n/12.
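As a numeric cross-check (my own addition, not part of the original answer), the following sketch runs the original snippet and compares sum against the closed form above, rewritten over a common denominator as (3n^4 - 10n^3 + 9n^2 - 2n)/24:
// Runs the original triple loop and compares the count against the closed form.
// Note: for i == 0 the middle loop never runs, so j % i is never evaluated.
public class Snippet1Check {
    public static void main(String[] args) {
        int n = 50;
        long sum = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i * i; j++)
                if (j % i == 0)
                    for (int k = 0; k < j; k++)
                        sum++;
        long formula = (3L*n*n*n*n - 10L*n*n*n + 9L*n*n - 2L*n) / 24;
        System.out.println(sum + " == " + formula);  // both print 730100 for n = 50
    }
}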
Number 2
For the last one, it is in fact O(infinity) if a < 12, because of the if statement. Well, technically everything is O(infinity), because big-O only provides an upper bound on runtime. If a < 12, it is also Omega(infinity) and Theta(infinity). If only the else runs, then we have the summation from 1 to 2999 of i*n: Sum(i = 1 to 2999)[i*n] = n * Sum(i = 1 to 2999)[i] = n * (2999 * 3000 / 2) = 4498500 * n.
It's very important to notice that the summation from 1 to 2999 is a constant (it's 4498500). No matter how large a constant is, it's still a constant, and not dependent on n. We will end up throwing it out of the runtime calculations. Sometimes, when a theoretically fast algorithm has a large constant, it is practically slower than algorithms that are theoretically slow; one example I can think of is Chazelle's linear-time triangulation algorithm, which no one has ever implemented. In any case, we have 4498500 * n, which is Theta(n).
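A small sketch (mine; the class and method names are hypothetical) that counts the work in the else branch confirms the constant:
// Counts the work in the else branch of tonyblair:
// Sum over k = 0..2999 of n*k = 4498500 * n, a constant times n, hence Theta(n).
public class TonyBlairCount {
    static long elseBranch(int n) {
        long count = 0;
        for (int k = 0; k < 3000; k++)
            for (long j = 0; j < (long) n * k; j++)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(elseBranch(1));   // 4498500
        System.out.println(elseBranch(2));   // 8997000 = 2 * 4498500
    }
}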
