Algorithm time complexity analysis

I haven't got sufficient knowledge of time complexity, so my question is:
Is there a direct formula to calculate the time complexity of an algorithm? For example, I have read somewhere that the big O of this code is n*log2(n); can you tell me how they got this expression?
for(i=1;i<=n;i=i*2)
For this loop I am unable to calculate the big O. The loop makes 7 iterations for a value of n=100. How does that help arrive at the given formula?

By itself, this loop iterates floor(log2(n)) + 1 times, which is Θ(log n): starting from 1, i can only be doubled that many times before it passes n. A quick example:
For n=100:
Iteration   i
    1         1
    2         2
    3         4
    4         8
    5        16
    6        32
    7        64
    8       128
So on the 8th check, i = 128, the condition fails (128 is not <= 100), and you don't go into the loop body again: the body ran 7 times. It becomes n*log2(n), as you suggest, if there is an inner loop that fully iterates n times on each pass of this outer loop; the iteration counts of the two loops are then multiplied to get the total time.
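To see this concretely, here is a minimal C sketch (the variable names and n = 100 are just for illustration) that counts the iterations and compares the count against the formula:

#include <math.h>
#include <stdio.h>

int main(void) {
    int n = 100;                       /* example value from the question */
    int count = 0;
    for (int i = 1; i <= n; i = i * 2)
        count++;                       /* body runs once per doubling of i */
    /* after floor(log2(n)) + 1 doublings, i has passed n */
    printf("n = %d: %d iterations, floor(log2(n)) + 1 = %d\n",
           n, count, (int)floor(log2(n)) + 1);
    return 0;
}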


Big O Notation O(n^2) what does it mean?

For example, it says that 3000 numbers are sorted in 1 sec with selection sort. How can we predict how many numbers are going to be sorted in 10 sec?
I know that selection sort needs O(n^2), but I don't understand how to calculate how many numbers will be sorted in 10 sec.
We cannot use big O to reliably extrapolate actual running times or input sizes (whichever is the unknown).
Imagine the same code running on two machines A and B, with different parsers, compilers, hardware, operating systems, array implementations, etc.
Let's say they both can parse and run the following code:
procedure sort(reference A)
    declare i, j, x
    i ← 1
    n ← length(A)
    while i < n
        x ← A[i]
        j ← i - 1
        while j >= 0 and A[j] > x
            A[j+1] ← A[j]
            j ← j - 1
        end while
        A[j+1] ← x
        i ← i + 1
    end while
end procedure
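(For concreteness, this pseudocode is an insertion sort; a direct C translation might look as follows, with the array length passed explicitly since C arrays do not carry their own length:)

void sort(int A[], int n) {               /* n plays the role of length(A) */
    for (int i = 1; i < n; i++) {
        int x = A[i];
        int j = i - 1;
        while (j >= 0 && A[j] > x) {      /* shift larger elements right */
            A[j + 1] = A[j];
            j--;
        }
        A[j + 1] = x;                     /* insert x into its place */
    }
}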
Now system A spends 0.40 seconds on the initialisation part before the loop starts, independent of what A is, because on that configuration setting up the function's execution context, including the allocation of the variables, is a very, very expensive operation. It also needs to spend 0.40 seconds on the de-allocation of the declared variables and the call stack frame when it arrives at the end of the procedure, again because on that configuration memory management is very expensive. Furthermore, the length function is costly as well and takes 0.19 seconds. That's a total overhead of 0.99 seconds.
On system B the memory allocation and de-allocation is cheap and takes 1 microsecond. The length function is also fast and needs 1 microsecond. That's a total overhead of 2 microseconds.
System A is, however, much faster on the rest of the statements than system B.
Both implementations happen to need 1 second to sort an array A having 3000 values.
If we now take the reasoning that we could predict the array size that can be sorted in 10 seconds based on the results for 1 second, we would say:
𝑛 = 3000, and the duration is 1 second, which corresponds to 𝑛² = 9 000 000 operations. So if 9 000 000 operations correspond to 1 second, then 90 000 000 operations correspond to 10 seconds, and 𝑛 = √(90 000 000) ≈ 9 487 (the size of the array that can be sorted in 10 seconds).
However, if we follow the same reasoning for the time needed to complete the outer loop only (without the initialisation overhead), which is also O(𝑛²), the same extrapolation can be made:
𝑛 = 3000, and the duration in system A is 0.01 second, which corresponds to 𝑛² = 9 000 000 operations. So if 9 000 000 operations can be executed in 0.01 second, then in 10 − 0.99 = 9.01 seconds (the overhead is subtracted) we can execute (9.01 / 0.01) × 9 000 000 operations, i.e. 𝑛² = 8 109 000 000 operations, and now 𝑛 = √(8 109 000 000) ≈ 90 050.
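A short sketch of both extrapolations (the figures are the ones assumed above, not measurements):

#include <math.h>
#include <stdio.h>

int main(void) {
    double n = 3000.0;
    double ops = n * n;                    /* 9 000 000 operations in 1 second */

    /* Naive extrapolation: the whole second scales quadratically. */
    double naive = sqrt(10.0 * ops);                               /* ~ 9 487  */

    /* Overhead-aware: on system A, 0.99 s is fixed overhead and the
       loop itself only took 0.01 s of the measured second. */
    double ops_per_sec = ops / 0.01;
    double aware = sqrt((10.0 - 0.99) * ops_per_sec);              /* ~ 90 050 */

    printf("naive prediction:          n ~ %.0f\n", naive);
    printf("overhead-aware prediction: n ~ %.0f\n", aware);
    return 0;
}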
The problem is that both predictions rest on the same big O reasoning, yet the predicted outcomes differ by a factor of about 10!
We may be tempted to think that this is only a "problem" of constant overhead, but similar things can be said about the operations in the outer loop. For instance, it might be that x ← A[i] has a relatively high cost for some reason on some system. These are factors that are not revealed in big O notation, which only retains the most significant term, omitting the linear and constant factors that also play a role.
The actual running time for an actual input size depends on a more complex function that is likely close to polynomial, like 𝑛² + 𝑎𝑛 + 𝑏. The coefficients 𝑎 and 𝑏 would be needed to make a more reasonable prediction possible. There might even be components that are non-polynomial, like 𝑛² + 𝑎𝑛 + 𝑏 + 𝑐√𝑛. This may seem unlikely, but the system the code runs on may perform all kinds of optimisations while the code runs, which may have such or similar effects on the actual running time.
The conclusion is that this type of reasoning gives no guarantee that the prediction is anywhere near reality -- without more information about the actual code, the system on which it runs, etc., it is nothing more than a guess. Big O is a measure of asymptotic behaviour.
As the comments say, big-oh notation has nothing to do with specific time measurements; however, the question still makes sense, because the big-oh notation is perfectly usable as a relative factor in time calculations.
Big-oh notation gives us an indication of how the number of elementary operations performed by an algorithm varies as the number of items to process varies.
Simple algorithms perform a fixed number of operations per item, but in more complicated algorithms the number of operations that need to be performed per item varies as the number of items varies. Sorting algorithms are a typical example of such complicated algorithms.
The great thing about big-oh notation is that it belongs to the realm of science, rather than technology, because it is completely independent of your hardware, and of the speed at which your hardware is capable of performing a single operation.
However, the question tells us exactly how much time it took for some hypothetical hardware to process a certain number of items, so we have an idea of how much time that hardware takes to perform a single operation, so we can reason based on this.
If 3000 numbers are sorted in 1 second, and the algorithm operates with O( N ^ 2 ), this means that the algorithm performed 3000 ^ 2 = 9,000,000 operations within that second.
If given 10 seconds to work, the algorithm will perform ten times that many operations within that time, which is 90,000,000 operations.
Since the algorithm works in O( N ^ 2 ) time, this means that after 90,000,000 operations it will have sorted Sqrt( 90,000,000 ) = 9,486 numbers.
To verify: 9,000,000 operations within a second means 1.11e-7 seconds per operation. Since the algorithm works at O( N ^ 2 ), this means that to process 9,486 numbers it will require 9,486 ^ 2 operations, which is roughly equal to 90,000,000 operations. At 1.11e-7 seconds per operation, 90,000,000 operations will be done in roughly 10 seconds, so we are arriving at the same result via a different avenue.
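In code form, this scaling rule for an O(N^2) algorithm says that N grows with the square root of the allotted time (a sketch using the question's figures of 3000 numbers per second):

#include <math.h>
#include <stdio.h>

int main(void) {
    double n1 = 3000.0, t1 = 1.0, t2 = 10.0;
    /* For an O(N^2) algorithm, operations ~ N^2, so N scales with sqrt(time). */
    double n2 = n1 * sqrt(t2 / t1);
    printf("predicted: %.1f numbers in %.0f s\n", n2, t2);   /* ~ 9486.8 */
    return 0;
}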
If you are seriously pursuing computer science or programming, I would recommend reading up on big-oh notation, because it is a) very important and b) a very big subject which cannot be covered in Stack Overflow questions and answers.

How to know when ShearSorting is done

I'm currently doing some shear sorting and cannot figure out when this operation is supposed to be done for an n x n matrix.
What I'm doing currently is copying the matrix at the start of each iteration of the loop to a temp matrix, and then at the end of each iteration comparing the original and temp matrices; if they are the same, I break out of the loop and exit. I do not like this approach, as we always end up going through one extra iteration after the matrix is sorted, which is a waste of CPU time and cycles.
There has to be a better way to do this checking. I keep finding references to log(n) to signify how many iterations we need, but I don't believe they mean the literal log(n), as log(5) for a 5x5 matrix is 0.69, which is impossible as a number of iterations.
Any suggestions?
So I know shear sort takes ceil(log2(n)) iterations to complete, so for a 5x5 matrix we will have 3 runs for rows and 3 runs for columns. But what if the 5x5 matrix I was given is almost sorted and only needs one or two more iterations to be complete? In that case I do not see the point in iterating 6 times through it, as this would be a waste of CPU power and cycles.
We also have the following solution: copy the matrix at the start of each iteration of the shearSort function to a temporary matrix, and at the end of each iteration compare the two matrices; if they are the same, then we know that we are done. (Note that an iteration here means both a row sort and a column sort, as a matrix might not need a row sort at first but would still need a column sort after.) In this case we preserve CPU cycles when the matrix doesn't need all N + 1 iterations, but this solution has an issue: when N + 1 iterations are needed, we end up doing N + 3 iterations (the two extra iterations are one to confirm the rows no longer change and one for the columns).
To solve this we have to use a combination of both solutions, as sketched below:
we still copy the matrix at the start and compare it to the temp matrix at the end; if they are equal before we reach the N + 1 iteration limit, we are done and do not need to go on any further, and if they are not, we run the (N + 1)-th iteration and stop after it, since we know the matrix must be sorted after N + 1 iterations.
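A rough C sketch of that combined strategy (a plain shear sort with snake-ordered rows, using insertion sorts for the row and column passes; the ceil(log2(n)) + 1 phase cap and the 5x5 size are assumptions for illustration):

#include <math.h>
#include <stdbool.h>
#include <string.h>

#define N 5                               /* example matrix size */

static void sort_row(int *row, bool ascending) {
    for (int i = 1; i < N; i++) {
        int x = row[i], j = i - 1;
        while (j >= 0 && (ascending ? row[j] > x : row[j] < x)) {
            row[j + 1] = row[j];
            j--;
        }
        row[j + 1] = x;
    }
}

static void sort_column(int m[N][N], int c) {
    for (int i = 1; i < N; i++) {
        int x = m[i][c], j = i - 1;
        while (j >= 0 && m[j][c] > x) {
            m[j + 1][c] = m[j][c];
            j--;
        }
        m[j + 1][c] = x;
    }
}

void shear_sort(int m[N][N]) {
    int max_phases = (int)ceil(log2((double)N)) + 1;   /* guaranteed bound */
    int prev[N][N];
    for (int phase = 0; phase < max_phases; phase++) {
        memcpy(prev, m, sizeof prev);
        for (int r = 0; r < N; r++)
            sort_row(m[r], r % 2 == 0);   /* snake order: even rows ascend */
        for (int c = 0; c < N; c++)
            sort_column(m, c);
        if (memcmp(prev, m, sizeof prev) == 0)
            break;                        /* a full phase changed nothing: done */
    }
}

If a complete phase leaves the matrix untouched, the rows are already in snake order and every column is sorted, which is exactly the snake-sorted state, so exiting early is safe; otherwise the phase cap guarantees termination.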

Time complexity Big Oh using Summations

Few questions about deriving expressions to find the runtime using summations.
The Big-Oh time complexity is already given, so using summations to find the complexity is what I am focused on.
So I know that there are 2 instructions that must be run before the first iteration of the loop, and 2 instructions (the comparison and the increment of i) that have to be run with each iteration. Of course, there is only 1 instruction within the for loop. So deriving, I have 2n + 3; dropping the 3 and the constant factor 2, I know the time complexity is O(n).
For the next one, I know how to start writing the summation, but the increment in the for loop is still a little confusing for me, and I know my summation time complexity derivation is wrong.
Any ideas as to where I'm going wrong?
Thank you
Just use n / 2 on the top and i = 1 on the bottom.
The reason it's i = 1 and not i = 0 is that the for loop's condition is i < n, so you need to account for being one off: in the summation, i increases all the way up to n / 2 and does not stop one short.
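Since the original loop isn't shown here, assume for illustration that it is for (i = 0; i < n; i += 2) with a constant c instructions per iteration. The summation and its closed form would then be:

    sum_{i=1}^{n/2} c = c * (n / 2), which is O(n)

The index i of the summation counts loop iterations (1st, 2nd, ...), not the values taken by the loop variable, which is why the upper limit is n / 2 rather than n.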

What are the number of swaps required in selection sort for each case?

I believe that selection sort has the following behavior:
Best case: no swaps required, as all elements are already properly arranged
Worst case: n-1 swaps required, i.e. one swap per pass, and there are n-1 passes, where n is the number of elements in the array
Average case: I am not able to work this out. What is the procedure for finding it?
Is the above information correct?
This source says the time complexity of swaps in the best case is O(n):
http://ocw.utm.my/file.php/31/Module/ocwChp5SelectionSort.pdf
Each iteration of selection sort consists of scanning across the array, finding the minimum element that hasn't already been placed yet, then swapping it to the appropriate position. In a naive implementation of selection sort, this means that there will always be n - 1 swaps made regardless of distribution of elements in the input array.
If you want to minimize the number of swaps, though, you can implement selection sort so that it doesn't perform a swap in the case where the element to be moved is already in the right place. If you add in this restriction, then you're correct that zero swaps would be made in the best case. (I'm not sure whether it's worthwhile to modify selection sort this way, since swaps are pretty fast in most cases).
Really, it depends on the implementation. You could potentially have a weird implementation of selection sort that constantly swaps the candidate minimum element to its tentative final spot on each iteration, which would dramatically increase the number of swaps in the worst case. I'm not sure why you'd do this, though. It's little details like this that account for why your explanation seems at odds with what you've found online - depending on how the code is put together, the number of swaps can be different.
The best-case and worst-case running times of selection sort are both Θ(n^2). This is because regardless of how the elements are initially arranged, on the ith iteration of the main for loop the algorithm always inspects each of the remaining n-i elements to find the smallest one remaining.
Selection sort is the algorithm which takes the minimum number of swaps, and in the best case it takes zero (0) swaps, when the input is an already sorted array like 1, 2, 3, 4. But the more pertinent question is: what is the worst case for the number of swaps in selection sort, and for which input does it occur?
Answer: the worst-case number of swaps is n-1. But it does not occur for the reverse-ordered input: a strictly decreasing input like 6, 5, 3, 2, 1 does not take the worst number of swaps; it takes about n/2 swaps. If you analyse a bit more, you'll see that the worst case occurs for a "sine wave" kind of input, that is, one that alternately increases and decreases, like crests and troughs.
7 6 8 5 9 4 10 3 - an input of eight (8) elements will therefore require 7 swaps:
3 6 8 5 9 4 10 7 (1)
3 4 8 5 9 6 10 7 (2)
3 4 5 8 9 6 10 7 (3)
3 4 5 6 9 8 10 7 (4)
3 4 5 6 7 8 10 9 (5)
3 4 5 6 7 8 10 9 (6) (8 was already in place; a naive implementation performs a self-swap here)
3 4 5 6 7 8 9 10 (7)
Hence the worst case for the number of swaps in selection sort is n-1, the best case is 0, and the average is (n-1)/2 swaps.
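A small C sketch that counts swaps, skipping self-swaps as in the optimised variant discussed above, so you can try the inputs from this question:

#include <stdio.h>

int selection_sort_swaps(int a[], int n) {
    int swaps = 0;
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)   /* scan for the smallest remaining */
            if (a[j] < a[min])
                min = j;
        if (min != i) {                   /* skip the swap if already in place */
            int tmp = a[i]; a[i] = a[min]; a[min] = tmp;
            swaps++;
        }
    }
    return swaps;
}

int main(void) {
    int sorted[]   = {1, 2, 3, 4};              /* best case: 0 swaps        */
    int reversed[] = {6, 5, 3, 2, 1};           /* decreasing: 2 (~ n/2)     */
    int wave[]     = {7, 6, 8, 5, 9, 4, 10, 3}; /* "sine wave": 6 real swaps */
    printf("%d %d %d\n",
           selection_sort_swaps(sorted, 4),
           selection_sort_swaps(reversed, 5),
           selection_sort_swaps(wave, 8));
    return 0;
}

(With self-swaps counted, the wave input performs the full n-1 = 7 swaps, matching the trace above.)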

Determining running time of an algorithm to compare two arrays

I want to know how to determine the running time of an algorithm written in pseudocode, so that I can familiarize myself with run time analysis. So, for example, how do you know the run time of an algorithm that compares 2 arrays to determine if they are not the same?
Array 1 = [1, 5, 3, 2, 10, 12] Array 2 = [3, 2, 1, 5, 10, 12]
So these two arrays are not the same since they are ordered differently.
My pseudocode is:
1) set current pointer to first number in first array
2) set second pointer to first number in second array
3) while (current pointer != end of array), compare with the element at the same position in the other array
4) if (current pointer == second pointer)
     move current pointer to next number
     move second pointer to next number
5) else output that the arrays are not the same
end loop
So I am assuming, first off, that my code is correct. I know step 5 executes at most once, since it only takes 1 mismatch to report that the arrays are not the same. So step 5 takes only constant time (1). I know steps 1 and 2 only execute once also.
So far I know the run time is 3 + ? (? being the run time of the loop itself).
Now I am lost on the loop part. Does the loop run n times (n being the number of numbers in the array), since in the worst case every single number gets matched? Am I thinking of run time in the right way?
If someone can help with this, I'll appreciate it.
Thanks!
What you are asking about is called the time complexity of your algorithm. We talk about the time complexity of algorithms using so-called Big-O notation.
Big-O notation is a method for talking about the approximate number of steps our algorithms take relative to the size of the algorithm's input, in the worst possible case for an input of that size.
Your algorithm runs in O(n) time (pronounced "big-oh of n" or "order n" or sometimes we just say "linear time").
You already know that steps 1, 2, and 5 each run in a constant number of steps relative to the size of the array. We say that those steps run in O(1) time ("constant time").
So let's consider step 3:
If there are n elements in the array, then step 3 needs to do n comparisons in the worst case (and step 4 advances the pointers up to n times). So we say that step 3 takes O(n) time.
Since steps 3 and 4 take O(n) time, and all other steps are faster, we say that the total time complexity of your algorithm is O(n).
When we write O(f), where f is some function, we mean that the algorithm runs within some constant factor of f for large values.
Take your algorithm, for example. For large values of n (say n = 1000), the algorithm doesn't take exactly n steps. Suppose that a comparison takes 5 instructions to complete in your algorithm, on your machine of choice. (It could be any constant number; I'm just choosing 5 as an example.) And suppose that steps 1, 2, and 5 each take some constant number of steps, totalling 10 instructions for all three of those steps.
Then for n = 1000 your algorithm would take:
Steps 1 + 2 + 5 = 10 instructions. Step 3 = 5*1000 = 5000 instructions.
This is a total of 5010 instructions, which is about 5*n instructions, i.e. within a constant factor of n, which is why we say it is O(n).
For very large n, the 10 in f = 5*n + 10 becomes more and more insignificant, as does the factor 5. For this reason, we simply reduce the function to "f is within a constant factor of n for large n" by saying f is in O(n).
In this way it's easy to express the idea that a quadratic function like f1 = n^2 + 2 is always larger than any linear function like f2 = 10000*n + 50000 once n is large enough, by simply writing f1 as O(n^2) and f2 as O(n).
You are correct. The running time is O(n) where n is the number of elements in the arrays. Each time you add 1 element to the arrays, you would have to execute the loop 1 more time in the worst case.
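For reference, here is a minimal C version of the comparison (assuming both arrays have the same length n):

#include <stdbool.h>
#include <stdio.h>

/* Returns true if the arrays match element by element. This does at most
   n comparisons, i.e. O(n), and exits on the first mismatch. */
bool same_arrays(const int a[], const int b[], int n) {
    for (int i = 0; i < n; i++)
        if (a[i] != b[i])
            return false;   /* one mismatch is enough to answer "not the same" */
    return true;
}

int main(void) {
    int a1[] = {1, 5, 3, 2, 10, 12};
    int a2[] = {3, 2, 1, 5, 10, 12};
    printf(same_arrays(a1, a2, 6) ? "same\n" : "not the same\n");
    return 0;
}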
