What will be the complexity of this code?

My code is:
#include <iostream>
#include <vector>
using namespace std;

const int N = 8;                     // assumed value so the snippet compiles

vector<int> permutation(N);
vector<int> used(N, 0);

void outputPermutation() {           // prints the current permutation
    for (int i = 0; i < N; i++)
        cout << permutation[i] << ' ';
    cout << '\n';
}

// note: renamed from "try", which is a reserved keyword in C++
void tryValue(int which, int what) {
    // try taking the number "what" as the "which"-th element
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1)
        outputPermutation();
    else
        // try all possibilities for the next element
        for (int next = 0; next < N; next++)
            if (!used[next])
                tryValue(which + 1, next);
    used[what] = 0;
}

int main() {
    // try all possibilities for the first element
    for (int first = 0; first < N; first++)
        tryValue(0, first);
}
I was learning complexity from some website where I came across this code. As per my understanding, the following line iterates N times. So the complexity is O(N).
for (int first=0; first<N; first++)
Next I am considering the recursive call.
for (int next = 0; next < N; next++)
    if (!used[next])
        tryValue(which + 1, next);
So this recursive call has a number of steps t(n) = N·c + t(0) (where c is some constant per step).
From this we can say that the complexity of this step is O(N).
Thus the total complexity is O(N·N) = O(N^2).
Is my understanding right?
Thanks!

The complexity of this algorithm is O(N!) (or even O(N! * N), if outputPermutation takes O(N), which seems likely).
This algorithm outputs all permutations of the natural numbers 0..N-1, without repetitions.
The recursive function (try in the original, tryValue above) sequentially tries to put each unused number into position which, and for each attempt it recursively invokes itself for the next position, until which reaches N-1. At level which, each call makes (N - which - 1) recursive calls, because the elements chosen so far are marked as used in order to eliminate repetitions. Thus the leaves of the recursion tree number N * (N - 1) * (N - 2) * ... * 1 = N!, and the algorithm takes on the order of N! steps.

It is a recursive function. The function tryValue (try in the original) calls itself recursively, so there is a loop in main(), a loop in tryValue(), a loop in the recursive call to tryValue(), a loop in the next recursive call to tryValue(), and so on.
You need to analyse very carefully what this function does, or you will get a totally wrong result (as you did). You might consider actually running this code with values of N = 1 to 20 and measuring the time. You will see that it is most definitely not O(N^2). Actually, don't skip any of the values of N; you will see why.
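Both answers are easy to check empirically. Below is a minimal Java sketch (a port of the C++ above, not the original code; outputPermutation is replaced by a leaf counter so the measurement isn't dominated by printing). The leaf count comes out to exactly N!, and the running time roughly multiplies by N from one line to the next:

public class PermutationTiming {
    static int n;
    static boolean[] used;
    static long leaves;

    static void tryValue(int which, int what) {
        used[what] = true;
        if (which == n - 1)
            leaves++;                           // stand-in for outputPermutation()
        else
            for (int next = 0; next < n; next++)
                if (!used[next])
                    tryValue(which + 1, next);
        used[what] = false;
    }

    public static void main(String[] args) {
        for (n = 1; n <= 11; n++) {
            used = new boolean[n];
            leaves = 0;
            long start = System.nanoTime();
            for (int first = 0; first < n; first++)
                tryValue(0, first);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println("N=" + n + " leaves=" + leaves + " time=" + ms + "ms");
        }
    }
}

Around N = 11 or 12 the run already takes seconds rather than milliseconds; that blow-up is the factorial growth the answer is pointing at.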

Related

How many operations are in this method?

I am currently learning about asymptotic analysis; however, I am unsure of how many operations are in these nested loops. Would somebody be able to help me understand how to approach them? Also, what would the Big-O notation be? (This is also my first time on Stack Overflow, so please forgive any formatting errors in the code.)
public static void primeFactors(int n) {
    while (n % 2 == 0) {
        System.out.print(2 + " ");
        n /= 2;
    }
    for (int i = 3; i <= Math.sqrt(n); i += 2) {
        while (n % i == 0) {
            System.out.print(i + " ");
            n /= i;
        }
    }
}
In the worst case, the first loop (while) is O(log(n)). In the second loop, the outer loop runs in O(sqrt(n)) and the inner loop runs in O(log_i(n)). Hence, the time complexity of the second loop (inner and outer in total) is:
O(sum_{i = 3}^{sqrt(n)} log_i(n))
Therefore, the time complexity of the mentioned algorithm is O(sqrt(n) log(n)).
Notice that if you account for n being modified inside the inner loop, which affects sqrt(n) in the outer loop's condition, then the complexity of the second loop is O(sqrt(n)). Therefore, under this assumption, the time complexity of the algorithm is O(sqrt(n)) + O(log(n)) = O(sqrt(n)).
First, we see that the first loop is really a special case of the inner loop that occurs in the other loop, with i equal to two. It was separated as a special case in order to be able to increase i with steps of 2 instead of 1. But from the point of view of asymptotic complexity the step by 2 makes no difference: it represents a constant coefficient, which we can ignore. And so for our analysis we can just rewrite the code to this:
public static void primeFactors(int n) {
    for (int i = 2; i <= Math.sqrt(n); i += 1) { // note the change in start and increment value
        while (n % i == 0) {
            System.out.print(i + " ");
            n /= i;
        }
    }
}
The number of times that n /= i is executed corresponds to the number of non-distinct prime divisors a number has. According to this Q&A, that number is O(log log n). It is not straightforward to derive this, so I had to look it up.
We should also consider the number of times the for loop iterates. The Math.sqrt(n) boundary for i can lower as the for loop iterates. The more divisions take place, the (much) fewer iterations the for loop has to make.
We can see that at the time that the loop exits, i has surpassed the square root of the greatest prime divisor of n. In the worst case that greatest prime divisor is n itself (when n is prime). So the for loop can iterate up to the square root of n, so O(√n) times. In that case the inner loop never iterates (no divisions).
We should thus see which term dominates, and we get O(√n + log log n). This is O(√n).
The first loop divides n by 2 until it is no longer divisible by 2. The maximum number of times this can happen is log2(n).
The second loop, at first sight, seems to run the inner loop sqrt(n) times, where the inner loop is also O(log(n)), but that is actually not the case. Every time the while condition of the second loop is satisfied, n is drastically decreased, and since the condition of a for loop is evaluated on each iteration, sqrt(n) decreases too. The worst case actually happens if the condition of the while loop is never satisfied, i.e. if n is prime.
Hence the overall time complexity is O(sqrt(n)).
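To make the two cases concrete, here is a small Java sketch (the step counter and the merged loop are illustrative additions, not part of the original code) that counts loop iterations for a prime n versus a power of two:

public class PrimeFactorsCount {
    static long countIterations(int n) {
        long steps = 0;
        for (int i = 2; i <= Math.sqrt(n); i++) {
            steps++;                  // one outer-loop iteration
            while (n % i == 0) {
                steps++;              // one division
                n /= i;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(1000003));  // prime: about 1000 steps
        System.out.println(countIterations(1 << 20));  // 2^20: about 21 steps
    }
}

The prime input forces the outer loop all the way to sqrt(n), while the highly divisible input collapses after a handful of divisions, exactly as the answers describe.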

Time complexity of two different solution for Bubble Sort

I made a solution for Bubble sort in two ways.
One checks from the beginning to the end every time. The other also checks from the beginning to the end, but the end gets smaller by one each pass, because we can be sure that the last element is in its sorted place once each pass finishes.
In my opinion, the time complexity of the first one is O(n^2) and the other's is O(nlogn). Is it right?
first
var bubbleSort = function(array) {
    let changed = true;
    let temp;
    while (changed) {
        changed = false;
        for (let i = 0; i < array.length - 1; i++) {
            if (array[i] > array[i + 1]) {
                temp = array[i + 1];
                array[i + 1] = array[i];
                array[i] = temp;
                changed = true;
            }
        }
    }
    return array;
};
second
var bubbleSort = function(array) {
    let temp;
    for (let i = 0; i < array.length - 1; i++) {
        for (let j = 0; j < array.length - 1 - i; j++) {
            if (array[j] > array[j + 1]) {
                temp = array[j + 1];
                array[j + 1] = array[j];
                array[j] = temp;
            }
        }
    }
    return array;
};
Both versions of bubble sort are O(n²) in the worst case and also in the average case.
The first version, which I call "naive bubble sort", has an outer loop and an inner loop. The inner loop iterates n-1 times, and the outer loop iterates up to n times: at most n-1 passes that perform swaps (a corollary of the fact that the second version, where the outer loop is limited to n-1 iterations, is correct), plus one final pass that confirms nothing moved. So the worst case number of iterations is about n * (n-1) = O(n²). Its best case running time is O(n), but this happens rarely enough that the average is still O(n²).
The second version, which is the one normally referred to as "bubble sort", has an outer loop which iterates n-1 times, and an inner loop which iterates n-1-i times. Since i is on average about n/2, the number of iterations is approximately n * n/2 = O(n²). There is no short-circuiting, so this is the best, worst and average case for this version of the algorithm.
The average case for both algorithms is O(n²) because of a fundamental fact about the bubble sort algorithm: it performs one swap per inversion in the input array. An inversion is a pair of indices whose elements are out of order. There are a total of (n choose 2) = n * (n-1) / 2 pairs, and on average half of them will be inversions. To see this, consider that if an array has k inversions, then the reverse of that array has (n choose 2) - k inversions. So, either version of bubble sort does an average of n * (n-1) / 4 swaps, which is O(n²).
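The one-swap-per-inversion fact is easy to verify empirically. Here is a small Java sketch (helper names are mine) that counts the swaps performed by the first version and counts inversions directly; the two numbers always match, and for random input they land near n(n-1)/4:

import java.util.Random;

public class SwapsVsInversions {
    static long countSwaps(int[] a) {
        long swaps = 0;
        boolean changed = true;
        while (changed) {
            changed = false;
            for (int i = 0; i < a.length - 1; i++) {
                if (a[i] > a[i + 1]) {
                    int t = a[i]; a[i] = a[i + 1]; a[i + 1] = t;
                    swaps++;
                    changed = true;
                }
            }
        }
        return swaps;
    }

    static long countInversions(int[] a) {
        long inv = 0;
        for (int i = 0; i < a.length; i++)
            for (int j = i + 1; j < a.length; j++)
                if (a[i] > a[j]) inv++;
        return inv;
    }

    public static void main(String[] args) {
        int[] a = new Random(42).ints(1000, 0, 1000).toArray();
        long inversions = countInversions(a.clone());
        long swaps = countSwaps(a);
        // both print the same value, close to 1000 * 999 / 4 = 249750
        System.out.println(inversions + " " + swaps);
    }
}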

Big O calculation given a piece of code

These programs compute the sum ∑_{i=0}^{n-1} a_i·x^i, i.e. they evaluate the polynomial a_0 + a_1·x + ... + a_{n-1}·x^(n-1) at the point x.
I am trying to figure out big-O calculations. I have done a lot of studying, but I am having a problem getting this down. I understand that big O is the worst case scenario, or the upper bound. From what I can figure, program one has two for loops: one runs the length of the array, and the other runs from zero up to the current index of the first loop. I think that if both ran the full length of the array then it would be quadratic, O(N^2). Since the second loop only runs the full length of the array once, I am thinking O(N log N).
Second program has only one for loop so it would be O(N).
Am I close? If not, please explain to me how I would calculate this. Since this is homework, I will need to be able to figure out something like this on the test.
Program 1
// assume input array a is not null
public static double q6_1(double[] a, double x)
{
    double result = 0;
    for (int i = 0; i < a.length; i++)
    {
        double b = 1;
        for (int j = 0; j < i; j++)
        {
            b *= x;      // build up b = x^i one factor at a time
        }
        result += a[i] * b;
    }
    return result;
}
Program 2
// assume input array a is not null
public static double q6_2(double[] a, double x)
{
    double result = 0;
    for (int i = a.length - 1; i >= 0; i--)
    {
        result = result * x + a[i];   // fold in one coefficient per step
    }
    return result;
}
I'm using N to refer to the length of the array a.
The first one is O(N^2). The inner loop runs 0, 1, 2, 3, ..., N - 1 times as the outer loop progresses. This sum is exactly N(N-1)/2, which is O(N^2).
The second one is O(N). It is simply iterating through the length of the array.
The complexity of a program is basically the number of instructions executed.
When we talk about the upper bound, it means we are considering the worst case, which is what every programmer should take into consideration.
Let n = a.length;
Now, coming back to your question: you are saying that the time complexity of the first program should be O(n log n), which is wrong. When i = a.length - 1, the inner loop iterates from j = 0 up to j = i - 1, i.e. nearly n times. Hence the complexity is O(n^2).
You are correct in judging the time complexity of the second program, which is O(n).
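As an aside, Program 2 is Horner's rule for evaluating a polynomial. A short Java sketch (method names are mine) shows that both programs compute the same value, the first with O(n^2) multiplications and the second with O(n):

public class PolyEval {
    static double naive(double[] a, double x) {          // Program 1
        double result = 0;
        for (int i = 0; i < a.length; i++) {
            double b = 1;
            for (int j = 0; j < i; j++) b *= x;          // recompute x^i from scratch
            result += a[i] * b;
        }
        return result;
    }

    static double horner(double[] a, double x) {         // Program 2
        double result = 0;
        for (int i = a.length - 1; i >= 0; i--)
            result = result * x + a[i];                  // fold in one coefficient per step
        return result;
    }

    public static void main(String[] args) {
        double[] a = {1, 2, 3, 4};                       // 1 + 2x + 3x^2 + 4x^3
        System.out.println(naive(a, 2.0));               // 49.0
        System.out.println(horner(a, 2.0));              // 49.0
    }
}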

Time Complexity of a Recursive Algorithm with a Condition not Based on Size

I have a question about the complexity of a recursive function
The code (in C#) is like this:
public void sort(int[] a, int n)
{
    bool done = true;
    int j = 0;
    // forward pass: bubble larger elements toward the end
    while (j <= n - 2)
    {
        if (a[j] > a[j + 1])
        {
            // swap a[j] and a[j + 1]
            int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
            done = false;
        }
        j++;
    }
    // backward pass: bubble smaller elements toward the front
    j = n - 1;
    while (j >= 1)
    {
        if (a[j] < a[j - 1])
        {
            // swap a[j] and a[j - 1]
            int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
            done = false;
        }
        j--;
    }
    if (!done)
        sort(a, n);
}
Now, the difficulty I have is the recursive part of the function.
In all of the recursions I have seen so far, we could determine the number of recursive calls from the input size, because each call is made with a smaller input.
But for this problem, the recursive part doesn't depend on the input size; it depends on whether the elements are sorted or not. I mean, if the array is already sorted, the function will run in O(n) because of the two loops and no recursive calls (I hope I'm right about this part).
How can we determine the complexity of the recursive part?
O(f(n)) means that your algorithm never takes more than a constant multiple of f(n) steps, regardless of the input (considering only the size of the input). So you should find the worst case for an input of size n.
This one looks like a bubble sort algorithm (although a weirdly complicated one), which is O(n^2). In the worst case, every call of the sort function takes O(n) and transports the highest number to the end of the array; you have n items, so it's O(n) * O(n) => O(n^2).
This is bubble sort. It's O(n^2). Since the algorithm swaps adjacent elements, the running time is proportional to the number of inversions in a list, which is O(n^2). The number of recursions will be O(n). The backward pass just causes it to recurse about half the time but doesn't affect the actual complexity--it's still doing the same amount of work.
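To see the O(n) recursion count concretely, here is a Java port of the (corrected) C# code above, with a call counter added for illustration. On a reverse-sorted array it recurses roughly n/2 times, each call doing O(n) work, which gives the O(n^2) total:

public class RecursiveSort {
    static int calls;

    static void sort(int[] a, int n) {
        calls++;
        boolean done = true;
        for (int j = 0; j <= n - 2; j++)                 // forward pass
            if (a[j] > a[j + 1]) {
                int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                done = false;
            }
        for (int j = n - 1; j >= 1; j--)                 // backward pass
            if (a[j] < a[j - 1]) {
                int t = a[j]; a[j] = a[j - 1]; a[j - 1] = t;
                done = false;
            }
        if (!done)
            sort(a, n);
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = n - i;        // worst case: reverse-sorted
        sort(a, n);
        System.out.println(calls);                       // roughly n/2, i.e. O(n)
    }
}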

Analysis of algorithm by Average-case

I'm trying to solve a very simple algorithm analysis (apparently isn't so simple to me).
The algorithm is going like this:
int findIndexOfN(int A[], int n) {
    // This algorithm looks for the value n in an array of size n.
    // It returns the index of n if found, otherwise it returns -1.
    // It is possible that n doesn't appear in the array; n appears at most once.
    // The probability that n doesn't appear in the array is 1/(n+1).
    // For each cell in the array, the probability that n is found at index i
    // is 1/(n+1).
    int index, fIndex;
    index = 0;
    fIndex = -1;
    while (index < n && fIndex == -1) {
        if (A[index] == n) {
            fIndex = index;
        }
        index++;
    }
    return fIndex;
}
Now I'm trying to calculate the average running time. I think this is a geometric series, but I can't find a way to merge the probabilities with the complexity.
For example, I know that if the value n is found at index 1, the loop needs two iterations (checking index 0, then index 1) to find it.
The probability, on the other hand, gives me some fractions...
Here is what I got so far:
sum from i = 1 to n of ( (1/n) * ((n-1)/n)^(i-1) )
But again, I can't find the connection between this formula and T(n), and I also can't find a BigOh, BigOmega, or Theta relation for this function.
This algorithm is BigOh(n), BigOmega(n) and Theta(n).
To know this you don't need to compute probabilities or use the Master Theorem (as your function isn't recursive). You just need to see that the function is like a loop over n terms. Maybe it would be easier if you represented your function like this:
for (int i = 0; i < n; ++i) {
    if (A[i] == n)
        return i;
}
I know this seems counterintuitive, because if n is the first element of your array, indeed you only need one operation to find it. What is important here is the general case, where n is somewhere in the middle of your array.
Let's put it like this: given the probabilities you wrote, there is a 50% chance that n is between the elements n/4 and 3n/4 of your array. In this case, you need between n/4 and 3n/4 tests to find your element, which evaluates to O(n) (you drop the constants when you do BigOh analysis).
If you want to know the average number of operations you will need, you can compute a series, like you wrote in the question. The actual series giving you the average number of operations is
1/(n+1) + 2/(n+1) + 3/(n+1) + ... + n/(n+1)
Why? Because you need one test if n is in the first position (with probability 1/(n+1)), two tests if it is in the second position (with probability 1/(n+1)), ..., and i tests if it is in the i-th position (with probability 1/(n+1)).
This series evaluates to
n(n+1)/2 * 1/(n+1) = n/2
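A quick Java simulation (illustrative; it models the scan cost directly instead of building actual arrays) confirms this. The exact expectation, once you also include the runs where n is absent and the scan costs n steps, is n/2 + n/(n+1), so for n = 1000 the printed average is about 501:

import java.util.Random;

public class AverageCase {
    public static void main(String[] args) {
        int n = 1000;
        int trials = 100000;
        Random rnd = new Random(1);
        long totalSteps = 0;
        for (int t = 0; t < trials; t++) {
            // pos is uniform over n+1 outcomes; pos == n means "not in the array"
            int pos = rnd.nextInt(n + 1);
            // the linear scan does pos+1 iterations if found, n if absent
            totalSteps += (pos == n) ? n : pos + 1;
        }
        System.out.println((double) totalSteps / trials);   // about 501
    }
}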
