Order of complexity of Fibonacci functions

I have the following three algorithms that compute the Fibonacci numbers, and I would like to know the order of complexity of each. How can I determine this?
Method 1
------------------------------------------------------------------------
long long fibb(long long a, long long b, int n) {
    return (--n > 0) ? fibb(b, a + b, n) : a;
}
Method 2
------------------------------------------------------------------------
long long int fibb(int n) {
    // locals must be long long as well, or the intermediate sums overflow
    long long int fnow = 0, fnext = 1, tempf;
    while (--n > 0) {
        tempf = fnow + fnext;
        fnow = fnext;
        fnext = tempf;
    }
    return fnext;
}
Method 3
------------------------------------------------------------------------
long long unsigned fib(unsigned n) {
    // PHI is the golden ratio, (1 + sqrt(5)) / 2, per Binet's formula
    return floor((pow(PHI, n) - pow(1 - PHI, n)) / sqrt(5));
}
Correct me if I am wrong, but my guess would be that method 1 is O(2^n) since it is recursive, method 2 is O(n), and the last one is O(1).

Methods 1 and 2 both have complexity O(n). The reasons are straightforward:
Method 1 recurses exactly n-1 times, and each call performs a constant amount of arithmetic. Recursion alone does not imply O(2^n); that happens only when each call spawns multiple further calls, as in the naive two-branch Fibonacci recursion. Here each call makes exactly one recursive call, so the total work is linear.
Method 2 iterates exactly n-1 times, and each iteration is constant time, simple arithmetic again.
Method 3 does indeed have a complexity of O(1), but it may not compute the correct value, merely an approximation: pow and sqrt work in floating point, so rounding error eventually corrupts the result.
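To see that last point concretely, here is a minimal sketch (assuming PHI is defined as the golden ratio, which Binet's formula requires; the question's code doesn't show its definition) that compares Method 3 against the exact iterative values and reports the first n where they disagree. With IEEE doubles the mismatch typically shows up somewhere around n = 70:

#include <cstdio>
#include <cmath>

const double PHI = (1.0 + std::sqrt(5.0)) / 2.0; // golden ratio

// Method 3, with an explicit cast back to an integer type
unsigned long long fib_binet(unsigned n) {
    return (unsigned long long)std::floor(
        (std::pow(PHI, n) - std::pow(1.0 - PHI, n)) / std::sqrt(5.0));
}

int main() {
    unsigned long long fnow = 0, fnext = 1; // exact values, as in Method 2
    for (unsigned n = 1; n <= 90; ++n) {
        if (fib_binet(n) != fnext) {
            std::printf("first mismatch at n = %u\n", n);
            return 0;
        }
        unsigned long long tempf = fnow + fnext;
        fnow = fnext;
        fnext = tempf;
    }
    std::printf("no mismatch up to n = 90\n");
    return 0;
}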

Related

Worst-case time complexity for adding two binary numbers using bit manipulation

I'm looking over the solution to adding two binary numbers using the bit manipulation approach. The inputs are two binary-number strings, and the expected output is the string for the binary addition result.
Here is Java code similar to the solution:

import java.math.BigInteger;

class BinaryAddition {
    public String addBinary(String a, String b) {
        BigInteger x = new BigInteger(a, 2);
        BigInteger y = new BigInteger(b, 2);
        BigInteger zero = new BigInteger("0", 2);
        BigInteger carry, answer;
        while (y.compareTo(zero) != 0) {
            answer = x.xor(y);             // sum bits, ignoring carries
            carry = x.and(y).shiftLeft(1); // carry bits, shifted into place
            x = answer;
            y = carry;
        }
        return x.toString(2);
    }
}
While the algorithm makes a lot of sense, I'm somehow arriving at a worst-case time complexity different from the O(M + N) stated in the LeetCode solution, where M and N refer to the lengths of the input strings.
In my opinion, it should be O(max(M, N)^2), since the while loop can run up to max(M, N) times, and each iteration can take max(M, N) operations. For example, adding 1111 and 1001 would take 4 iterations of the while loop.
Appreciate your advice on this or where I might have gone wrong, thanks.
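For what it's worth, the iteration count of that loop is easy to observe directly. Here is a small standalone C++ sketch (the name addRounds is mine, and it uses 64-bit integers in place of BigInteger purely for illustration) that counts the rounds of the XOR/carry loop; the count grows with the length of the longest carry chain, not with a fixed pass over the digits:

#include <cstdio>
#include <cstdint>

// Count how many rounds the XOR/carry loop needs for one addition.
int addRounds(uint64_t x, uint64_t y) {
    int rounds = 0;
    while (y != 0) {
        uint64_t sum   = x ^ y;        // sum bits, ignoring carries
        uint64_t carry = (x & y) << 1; // carry bits, shifted into place
        x = sum;
        y = carry;
        ++rounds;
    }
    return rounds;
}

int main() {
    std::printf("%d\n", addRounds(0b1111, 0b1001)); // 4, as in the question
    std::printf("%d\n", addRounds(0b1111, 0b0001)); // 5: a full carry chain
    return 0;
}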

Complexity analysis for the permutations algorithm

I'm trying to understand the time and space complexity of an algorithm for generating an array's permutations. Given a partially built permutation where k out of n elements are already selected, the algorithm selects element k+1 from the remaining n-k elements and calls itself to select the remaining n-k-1 elements:
public static List<List<Integer>> permutations(List<Integer> A) {
    List<List<Integer>> result = new ArrayList<>();
    permutations(A, 0, result);
    return result;
}

public static void permutations(List<Integer> A, int start, List<List<Integer>> result) {
    if (A.size() - 1 == start) {
        result.add(new ArrayList<>(A));
        return;
    }
    for (int i = start; i < A.size(); i++) {
        Collections.swap(A, start, i);
        permutations(A, start + 1, result);
        Collections.swap(A, start, i);
    }
}
My thoughts are that in each call we swap the collection's elements 2n times, where n is the number of elements to permute, and make n recursive calls. So the running time seems to fit the recurrence relation
T(n) = nT(n-1) + n = n[(n-1)T(n-2) + (n-1)] + n = ... = n + n(n-1) + n(n-1)(n-2) + ... + n! = n![1/(n-1)! + 1/(n-2)! + ... + 1] ≈ e * n!,
hence the time complexity is O(n!) and the space complexity is O(max(n!, n)), where n! is the total number of permutations and n is the height of the recursion tree.
This problem is taken from the Elements of Programming Interviews book, and they're saying that the time complexity is O(n*n!) because "The number of function calls C(n)=1+nC(n-1) ... [which solves to] O(n!) ... [and] ... we do O(n) computation per call outside of the recursive calls".
Which time complexity is correct?
The time complexity of this algorithm, counted by the number of basic operations performed, is Θ(n * n!). Think about the size of the result list when the algorithm terminates: it contains n! permutations, each of length n, and we cannot create a list with n * n! total elements in less than that amount of time. The space complexity is the same, since the recursion stack only ever holds O(n) calls at a time, so the size of the output list dominates the space complexity.
If you count only the number of recursive calls to permutations(), the function is called O(n!) times, although this is usually not what is meant by 'time complexity' without further specification. In other words, you can generate all permutations in O(n!) time, as long as you don't read or write those permutations after they are generated.
The part where your derivation of the run-time breaks down is in the definition of T(n). If you define T(n) as 'the run-time of permutations(A, start) when the input, A, has length n', then you cannot define it recursively in terms of T(n-1) or any other function of T(), because the length of the input in all recursive calls is n, the length of A.
A more useful way to define T(n) is by specifying it as the run-time of permutations(A', start), when A' is any permutation of a fixed, initial array A, and A.length - start == n. It's easy to write the recurrence relation here:
T(x) = x * T(x-1) + O(x) if x > 1
T(1) = A.length
This takes into account the fact that the last recursive call, T(1), has to perform O(A.length) work to copy that array to the output, and this new recurrence gives the result from the textbook.
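To see the Θ(n * n!) bound concretely, here is a C++ re-implementation of the same swap-based algorithm (a sketch, not the book's code, with a hypothetical counter named copied added for illustration) that counts the elements written to the output; the count comes out to exactly n * n!:

#include <cstdio>
#include <vector>
#include <utility>

long long copied = 0; // elements written to the output list

void permutations(std::vector<int>& a, std::size_t start,
                  std::vector<std::vector<int>>& result) {
    if (start == a.size() - 1) {
        result.push_back(a);   // copying one permutation costs n
        copied += a.size();
        return;
    }
    for (std::size_t i = start; i < a.size(); ++i) {
        std::swap(a[start], a[i]);
        permutations(a, start + 1, result);
        std::swap(a[start], a[i]);
    }
}

int main() {
    for (int n = 1; n <= 8; ++n) {
        std::vector<int> a(n);
        for (int i = 0; i < n; ++i) a[i] = i;
        std::vector<std::vector<int>> result;
        copied = 0;
        permutations(a, 0, result);
        long long fact = 1;
        for (int i = 2; i <= n; ++i) fact *= i;
        std::printf("n=%d copied=%lld n*n!=%lld\n", n, copied, n * fact);
    }
    return 0;
}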

What will be the complexity of this code?

My code is:

vector<int> permutation(N);
vector<int> used(N, 0);

// note: "try" is a reserved word in C++, so the function must be renamed
// before this will actually compile
void try(int which, int what) {
    // try taking the number "what" as the "which"-th element
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1)
        outputPermutation();
    else
        // try all possibilities for the next element
        for (int next = 0; next < N; next++)
            if (!used[next])
                try(which + 1, next);
    used[what] = 0;
}

int main() {
    // try all possibilities for the first element
    for (int first = 0; first < N; first++)
        try(0, first);
}
I was learning complexity from some website where I came across this code. As per my understanding, the following line iterates N times. So the complexity is O(N).
for (int first=0; first<N; first++)
Next I am considering the recursive call.
for (int next = 0; next < N; next++)
    if (!used[next])
        try(which + 1, next);
So this recursive call involves t(n) = N*c + t(0) steps, where c is some constant per-iteration cost.
For this step, then, the complexity is O(N).
Thus the total complexity would be O(N*N) = O(N^2).
Is my understanding right?
Thanks!
The complexity of this algorithm is O(N!) (or even O(N! * N) if outputPermutation takes O(N), which seems likely).
This algorithm outputs all permutations of the natural numbers 0..N-1 without repetitions.
The recursive function try sequentially tries to put each still-unused element into position which, and for each choice it recursively invokes itself for the next position, until which reaches N-1. Because elements already placed are marked as used (which is what eliminates repetitions), a call at depth which makes N - which - 1 further calls. Thus the recursion tree has N * (N-1) * (N-2) * ... * 1 = N! leaves, and the algorithm takes on the order of N! steps.
It is a recursive function. The function "try" calls itself recursively, so there is a loop in main (), a loop in try (), a loop in the recursive call to try (), a loop in the next recursive call to try () and so on.
You need to analyse very carefully what this function does, or you will get a totally wrong result (as you did). You might consider actually running this code, with values of N = 1 to 20 and measuring the time. You will see that it is most definitely not O (N^2). Actually, don't skip any of the values of N; you will see why.
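In that spirit, here is a sketch of the same code instrumented with a call counter (the function is renamed place, since try is a reserved word in C++, and outputPermutation is reduced to a stub; the counter named calls is mine). The counts grow factorially, not quadratically:

#include <cstdio>
#include <vector>

int N;
std::vector<int> permutation, used;
long long calls = 0;

void place(int which, int what) {
    ++calls;
    permutation[which] = what;
    used[what] = 1;
    if (which == N - 1) {
        // outputPermutation() would go here; even an O(1) stub
        // leaves the call count factorial
    } else {
        for (int next = 0; next < N; next++)
            if (!used[next])
                place(which + 1, next);
    }
    used[what] = 0;
}

int main() {
    for (N = 1; N <= 10; ++N) {
        permutation.assign(N, 0);
        used.assign(N, 0);
        calls = 0;
        for (int first = 0; first < N; first++)
            place(0, first);
        std::printf("N=%d calls=%lld\n", N, calls);
    }
    return 0;
}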

Recurrence relation of the following function

I'm trying to determine the recurrence relation of the following recursive function. I think I have done it correctly, but would like some input on my method.
Solve for C(n), the number of multiplications that this function does:
// precondition: n > 0
int fct(const int A[], int n) {
    if (n == 1)
        return A[0] * A[0];
    else
        return A[n - 1] * fct(A, n - 1) * A[n - 1];
}
Here, exactly two multiplications occur in the general case, as well as a recursive call on n-1.
C(1) = 1
C(n) = C(n-1) + 2 // 2 for the two multiplications, plus the cost of the recursive call C(n-1)
Therefore:
C(2) = C(1) + 2 = 1 + 2 = 3
C(3) = C(2) + 2 = 3 + 2 = 5
C(4) = C(3) + 2 = 5 + 2 = 7
C(n) = 2n - 1
so the big-O is O(n)?
Correct. Keep in mind that the depth of the recursion depends on n: each call does a constant number of multiplications and then recurses on n-1. The multiplications are constant time, so you are performing a constant-time operation n times, which results in the O(n) time complexity you got.
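As a sanity check, here is a small sketch with a hypothetical global counter (mults) added purely for illustration; it confirms C(n) = 2n - 1:

#include <cstdio>

int mults = 0; // counts multiplications performed

int fct(const int A[], int n) {
    if (n == 1) {
        mults += 1;          // A[0] * A[0]
        return A[0] * A[0];
    }
    mults += 2;              // two multiplications per non-base call
    return A[n - 1] * fct(A, n - 1) * A[n - 1];
}

int main() {
    int A[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    for (int n = 1; n <= 8; ++n) {
        mults = 0;
        fct(A, n);
        std::printf("n=%d C(n)=%d (2n-1=%d)\n", n, mults, 2 * n - 1);
    }
    return 0;
}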

Tracking Runtime Complexity

What is the best way to calculate runtime complexity for any method? It's easy to do that for non-recursive methods, like bubble sort:

outer-for loop {
    inner-for loop {
        compare and exchange
    }
}
To check, the best way is to put a counter in the innermost loop. But when the method is recursive, where should I put the counter? For instance, merge sort:
sort(int[] array) {
    left = first-half
    right = second-half
    sort(left);
    sort(right);
    return merge(left, right);
}

merge(int[] left, right) {
    count = length(left) + length(right);
    int[] result;
    loop count times {
        compare and put in result;
    }
    return result;
}
Since this is merge sort, the big-O is O(n log n), so an array of 100 ints should give a count of exactly 200. Where should the counter go? If I put it at the top of sort(..), I get an average of 250, 280, 300, which must be wrong. What is the best place for this counter?
Reference: http://en.wikipedia.org/wiki/Mergesort
Thanks.
Since this is merge sort, the big-O is O(n log n), so an array of 100 ints should give a count of exactly 200.
Not even close to right.
Computational complexity expressed in big-O notation does not tell you exactly how many steps or computational operations will be executed. There's a reason it's called asymptotic rather than exact complexity: it only gives you a function that bounds (more precisely, gives an upper bound on) the running time of the algorithm with respect to the size of the input.
So O(n log n) doesn't mean that 200 operations will be performed for 100 elements (how come, by the way, that the base of the logarithm must be 10?). It tells you that if you increase the size of your input, the (average-case) running time will grow in proportion to n log n, i.e. the number of elements times the logarithm of that number.
To the point: if you want to count the number of calls to a recursive function, you should put the counter in as an argument, like this:
void merge_sort(int array[], size_t length, int *counter)
{
    (*counter)++;
    if (length <= 1)
        return;
    // sort each half, passing the counter down, then merge
    // (details of the algorithm omitted):
    merge_sort(array, length / 2, counter);
    merge_sort(array + length / 2, length - length / 2, counter);
}
and call it like this:
int num_calls = 0;
merge_sort(array, sizeof(array) / sizeof(array[0]), &num_calls);
printf("Called %d times\n", num_calls);
I think you have slightly misunderstood the concept of big-O notation. If the complexity is O(n log n) and the value of n is 100, there is no rule that the program must execute exactly 200 steps; big-O only gives an upper bound. For example, consider selection sort, with its O(n^2) complexity: even with n = 100, a counter set inside the inner loop will not read exactly 100^2 when the algorithm finishes. So the values you measured (250, 280, 300, etc.) are perfectly valid, because they are all bounded by k * n log n, where k is an arbitrary constant.
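To make this concrete, here is a minimal C++ merge sort (a sketch of my own, not the code from the question; all names are mine) that counts element comparisons for n = 100. The counter lands in the same ballpark as n * log2(n), but not exactly on it:

#include <cstdio>
#include <cmath>
#include <vector>

long long comparisons = 0; // counts element comparisons in the merge step

std::vector<int> merge_sorted(const std::vector<int>& l, const std::vector<int>& r) {
    std::vector<int> out;
    std::size_t i = 0, j = 0;
    while (i < l.size() && j < r.size()) {
        ++comparisons;
        out.push_back(l[i] <= r[j] ? l[i++] : r[j++]);
    }
    while (i < l.size()) out.push_back(l[i++]);
    while (j < r.size()) out.push_back(r[j++]);
    return out;
}

std::vector<int> merge_sort(std::vector<int> a) {
    if (a.size() <= 1) return a;
    std::vector<int> left(a.begin(), a.begin() + a.size() / 2);
    std::vector<int> right(a.begin() + a.size() / 2, a.end());
    return merge_sorted(merge_sort(left), merge_sort(right));
}

int main() {
    std::vector<int> a(100);
    for (int i = 0; i < 100; ++i) a[i] = (i * 37) % 100; // scrambled input
    comparisons = 0;
    merge_sort(a);
    std::printf("comparisons=%lld, n*log2(n)=%.0f\n",
                comparisons, 100 * std::log2(100.0));
    return 0;
}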
