What's the time and space complexity of the recursive code snippet below? - algorithm

Below is a recursive function to calculate the value of the binomial coefficient C, i.e. the number of combinations. I wish to understand this code's time and space complexity in terms of N and K (assuming that we are calculating nCk).
public class ValueOfBinomialCofecientC {
    static int globalhitsToThisMethod = 0;

    public static void main(String[] args) {
        // Calculate nCk.
        int n = 61, k = 55;
        long beginTime = System.nanoTime();
        int ans = calculateCombinationVal(n, k);
        long elapsedTime = System.nanoTime() - beginTime;
        System.out.println("Hits made: " + globalhitsToThisMethod + " -- Result is: " + ans + " -- Time taken: " + elapsedTime);
    }

    private static int calculateCombinationVal(int n, int k) {
        globalhitsToThisMethod++;
        if (k == 0 || k == n) {
            return 1;
        } else if (k == 1) {
            return n;
        } else {
            return calculateCombinationVal(n - 1, k - 1) + calculateCombinationVal(n - 1, k);
        }
    }
}

Recursive equation: T(n, k) = C + T(n-1, k-1) + T(n-1, k)
where T(n, k) = time taken for computing nCk, and
C = constant time for the if-else tests above.

                             T(n, k)
                            /       \           work done at level 0: C, then the call splits
                 T(n-1, k-1)         T(n-1, k)
                 /        \           /       \       work done at level 1: C + C = 2C
        T(n-2, k-2)  T(n-2, k-1)  T(n-2, k-1)  T(n-2, k)

Using the tree method: total work done at level 2 = 4C = 2*2*C,
at level 3 = 8C = 2*2*2*C, and so on.
The maximum depth the tree can reach is max(k+1, n-k-1), so
T(n, k) = 2^max(k+1, n-k-1) * C.
Let C = 1:
T(n, k) = 2^max(k+1, n-k-1)
T(n, k) = O(2^n) when k < n/2
        = O(2^k) when k >= n/2
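For contrast with the exponential bound above, a memoized variant (my own sketch, not part of the question; the class and method names are invented) brings the cost down to O(n*k) time and space, since each (n, k) pair is computed only once:

```java
public class BinomialMemo {
    // memo[n][k] == 0 means "not computed yet" (every valid C(n,k) is >= 1)
    static long binom(int n, int k, long[][] memo) {
        if (k == 0 || k == n) return 1;
        if (memo[n][k] != 0) return memo[n][k];
        memo[n][k] = binom(n - 1, k - 1, memo) + binom(n - 1, k, memo);
        return memo[n][k];
    }

    public static void main(String[] args) {
        int n = 61, k = 55;
        long[][] memo = new long[n + 1][k + 1];
        // same value the recursive version computes, without the exponential blow-up
        System.out.println(binom(n, k, memo)); // C(61,55) = C(61,6) = 55525372
    }
}
```

Each distinct (n, k) pair is visited at most once, so the number of calls is bounded by n*k instead of growing exponentially.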

The runtime is, very interestingly, proportional to nCk itself. Recursively, it is expressed as:
f(n, k) = f(n-1, k-1) + f(n-1, k)
Express each term using the combination formula nCk = n!/(k! * (n-k)!). Writing every step out would bloat the answer, but once you substitute that expression in, multiply the whole equation by (n-k)! * k!/(n-1)!. Everything cancels out to give you the identity n = k + (n - k).
There are probably more general approaches to solving multivariable recursive equations, but this one becomes very obvious if you write out the first few values up to n = 5 and k = 5.
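The cancellation above is just Pascal's rule, and it can be sanity-checked numerically. The following small harness (mine, not from the answer) verifies C(n,k) = C(n-1,k-1) + C(n-1,k) over a grid of small values:

```java
public class PascalRuleCheck {
    // closed form C(n,k) via exact long arithmetic; each intermediate
    // division is exact because the running product is itself a binomial
    static long c(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }

    public static void main(String[] args) {
        // Pascal's rule: C(n,k) = C(n-1,k-1) + C(n-1,k) for 0 < k < n
        for (int n = 2; n <= 20; n++)
            for (int k = 1; k < n; k++)
                if (c(n, k) != c(n - 1, k - 1) + c(n - 1, k))
                    throw new AssertionError(n + "," + k);
        System.out.println("Pascal's rule holds for all n <= 20");
    }
}
```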


Finding sum of geometric sequence with modulo 10^9+7 with my program

The problem is given as:
Output the answer of (A^1 + A^2 + A^3 + ... + A^K) modulo 1,000,000,007, where 1 ≤ A, K ≤ 10^9, and A and K must be integers.
I am trying to write a program to compute the above. I have tried using the formula for a geometric sequence and then applying the modulo to the answer. Since the result must be an integer as well, I assumed finding a modular inverse is not required.
Below is the code I have now; it's in Pascal:
Var
  a, k, i: longint;
  power, sum: int64;
Begin
  Readln(a, k);
  power := 1;
  For i := 1 to k do
    power := ((power mod 1000000007) * a) mod 1000000007;
  sum := a * (power - 1) div (a - 1);
  Writeln(sum mod 1000000007);
End.
This task came from my school, they do not give away their test data to the students. Hence I do not know why or where my program is wrong. I only know that my program outputs the wrong answer for their test data.
If you want to do this without calculating a modular inverse, you can calculate it recursively using:
1 + A + A^2 + A^3 + ... + A^k
  = 1 + (A + A^2)(1 + A^2 + (A^2)^2 + ... + (A^2)^(k/2 - 1))
That's for even k. For odd k:
1 + A + A^2 + A^3 + ... + A^k
  = (1 + A)(1 + A^2 + (A^2)^2 + ... + (A^2)^((k-1)/2))
Since k is divided by 2 in each recursive call, the resulting algorithm has O(log k) complexity. In Java:
static int modSumAtoAk(int A, int k, int mod)
{
    return (modSum1ToAk(A, k, mod) + mod - 1) % mod;
}

static int modSum1ToAk(int A, int k, int mod)
{
    long sum;
    if (k < 5) {
        // k is small -- just iterate
        sum = 0;
        long x = 1;
        for (int i = 0; i <= k; ++i) {
            sum = (sum + x) % mod;
            x = (x * A) % mod;
        }
        return (int) sum;
    }
    // k is big
    int A2 = (int) (((long) A) * A % mod);
    if ((k % 2) == 0) {
        // k even
        sum = modSum1ToAk(A2, (k / 2) - 1, mod);
        sum = (sum + sum * A) % mod;
        sum = ((sum * A) + 1) % mod;
    } else {
        // k odd
        sum = modSum1ToAk(A2, (k - 1) / 2, mod);
        sum = (sum + sum * A) % mod;
    }
    return (int) sum;
}
Note that I've been very careful to make sure that each product is done in 64 bits, and to reduce by the modulus after each one.
With a little math, the above can be converted to an iterative version that doesn't require any storage:
static int modSumAtoAk(int A, int k, int mod)
{
    // first, we calculate the sum of all 1...A^k
    // we'll refer to that as SUM1 in comments below
    long fac = 1;
    long add = 0;
    // INVARIANT: SUM1 = add + fac*(sum 1...A^k)
    // this will remain true as we change k
    while (k > 0) {
        // above INVARIANT is true here, too
        long newmul, newadd;
        if ((k % 2) == 0) {
            // k is even. sum 1...A^k = 1 + A*(sum 1...A^(k-1))
            newmul = A;
            newadd = 1;
            k -= 1;
        } else {
            // k is odd
            newmul = A + 1L;
            newadd = 0;
            A = (int) (((long) A) * A % mod);
            k = (k - 1) / 2;
        }
        // SUM1 = add + fac*(newadd + newmul*(sum 1...A^k))
        //      = add + fac*newadd + fac*newmul*(sum 1...A^k)
        add = (add + fac * newadd) % mod;
        fac = (fac * newmul) % mod;
        // INVARIANT is restored
    }
    // k == 0
    long sum1 = fac + add;
    return (int) ((sum1 + mod - 1) % mod);
}
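For contrast, the more common approach does use a modular inverse, which is valid here because 1,000,000,007 is prime (Fermat's little theorem gives the inverse as a power). This is my own sketch, not from the original answers, and the names are invented; the A ≡ 1 (mod p) case must be handled separately because the denominator vanishes:

```java
public class GeoSumMod {
    static final long MOD = 1_000_000_007L;

    // fast modular exponentiation: b^e mod MOD
    static long power(long b, long e) {
        long r = 1;
        b %= MOD;
        while (e > 0) {
            if ((e & 1) == 1) r = r * b % MOD;
            b = b * b % MOD;
            e >>= 1;
        }
        return r;
    }

    // A^1 + A^2 + ... + A^k (mod MOD), via A * (A^k - 1) / (A - 1)
    static long sumAtoAk(long A, long k) {
        A %= MOD;
        if (A == 1) return k % MOD;                       // A ≡ 1: the sum is just k
        long num = A * ((power(A, k) - 1 + MOD) % MOD) % MOD;
        long invDen = power((A - 1 + MOD) % MOD, MOD - 2); // Fermat inverse of (A - 1)
        return num * invDen % MOD;
    }

    public static void main(String[] args) {
        System.out.println(sumAtoAk(2, 3)); // 2 + 4 + 8 = 14
        System.out.println(sumAtoAk(3, 4)); // 3 + 9 + 27 + 81 = 120
    }
}
```

Both `power` calls are O(log k) and O(log MOD), so the whole thing is logarithmic, like the recursive halving approach above.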

How do I find the big-O notation

int something = 0;
int t = n;
while (t > 1) {
    for (int i = 0; i < t; i++) {
        something++;
    }
    t = t / 2;
}

// number 2
int sum = 0;
int i = 1;
while (sum <= n) {
    sum = sum + i;
    i++;
}
How do I find the tightest upper bound in big-O notation? I think they would both be log n, but I am not sure whether the for loop in the first segment of code affects its running time significantly.
The complexity of the above code is O(n).
Let's calculate the complexity of the first part of the program.
int t = n;
while (t > 1) {
    for (int i = 0; i < t; i++) {
        something++;
    }
    t = t / 2;
}
This will execute: t + t/2 + t/4 + ... + 1 inner-loop iterations,
and the sum of this series is at most 2t - 1.
Therefore, the time complexity of the first part is O(2t - 1) = O(t), i.e. O(n).
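To see the geometric decay concretely, here is a small harness (mine, not from the answer) that counts the inner-loop iterations; for a power-of-two n it returns exactly 2n - 2, matching the ~2t bound (the idealized series 2t - 1 includes a final term of 1 that the integer loop never reaches):

```java
public class HalvingLoopCount {
    // counts inner-loop iterations of: while (t > 1) { for i in [0,t) ...; t /= 2; }
    static long countOps(int n) {
        long count = 0;
        int t = n;
        while (t > 1) {
            count += t;   // the inner for-loop runs t times
            t = t / 2;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countOps(1024)); // 1024 + 512 + ... + 2 = 2046 = 2*1024 - 2
    }
}
```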
For the second part:
int sum = 0;
int i = 1;
while (sum <= n) {
    sum = sum + i;
    i++;
}
Sum will be 1 + 2 + 3 + ... + i, which is i*(i+1)/2.
So sum = i*(i+1)/2 ~ i^2.
Also sum <= n,
so i^2 <= n,
or i ~ sqrt(n).
Therefore the complexity of the second part is O(sqrt(n)).
So the complexity of the program is:
T(n) = O(n) + O(sqrt(n)) = O(n)
Overall complexity is O(n).
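The sqrt(n) behavior of the second loop is easy to check empirically. This harness (mine, not part of the original answer) counts iterations; the loop stops once i(i+1)/2 exceeds n, so the count tracks sqrt(2n):

```java
public class SqrtLoopCount {
    static int countIterations(int n) {
        int sum = 0, i = 1, iters = 0;
        while (sum <= n) {
            sum = sum + i;   // after c iterations, sum = c*(c+1)/2
            i++;
            iters++;
        }
        return iters;
    }

    public static void main(String[] args) {
        System.out.println(countIterations(10000));       // 141
        System.out.println((int) Math.sqrt(2.0 * 10000)); // 141, i.e. ~sqrt(2n)
    }
}
```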

Algorithm complexity

I got this problem: "Implement a method to return the sum of the two largest numbers in a given array."
I solved it this way:
public static int sumOfTwoLargestElements(int[] a) {
    int firstLargest = largest(a, 0, a.length - 1);
    int firstLarge = a[firstLargest];
    a[firstLargest] = -1;
    int secondLargest = largest(a, 0, a.length - 1);
    return firstLarge + a[secondLargest];
}

private static int largest(int s[], int start, int end) {
    if (end - start == 0) {
        return end;
    }
    int a = largest(s, start, start + (end - start) / 2);
    int b = largest(s, start + (end - start) / 2 + 1, end);
    if (s[a] > s[b]) {
        return a;
    } else {
        return b;
    }
}
Explanation: I implemented a method 'largest', which is responsible for finding the index of the largest number in a given array.
I call the method twice on the same array. The first call finds the first largest number; I put it aside in a variable and replace it with -1 in the array. Then I call the largest method a second time.
Can someone tell me the complexity of this algorithm, please?
The time complexity of the algorithm is O(n).
Each recursive call's complexity is actually:
f(n) = 2*f(n/2) + CONST
It is easy to see, by induction (1), that f(n) <= CONST'*n, and thus it is O(n).
The space complexity is O(logN) - because this is the maximal depth of the recursion - so you allocate O(logN) memory on the call stack.
(1)
If you use the hypothesis f(n) = 2*CONST*n - CONST you get:
f(n) = 2*f(n/2) + CONST = (by the induction hypothesis) 2*(2*CONST*(n/2) - CONST) + CONST
     = 2*CONST*n - 2*CONST + CONST = 2*CONST*n - CONST
(Checking the base case is left as an exercise for the reader.)
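The linear bound can also be checked empirically: every call to `largest` is either a leaf or spawns exactly two subcalls that partition the range, so it makes exactly 2n - 1 calls on n elements. A small counting harness (mine, not from the original answer):

```java
public class LargestCallCount {
    static int calls = 0;

    // same divide-and-conquer structure as the question's 'largest'
    static int largest(int[] s, int start, int end) {
        calls++;
        if (end - start == 0) return end;
        int mid = start + (end - start) / 2;
        int a = largest(s, start, mid);
        int b = largest(s, mid + 1, end);
        return s[a] > s[b] ? a : b;
    }

    public static void main(String[] args) {
        int n = 1000;
        calls = 0;
        largest(new int[n], 0, n - 1);
        System.out.println(calls); // 2n - 1 = 1999
    }
}
```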
The complexity of the algorithm would be measured as O(n).
But the real answer is that your algorithm is WAY more complex and more expensive in terms of machine resources than it needs to be. And it's WAY more expensive for someone reading your code to figure out what it's doing.
The complexity of your algorithm should really be on the order of:
public static int sumOfTwoLargestElements(int[] a) {
    // TODO handle case when argument is null,
    // TODO handle case when array has less than two non-null elements, etc.
    int firstLargest = Integer.MIN_VALUE;
    int secondLargest = Integer.MIN_VALUE;
    for (int v : a) {
        if (v > firstLargest) {
            secondLargest = firstLargest;
            firstLargest = v;
        } else if (v > secondLargest) {
            secondLargest = v;
        }
    }
    // TODO handle case when sum exceeds Integer.MAX_VALUE;
    return firstLargest + secondLargest;
}
The recurrence for the 'largest' method is:

f(n) = 1          if n = 1
f(n) = 2*f(n/2)   if n >= 2
If we experiment with a few cases, we notice that
f(n) = 2^log(n)  when n is a power of 2 (log base 2)
Proof by induction:
Base case: f(1) = 2^log(1) = 2^0 = 1.
We suppose that f(n) = 2^log(n) = n.
We show f(2n) = 2^log(2n) = 2n:
f(2n) = 2*f(2n/2) = 2*f(n)
      = 2*2^log(n)
      = 2^(log(n) + 1)
      = 2^(log(n) + log(2))
      = 2^log(2n)
      = 2n
Then f(n) = 2^log(n) = n when n is a power of 2. Since f is a smooth function (f(2n) <= c*f(n) for a constant c), it follows from the properties of smooth functions that f(n) = Theta(n).

Which fibonacci function will evaluate faster?

I am trying to get the first 100 fibonacci numbers to output to a .txt file. I got it to run, but it's taking a while. Will fibonacci or fibonacci2 be faster? The code below uses the first one.
#!/usr/bin/env node
var fs = require('fs');

// Fibonacci
// http://en.wikipedia.org/wiki/Fibonacci_number
var fibonacci = function(n) {
    if (n < 1) { return 0; }
    else if (n == 1 || n == 2) { return 1; }
    else { return fibonacci(n - 1) + fibonacci(n - 2); }
};

// Fibonacci: closed form expression
// http://en.wikipedia.org/wiki/Golden_ratio#Relationship_to_Fibonacci_sequence
var fibonacci2 = function(n) {
    var phi = (1 + Math.sqrt(5)) / 2;
    return Math.round((Math.pow(phi, n) - Math.pow(1 - phi, n)) / Math.sqrt(5));
};

// Find first K Fibonacci numbers via basic for loop
var firstkfib = function(k) {
    var arr = [];
    for (var i = 1; i < k + 1; i++) {
        var fibi = fibonacci(i);
        arr.push(fibi);
        // Print to console so I can monitor progress
        console.log(i + " : " + fibi);
    }
    return arr;
};

var fmt = function(arr) {
    return arr.join(",");
};

var k = 100;
// write to file
var outfile = "fibonacci.txt";
var out = fmt(firstkfib(k));
fs.writeFileSync(outfile, out);
console.log("\nScript: " + __filename + "\nWrote: " + out + "\nTo: " + outfile);
In general, recursive functions are "cleaner" and "easier" to write, but often require more resources (mostly memory, due to the accumulation of stack frames). In your case, the best way to get the first 100 numbers would be a simple loop that computes the next number of the Fibonacci series and adds it to a list.
double a[100];
a[0] = 1;
a[1] = 1;
int k = 2;
do {
    a[k] = a[k - 2] + a[k - 1];
    k++;
} while (k != 100);
The recursive fibonacci function is implemented the wrong way. The correct way to implement it recursively is discussed in this article Recursion and Fibonacci Numbers. For those too lazy to read, here is their code (it's in C, but it shouldn't be too hard to translate):
unsigned long fib(unsigned int n)
{
    return n == 0 ? 0 : fib2(n, 0, 1);
}

unsigned long fib2(unsigned int n, unsigned long p0, unsigned long p1)
{
    return n == 1 ? p1 : fib2(n - 1, p1, p0 + p1);
}
An even more efficient implementation would cache the values of the fibonacci sequence as it computes them:
var cache = [];
var fibonacci = function(n) {
if(cache.length > n) return cache[n];
return (cache[n] = fib2(n, 0, 1));
};
var fib2 = function(n, p0, p1) {
if(cache.length > n) return cache[n];
return n == 1 ? p1 : (cache[n] = fib2(n - 1, p1, p0 + p1));
};
I don't really know the language, so there might be some problems with the code, but this is at least the gist of it.
For your question, we can't do better than O(n) since you need to produce all of the first n (n=100) numbers.
Interestingly, if you just need the nth fib number, there exists an O(log n) solution as well.
The algorithm is simple enough: find the nth power of the matrix A using a divide-and-conquer approach and report the (0, 1) entry, where

A = | 1 1 |
    | 1 0 |

The recursion being:
A^n = A^(n/2) * A^(n/2)
Time complexity:
T(n) = T(n/2) + O(1) = O(log n)
If you think about it with a piece of paper, you'd find that the proof is simple and is based upon the principle of induction.
If you still need help, refer to this link
NOTE: Of course you could iteratively calculate A, A^2, A^3 and so on. However, compared to the simpler solutions described in the other answers, it wouldn't make sense to use it, because of the sheer code complexity.
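A sketch of the matrix-power idea in Java (my code, not from the original answer; the names are invented, and `long` overflows past fib(92)):

```java
public class MatrixFib {
    // multiply two 2x2 matrices
    static long[][] mul(long[][] x, long[][] y) {
        return new long[][]{
            {x[0][0] * y[0][0] + x[0][1] * y[1][0], x[0][0] * y[0][1] + x[0][1] * y[1][1]},
            {x[1][0] * y[0][0] + x[1][1] * y[1][0], x[1][0] * y[0][1] + x[1][1] * y[1][1]}
        };
    }

    // A^n by repeated squaring; fib(n) = (A^n)[0][1] where A = {{1,1},{1,0}}
    static long fib(int n) {
        long[][] result = {{1, 0}, {0, 1}};  // identity
        long[][] base = {{1, 1}, {1, 0}};
        while (n > 0) {
            if ((n & 1) == 1) result = mul(result, base);
            base = mul(base, base);
            n >>= 1;
        }
        return result[0][1];
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
        System.out.println(fib(50)); // 12586269025
    }
}
```

Each loop iteration halves n and does a constant number of 2x2 multiplications, giving the O(log n) bound described above.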
This is a very naive way to do this calculation. Try to do something like:
long[] a = new long[100];
a[0] = 1;
a[1] = 1;
for (int i = 2; i < 100; ++i)
{
    a[i] = a[i - 2] + a[i - 1];
}
for (int i = 0; i < 100; ++i)
    Console.WriteLine(a[i]);
This way you get linear time, O(n).

Find all the quadruples [a, b, c, d] where a^3 + b^3 = c^3 + d^3 when 1 <= a, b, c or d <= 10000 [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
Looking for an algorithm or some coding hints to find the solutions for
a^3 + b^3 = c^3 + d^3, where a, b, c and d all are in the range [1 .. 10000]
It's an interview question.
I'm thinking priority queues to at least iterate over the a and b values. A hint would be great; I'll try to work through it from there.
Using a hash map that stores (cube, (a, b)) pairs, you can iterate over all possible pairs of integers and output a solution once you find that the required sum of cubes is already in the map.
pseudo code:
map <- empty hash_map<int, list<pair<int,int>>>
for each a in range(0, 10^5):
    for each b in range(a, 10^5):  // making sure each pair repeats only once
        cube <- a^3 + b^3
        if map.containsKey(cube):
            for each element e in map.get(cube):
                output e.first(), e.last(), a, b  // one solution
        else:
            map.put(cube, new list<pair<int,int>>)
        // in both cases, add the just-found pair to the relevant list
        map.get(cube).add(new pair(a, b))
This solution is O(n^2) space(1) and O(n^2 + OUTPUT) time on average, where OUTPUT is the size of the output.
EDIT:
Required space is actually O(n^2 log n), where n is the range (10^5), because representing integers up to 10^15 takes ceil(log_2(10^15)) = 50 bits. So you actually need something like 500,000,000,000 bits (plus overhead for the map and lists), which is ~58.2 GB (plus overhead).
Since for most machines that is a bit too much, you might want to consider storing the data on disk; or if you have a 64-bit machine, just store it in "memory" and let the OS and the virtual-memory system do the best they can.
(1) As the edit clarifies, it is actually O(n^2 log n) space; however, if we take each integer's storage as O(1) (which is usually the case), we get O(n^2) space. The same principle applies to the time complexity, obviously.
Using a priority queue is almost certainly the simplest solution, and also the most practical one, since it's O(n) storage (with a log factor if you require bignums). Any solution which involves computing all possible sums and putting them in a map will require O(n^2) storage, which soon becomes impractical.
My naive, non-optimized implementation using a priority queue is O(n^2 log(n)) time. Even so, it took less than five seconds for n = 10000 and about 750 seconds for n = 100000, using a couple of megabytes of storage. It certainly could be improved.
The basic idea, as per your comment, is to initialize a priority queue with pairs (a, a+1) for a in the range [1, N), and then repeatedly increment the second value of the smallest (by sum of cubes) tuple until it reaches N. If at any time the smallest two elements in the queue are equal, you have a solution. (I could paste the code, but you only asked for a hint.)
Using a HashMap (the O(n^2) solution):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;

class Pair {
    int a;
    int b;

    Pair(int x, int y) {
        a = x;
        b = y;
    }
}

public class FindCubePair {
    public static void main(String[] args) {
        HashMap<Long, ArrayList<Pair>> hashMap = new HashMap<>();
        int n = 100000;
        for (int i = 1; i <= n; i++) {
            for (int j = 1; j <= n; j++) {
                // long arithmetic: i^3 + j^3 can reach 2*10^15, which must not
                // overflow or lose precision
                long sum = (long) i * i * i + (long) j * j * j;
                if (hashMap.containsKey(sum)) {
                    List<Pair> list = hashMap.get(sum);
                    for (Pair p : list) {
                        System.out.println(i + " " + j + " " + p.a + " " + p.b);
                    }
                } else {
                    hashMap.put(sum, new ArrayList<>());
                }
                hashMap.get(sum).add(new Pair(i, j));
            }
        }
    }
}
Unfortunately, the values printed did not even reach 1000 on my computer due to resource limitations.
A quicker-than-trivial solution is as follows: calculate all the values a^3 + b^3 can take, and store with each value all the pairs (a, b) that produce it. This is done by looping over a and b, storing the results (a^3 + b^3) in a binary tree, and keeping a list of the values (a's and b's) associated with each result.
After this step, you need to traverse the list and, for each value, choose every possible assignment of a, b, c, d.
I think this solution takes O(n^2 log n) time and O(n^2) space, but I might be missing something.
int Search() {
    int MAX = 10000000;
    for (int a = 0; a < MAX; a++) {
        int a3 = a * a * a;
        if (a3 > MAX) break;
        for (int b = a; b < MAX; b++) {
            int b3 = b * b * b;
            if (a3 + b3 > MAX) break;
            for (int c = 0; c < a; c++) {
                int c3 = c * c * c;
                int m = a3 - c3;
                int d = b + 1;
                while (true) {
                    int d3 = d * d * d;
                    if (d3 - b3 <= m) {
                        if ((d3 - b3) == m) {
                            count++;
                            PUSH_Modified(a3, b3, c3, b3, a, b, c, d);
                        }
                        d++;
                        continue;
                    } else {
                        break;
                    }
                }
            }
        }
    }
    return 0;
}
Let's assume a solution:
a=A, b=B, c=C, and d=D.
Given any solution, we can generate up to three further solutions by swapping within each pair (a, b) and (c, d):
(A, B, C, D), (A, B, D, C), (B, A, C, D), (B, A, D, C)
Actually, if A=B or C=D, then we might only have one or two further solutions.
We can choose the solutions we look for first by ordering A <= B and C <= D. This reduces the search space; we can generate the missed solutions from the found ones.
There will always be at least one solution, where A=C and B=D. What we're looking for is A > C and B < D. This comes from the ordering: C can't be greater than A because, as we've chosen to look only at solutions where D > C, the cube sum would be too big.
We can calculate A^3 + B^3 and put it in a map as the key, with a vector of (A, B) pairs as the value.
There will be n^2/2 values.
If there are already pairs in the vector, they will all have a lower A, and they are the solutions we're looking for. We can output them immediately, along with their permutations.
I'm not sure about the complexity.
One solution, using the concept of finding a 2-sum in a sorted array. This is O(n^3):
public static void pairSum() {
    int SZ = 100;
    long[] powArray = new long[SZ];
    for (int i = 0; i < SZ; i++) {
        int v = i + 1;
        powArray[i] = (long) v * v * v;
    }
    int countPairs = 0;
    int N1 = 0, N2 = SZ - 1, N3, N4;
    while (N2 > 0) {
        N1 = 0;
        while (N2 - N1 > 2) {
            long ts = powArray[N1] + powArray[N2];
            N3 = N1 + 1;
            N4 = N2 - 1;
            while (N4 > N3) {
                if (powArray[N4] + powArray[N3] < ts) {
                    N3++;
                } else if (powArray[N4] + powArray[N3] > ts) {
                    N4--;
                } else {
                    //System.out.println((N1+1)+" "+(N2+1)+" "+(N3+1)+" "+(N4+1)+" CUBE "+ts);
                    countPairs++;
                    break;
                }
            }
            N1++;
        }
        N2--;
    }
    System.out.println("quadruplet pair count:" + countPairs);
}
Logic:
a^3 + b^3 = c^3 + d^3
can be rewritten as a^3 + b^3 - c^3 - d^3 = 0.
Try to solve this equation by putting in every combination of values for a, b, c and d in the range [0, 10^5].
If the equation is satisfied, print the values of a, b, c and d.
public static void main(String[] args) {
    // find all solutions of a^3 + b^3 = c^3 + d^3
    double power = 3;
    long counter = 0; // to count the number of solution sets obtained
    int limit = 100000; // range from 0 to limit
    // looping through every combination of a, b, c and d
    for (int a = 0; a <= limit; a++) {
        for (int b = 0; b <= limit; b++) {
            for (int c = 0; c <= limit; c++) {
                for (int d = 0; d <= limit; d++) {
                    // logic used: a^3 + b^3 = c^3 + d^3 can be written as a^3 + b^3 - c^3 - d^3 = 0
                    long result = (long) (Math.pow(a, power) + Math.pow(b, power) - Math.pow(c, power) - Math.pow(d, power));
                    if (result == 0) {
                        counter++; // to count the number of solutions
                        // printing the solution
                        System.out.println("a = " + a + " b = " + b + " c = " + c + " d = " + d);
                    }
                }
            }
        }
    }
    // just to understand the change in number of solutions as limit and power change
    System.out.println("Number of Solutions = " + counter);
}
Starting with the brute-force approach, it's pretty obvious it will take O(n^4) time to execute.
If space is not a constraint, we can go for a combination of a list and a map.
The code is self-explanatory: we use a nested list to keep track of all entries for a particular sum (the key in the map).
The time complexity is thus reduced from O(n^4) to O(n^2).
public void printAllCubes() {
    int n = 50;
    Map<Integer, ArrayList<ArrayList>> resMap = new HashMap<Integer, ArrayList<ArrayList>>();
    ArrayList pairs = new ArrayList<Integer>();
    ArrayList allPairsList = new ArrayList<ArrayList>();

    for (int c = 1; c < n; c++) {
        for (int d = 1; d < n; d++) {
            int res = (int) (Math.pow(c, 3) + Math.pow(d, 3));
            pairs.add(c);
            pairs.add(d);
            if (resMap.get(res) == null) {
                allPairsList = new ArrayList<ArrayList>();
            } else {
                allPairsList = resMap.get(res);
            }
            allPairsList.add(pairs);
            resMap.put(res, allPairsList);
            pairs = new ArrayList<Integer>();
        }
    }

    for (int a = 1; a < n; a++) {
        for (int b = 1; b < n; b++) {
            int result = (int) (Math.pow(a, 3) + Math.pow(b, 3));
            ArrayList<ArrayList> pairList = resMap.get(result);
            for (List p : pairList) {
                System.out.print(a + " " + b + " ");
                for (Object num : p)
                    System.out.print(num + " ");
                System.out.println();
            }
        }
    }
}
