I got this problem "Implement this method to return the sum of the two largest numbers in a given array."
I solved it this way:
public static int sumOfTwoLargestElements(int[] a) {
    int firstLargest = largest(a, 0, a.length - 1);
    int firstLarge = a[firstLargest];
    a[firstLargest] = -1;
    int secondLargest = largest(a, 0, a.length - 1);
    return firstLarge + a[secondLargest];
}
private static int largest(int s[], int start, int end) {
    if (end - start == 0) {
        return end;
    }
    int a = largest(s, start, start + (end - start) / 2);
    int b = largest(s, start + (end - start) / 2 + 1, end);
    if (s[a] > s[b]) {
        return a;
    } else {
        return b;
    }
}
Explanation: I implemented a method 'largest'. This method is responsible for getting the largest number in a given array.
I call the method two times on the same array. The first call gets the first largest number. I put it aside into a variable and replace it with '-1' in the array. Then I call the largest method a second time.
Can someone tell me what the complexity of this algorithm is, please?
The time complexity of the algorithm is O(n).
Each recursive call's complexity is actually:
f(n) = 2*f(n/2) + CONST
It is easy to see (by induction(1)) that f(n) <= CONST'*n, and thus it is O(n).
The space complexity is O(log N), because that is the maximal depth of the recursion, so you allocate O(log N) memory on the call stack.
(1)
If you use the induction hypothesis f(n) = 2*n*CONST - CONST you get:
f(n) = 2*f(n/2) + CONST = (i.h.) 2*(2*CONST*(n/2) - CONST) + CONST =
     = 2*n*CONST - 2*CONST + CONST = 2*n*CONST - CONST
(Checking the base case is left as an exercise for the reader.)
The complexity of the algorithm would be measured as O(n).
But the real answer is that your algorithm is WAY more complex, and more expensive in terms of machine resources, than it needs to be. And it's WAY more expensive in terms of someone reading your code and figuring out what it's doing.
Your method should really be something on the order of:
public static int sumOfTwoLargestElements(int[] a) {
    //TODO handle case when argument is null,
    //TODO handle case when array has less than two non-null elements, etc.
    int firstLargest = Integer.MIN_VALUE;
    int secondLargest = Integer.MIN_VALUE;
    for (int v : a) {
        if (v > firstLargest) {
            secondLargest = firstLargest;
            firstLargest = v;
        } else if (v > secondLargest) {
            secondLargest = v;
        }
    }
    //TODO handle case when sum exceeds Integer.MAX_VALUE;
    return firstLargest + secondLargest;
}
The recurrence for the 'largest' method is:
f(n) = 1            if n = 1
f(n) = 2*f(n/2)     if n >= 2
If we try a few cases, we notice that
f(n) = 2^log(n)    when n is a power of 2 (logs are base 2)
Proof:
By induction,
f(1) = 2^log(1) = 2^0 = 1
We suppose that f(n) = 2^log(n) = n.
We show that f(2n) = 2^log(2n) = 2n:
f(2n) = 2*f(2n/2) = 2*f(n)
      = 2*2^log(n)
      = 2^(log(n) + 1)
      = 2^(log(n) + log(2))
      = 2^log(2n)
      = 2n
So f(n) = 2^log(n) = n when n is a power of 2. Since f is a smooth function (f(2n) <= c*f(n) for some constant c), it follows from the properties of smooth functions that **f(n) = Θ(n)** for all n.
I did this as a solution to one of the leetcode problems, but I'm not sure what the complexity of my algorithm is.
public String countAndSay(int n) {
    if (n == 1) return "1";
    String pre = countAndSay(n - 1);
    char[] prev = pre.toCharArray();
    int len = prev.length;
    if (len == 1 && prev[0] == '1') return "11";
    int idx = 1;
    int rep = 1;
    String res = "";
    while (idx <= len) {
        if (idx == len) {
            res += (Integer.toString(rep) + prev[idx - 1]);
            break;
        }
        if (prev[idx - 1] == prev[idx]) rep++;
        else {
            res += (Integer.toString(rep) + prev[idx - 1]);
            rep = 1;
        }
        idx++;
    }
    return res;
}
Since the recursion takes place n times and the loop is O(n), I feel like it should be O(n^2). Is that correct? If not, can you please explain why?
Here are a few facts:
The method calls itself recursively on input n-1.
The method produces the sequence known as look-and-say sequence.
The length of the resulting string grows exponentially with base λ, where λ = 1.303577269034... is Conway's constant, so the length is O(λ^n).
The loop is quadratic on the length of the string (because of the repeated string concatenations), so we have O((λ^n)^2) = O((λ^2)^n) for the loop.
Hence we can derive the following recurrence relation:
T(0) = 1
T(n) = T(n-1) + O((λ^2)^n)
The asymptotic behaviour of this relation is given by
T(n) ∈ Θ((λ^2)^n) = Θ(1.6993^n)
If you use a StringBuilder instead of doing the evil repeated string concatenations, you can get it down to Θ(λ^n) which would also be asymptotically optimal for this problem.
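For illustration, here is one way the loop could be rewritten with a StringBuilder. This is only a sketch of the idea, assuming the same count-and-say semantics as the code above; it is not the original poster's code:

public String countAndSay(int n) {
    if (n == 1) return "1";
    String prev = countAndSay(n - 1);
    StringBuilder res = new StringBuilder();
    int rep = 1;
    for (int i = 1; i <= prev.length(); i++) {
        if (i < prev.length() && prev.charAt(i) == prev.charAt(i - 1)) {
            rep++;                                       // same digit: extend the current run
        } else {
            res.append(rep).append(prev.charAt(i - 1));  // emit "count digit" for the finished run
            rep = 1;                                     // start a new run
        }
    }
    return res.toString();
}

Each append is amortized O(1), so one call does O(length) work instead of O(length^2).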
Below is a recursive function to calculate the value of the binomial coefficient C, i.e. combinations! I wish to understand this code's time and space complexity in terms of N and K (assuming that we are calculating nCk).
public class ValueOfBinomialCofecientC {
    static int globalhitsToThisMethod = 0;

    public static void main(String[] args) {
        // Calculate nCk.
        int n = 61, k = 55;
        long beginTime = System.nanoTime();
        int ans = calculateCombinationVal(n, k);
        long endTime = System.nanoTime();
        System.out.println("Hits made are: " + globalhitsToThisMethod
                + " -- Result is: " + ans
                + " -- And time taken is: " + (endTime - beginTime));
    }

    private static int calculateCombinationVal(int n, int k) {
        globalhitsToThisMethod++;
        if (k == 0 || k == n) {
            return 1;
        } else if (k == 1) {
            return n;
        } else {
            int res = calculateCombinationVal(n - 1, k - 1) + calculateCombinationVal(n - 1, k);
            return res;
        }
    }
}
Recurrence relation: T(n,k) = C + T(n-1,k-1) + T(n-1,k)
where T(n,k) = time taken to compute nCk
and C = constant time for the if-else checks above.
                            T(n,k)
                               |  --> work done = C; after C work the call splits        --> level 0
                _______________|________________
               |                                |
          T(n-1,k-1)                        T(n-1,k)
               |  --> C                         |  --> C    => total work = C + C = 2C   --> level 1
        _______|________                 _______|________
       |                |               |                |
  T(n-2,k-2)       T(n-2,k-1)      T(n-2,k-1)       T(n-2,k)
Using the tree method: total work done at level 2 = 4C => 2*2*C
                                          level 3 = 8C => 2*2*2*C
The maximum level the tree can grow to = max(k+1, n-k-1)
=> T(n,k) = 2^max(k+1, n-k-1) * C
Let C = 1:
=> T(n,k) = 2^max(k+1, n-k-1)
T(n,k) = O(2^n) when k < n/2
       = O(2^k) when k >= n/2
The runtime is, very interestingly, nCk. Recursively, it is expressed as:
f(n,k) = f(n-1,k-1) + f(n-1,k)
Express each term using the combination formula nCk = n!/(k! * (n-k)!). This is going to bloat up the answer if I try to write every step out, but once you substitute that expression in, multiply the whole equation by (n-k)! * k!/(n-1)!. It should all cancel out to give you n = k + n - k.
There are probably more general approaches to solving multi-variable recursive equations, but this one is very obvious if you write out the first few values up to n = 5 and k = 5.
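As a quick empirical sanity check of that claim (my own sketch, not part of either answer above; it keeps only the k == 0 and k == n base cases so that it matches the recurrence f(n,k) = f(n-1,k-1) + f(n-1,k) exactly), you can count the calls and compare with C(n,k):

public class CallCountCheck {
    static long calls = 0;

    // the recurrence itself, with only the k == 0 and k == n base cases
    static long naive(int n, int k) {
        calls++;
        if (k == 0 || k == n) return 1;
        return naive(n - 1, k - 1) + naive(n - 1, k);
    }

    // closed form C(n,k), computed iteratively
    static long choose(int n, int k) {
        long c = 1;
        for (int i = 1; i <= k; i++) c = c * (n - k + i) / i;
        return c;
    }

    public static void main(String[] args) {
        int n = 20, k = 8;
        naive(n, k);
        // Every leaf returns 1 and the leaves sum to C(n,k), so there are C(n,k)
        // leaves and C(n,k) - 1 internal nodes: 2*C(n,k) - 1 calls in total.
        System.out.println(calls + " calls vs C(n,k) = " + choose(n, k));
    }
}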
I have two C programs
where the input n should be any integer of the form 2^m, m >= 1.
One is:
#include <stdio.h>
#include <conio.h>

int power(int x, int n)
{
    if (n == 2)
        return x * x;
    else
        return power(x, n / 2) * power(x, n / 2);
}

int main()
{
    int x = 6;
    int n = 8;
    printf("%d", power(x, n));
    getch();
    return 0;
}
The other one is:
#include <stdio.h>
#include <conio.h>

int power(int x, int n)
{
    if (n == 2)
        return x * x;
    else
    {
        int result = power(x, n / 2);
        return result * result;
    }
}

int main()
{
    int x = 6;
    int n = 8;
    printf("%d", power(x, n));
    getch();
    return 0;
}
For the first one, the time complexity recurrence will be
T(n) = 2T(n/2) + c, hence by solving it we will get O(log n).
For the last one it will be
T(n) = T(n/2) + c, hence by solving it we will get O(log n).
Is that correct?
For the first relation you actually have
T(n) = 2*T(n/2) + c = 2^2*T(n/2^2) + 2c + c = ...
     = 2^(k-1)*T(n/2^(k-1)) + (2^(k-2) + 2^(k-3) + ... + 2 + 1)*c =
     = 2^(k-1)*T(2) + (2^(k-1) - 1)*c = (n/2)*T(2) + (n/2 - 1)*c = O(n)
since you have n = 2^k, so 2^(k-1) = n/2, and the n/2 terms dominate as n goes to infinity.
For the second one you are right:
T(n) = T(n/2) + c = T(n/2^2) + 2c = ... = T(n/2^(k-1)) + (k-1)*c = T(2) + (k-1)*c =
     = 1 + c*(log n - 1) = O(log n)
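To make the difference concrete, here is a small call-counting sketch (my own, written in Java for consistency with the other snippets on this page; the numeric result overflows and is ignored, only the call counts matter):

public class PowerCallCount {
    static int calls1 = 0, calls2 = 0;

    // first variant: two recursive calls per level
    static int power1(int x, int n) {
        calls1++;
        if (n == 2) return x * x;
        return power1(x, n / 2) * power1(x, n / 2);
    }

    // second variant: one recursive call per level, result squared
    static int power2(int x, int n) {
        calls2++;
        if (n == 2) return x * x;
        int result = power2(x, n / 2);
        return result * result;
    }

    public static void main(String[] args) {
        int x = 6, n = 1 << 20;   // n must be a power of two, n >= 2
        power1(x, n);
        power2(x, n);
        System.out.println("two-call variant: " + calls1 + " calls (n - 1, i.e. O(n))");
        System.out.println("one-call variant: " + calls2 + " calls (log2(n) - 1, i.e. O(log n))");
    }
}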
I have solved the problem here. Previously I thought the complexity was O(n!),
where n was the number of characters in the word.
But today I feel it is wrong. It should be 6^(characters in the word), where 6 is the number of sides of the cube.
Making it more generic, assuming a cube could have more than 6 sides, the complexity should be O(cubefaces ^ (characters in the input word)).
Can someone please explain the time complexity in this case?
If there is if (cubeMatrix.length != word.length()) return false;, and every side letter of a cube is unique (i.e. no two sides of a cube have the same letter), then the time complexity of your algorithm is O(S^(N - S + 1) * S!) when N >= S, and O(S * N!) when N <= S. Here S is the number of cube sides, and N is the number of cubes.
In brief, you make the recursive call only when there is an unused letter in the word corresponding to the cube side letter, so, in the worst case, the number of times you make the recursive call is not more than the number of word letters left unused. The number of unused word letters decreases with increasing recursion depth, and eventually it becomes less than the number of cube sides. That's why, at the final recursion depths, the complexity becomes factorial.
A bit more detail
Let's introduce f(n), the number of times you call findWordExists with cubeNumber = n. We also introduce g(n), the number of times findWordExists with cubeNumber = n recursively calls itself (but now with cubeNumber = n + 1).
f(0) = 1, because you call findWordExists non-recursively only once.
f(n) = f(n - 1) * g(n - 1) when n > 0.
We know that g(n) = min { S, N - n }, because, as I already pointed out, findWordExists is called recursively no more times than the number of letters left — the if (frequency > 0) check is responsible for this — and the number of letters left equals the number of cubes left, i.e. N - n.
Now we can calculate how many times findWordExists is called in total:
f(0) + f(1) + ... + f(N) =
= 1 + g(0) + g(0)*g(1) + ... + g(0)*g(1)*...*g(N - 1) =
= 1 + S + S^2 + ... + S^(N - S) + S^(N - S)*(S - 1) + S^(N - S)*(S - 1)*(S - 2) + ... + S^(N - S)*(S - 1)*(S - 2)*...*1 =
= O(S^(N - S) * S!).
But every findWordExists call (except the final ones) iterates over each side, so we need to multiply the number of findWordExists calls by the number of sides: S * O(S^(N - S) * S!) = O(S^(N - S + 1) * S!) — and that is our time complexity.
Better Algorithm
Actually, your problem is a bipartite matching problem, so there are much more efficient algorithms than brute force, e.g. Kuhn’s algorithm.
The complexity of Kuhn’s algorithm is O(N*M), where N is the number of vertices, and M is the number of edges. In your case, N is the number of cubes, and M is just N^2, so the complexity in your case could be O(N^3). But you also need to iterate over all the sides of all the cubes, so if the number of cube sides is greater than N^2, then the complexity is O(N*S), where S is the number of cube sides.
Here is a possible implementation:
import java.util.*;

public class CubeFind {
    private static boolean checkWord(char[][] cubes, String word) {
        if (word.length() != cubes.length) {
            return false;
        }
        List<Integer>[] cubeLetters = getCubeLetters(cubes, word);
        int countMatched = new BipartiteMatcher().match(cubeLetters, word.length());
        return countMatched == word.length();
    }

    private static List<Integer>[] getCubeLetters(char[][] cubes, String word) {
        int cubeCount = cubes.length;
        Set<Character>[] cubeLetterSet = new Set[cubeCount];
        for (int i = 0; i < cubeCount; i++) {
            cubeLetterSet[i] = new HashSet<>();
            for (int j = 0; j < cubes[i].length; j++) {
                cubeLetterSet[i].add(cubes[i][j]);
            }
        }
        List<Integer>[] cubeLetters = new List[cubeCount];
        for (int i = 0; i < cubeCount; i++) {
            cubeLetters[i] = new ArrayList<>();
            for (int j = 0; j < word.length(); j++) {
                if (cubeLetterSet[i].contains(word.charAt(j))) {
                    cubeLetters[i].add(j);
                }
            }
        }
        return cubeLetters;
    }

    public static void main(String[] args) {
        char[][] m = {{'e', 'a', 'l'}, {'x', 'h', 'y'}, {'p', 'q', 'l'}, {'l', 'h', 'e'}};
        System.out.println("Expected true, Actual: " + CubeFind.checkWord(m, "hell"));
        System.out.println("Expected true, Actual: " + CubeFind.checkWord(m, "help"));
        System.out.println("Expected false, Actual: " + CubeFind.checkWord(m, "hplp"));
        System.out.println("Expected false, Actual: " + CubeFind.checkWord(m, "hplp"));
        System.out.println("Expected false, Actual: " + CubeFind.checkWord(m, "helll"));
        System.out.println("Expected false, Actual: " + CubeFind.checkWord(m, "hel"));
    }
}

class BipartiteMatcher {
    private List<Integer>[] cubeLetters;
    private int[] letterCube;
    private boolean[] used;

    int match(List<Integer>[] cubeLetters, int letterCount) {
        this.cubeLetters = cubeLetters;
        int cubeCount = cubeLetters.length;
        int countMatched = 0;
        letterCube = new int[letterCount];
        Arrays.fill(letterCube, -1);
        used = new boolean[cubeCount];
        for (int u = 0; u < cubeCount; u++) {
            if (dfs(u)) {
                countMatched++;
                Arrays.fill(used, false);
            }
        }
        return countMatched;
    }

    boolean dfs(int u) {
        if (used[u]) {
            return false;
        }
        used[u] = true;
        for (int i = 0; i < cubeLetters[u].size(); i++) {
            int v = cubeLetters[u].get(i);
            if (letterCube[v] == -1 || dfs(letterCube[v])) {
                letterCube[v] = u;
                return true;
            }
        }
        return false;
    }
}
for (int i = 0; i < cubeMatrix[cubeNumber].length; i++)
This loop runs over the number of characters on the given cube (or rather, the faces of the cube).
Also, inside this loop, you have
if (frequency > 0) {
    charFreq.put(cubeMatrix[cubeNumber][i], frequency - 1);
    if (findWordExists(cubeMatrix, charFreq, cubeNumber + 1)) {
        return true;
..
// and so on.
This results in a recursive call, thereby calling for cubeNumber + 1, then cubeNumber + 2, and so on.
And finally, when this condition
if (cubeNumber == cubeMatrix.length) {
    for (Integer frequency : charFreq.values()) {
        if (frequency > 0) return false;
    }
    return true;
}
is met, the for-loop won't execute any further.
Assume the number of cubes = n, and the number of characters stored in each cube = the generalised faces of each cube (as coined by the OP) = f.
WORST CASE ANALYSIS:
Starting from the 0th cube to the (n-1)th cube, the for-loop will iterate cubeMatrix[cubeNumber].length times, which is equal to the number of characters stored in each cube = f times.
And, in each iteration of the for-loop, the recursive call made in the case of cubeNumber 0 will go n-1 levels deep, until it reaches the last cube number.
Hence, for each character entry in the cube array (f characters), we have to try all the cubes available (a total of n, per our assumption).
Hence, the total number of checks the code makes to find a word = f ^ n.
In your terms, f = cubefaces = the number of characters possible on the faces of your generalised cube,
and n = the total number of cubes available for your test.
It does depend on the frequency of characters, which is reduced based on the characters in the word, when the word length doesn't match the number of cubes. In that case, the result will be false.
But in the cases where the word length is equal to the number of cubes, in the worst case the output is independent of the word length.
Strictly, it will also depend on the number of characters in the word (as the comparison with the frequency prunes several cases), but in the worst-case scenario, unfortunately, it doesn't depend on the number of characters in the word, as we will be checking all the character entries in all of the available cubes to build the word.
I am trying to get the first 100 fibonacci numbers to output to a .txt file. I got it to run, but it's taking a while. Will fibonacci or fibonacci2 be faster? The code below uses the first one.
#!/usr/bin/env node
var fs = require('fs');

// Fibonacci
// http://en.wikipedia.org/wiki/Fibonacci_number
var fibonacci = function(n) {
    if (n < 1) { return 0; }
    else if (n == 1 || n == 2) { return 1; }
    else if (n > 2) { return fibonacci(n - 1) + fibonacci(n - 2); }
};

// Fibonacci: closed form expression
// http://en.wikipedia.org/wiki/Golden_ratio#Relationship_to_Fibonacci_sequence
var fibonacci2 = function(n) {
    var phi = (1 + Math.sqrt(5)) / 2;
    return Math.round((Math.pow(phi, n) - Math.pow(1 - phi, n)) / Math.sqrt(5));
};

// Find first K Fibonacci numbers via basic for loop
var firstkfib = function(k) {
    var i = 1;
    var arr = [];
    for (i = 1; i < k + 1; i++) {
        var fibi = fibonacci(i);
        arr.push(fibi);
        // Print to console so I can monitor progress
        console.log(i + " : " + fibi);
    }
    return arr;
};

var fmt = function(arr) {
    return arr.join(",");
};

var k = 100;

// write to file
var outfile = "fibonacci.txt";
var out = fmt(firstkfib(k));
fs.writeFileSync(outfile, out);
console.log("\nScript: " + __filename + "\nWrote: " + out + "\nTo: " + outfile);
In general, recursive functions are "cleaner" and "easier" to write, but they often require more resources (mostly memory, due to the accumulation of stack frames). In your case, the best way to get the first 100 numbers would be to program it using a simple loop that computes the next number of the Fibonacci series and adds it to a list.
double a[100];
a[0] = 1;
a[1] = 1;
int k = 2;
do {
    a[k] = a[k - 2] + a[k - 1];
    k++;
} while (k != 100);
The recursive fibonacci function is implemented the wrong way. The correct way to implement it recursively is discussed in this article Recursion and Fibonacci Numbers. For those too lazy to read, here is their code (it's in C, but it shouldn't be too hard to translate):
unsigned long fib2(unsigned int n, unsigned long p0, unsigned long p1);

unsigned long fib(unsigned int n)
{
    return n == 0 ? 0 : fib2(n, 0, 1);
}

unsigned long fib2(unsigned int n, unsigned long p0, unsigned long p1)
{
    return n == 1 ? p1 : fib2(n - 1, p1, p0 + p1);
}
An even more efficient implementation would cache the values of the fibonacci sequence as it computes them:
var cache = [];
var fibonacci = function(n) {
    if (n < 1) return 0;
    if (cache[n] !== undefined) return cache[n];
    return (cache[n] = fib2(n, 0, 1));
};
var fib2 = function(n, p0, p1) {
    // p0 and p1 are running accumulators, so intermediate results cannot be
    // cached by index here; only the top-level result is cached above.
    return n == 1 ? p1 : fib2(n - 1, p1, p0 + p1);
};
I don't really know the language, so there might be some problems with the code, but this is at least the gist of it.
For your question, we can't do better than O(n) since you need to produce all of the first n (n=100) numbers.
Interestingly, if you just need the nth fib number, there exists an O(log n) solution as well.
The algorithm is simple enough: find the nth power of the matrix A using a divide and conquer approach and report the (0, 1) element, where
A = | 1  1 |
    | 1  0 |
The recursion being
A^n = A^(n/2) * A^(n/2)
Time complexity:
T(n) = T(n/2) + O(1) = O(log n)
If you think about it with a piece of paper, you'd find that the proof is simple and is based upon the principle of induction.
If you still need help, refer to this link
NOTE: Of course you could iteratively calculate A, A^2, A^3 and so on. However, it wouldn't make sense to use that compared to the simpler solutions described in the other answers (because of the sheer code complexity).
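For illustration, here is a minimal sketch of the matrix-power approach (my own Java sketch, not taken from the linked article; it uses long, so the values are exact only up to fib(92)):

public class MatrixFib {

    // multiply two 2x2 matrices
    static long[][] mul(long[][] a, long[][] b) {
        return new long[][] {
            { a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1] },
            { a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1] }
        };
    }

    // compute A^n by repeated squaring; handles odd n with one extra multiply
    static long[][] pow(long[][] a, int n) {
        if (n == 1) return a;
        long[][] half = pow(a, n / 2);
        long[][] result = mul(half, half);
        return (n % 2 == 0) ? result : mul(result, a);
    }

    static long fib(int n) {
        if (n == 0) return 0;
        long[][] a = { { 1, 1 }, { 1, 0 } };
        return pow(a, n)[0][1];   // A^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 10; i++) {
            System.out.print(fib(i) + " ");   // 1 1 2 3 5 8 13 21 34 55
        }
        System.out.println();
    }
}

The pow call recurses on n/2 once and does a constant amount of work per level, which is exactly the T(n) = T(n/2) + O(1) = O(log n) recurrence above.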
This is a very naive way to do this calculation. Try to do something like:
long[] a = new long[100];
a[0] = 1;
a[1] = 1;
for (int i = 2; i < 100; ++i)
{
    // note: a long overflows past fib(92), so the last few values here wrap around
    a[i] = a[i - 2] + a[i - 1];
}
for (int i = 0; i < 100; ++i)
    Console.WriteLine(a[i]);
This way you get linear time, O(n).
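One caveat (my own note, not from the answer above): fib(93) and beyond overflow a 64-bit long, so to get the first 100 values exactly you would need arbitrary precision, e.g. something like this Java sketch:

import java.math.BigInteger;

public class Fib100 {
    public static void main(String[] args) {
        BigInteger a = BigInteger.ONE;   // fib(1)
        BigInteger b = BigInteger.ONE;   // fib(2)
        for (int i = 1; i <= 100; i++) {
            System.out.println(i + " : " + a);
            BigInteger next = a.add(b);  // fib(i+2) = fib(i) + fib(i+1)
            a = b;
            b = next;
        }
    }
}

This is still the same O(n) loop; only the arithmetic type changes.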