Is this correct time complexity? - performance

I did this as a solution to one of the leetcode problems, but I'm not sure what the complexity of my algorithm is.
public String countAndSay(int n) {
    if (n == 1) return "1";
    String pre = countAndSay(n - 1);
    char[] prev = pre.toCharArray();
    int len = prev.length;
    if (len == 1 && prev[0] == '1') return "11";
    int idx = 1;
    int rep = 1;
    String res = "";
    while (idx <= len) {
        if (idx == len) {
            res += Integer.toString(rep) + prev[idx - 1];
            break;
        }
        if (prev[idx - 1] == prev[idx]) rep++;
        else {
            res += Integer.toString(rep) + prev[idx - 1];
            rep = 1;
        }
        idx++;
    }
    return res;
}
Since the recursion takes place n times and the loop is O(n), I feel like it should be O(n^2). Is that correct? If not, can you please explain why?

Here are a few facts:
The method calls itself recursively on input n-1.
The method produces the sequence known as look-and-say sequence.
The length of the resulting string grows exponentially with base λ, where λ = 1.303577269034... is Conway's constant, so the length is O(λ^n).
The loop is quadratic in the length of the string (because of the repeated string concatenations), so we have O((λ^n)^2) = O((λ^2)^n) for the loop.
Hence we can derive the following recurrence relation:
T(0) = 1
T(n) = T(n-1) + O((λ^2)^n)
The asymptotic behaviour of this relation is given by
T(n) ∈ Θ((λ^2)^n) = Θ(1.6993^n)
If you use a StringBuilder instead of doing the evil repeated string concatenations, you can get it down to Θ(λ^n) which would also be asymptotically optimal for this problem.
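If you want to see the difference, here is a StringBuilder sketch of the same recursion (my rewrite, not the original poster's code); each level then costs time linear in the length of the string it produces:

```java
class CountAndSay {
    // Look-and-say with StringBuilder: each level is linear in its output,
    // so the whole computation is Θ(λ^n).
    public String countAndSay(int n) {
        if (n == 1) return "1";
        char[] prev = countAndSay(n - 1).toCharArray();
        StringBuilder res = new StringBuilder();
        int rep = 1;
        for (int i = 1; i <= prev.length; i++) {
            if (i < prev.length && prev[i] == prev[i - 1]) {
                rep++;                                 // still inside a run
            } else {
                res.append(rep).append(prev[i - 1]);   // amortized O(1) per append
                rep = 1;
            }
        }
        return res.toString();
    }
}
```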

Is this method BigO(n^2) or BigO(nLog(n))?

This is a method I wrote that checks whether two strings are permutations of each other:
public static boolean permutation(String str1, String str2) {
    boolean terminate = false;
    for (int x = 0; x < str1.length(); x++) {
        for (int y = 0; y < str2.length(); y++) {
            if (y != str2.length() - 1) {
                if (str1.charAt(x) == str2.charAt(y)) {
                    str1.replace(String.valueOf(str1.charAt(x)), "");
                    str2.replace(String.valueOf(str2.charAt(y)), "");
                    break;
                }
            } else {
                if (str1.charAt(x) != str2.charAt(y)) {
                    return false;
                } else {
                    str1.replace(String.valueOf(str1.charAt(x)), "");
                    str2.replace(String.valueOf(str2.charAt(y)), "");
                }
            }
        }
    }
    return true;
}
As you can see in the code, I'm using a for loop inside another for loop, but I'm not checking every element of x against every element of y (though sometimes all elements of y are checked against the current value of x). So I'm guessing it's O(n log n), since we are not comparing all the elements overall, but my gut says this might be O(n^2). Also, I am removing characters once a certain condition is met, so the remaining iterations won't inspect the removed elements. In my opinion it is O(n log n), but in case it is not, I want to know why.
UPDATE: Due to my lack of knowledge, and because I wrote that algorithm in 7 minutes, the actual complexity was O(n^3): I did not pay attention to the String replace method, which is itself O(n), and I also did not check whether the inputs were the same length, so it returned true when it should have returned false in some cases. Here's the new code; judging by its structure I would say it is O(n) at best and O(n^2) at worst:
public static boolean permutation(String str1, String str2) {
    if (str1.length() != str2.length()) {
        return false;
    }
    HashSet<Character> string1 = new HashSet<>();
    for (int x = 0; x < str1.length(); x++) {
        string1.add(str1.charAt(x));
    }
    for (int x = 0; x < str1.length(); x++) {
        if (!string1.contains(str2.charAt(x))) {
            return false;
        }
    }
    return true;
}
You haven't considered the time complexity of the replace() method: it takes O(N) time.
Agreed that you are processing the strings depending upon certain conditions, but the two for loops make the time complexity O(N^2) anyway.
Therefore, the overall time complexity = O(N^2 * N) = O(N^3).
To be precise, if the length of str1 is N and the length of str2 is M, then the time complexity is ~ O(N * M * (N + M)).
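For what it's worth, a character-frequency version (a sketch; the class name is mine) stays O(N) and also respects duplicate counts, which the HashSet approach above ignores — it would accept "aab" vs "abb":

```java
import java.util.HashMap;
import java.util.Map;

class PermutationCheck {
    // O(N) permutation check using character counts.
    public static boolean permutation(String str1, String str2) {
        if (str1.length() != str2.length()) return false;
        Map<Character, Integer> counts = new HashMap<>();
        for (int i = 0; i < str1.length(); i++) {
            counts.merge(str1.charAt(i), 1, Integer::sum);   // count up in str1
        }
        for (int i = 0; i < str2.length(); i++) {
            Integer c = counts.get(str2.charAt(i));
            if (c == null || c == 0) return false;           // str2 has a surplus of this char
            counts.put(str2.charAt(i), c - 1);               // count down in str2
        }
        return true;
    }
}
```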

Complexity of algorithms of function

public Term[] allMatches(String prefix) {
    Term myword = new Term(prefix, 0);
    int firstindex = RangeBinarySearch.firstIndexOf(terms, myword, Term.byPrefixOrder(prefix.length()));
    int lastindex = RangeBinarySearch.lastIndexOf(terms, myword, Term.byPrefixOrder(prefix.length()));
    Term[] Matches = new Term[lastindex - firstindex + 1];
    int j = 0;
    for (int i = firstindex; i <= lastindex; i++) {
        Matches[j] = terms[i];
        j++;
    }
    Arrays.sort(Matches, Term.byReverseWeightOrder());
    return Matches;
}
I'm wondering about the complexity here. I have a sorted array of strings and I must search for matches with the given prefix.
Finding the first index is a binary search, so it is O(log n).
Finding the last index is also O(log n).
The copy loop Matches[j] = terms[i] runs m times, where terms is the whole array of strings and m is the number of strings matching the given prefix.
Arrays.sort is O(m log m).
So the complexity is 2 log n + m + m log m, i.e. O(log n + m + m log m), or how should I write it?
According to the assignment it should be O(log n + m log m) in the worst case, and I don't know how to fix it.

Confused about these asymptotic notation and its runtime

public static Asymptotic f3_notation = Asymptotic.BIG_THETA;
public static Runtime f3_runtime = Runtime.LINEAR;

/* When f3 is first called, start will be 0 and end will be the length of the array - 1 */
public int f3(char[] array, int start, int end) {
    if (array.length <= 1 || end <= start) {
        return 1;
    }
    int mid = start + ((end - start) / 2);
    return f3(array, start, mid) + f3(array, mid + 1, end);
}

public static Asymptotic f4_notation = Asymptotic.BIG_THETA;
public static Runtime f4_runtime = Runtime.LINEARITHMIC;

/* When f4 is first called, start will be 0 and end will be the length of the array - 1 */
public int f4(char[] array, int start, int end) {
    if (array.length <= 1 || end <= start) return 1;
    int counter = 0;
    for (int i = start; i < end; i++) {
        if (array[i] == 'a') counter++;
    }
    int mid = start + ((end - start) / 2);
    return counter + f4(array, start, mid) + f4(array, mid + 1, end);
}
So I have these two methods. Both use recursion, but why is the first one linear while the second is linearithmic?
I was told that if there is division or multiplication, the runtime is usually log n. The first method has the division, yet it is considered linear, while the second is not.
The more I read, the more it confuses me and makes me feel like I know nothing.
The formula for the first method is:
T(n) = 2T(n/2) + O(1)
If you draw the corresponding tree for this recurrence, you will see that the amount of work is proportional to the number of nodes in the tree, which is O(n). You could also use the Master Theorem to solve it.
For the second it is:
T(n) = 2T(n/2) + O(n)
Here the tree will again have O(log n) levels (just like the first method), but on each level you spend O(n) time, which results in O(n log n) time complexity. Again, the Master Theorem works for this. Note that in the first case, although the tree also has O(log n) levels, on each level you spend time proportional to the number of nodes on that level, not O(n).

Algorithm complexity

I got this problem "Implement this method to return the sum of the two largest numbers in a given array."
I resolved it in this way:
public static int sumOfTwoLargestElements(int[] a) {
    int firstLargest = largest(a, 0, a.length - 1);
    int firstLarge = a[firstLargest];
    a[firstLargest] = -1;
    int secondLargest = largest(a, 0, a.length - 1);
    return firstLarge + a[secondLargest];
}

private static int largest(int s[], int start, int end) {
    if (end - start == 0) {
        return end;
    }
    int a = largest(s, start, start + (end - start) / 2);
    int b = largest(s, start + (end - start) / 2 + 1, end);
    if (s[a] > s[b]) {
        return a;
    } else {
        return b;
    }
}
Explanation: I implemented a method 'largest', which returns the index of the largest number in a given array.
I call the method two times on the same array. The first call gets the largest number; I set it aside in a variable and replace it with -1 in the array. Then I call the largest method a second time.
Can someone tell me what the complexity of this algorithm is, please?
The time complexity of the algorithm is O(n).
Each recursive call's complexity is actually:
f(n) = 2*f(n/2) + CONST
It is easy to see (by induction, see (1) below) that f(n) <= CONST'*n, and thus it is O(n).
The space complexity is O(log N), because that is the maximal depth of the recursion, so you allocate O(log N) memory on the call stack.
(1)
Guess f(n) = 2*n*CONST - CONST. Then:
f(n) = 2*f(n/2) + CONST = (by the induction hypothesis) 2*(2*CONST*n/2 - CONST) + CONST
     = 2*n*CONST - 2*CONST + CONST = 2*n*CONST - CONST
(Checking the base case is left as an exercise for the reader.)
The complexity of the algorithm would be measured as O(n).
But the real answer is that your algorithm is WAY more complex, and more expensive in terms of machine resources than it needs to be. And it's WAY more expensive in terms of someone reading your code and figuring out what it's doing.
The complexity of your algorithm should really be on the order of:
public static int sumOfTwoLargestElements(int[] a) {
    //TODO handle case when argument is null,
    //TODO handle case when array has less than two non-null elements, etc.
    int firstLargest = Integer.MIN_VALUE;
    int secondLargest = Integer.MIN_VALUE;
    for (int v : a) {
        if (v > firstLargest) {
            secondLargest = firstLargest;
            firstLargest = v;
        } else if (v > secondLargest) {
            secondLargest = v;
        }
    }
    //TODO handle case when sum exceeds Integer.MAX_VALUE;
    return firstLargest + secondLargest;
}
The recurrence for the 'largest' method is:

f(n) = 1          if n = 1
f(n) = 2*f(n/2)   if n >= 2

If we experiment with a few cases, we notice that

f(n) = 2^log(n)   when n is a power of 2 (log base 2)

Proof by induction:

f(1) = 2^log(1) = 2^0 = 1

Suppose f(n) = 2^log(n) = n. We show f(2n) = 2^log(2n) = 2n:

f(2n) = 2*f(n)
      = 2*2^log(n)
      = 2^(log(n) + 1)
      = 2^(log(n) + log(2))
      = 2^log(2n)
      = 2n

So f(n) = 2^log(n) = n whenever n is a power of 2. Since f is a smooth function (f(2n) <= c*f(n) for a constant c), it follows from the smoothness rule that **f(n) = Θ(n)**.
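A quick empirical check of f(n) = n (a sketch I added; it counts the base cases hit by the same recursion shape as 'largest'):

```java
class LargestCallCounter {
    static int baseCases;

    // Same recursion shape as 'largest': split in half, recurse on both sides.
    static int largest(int[] s, int start, int end) {
        if (end - start == 0) {
            baseCases++;        // one base case per array element
            return end;
        }
        int mid = start + (end - start) / 2;
        int a = largest(s, start, mid);
        int b = largest(s, mid + 1, end);
        return s[a] > s[b] ? a : b;
    }

    static int countForSize(int n) {
        baseCases = 0;
        largest(new int[n], 0, n - 1);
        return baseCases;
    }
}
```

Every element ends up in exactly one base case, so countForSize(n) returns n, matching f(n) = n.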

Which fibonacci function will evaluate faster?

I am trying to get the first 100 fibonacci numbers to output to a .txt file. I got it to run, but it's taking a while. Will fibonacci or fibonacci2 be faster? The code below uses the first one.
#!/usr/bin/env node
var fs = require('fs');
// Fibonacci
// http://en.wikipedia.org/wiki/Fibonacci_number
var fibonacci = function(n) {
    if (n < 1) { return 0; }
    else if (n == 1 || n == 2) { return 1; }
    else if (n > 2) { return fibonacci(n - 1) + fibonacci(n - 2); }
};

// Fibonacci: closed form expression
// http://en.wikipedia.org/wiki/Golden_ratio#Relationship_to_Fibonacci_sequence
var fibonacci2 = function(n) {
    var phi = (1 + Math.sqrt(5)) / 2;
    return Math.round((Math.pow(phi, n) - Math.pow(1 - phi, n)) / Math.sqrt(5));
};

// Find first K Fibonacci numbers via basic for loop
var firstkfib = function(k) {
    var arr = [];
    for (var i = 1; i < k + 1; i++) {
        var fibi = fibonacci(i);
        arr.push(fibi);
        // Print to console so I can monitor progress
        console.log(i + " : " + fibi);
    }
    return arr;
};

var fmt = function(arr) {
    return arr.join(",");
};
var k = 100;
// write to file
var outfile = "fibonacci.txt";
var out = fmt(firstkfib(k));
fs.writeFileSync(outfile, out);
console.log("\nScript: " + __filename + "\nWrote: " + out + "\nTo: " + outfile);
In general, recursive functions are "cleaner" and "easier" to write, but they often require more resources (mostly memory, due to the accumulating stack frames). In your case the best way to get the first 100 numbers would be to program it with a simple loop that computes the next number of the Fibonacci series and appends it to a list:
double a[100];
a[0] = 1;
a[1] = 1;
int k = 2;
do {
    a[k] = a[k - 2] + a[k - 1];
    k++;
} while (k != 100);
The recursive fibonacci function is implemented the wrong way. The correct way to implement it recursively is discussed in this article Recursion and Fibonacci Numbers. For those too lazy to read, here is their code (it's in C, but it shouldn't be too hard to translate):
unsigned long fib(unsigned int n)
{
    return n == 0 ? 0 : fib2(n, 0, 1);
}

unsigned long fib2(unsigned int n, unsigned long p0, unsigned long p1)
{
    return n == 1 ? p1 : fib2(n - 1, p1, p0 + p1);
}
An even more efficient implementation would cache the values of the Fibonacci sequence as it computes them. (Note that the cache should only be written at the top level: caching inside fib2 itself would store the final result under intermediate indices, since every fib2 call in one chain returns the same value.)
var cache = [];
var fibonacci = function(n) {
    if (cache[n] !== undefined) return cache[n];
    return (cache[n] = n < 1 ? 0 : fib2(n, 0, 1));
};
var fib2 = function(n, p0, p1) {
    return n == 1 ? p1 : fib2(n - 1, p1, p0 + p1);
};
I don't really know the language, so there might be some problems with the code, but this is at least the gist of it.
For your question, we can't do better than O(n) since you need to produce all of the first n (n=100) numbers.
Interestingly, if you just need the nth fib number, there exists an O(log n) solution as well.
The algorithm is simple enough: find the nth power of the matrix A using a divide-and-conquer approach and report its (0,1) element, since A^n = [[F(n+1), F(n)], [F(n), F(n-1)]], where
A = |1 1 |
|1 0 |
The recursion being
A^n = A^(n/2) * A^(n/2)
Time complexity:
T(n) = T(n/2) + O(1) = O(log n)
If you think about it with a piece of paper, you'd find that the proof is simple and is based upon the principle of induction.
If you still need help, refer to this link
NOTE: Of course you could iteratively calculate A, A^2, A^3 and so on. However, it wouldn't make sense to use it compared to the other simpler solutions described in the other answers. (Because of sheer code complexity)
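Here is a sketch of the matrix-power idea in Java (the names are mine, not from the linked article); with plain longs it is valid up to fib(92):

```java
class MatrixFib {
    // 2x2 matrix product for the Fibonacci Q-matrix.
    static long[][] multiply(long[][] a, long[][] b) {
        return new long[][] {
            { a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1] },
            { a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1] }
        };
    }

    // A^n by divide and conquer: O(log n) matrix multiplications.
    static long[][] power(long[][] a, int n) {
        if (n == 1) return a;
        long[][] half = power(a, n / 2);
        long[][] sq = multiply(half, half);
        return (n % 2 == 0) ? sq : multiply(sq, a);
    }

    // A^n = [[F(n+1), F(n)], [F(n), F(n-1)]], so F(n) is the (0,1) element.
    static long fib(int n) {
        if (n == 0) return 0;
        return power(new long[][] { {1, 1}, {1, 0} }, n)[0][1];
    }
}
```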
This is a very naive way to do this calculation. Try something like:
long[] a = new long[100];
a[0] = 1;
a[1] = 1;
for (int i = 2; i < 100; ++i)
{
    a[i] = a[i - 2] + a[i - 1];  // careful: a[92] onward (fib(93)+) overflows a signed 64-bit long
}
for (int i = 0; i < 100; ++i)
    Console.WriteLine(a[i]);
This way you get linear time, O(n).
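One caveat with all of the fixed-width versions, including the closed form: fib(79) already exceeds the 53-bit integer range of a double, and fib(93) overflows a signed 64-bit long, so none of them print the first 100 values exactly. A BigInteger sketch (the class name is mine) that stays exact:

```java
import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

class ExactFib {
    // First k Fibonacci numbers (1, 1, 2, 3, 5, ...), exact at any size.
    static List<BigInteger> firstK(int k) {
        List<BigInteger> out = new ArrayList<>();
        BigInteger a = BigInteger.ONE, b = BigInteger.ONE;
        for (int i = 0; i < k; i++) {
            out.add(a);
            BigInteger next = a.add(b);  // no overflow: BigInteger grows as needed
            a = b;
            b = next;
        }
        return out;
    }
}
```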
