What is the time complexity of my recursive method with dynamic programming?

I'm working on an algorithm for computing f(n) over a set 'S'. It is defined so that f(k) = S(k) + f(k-1) + f(k-2) + ... + f(0); for example:
f(4) = S(4) + f(3) + f(2) + f(1) + f(0)
This is my pseudocode:
func solve(int k, int[] s, int[] memo)
{
    if (k == 0)                    // base case
        return s[0]
    if (memo[k] == -1)
    {
        var nTemp = 0
        for (int i = 0; i < k; i++)
            nTemp = (nTemp + solve(i, s, memo))
        memo[k] = (nTemp + s[k])
    }
    return memo[k]
}
I'm not sure about its time complexity, though. I think it is O(n), but I'm not certain.

Let's consider how many operations solve has to perform starting from k = 1:
k = 1: memo[1] = s[0] + s[1] -> 1 sum
k = 2: memo[2] = memo[0] + memo[1] + s[2] -> 2 sums
...
k = m: memo[m] = memo[0] + memo[1] + ... + memo[m-1] + s[m] -> m sums
So the total number of additions is 1 + 2 + ... + k = k*(k + 1)/2 ~ k*k. Hence, the total time complexity is O(k^2).
However, if the values up to k-1 are already cached in memo and we need to calculate f(k), then the time complexity is only O(k), since it amounts to summing up memo[i] for i < k.
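For reference, here is a runnable Java version of the pseudocode (a sketch; it assumes s holds S(0..n) and that memo is pre-filled with -1, as in the question):
static int solve(int k, int[] s, int[] memo) {
    if (k == 0)                      // base case: f(0) = S(0)
        return s[0];
    if (memo[k] == -1) {             // not cached yet
        int sum = 0;
        for (int i = 0; i < k; i++)  // f(k) depends on f(0) .. f(k-1)
            sum += solve(i, s, memo);
        memo[k] = sum + s[k];        // f(k) = S(k) + f(k-1) + ... + f(0)
    }
    return memo[k];
}
Calling solve(n, s, memo) after java.util.Arrays.fill(memo, -1) does the quadratic amount of work described above on the first call; once memo[0..k-1] is filled, computing a new f(k) only takes the O(k) pass over the cached values.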

Related

Time and Space algorithm complexity

I am coding a brute-force approach for a coding problem: I need to find the maximum-score path through the array with a maximum step size of k.
Input: nums = [1,-1,-2,4,-7,3], k = 2
Output: 7
Explanation: You can choose your jumps forming the subsequence [1,-1,4,3]. The sum is 7.
And I ran into a problem calculating the complexity. My first thought was that on each element we may call the function k times, so time and space are O(k^n), where n is the length of the array. My second guess: for the first element we call the function at most 1 time, for the second 2 times (that is, if k > i), and so on. So we have the sum 1 + 2 + ... + k + k + ... + k = ((1 + k)/2)*k + ((k + k)/2)*(n - k) = O(k^2). I think the first one is correct, but I can't tell for sure why :/
Here's my Java code:
public int maxResult(int[] nums, int k) {
    return maxResult(nums, k, nums.length - 1);
}
private int maxResult(int[] nums, int k, int index) {
    if (index == 0)
        return nums[0];
    int max = Integer.MIN_VALUE;
    int start = index - k < 0 ? 0 : index - k;
    for (int i = start; i < index; i++) {
        int res = maxResult(nums, k, i);
        System.out.println(i);
        max = Math.max(res, max);
    }
    return max + nums[index];
}
The recurrence relation for your code for a particular k is
C(n) = sum(C(n-i) for i = 1...k) for n>k
C(n) = C(1) + C(2) + ... + C(n-1) for n <= k
C(1) = 1
These are the recurrence relations for the higher-order Fibonacci numbers, shifted by k-1 places. That is, C(n) = kFib(k, n+k-1). The k-Fibonacci numbers grow as Theta(alpha^n), where alpha is some constant based on k -- for k=2, alpha is the golden ratio, and as k increases, alpha gets closer and closer to 2. (Specifically, alpha is the positive root of x^k - x^(k-1) - ... - x - 1.)
Therefore C(n) = kFib(k, n+k-1) = Theta(alpha^(n+k)).
Because alpha is always less than 2, O(2^(n+k)) is a simple correct bound, although not a tight one.
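To see that growth concretely, here is a hedged Java sketch of the same recursion with a call counter added (the println is dropped; the counter is purely for illustration):
static long calls;
static int maxResult(int[] nums, int k, int index) {
    calls++;                                   // count every invocation
    if (index == 0)
        return nums[0];
    int max = Integer.MIN_VALUE;
    int start = Math.max(0, index - k);
    for (int i = start; i < index; i++)
        max = Math.max(max, maxResult(nums, k, i));
    return max + nums[index];
}
For k = 2, resetting the counter and calling maxResult(nums, 2, index) for index = 0..5 gives call counts of 1, 2, 4, 7, 12, 20; the ratio of consecutive counts approaches the golden ratio, matching the Theta(alpha^(n+k)) growth described above.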

Worst case time complexity for the code

Why is the worst-case time complexity of the following code O(N)?
/*
* V is sorted
* V.size() = N
* The function is initially called as searchNumOccurrence(V, k, 0, N-1)
*/
int searchNumOccurrence(vector<int> &V, int k, int start, int end) {
    if (start > end) return 0;
    int mid = (start + end) / 2;
    if (V[mid] < k) return searchNumOccurrence(V, k, mid + 1, end);
    if (V[mid] > k) return searchNumOccurrence(V, k, start, mid - 1);
    return searchNumOccurrence(V, k, start, mid - 1) + 1 + searchNumOccurrence(V, k, mid + 1, end);
}
What's the worst case? The worst case is when all elements are the same and equal to k. Then you have to read every element at least once, which is N operations. Since most function calls increase the result by 1, there are about N function calls (some return 0, but those don't spawn new calls). Therefore, the worst-case time complexity is O(N).
Yes. In the worst case, when all the numbers in the array are equal to k, the recurrence relation is:
T(n) = 2*T(n/2) + O(1)
which translates into O(n).
The last case -
return searchNumOccurrence(V, k, start, mid - 1) + 1 + searchNumOccurrence(V, k, mid + 1, end);
is the bottleneck step.
Assuming all the values in the array are the same, we get the following relation:
T(N) = 2 * T(N/2) + c        (c is a constant)
     = 4 * T(N/4) + 3c
     = 8 * T(N/8) + 7c
     .....
     = N * T(N/N) + (N - 1)c
     = N * T(1) + (N - 1)c
     = O(N)
(The constant work c is paid once per call; summed over all the calls it comes to (N - 1)c, which is still only O(N).)
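The same can be checked empirically. Here is a hedged Java sketch (a port of the C++ function above with a call counter added for illustration):
static long calls;
static int searchNumOccurrence(int[] v, int k, int start, int end) {
    calls++;                                   // count every invocation
    if (start > end) return 0;                 // empty range contributes 0
    int mid = (start + end) / 2;
    if (v[mid] < k) return searchNumOccurrence(v, k, mid + 1, end);
    if (v[mid] > k) return searchNumOccurrence(v, k, start, mid - 1);
    return searchNumOccurrence(v, k, start, mid - 1) + 1
         + searchNumOccurrence(v, k, mid + 1, end);
}
In the worst case of N equal elements, every element is hit as mid exactly once (N calls that each spawn two children) and the remaining calls return 0 on an empty range, so the counter ends up at 2N + 1, which is linear as claimed.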

Calculating time complexity of algorithm

How to calculate time complexity of function f?
void f(int n)
{
    if (n <= 1)
        return;
    g(n, n / 3);
}
void g(int n, int m)
{
    int i = 1;
    while (m < n) {
        m += i;
        i++;
    }
    f(n / 2);
}
The answer is sqrt(n), but I don't see how...
Thanks
First, note that the program can be translated into a single-function program by inlining g(n, m) into f():
void f(int n)
{
    if (n <= 1)
        return;
    int m = n / 3;
    int i = 1;
    while (m < n) {
        m += i;
        i++;
    }
    f(n / 2);
}
The inner loop runs for O(sqrt(n)) iterations, because m starts at n/3, must reach n, and is increased by 1, 2, 3, ..., so summing the increments we get:
n/3 + (1 + 2 + ... + i) >= n
We need to solve the above inequality for the final value of i, and we get:
1 + 2 + ... + i >= 2n/3
From sum of arithmetic progression:
i(i+1)/2 >= 2n/3
From the above inequality, we can conclude that indeed i is in O(sqrt(n)).
So, we can denote the complexity as:
T(n) = T(n/2) + O(sqrt(n))
where T(n/2) is the recursive step and O(sqrt(n)) is syntactic sugar for some function that is in O(sqrt(n)).
Now, we can see that:
T(n) = T(n/2) + sqrt(n) = T(n/4) + sqrt(n/2) + sqrt(n) = ... = sqrt(1) + ... + sqrt(n/2) + sqrt(n)
The above sum equals sqrt(n) * (1 + 1/sqrt(2) + 1/2 + 1/(2*sqrt(2)) + ...), a geometric series with ratio 1/sqrt(2), so it is at most sqrt(n) / (1 - 1/sqrt(2)) and is therefore in O(sqrt(n)).
Let F(n) be the time complexity of f(n) and G(n, m) be the time complexity of g(n, m). Then:
G(n, m) = sqrt(n - m) + F(n/2)
F(n) = G(n, n/3) = sqrt(n - n/3) + F(n/2) = C*sqrt(n) + F(n/2)
So the answer is O(sqrt(n)).
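To sanity-check the O(sqrt(n)) bound empirically, here is a hedged Java sketch (a direct port of the inlined version above, with an iteration counter added purely for illustration):
static long iterations;        // inner-loop iterations across all recursive calls
static void f(long n) {
    if (n <= 1)
        return;
    long m = n / 3;
    long i = 1;
    while (m < n) {            // runs about sqrt(4n/3) times for this call
        m += i;
        i++;
        iterations++;
    }
    f(n / 2);                  // the recursive step T(n/2)
}
Since the total work is Theta(sqrt(n)), quadrupling n should roughly double the final value of the counter.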

Efficiently evaluating a recursive function?

I came across an interesting puzzle in a previous interview.
You need to implement a function which would fit the following conditions:
m, n - positive integer numbers > 0
F(m, n) = F(m-1, n-1) + F(m, n-1)
F(1, n) = 1
F(m, 1) = 1
Obviously you can write the recursive implementation:
int F(int m, int n)
{
    if (m == 1) return 1;
    if (n == 1) return 1;
    return F(m-1, n-1) + F(m, n-1);
}
But for inputs around one billion it will run for a very long time, because it performs on the order of 2^1000000000 recursive calls :)
Does anybody have any ideas how to optimize this solution?
function F(m, n)
    v = 1
    s = 1
    k = 1
    while k < m do
        v = v * (n-k) / k
        s = s + v
        k = k + 1
    end
    return s
end
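For context, the recurrence F(m, n) = F(m-1, n-1) + F(m, n-1) with F(1, n) = F(m, 1) = 1 is Pascal's rule, so F(m, n) = C(n-1, 0) + C(n-1, 1) + ... + C(n-1, m-1), and the loop above builds each binomial coefficient from the previous one in O(m) total steps. Here is a Java sketch of the same idea; the use of BigInteger is my own assumption, since the exact values overflow long for large m and n:
import java.math.BigInteger;
static BigInteger F(long m, long n) {
    BigInteger v = BigInteger.ONE;   // current term, starts at C(n-1, 0) = 1
    BigInteger s = BigInteger.ONE;   // running sum of the terms so far
    for (long k = 1; k < m; k++) {
        // C(n-1, k) = C(n-1, k-1) * (n-k) / k; the division is always exact
        v = v.multiply(BigInteger.valueOf(n - k)).divide(BigInteger.valueOf(k));
        s = s.add(v);
    }
    return s;
}
As a quick check, F(3, 4) = C(3, 0) + C(3, 1) + C(3, 2) = 1 + 3 + 3 = 7, which matches what the original recursive implementation returns.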

How to calculate the running time of an algorithm

I have the following algorithm:
for (i = 1; i < n; i++) {
    SmallPos = i;
    Smallest = Array[SmallPos];
    for (j = i + 1; j <= n; j++)
        if (Array[j] < Smallest) {
            SmallPos = j;
            Smallest = Array[SmallPos];
        }
    Array[SmallPos] = Array[i];
    Array[i] = Smallest;
}
Here is my calculation:
For the nested loop, I find a time complexity of
1 ("i = 1") + n+1 ("i < n") + n ("i++")
* [1 ("j = i + 1") + n+1 ("j <= n") + n ("j++")]
+ 3n (for the if statement and the statements in it)
+ 4n (the 2 statements before the inner for loop and the final 2 statements after it).
This is (1 + n + 1 + n)(1 + 1 + n + n) + 7n = (2 + 2n)(2 + 2n) + 7n = 4n^2 + 15n + 4.
But unfortunately, the textbook gives T(n) = 2n^2 + 4n - 5.
Can anyone explain where I went wrong?
Here is a formal way to represent your algorithm mathematically (sigma notation):
T(n) = sum for i = 1 to n-1 of ( c + sum for j = i+1 to n of c' )
Replace c by the number of operations in the outer loop and c' by the number of operations in the inner loop. Expanding the inner sum gives T(n) = c*(n-1) + c'*(n-1 + n-2 + ... + 1) = c*(n-1) + c'*n*(n-1)/2, which is quadratic in n; the exact coefficients in the textbook's 2n^2 + 4n - 5 depend on which statements it counts as single operations.
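To sanity-check the order of growth (though not the textbook's exact coefficients), here is a hedged Java sketch that counts only the Array[j] < Smallest comparisons; the count always comes out to n*(n-1)/2, which is where the quadratic term comes from:
static long countComparisons(int[] array) {
    long comparisons = 0;
    int n = array.length;                      // 0-based indexing in this port
    for (int i = 0; i < n - 1; i++) {
        int smallPos = i;
        int smallest = array[smallPos];
        for (int j = i + 1; j < n; j++) {
            comparisons++;                     // one comparison per inner iteration
            if (array[j] < smallest) {
                smallPos = j;
                smallest = array[smallPos];
            }
        }
        array[smallPos] = array[i];
        array[i] = smallest;
    }
    return comparisons;                        // always n*(n-1)/2
}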
