I was trying to solve a university exercise about recurrence equations and computational complexity, but I can't understand how to set up the recurrence equation.
static void comb(int[] a, int i, int max) {
    if (i < 0) {
        for (int h = 0; h < a.length; h++)
            System.out.print((char)('a' + a[h]));
        System.out.print("\n");
        return;
    }
    for (int v = max; v >= i; v--) {
        a[i] = v;
        comb(a, i - 1, v - 1);
    }
}

static void comb(int[] a, int n) { // a.length <= n
    comb(a, a.length - 1, n - 1);
    return;
}
I tried to set up the following recurrence:

T(n, i, j) = O(n) + c                 if i < 0
T(n, i, j) = (j-i) T(n, i-1, j-1)     otherwise
Solving:

T(n, i, j) = (j-i) T(n, i-1, j-1)
           = (j-i) ((j-1)-(i-1)) T(n, i-2, j-2) = (j-i)^2 T(n, i-2, j-2)
           = ...
           = (j-i)^k T(n, i-k, j-k)
At this point I'm stuck and cannot figure out how to proceed.
Thanks, and sorry for my bad English.
Luigi
With your derivation

T(n, i, j) = ... = (j-i)^k T(n, i-k, j-k)

you are almost done! Just set k = i+1, so that the first argument of the recursion reaches the base case i-k = -1, and you get:

T(n, i, j) = (j-i)^(i+1) T(n, -1, j-i-1) = (j-i)^(i+1) O(n) = O(n (j-i)^(i+1))
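Not part of the original post, but here is a quick Python sanity check of that closed form, assuming the base case costs exactly n (i.e. dropping the constant c):

def T(n, i, j):
    # the recurrence from the question, with the base case taken to cost exactly n
    if i < 0:
        return n
    return (j - i) * T(n, i - 1, j - 1)

def closed_form(n, i, j):
    # the closed form derived above: (j-i)^(i+1) times the base cost n
    return (j - i) ** (i + 1) * n

for n, i, j in [(10, 2, 5), (7, 3, 6), (12, 0, 4)]:
    print(T(n, i, j), closed_form(n, i, j))

Each pair of printed values matches, e.g. 270 and 270 for (n, i, j) = (10, 2, 5).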
void merge(int arr[], int l, int m, int r);

int min(int x, int y) { return (x < y) ? x : y; }

void mergeSort(int arr[], int n)
{
    int curr_size;
    int left_start;

    for (curr_size = 1; curr_size <= n - 1; curr_size = 2 * curr_size)
    {
        for (left_start = 0; left_start < n - 1; left_start += 2 * curr_size)
        {
            int mid = min(left_start + curr_size - 1, n - 1);
            int right_end = min(left_start + 2 * curr_size - 1, n - 1);
            merge(arr, left_start, mid, right_end);
        }
    }
}

void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;
    int L[n1], R[n2];

    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    i = 0;
    j = 0;
    k = l;
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        arr[k] = L[i];
        i++;
        k++;
    }
    while (j < n2)
    {
        arr[k] = R[j];
        j++;
        k++;
    }
}
I couldn't figure out the O(n log n) time complexity of this iterative merge sort.
I want to derive it myself, step by step, the way we normally calculate time complexities. The analysis of recursive merge sort is easy to find, but I couldn't figure out how to do it for the iterative one.
Please help.
Assuming n is a power of 2: the first pass does n/2 merges producing runs of size 2, the next pass n/4 merges producing runs of size 4, and so on, down to 1 merge producing a run of size n.
The cost of a merge that produces a run of size k is linear in k, so every pass costs Θ(n) in total.
There are log_2 n passes, so the total cost is n/2 * 2 + n/4 * 4 + ... + 1 * n = n log_2 n, i.e. Θ(n log n).
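To make the per-pass accounting concrete, here is a small Python sketch of mine (not from the post) that mirrors the two loops of mergeSort above and tallies the cost of every pass, assuming a merge that produces a run of total size k costs exactly k:

import math

def iterative_merge_cost(n):
    # Mirror the two loops of mergeSort, counting right_end - left_start + 1
    # for each call to merge (the size of the run it produces).
    total = 0
    passes = 0
    curr_size = 1
    while curr_size <= n - 1:
        pass_cost = 0
        left_start = 0
        while left_start < n - 1:
            right_end = min(left_start + 2 * curr_size - 1, n - 1)
            pass_cost += right_end - left_start + 1
            left_start += 2 * curr_size
        total += pass_cost          # each pass touches essentially all n elements once
        passes += 1
        curr_size *= 2
    return passes, total

for n in (8, 64, 1024):
    passes, total = iterative_merge_cost(n)
    print(n, passes, total, n * int(math.log2(n)))

For n a power of 2 this reports log_2(n) passes and a total of exactly n log_2(n).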
I learned that the time function of merge sort is the following:

T(n) = 2T(n/2) + Θ(n)   if n > 1

I understand why T(n) = 2T(n/2) + A, but why does A = Θ(n)?
I think A is maybe the dividing time, but I don't understand why it is expressed as Θ(n).
Please help!
No, A is not the dividing step. A is the merging step which is linear.
void merge(int a[], int b[], int p, int q, int c[])
/* Function to merge the two arrays a[0..p) and b[0..q) into the array c[0..p+q) */
{
    int i = 0, j = 0, k = 0;

    while (i < p && j < q) {
        if (a[i] <= b[j]) {
            c[k] = a[i];
            i++;
        }
        else {
            c[k] = b[j];
            j++;
        }
        k++;
    }
    while (i < p) {
        c[k] = a[i];
        i++;
        k++;
    }
    while (j < q) {
        c[k] = b[j];
        j++;
        k++;
    }
}
This merging step takes O(p + q) time, where p and q are the subarray lengths, and here p + q = n.
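For completeness, a tiny sketch of mine that unrolls this recurrence numerically, assuming the Θ(n) term is exactly n and T(1) = 1:

import math

def T(n):
    # T(n) = 2T(n/2) + n, with the merge cost taken to be exactly n and T(1) = 1
    if n <= 1:
        return 1
    return 2 * T(n // 2) + n

for n in (2, 16, 256, 4096):
    print(n, T(n), int(n * math.log2(n) + n))

For powers of two the recurrence works out to exactly n log_2(n) + n, which is Θ(n log n).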
Given a (1-based) array a with n elements, define the function f(i, j) (1 ≤ i, j ≤ n) as (i - j)^2 + g(i, j)^2. The function g is calculated by the following pseudo-code:
int g(int i, int j)
{
    int sum = 0;
    for (int k = min(i, j) + 1; k <= max(i, j); k = k + 1)
        sum = sum + a[k];
    return sum;
}
Find the value min_{i ≠ j} f(i, j).
I have created an iterative brute-force algorithm for this, but the solution needs to use divide and conquer.
Brute-force algorithm:
def g_fun(i, j):
    sum = 0
    for k in xrange(min(i, j) + 1, max(i, j) + 1):
        sum += arr[k - 1]
    return sum

def f_fun(i, j):
    s = g_fun(i, j)
    return (i - j) ** 2 + s ** 2

n = input("n : ")
arr = map(int, raw_input("Array : ").split())
low = float('inf')
for i in xrange(1, n + 1):
    for j in xrange(1, n + 1):
        if i != j:
            temp = f_fun(i, j)
            if temp < low:
                low = temp
print low
I got asked this question in an interview and was not sure how to answer. It is the regular 3SUM problem, and we all know the O(n^2) answer. The question goes this way: you have 3 unsorted arrays a, b, c. Find three elements such that a[i] + b[j] + c[k] = 0. You are not allowed to use hashing in this scenario, and the solution must be O(n^2) or better.
Here is my attempt, and yes, it is unfortunately still O(n^3):
public static void get3Sum(int[] a, int[] b, int[] c) {
    int i = 0, j = 0, k = 0, lengthOfArrayA = a.length, lengthOfArrayB = b.length, lengthOfArrayC = c.length;
    for (i = 0; i < lengthOfArrayA; i++) {
        j = k = 0;
        while (j < lengthOfArrayB) {
            if (k >= lengthOfArrayC) {
                // exhausted c for this b[j]: move to the next j and restart c
                j++;
                k = 0;
            } else if (a[i] + b[j] + c[k] == 0) {
                // found it: print and keep scanning c for further matches
                System.out.println(a[i] + " " + b[j] + " " + c[k]);
                k++;
            } else {
                k++;
                if (k >= lengthOfArrayC) {
                    j++;
                    k = 0;
                }
            }
        }
    }
}
Does anyone have any ideas for solving this in O(n^2) or better?
Thanks!
Sort A and sort B.
Once they are sorted, given a target S, we can find i, j such that A[i] + B[j] = S in O(n) time.
We do this by maintaining two pointers a and b, with a initially at the smallest element of A and b at the largest element of B; then we increment a or decrement b, as appropriate, after comparing A[a] + B[b] with S.
For your problem, run this O(n) procedure n times (so O(n^2) overall), taking S = -C[k] for each k.
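A minimal Python sketch of that idea (my code and variable names, not part of the answer): sort a and b once, then run the two-pointer scan once per element of c with target S = -c[k].

def three_sum_zero(a, b, c):
    a_sorted = sorted(a)
    b_sorted = sorted(b)
    for ck in c:                           # n targets S = -c[k]
        target = -ck
        lo, hi = 0, len(b_sorted) - 1      # lo walks up a_sorted, hi walks down b_sorted
        while lo < len(a_sorted) and hi >= 0:
            s = a_sorted[lo] + b_sorted[hi]
            if s == target:
                return a_sorted[lo], b_sorted[hi], ck
            if s < target:
                lo += 1                    # sum too small: take a larger element of a
            else:
                hi -= 1                    # sum too large: take a smaller element of b
    return None                            # no triple sums to zero

print(three_sum_zero([3, -1, 7], [2, 5, -4], [1, -5, 2]))   # e.g. (3, -4, 1)

Each scan is O(n) after the O(n log n) sorts, so the whole thing is O(n^2).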
We have all heard of Bentley's beautiful Programming Pearls problem,
which solves the maximum subsequence sum:
maxsofar = 0;
maxcur = 0;
for (i = 0; i < n; i++) {
    maxcur = max(A[i] + maxcur, 0);
    maxsofar = max(maxsofar, maxcur);
}
What if we add an additional condition: the maximum subsequence whose sum is less than M?
This should do it. Am I right?
int maxsofar = 0;
for (int i = 0; i < n - 1; i++) {
    int maxcur = 0;
    for (int j = i; j < n; j++) {
        maxcur = max(A[j] + maxcur, 0);
        maxsofar = maxcur < M ? max(maxsofar, maxcur) : maxsofar;
    }
}
Unfortunately this is O(n^2). You may speed it up a little by breaking out of the inner loop when maxcur >= M, but it is still O(n^2).
This can be solved using dynamic programming albeit only in pseudo-polynomial time.
Define
m(i,s) := maximum sum less than s obtainable using only the first i elements
Then you can calculate m(n, M) using the following recurrence relation:

m(i, s) = max(m(i-1, s), m(i-1, s - A[i]) + A[i])
This solution is similar to the solution to the knapsack problem.
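Here is a minimal Python sketch of that recurrence (mine, not the answer's). It assumes the elements of A are positive integers and, following the knapsack analogy, that any subset of elements may be chosen; m[s] holds the largest total not exceeding s, so the answer for "strictly less than M" is m[M - 1]:

def max_sum_less_than(A, M):
    # m[s] = largest sum <= s obtainable from the elements processed so far
    m = [0] * M                      # using no elements, the best sum is 0
    for a in A:
        new_m = m[:]
        for s in range(a, M):
            # either skip a, or add it to a selection that summed to at most s - a
            new_m[s] = max(m[s], m[s - a] + a)
        m = new_m
    return m[M - 1]                  # largest sum <= M - 1, i.e. strictly less than M

print(max_sum_less_than([3, 5, 7], 9))   # -> 8 (3 + 5)

The running time is O(nM), which is why it is only pseudo-polynomial.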
If all A[i] > 0, you can do this in O(n lg n): precompute partial sums S[i], then binary search S for S[i] + M. For instance:
def binary_search(L, x):
    # first index j with L[j] > x (len(L) if there is none)
    def _binary_search(lo, hi):
        if lo >= hi:
            return lo
        mid = lo + (hi - lo) // 2
        if x < L[mid]:
            return _binary_search(lo, mid)
        return _binary_search(mid + 1, hi)
    return _binary_search(0, len(L))

A = [1, 2, 3, 2, 1]
M = 4

S = [0]                      # leading 0 so subarrays starting at index 0 are covered
for a in A:
    S.append(S[-1] + a)

maxsum = 0
for i, s in enumerate(S[:-1]):
    j = binary_search(S, s + M)          # first prefix sum exceeding s + M
    cand = S[j - 1] - s                  # best subarray sum starting right after position i
    if cand < M:
        maxsum = max(cand, maxsum)
print maxsum
EDIT: as atuls correctly points out, the binary search is overkill; since S is increasing, we can just keep track of j each iteration and advance from there.
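For illustration only (my code, reusing A, M and the prefix sums S built above): since S is increasing, the right endpoint only ever moves forward, so a single linear pass suffices.

maxsum = 0
j = 0
for i in range(len(S) - 1):
    if j <= i:
        j = i + 1
    while j < len(S) and S[j] - S[i] < M:     # advance while the window sum stays below M
        j += 1
    if j - 1 > i:                             # window [i+1 .. j-1] is non-empty
        maxsum = max(maxsum, S[j - 1] - S[i])
print maxsum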
Solvable in O(n log n): go from left to right, keeping the partial sums seen so far in a balanced binary search tree. At each position, search the tree for the smallest value larger than sum - M (where sum is the partial sum so far), update the best answer with the difference, and then insert sum into the tree.
best = -infinity;
sum = 0;
tree.insert(0);
for (i = 0; i < n; i++) {
    sum = sum + A[i];
    int diff = sum - tree.find_smallest_value_larger_than(sum - M);
    if (diff > best) {
        best = diff;
    }
    tree.insert(sum);
}
print best
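As a concrete illustration (my code, not the answer's), here is the same idea in Python, using the bisect module over a sorted list as a stand-in for the balanced tree; list insertion is O(n), so this demonstrates the logic rather than the O(n log n) bound:

import bisect

def max_subarray_sum_below(A, M):
    best = float('-inf')
    prefix_sums = [0]                      # sorted prefix sums seen so far
    s = 0
    for x in A:
        s += x
        # smallest prefix sum strictly greater than s - M
        idx = bisect.bisect_right(prefix_sums, s - M)
        if idx < len(prefix_sums):
            best = max(best, s - prefix_sums[idx])   # subarray sum < M ending here
        bisect.insort(prefix_sums, s)
    return best

print(max_subarray_sum_below([1, -2, 5, -1, 2], 6))   # -> 5

Replacing the sorted list with a balanced BST (or any ordered-set structure with O(log n) insert and successor queries) gives the claimed O(n log n).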