Path of Length N in graph with constraints - algorithm

I want to find the number of paths of length N in a graph whose vertices can be any natural number. However, two vertices are connected only if the product of the two vertices is less than some natural number P. If the product of two vertices is greater than P, they are not connected and cannot be reached from one another.
I can obviously run two nested loops (both up to P) and create an adjacency matrix, but P can be extremely large, so this approach would be extremely slow. Can anyone think of a more efficient approach to solve the problem? Can we solve it using dynamic programming?

I agree with Ante's recurrence, although I used a slightly simplified version. Note that I'm using the letter P to name the maximum product, as it is used in the original problem statement:
f(1,x) = 1
f(i,x) = sum(f(i-1, y) for y in {1, ..., floor(P/x)})
f(i,x) is the number of sequences of length i that end with x. The answer to the question is then f(n+1, 1).
Of course, since P can be up to 10^9 in this task, a straightforward implementation with a full DP table is out of the question. However, there are only m < 70000 possible distinct values of floor(P/i). So let's find the maximal segments a_j ... b_j on which floor(P/x) is constant, i.e. floor(P/a_j) = floor(P/b_j). (For example, with P = 10 the distinct values of floor(P/x) are 10, 5, 3, 2, 1, giving the five segments [1,1], [2,2], [3,3], [4,5], [6,10].) We can find those segments in O(number of segments * log P) using binary search.
Imagine the full DP table for f. Since there are only m different values for floor(P/x), every row of f consists of m contiguous ranges that have the same value.
So let's compute the compressed DP table, where we represent the rows as lists of (length, value) pairs. We start with f(1) = [(P, 1)] and we can compute f(i+1) from f(i) by processing the segments in increasing order and computing prefix sums over f(i) (both the total length and the total of length * value covered so far).
The total runtime of my implementation of this approach is O(m (log P + n)). This is the code I used:
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

using ll = long long;
const int mod = 1000000007;

void add(int& x, ll y) { x = (x + y) % mod; }

int main() {
    int n, P;
    cin >> n >> P;

    // Find the maximal segments on which floor(P/x) is constant.
    // Each segment is stored as (value of floor(P/x), length of the segment).
    int x = 1;
    vector<pair<int,int>> segments;
    while (x <= P) {
        // Binary search for the first y > x with floor(P/y) < floor(P/x).
        int y = x + 1, hi = P + 1;
        while (y < hi) {
            int mid = (y + hi) / 2;
            if (P / mid < P / x) hi = mid;
            else y = mid + 1;
        }
        segments.push_back(make_pair(P / x, y - x));
        x = y;
    }
    reverse(begin(segments), end(segments));

    // Compressed DP row: list of (length, value) pairs; f(1) = [(P, 1)].
    vector<pair<int,int>> dp;
    dp.push_back(make_pair(P, 1));
    for (int i = 1; i <= n; ++i) {
        int j = 0;
        int sum_smaller = 0, cnt_smaller = 0;
        vector<pair<int,int>> dp2;
        for (auto it : segments) {
            int value = it.first, cnt = it.second;
            // Advance over whole dp segments that fit inside the first `value` cells.
            while (cnt_smaller + dp[j].first <= value) {
                cnt_smaller += dp[j].first;
                add(sum_smaller, (ll)dp[j].first * dp[j].second);
                j++;
            }
            int pref_sum = sum_smaller;
            if (value > cnt_smaller)
                add(pref_sum, (ll)(value - cnt_smaller) * dp[j].second);
            dp2.push_back(make_pair(cnt, pref_sum));
        }
        dp = dp2;
        reverse(begin(dp), end(dp));
    }
    cout << dp[0].second << endl;
}
I needed to do some micro-optimizations in the handling of the arrays to get AC, but those aren't really relevant, so I left them out.

If the number of vertices is small, then the adjacency matrix A can help. The sum of the elements of A^N is the number of distinct paths of length N, if paths are oriented. If not, the number of paths is the sum of the elements divided by 2. That is because element (i,j) of A^N represents the number of paths of length N from vertex i to vertex j.
In this case, the same approach can be done by DP, using the reasoning that the number of paths of length n from vertex v is the sum of the numbers of paths of length n-1 from all its neighbours. The neighbours of vertex i are the vertices 1 to floor(Q/i), where Q is the maximum allowed product (called P in the question). With that we can define a function N(vertex, length) which represents the number of paths of the given length starting from the given vertex:
N(i, 1) = floor(Q/i),
N(i, n) = sum( N(j, n-1) for j in {1, ..., floor(Q/i)} ).
The number of all oriented paths of length n is then sum( N(i, n) ) over all vertices i.
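For a small Q this recurrence can be evaluated directly. Here is a minimal sketch (my code, not from the answer) that keeps one row N(., length) at a time and uses prefix sums of the previous row, so each additional length costs O(Q); the values of Q and the path length are illustrative.
// A minimal sketch of the recurrence above for a small maximum product Q.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    long long Q = 30;   // maximum allowed product (assumed small here)
    int len = 3;        // path length n

    // row[i] = N(i, current length); start with N(i, 1) = floor(Q/i)
    vector<long long> row(Q + 1);
    for (long long i = 1; i <= Q; ++i) row[i] = Q / i;

    for (int cur = 2; cur <= len; ++cur) {
        // pref[k] = sum of row[1..k-1] from the previous length
        vector<long long> pref(Q + 2, 0);
        for (long long i = 1; i <= Q; ++i) pref[i + 1] = pref[i] + row[i];
        // N(i, cur) = sum of N(j, cur-1) for j = 1 .. floor(Q/i)
        for (long long i = 1; i <= Q; ++i) row[i] = pref[Q / i + 1];
    }

    long long total = 0;
    for (long long i = 1; i <= Q; ++i) total += row[i];
    cout << total << endl;   // number of oriented paths of length len
}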

Related

O(n) solution to counting sub-arrays with sum constraints

I'm trying to improve my intuition around the following two sub-array problems.
Problem one
Return the length of the shortest, non-empty, contiguous sub-array of A with sum at least
K. If there is no non-empty sub-array with sum at least K, return -1
I've come across an O(N) solution online.
import java.util.Deque;
import java.util.LinkedList;

class Solution {
    public int shortestSubarray(int[] A, int K) {
        int N = A.length;
        long[] P = new long[N + 1];
        for (int i = 0; i < N; ++i)
            P[i + 1] = P[i] + (long) A[i];

        // Want smallest y - x with P[y] - P[x] >= K
        int ans = N + 1;                            // N + 1 is impossible
        Deque<Integer> monoq = new LinkedList<>();  // opt(y) candidates, as indices of P
        for (int y = 0; y < P.length; ++y) {
            // Want opt(y) = largest x with P[x] <= P[y] - K
            while (!monoq.isEmpty() && P[y] <= P[monoq.getLast()])
                monoq.removeLast();
            while (!monoq.isEmpty() && P[y] >= P[monoq.getFirst()] + K)
                ans = Math.min(ans, y - monoq.removeFirst());
            monoq.addLast(y);
        }
        return ans < N + 1 ? ans : -1;
    }
}
It seems to be maintaining a sliding window with a deque. It looks like a variant of Kadane's algorithm.
Problem two
Given an array of N integers (positive and negative), find the number of contiguous sub-arrays whose sum is greater than or equal to K (which may also be positive or negative).
The best solution I've seen to this problem is O(nlogn) as described in the following answer.
tree = an empty search tree
result = 0
// This sum corresponds to an empty prefix.
prefixSum = 0
tree.add(prefixSum)
// Iterate over the input array from left to right.
for elem <- array:
    prefixSum += elem
    // Add the number of subarrays that have this element as the last one
    // and whose sum is not less than K.
    result += tree.getNumberOfLessOrEqual(prefixSum - K)
    // Add the current prefix sum to the tree.
    tree.add(prefixSum)
print result
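For concreteness, here is one possible realization of that pseudocode (my sketch, not part of the quoted answer): it replaces the balanced search tree with a Fenwick tree over coordinate-compressed prefix sums, which gives the same O(n log n) bound. All names and the example input are made up for illustration.
// Count contiguous subarrays with sum >= K in O(n log n), illustrative sketch.
#include <algorithm>
#include <iostream>
#include <vector>
using namespace std;

struct Fenwick {
    vector<int> t;
    Fenwick(int n) : t(n + 1, 0) {}
    void add(int i) { for (++i; i < (int)t.size(); i += i & -i) ++t[i]; }
    int prefix(int i) const {                 // how many inserted positions are <= i
        int s = 0;
        for (++i; i > 0; i -= i & -i) s += t[i];
        return s;
    }
};

long long countSubarrays(const vector<long long>& a, long long K) {
    int n = a.size();
    vector<long long> pre(n + 1, 0);          // prefix sums; pre[0] is the empty prefix
    for (int i = 0; i < n; ++i) pre[i + 1] = pre[i] + a[i];

    vector<long long> vals = pre;             // coordinate compression of prefix sums
    sort(vals.begin(), vals.end());
    vals.erase(unique(vals.begin(), vals.end()), vals.end());
    auto pos = [&](long long x) {             // index of largest compressed value <= x, or -1
        return (int)(upper_bound(vals.begin(), vals.end(), x) - vals.begin()) - 1;
    };

    Fenwick fw(vals.size());
    long long result = 0;
    fw.add(pos(pre[0]));                      // the empty prefix
    for (int i = 1; i <= n; ++i) {
        int p = pos(pre[i] - K);              // earlier prefixes with value <= pre[i] - K
        if (p >= 0) result += fw.prefix(p);
        fw.add(pos(pre[i]));
    }
    return result;
}

int main() {
    // Subarrays of [2, -1, 3] with sum >= 2: {2}, {2,-1,3}, {-1,3}, {3}  ->  4
    cout << countSubarrays({2, -1, 3}, 2) << endl;
}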
My questions
Is my intuition that algorithm one is a variant of Kadane's algorithm correct?
If so, is there a variant of this algorithm (or another O(n) solution) that can be used to solve problem two?
Why can problem two only be solved in O(nlogn) time when they look so similar?

Smallest missing integer algorithm that runs in O(n)?

What algorithm might find a missing integer in O(n) time, from an array?
Say we have an array A with elements drawn from the value range {1, 2, 3, ..., 2n}. Half the elements of that range are missing, so the length of A is n.
E.g:
A = [1,2,5,3,10] , n=5
Output = 4
The smallest missing integer must be in the range [1, ..., n+1]. So create an array of flags, all initially false, indicating the presence of each integer in that range. Then an algorithm is:
Scan the input array, setting flags to true as you encounter values in the range. This operation is O(n). (That is, set flag[A[i]] to true for each position i in the input array, provided A[i] <= n.)
Scan the flag array for the first false flag. This operation is also O(n). The index of the first false flag is the smallest missing integer; if every flag is true, the answer is n+1.
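A minimal sketch of this two-pass approach (the function name and the driver values are just illustrative):
// Flag-array approach: O(n) time, O(n) extra space.
#include <iostream>
#include <vector>
using namespace std;

int smallestMissing(const vector<int>& a) {
    int n = a.size();
    vector<bool> seen(n + 2, false);              // flags for the values 1 .. n+1
    for (int v : a)
        if (v >= 1 && v <= n) seen[v] = true;     // values above n can't be the answer
    for (int v = 1; v <= n + 1; ++v)
        if (!seen[v]) return v;                   // first value never seen
    return n + 1;                                 // not reachable, kept for completeness
}

int main() {
    cout << smallestMissing({1, 2, 5, 3, 10}) << endl;   // prints 4
}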
EDIT: O(n) time algorithm with O(1) extra space:
If A is writable and there are some extra bits available in the elements of A, then a constant-extra-space algorithm is possible. For instance, if the elements of A are signed values, and since all the numbers are positive, we can use the sign bit of the numbers in the original array as the flags, rather than creating a new flag array. So the algorithm would be:
For each position i of the original array, if abs(A[i]) < n+1, make the value at A[abs(A[i])] negative. (This assumes array indexes are based at 1. Adjust in the obvious way if you are using 0-based arrays.) Don't just negate the value, in case there are duplicate values in A.
Find the index of the first element of A that is positive. That index is the smallest missing number in A. If all positions are negative, then A must be a permutation of {1, ..., n} and hence the smallest missing number is n+1.
If the elements are unsigned, but can hold values as high as 4n + 1, then in step 1, instead of making the element negative, add 2n + 1 (provided the element is <= 2n) and use (A[i] mod (2n+1)) instead of abs(A[i]). Then in step 2, find the first element < 2n + 1 instead of the first positive element. Other such tricks are possible as well.
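And a hedged sketch of the signed (sign-bit) variant described above (my code; it assumes signed elements, modifies the array in place, and maps value v to index v-1 because the array here is 0-based):
// Sign-bit approach: O(n) time, O(1) extra space, input array is modified.
#include <cstdlib>
#include <iostream>
using namespace std;

int smallestMissingInPlace(int a[], int n) {
    for (int i = 0; i < n; ++i) {
        int v = abs(a[i]);                 // original value, ignoring any flag already set
        if (v <= n && a[v - 1] > 0)        // mark "v is present" exactly once
            a[v - 1] = -a[v - 1];
    }
    for (int i = 0; i < n; ++i)
        if (a[i] > 0) return i + 1;        // first unmarked position -> missing value
    return n + 1;                          // all of 1..n are present
}

int main() {
    int a[] = {1, 2, 5, 3, 10};
    cout << smallestMissingInPlace(a, 5) << endl;   // prints 4
}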
You can do this in O(1) additional space, assuming that the only valid operations on the array is to read elements, and to swap pairs of elements.
First note that the specification of the problem excludes the possibility of the array containing duplicates: it contains half of the numbers from 1 to 2N.
We perform a quick-select type algorithm. Start with m=1, M=2N+1, and pivot the array on (m + M)/2. If the size of the left part of the array (elements <= (m+M)/2) is less than (m + M)/2 - m + 1, then the first missing number must be there. Otherwise, it must be in the right part of the array. Repeat on the left or right side accordingly until you find the missing number.
The size of the slice of the array under consideration halves each time and pivoting an array of size n can be done in O(n) time and O(1) space. So overall, the time complexity is 2N + N + N/2 + ... + 1 <= 4N = O(N).
An implementation of Paul Hankin's idea in C++
#include <iostream>
using namespace std;

const int MAX = 1000;
int a[MAX];
int n;

void swap(int &a, int &b) {
    int tmp = a;
    a = b;
    b = tmp;
}

// Rearranges elements of a[l..r] in such a way that first come elements
// lower or equal to M, next come elements greater than M. Elements in each group
// come in no particular order.
// Returns an index of the first element among a[l..r] which is greater than M.
int rearrange(int l, int r, int M) {
    int i = l, j = r;
    while (i <= j)
        if (a[i] <= M) i++;
        else swap(a[i], a[j--]);
    return i;
}

int main() {
    cin >> n;
    for (int i = 0; i < n; i++) cin >> a[i];
    int L = 1, R = 2 * n;
    int l = 0, r = n - 1;
    while (L < R) {
        int M = (L + R) / 2; // pivot element
        int m = rearrange(l, r, M);
        if (m - l == M - L + 1)
            l = m, L = M + 1;
        else
            r = m - 1, R = M;
    }
    cout << L;
    return 0;
}

Algorithm for selecting closest pairs using Dynamic Programming

I have been trying to solve this problem that my professor gave me, but I couldn't come up with a proper solution. The following is the problem:
Problem:
A rectangular circuit board has two parallel sides with width W between them. There are m terminals on the upper side of the board and n terminals (n < m) on the lower side. Let U[1] < U[2] < … < U[m] be the distances from the left end of the board to the m terminals on the upper side, respectively. Let L[1] < L[2] < … < L[n] be the distances from the left end of the board to the n terminals on the lower side, respectively. Now, we need to select n terminals from the m terminals on the upper side to be connected to the n terminals on the lower side by n straight line segments, respectively, such that the total length of the n line segments is minimized. The following figure illustrates the problem for m = 8 and n = 4.
(a) Prove that, in an optimal solution, any two line segments will not intersect.
(b) Design an O(mn) dynamic programming algorithm to solve this minimization problem. You need to define sub-problems, show the inductive formula, initial conditions, and pseudocode. You can use d(i, j) to denote the distance between U[i] and L[j], 1 ≤ i ≤ m, 1 ≤ j ≤ n. (The calculation of d(i, j) = sqrt(W^2 + (U[i] - L[j])^2) can be omitted.)
My Approach:
For the above problem, my approach was first to build a matrix d(i, j), where i indexes the terminals on the bottom and j the terminals on the top, holding the distance between every pair of terminals. Then, iterating through each row, I would find the smallest distance and mark the respective terminal. But I am not sure this would work if the top terminals are all at the extreme right of the board, so can anyone provide me with a better approach?
I have written a recursive dynamic programming solution that uses memoisation; the complexity is O(mn). At each recursive level we either join the current point of the U[] array with the current point of the L[] array, or we move forward in U[] without doing so:
#include <iostream>
#define INF 1e9
using namespace std;

int n, m, d[100][100], dp[100][100];

int solve(int idx1, int idx2) {
    if (idx1 > m) {
        // Ran out of upper terminals: valid only if every lower terminal is matched.
        if (idx2 <= n) return INF;
        else return 0;
    }
    if (idx2 > n) return 0;
    if (dp[idx1][idx2] != -1) return dp[idx1][idx2];
    int v1, v2;
    // include current: connect U[idx1] with L[idx2]
    v1 = solve(idx1 + 1, idx2 + 1) + d[idx1][idx2];
    // do not include current: skip U[idx1]
    v2 = solve(idx1 + 1, idx2);
    return dp[idx1][idx2] = min(v1, v2);
}

int main() {
    // read m, n and the distance matrix d[i][j] (input format assumed)
    cin >> m >> n;
    for (int i = 1; i <= m; i++)
        for (int j = 1; j <= n; j++)
            cin >> d[i][j];
    for (int i = 0; i < 100; i++)
        for (int j = 0; j < 100; j++)
            dp[i][j] = -1;
    cout << solve(1, 1) << endl;
    return 0;
}
For part (a) of your question: assume that in an optimal solution two line segments intersect. If we swap their two endpoints on the lower side (the ones defined by the L[] array), the crossing disappears and, by the triangle inequality, the total length does not increase (and strictly decreases when the segments properly cross). That contradicts optimality, so in an optimal solution no two line segments intersect.
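In symbols (my notation): suppose segments U[i]L[p] and U[j]L[q] cross at a point X, and write |AB| for the Euclidean distance between points A and B. Since X lies on both original segments,
d(i,p) + d(j,q) = (|U[i]X| + |X L[p]|) + (|U[j]X| + |X L[q]|)
                = (|U[i]X| + |X L[q]|) + (|U[j]X| + |X L[p]|)
                >= d(i,q) + d(j,p),
so replacing the crossing pair with the uncrossed pair never increases the total length.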

Generating M distinct random numbers (one at a time) from a given range 0..N-1 in less than O(M) memory

Is there any method to do this?
I mean, we cannot even work with an array of {0, 1, ..., N-1} (because that is at least O(N) memory).
M can be equal to N. N can be greater than 2^64. The result should be uniformly random, and ideally every possible sequence should be producible (but that is not required).
Also, full-range PRNGs (and friends) aren't suitable, because they will give the same sequence each time.
Time complexity doesn't matter.
If you don't care what order the random selection comes out in, then it can be done in constant memory. The selection comes out in order.
The answer hinges on estimating the probability that the smallest value in a random selection of M distinct values of the set {0, ..., N-1} is i, for each possible i. Call this value p(i, M, N). With more mathematics than I have the patience to type into an interface which doesn't support LaTeX, you can derive some pretty good estimates for the p function; here, I'll just show the simple, non-time-efficient approach.
Let's just focus on p(0, M, N), which is the probability that a random selection of M out of N objects will include the first object. Then we can iterate through the objects (that is, the numbers 0...N-1) one at a time, deciding for each one whether it is included or not by flipping a weighted coin. We just need to compute the coin's weight for each flip.
By definition, there are C(N, M) possible M-selections of a set of N objects. Of these, C(N-1, M) do not include the first element. (That's the count of M-selections of N-1 objects, which is all the M-selections of the set with one element removed.) Similarly, C(N-1, M-1) selections do include the first element (that is, all the (M-1)-selections of the (N-1)-set, with the first element added to each selection).
These two values add up to C(N, M); that's the well-known recursive formula for computing binomial coefficients.
So p(0, M, N) is just C(N-1, M-1) / C(N, M). Since C(N, M) = N! / (M! * (N-M)!), that fraction simplifies to M/N. As expected, if M == N, that works out to 1 (a selection of N out of N objects must include every object).
So now we know the probability that the first object will be in the selection. We can then reduce the size of the set, and either reduce the remaining selection size or not, depending on whether the coin flip determined that we did or did not include the first object. So here's the final algorithm, in pseudo-code, based on the existence of the weighted random boolean function:
w(x, y) => true with probability x / y; otherwise false.
I'll leave the implementation of w for the reader, since it's trivial.
So:
Generate a random M-selection from the set 0...N-1
Parameters: M, N
Set i = 0
while M > 0:
    if w(M, N):
        output i
        M = M - 1
    N = N - 1
    i = i + 1
It might not be immediately obvious that that works, but note that:
the output i statement must be executed exactly M times, since it is coupled with a decrement of M, and the while loop executes until M is 0
The closer M gets to N, the higher the probability that M will be decremented. If we ever get to the point where M == N, then both will be decremented in lockstep until they both reach 0.
i is incremented exactly when N is decremented, so it must always be in the range 0...N-1. In fact, it's redundant; we could output N-1 instead of outputting i, which would change the algorithm to produce sets in decreasing order instead of increasing order. I didn't do that because I think the above is easier to understand.
The time complexity of that algorithm is O(N+M) which must be O(N). If N is large, that's not great, but the problem statement said that time complexity doesn't matter, so I'll leave it there.
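Here is a small C++ sketch of that loop (my illustration, using a 64-bit generator; the original question allows N larger than 2^64, which would require a big-integer type and a matching random source):
// Selection sampling: include i with probability M / N, then shrink N.
#include <cstdint>
#include <iostream>
#include <random>
using namespace std;

int main() {
    uint64_t N = 100, M = 10;              // illustrative sizes
    mt19937_64 gen(random_device{}());

    for (uint64_t i = 0; M > 0; ++i, --N) {
        // w(M, N): true with probability M / N
        if (uniform_int_distribution<uint64_t>(0, N - 1)(gen) < M) {
            cout << i << "\n";             // i is selected; output is in increasing order
            --M;
        }
    }
}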
PRNGs that don't map their state space to a lower number of bits for output should work fine. Examples include Linear Congruential Generators and Tausworthe generators. They will give the same sequence if you use the same seed to start them, but that's easy to change.
Brute force:
If time complexity doesn't matter, the following would be a solution for the invariant 0 < M <= N. nextRandom(N) is a function which returns a random integer in [0..N):
init() {
    for (int idx = 0; idx < N; idx++) {
        a[idx] = -1;
    }
    for (int idx = 0; idx < M; idx++) {
        getNext();
    }
}

int getNext() {
    for (int idx = 1; idx < M; idx++) {
        a[idx - 1] = a[idx];
    }
    while (true) {
        r = nextRandom(N);
        idx = 0;
        while (idx < M && a[idx] != r) idx++;
        if (idx == M) {
            a[idx - 1] = r;
            return r;
        }
    }
}
O(M) solution: it is written recursively for simplicity. It assumes nextRandom(n) returns a random integer in [0..n) as above, and nextRandom() returns a random real number in [0..1):
rnd(0, 0, N, M); // to get the next M distinct random numbers

int rnd(int idx, int n1, int n2, int m) {
    if (n1 >= n2 || m <= 0) return idx;
    int r = nextRandom(n2 - n1) + n1;
    int m1 = (int) ((m - 1.0) * (r - n1) / (n2 - n1) + nextRandom()); // gives [0..m-1]
    int m2 = m - m1 - 1;
    idx = rnd(idx, n1, r, m1);            // left half-open sub-range [n1..r)
    print r;
    return rnd(idx + 1, r + 1, n2, m2);   // right sub-range [r+1..n2)
}
The idea is to select a random r in [0..N) in the first step, which splits the range into two sub-ranges with N1 and N2 elements respectively (N1 + N2 == N - 1). We then repeat the same step for [0..r), which has N1 elements, and for [r+1..N), which has N2 elements, choosing M1 and M2 (M1 + M2 == M - 1) so that M1/M2 == N1/N2. M1 and M2 must be integers, but the proportion can give non-integer results, so we round the values probabilistically (1.2 becomes 1 with p = 0.8 and 2 with p = 0.2, etc.).

maximum sum of a subset of size K with sum less than M

Given:
array of integers
values K, M
Question:
Find the maximum sum which we can obtain from all K-element subsets of the given array such that the sum is less than the value M.
Is there a non-dynamic-programming solution available for this problem, or can this type of problem only be solved with a dp[i][j][k] approach? Can you please explain the algorithm?
Many people have commented correctly that the answer below from years ago, which uses dynamic programming, incorrectly encodes solutions allowing an element of the array to appear in a "subset" multiple times. Luckily there is still hope for a DP based approach.
Let dp[i][j][k] = true if there exists a size k subset of the first i elements of the input array summing up to j
Our base case is dp[0][0][0] = true
Now, either the size k subset of the first i + 1 elements uses a[i + 1], or it does not, giving the recurrence
dp[i + 1][j][k] = dp[i][j - a[i + 1]][k - 1] OR dp[i][j][k]
Put everything together:
given A[1...N]
initialize dp[0...N][0...M][0...K] to false
dp[0][0][0] = true

for i = 0 to N - 1:
    for j = 0 to M:
        for k = 0 to K:
            if dp[i][j][k]:
                dp[i + 1][j][k] = true
            if j >= A[i + 1] and k >= 1 and dp[i][j - A[i + 1]][k - 1]:
                dp[i + 1][j][k] = true

max_sum = 0
for j = 0 to M:
    if dp[N][j][K]:
        max_sum = j
return max_sum
giving O(NMK) time and space complexity.
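As an illustration, here is a direct C++ translation of that pseudocode (my sketch; the array, M, and K are made up, A is 0-indexed so A[i] plays the role of A[i + 1] above, and all elements are assumed non-negative, as discussed next):
// 3D subset DP: dp[i][j][k] = some k-element subset of the first i elements sums to j.
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> A = {3, 5, 2, 7};          // illustrative input
    int N = A.size(), M = 10, K = 2;

    vector<vector<vector<bool>>> dp(
        N + 1, vector<vector<bool>>(M + 1, vector<bool>(K + 1, false)));
    dp[0][0][0] = true;

    for (int i = 0; i < N; ++i)
        for (int j = 0; j <= M; ++j)
            for (int k = 0; k <= K; ++k) {
                if (dp[i][j][k])
                    dp[i + 1][j][k] = true;                            // skip element i
                if (j >= A[i] && k >= 1 && dp[i][j - A[i]][k - 1])
                    dp[i + 1][j][k] = true;                            // take element i
            }

    int max_sum = 0;
    for (int j = 0; j <= M; ++j)
        if (dp[N][j][K]) max_sum = j;
    cout << max_sum << endl;               // prints 10 here (the pair 3 + 7)
}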
Stepping back, we've made one assumption here implicitly which is that A[1...i] are all non-negative. With negative numbers, initializing the second dimension 0...M is not correct. Consider a size K subset made up of a size K - 1 subset with sum exceeding M and one other sufficiently negative element of A[] such that overall sum no longer exceeds M. Similarly, our size K - 1 subset could sum to some extremely negative number and then with a sufficiently positive element of A[] sum to M. In order for our algorithm to still work in both cases we would need to increase the second dimension from M to the difference between the sum of all positive elements in A[] and the sum of all negative elements (the sum of the absolute values of all elements in A[]).
As for whether a non-dynamic-programming solution exists: certainly there is the naive exponential-time brute force solution, and variations that optimize the constant factor in the exponent.
Beyond that? Well, your problem is closely related to subset sum, and the literature for the big-name NP-complete problems is rather extensive. And as a general principle, algorithms can come in all shapes and sizes -- it's not impossible to imagine using, say, randomization, approximation (just choose the error parameter to be sufficiently small!), or plain old reductions to other NP-complete problems (convert your problem into a giant boolean circuit and run a SAT solver). Yes, these are different algorithms. Are they faster than a dynamic programming solution? Some of them, probably. Are they as simple to understand or implement, without training beyond standard introduction-to-algorithms material? Probably not.
This is a variant of the knapsack or subset-sum problem, where, in terms of time (at the cost of exponentially growing space requirements as the input size grows), dynamic programming is the most efficient method that CORRECTLY solves this problem. See "Is this variant of the subset sum problem easier to solve?" for a similar question to yours.
However, since your problem is not exactly the same, I'll provide an explanation anyway. Let dp[i][j] = true if there is a subset of size i that sums to j, and false if there isn't. The idea is that dp[][] will encode the sums of all possible subsets of every possible size. We can then simply find the largest j <= M such that dp[K][j] is true. Our base case is dp[0][0] = true, because we can always make a subset that sums to 0 by picking the empty subset.
The recurrence is also fairly straightforward. Suppose we've calculated the values of dp[][] using the first n values of the array. To find all possible subsets of the first n+1 values, we can simply take the (n+1)-th value and add it to all the subsets we've seen before. More concretely, we have the following code:
initialize dp[0..K][0..M] to false
dp[0][0] = true

for i = 0 to N:
    for s = 0 to K - 1:
        for j = M to 0:
            if dp[s][j] && A[i] + j < M:
                dp[s + 1][j + A[i]] = true

for j = M to 0:
    if dp[K][j]:
        print j
        break
We're looking for a subset of K elements for which the sum of the elements is a maximum, but less than M.
We can place bounds [X, Y] on the largest element in the subset as follows.
First we sort the N integers, values[0] ... values[N-1], so that values[0] is the smallest.
The lower bound X is the largest integer for which
values[X] + values[X-1] + ... + values[X-(K-1)] < M.
(If X is N-1, then we've found the answer.)
The upper bound Y is the largest integer less than N for which
values[0] + values[1] + ... + values[K-2] + values[Y] < M.
With this observation, we can now bound the second-highest term for each value of the highest term Z, where
X <= Z <= Y.
We can use exactly the same method, since the form of the problem is exactly the same. The reduced problem is finding a subset of K-1 elements, taken from values[0] ... values[Z-1], for which the sum of the elements is a maximum, but less than M - values[Z].
Once we've bound that value in the same way, we can put bounds on the third-largest value for each pair of the two highest values. And so on.
This gives us a tree structure to search, hopefully with much fewer combinations to search than N choose K.
Felix is correct that this is a special case of the knapsack problem. His dynamic programming algorithm takes O(K*M) size and O(K*K*M) amount of time. I believe his use of the variable N really should be K.
There are two books devoted to the knapsack problem. The latest one, by Kellerer, Pferschy and Pisinger [2004, Springer-Verlag, ISBN 3-540-40286-1] gives an improved dynamic programming algorithm on their page 76, Figure 4.2 that takes O(K+M) space and O(KM) time, which is huge reduction compared to the dynamic programming algorithm given by Felix. Note that there is a typo on the book's last line of the algorithm where it should be c-bar := c-bar - w_(r(c-bar)).
My C# implementation is below. I cannot say that I have extensively tested it, and I welcome feedback on this. I used BitArray to implement the concept of the sets given in the algorithm in the book. In my code, c is the capacity (which in the original post was called M), and I used w instead of A as the array that holds the weights.
An example of its use is:
int[] optimal_indexes_for_ssp = new SubsetSumProblem(12, new List<int> { 1, 3, 5, 6 }).SolveSubsetSumProblem();
where the array optimal_indexes_for_ssp contains [0,2,3] corresponding to the elements 1, 5, 6.
using System;
using System.Collections.Generic;
using System.Collections;
using System.Linq;

public class SubsetSumProblem
{
    private int[] w;
    private int c;

    public SubsetSumProblem(int c, IEnumerable<int> w)
    {
        if (c < 0) throw new ArgumentOutOfRangeException("Capacity for subset sum problem must be at least 0, but input was: " + c.ToString());
        int n = w.Count();
        this.w = new int[n];
        this.c = c;
        IEnumerator<int> pwi = w.GetEnumerator();
        pwi.MoveNext();
        for (int i = 0; i < n; i++, pwi.MoveNext())
            this.w[i] = pwi.Current;
    }

    public int[] SolveSubsetSumProblem()
    {
        int n = w.Length;
        int[] r = new int[c + 1];
        BitArray R = new BitArray(c + 1);
        R[0] = true;
        BitArray Rp = new BitArray(c + 1);
        for (int d = 0; d <= c; d++) r[d] = 0;
        for (int j = 0; j < n; j++)
        {
            Rp.SetAll(false);
            for (int k = 0; k <= c; k++)
                if (R[k] && k + w[j] <= c) Rp[k + w[j]] = true;
            for (int k = w[j]; k <= c; k++) // since Rp[k] = false for k < w[j]
                if (Rp[k])
                {
                    if (!R[k]) r[k] = j;
                    R[k] = true;
                }
        }
        int capacity_used = 0;
        for (int d = c; d >= 0; d--)
            if (R[d])
            {
                capacity_used = d;
                break;
            }
        List<int> result = new List<int>();
        while (capacity_used > 0)
        {
            result.Add(r[capacity_used]);
            capacity_used -= w[r[capacity_used]];
        }
        if (capacity_used < 0) throw new Exception("Subset sum program has an internal logic error");
        return result.ToArray();
    }
}
