Given a (finite) set of intervals [a_i, b_i], where a_i <= b_i are integers, I'd like an algorithm to compute a minimal (in cardinality) set C of integers that intersects each interval.
In case the interval notation above is distracting: this problem is only about integers, not about non-integer numbers.
If this is a known problem, even an NP-complete one, that would be useful to know.
Here is an O(n log n) solution.
The first step is to sort the intervals by the value of their right endpoint b.
Then, for each interval taken in this order, its b value is added as a point, but only if the interval is not already intersected by a previously selected point. This is checked by comparing its a value with the last selected point.
This second step has complexity O(n), so the overall complexity is dominated by the O(n log n) sorting step.
Here is a simple C++ implementation of the algorithm.
Output:
Intervals before sorting: [2, 4] [0, 3] [6, 7] [3, 3] [3, 5]
Intervals after sorting: [0, 3] [3, 3] [2, 4] [3, 5] [6, 7]
set of points: 3 7
#include <iostream>
#include <vector>
#include <algorithm>
#include <string>

struct Interval {
    int a, b;
    friend std::ostream& operator<< (std::ostream& os, const Interval& x) {
        os << '[' << x.a << ", " << x.b << ']';
        return os;
    }
    friend bool operator< (const Interval& v1, const Interval& v2) { return v1.b < v2.b; }
};

template <typename T>
void print (const std::vector<T>& x, const std::string& str = "") {
    std::cout << str;
    for (const T& i: x) {
        std::cout << i << " ";
    }
    std::cout << "\n";
}

std::vector<int> min_intersection (std::vector<Interval>& interv) {
    std::vector<int> points;
    std::sort (interv.begin(), interv.end());
    print (interv, "Intervals after sorting: ");
    if (interv.size() == 0) return points;
    int last_point = interv[0].a - 1;
    for (auto& seg: interv) {
        if (seg.a <= last_point) continue;
        last_point = seg.b;
        points.push_back (last_point);
    }
    return points;
}

int main() {
    std::vector<Interval> interv = {{2, 4}, {0, 3}, {6, 7}, {3, 3}, {3, 5}};
    print (interv, "Intervals before sorting: ");
    auto points = min_intersection (interv);
    print (points, "set of points: ");
    return 0;
}
Well this is off the top of my head, but it looks reasonable to me.
We have a set Is of 'intervals'. Each I in Is is { n in N | a <= n <= b }. We want a set C of integers such that for every I in Is there is a c in C with c in I, and indeed we want a C of minimal cardinality.
a/ Suppose I in Is is [a,a]. Then we must have a in C.
So let
C0 = {a in N | [a,a] in Is}
Is1 = { I in Is | for all c in C0, c not in I }
If we can find a solution C1 for Is1, C0 union C1 is a solution for Is
b/ Let I be in Is and suppose
Js = { J in Is | J != I and I subset J }.
Then we can throw away all the Js. For if C is a solution for Is\Js then, since there is a c in C with c in I, we also have c in K for every K in Js, so C is a solution for Is.
To implement this, sort Is by a then b. Suppose I is the least element, and J its successor.
If a(I)==a(J) then we must have b(I)<=b(J) and so I subset J, and we throw away J, and move on to consider I and the successor of J
If a(I)<a(J) and b(J)<=b(I) then J subset I and we throw away I, and move on to consider J and its successor.
After this we have that for each I in Is with successor J,
a(I) < a(J) and b(I) < b(J).
If Is is now just one interval, we choose any element of that unique interval and stop.
Let I be the least interval in Is, and let
Js = { J in Is | a(J)<=b(I)}.
If Js is empty we choose any element of I, remove I from Is and continue.
Otherwise let k = b(I). Then k in I; if J in Js then
a(J) <= b(I) < b(J)
so k in J. Moreover, for J != I with J not in Js, I and J are disjoint, so no element of I can be in more elements of Is than k is.
We add k to C, remove I and the Js from Is, and continue.
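For illustration, here is a minimal C++ sketch of this greedy (my own code, not the answerer's; the name stab_points and the pair representation are assumptions). It folds the two phases above into a single left-to-right sweep: sorting by a and keeping the smallest b of the current "group" makes the superset-dropping implicit, and committing that smallest b as a stabbing point is exactly the "choose k = b(I) and remove the Js" step.

#include <algorithm>
#include <utility>
#include <vector>

// Returns a minimum set of stabbing points for closed integer intervals (a, b) with a <= b.
std::vector<int> stab_points(std::vector<std::pair<int, int>> iv) {
    std::vector<int> points;
    if (iv.empty()) return points;
    std::sort(iv.begin(), iv.end());              // sort by a, then by b
    int cur = iv[0].second;                       // smallest right endpoint of the current group
    for (std::size_t i = 1; i < iv.size(); ++i) {
        if (iv[i].first > cur) {                  // current group cannot be stabbed any further right
            points.push_back(cur);                // k = b(I) for the earliest-ending interval of the group
            cur = iv[i].second;                   // start a new group
        } else {
            cur = std::min(cur, iv[i].second);    // overlapping interval: keep the smallest right endpoint
        }
    }
    points.push_back(cur);
    return points;
}

For the example from the first answer, stab_points({{2, 4}, {0, 3}, {6, 7}, {3, 3}, {3, 5}}) yields {3, 7}.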
Given a set S, for each of its non-empty subsets, find the smallest and largest elements and take their bitwise OR. Find the sum of these ORs across all such subsets.
For example: S = {1, 2, 3}, then subsets
{1} smallest=1 largest=1 OR=1
{2} smallest=2 largest=2 OR=2
{3} smallest=3 largest=3 OR=3
{1, 2} smallest=1 largest=2 OR=3
{2, 3} smallest=2 largest=3 OR=3
{1, 3} smallest=1 largest=3 OR=3
{1, 2, 3} smallest=1 largest=3 OR=3
Answer is 18.
I have read How to find Sum of differences of maximum and minimum of all possible subset of an array but am not able to use that logic here.
Algorithm
Sort the input data
Loop over i = 0 to n-1, where n is the length of the input, and j = i to n-1. Since the input is sorted, input[i] will be the smallest and input[j] the largest element in the range [i, j].
Now that we know that input[i] is the lowest and input[j] the largest, we also know that there are j - i - 1 middle elements whose combinations all give the same lowest and largest values, hence we multiply the OR of the low and high by the total number of subsets that can be formed from these middle numbers.
For example, for input = [1, 2, 3, 4] with i = 0 and j = 3 (i.e. lowest = 1 and largest = 4), we know the elements [2, 3] can appear in the subsets without changing the lowest and largest value: [1, 4], [1, 2, 4], [1, 3, 4], [1, 2, 3, 4] are all valid. The number of subsets possible with the middle elements is 2 ^ (count of middle elements).
Repeat this for every lowest and largest pair.
Here is the code in C++.
#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

int main() {
    vector<int> input {3, 2, 1};
    sort(input.begin(), input.end());
    int answer = 0;
    for (int i = 0; i < input.size(); ++i)
    {
        for (int j = i; j < input.size(); ++j)
        {
            int elements = (j - i) - 1;
            int multiple = elements > 0 ? (1 << elements) : 1;
            answer += ((input[i] | input[j]) * multiple);
            cout << input[i] << ' ' << input[j] << ' ' << answer << endl;
        }
        cout << endl;
    }
    cout << answer << endl;
}
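As a quick sanity check on small inputs (my own addition, not part of the answer above), here is a brute-force sketch that enumerates every non-empty subset with a bitmask, ORs the minimum and maximum of each, and sums the results; for {1, 2, 3} it prints 18, matching the worked example in the question.

#include <algorithm>
#include <climits>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> s = {1, 2, 3};
    long long sum = 0;
    // Enumerate all non-empty subsets; bit i of mask selects s[i].
    for (unsigned mask = 1; mask < (1u << s.size()); ++mask) {
        int lo = INT_MAX, hi = INT_MIN;
        for (std::size_t i = 0; i < s.size(); ++i) {
            if (mask & (1u << i)) {
                lo = std::min(lo, s[i]);
                hi = std::max(hi, s[i]);
            }
        }
        sum += (lo | hi);
    }
    std::cout << sum << "\n";   // 18 for {1, 2, 3}
    return 0;
}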
I want to generate all r-long combinations of a set A that contain at least one element from a second set B. For example, if A={0,1,2,3,4}, r=3 and B={1,4}, the result would be:
[0, 1, 2]
[0, 1, 3]
[0, 1, 4]
[0, 2, 4]
[0, 3, 4]
[1, 2, 3]
[1, 2, 4]
[1, 3, 4]
[2, 3, 4]
That's all the r-long combinations of A, excluding [0, 2, 3], because that one doesn't contain either 1 or 4.
The solution I currently have is the following, using the fastest algorithm for generating ordinary combinations that I know of, and simply checking whether each generated combination also contains an element of B (Java):
int[] A = new int[]{0,1,2,3,4};
int[] B = new int[]{1,4};
int n = A.length;
int r = 3;

int[] picks = new int[r]; // Holds indexes of elements in A
for (int i = 0; i < picks.length; i++)
    picks[i] = i;
int lastindex = picks.length - 1;

outer:
while (true) {
    int at = lastindex;
    while (true) {
        picks[at] += 1;
        if (picks[at] < n) {
            int displacement = picks[at] - at; // at + displacement = picks[at], at + displacement + 1 = picks[at] + 1, ...
            // Make all picks elements after at y = picks[at] + x, so picks={0, 2, 4, 6, 18, 30} & at=3 --> picks={0, 2, 4, 5, 6, 7}
            // (Note that this example will never take place in reality, because the 18 or the 30 would be increased instead, depending on what n is)
            // Do the last one first, because that one is going to be the biggest,
            picks[lastindex] = lastindex + displacement;
            if (picks[lastindex] < n) { // and check if it doesn't overflow
                for (int i = at + 1; i < lastindex; i++)
                    picks[i] = i + displacement;
                int[] combination = new int[r];
                for (int i = 0; i < r; i++)
                    combination[i] = A[picks[i]];
                System.out.print(Arrays.toString(combination));
                // ^With this, all r-long combinations of A get printed

                // Straightforward, bruteforce-ish way of checking if int[] combination
                // contains any element from B
                presence:
                for (int p : combination) {
                    for (int b : B) {
                        if (p == b) {
                            System.out.print(" <-- Also contains an element from B");
                            break presence;
                        }
                    }
                }
                System.out.println();
                break;
            }
        }
        at--;
        if (at < 0) {
            // Moving this check to the start of the while loop will make this natively support pick 0 cases (5C0 for example),
            // but reduce performance by I believe quite a bit. Probably better to special-case those (I haven't
            // done that in this test tho)
            break outer;
        }
    }
}
Output:
[0, 1, 3] <-- Also contains an element from B
[0, 1, 4] <-- Also contains an element from B
[0, 2, 3]
[0, 2, 4] <-- Also contains an element from B
[0, 3, 4] <-- Also contains an element from B
[1, 2, 3] <-- Also contains an element from B
[1, 2, 4] <-- Also contains an element from B
[1, 3, 4] <-- Also contains an element from B
[2, 3, 4] <-- Also contains an element from B
As written in the comments, I believe this method to be very rudimentary. Can anyone think of a faster way to do this?
Assuming you have an int[][] FindCombinations(int[] set, int length) function that returns a list of all the length-long combinations of set, do the following (pseudo-code):
for i = 1 to B.length
{
    int bi = B[i];
    A = A - bi; // remove bi from A
    foreach C in FindCombinations(A, r-1)
    {
        output C + bi // output the union of C and {bi}
    }
}
This way every generated combination contains at least one element from B (and may also contain elements of B that have not yet been used) without much extra work. All other combinations are eliminated at no cost (they don't have to be generated at all), and the per-combination test for an element of B is eliminated as well.
Whether this algorithm is faster greatly depends on how efficiently you can add/remove elements from a set and on the percentage of included vs. excluded combinations (i.e. if you only end up excluding 1% of the total combinations, it is probably not worth it).
Note that when getting the combinations to union with {B[i]}, these may also contain an element B[j] where j > i. By the time you get the combinations to union with B[j], none of them will contain B[i], so all combinations are unique.
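Here is a rough sketch of this scheme (in C++ rather than Java, but the idea carries over directly); the recursive helper combos() stands in for the assumed FindCombinations and is my own, not part of the answer.

#include <algorithm>
#include <iostream>
#include <vector>

// Appends to 'out' every k-combination of pool[start..], extending the partial pick 'cur'.
void combos(const std::vector<int>& pool, int k, std::size_t start,
            std::vector<int>& cur, std::vector<std::vector<int>>& out) {
    if (k == 0) { out.push_back(cur); return; }
    for (std::size_t i = start; i + k <= pool.size(); ++i) {
        cur.push_back(pool[i]);
        combos(pool, k - 1, i + 1, cur, out);
        cur.pop_back();
    }
}

int main() {
    std::vector<int> A = {0, 1, 2, 3, 4}, B = {1, 4};
    int r = 3;
    std::vector<int> pool = A;
    for (int b : B) {
        // A = A - bi: bi and every earlier element of B are gone from the pool,
        // so no combination is generated twice.
        pool.erase(std::remove(pool.begin(), pool.end(), b), pool.end());
        std::vector<std::vector<int>> out;
        std::vector<int> cur;
        combos(pool, r - 1, 0, cur, out);
        for (auto& c : out) {                 // output C + {bi}
            c.push_back(b);
            std::sort(c.begin(), c.end());
            for (int x : c) std::cout << x << ' ';
            std::cout << '\n';
        }
    }
    return 0;
}

For A = {0, 1, 2, 3, 4}, r = 3, B = {1, 4} this prints the same nine combinations listed in the question, just in a different order.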
I'm looking to explore different algorithms, both recursive and dynamic programming, that check whether arrayA is a subsequence of arrayB. For example,
arrayA = [1, 2, 3]
arrayB = [5, 6, 1, 7, 2, 9, 3]
thus, arrayA is indeed a subsequence of arrayB.
I've tried a few different searches, but all I can seem to find is algorithms to compute the longest increasing subsequence.
Since you must match all elements of arrayA to some elements of arrayB, you never need to backtrack. In other words, if there are two candidates in arrayB to match an element of arrayA, you can pick the earliest one, and never retract the choice.
Therefore, you do not need DP, because a straightforward linear greedy strategy will work:
bool isSubsequence(int[] arrayA, int[] arrayB) {
    int startIndexB = 0;
    foreach (int n in arrayA) {
        int next = indexOf(arrayB, startIndexB, n);
        if (next == NOT_FOUND) {
            return false;
        }
        startIndexB = next + 1;
    }
    return true;
}
As dasblinkenlight has correctly said (and I could not have phrased it better than his answer!), a greedy approach works absolutely fine. You could use the following pseudocode (with just a little more explanation, but essentially the same as what dasblinkenlight has written), which is similar to the merging of two sorted arrays.
A = {..}
B = {..}
j = 0, k = 0
/* j and k are the variables we use to traverse the arrays A and B respectively */

for (j = 0; j < A.size();) {
    /* Not all elements of A are present in B: we have reached
       the end of B and not all elements of A have been covered */
    if (k == B.size() && j < A.size()) {
        return false;
    }
    /* we increment both counters j and k because we have found a match */
    else if (A[j] == B[k]) {
        j++, k++;
    }
    /* we increment k in the hope that the next element may prove to be a match */
    else if (A[j] != B[k]) {
        k++;
    }
}
return true; /* because if we have reached this point of the code,
                we know that all elements of A are in B */
Time complexity is O(|A|+|B|) in the worst case, where |A| and |B| are the numbers of elements present in arrays A and B respectively. Thus you get linear complexity.
As #sergey mentioned earlier, there is no need to do backtracking in this case.
Here is just another Python version for the problem (time complexity: O(n) in the worst case):
>>> A = [1, 2, 3]
>>> B = [5, 6, 1, 7, 8, 2, 4, 3]
>>> def is_subsequence(A, B):
...     it = iter(B)
...     return all(x in it for x in A)
...
>>> is_subsequence(A, B)
True
>>> is_subsequence([1, 3, 4], B)
False
>>>
Here is an example in Ruby:
def sub_seq?(a_, b_)
  # Scan b_ once, advancing through a_ each time the next needed element is found.
  i = 0
  b_.each { |x| i += 1 if i < a_.length && a_[i] == x }
  i == a_.length
end
arrayA = [1, 2, 3]
arrayB = [5, 6, 1, 7, 2, 9, 3]
puts sub_seq?(arrayA, arrayB).inspect #=> true
Here is an example in GOLANG...
package main

import "fmt"

func subsequence(first, second []int) bool {
    k := 0
    for i := 0; i < len(first); i++ {
        j := k
        for ; j < len(second); j++ {
            if first[i] == second[j] {
                k = j + 1
                break
            }
        }
        if j == len(second) {
            return false
        }
    }
    return true
}

func main() {
    fmt.Println(subsequence([]int{1, 2, 3}, []int{5, 1, 3, 2, 4}))
}
I just did a TopCoder SRM where there was a question I had trouble solving. I have been searching online for details of the algorithm, but I can't seem to find them.
The question went roughly along these lines:
You have an array, for example [12, 10, 4].
Each round, you can subtract any permutation of [9, 3, 1] from the array.
Return the minimum number of rounds needed to get all numbers in the array to 0.
Any help?
Edit: Let me be somewhat more descriptive so that the question can be understood better.
One question would be
input: [12 10 4]
output: 2 (minimum rounds)
How it would work:
[12 10 4] - [9 3 1] = [3 7 3]
[12 10 4] - [9 1 3] = [3 9 1]
[12 10 4] - [3 9 1] = ..
[12 10 4] - [3 1 9] = ..
[12 10 4] - [1 3 9] =
[12 10 4] - [1 9 3] =
[3 7 3] - [3 9 1] = ...
..
[9 1 3] - [9 1 3] = [0 0 0] <-- achieved with only two permutation subtractions
Edit2:
Here is the topcoder question:
http://community.topcoder.com/stat?c=problem_statement&pm=13782
Edit3:
Can anyone also explain the Overlapping Subproblems and Optimal Substructure properties, if they believe the solution uses dynamic programming?
EDIT:
After seeing the original question, here is a solution which gave the correct output for every test case provided in the question link above. If you want to give the input {60}, you should give it as {60, 0, 0}.
The basic idea is that you need to get all of the numbers less than or equal to zero in the end, so if a number is divisible by 3 you can get it to zero by subtracting 9s or 3s, and if it is not divisible, subtract 1 to make it divisible by 3.
First, sort the given triplet, then:
Check whether the largest number is divisible by 3; if so, subtract 9, and it remains divisible by 3.
Now check the next largest number; if it is divisible by 3, subtract 3, else subtract 1.
If the largest number is not divisible by 3, subtract 1 so that it may become divisible by 3.
Then subtract 9 from the next largest number and 3 from the other one.
Here is the implementation:
#include <iostream>
#include <algorithm>
#include <cmath>
using namespace std;

int f(int* a, int k) {
    sort(a, a + 3);
    if (a[2] <= 0) {
        cout << k << endl;
        return 0;
    }
    if (a[1] <= 0) {
        cout << k + ceil((double)a[2] / 9) << endl;
        return 0;
    }
    if (a[2] % 3 == 0) {
        a[2] = a[2] - 9;
        if (a[1] % 3 == 0) {
            a[1] = a[1] - 3;
            a[0] = a[0] - 1;
        }
        else {
            if (a[2] % 2 == 0 && (a[1] - 1) % 2 == 0 && a[1] != 0) {
                a[2] = a[2] + 9 - 3;
                a[1] = a[1] - 9;
                a[0] = a[0] - 1;
            }
            else {
                a[1] = a[1] - 1;
                a[0] = a[0] - 3;
            }
        }
        return f(a, ++k);
    }
    else {
        a[2] = a[2] - 1;
        a[1] = a[1] - 9;
        a[0] = a[0] - 3;
        return f(a, ++k);
    }
}

int main() {
    int a[] = {54, 18, 6};
    f(a, 0);
    return 0;
}
hope this is helpful
Example:
input is [12,4,10]
sort it [12,4,10] -> [12,10,4]
now check if largest number is divisible by 3, here Yes
so
[12-9,10,4] -> [3,10,4]
now check whether the next largest number is divisible by 3, here No
so
[3,10-1,4-3] ->[3,9,1]
now increment count and pass this to function (recursive)
input -> [3,9,1]
sort [3,9,1] -> [9,3,1]
now check if largest number is divisible by 3, here Yes
so [9-9,3,1] -> [0,3,1]
now check next largest number divisible by 3, here Yes
so [0,3-3,1-1] -> [0,0,0]
now increment the count and pass the array
as the largest element is 0, we print the count, which is 2
Let the number of elements be n, let p1 to pn! be the permutations of the array you want to subtract, and let A be your original array. Then you want the natural-number solutions of the linear system
A - k1*p1 - ... - kn!*pn! = 0
which is equivalent to
A = k1*p1 + ... + kn!*pn!
where 0 is the n-item array with all zeroes. You can't figure that out using the obvious linear algebra solution, since ℕ^n is not an ℕ-vector space; in fact, finding solutions to linear systems over the natural numbers is NP-complete in general, by a reduction from subset sum. I couldn't adapt the reduction to your version of the problem ad hoc, so maybe think about that for a bit. It should remain NP-hard, at least that's what I would expect.
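To make the objective explicit (my restatement, not part of the original answer): the minimum number of rounds asked for in the question is the optimum of the integer program

\min \sum_{j=1}^{n!} k_j \quad \text{subject to} \quad \sum_{j=1}^{n!} k_j \, p_j = A, \qquad k_j \in \mathbb{N},

which is why the hardness of solving such systems over the natural numbers is the relevant obstacle here.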
It can be solved using dynamic programming. If you look at common solutions to dynamic programming problems, they follow the same general structure. We use an array to store the minimum number of permutations needed to reach each value, and update each value in turn using nested loops. The answer will end up in d[0][0][0] which is the number of permutations required to get to [0, 0, 0].
public static int count(int a, int b, int c) {
    int[][] permutations = {{9, 3, 1}, {9, 1, 3}, {1, 9, 3}, {1, 3, 9}, {3, 9, 1}, {3, 1, 9}};
    int[][][] d = new int[a + 1][b + 1][c + 1];

    // Set initial values to high value to represent no solution.
    for (int x = 0; x <= a; x++) {
        for (int y = 0; y <= b; y++) {
            for (int z = 0; z <= c; z++) {
                d[x][y][z] = Integer.MAX_VALUE / 2;
            }
        }
    }

    // Set number of permutations for initial value to 0.
    d[a][b][c] = 0;

    // Update all values.
    for (int x = a; x >= 0; x--) {
        for (int y = b; y >= 0; y--) {
            for (int z = c; z >= 0; z--) {
                for (int[] p : permutations) {
                    // Update count from [x, y, z] -> [nx, ny, nz] using permutation p.
                    int nx = x - p[0];
                    int ny = y - p[1];
                    int nz = z - p[2];
                    if (nx >= 0 && ny >= 0 && nz >= 0) {
                        d[nx][ny][nz] = Math.min(d[nx][ny][nz], d[x][y][z] + 1);
                    }
                }
            }
        }
    }

    // Return final answer.
    return d[0][0][0];
}