Suppose these two sets are given as input:
One set U as universe
And one set S containing some of the subsets of U.
The members of S are assigned random flags 0 or 1: each member of S independently gets flag 1 with probability p and flag 0 with probability (1-p).
The desired output is the probability that the union of the flag-1 subsets in S equals U.
Considering all possible combinations of the flag-1 subsets in S is the trivial algorithm that leads to the output, but the running time of this brute-force method is obviously exponential.
Is there any polynomial time algorithm which leads to the exact or approximate output? Or can we reduce the problem to any famous one like set-cover?
Getting an exact answer is #P-hard (#P is the counting analog of NP, thus at least as hard), since this problem generalizes counting satisfying assignments of monotone 2-CNF formulas, which is known to be #P-hard (Welsh, Dominic; Gale, Amy (2001), "The complexity of counting problems", Aspects of Complexity: Minicourses in Algorithmics, Complexity and Computational Algebra (Kaikoura, January 7–15, 2000), pp. 115ff, Theorem 57). The reduction: take U to be the set of clause identifiers, and for each variable let the corresponding subset in S be the set of clauses in which that variable appears. EDIT: set p = 1/2 for each set, natch.
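Given the hardness result, an approximate answer via Monte Carlo sampling is a natural fallback. A minimal Python sketch, assuming independent flags as described in the question (the function name and trial count are my own choices):

```python
import random

def coverage_probability(U, S, p, trials=100_000):
    """Estimate P(union of the flag-1 subsets == U) by sampling flags."""
    U = frozenset(U)
    hits = 0
    for _ in range(trials):
        covered = set()
        for subset in S:
            if random.random() < p:  # this subset got flag 1
                covered |= subset
        if covered == U:
            hits += 1
    return hits / trials
```

With p = 1 every subset is chosen, so the estimate is 1 exactly when S covers U; the standard error of the estimate shrinks as 1/sqrt(trials).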
Are there any efficient algorithms that could be used to generate all integer solutions to equations such as the ones below?
(8+3n)m = 11 | n ∈ {0,1}, m ∈ ℤ+
(5+(7+3x+2y)a+3z)b = 30 | x,y,z ∈ {0,1}, a,b ∈ ℤ+
Ideally I would like to be able to generate the set of all valid integer values for n,m and a,b,x,y,z respectively. At the very least I would like a way of testing if the equations are solvable at all. Given that these equations are nonlinear I would imagine that typical methods used to solve simple Diophantine equations would fail here.
I would really appreciate any help I could get!
Since 11 is prime, there are only 4 possible factorizations over ℤ:
8+3n = 11 and m = 1
8+3n = 1 and m = 11 (impossible)
8+3n = -11 and m = -1 (impossible)
8+3n = -1 and m = -11 (impossible)
By restricting n to {0,1}, only one solution remains: n = 1, m = 1.
For the second case there are quite a few more possibilities: since 30 = 2·3·5, it has 8 positive divisors, so you have 16 possible ordered products for your two factors over ℤ (8 positive and 8 negative)...
If you substitute the 8 possible combinations of (x,y,z), the first factor degenerates into a first-order polynomial in a, so that's only 8·16 = 128 polynomials to test for an integer root.
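That 8 × 16 search is small enough to write out directly. A minimal Python sketch (function and variable names are mine) that enumerates (x, y, z) and the divisor pairs of 30; since 5 + (7+3x+2y)a + 3z is always positive for a, b ∈ ℤ+, only the positive factorizations need checking:

```python
from itertools import product

def solve_second_equation(target=30):
    """Find all (x, y, z, a, b) with (5 + (7+3x+2y)*a + 3z) * b == target,
    where x, y, z are in {0,1} and a, b are positive integers."""
    solutions = []
    for x, y, z in product((0, 1), repeat=3):
        for b in range(1, target + 1):
            if target % b:
                continue
            c = target // b            # need 5 + (7+3x+2y)*a + 3z == c
            coeff = 7 + 3 * x + 2 * y  # first-degree coefficient of a
            rem = c - 5 - 3 * z
            if rem > 0 and rem % coeff == 0:
                solutions.append((x, y, z, rem // coeff, b))
    return solutions
```

For example, (x, y, z, a, b) = (0, 0, 1, 1, 2) gives (5 + 7 + 3)·2 = 30.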
If, after brute-force substitution of the variables that range over a finite set, every problem degenerates into a product of polynomials of one variable, then it amounts to finding integer roots of polynomials: trivial for first-order polynomials like in the two problems above, and equivalent to factoring a polynomial over the integers for higher degrees...
If the factors remain multivariate but linear (total degree 1), then it is like solving linear systems. But finding integer solutions is not necessarily trivial; I recommend reading http://sites.math.rutgers.edu/~sk1233/courses/ANT-F14/lec3.pdf
Otherwise, if the factors remain multivariate and of total degree > 1, that's equivalent to solving polynomial systems... In some cases that's possible, see https://en.wikipedia.org/wiki/Gr%C3%B6bner_basis.
I'm trying to solve the following:
The knapsack problem is as follows: given a set of integers S={s1,s2,…,sn}, and a given target number T, find a subset of S that adds up exactly to T. For example, within S={1,2,5,9,10} there is a subset that adds up to T=22 but not T=23. Give a correct programming algorithm for knapsack that runs in O(nT) time.
but the only algorithm I could come up with is generating all the combinations of 1 to n elements and trying the sums out (exponential time).
I can't devise a dynamic programming solution, since the fact that I can't reuse an object makes this problem different from the coin change problem and from the general knapsack problem.
Can somebody help me out with this or at least give me a hint?
The O(nT) running time gives you the hint: do dynamic programming on two axes. That is, let f(a,b) denote the maximum sum <= b which can be achieved with the first a integers.
f satisfies the recurrence
f(a,b) = max( f(a-1,b), f(a-1,b-s_a)+s_a )
since the first value is the maximum without using s_a and the second is the maximum including s_a. From here the DP algorithm should be straightforward, as should outputting the correct subset of S.
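A minimal Python sketch of this recurrence, computed bottom-up (function and variable names are mine); the table f is indexed exactly as in the recurrence, so the whole thing runs in O(nT):

```python
def max_subset_sum(S, T):
    """f[a][b] = maximum sum <= b achievable using the first a integers of S."""
    n = len(S)
    f = [[0] * (T + 1) for _ in range(n + 1)]
    for a in range(1, n + 1):
        for b in range(T + 1):
            f[a][b] = f[a - 1][b]                     # skip S[a-1]
            if S[a - 1] <= b:                         # or include S[a-1]
                f[a][b] = max(f[a][b],
                              f[a - 1][b - S[a - 1]] + S[a - 1])
    return f[n][T]
```

A subset of S sums exactly to T iff `max_subset_sum(S, T) == T`; the chosen subset can be recovered by walking the table back from f[n][T].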
I did find a solution, but with O(T·n²) time complexity, by building a table bottom-up. In other words, if we sort the array and start with the greatest number available, and make a table where the columns are the target values and the rows the provided numbers, then for each cell we need to consider the sums of all possible ways of making i - cost[j] + j, which takes n² time, and this is multiplied by the target.
Let's assume we have only integers whose values are in the range 1 to N. Next we split them into K-element multisets. How would you find a collection containing the smallest possible number of those multisets whose union covers all numbers from 1 to N? In case of ambiguity, the answer can be any collection that matches the criteria (first found).
For instance, we have N = 9, K = 3
(1,2,3)(4,5,6)(7,8,8)(8,7,6)(1,9,2)(4,4,3)
Smallest number of multi-sets that contains all the numbers from 1 to 9 is equal to 4 and can be either (1,2,3)(4,5,6)(7,8,8)(1,9,2) or (1,2,3)(4,5,6)(8,7,6)(1,9,2).
Any idea for efficient algorithm to find such set?
PS
After writing an answer I found yet another 4 element set: (4,5,6)(1,9,2)(4,4,3)(7,8,8) or (4,5,6)(1,9,2)(4,4,3)(8,7,6) But as I said algorithm finding any minimum set would be fine.
Your question is a restricted version of the classic Set Covering problem, but it is still easy to show that it is NP-hard.
Any approximation technique for this problem would be reasonable here. In particular, the greedy solution of choosing the next subset covering the most uncovered items is especially easy to implement.
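A minimal Python sketch of that greedy heuristic (function and variable names are mine), run on the multisets from the question; ties are broken by whichever subset comes first:

```python
def greedy_cover(universe, subsets):
    """Repeatedly pick the subset covering the most uncovered elements."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            return None  # no subset helps: the universe cannot be covered
        chosen.append(best)
        uncovered -= set(best)
    return chosen

multisets = [(1, 2, 3), (4, 5, 6), (7, 8, 8), (8, 7, 6), (1, 9, 2), (4, 4, 3)]
cover = greedy_cover(range(1, 10), multisets)
```

On this instance the greedy choice happens to find a 4-multiset cover, matching the minimum given in the question; in general greedy is only a ln(n)-approximation.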
This problem, as @Ami Tavroy said, is NP-hard by reduction from 3-dimensional matching (here).
To do the reduction, note the restricted decision variant of 3-dimensional matching, where it reduces to an exact cover (here):
...given a set T and an integer k, decide whether there exists a 3-dimensional matching M ⊆ T with |M| ≥ k. ... The problem is NP-complete even in the special case that k = |X| = |Y| = |Z|. In this case, a 3-dimensional (dominating) matching is not only a set packing but also an exact cover: the set M covers each element of X, Y, and Z exactly once.
This variant can be solved in P if you can solve the other question in P: you can produce all the triples in O(N^3) time, then do set cover and check whether K = N / 3 or not. Thus, by reduction, the original question is also NP-hard.
Suppose we have a finite set S and a list of subsets of S. Then, the set packing problem asks if some k subsets in the list are pairwise disjoint.
The optimization version of the problem, maximum set packing, asks for the maximum number of pairwise disjoint sets in the list.
http://en.wikipedia.org/wiki/Set_packing
So, let `S = {1,2,3,4,5,6,7,8,9,10}`
and `Sa = {1,2,3,4}`
and `Sb = {4,5,6}`
and `Sc = {5,6,7,8}`
and `Sd = {9,10}`
Then the maximum number of pairwise disjoint sets is 3 (`Sa`, `Sc`, `Sd`).
I could not find any articles about the algorithm involved. Can you shed some light on the same?
My approach:
Sort the sets by size and start from the smallest set. If no element of the next set intersects with the current set, then we unite the sets and increase the count of maximum sets. Does this sound good to you? Any better ideas?
As hivert pointed out, this problem is NP-hard, so there's no efficient way to do this. However, if your input is relatively small, you can still pull it off. Exponential doesn't mean impossible, after all. It's just that exponential problems become impractical very quickly, as the input size grows. But for something like 25 sets, you can easily brute force it.
Here's one approach. Let's say you have n subsets, called S0, S1, ..., etc. We can try every combination of subsets, and pick the one with maximum cardinality. There are only 2^25 = 33554432 choices, so this is probably reasonable enough.
An easy way to do this is to notice that any non-negative number strictly below 2^N represents a particular choice of subsets. Look at the binary representation of the number, and choose the sets whose indices correspond to the bits that are on. So if the number is 11, the 0th, 1st and 3rd bits are on, and this corresponds to the combination [S0, S1, S3]. Then you just verify that these three sets are in fact disjoint.
Your procedure is as follows:
Iterate i from 0 to 2^N - 1
For each value of i, use the bits that are on to figure out the corresponding combination of subsets.
If those subsets are pairwise disjoint, update your best answer with this combination (i.e., use this if it is bigger than your current best).
Alternatively, use backtracking to generate your subsets. The two approaches are equivalent, modulo implementation tradeoffs. Backtracking will have some stack overhead, but can cut off entire lines of computation if you check disjointness as you go. For example, if S1 and S2 are not disjoint, then it will never bother with any bigger combinations containing those two, saving some time. The iterative method can't optimize itself in this way, but is fast and efficient because of the bitwise operations and tight loop.
The only nontrivial matter here is how to check if the subsets are pairwise disjoint. There are all sorts of tricks you can pull here as well, depending on the constraints.
A simple approach is to start with an empty set structure (pick whatever you want from the language of your choice) and add elements from each subset one by one. If you ever hit an element that's already in the set, then it occurs in at least two subsets, and you can give up on this combination.
If the original set S has m elements, and m is relatively small, you can map each of them to the range [0, m-1] and use bitmasks for each set. So if m <= 64, you can use a Java long to represent each subset. Turn on all the bits that correspond to the elements in the subset. This allows blazing fast set operation, because of the speed of bitwise operations. Bitwise AND corresponds to set intersection, and bitwise OR is a union. You can check if two subsets are disjoint by seeing if the intersection is empty (i.e., ANDing the two bitmasks gives you 0).
If you don't have so few elements, you can still avoid repeating the set intersections multiple times. You have very few sets, so precompute which ones are disjoint at the start. You can just store a boolean matrix D, such that D[i][j] = true iff i and j are disjoint. Then you just look up all pairs in a combination to verify pairwise disjointness, rather than doing real set operations.
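A minimal Python sketch combining the iterative enumeration with the element-to-bit encoding described above (function and variable names are mine; Python ints play the role of the Java `long` bitmasks):

```python
def max_set_packing(subsets):
    """Brute-force maximum set packing; practical up to ~25 subsets."""
    # Map each element of the universe to a bit position.
    elems = sorted(set().union(*subsets))
    bit = {e: i for i, e in enumerate(elems)}
    masks = [sum(1 << bit[e] for e in s) for s in subsets]

    best = []
    for choice in range(1 << len(subsets)):   # every combination of subsets
        picked, union, ok = [], 0, True
        for i, m in enumerate(masks):
            if choice >> i & 1:
                if union & m:                 # AND != 0: overlap, give up
                    ok = False
                    break
                union |= m                    # OR accumulates the union
                picked.append(i)
        if ok and len(picked) > len(best):
            best = picked
    return [subsets[i] for i in best]

packing = max_set_packing([{1, 2, 3, 4}, {4, 5, 6}, {5, 6, 7, 8}, {9, 10}])
```

On the example from the question this finds the 3-set packing {1,2,3,4}, {5,6,7,8}, {9,10}.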
You can solve the set packing problem by searching for a maximum independent set. You encode your problem as follows:
for each set you put a vertex;
you put an edge between two vertices if their sets share a common number.
Then you want a maximum set of vertices no two of which are adjacent. Unfortunately, this is an NP-hard problem; every known algorithm is exponential.
In the Set Covering problem, we are given a universe U, such that |U| = n, and sets S1,…,Sk that are subsets of U. A set cover is a collection C of some of the sets from S1,…,Sk whose union is the entire universe U.
I'm trying to come up with an algorithm that will find a minimum set cover, so that I can show that the greedy algorithm for set covering sometimes finds more sets.
Following is what I came up with:
Repeat for each set:
1. Cover ← Set_i (i = 1,…,n)
2. If a set is not a subset of any other set, then take that set into the cover.
but it's not working for some instances.
Please help me figure out an algorithm to find the minimum set cover.
I'm still having trouble finding this algorithm online. Does anyone have any suggestions?
Set cover is NP-hard, so it's unlikely that there'll be an algorithm much more efficient than looking at all possible combinations of sets, and checking if each combination is a cover.
Basically, look at all combinations of 1 set, then 2 sets, etc. until they form a cover.
EDIT
This is an example pseudocode. Note that I do not claim that this is efficient. I simply claim that there isn't a much more efficient algorithm (algorithms will be worse than polynomial time unless something really cool is discovered)
for size in 1..|S|:
    for C in combination(S, size):
        if union(C) == U: return C
where combination(K, n) returns all possible sets of size n whose elements come from K.
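In Python, a minimal sketch of this search using itertools.combinations in place of a hand-rolled combination(K, n) (function and variable names are mine):

```python
from itertools import combinations

def minimum_set_cover(U, subsets):
    """Try all combinations of increasing size; the first cover found
    is a minimum one, since sizes are checked in ascending order."""
    U = set(U)
    for size in range(1, len(subsets) + 1):
        for combo in combinations(subsets, size):
            if set().union(*combo) == U:
                return list(combo)
    return None  # the subsets do not cover U at all

cover = minimum_set_cover({1, 2, 3, 4, 5},
                          [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}])
```

Here the minimum cover has size 2 ({1,2,3} together with {4,5}), while a greedy pick could be led astray on less friendly instances.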
EDIT
However, I'm not too sure why you need an algorithm to find the minimum. In the question you state that you want to show that the greedy algorithm for set covering sometimes finds more sets. But this is easily achieved via a counterexample (and a counterexample is shown in the wikipedia entry for set cover). So I am quite puzzled.
EDIT
A possible implementation of combination(K, n) is:
combination(K, n):
    if n == 0: return [{}]      // a list containing one empty set
    r = []
    for each k in K (iterating over a fixed snapshot of K):
        K = K \ {k}             // later picks come only from the rest
        for s in combination(K, n-1):
            r.append(union({k}, s))
    return r
But in combination with the cover problem, one probably wants to perform the coverage test at the base case n == 0 instead.
Try Donald E. Knuth's Algorithm X for exact set cover, using a sparse matrix. It must be adapted a little to also solve minimum set cover problems.