Find if any set is covered by member sets - algorithm

[Please let me know if this maps to a known problem]
I have n sets of varying sizes. Each element within a set is unique, and each element can occur in at most two different sets.
I want to perform an operation on these sets, but without processing any element twice or missing any element.
Problem: find which of these n sets should be removed because they are covered by the other sets.
E.g. [a,b,c]; [a]; [b]. Remove [a], [b] since both are covered by the first one.
E.g. [a,b,c]; [a]; [b]; [c,d]. Remove [a,b,c] since all three elements are covered by remaining sets.
Note: here removing [a],[b] alone is not a valid answer, since 'c' would be duplicated. Similarly, removing [a],[b],[c,d] is not a valid answer, since 'd' would be missed.

I think that this is the Exact Cover problem. The last constraint—that each element is in at most two sets—doesn't seem to me to fundamentally change the problem (although I could easily be wrong about this). The Wikipedia web page contains a good summary of various algorithmic approaches. The algorithm of choice seems to be Dancing Links.
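For a small instance the exact-cover formulation is easy to check by brute force. The sketch below (plain Python, not Dancing Links; exact_covers is an illustrative helper) lists which sub-collections of sets cover every element exactly once; the sets outside such a sub-collection are the ones that can be removed:
from itertools import combinations

def exact_covers(sets):
    # Enumerate every sub-collection whose members are pairwise disjoint and whose
    # union is the whole universe (i.e. no element duplicated, none missed).
    universe = set().union(*sets)
    covers = []
    for r in range(1, len(sets) + 1):
        for combo in combinations(range(len(sets)), r):
            chosen = [sets[i] for i in combo]
            if sum(len(s) for s in chosen) == len(universe) and set().union(*chosen) == universe:
                covers.append(combo)
    return covers

print(exact_covers([{'a', 'b', 'c'}, {'a'}, {'b'}, {'c', 'd'}]))   # [(1, 2, 3)] -> remove set 0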

I think this is a case of a 2-SAT problem that can be solved in linear time using a method based on Tarjan's algorithm.
Make a variable Ai for each set i. Ai is true if and only if set i is to be included.
For each element that appears in a single set i, add a unit clause asserting Ai (that set must be kept).
For each element that appears in two sets i and j, add the clauses (Ai || Aj) and (~Ai || ~Aj). Together these mean that exactly one of Ai and Aj must be true.
You can now run a standard 2-SAT algorithm to find whether a satisfying assignment exists and, if so, to produce one.
For a case with V sets and N elements you will have V variables and at most 2N clauses, so Tarjan's algorithm gives an overall complexity of O(V + N).
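To make the encoding concrete, here is a rough Python sketch that builds these clauses (build_2sat_clauses and the literal representation are illustrative; the clause list would then be fed to any standard 2-SAT solver, e.g. one based on strongly connected components):
from collections import defaultdict

def build_2sat_clauses(sets):
    # element -> indexes of the (at most two) sets containing it
    element_sets = defaultdict(list)
    for i, s in enumerate(sets):
        for e in s:
            element_sets[e].append(i)
    clauses = []   # each clause is a pair of literals; a literal is (variable, is_positive)
    for owners in element_sets.values():
        if len(owners) == 1:
            i = owners[0]
            clauses.append(((i, True), (i, True)))      # unit clause: Ai must be true
        else:
            i, j = owners
            clauses.append(((i, True), (j, True)))      # Ai OR Aj
            clauses.append(((i, False), (j, False)))    # NOT Ai OR NOT Aj
    return clauses

print(build_2sat_clauses([['a', 'b', 'c'], ['a'], ['b']]))   # 5 clauses for the first example above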

Since an element in a set can appear in no more than two sets, there are fairly straightforward connections between sets, which can be shown as a graph: each set is a node, and two sets are joined by an edge when they share an element. (The two examples from the question can each be drawn as such a graph.)
This view shows that the sets can be divided into three groups.
1. Sets where all elements appear twice. These sets could potentially be removed, and/or the other sets that contain those elements could be removed.
2. Sets where some, but not all, elements appear twice. The elements that appear twice could potentially link to sets that could be removed.
3. Sets where no elements appear twice. These sets can be ignored.
It's not really clear what happens if all of the sets are in either group 1 or group 3. However there seems to be a fairly simple criterion that allows for quickly removing sets, and the pseudocode looks like so:
for each set in group2:
    for each element that appears twice in that set:
        if the other set that contains that element is in group1:
            remove the other set
The performance is then linear in the number of elements.
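A rough Python rendering of that pseudocode (remove_covered_sets and the group names are illustrative; this mirrors the heuristic only, not a complete solution to the general problem):
from collections import defaultdict

def remove_covered_sets(sets):
    # owners: element -> indexes of the (at most two) sets containing it
    owners = defaultdict(list)
    for i, s in enumerate(sets):
        for e in s:
            owners[e].append(i)
    def shared(i):
        # elements of set i that also appear in one other set
        return [e for e in sets[i] if len(owners[e]) == 2]
    group1 = {i for i, s in enumerate(sets) if len(shared(i)) == len(s)}      # all elements shared
    group2 = {i for i, s in enumerate(sets) if 0 < len(shared(i)) < len(s)}   # only some shared
    removed = set()
    for i in group2:
        for e in shared(i):
            other = next(j for j in owners[e] if j != i)
            if other in group1:
                removed.add(other)
    return removed

print(remove_covered_sets([{'a', 'b', 'c'}, {'a'}, {'b'}]))              # {1, 2}
print(remove_covered_sets([{'a', 'b', 'c'}, {'a'}, {'b'}, {'c', 'd'}]))  # {0}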

I tried to find which sets to include rather than remove. Something like this?
(1) List of elements and the indexes of sets they are in
(2) Prime the answer list with indexes of sets that have elements that appear only in them
(3) Comb the map from (1) and if an element's set-index is not in the answer list, add to the answer the index of the smallest set that element is in.
Haskell code:
import Data.List (nub, minimumBy, intersect)

sets = [["a","b","c","e"],["a","b","d"],["a","c","e"]]
lengths = map length sets

--List elements and the indexes of sets they are in
mapped = foldr map [] (nub . concat $ sets) where
  map a b = comb (a,[]) sets 0 : b
  comb result [] _ = result
  comb (a,list) (x:xs) index
    | elem a x  = comb (a, index:list) xs (index + 1)
    | otherwise = comb (a, list) xs (index + 1)

--List indexes of sets that have elements that appear only in them
haveUnique = map (head . snd)
           . filter (\(element, list) -> null . drop 1 $ list)
           $ mapped

--Comb the map and if an element's set-index is not in the answer list,
--add to the answer the index of the smallest set that element is in.
answer = foldr comb haveUnique mapped where
  comb (a,list) b
    | not . null . intersect list $ b = b
    | otherwise =
        minimumBy (\setIndexA setIndexB ->
          compare (lengths!!setIndexA) (lengths!!setIndexB)) list : b
OUTPUT:
*Main> sets
[["a","b","c","e"],["a","b","d"],["a","c","e"]]
*Main> mapped
[("a",[2,1,0]),("b",[1,0]),("c",[2,0]),("e",[2,0]),("d",[1])]
*Main> haveUnique
[1]
*Main> answer
[2,1]

Related

Grouping a list of tuples into sets with coordinate-wise matching

Suppose that there is a list of n l-tuples. One is interested in grouping this list into sets, each containing m tuples, such that the coordinate-wise match within each set is maximal. For example:
input: {(1,2,3), (1,2,4), (2,3,1), (2,3,1), (4,3,1), (2,1,4)}, m = 3
output: {(1,2,3), (1,2,4), (2,1,4)}, {(2,3,1), (2,3,1), (4,3,1)}
It is important to note that, for certain values of n and m, some cases will result in a set with fewer elements than the others, or a set with more than m elements (tuples).
Questions: What is the name of this problem in the literature? Is there an algorithm that performs this task? What about leaving the number of tuples in each partition, m, not fixed, and determining the optimal such m under the given restriction, if that makes sense?
Thank you.

How to assign many subsets to their largest supersets?

My data has a large number of sets (a few million). Each set's size ranges from a few members to several tens of thousands of integers. Many of those sets are subsets of larger sets (there are many such super-sets). I'm trying to assign each subset to its largest superset.
Please can anyone recommend an algorithm for this type of task?
There are many algorithms for generating all possible sub-sets of a set, but this type of approach is time-prohibitive given my data size (e.g. this paper or SO question).
Example of my data-set:
A {1, 2, 3}
B {1, 3}
C {2, 4}
D {2, 4, 9}
E {3, 5}
F {1, 2, 3, 7}
Expected answer: B and A are subsets of F (it's not important that B is also a subset of A); C is a subset of D; E remains unassigned.
Here's an idea that might work:
Build a table that maps each number to a sorted list of sets, sorted first by size with largest first, and then, within equal sizes, arbitrarily but in some canonical order. (Say, alphabetically by set name.) So in your example, you'd have a table that maps 1 to [F, A, B], 2 to [F, A, D, C], 3 to [F, A, B, E] and so on. This can be implemented to take O(n log n) time where n is the total size of the input.
For each set in the input:
fetch the lists associated with each element in that set. So for A, you'd get the lists associated with 1, 2, and 3. The total number of lookups you'll issue over the runtime of the whole algorithm is O(n), so the runtime so far is O(n log n + n), which is still O(n log n).
Now walk down each list simultaneously. If a set is the first entry in all of the fetched lists, then it's the largest set that contains the input set. Output that association and continue with the next input set. If not, then discard the smallest item among the heads of the fetched lists and try again. Implementing this last bit is tricky, but you can store the heads of all lists in a heap and get (IIRC) something like O(n log k) overall runtime where k is the maximum size of any individual set, so you can bound that at O(n log n) in the worst case.
So if I got everything straight, the runtime of the algorithm is overall O(n log n), which seems like probably as good as you're going to get for this problem.
Here is a Python implementation of the algorithm:
from collections import defaultdict, deque
import heapq

def LargestSupersets(setlists):
    '''Computes, for each item in the input, the largest superset in the same input.
    setlists: A list of lists, each of which represents a set of items. Items must be hashable.
    '''
    # First, build a table that maps each element in any input setlist to a list of records
    # of the form (-size of setlist, index of setlist), one for each setlist that contains
    # the corresponding element
    element_to_entries = defaultdict(list)
    for idx, setlist in enumerate(setlists):
        entry = (-len(setlist), idx)  # cheesy way to make an entry that sorts properly -- largest first
        for element in setlist:
            element_to_entries[element].append(entry)

    # Within each entry, sort so that larger items come first, with ties broken arbitrarily by
    # the set's index
    for entries in element_to_entries.values():
        entries.sort()

    # Now build up the output by going over each setlist and walking over the entries list for
    # each element in the setlist. Since the entries list for each element is sorted largest to
    # smallest, the first entry we find that is in every entry set we pulled will be the largest
    # element of the input that contains each item in this setlist. We are guaranteed to eventually
    # find such an element because, at the very least, the item we're iterating on itself is in
    # each entries list.
    output = []
    for idx, setlist in enumerate(setlists):
        num_elements = len(setlist)
        buckets = [element_to_entries[element] for element in setlist]

        # We implement the search for an item that appears in every list by maintaining a heap and
        # a queue. We have the invariants that:
        #   1. The queue contains the n smallest items across all the buckets, in order
        #   2. The heap contains the smallest item from each bucket that has not already passed through
        #      the queue.
        smallest_entries_heap = []
        smallest_entries_deque = deque([], num_elements)
        for bucket_idx, bucket in enumerate(buckets):
            smallest_entries_heap.append((bucket[0], bucket_idx, 0))
        heapq.heapify(smallest_entries_heap)

        while (len(smallest_entries_deque) < num_elements or
               smallest_entries_deque[0] != smallest_entries_deque[num_elements - 1]):
            # First extract the next smallest entry in the queue ...
            (smallest_entry, bucket_idx, element_within_bucket_idx) = heapq.heappop(smallest_entries_heap)
            smallest_entries_deque.append(smallest_entry)
            # ... then add the next-smallest item from the bucket that we just removed an element from
            if element_within_bucket_idx + 1 < len(buckets[bucket_idx]):
                new_element = buckets[bucket_idx][element_within_bucket_idx + 1]
                heapq.heappush(smallest_entries_heap, (new_element, bucket_idx, element_within_bucket_idx + 1))

        output.append((idx, smallest_entries_deque[0][1]))
    return output
Note: don't trust my writeup too much here. I just thought of this algorithm right now, I haven't proved it correct or anything.
So you have millions of sets, with thousands of elements each. Just representing that dataset takes billions of integers. In your comparisons you'll quickly get to trillions of operations without even breaking a sweat.
Therefore I'll assume that you need a solution which will distribute across a lot of machines. Which means that I'll think in terms of https://en.wikipedia.org/wiki/MapReduce. A series of them.
1. Read the sets in, mapping them to k:v pairs of i: s where i is an element of the set s.
2. Receive a key of an integer i, along with the list of sets that contain it. Map them off to pairs (s1, s2): i where s1 <= s2 are both sets that contain i. Do not forget to map each set to be paired with itself!
3. For each pair (s1, s2), count the size k of the intersection and send off pairs s1: k, s2: k. (Only send the second if s1 and s2 are different.)
4. For each set s, receive the intersection sizes and hence the set of supersets (those sets whose intersection with s has size |s|). If s is maximal, send off s: s. Otherwise send off t: s for every t that is a strict superset of s.
5. For each set s, receive the set of subsets, with s in the list only if it is maximal. If s is maximal, send off t: s for every t that is a subset of s.
6. For each set we receive the set of maximal sets that it is a subset of. (There may be many.)
There are a lot of steps for this, but at its heart it requires repeated comparisons between pairs of sets with a common element for each common element. Potentially that is O(n * n * m) where n is the number of sets and m is the number of distinct elements that are in many sets.
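To make the passes concrete, here is a rough single-machine sketch of the same idea in plain Python (maximal_supersets and the other names are illustrative; no actual MapReduce framework is used):
from collections import defaultdict
from itertools import combinations

def maximal_supersets(named_sets):
    # Passes 1-2: for every element, list the sets that contain it
    by_element = defaultdict(list)
    for name, members in named_sets.items():
        for i in members:
            by_element[i].append(name)
    # Pass 3: count intersection sizes for every pair of sets that share an element
    inter = defaultdict(int)
    for names in by_element.values():
        for s1, s2 in combinations(sorted(names), 2):
            inter[(s1, s2)] += 1
    # Pass 4: s1 is a subset of s2 exactly when their intersection has size |s1|
    supersets = defaultdict(set)
    for (s1, s2), k in inter.items():
        if k == len(named_sets[s1]):
            supersets[s1].add(s2)
        if k == len(named_sets[s2]):
            supersets[s2].add(s1)
    # Passes 5-6: keep only the maximal supersets (those with no superset of their own);
    # sets with no superset at all (like E below) simply do not appear in the result
    return {s: {t for t in sup if not supersets.get(t)}
            for s, sup in supersets.items()}

data = {'A': {1, 2, 3}, 'B': {1, 3}, 'C': {2, 4}, 'D': {2, 4, 9}, 'E': {3, 5}, 'F': {1, 2, 3, 7}}
print(maximal_supersets(data))   # A and B map to {'F'}, C maps to {'D'}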
Here is a simple suggestion for an algorithm that might give better results based on your numbers (n = 10^6 to 10^7 sets with m = 2 to 10^5 members, a lot of super/subsets). Of course it depends a lot on your data. Generally speaking the complexity is much worse than for the other proposed algorithms. Maybe you could process only the sets with fewer than X (e.g. 1000) members this way, and use the other proposed methods for the rest.
1. Sort the sets by their size.
2. Remove the first (smallest) set and start comparing it against the others from behind (largest set first).
3. Stop as soon as you find a superset and create a relation. Just remove the set if no superset was found.
4. Repeat steps 2 and 3 for all but the last set.
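A rough Python sketch of these steps, using the example data from the question (assign_supersets and the names are illustrative):
def assign_supersets(named_sets):
    # 1. Sort the sets by size, smallest first
    order = sorted(named_sets, key=lambda name: len(named_sets[name]))
    relations = {}
    # 4. Repeat steps 2 and 3 for all but the last (largest) set
    for i, small in enumerate(order[:-1]):
        # 2. Compare the current set against the larger ones, largest first
        for big in reversed(order[i + 1:]):
            # 3. Stop as soon as a superset is found and record the relation
            if named_sets[small] <= named_sets[big]:
                relations[small] = big
                break
    return relations

data = {'A': {1, 2, 3}, 'B': {1, 3}, 'C': {2, 4}, 'D': {2, 4, 9}, 'E': {3, 5}, 'F': {1, 2, 3, 7}}
print(assign_supersets(data))   # {'B': 'F', 'C': 'D', 'A': 'F'}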
If you're using Excel, you could structure it as follows:
1) Create a Cartesian plot as a two-way table that has all your data sets as titles on both the side and the top
2) In a separate tab, create a row for each data set in the first column, along with a second column that will count the number of entries (ex: F has 4), and then just stack FIND(",") and MID formulas across the sheet to split out all the entries within each data set. Use the counter in the second column to do COUNTIF(">0"). Each variable you find can be your starting point in a subsequent FIND until it runs out of variables and just returns a blank.
3) Go back to your Cartesian plot, and bring over the separate entries you just generated for your column titles (ex: F is 1,2,3,7). Use an AND statement to then check that each entry in your left hand column is in your top row data set, using an OFFSET to your separate area and utilizing your counter as the width for the OFFSET.

Algorithm: separate a list of values into subsets by minimizing the maximal difference in the sums of elements between all sublists

I have a list of values (integers) that I would like to split into B non-empty sublists without changing their initial order. The goal is to adjust the size of text to fit it into a defined area.
Each sublist will have one metric associated to it : the sum of its values. I would like to minimise the difference DIFF between the biggest sum and smallest sum among all the sublists. This would allow me to divide text into lines with approximately the same amount of text.
EDIT
As suggested, it would also work to minimise the maximal sum, as that would result in minimising the maximal length of a line of text.
Examples:
Given the list L = {2,3,4,5,6} and B = 2.
Solution : L1 = {2,3,4} and L2 = {5,6}. Sum(L1) = 9, Sum(L2) = 11 and DIFF = 2
Given the List L = {1,1,8,1,1,1,8,1} and B = 3
Solution : L1 = {1,1,8}, L2 = {1,1,1} and L3 = {8,1}.Sum(L1) = 10, Sum(L2) = 3, Sum(L3) = 9 and DIFF = 7
My suggestion
As I don't have an IT background, I'm not sure how to approach this.
First, I tried to figure out the number of ways I could split my original list into B sublists. If the number of elements in the original list is N, then there are C(N-1, B-1) possible splits (one for each choice of the B-1 cut positions among the N-1 gaps between elements).
Then I tried to see what would be an appropriate algorithm to find the global minimum. I thought that if I ran into a situation where both of the conditions below are respected, I would have hit the global minimum.
Moving an element from (one of) the biggest sublist(s) into (one of) its neighbour(s) doesn't improve DIFF.
Moving an element from the (one of) the smallest sublist(s) into (one of) its neighbour(s) doesn't improve DIFF.
(As the sublists must not be empty, moving an element out of a sublist that has only one element requires changing several sublists.)
Questions
Are the two conditions mentioned sufficient to guarantee a global minimum (for DIFF) ?
Do you know / remember an algorithm solving this problem ? Or do you have a suggestion to solve this ?
Do you have any reading recommendations to help me to tackle this kind of problem ?
As I said, I don't have an IT background and don't have much experience with such computer theory problems.
Thank you !
Q: Are the two conditions mentioned sufficient to guarantee a global minimum (for DIFF) ?
A: NO
consider the following list: {6,5,2,4,3,7} with B=3
and the following potential solution:
{6} {5,2,4} {3,7}; Sums=(6,11,10), DIFF = 11-6 = 5
All one-element changes from the largest group make DIFF worse, or leave it the same:
{6,5} {2,4} {3,7}; Sums=(11,6,10), DIFF = 11-6 = 5
{6} {5,2} {4,3,7}; Sums=(6,7,14), DIFF = 14-6 = 8
{6} {5,2,4,3} {7}; Sums=(6,14,7), DIFF = 14-6 = 8
But there is a better solution:
{6,5} {2,4,3} {7}; Sums=(11,9,7), DIFF = 11-7 = 4
So your method only finds local minima, not global ones.
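A small brute force over all contiguous splits confirms that 4 is the best achievable DIFF for this list (illustrative sketch; best_split is an assumed helper):
from itertools import combinations

def best_split(values, B):
    # Try every way to choose the B-1 cut positions among the gaps between elements
    n = len(values)
    best = None
    for cuts in combinations(range(1, n), B - 1):
        bounds = (0,) + cuts + (n,)
        sums = [sum(values[a:b]) for a, b in zip(bounds, bounds[1:])]
        diff = max(sums) - min(sums)
        if best is None or diff < best[0]:
            best = (diff, [values[a:b] for a, b in zip(bounds, bounds[1:])])
    return best

print(best_split([6, 5, 2, 4, 3, 7], 3))   # (4, [[6, 5], [2, 4, 3], [7]])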

Fast way to compare cyclical data

Suppose I have the data set {A,B,C,D}, of arbitrary type, and I want to compare it to another data set. I want the comparison to be true for {A,B,C,D}, {B,C,D,A}, {C,D,A,B}, and {D,A,B,C}, but not for {A,C,B,D} or any other set that is not ordered similarly. What is a fast way to do this?
Storing them in arrays, rotating, and comparing that way is an O(n^2) task, so that's not very good.
My first intuition would be to store the data as a sequence like [A,B,C,D,A,B,C] and then search for the other sequence as a contiguous run within it, which is only O(n). Can this be done any faster?
There is a fast algorithm for finding the minimum rotation of a string - https://en.wikipedia.org/wiki/Lexicographically_minimal_string_rotation. So you can store and compare the minimum rotation.
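For reference, the algorithm described on that page (Booth's algorithm) can be adapted to lists, assuming the elements can be ordered; canonical() is an assumed helper for doing the comparison itself:
def least_rotation(seq):
    # Booth's algorithm (following the pseudocode on the linked Wikipedia page):
    # returns the index at which the lexicographically minimal rotation of seq starts.
    s = list(seq) + list(seq)        # concatenate to itself to avoid modular arithmetic
    f = [-1] * len(s)                # failure function
    k = 0                            # start of the least rotation found so far
    for j in range(1, len(s)):
        sj = s[j]
        i = f[j - k - 1]
        while i != -1 and sj != s[k + i + 1]:
            if sj < s[k + i + 1]:
                k = j - i - 1
            i = f[i]
        if sj != s[k + i + 1]:
            if sj < s[k]:            # i == -1 here
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

def canonical(lst):
    # Rotate lst so that its minimal rotation comes first; equal cycles compare equal.
    k = least_rotation(lst)
    return lst[k:] + lst[:k]

print(canonical(list('CDAB')) == canonical(list('ABCD')))   # True
print(canonical(list('CDBA')) == canonical(list('ABCD')))   # False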
One option is to use a directed graph. Set up a graph with the following transitions:
A -> B
B -> C
C -> D
D -> A
All other transitions will put you in an error state. Thus, provided each member is unique (which is implied by your use of the word set), you will be able to determine membership provided you end on the same graph node on which you started.
If a value can appear multiple times in your search, you'll need a smarter set of states and transitions.
This approach is useful if you precompute a single search and then match it to many data points. It's not so useful if you have to constantly regenerate the graph. It could also be cache-inefficient if your state table is large.
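A minimal sketch of that idea for unique elements (cyclic_equal and the names are illustrative):
def cyclic_equal(pattern, candidate):
    # Build the transition table A -> B, B -> C, C -> D, D -> A from the pattern,
    # then check that every consecutive step of the candidate follows a valid transition
    # (including the wrap-around step back to the start).
    if len(pattern) != len(candidate) or set(pattern) != set(candidate):
        return False
    nxt = dict(zip(pattern, pattern[1:] + pattern[:1]))
    return all(nxt[a] == b for a, b in zip(candidate, candidate[1:] + candidate[:1]))

print(cyclic_equal(list('ABCD'), list('CDAB')))   # True
print(cyclic_equal(list('ABCD'), list('ACBD')))   # False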
Well Dr Zoidberg, if you are interested in order, as you are, then you need to store your data in a structure that preserves order and also allows for easy rotation.
In Python a list would do.
Find the smallest element of the list then rotate each list you want to compare until the smallest element of them is at the beginning. Note: this is not a sort, but a rotation. With all the lists for comparison so normalised, a straightforward list compare between any two would tell if they are the same after rotation.
>>> def rotcomp(lst1, lst2):
        while min(lst1) != lst1[0]:
            lst1 = lst1[1:] + [lst1[0]]
        while min(lst2) != lst2[0]:
            lst2 = lst2[1:] + [lst2[0]]
        return lst1 == lst2
>>> rotcomp(list('ABCD'), list('CDAB'))
True
>>> rotcomp(list('ABCD'), list('CDBA'))
False
>>>
>>> rotcomp(list('AABC'), list('ABCA'))
False
>>> def rotcomp2(lst1, lst2):
        return repr(lst1)[1:-1] in repr(lst2 + lst2)
>>> rotcomp2(list('ABCD'), list('CDAB'))
True
>>> rotcomp2(list('ABCD'), list('CDBA'))
False
>>> rotcomp2(list('AABC'), list('ABCA'))
True
>>>
NEW SECTION: WITH DUPLICATES?
If the input may contain duplicates then (from the possible twin question mentioned under the question) an algorithm is to check whether one list is a sub-list of the other list repeated twice.
The function rotcomp2 above uses that algorithm, via a textual comparison of the repr of the list contents.

Algorithm/Data Structure for finding combinations of minimum values easily

I have a symmetric matrix of pairwise values between letters A, B, C, D; the specific entries used in this example appear in the calculations below.
I've made up the notation A.B which represents the value at grid point (A, B). Furthermore, writing A.B.C gives me the minimum grid point value like so: MIN((A,B), (A,C), (B,C)).
As another example A.B.D gives me MIN((A,B), (A,D), (B,D)).
My goal is to find the minimum values for ALL combinations of letters (without repetition), one row at a time. E.g. for this example I need to find the min values with respect to row A, which are given by the calculations:
A.B = 6
A.C = 8
A.D = 4
A.B.C = MIN(6,8,6) = 6
A.B.D = MIN(6, 4, 4) = 4
A.C.D = MIN(8, 4, 2) = 2
A.B.C.D = MIN(6, 8, 4, 6, 4, 2) = 2
I realize that certain calculations can be reused which becomes increasingly important as the matrix size increases, but the problem is finding the most efficient way to implement this reuse.
Can anyone point me in the right direction to an efficient algorithm/data structure I can use for this problem?
You'll want to think about the lattice of subsets of the letters, ordered by inclusion. Essentially, you have a value f(S) given for every subset S of size 2 (that is, every off-diagonal element of the matrix - the diagonal elements don't seem to occur in your problem), and the problem is to find, for each subset T of size greater than two, the minimum f(S) over all S of size 2 contained in T. (And then you're interested only in sets T that contain a certain element "A" - but we'll disregard that for the moment.)
First of all, note that if you have n letters, this amounts to asking Omega(2^n) questions, roughly one for each subset. (Excluding the zero- and one-element subsets and those that don't include "A" saves you n + 1 sets and a factor of two, respectively, which is allowed for big Omega.) So if you want to store all these answers for even moderately large n, you'll need a lot of memory. If n is large in your applications, it might be best to store some collection of pre-computed data and do some computation whenever you need a particular data point; I haven't thought about what would work best, but for example computing data only for a binary tree contained in the lattice would not necessarily gain you anything over precomputing nothing at all.
With these things out of the way, let's assume you actually want all the answers computed and stored in memory. You'll want to compute these "layer by layer", that is, starting with the three-element subsets (since the two-element subsets are already given by your matrix), then four-element, then five-element, etc. This way, for a given subset T, when we're computing f(T) we will already have computed all f(S) for S strictly contained in T. There are several ways that you can make use of this, but I think the easiest is the following: let t1 and t2 be two different elements of T that you may select however you like; let S be the subset of T that you get when you remove t1 and t2. Write S1 for S plus t1 and write S2 for S plus t2. Now every pair of letters contained in T is either fully contained in S1, or it is fully contained in S2, or it is {t1, t2}. Look up f(S1) and f(S2) in your previously computed values, then look up f({t1, t2}) directly in the matrix, and store f(T) = the minimum of these 3 numbers.
If you never select "A" for t1 or t2, then indeed you can compute everything you're interested in while not computing f for any sets T that don't contain "A". (This is possible because the steps outlined above are only interesting whenever T contains at least three elements.) Good! This leaves just one question - how to store the computed values f(T). What I would do is use a 2^(n-1)-sized array; represent each subset-of-your-alphabet-that-includes-"A" by the (n-1) bit number where the ith bit is 1 whenever the (i+1)th letter is in that set (so 0010110, which has bits 1, 2, and 4 set, represents the subset {"A", "C", "D", "F"} out of the alphabet "A" .. "H" - note I'm counting bits starting at 0 from the right, and letters starting at "A" = 0). This way, you can actually iterate through the sets in numerical order and don't need to think about how to iterate through all k-element subsets of an n-element set. (You do need to include a special case for when the set under consideration has 0 or 1 element, in which case you'll want to do nothing, or 2 elements, in which case you just copy the value from the matrix.)
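A rough bitmask sketch of this layer-by-layer computation (simplified to compute f for every subset rather than only those containing "A"; min_over_pairs, the matrix M, and the bit-to-letter mapping A = bit 0 .. D = bit 3 are illustrative, with the entries taken from the values quoted in the question):
from itertools import combinations

def min_over_pairs(matrix):
    # f[mask] = the minimum matrix value over all pairs of indices contained in mask
    n = len(matrix)
    f = {}
    for i, j in combinations(range(n), 2):
        f[(1 << i) | (1 << j)] = matrix[i][j]          # two-element subsets come straight from the matrix
    for mask in range(1 << n):
        if bin(mask).count('1') < 3:
            continue
        t1 = (mask & -mask).bit_length() - 1           # one member of the mask
        rest = mask & ~(1 << t1)
        t2 = (rest & -rest).bit_length() - 1           # a second, different member
        f[mask] = min(f[mask & ~(1 << t1)],            # pairs that avoid t1
                      f[mask & ~(1 << t2)],            # pairs that avoid t2
                      matrix[t1][t2])                  # the pair {t1, t2} itself
    return f

# Matrix entries for A, B, C, D reconstructed from the calculations in the question
M = [[0, 6, 8, 4],
     [6, 0, 6, 4],
     [8, 6, 0, 2],
     [4, 4, 2, 0]]
f = min_over_pairs(M)
print(f[0b0111], f[0b1011], f[0b1111])   # A.B.C = 6, A.B.D = 4, A.B.C.D = 2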
Well, it looks simple to me, but perhaps I misunderstand the problem. I would do it like this:
let P be a pattern string in your notation X1.X2. ... .Xn, where Xi is a column in your matrix
first compute the array CS = [ (X1, X2), (X1, X3), ... (X1, Xn) ], which contains all combinations of X1 with every other element in the pattern; CS has n-1 elements, and you can easily build it in O(n)
now you must compute min (CS), i.e. finding the minimum value of the matrix elements corresponding to the combinations in CS; again you can easily find the minimum value in O(n)
done.
Note: since your matrix is symmetric, given P you just need to compute CS by combining the first element of P with all other elements: (X1, Xi) is equal to (Xi, X1)
If your matrix is very large, and you want to do some optimization, you may consider prefixes of P: let me explain with an example
when you have solved the problem for P = X1.X2.X3, store the result in an associative map, where X1.X2.X3 is the key
later on, when you solve a problem P' = X1.X2.X3.X7.X9.X10.X11 you search for the longest prefix of P' in your map: you can do this by starting with P' and removing one component (Xi) at a time from the end until you find a match in your map or you end up with an empty string
if you find a prefix of P' in your map then you already know the solution for that sub-problem, so you just have to find the solution for the problem resulting from combining the first element of the prefix with the suffix, and then compare the two results: in our example the prefix is X1.X2.X3, so you just have to solve the problem for X1.X7.X9.X10.X11, and then compare the two values and choose the min (don't forget to update your map with the new pattern P')
if you don't find any prefix, then you must solve the entire problem for P' (and again don't forget to update the map with the result, so that you can reuse it in the future)
This technique is essentially a form of memoization.
