How many common items - algorithm

Let's say we have this information:
Group A - Item 1, Item 2, Item 3
Group B - Item 1, Item 3
Group C - Item 3, Item 4
I'd like to know which groups contain the most items in common:
Output:
Group A - (Item 1 and Item 3)
Group B - (Item 1 and Item 3)
What algorithm would you use?

First of all you have to represent the dataset:
data[A] = {1,2,3}
data[B] = {1,3}
data[C] = {3,4}
It is better to use numbers so you can use for loops, counters, etc., so:
data[0] = {1,2,3}
data[1] = {1,3}
data[2] = {3,4}
Then I would have another data structure with a counter of how many matches there are between groups, so for example matches[A][B] = 2, matches[A][C] = 1 and so on. That is the data structure that you will need to calculate. Once you have it, your problem is reduced to finding the maximum value in that data structure.
for i = 0; i < 3; i++
    for item in data[i]
        for j = 0; j < 3; j++
            // optimize a little bit (matches[A][A] doesn't make sense)
            if j == i
                next
            if item in data[j]
                matches[i][j]++
Of course you can optimize this some more. For example, we know that matches[A][B] is going to be equal to matches[B][A], so you can skip those iterations.
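For illustration, a minimal Python sketch of this counting scheme (variable names are mine), using sets so the membership test is cheap and only computing j > i:
data = [{1, 2, 3}, {1, 3}, {3, 4}]            # groups A, B, C from above
n = len(data)
matches = [[0] * n for _ in range(n)]
best = 0
for i in range(n):
    for j in range(i + 1, n):                 # symmetry: only compute j > i
        matches[i][j] = len(data[i] & data[j])
        best = max(best, matches[i][j])
pairs = [(i, j) for i in range(n) for j in range(i + 1, n) if matches[i][j] == best]
print(best, pairs)                            # 2 [(0, 1)] -> groups A and B share 2 items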

So given a list of groups and their contained items, you want to output the identities of all the groups that have the maximum number of items in common with some other group.
Let's get a list of groups and items:
group_items = (
    ('Group A', ('Item 1', 'Item 2', 'Item 3')),
    ('Group B', ('Item 1', 'Item 3')),
    ('Group C', ('Item 3', 'Item 4')),
)
Then let's store the maximum number of shared items for each group, so we can collect all matching groups at the end. We'll also track the max of the maxes as we go (rather than go back and re-compute it).
max_shared = {item[0]:0 for item in group_items}
num_groups = len(group_items)
group_sets = {}
max_max = 0
Now we're going to have to compare every group with every other group, but we can ignore certain comparisons. As @Perroloco mentions, comparing Group A with Group A isn't useful, and computing intersect(A,B) is symmetric with computing intersect(B,A), so we can range from 0 to N and then from i+1 to N, instead of doing 0..N cross 0..N.
I'm using the set data type, which costs something to construct. So I cached the sets because we aren't modifying the membership, just counting the membership of the intersection.
It's worth pointing out that while intersection(A,B) == intersection(B,A), it is not the case that the MAX for A is the same as the MAX for B. Thus, there are separate comparisons for the inner max and the outer max.
for i in range(num_groups):
    outer_name, outer_mem = group_items[i]
    if outer_name not in group_sets:
        group_sets[outer_name] = set(outer_mem)
    outer_set = group_sets[outer_name]
    outer_max = max_shared[outer_name]
    for j in range(i+1, num_groups):
        inner_name, inner_mem = group_items[j]
        if inner_name not in group_sets:
            group_sets[inner_name] = set(inner_mem)
        inner_set = group_sets[inner_name]
        ni = len(outer_set.intersection(inner_set))
        if ni > outer_max:
            outer_max = max_shared[outer_name] = ni
        if ni > max_max:
            max_max = ni
        if ni > max_shared[inner_name]:
            max_shared[inner_name] = ni
print("Overall max # of shared items:", max_max)
results = [grp for grp, mx in max_shared.items() if mx == max_max]
print("Groups with that many shared items:", results)

Related

Count subsets of array which qualify min(subset)+max(subset) < k

Was asked this question in an interview, didn't have a better answer than generating all possible subsets.
Example:
a = [4,2,5,7] k = 8
output = 4
[2],[4,2],[2,5],[4,2,5]
The interviewer implied that sorting the array should help, but I still couldn't figure out a better-than-brute-force solution. Will appreciate your input.
The interviewer implied that sorting the array would help and it does help. I'll try to explain.
Taking the array and k values you stated:
a = [4,2,5,7]
k = 8
Sorting the array will yield:
a_sort = [2,4,5,7]
Now we can consider the following procedure:
1. Set ii = 0, jj = 1.
2. Choose a_sort[ii] as a part of your subset.
2.1. If 2 * a_sort[ii] >= k, you are done. Else, the subset [a_sort[ii]] holds the condition and is a part of the solution.
3. Add a_sort[ii+jj] to your subset.
3.1. If a_sort[ii] + a_sort[ii+jj] < k,
3.1.1. the subset [a_sort[ii], a_sort[ii+jj]] holds the condition and is a part of the solution, as well as any subset which adds any number of additional elements a_sort[kk] where ii < kk < ii+jj,
3.1.2. set jj += 1 and go back to step 3.
3.2. Else, set ii += 1, jj = 1, and go back to step 2.
With your input this procedure should return:
[[2], [2,4],[2,5],[2,4,5]]
# [2,7] results in 2 + 7 = 9 > 8, and therefore you move on to [4]
# Note that for the [4] subset you get 4 + 4 = 8, which is not smaller than 8, so we are done
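Here is a direct Python sketch of that procedure (function name is mine; itertools.combinations fills in the free middle elements from step 3.1.1):
from itertools import combinations

def qualifying_subsets(a, k):
    a = sorted(a)
    out = []
    for ii in range(len(a)):
        if 2 * a[ii] >= k:
            break                          # step 2.1: done
        out.append([a[ii]])                # the singleton subset
        for jj in range(ii + 1, len(a)):
            if a[ii] + a[jj] >= k:
                break                      # step 3.2: advance ii
            middles = a[ii + 1:jj]         # step 3.1.1: free choices
            for r in range(len(middles) + 1):
                for mid in combinations(middles, r):
                    out.append([a[ii], *mid, a[jj]])
    return out

print(qualifying_subsets([4, 2, 5, 7], 8))
# [[2], [2, 4], [2, 5], [2, 4, 5]]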
Explanation
If you have a subset [a_sort[ii]] which does not satisfy 2 * a_sort[ii] < k, adding more numbers to the subset will only yield min(subset) + max(subset) >= 2 * a_sort[ii] >= k, so no further subsets starting at a_sort[ii] hold the wanted condition. Moreover, the subset [a_sort[ii+1]] gives 2 * a_sort[ii+1] >= 2 * a_sort[ii] >= k since a_sort is sorted, so you will not find any additional subsets.
For jj >= 1, if a_sort[ii] + a_sort[ii+jj] < k then you can push any number of members of a_sort into the subset, as long as their indices kk satisfy ii < kk < ii+jj. Since a_sort is sorted, adding these members does not change min(subset) + max(subset), which remains a_sort[ii] + a_sort[ii+jj], and we already know that this value is smaller than k.
Getting the count
In case you simply want to count the possible subsets, this can be done more easily than generating the subsets themselves.
Assume that for indices jj > ii the condition holds, i.e. a_sort[ii] + a_sort[jj] < k. If jj = ii + 1, this adds 1 possible subset. If jj > ii + 1, there are jj - ii - 1 elements in between, each of which can be present or not without changing the value of a_sort[ii] + a_sort[jj]. Therefore there are 2**(jj-ii-1) additional subsets to add to the solution group (jj-ii-1 elements, each independently present or not). This also holds for jj = ii + 1, since in this case 2**(jj-ii-1) = 2**0 = 1.
Looking at the example above:
[2] adds 1 count
[2,4] adds 1 count (ii = 0, jj = 1 --> 2**(1 - 0 - 1) = 2**0 = 1)
[2,5] adds 2 counts (ii = 0, jj = 2 --> 2**(2 - 0 - 1) = 2**1 = 2)
A total count of 4
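This translates into a short counting function (a sketch; O(n^2), which the next answer improves to O(n log n)):
def count_subsets(a, k):
    a = sorted(a)
    total = 0
    for ii in range(len(a)):
        if 2 * a[ii] < k:
            total += 1                       # the singleton [a[ii]]
        for jj in range(ii + 1, len(a)):
            if a[ii] + a[jj] < k:
                total += 2 ** (jj - ii - 1)  # free middle elements
    return total

print(count_subsets([4, 2, 5, 7], 8))        # 4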
Sort the array
For an element x at index l, do a binary search on the array to get index of the maximum integer in the array which is < k-x. Let the index be r.
For all subsets where min(subset) = x, we can have any element with index in range (l,r]. Number of subsets with min(subset) = x becomes the total number of possible subsets for (r-l) elements, so count = 2^(r-l) (or 0 if r<l).
(Note: in all such subsets, we are fixing x. That's why the range (l,r] isn't inclusive of l)
You have to iterate over the array, use the above process for each element/index to get the count of subsets where our current element is the minimum and the subset satisfies the given constraint. If you find an element with count=0, break the iteration.
This should work with O(N*log(N)) complexity, good enough for an interview question imo.
For the given example, sorted array = [2,4,5,7].
For element 2, l=0 and r=2. Count = 2^(2-0) = 4 (covers [2], [4,2], [2,5], [4,2,5]).
For element 4, l=1 and r=0. Count = 0, and we break the iteration.
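A possible Python rendering of this approach, using bisect for the binary search (the early break is safe because k-x only shrinks as x grows):
from bisect import bisect_left

def count_subsets(a, k):
    a = sorted(a)
    total = 0
    for l, x in enumerate(a):
        r = bisect_left(a, k - x) - 1   # last index with a[r] < k - x
        if r < l:
            break                       # count = 0; larger minima only get worse
        total += 2 ** (r - l)           # any subset of the (r - l) elements in (l, r]
    return total

print(count_subsets([4, 2, 5, 7], 8))   # 4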

Sort Thousands of Chuck E. Cheese Tickets

I need to sort an n-thousand size array of random unique positive integers into groups of consecutive integers, each of group size k or larger, and then further group them by dividends of some arbitrary positive integer j.
In other words, let's say I work at Chuck E. Cheese and we sometimes give away free tickets. I have a couple hundred thousand tickets on the floor and want to find out which employee handed out what, but only for groupings of consecutive ticket numbers larger than 500. Each employee has a random number from 0 to 100 assigned to them. That number corresponds to which "batch" of tickets were handed out, i.e. tickets #000000 to #001499 were handed out by employee 1, tickets #001500 to #002999 were handed out by employee 2, and so on. A large number of tickets are lost or missing. I only care about groups of consecutive ticket numbers larger than 500.
What is the fastest way for me to sort through this pile?
Edit:
As requested by @trincot, here is a worked-out example:
I have 150,000 unique tickets on the floor ranging from ticket #000000 to #200000 (i.e. missing 50,001 random tickets from the pile)
Step 1: sort each ticket from smallest to largest using an introsort algorithm.
Step 2: go through the list of tickets one by one and gather only tickets with "consecutiveness" greater than 500, i.e. I keep a tally of how many consecutive values I have found and only keep runs with tallies of 500 or higher. If I have tickets #409 thru #909 but not #408 or #910, then I would keep that group, but if that group had missed a ticket anywhere from #409 to #909, I would have thrown out the group and moved on.
Step 3: combine all my newly sorted groups together, each of which are size 500 or larger.
Step 4: figure out what tickets belong to who by going through the final numbers one by one again, dividing each by 1500, rounding down to nearest whole number, and putting them in their respective pile where each pile represents an employee.
The end result is a set of piles telling me which employees gave out more than 500 tickets at a time, how many times they did so, and what tickets they did so with.
Sample with numbers:
where k = 3 and j = 1500; k is the minimum consecutive-integer grouping size, j is the final ticket interval grouping size, i.e. 5, 6, and 7 fall into the 0th interval of size 1500, and 5996, 5997, 5998, 5999 fall into the 3rd interval of size 1500.
Input: [5 , 5996 , 8111 , 1000 , 1001, 5999 , 8110 , 7 , 5998 , 2500 , 1250 , 6 , 8109 , 5997]
Output:[ 0:[5, 6, 7] , 3:[5996, 5997, 5998, 5999] , 5:[8109, 8110, 8111] ]
Here is how you could do it in Python:
from collections import defaultdict

def partition(data, k, j):
    data = sorted(data)
    start = data[0]  # assuming data is not an empty list
    count = 0
    output = defaultdict(list)  # to automatically create a partition when referenced
    for value in data:
        bucket = value // j  # integer division
        if value % j == start % j + count:  # in same partition & consecutive?
            count += 1
            if count == k:
                # Add the k entries that we skipped so far:
                output[bucket].extend(list(range(start, start + count)))
            elif count > k:
                output[bucket].append(value)
        else:
            start = value
            count = 1
    return dict(output)

# The example given in the question:
data = [5, 5996, 8111, 1000, 1001, 5999, 8110, 7, 5998, 2500, 1250, 6, 8109, 5997]
print(partition(data, k=3, j=1500))
# outputs {0: [5, 6, 7], 3: [5996, 5997, 5998, 5999], 5: [8109, 8110, 8111]}
Here is untested Python for the fastest approach that I can think of. It will return just pairs of first/last ticket for each range of interest found.
def grouped_tickets(tickets, min_group_size, partition_size):
    tickets = sorted(tickets)
    answer = {}
    min_ticket = -1
    max_ticket = -1
    next_partition = 0
    for ticket in tickets:
        if next_partition <= ticket or max_ticket + 1 < ticket:
            if min_group_size <= max_ticket - min_ticket + 1:
                partition = min_ticket // partition_size
                if partition in answer:
                    answer[partition].append((min_ticket, max_ticket))
                else:
                    answer[partition] = [(min_ticket, max_ticket)]
            # Find where the next partition is.
            next_partition = (ticket // partition_size) * partition_size + partition_size
            min_ticket = ticket
            max_ticket = ticket
        else:
            max_ticket = ticket
    # And don't lose the last group!
    if min_group_size <= max_ticket - min_ticket + 1:
        partition = min_ticket // partition_size
        if partition in answer:
            answer[partition].append((min_ticket, max_ticket))
        else:
            answer[partition] = [(min_ticket, max_ticket)]
    return answer
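As a sanity check against the sample from the question (k = 3, j = 1500), I would expect a call like this to print one list of (first, last) ranges per partition (expected output only, since the code above is untested):
data = [5, 5996, 8111, 1000, 1001, 5999, 8110, 7, 5998, 2500, 1250, 6, 8109, 5997]
print(grouped_tickets(data, min_group_size=3, partition_size=1500))
# expected: {0: [(5, 7)], 3: [(5996, 5999)], 5: [(8109, 8111)]}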

Arranging the number 1 in a 2d matrix

Given the number of rows and columns of a 2d matrix
Initially all elements of matrix are 0
Given the number of 1's that should be present in each row
Given the number of 1's that should be present in each column
Determine if it is possible to form such a matrix.
Example:
Input: r=3 c=2 (no. of rows and columns)
2 1 0 (number of 1's that should be present in each row respectively)
1 2 (number of 1's that should be present in each column respectively)
Output: Possible
Explanation:
1 1
0 1
0 0
I tried solving this problem for like 12 hours by checking if the summation of Ri equals the summation of Ci.
But that check alone fails for cases like:
r=3 c=3
1 3 0
0 2 2
(Both sums are 4, yet the row needing 3 ones requires a 1 in every column, while the first column must contain none.)
r and c can be up to 10^5.
Any ideas how I should move further?
Edit: Constraints added and output should only be "possible" or "impossible". The possible matrix need not be displayed.
Can anyone help me now?
Hint: one possible solution utilizes Maximum Flow Problem by creating a special graph and running the standard maximum flow algorithm on it.
If you're not familiar with the above problem, you may start reading about it e.g. here https://en.wikipedia.org/wiki/Maximum_flow_problem
If you're interested in the full solution please comment and I'll update the answer. But it requires understanding the above algorithm.
Solution as requested:
Create a graph of r+c+2 nodes.
Node 0 is the source, node r+c+1 is the sink. Nodes 1..r represent the rows, while r+1..r+c the columns.
Create the following edges:
from the source to each node i=1..r, of capacity r_i
from each node i=r+1..r+c to the sink, of capacity c_i
between every pair of nodes i=1..r and j=r+1..r+c, of capacity 1
Run the maximum flow algorithm; the saturated edges between row nodes and column nodes define where you should put the 1s.
If it's not possible, then the maximum flow value is less than the number of expected ones in the matrix.
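For small instances, here is a hedged Python sketch of this construction (networkx is used purely for convenience, and the dense row-to-column edges would not scale to r, c = 10^5):
import networkx as nx

def possible(row_counts, col_counts):
    r, c = len(row_counts), len(col_counts)
    if sum(row_counts) != sum(col_counts):
        return False
    G = nx.DiGraph()
    source, sink = 0, r + c + 1
    for i, ri in enumerate(row_counts, start=1):
        G.add_edge(source, i, capacity=ri)       # source -> row i
        for j in range(r + 1, r + c + 1):
            G.add_edge(i, j, capacity=1)         # row i -> column j
    for j, cj in enumerate(col_counts, start=r + 1):
        G.add_edge(j, sink, capacity=cj)         # column j -> sink
    flow, _ = nx.maximum_flow(G, source, sink)
    return flow == sum(row_counts)               # all expected ones are placed

print(possible([2, 1, 0], [1, 2]))      # True  (the question's example)
print(possible([1, 3, 0], [0, 2, 2]))   # False (sums match but no matrix exists)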
I will illustrate the algorithm with an example.
Assume we have m rows and n columns. Let rows[i] be the number of 1s in row i, for 0 <= i < m,
and cols[j] be the number of 1s in column j, for 0 <= j < n.
For example, for m = 3, and n = 4, we could have: rows = {4 2 3}, cols = {1 3 2 3}, and
the solution array would be:
1 3 2 3
+--------
4 | 1 1 1 1
2 | 0 1 0 1
3 | 0 1 1 1
Because we only want to know whether a solution exists, the values in rows and cols may be permuted in any order. The solution of each permutation is just a permutation of the rows and columns of the above solution.
So, given rows and cols, sort cols in decreasing order, and rows in increasing order. For our example, we have cols = {3 3 2 1} and rows = {2 3 4}, and the equivalent problem.
3 3 2 1
+--------
2 | 1 1 0 0
3 | 1 1 1 0
4 | 1 1 1 1
We transform cols into a form that is better suited for the algorithm. What cols tells us is that we have two series of 1s of length 3, one series of 1s of length 2, and one series of 1s of length 1, that are to be distributed among the rows of the array. We rewrite cols to capture just that, that is COLS = {2/3 1/2 1/1}, 2 series of length 3, 1 series of length 2, and 1 series of length 1.
Because we have 2 series of length 3, a solution exists only if we can put two 1s in the first row. This is possible because rows[0] = 2. We do not actually put any 1 in the first row, but record the fact that 1s have been placed there by decrementing the length of the series of length 3. So COLS becomes:
COLS = {2/2 1/2 1/1}
and we combine our two counts for series of length 2, yielding:
COLS = {3/2 1/1}
We now have the reduced problem:
3 | 1 1 1 0
4 | 1 1 1 1
Again we need to place 1s from our series of length 2 to have a solution. Fortunately, rows[1] = 3 and we can do this. We decrement the length of 3/2 and get:
COLS = {3/1 1/1} = {4/1}
We have the reduced problem:
4 | 1 1 1 1
Which is solved by 4 series of length 1, just what we have left. If at any step, the series in COLS cannot be used to satisfy a row count, then no solution is possible.
The general processing for each row may be stated as follows. For each row r, starting from the first element in COLS, decrement the lengths of as many elements count[k]/length[k] of COLS as needed, so that the sum of the count[k]'s equals rows[r]. Eliminate series of length 0 in COLS and combine series of same length.
Note that because elements of COLS are in decreasing order of lengths, the length of the last element decremented is always less than or equal to the next element in COLS (if there is a next element).
EXAMPLE 2 : Solution exists.
rows = {1 3 3}, cols = {2 2 2 1} => COLS = {3/2 1/1}
1 series of length 2 is decremented to satisfy rows[0] = 1, and the 2 other series of length 2 remain at length 2.
rows[0] = 1
COLS = {2/2 1/1 1/1} = {2/2 2/1}
The 2 series of length 2 are decremented, and 1 of the series of length 1.
The series whose length has become 0 is deleted, and the series of length 1 are combined.
rows[1] = 3
COLS = {2/1 1/0 1/1} = {2/1 1/1} = {3/1}
A solution exists since rows[2] can be satisfied.
rows[2] = 3
COLS = {3/0} = {}
EXAMPLE 3: Solution does not exist.
rows = {0 2 3}, cols = {3 2 0 0} => COLS = {1/3 1/2}
rows[0] = 0
COLS = {1/3 1/2}
rows[1] = 2
COLS = {1/2 1/1}
rows[2] = 3 => impossible to satisfy; no solution.
SPACE COMPLEXITY
It is easy to see that it is O(m + n).
TIME COMPLEXITY
We iterate over each row only once. For each row i, we need to iterate over at most
rows[i] <= n elements of COLS. Time complexity is O(m x n).
After finding this algorithm, I found the following theorem:
The Havel-Hakimi theorem (Havel 1955, Hakimi 1962) states that there exists a matrix Xn,m of 0’s and 1’s with row totals a0=(a1, a2,… , an) and column totals b0=(b1, b2,… , bm) such that bi ≥ bi+1 for every 0 < i < m if and only if another matrix Xn−1,m of 0’s and 1’s with row totals a1=(a2, a3,… , an) and column totals b1=(b1−1, b2−1,… ,ba1−1, ba1+1,… , bm) also exists.
from the post Finding if binary matrix exists given the row and column sums.
This is basically what my algorithm does, while trying to optimize the decrementing part, i.e., all the -1's in the above theorem. Now that I see the above theorem, I know my algorithm is correct. Nevertheless, I checked the correctness of my algorithm by comparing it with a brute-force algorithm for arrays of up to 50 cells.
Here is the C# implementation.
using System.Collections.Generic;

public class Pair
{
    public int Count;
    public int Length;
}

public class PairsList
{
    public LinkedList<Pair> Pairs;
    public int TotalCount;
}

class Program
{
    static void Main(string[] args)
    {
        int[] rows = new int[] { 0, 0, 1, 1, 2, 2 };
        int[] cols = new int[] { 2, 2, 0 };
        bool success = Solve(cols, rows);
    }

    static bool Solve(int[] cols, int[] rows)
    {
        PairsList pairs = new PairsList() { Pairs = new LinkedList<Pair>(), TotalCount = 0 };
        FillAllPairs(pairs, cols);
        for (int r = 0; r < rows.Length; r++)
        {
            if (rows[r] > 0)
            {
                if (pairs.TotalCount < rows[r])
                    return false;
                if (pairs.Pairs.First != null && pairs.Pairs.First.Value.Length > rows.Length - r)
                    return false;
                DecrementPairs(pairs, rows[r]);
            }
        }
        return pairs.Pairs.Count == 0 || pairs.Pairs.Count == 1 && pairs.Pairs.First.Value.Length == 0;
    }

    static void DecrementPairs(PairsList pairs, int count)
    {
        LinkedListNode<Pair> pair = pairs.Pairs.First;
        while (count > 0 && pair != null)
        {
            LinkedListNode<Pair> next = pair.Next;
            if (pair.Value.Count == count)
            {
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
                count = 0;
            }
            else if (pair.Value.Count < count)
            {
                count -= pair.Value.Count;
                pair.Value.Length--;
                if (pair.Value.Length == 0)
                {
                    pairs.Pairs.Remove(pair);
                    pairs.TotalCount -= pair.Value.Count;
                }
                else if (pair.Next != null && pair.Next.Value.Length == pair.Value.Length)
                {
                    pair.Value.Count += pair.Next.Value.Count;
                    pairs.Pairs.Remove(pair.Next);
                    next = pair;
                }
            }
            else // pair.Value.Count > count
            {
                Pair p = new Pair() { Count = count, Length = pair.Value.Length - 1 };
                pair.Value.Count -= count;
                if (p.Length > 0)
                {
                    if (pair.Next != null && pair.Next.Value.Length == p.Length)
                        pair.Next.Value.Count += p.Count;
                    else
                        pairs.Pairs.AddAfter(pair, p);
                }
                else
                    pairs.TotalCount -= count;
                count = 0;
            }
            pair = next;
        }
    }

    static int FillAllPairs(PairsList pairs, int[] cols)
    {
        List<Pair> newPairs = new List<Pair>();
        int c = 0;
        while (c < cols.Length && cols[c] > 0)
        {
            int k = c++;
            if (cols[k] > 0)
                pairs.TotalCount++;
            while (c < cols.Length && cols[c] == cols[k])
            {
                if (cols[k] > 0) pairs.TotalCount++;
                c++;
            }
            newPairs.Add(new Pair() { Count = c - k, Length = cols[k] });
        }
        LinkedListNode<Pair> pair = pairs.Pairs.First;
        foreach (Pair p in newPairs)
        {
            while (pair != null && p.Length < pair.Value.Length)
                pair = pair.Next;
            if (pair == null)
            {
                pairs.Pairs.AddLast(p);
            }
            else if (p.Length == pair.Value.Length)
            {
                pair.Value.Count += p.Count;
                pair = pair.Next;
            }
            else // p.Length > pair.Value.Length
            {
                pairs.Pairs.AddBefore(pair, p);
            }
        }
        return c;
    }
}
(Note: to avoid confusion between when I'm talking about the actual numbers in the problem vs. when I'm talking about the zeros and ones in the matrix, I'm going to instead fill the matrix with spaces and X's. This obviously doesn't change the problem.)
Some observations:
If you're filling in a row, and there's (for example) one column needing 10 more X's and another column needing 5 more X's, then you're sometimes better off putting the X in the "10" column and saving the "5" column for later (because you might later run into 5 rows that each need 2 X's), but you're never better off putting the X in the "5" column and saving the "10" column for later (because even if you later run into 10 rows that all need an X, they won't mind if they don't all go in the same column). So we can use a somewhat "greedy" algorithm: always put an X in the column still needing the most X's. (Of course, we'll need to make sure that we don't greedily put an X in the same column multiple times for the same row!)
Since you don't need to actually output a possible matrix, the rows are all interchangeable and the columns are all interchangeable; all that matters is how many rows still need 1 X, how many still need 2 X's, etc., and likewise for columns.
With that in mind, here's one fairly simple approach:
(Optimization.) Add up the counts for all the rows, add up the counts for all the columns, and return "impossible" if the sums don't match.
Create an array of length r+1 and populate it with how many columns need 1 X, how many need 2 X's, etc. (You can ignore any columns needing 0 X's.)
(Optimization.) To help access the array efficiently, build a stack/linked-list/etc. of the indices of nonzero array elements, in decreasing order (e.g., starting at index r if it's nonzero, then index r−1 if it's nonzero, etc.), so that you can easily find the elements representing columns to put X's in.
(Optimization.) To help determine when there's a row that can't be satisfied, also make note of the total number of columns needing any X's, and make note of the largest number of X's needed by any row. If the former is less than the latter, return "impossible".
(Optimization.) Sort the rows by the number of X's they need.
Iterate over the rows, starting with the one needing the fewest X's and ending with the one needing the most X's, and for each one:
Update the array accordingly. For example, if a row needs 12 X's, and the array looks like [..., 3, 8, 5], then you'll update the array to look like [..., 3+7 = 10, 8+5−7 = 6, 5−5 = 0]. If it's not possible to update the array because you run out of columns to put X's in, return "impossible". (Note: this part should never actually return "impossible", because we're keeping count of the number of columns left and the max number of columns we'll need, so we should have already returned "impossible" if this was going to happen. I mention this check only for clarity.)
Update the stack/linked-list of indices of nonzero array elements.
Update the total number of columns needing any X's. If it's now less than the greatest number of X's needed by any row, return "impossible".
(Optimization.) If the first nonzero array element has an index greater than the number of rows left, return "impossible".
If we complete our iteration without having returned "impossible", return "possible".
(Note: the reason I say to start with the row needing the fewest X's, and work your way to the row with the most X's, is that a row needing more X's may involve examining and updating more elements of the array and of the stack, so the rows needing fewer X's are cheaper. This isn't just a matter of postponing the work: the rows needing fewer X's can help "consolidate" the array, so that there will be fewer distinct column-counts, making the later rows cheaper than they would otherwise be. In a very-bad-case scenario, such as the case of a square matrix where every single row needs a distinct positive number of X's and every single column needs a distinct positive number of X's, the fewest-to-most order means you can handle each row in O(1) time, for linear time overall, whereas the most-to-fewest order would mean that each row would take time proportional to the number of X's it needs, for quadratic time overall.)
Overall, this takes no worse than O(r+c+n) time (where n is the number of X's); I think that the optimizations I've listed are enough to ensure that it's closer to O(r+c) time, but it's hard to be 100% sure. I recommend trying it to see if it's fast enough for your purposes.
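Here is a simplified hedged sketch of this greedy in Python, replacing the count-array and index-stack with a max-heap of remaining column needs (so it runs in O(n log c) rather than the near-linear bound above; the function name is mine):
import heapq

def possible(row_counts, col_counts):
    if sum(row_counts) != sum(col_counts):
        return False
    heap = [-c for c in col_counts if c > 0]  # max-heap of columns' remaining X's
    heapq.heapify(heap)
    for r in sorted(row_counts):              # fewest X's first
        if r == 0:
            continue
        if r > len(heap):
            return False                      # too few columns still needing X's
        taken = [heapq.heappop(heap) for _ in range(r)]  # the r neediest columns
        for t in taken:
            if t + 1 < 0:                     # column still needs more X's
                heapq.heappush(heap, t + 1)
    return not heap                           # all column needs satisfied

print(possible([2, 1, 0], [1, 2]))     # True
print(possible([1, 3, 0], [0, 2, 2]))  # False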
You can use brute force (iterating through all 2^(r * c) possibilities) to solve it, but that will take a long time. If r * c is under 64, you can accelerate it to a certain extent using bit-wise operations on 64-bit integers; however, even then, iterating through all 64-bit possibilities would take, at 1 try per ms, over 500M years.
A wiser choice is to add bits one by one, and only continue placing bits if no constraints are broken. This will eliminate the vast majority of possibilities, greatly speeding up the process. Look up backtracking for the general idea. It is not unlike solving sudokus through guesswork: once it becomes obvious that your guess was wrong, you erase it and try guessing a different digit.
As with sudokus, there are certain strategies that can be written into code and will result in speedups when they apply. For example, if the sum of 1s in rows is different from the sum of 1s in columns, then there are no solutions.
If over 50% of the bits will be on, you can instead work on the complementary problem (transform all ones to zeroes and vice-versa, while updating row and column counts). Both problems are equivalent, because any answer for one is also valid for the complementary.
This problem can be solved in O(n log n) using the Gale-Ryser Theorem (where n is the maximum of the lengths of the two degree sequences).
First, make both sequences of equal length by adding 0's to the smaller sequence, and let this length be n.
Let the sequences be A and B. Sort both A and B in non-increasing order. Create a prefix sum array P for B such that the ith element of P is equal to the sum of the first i elements of B.
Now, check that sum(A) = sum(B), and then iterate over k from 1 to n, checking for each k that
a_1 + a_2 + ... + a_k <= min(b_1, k) + min(b_2, k) + ... + min(b_n, k)
The second sum can be calculated in O(log n) using binary search for the index of the first number in B smaller than k, and then using the precalculated P: every b_j >= k contributes k, and the remaining tail contributes its own sum.
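A hedged Python sketch of this check (not the answerer's code; bisect works on a negated copy of B, since B is sorted in decreasing order):
from bisect import bisect_right
from itertools import accumulate

def gale_ryser(A, B):
    n = max(len(A), len(B))
    A = sorted(A + [0] * (n - len(A)), reverse=True)  # pad and sort, non-increasing
    B = sorted(B + [0] * (n - len(B)), reverse=True)
    if sum(A) != sum(B):
        return False
    P = [0] + list(accumulate(B))                     # P[i] = b_1 + ... + b_i
    negB = [-b for b in B]                            # ascending, for bisect
    pref_a = 0
    for k in range(1, n + 1):
        pref_a += A[k - 1]
        m = bisect_right(negB, -k)                    # how many b_j >= k
        if pref_a > m * k + (P[n] - P[m]):            # sum of min(b_j, k)
            return False
    return True

print(gale_ryser([2, 1, 0], [1, 2]))     # True
print(gale_ryser([1, 3, 0], [0, 2, 2]))  # False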
Inspired by the solution given by RobertBaron, I have tried to build a new algorithm.
rows = [int(x) for x in input().split()]
cols = [int(ss) for ss in input().split()]
rows.sort()
cols.sort(reverse=True)
for i in range(len(rows)):
    for j in range(len(cols)):
        if rows[i] != 0 and cols[j] != 0:
            rows[i] = rows[i] - 1
            cols[j] = cols[j] - 1
print("rows: ", rows)
print("cols: ", cols)
# if there is any non-zero value, print NO, else print YES
flag = True
for i in range(len(rows)):
    if rows[i] != 0:
        flag = False
        break
for j in range(len(cols)):
    if cols[j] != 0:
        flag = False
if flag:
    print("YES")
else:
    print("NO")
Here, I have sorted the rows in ascending order and cols in descending order, later decrementing a particular row and column if a 1 needs to be placed.
It works for all the test cases posted here! The rest, God knows.

Algorithm for all combinations to divide set into equally sized subsets [duplicate]

Let's say I have a set of elements S = { 1, 2, 3, 4, 5, 6, 7, 8, 9 }
I would like to create combinations of 3 and group them in a way such that no number appears in more than one combination.
Here is an example:
{ {3, 7, 9}, {1, 2, 4}, {5, 6, 8} }
The order of the numbers in the groups does not matter, nor does the order of the groups in the entire example.
In short, I want every possible group combination from every possible combination in the original set, excluding the ones that have a number appearing in multiple groups.
My question: is this actually feasible in terms of run time and memory? My sample sizes could be somewhere around 30-50 numbers.
If so, what is the best way to create this algorithm? Would it be best to create all possible combinations, and choose the groups only if the number hasn't already appeared?
I'm writing this in Qt 5.6, which is a C++ based framework.
You can do this recursively, and avoid duplicates, if you keep the first element fixed in each recursion, and only make groups of 3 with the values in order, e.g.:
{1,2,3,4,5,6,7,8,9}
Put the lowest element in the first spot (a), and keep it there:
{a,b,c} = {1, *, *}
For the second spot (b), iterate over every value from the second-lowest to the second-highest:
{a,b,c} = {1, 2~8, *}
For the third spot (c), iterate over every value higher than the second value:
{1, 2~8, b+1~9}
Then recurse with the rest of the values.
{1,2,3} {4,5,6} {7,8,9}
{1,2,3} {4,5,7} {6,8,9}
{1,2,3} {4,5,8} {6,7,9}
{1,2,3} {4,5,9} {6,7,8}
{1,2,3} {4,6,7} {5,8,9}
{1,2,3} {4,6,8} {5,7,9}
{1,2,3} {4,6,9} {5,7,8}
{1,2,3} {4,7,8} {5,6,9}
{1,2,3} {4,7,9} {5,6,8}
{1,2,3} {4,8,9} {5,6,7}
{1,2,4} {3,5,6} {7,8,9}
...
{1,8,9} {2,6,7} {3,4,5}
When I say "in order", that doesn't have to be any specific order (numerical, alphabetical...), it can just be the original order of the input. You can avoid having to re-sort the input of each recursion if you make sure to pass the rest of the values on to the next recursion in the order you received them.
A run-through of the recursion:
Let's say you get the input {1,2,3,4,5,6,7,8,9}. As the first element in the group, you take the first element from the input, and for the other two elements, you iterate over the other values:
{1,2,3}
{1,2,4}
{1,2,5}
{1,2,6}
{1,2,7}
{1,2,8}
{1,2,9}
{1,3,4}
{1,3,5}
{1,3,6}
...
{1,8,9}
making sure the third element always comes after the second element, to avoid duplicates like:
{1,3,5} ↔ {1,5,3}
Now, let's say that at a certain point, you've selected this as the first group:
{1,3,7}
You then pass the rest of the values onto the next recursion:
{2,4,5,6,8,9}
In this recursion, you apply the same rules as for the first group: take the first element as the first element in the group and keep it there, and iterate over the other values for the second and third element:
{2,4,5}
{2,4,6}
{2,4,8}
{2,4,9}
{2,5,6}
{2,5,8}
{2,5,9}
{2,6,7}
...
{2,8,9}
Now, let's say that at a certain point, you've selected this as the second group:
{2,5,6}
You then pass the rest of the values onto the next recursion:
{4,8,9}
And since this is the last group, there is only one possibility, and so this particular recursion would end in the combination:
{1,3,7} {2,5,6} {4,8,9}
As you see, you don't have to sort the values at any point, as long as you pass them on to the next recursion in the order you received them. So if you receive e.g.:
{q,w,e,r,t,y,u,i,o}
and you select from this the group:
{q,r,u}
then you should pass on:
{w,e,t,y,i,o}
Here's a JavaScript snippet which demonstrates the method; it returns a 3D array with combinations of groups of elements.
(The filter function creates a copy of the input array, with elements 0, i and j removed.)
function clone2D(array) {
    var clone = [];
    for (var i = 0; i < array.length; i++) clone.push(array[i].slice());
    return clone;
}

function groupThree(input) {
    var result = [], combination = [];
    group(input, 0);
    return result;

    function group(input, step) {
        combination[step] = [input[0]];
        for (var i = 1; i < input.length - 1; i++) {
            combination[step][1] = input[i];
            for (var j = i + 1; j < input.length; j++) {
                combination[step][2] = input[j];
                if (input.length > 3) {
                    var rest = input.filter(function(elem, index) {
                        return index && index != i && index != j;
                    });
                    group(rest, step + 1);
                }
                else result.push(clone2D(combination));
            }
        }
    }
}
var result = groupThree([1,2,3,4,5,6,7,8,9]);
for (var r in result) document.write(JSON.stringify(result[r]) + "<br>");
For n things taken 3 at a time, you could use 3 nested loops:
for(k = 0; k < n-2; k++){
    for(j = k+1; j < n-1; j++){
        for(i = j+1; i < n; i++){
            ... S[k] ... S[j] ... S[i]
        }
    }
}
For a generic solution of n things taken k at a time, you could use an array of k counters.
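As a sketch, the counter array can be advanced like an odometer whose digits stay strictly increasing (roughly what itertools.combinations does internally):
def combinations_k(S, k):
    n = len(S)
    idx = list(range(k))                  # the k counters, strictly increasing
    while True:
        yield [S[i] for i in idx]
        j = k - 1
        while j >= 0 and idx[j] == n - k + j:
            j -= 1                        # find the rightmost counter with room
        if j < 0:
            return                        # all counters maxed out: done
        idx[j] += 1
        for m in range(j + 1, k):
            idx[m] = idx[m - 1] + 1       # reset the counters to its right

print(list(combinations_k([1, 2, 3, 4], 3)))
# [[1, 2, 3], [1, 2, 4], [1, 3, 4], [2, 3, 4]]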
I think you can solve it by using the coin change problem with dynamic programming: just assume you are looking for change of 3 and every index in the array is a coin of value 1, then output the coins (values in your array) that were found.
Link: https://www.youtube.com/watch?v=18NVyOI_690

How can I write this combinatorics algorithm more efficiently?

A group contains a set of entities and each entity has a value.
Each entity can be a part of more than one group.
Problem: Find largest N groups where each entity appears no more than once in the result. An entity can be excluded from a group if necessary.
Example:
Entities with values:
A = 2
B = 2
C = 2
D = 3
E = 3
Groups
1: (A,B,C) total value: 2+2+2 = 6
2: (B,D) total value: 2 + 3 = 5
3: (C,E) total value: 2 + 3 = 5
4: (D) total value: 3
5: (E) total value: 3
**Answers**:
Largest 1 group is obviously (A,B,C) with total value 6
Largest 2 groups are (B,D), (C,E) with total value 10
Largest 3 groups are either {(A,B,C),(D),(E)}, {(A,B),(C,E),(D)} or {(A,C), (B,D), (E)} with total value 12
The input data to the algorithm should be:
A set of entities with values
Groups containing one or more of the entities
The amount of groups in the result
If there are multiple answers then finding one of them is sufficient.
I included the example to try to make the problem clear. The amount of entities in practice should be less than about 50, and the amount of groups should be less than the amount of entities. The amount of N groups to find will be between 1 and 10.
I am currently solving this problem by generating all possible combinations of N groups, excluding the results that contain duplicate entities, and then picking the combination with the largest total value. This is of course extremely inefficient, but I can't get my head around how to obtain a general result in a more efficient way.
My question is if it's possible to solve this in a more efficient way, and if so, how? Any hints or answers are greatly appreciated.
edit
To be clear, in my solution I generate "fake" groups where duplicate entities are excluded from "real" groups. In the example, entities (B, C, D, E) are duplicates (they exist in more than one group). Then for group 1 (A,B,C) I add the fake groups (A,B), (A,C), (A) to the list of groups that I generate combinations for.
This problem can be formulated as a linear integer program. Although integer programming is not super efficient in terms of complexity, it works very quickly with this number of variables.
Here is how we turn this problem into an integer program.
Let v be a vector of size K representing the entity values.
Let G be a K x M binary matrix that defines the groups: G(i,j)=1 means that the entity i belongs to the group j and G(i,j)=0 otherwise.
Let x be a binary vector of size M, which represents the choice of groups: x[j]=1 indicates we pick the group j.
Let y be a binary vector of size K, which represents the inclusion of entities: y[i]=1 means that the entity i is included in the outcome.
Our goal is to choose x and y so as to maximize sum(v*y) under the following conditions:
G x >= y ... all included entities must belong to at least one of chosen groups
sum(x) = N ... we choose exactly N groups.
Below is an implementation in R. It uses the lpSolve library, an interface to lp_solve.
library(lpSolve)

solver <- function(values, groups, N)
{
    n_group <- ncol(groups)
    n_entity <- length(values)
    object <- c(rep(0, n_group), values)
    lhs1 <- cbind(groups, -diag(n_entity))
    rhs1 <- rep(0, n_entity)
    dir1 <- rep(">=", n_entity)
    lhs2 <- matrix(c(rep(1, n_group), rep(0, n_entity)), nrow=1)
    rhs2 <- N
    dir2 <- "="
    lhs <- rbind(lhs1, lhs2)
    rhs <- c(rhs1, rhs2)
    direc <- c(dir1, dir2)
    lp("max", object, lhs, direc, rhs, all.bin=TRUE)
}
values <- c(A=2, B=2, C=2, D=3, E=3)
groups <- matrix(c(1,1,1,0,0,
                   0,1,0,1,0,
                   0,0,1,0,1,
                   0,0,0,1,0,
                   0,0,0,0,1),
                 nrow=5, ncol=5)
rownames(groups) <- c("A", "B", "C", "D", "E")
ans <- solver(values, groups, 1)
print(ans)
names(values)[tail(ans$solution, length(values))==1]
# Success: the objective function is 6
# [1] "A" "B" "C"
ans <- solver(values, groups, 2)
print(ans)
names(values)[tail(ans$solution, length(values))==1]
# Success: the objective function is 10
# [1] "B" "C" "D" "E"
ans <- solver(values, groups, 3)
print(ans)
names(values)[tail(ans$solution, length(values))==1]
# Success: the objective function is 12
# [1] "A" "B" "C" "D" "E"
Below is a test to see how this can work with a large problem. It finishes in one second.
# how does it scale?
n_entity <- 50
n_group <- 50
N <- 10
entity_names <- paste("X", 1:n_entity, sep="")
values <- sample(1:10, n_entity, replace=TRUE)
names(values) <- entity_names
groups <- matrix(sample(c(0,1), n_entity*n_group,
                        replace=TRUE, prob=c(0.99, 0.01)),
                 nrow=n_entity, ncol=n_group)
rownames(groups) <- entity_names
ans <- solver(values, groups, N)
print(ans)
names(values)[tail(ans$solution, length(values))==1]
If the entity values are always positive, I think you can get a solution without generating all combinations:
Sort the groups by their largest element, then separately by their 2nd largest element, ..., up to their nth largest element. In this case you would have 3 sorted copies, since the largest group has 3 elements.
For each copy, make one pass from the largest to the smallest, adding the group to the solution only if it doesn't contain an element you've already added. This yields 3 results; take the largest. There shouldn't be a larger possible solution unless weights could be negative.
here's an implementation in C#
var entities = new Dictionary<char, int>() { { 'A', 2 }, { 'B', 2 }, { 'C', 2 }, { 'D', 3 }, { 'E', 3 } };
var groups = new List<string>() { "ABC", "BD", "CE", "D", "E" };

var solutions = new List<Tuple<List<string>, int>>();
for (int i = 0; i < groups.Max(x => x.Length); i++)
{
    var solution = new List<string>();
    foreach (var group in groups.OrderByDescending(x => x.Length > i ? entities[x[i]] : -1))
        if (!group.ToCharArray().Any(c => solution.Any(g => g.Contains(c))))
            solution.Add(group);
    solutions.Add(new Tuple<List<string>, int>(solution, solution.Sum(g => g.ToCharArray().Sum(c => entities[c]))));
}

solutions.Dump();
solutions.OrderByDescending(x => x.Item2).First().Dump();
output:
