Data Structure with fast access to nth of elements satisfying condition - algorithm

I'm filling a stack/vector (a dynamically sized container with fast random access by index and insertion only at the end) with composite data (a struct, class, tuple…). For a specific attribute with a small set of possible values, I will want to access the nth of all elements in the stack where this attribute satisfies a condition. To achieve this, additional information can be stored alongside each composite or in a separate data structure.
Note that the vector is large and that the attribute in question has a small value range but is compared against a set of allowed values. Also, the attributes aren't distributed evenly across the composites in the vector.
Pseudocode of an O(n) naïve approach; how can I improve on this?
enum Fruit { apple, orange, banana, potato };
struct c {
    Fruit a;
    Data d;
};
// Let's assume v has a length of many thousand and that the distribution of fruits is *not* completely random, e.g. that maybe potato only rarely occurs or that bananas tend to come in packs
c getFruit(const vector<c>& v, const set<Fruit>& s, int n) {
    int counter = 0;
    // iterate over all of v's indices
    for (size_t i = 0; i < v.size(); i += 1) {
        if (s.count(v[i].a)) {   // attribute is one of the allowed values
            if (n == counter) {
                return v[i];
            }
            counter += 1;
        }
    }
    // n is out of range: no matching element
    throw out_of_range("no matching element");
}
// note: The attribute is compared to a set (arbitrary combination of fruits)!
getFruit(largeVector, set{apple, orange, potato}, 15234)
Another approach would be to create a separate vector for each possible set of fruits, which would be super fast, O(1), but not very memory efficient.
(Although I do have to implement this now, I'm really just asking out of curiosity, because my data is small enough to just go with the naïve approach.)
Any argument for why there doesn't seem to be a more efficient way is very much appreciated as well.
Edit: It should be noted that new elements may be appended between queries, so any cache has to grow with the vector, and both appending to the vector and this filtered access should be fast.

For each index of the vector, store the number of preceding elements of each fruit (a per-fruit prefix count).
Then you can do a binary search to find the first index where the sum of the desired fruit counts is sufficient.
If you don't want to use that much memory, then store the counts in separate arrays, and only store them for every 16th index or so of the main array. Your binary search will then get you an index within 16 positions of the desired answer, and you can do a linear scan from there.
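Here is a minimal sketch of that scheme, assuming the Fruit enum from the question and a checkpoint every 16 elements; the FilteredIndex wrapper and its member names are made up for illustration, not part of the question's code:
#include <array>
#include <set>
#include <stdexcept>
#include <vector>
using namespace std;
enum Fruit { apple, orange, banana, potato, FRUIT_COUNT };
struct Data { /* payload */ };
struct c { Fruit a; Data d; };
struct FilteredIndex {
    static const size_t STEP = 16;
    vector<c> v;
    // checkpoints[k][f] = how many elements with fruit f appear in v[0 .. 16*k - 1]
    vector<array<int, FRUIT_COUNT>> checkpoints;
    FilteredIndex() { checkpoints.push_back({}); }   // checkpoint 0: all zeros
    void push_back(const c& elem) {                  // amortized O(1) append
        v.push_back(elem);
        if (v.size() % STEP == 0) {                  // another block of 16 is complete
            array<int, FRUIT_COUNT> cnt = checkpoints.back();
            for (size_t i = v.size() - STEP; i < v.size(); ++i) ++cnt[v[i].a];
            checkpoints.push_back(cnt);
        }
    }
    // nth (0-based) element whose fruit is in s
    const c& getFruit(const set<Fruit>& s, int n) const {
        // number of matches for s among v[0 .. 16*k - 1]
        auto countBefore = [&](size_t k) {
            int sum = 0;
            for (Fruit f : s) sum += checkpoints[k][f];
            return sum;
        };
        // binary search for the last checkpoint with at most n matches before it
        size_t lo = 0, hi = checkpoints.size() - 1;
        while (lo < hi) {
            size_t mid = (lo + hi + 1) / 2;
            if (countBefore(mid) <= n) lo = mid; else hi = mid - 1;
        }
        // at most 16 elements left to scan linearly
        int counter = countBefore(lo);
        for (size_t i = lo * STEP; i < v.size(); ++i)
            if (s.count(v[i].a) && counter++ == n) return v[i];
        throw out_of_range("no matching element");
    }
};
A query is then O(|s| · log(n/16) + 16) instead of O(n), appending stays amortized O(1), and the extra memory is one small count array per 16 elements.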

Related

Generating in-order constrained sets

First I will paste the scenario and then pose my question:
Suppose you have a list of Categories, for example:
Food,Meat,Dairy,Fruit,Vegetable,Grain,Wheat,Barley
Now you have a list of items, each of which fits into one or more of the categories listed above.
Here is a sample list of items:
Pudding,Cheese,Milk,Chicken,Barley,Bread,Couscous,Fish,Apple,Tomato,
Banana,Grape,Lamb,Roast,Honey,Potato,Rice,Beans,Legume,Barley Soup
As you see, every item fits into at least one category; it could fit into more, or possibly all, but the minimum is always one.
For example Cheese is a Food and Dairy.
Each item has two attributes:
1) A Price Tag
2) A Random Value
A set is defined as having every category mapped to an item.
In other words all categories must be present in a set.
A set from the items above could be:
[Pudding,Lamb,Milk,Apple,Tomato,Legume,Bread,Barley Soup]
As you see each item is mapped to a category slot:
Pudding is mapped to Food Category
Lamb is mapped to Meat Category
Milk is mapped to Dairy Category
Apple is mapped to Fruit Category
Tomato is mapped to Vegetable Category
Legume is mapped to Grain Category
Bread is mapped to Wheat Category
Barley Soup is mapped to Barley Category
My question is, what is the most efficient algorithm for generating in-order sets of the above categories from a given list of items?
The best set is defined as having the highest Random Value in total.
The only constraint is that any generated set cannot, in total, exceed a certain fixed amount; in other words, all generated sets should be within this Price Cap.
Hope I am clear, thank you!
What you are trying to achieve is a form of maximal matching, and I don't know if there is an efficient way to compute in-order sets, but still this reduction might help you.
Define a bipartite graph with, on one side, one node per category, and on the other side one node per item. Add an edge between an item and a category if that item belongs to that category, with a weight defined by the random value of the item.
A "set" as you defined it is a maximum-cardinality matching in that graph.
They can be enumerated in reasonable time, as proved by Takeaki Uno in
"A Fast Algorithm for Enumerating Non-Bipartite Maximal Matchings", and it is likely to be even faster in your situation because your graph is bipartite.
Among those sets, you are looking for the ones with maximal weight and under a price constraint. Depending on your data, it may be enough to just enumerate them all, filter them based on the price, and sort the remaining results if there are not too many.
If that is not the case, then you may find the best set by solving the combinatorial optimization problem whose objective function is the total weight, and whose constraints are the price limit and the cardinality (known as maximum-weighted matching in the literature). There may even be solvers available online once you write the problem in this form. However, this will only provide one such set rather than a sorted list, but as this problem is very hard in the general case, this is the best you can hope for. You would need more assumptions on your data to get better results (like bounds on the maximum number of such sets, the maximum number of items that can belong to more than k categories, etc.).
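As a rough illustration of the reduction itself (not of the enumeration algorithm), the bipartite graph can be stored as a plain adjacency list; the Item fields and the buildGraph name below are assumptions made for the sketch:
#include <string>
#include <utility>
#include <vector>
using namespace std;
struct Item {
    string name;
    double randomValue;        // becomes the edge weight
    int price;
    vector<int> categories;    // indices of the categories this item belongs to
};
// For every category, the list of (item index, weight) edges.
// A "set" in the question is then a matching that covers every category node.
vector<vector<pair<int, double>>> buildGraph(const vector<Item>& items, int categoryCount) {
    vector<vector<pair<int, double>>> edges(categoryCount);
    for (int i = 0; i < (int)items.size(); ++i)
        for (int cat : items[i].categories)
            edges[cat].push_back({i, items[i].randomValue});
    return edges;
}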
Alright, here is my second try to answer this question.
Let's say we have the following input:
class Item {
public:
    // categories encoded as bits in a single unsigned int for O(1) set checks
    unsigned int category;
    int name;
    int value;
    int price;
    ...
};
class ItemSet
{
public:
    set<Item> items;
    int sum;
};
First, sort the input data by highest random value, then by lowest price:
bool operator<(const Item& item1, const Item& item2) {
    if (item1.value == item2.value) {
        if (item1.price == item2.price) {
            return item1.name < item2.name;
        }
        return item1.price > item2.price;
    }
    return item1.value < item2.value;
}
...
vector<Item> v = generateTestItem();
// sort in descending order; reverse iterators are used because only operator< is defined
sort(v.rbegin(), v.rend());
Next, use backtracking to collect the top sets into a heap until the conditions are met. Having the input data sorted guarantees that the collected sets are the top ones by highest value and lowest price. One more thing to note: I compare item categories (currentCats) with bit manipulation, which is O(1).
priority_queue<ItemSet> output;
void helper(vector<Item>& input, set<Item>& currentItems, unsigned int currentCats, int sum, int index)
{
    if (index == input.size()) {
        // if index reached the end of the input, exit
        addOutput(currentItems);
        return;
    }
    if (output.size() >= TOP_X) {
        // if the output data reached the required size, exit
        return;
    }
    if (sum + input[index].price < PRICE_TAG) {
        if ((input[index].category & currentCats) == 0) {
            // this category does not exist in currentCats yet
            currentItems.insert(input[index]);
            helper(input, currentItems, currentCats | input[index].category, sum + input[index].price, index + 1);
        }
    } else {
        addOutput(currentItems);
        return;
    }
    if (currentItems.find(input[index]) != currentItems.end()) {
        currentItems.erase(input[index]);
    }
    helper(input, currentItems, currentCats, sum, index + 1);
    return;
}
void getTopItems(vector<Item>& items)
{
    set<Item> myset;
    helper(items, myset, 0, 0, 0);
}
In the worst case this backtracking runs in O(2^N), but since TOP_X is a limited value it should not take too long in practice.
I tested this code by generating random values and it seems to work fine. Full code can be found here.
I'm not exactly sure what you mean by "generating in-order sets".
I think any algorithm is going to generate sets, score them, and then try to generate better sets. Given all the constraints, I do not think you can generate the best set efficiently in one pass.
The 0-1 knapsack problem has been shown to be NP-hard, which means there is no known polynomial time (i.e. O(n^k)) solution. That problem is the same as you would have if, in your input, the random number was always equal to the price and there was only 1 category. In other words, your problem is at least as hard as the knapsack problem, so you cannot expect to find a guaranteed polynomial time solution.
You can generate all valid sets combinatorially pretty easily using nested loops: loop per category, looping over the items in that category. Early on you can improve the efficiency by skipping over an item if it has already been chosen and by skipping over the whole set once you find it is over the price cap. Put those results in a heap and then you can spit them out in order.
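For illustration, here is a minimal sketch of that nested-loop generation under an assumed PRICE_CAP, with hypothetical Item/itemsByCategory names; valid sets are pushed into a max-heap keyed on total random value so they can be popped in order:
#include <algorithm>
#include <functional>
#include <queue>
#include <vector>
using namespace std;
struct Item { int id; int price; int value; };
struct Candidate {
    vector<int> itemIds;
    int totalValue;
    bool operator<(const Candidate& o) const { return totalValue < o.totalValue; }
};
// itemsByCategory[c] = items that may fill category c (an item may appear under
// several categories); PRICE_CAP is the fixed price limit.
priority_queue<Candidate> generateSets(const vector<vector<Item>>& itemsByCategory, int PRICE_CAP) {
    priority_queue<Candidate> heap;
    vector<int> chosen;                               // item ids used so far
    // one recursion level per category = the "nested loops"
    function<void(size_t, int, int)> rec = [&](size_t cat, int price, int value) {
        if (price > PRICE_CAP) return;                // prune: already over the price cap
        if (cat == itemsByCategory.size()) {
            heap.push({chosen, value});               // every category is filled
            return;
        }
        for (const Item& it : itemsByCategory[cat]) {
            // skip an item that was already chosen for an earlier category
            if (find(chosen.begin(), chosen.end(), it.id) != chosen.end()) continue;
            chosen.push_back(it.id);
            rec(cat + 1, price + it.price, value + it.value);
            chosen.pop_back();
        }
    };
    rec(0, 0, 0);
    return heap;                                      // pop() yields sets by decreasing total value
}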
If your issue is that you want something with better performance than that, it seems to me more like constraint programming, or, more specifically, a constraint satisfaction problem. I suggest you look at the techniques used to handle those kinds of problems.

Algorithm / Data structure for largest set intersection in a collection of sets with a given set

I have a large collection of several million sets, C. The elements of my sets come from a universe of about 2000 possible elements. I need to know, for a given set, s, which set in C has the largest intersection with s? (Or the k sets in C with the k-largest intersections). I will be making many of these queries, sequentially, for different s.
I know that the obvious way to do this is to just loop over every set in C, compute the intersection, and take the max. Are there any smart data structures / programming tricks that can speed up my search? It would be great if I could do this faster than O(C).
EDIT: approximate answers would be alright too
I don't think there's a clever data structure that will help with asymptotic performance. But this is a perfect map reduce problem. A GPGPU would do nicely. For a universe of 2048 elements, a set as a bitmap is only 256 bytes. 4 million is only a gigabyte. Even a modestly spec'ed Nvidia has that. E.g. programming in CUDA, you'd copy C to graphics card RAM, map a chunk of the gigabyte to each GPU core for searching and then reduce across cores to find the final answer. This ought to take on the order of a very few milliseconds. Not fast enough? Just buy hotter hardware.
If you re-phrase your question along these lines, you'll probably get answers from experts in this kind of programming, which I'm not.
One simple trick is to sort the list of sets C in decreasing order by size, then proceed with brute force intersection tests as usual. As you go along, keep track of the set b with the biggest intersection so far. If you find a set whose intersection with the query set s has size |s| (or equivalently, has intersection equal to s -- use whichever of these tests is faster), you can immediately stop and return it as this is the best possible answer. Otherwise, if the next set from C has fewer than |b| elements, you can immediately stop and return b. This can easily be generalised to finding the top k matches.
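A sketch of that pruning scheme, assuming the ~2000-element universe is padded into 2048-bit bitsets and that C has already been sorted by decreasing set size (the bestMatch name is made up):
#include <bitset>
#include <vector>
using namespace std;
const size_t UNIVERSE = 2048;
using Set = bitset<UNIVERSE>;
// C must be sorted by decreasing c.count() once, up front.
const Set* bestMatch(const vector<Set>& C, const Set& s) {
    const Set* best = nullptr;
    size_t bestSize = 0;
    for (const Set& c : C) {
        if (c.count() < bestSize) break;             // no smaller set can beat the current best
        size_t inter = (c & s).count();
        if (inter > bestSize) {
            best = &c;
            bestSize = inter;
            if (bestSize == s.count()) break;        // intersection equals s: can't do better
        }
    }
    return best;
}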
I don't see any way to do this in less than O(C) per query, but I have some ideas on how to maximize efficiency. The idea is basically to build a lookup table for each element. If some elements are rare and some are common, you can have positive and negative lookup tables:
s[i]     // your query, an array of size 2 thousand, true/false
sign[i]  // whether the ith element is positive/negative lookup. +/- 1
sets[i]  // a list of all the sets that the ith element belongs/(doesn't) to
query(s):
    overlaps[i]  // an array of size C, initialized to 0's
    for i in len(s):
        if s[i]:
            for j in sets[i]:
                overlaps[j] += sign[i]
    return max_index(overlaps)
Especially if many of your elements are of widely differing probabilities (as you said), this approach should save you some time: very rare or very common elements can be dealt with almost instantly.
To further optimize: you can sort the structure so that the elements that are most common/most rare are dealt with first. After you have done the first e.g. 3/4, you can do a quick pass to see if the closest matching set is so far ahead of the next set that it is not necessary to continue, though again whether that is worthwhile depends on the details of your data's distribution.
Yet another refinement: make sets[i] one of two possible structures. If the element is very rare or very common, sets[i] is just a list of the sets that the ith element is in (or not in). However, suppose the ith element is in half the sets. Then sets[i] would be a list of indices half as long as the number of sets, and looping through it to increment overlaps is wasteful. So have a third value for sign[i]: if sign[i] == 0, the ith element is relatively close to 50% commonality (this may just mean between 5% and 95%, or anything else), and instead of a list of the sets in which it appears, it is simply an array of 1's and 0's with length equal to C. Then you would just add the array in its entirety to overlaps, which would be faster.
Put all of the elements from the million sets into a Hashtable. The key will be the element, the value will be a set of indexes that point to the containing sets.
HashSet<Element>[] AllSets = ...
// preprocess
Hashtable AllElements = new Hashtable(2000);
for (var index = 0; index < AllSets.Length; index++) {
    foreach (var elm in AllSets[index]) {
        if (!AllElements.ContainsKey(elm)) {
            AllElements.Add(elm, new HashSet<int>() { index });
        } else {
            ((HashSet<int>)AllElements[elm]).Add(index);
        }
    }
}
public List<HashSet<Element>> TopIntersect(HashSet<Element> set, int top = 1) {
    // <index, count>
    Dictionary<int, int> counts = new Dictionary<int, int>();
    foreach (var elm in set) {
        var setIndices = AllElements[elm] as HashSet<int>;
        if (setIndices != null) {
            foreach (var index in setIndices) {
                if (!counts.ContainsKey(index)) {
                    counts.Add(index, 1);
                } else {
                    counts[index]++;
                }
            }
        }
    }
    return counts.OrderByDescending(kv => kv.Value)
        .Take(top)
        .Select(kv => AllSets[kv.Key]).ToList();
}

Is there an efficient data structure for row and column swapping?

I have a matrix of numbers and I'd like to be able to:
Swap rows
Swap columns
If I were to use an array of pointers to rows, then I can easily swap rows in O(1), but swapping a column is O(N), where N is the number of rows.
I have a distinct feeling there isn't a win-win data structure that gives O(1) for both operations, though I'm not sure how to prove it. Or am I wrong?
Without having thought this entirely through:
I think your idea with the pointers to rows is the right start. Then, to be able to "swap" a column, I'd just have another array whose size is the number of columns, and store in each field the index of the current physical position of that column.
m =
[0] -> 1 2 3
[1] -> 4 5 6
[2] -> 7 8 9
c[] {0,1,2}
Now to exchange column 1 and 2, you would just change c to {0,2,1}
When you then want to read row 1 you'd do
for (i = 0; i < colcount; i++) {
    print m[1][c[i]];
}
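A small C++ sketch of this indirection, extended with a row map as well so both swaps are O(1) (class and member names are invented for the example):
#include <cstdio>
#include <numeric>
#include <utility>
#include <vector>
using namespace std;
// Matrix with O(1) row and column swapping via index indirection.
class PermutedMatrix {
    vector<vector<int>> data;      // physical storage, never moved after construction
    vector<int> rowMap, colMap;    // logical index -> physical index
public:
    PermutedMatrix(vector<vector<int>> d)
        : data(move(d)), rowMap(data.size()), colMap(data.empty() ? 0 : data[0].size()) {
        iota(rowMap.begin(), rowMap.end(), 0);
        iota(colMap.begin(), colMap.end(), 0);
    }
    int& at(int r, int c) { return data[rowMap[r]][colMap[c]]; }
    void swapRows(int r1, int r2)    { swap(rowMap[r1], rowMap[r2]); }  // O(1)
    void swapColumns(int c1, int c2) { swap(colMap[c1], colMap[c2]); }  // O(1)
};
int main() {
    PermutedMatrix m({{1, 2, 3}, {4, 5, 6}, {7, 8, 9}});
    m.swapColumns(1, 2);                // exchange columns 1 and 2
    for (int j = 0; j < 3; ++j)
        printf("%d ", m.at(1, j));      // prints: 4 6 5
    return 0;
}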
Just a random thought here (no experience of how well this really works, and it's a late night without coffee):
What I'm thinking is for the internals of the matrix to be a hashtable as opposed to an array.
Every cell within the array has three pieces of information:
The row in which the cell resides
The column in which the cell resides
The value of the cell
In my mind, this is readily represented by the tuple ((i, j), v), where (i, j) denotes the position of the cell (i-th row, j-th column), and v its value.
This would be a somewhat normal representation of a matrix. But let's abstract the idea here. Rather than i denoting the row as a position (i.e. 0 before 1 before 2 before 3, etc.), let's just consider i to be some sort of canonical identifier for its corresponding row. Let's do the same for j. (While in the most general case i and j could then be unrestricted, let's assume the simple case where they remain within the ranges [0..M] and [0..N] for an M x N matrix, but don't denote the actual coordinates of a cell.)
Now, we need a way to keep track of the identifier for a row and the current index associated with that row. This clearly requires a key/value data structure, but since the number of indices is fixed (matrices don't usually grow/shrink) and the indices are integral, we can implement it as a fixed, one-dimensional array. For a matrix of M rows, we can have (in C):
int RowMap[M];
For the m-th row, RowMap[m] gives the identifier of the row in the current matrix.
We'll use the same thing for columns:
int ColumnMap[N];
where ColumnMap[n] is the identifier of the n-th column.
Now to get back to the hashtable I mentioned at the beginning:
Since we have complete information (the size of the matrix), we should be able to generate a perfect hashing function (without collision). Here's one possibility (for modestly-sized arrays):
int Hash(int row, int column)
{
    return row * N + column;
}
If this is the hash function for the hashtable, we should get zero collisions for most sizes of arrays. This allows us to read/write data from the hashtable in O(1) time.
The cool part is interfacing the index of each row/column with the identifiers in the hashtable:
// row and column are given in the usual way, in the range [0..M] and [0..N].
// These parameters are really just used as handles to the internal row and
// column indices
int MatrixLookup(int row, int column)
{
    // Get the canonical identifiers of the row and column, and hash them.
    int canonicalRow = RowMap[row];
    int canonicalColumn = ColumnMap[column];
    int hashCode = Hash(canonicalRow, canonicalColumn);
    return HashTableLookup(hashCode);
}
Now, since the interface to the matrix only uses these handles, and not the internal identifiers, a swap operation of either rows or columns corresponds to a simple change in the RowMap or ColumnMap array:
// This function simply swaps the values at
// RowMap[row1] and RowMap[row2]
void MatrixSwapRow(int row1, int row2)
{
    int canonicalRow1 = RowMap[row1];
    int canonicalRow2 = RowMap[row2];
    RowMap[row1] = canonicalRow2;
    RowMap[row2] = canonicalRow1;
}
// This function simply swaps the values at
// ColumnMap[column1] and ColumnMap[column2]
void MatrixSwapColumn(int column1, int column2)
{
    int canonicalColumn1 = ColumnMap[column1];
    int canonicalColumn2 = ColumnMap[column2];
    ColumnMap[column1] = canonicalColumn2;
    ColumnMap[column2] = canonicalColumn1;
}
So that should be it: a matrix with O(1) access and mutation, as well as O(1) row swapping and O(1) column swapping. Of course, even an O(1) hash access will be slower than the O(1) of array-based access, and more memory will be used, but at least rows and columns are treated symmetrically.
I tried to be as agnostic as possible about exactly how you implement your matrix, so I wrote some C. If you'd prefer another language, I can change it (though it would be best if you understood it), but I think it's pretty self-descriptive. I can't ensure its correctness as far as C goes, since I'm actually a C++ guy trying to act like a C guy right now (and did I mention I don't have coffee?). Personally, writing this in a full OO language would do the entire design more justice, and also give the code some beauty, but like I said, this was a quickly whipped-up implementation.

Algorithms for testing a poker hand for a straight draw (4 to a straight)?

I'm in the throes of writing a poker evaluation library for fun and am looking to add the ability to test for draws (open ended, gutshot) for a given set of cards.
Just wondering what the "state of the art" is for this? I'm trying to keep my memory footprint reasonable, so the idea of using a look up table doesn't sit well but could be a necessary evil.
My current plan is along the lines of:
subtract the lowest rank from the rank of all cards in the set.
look to see if a certain sequence, e.g. 0,1,2,3 or 1,2,3,4 (for OESDs), is a subset of the modified collection.
I'm hoping to do better complexity-wise, as 7-card or 9-card sets will grind things to a halt using my approach.
Any input and/or better ideas would be appreciated.
The fastest approach is probably to assign a bit mask for each card rank (e.g. deuce=1, three=2, four=4, five=8, six=16, seven=32, eight=64, nine=128, ten=256, jack=512, queen=1024, king=2048, ace=4096), and OR together the mask values of all the cards in the hand. Then use an 8192-element lookup table to indicate whether the hand is a straight, an open-ender, a gut-shot, or nothing of significance (one could also include the various types of backdoor straight draw without affecting execution time).
Incidentally, using different bitmask values, one can quickly detect other useful hands like two-of-a-kind, three-of-a-kind, etc. If one has 64-bit integer math available, use the cube of the indicated bit masks above (so deuce=1, three=8, etc. up to ace=2^36) and add together the values of the cards. If the result, and'ed with 04444444444444 (octal), is non-zero, the hand is a four-of-a-kind. Otherwise, if adding 01111111111111 and and'ing with 04444444444444 yields non-zero, the hand is a three-of-a-kind or full house. Otherwise, if the result, and'ed with 02222222222222, is non-zero, the hand is either a pair or two-pair. To see if a hand contains two or more pairs, 'and' the hand value with 02222222222222 and save that value. Subtract 1, and 'and' the result with the saved value. If non-zero, the hand contains at least two pairs (so if it contains a three-of-a-kind, it's a full house; otherwise it's two-pair).
As a parting note, the computation done to check for a straight will also let you determine quickly how many different ranks of card are in the hand. If there are N cards and N different ranks, the hand cannot contain any pairs or better (but might contain a straight or flush, of course). If there are N-1 different ranks, the hand contains precisely one pair. Only if there are fewer different ranks must one use more sophisticated logic (if there are N-2, the hand could be two-pair or three-of-a-kind; if N-3 or fewer, the hand could be a "three-pair" (scores as two-pair), full house, or four-of-a-kind).
One more thing: if you can't manage an 8192-element lookup table, you could use a 512-element lookup table. Compute the bitmask as above, and then do lookups on array[bitmask & 511] and array[bitmask >> 4], and OR the results. Any legitimate straight or draw will register on one or other lookup. Note that this won't directly give you the number of different ranks (since cards six through ten will get counted in both lookups) but one more lookup to the same array (using array[bitmask >> 9]) would count just the jacks through aces.
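Here is a sketch of the rank-bitmask plus lookup-table idea; the 8192-entry table is filled once by brute force, and the HandClass codes and buildTable name are made up for the example (double gutshots are lumped in with open-enders here since both have two completing ranks):
#include <vector>
using namespace std;
enum HandClass { NOTHING, GUTSHOT, OPEN_ENDER, STRAIGHT };
// Does this 13-bit rank mask (deuce = bit 0 ... ace = bit 12) contain a straight?
static bool hasStraight(unsigned mask) {
    // duplicate the ace as a low "one" so the wheel (A-2-3-4-5) is caught too
    unsigned ext = (mask << 1) | ((mask >> 12) & 1u);
    for (int s = 0; s <= 9; ++s)
        if (((ext >> s) & 0x1Fu) == 0x1Fu) return true;
    return false;
}
// Build the 8192-entry table once.
vector<HandClass> buildTable() {
    vector<HandClass> table(1 << 13, NOTHING);
    for (unsigned mask = 0; mask < (1u << 13); ++mask) {
        if (hasStraight(mask)) { table[mask] = STRAIGHT; continue; }
        int outs = 0;                 // ranks that would complete a straight
        for (int r = 0; r < 13; ++r)
            if (!(mask & (1u << r)) && hasStraight(mask | (1u << r))) ++outs;
        if (outs >= 2) table[mask] = OPEN_ENDER;
        else if (outs == 1) table[mask] = GUTSHOT;
    }
    return table;
}
// Usage: OR the rank bits of the hand together and index the table, e.g. for
// 5-6-7-8: mask = (1<<3)|(1<<4)|(1<<5)|(1<<6); table[mask] == OPEN_ENDER.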
I know you said you want to keep the memory footprint as small as possible, but there is one quite memory-efficient lookup table optimization which I've seen used in some poker hand evaluators and have used myself. If you're doing heavy poker simulations and need the best possible performance, you might want to consider this. Though I admit in this case the difference isn't that big, because testing for a straight draw isn't a very expensive operation, the same principle can be used for pretty much every type of hand evaluation in poker programming.
The idea is that we create a kind of a hash function that has the following properties:
1) calculates a unique value for each different set of card ranks
2) is symmetric in the sense that it doesn't depend on the order of the cards
The purpose of this is to reduce the number of elements needed in the lookup table.
A neat way of doing this is to assign a prime number to each rank (2->2, 3->3, 4->5, 5->7, 6->11, 7->13, 8->17, 9->19, T->23, J->29, Q->31, K->37, A->41), and then calculate the product of the primes. For example if the cards are 39TJQQ, then the hash is 36536259.
To create the lookup table you go through all the possible combinations of ranks, and use some simple algorithm to determine whether they form a straight draw. For each combination you also calculate the hash value and then store the results in a map where Key is the hash and Value is the result of the straight draw check. If the maximum number of cards is small (4 or less) then even a linear array might be feasible.
To use the lookup table you first calculate the hash for the particular set of cards and then read the corresponding value from the map.
Here's an example in C++. I don't guarantee that it's working correctly, and it could probably be optimized a lot by using a sorted array and binary search instead of a hash map; a hash map is kinda slow for this purpose.
#include <cstring>
#include <iostream>
#include <unordered_map>
#include <vector>
using namespace std;
const int MAXCARDS = 9;
unordered_map<long long, bool> lookup;
// "Hash function" that is unique for each set of card ranks, and also
// symmetric so that the order of cards doesn't matter.
long long rank_hash(const vector<int>& cards)
{
    static const int primes[52] = {
        2,3,5,7,11,13,17,19,23,29,31,37,41,
        2,3,5,7,11,13,17,19,23,29,31,37,41,
        2,3,5,7,11,13,17,19,23,29,31,37,41,
        2,3,5,7,11,13,17,19,23,29,31,37,41
    };
    long long res = 1;
    for (vector<int>::const_iterator i = cards.begin(); i != cards.end(); i++)
        res *= primes[*i];
    return res;
}
// Tests whether there is a straight draw (assuming there is no
// straight). Only used for filling the lookup table.
bool is_draw_slow(const vector<int>& cards)
{
    int ranks[14];
    memset(ranks, 0, 14 * sizeof(int));
    for (vector<int>::const_iterator i = cards.begin(); i != cards.end(); i++)
        ranks[*i % 13 + 1] = 1;
    ranks[0] = ranks[13]; // ace counts also as 1
    int count = ranks[0] + ranks[1] + ranks[2] + ranks[3];
    for (int i = 0; i <= 9; i++) {
        count += ranks[i + 4];
        if (count == 4)
            return true;
        count -= ranks[i];
    }
    return false;
}
void create_lookup_helper(vector<int>& cards, int idx)
{
    for (; cards[idx] < 13; cards[idx]++) {
        if (idx == (int)cards.size() - 1)
            lookup[rank_hash(cards)] = is_draw_slow(cards);
        else {
            cards[idx + 1] = cards[idx];
            create_lookup_helper(cards, idx + 1);
        }
    }
}
void create_lookup()
{
    for (int i = 1; i <= MAXCARDS; i++) {
        vector<int> cards(i);
        create_lookup_helper(cards, 0);
    }
}
// Test for a draw using the lookup table
bool is_draw(const vector<int>& cards)
{
    return lookup[rank_hash(cards)];
}
int main(int argc, char* argv[])
{
    create_lookup();
    cout << lookup.size() << endl; // 497419
    int cards1[] = {1,2,3,4};
    int cards2[] = {0,1,2,7,12};
    int cards3[] = {3,16,29,42,4,17,30,43};
    cout << is_draw(vector<int>(cards1, cards1 + 4)) << endl; // true
    cout << is_draw(vector<int>(cards2, cards2 + 5)) << endl; // true
    cout << is_draw(vector<int>(cards3, cards3 + 8)) << endl; // false
}
This may be a naive solution, but I am pretty sure it would work, although I am not sure about the performance.
Assuming again that the cards are represented by the numbers 1-13: if your 4 cards have a numeric range of 3 or 4 (from highest to lowest card rank) and contain no duplicates, then you have a possible straight draw.
A range of 3 implies you have an open-ended draw eg 2,3,4,5 has a range of 3 and contains no duplicates.
A range of 4 implies you have a gutshot (as you called it) eg 5,6,8,9 has a range of 4 and contains no duplicates.
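A minimal sketch of that range test for exactly 4 cards, assuming ranks 1-13 (drawType is an invented name; per the rule above a range of 3 means open-ended and 4 means gutshot):
#include <set>
#include <vector>
using namespace std;
// 0 = no draw, 3 = open-ended (range of 3), 4 = gutshot (range of 4)
int drawType(const vector<int>& fourCards) {
    set<int> unique(fourCards.begin(), fourCards.end());
    if (unique.size() != 4) return 0;                  // duplicates: no draw
    int range = *unique.rbegin() - *unique.begin();    // highest minus lowest rank
    return (range == 3 || range == 4) ? range : 0;
}
// drawType({2,3,4,5}) == 3 (open-ended), drawType({5,6,8,9}) == 4 (gutshot)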
Update: per Christian Mann's comment... it can be this:
let's say A is represented as 1, J as 11, Q as 12, etc.
loop through 1 to 13 as i
if my cards already have this card i, then don't worry about this case, skip to the next card
for this card i, look to the left for the number of consecutive cards there are
same as above, but look to the right
if count_left_consecutive + count_right_consecutive == 4, then we found a case
You will need to define the functions that count the consecutive cards to the left and to the right... and also handle the case where, when looking for right-consecutive cards after K, the A counts as consecutive.
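A sketch of those helpers and the loop (names follow the description above; ranks are 1-13, with the ace additionally treated as 14 when counting to the right of the king):
#include <set>
using namespace std;
// how many consecutive ranks immediately below r are held
static int count_left_consecutive(const set<int>& ranks, int r) {
    int n = 0;
    while (ranks.count(r - 1 - n)) ++n;
    return n;
}
// how many consecutive ranks immediately above r are held; 14 means the ace played high
static int count_right_consecutive(const set<int>& ranks, int r) {
    int n = 0;
    while (ranks.count(r + 1 + n) || (r + 1 + n == 14 && ranks.count(1))) ++n;
    return n;
}
// true if one missing rank i would complete 5 in a row, i.e. 4 to a straight
// (a hand that already contains a straight will also report true)
bool hasFourToStraight(const set<int>& ranks) {
    for (int i = 1; i <= 13; ++i) {
        if (ranks.count(i)) continue;                  // already held: skip this case
        if (count_left_consecutive(ranks, i) + count_right_consecutive(ranks, i) >= 4)
            return true;                               // adding i makes a straight
    }
    return false;
}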

An efficient way to find matching items in N lists?

Given a number of lists of items, find the lists with matching items.
The brute force pseudo-code for this problem looks like:
foreach list L
    foreach item I in list L
        foreach list L2 such that L2 != L
            for each item I2 in L2
                if I == I2
                    return new 3-tuple(L, L2, I) //not important for the algorithm
I can think of a number of different ways of going about this - creating a list of lists and removing each candidate list after searching the others for example - but I'm wondering if there is a better algorithm for this?
I'm using Java, if that makes a difference to your implementation.
Thanks
Create a Map<Item,List<List>>.
Iterate through every item in every list.
Each time you touch an item, add the current list to that item's entry in the Map.
You now have a Map entry for each item that tells you what lists that item appears in.
This algorithm is about O(N) where N is the number of lists (the exact complexity will be affected by how good your Map implementation is). I believe your algorithm was at least O(N^2).
Caveat: I am comparing number of comparisons, not memory use. If your lists are super huge and full of mostly non duplicated items, the map that my method creates might become too big.
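A rough sketch of that inverted index; since this thread already mixes languages, it is written in C++ with illustrative names (invertedIndex, string items):
#include <map>
#include <string>
#include <vector>
using namespace std;
// For each item, record the index of every list it appears in.
map<string, vector<size_t>> invertedIndex(const vector<vector<string>>& lists) {
    map<string, vector<size_t>> index;
    for (size_t i = 0; i < lists.size(); ++i)
        for (const string& item : lists[i])
            index[item].push_back(i);     // one pass over every item of every list
    return index;
}
// Usage: every entry whose vector has size >= 2 is an item shared by the lists it names.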
As per your comment, you want a MultiMap implementation. A multimap is like a Map, but it can map each key to multiple values. Store the value and a reference to all the lists that contain that value.
Map<Object, List>
Of course you should use a type-safe key instead of Object and a type-safe List as the value. What you are trying to do is called an Inverted Index.
I'll start with the assumption that the datasets can fit in memory. If not, then you will need something fancier.
I refer below to a "set", where I am thinking of something like a C++ std::set. I don't know the Java equivalent, but any storage scheme that permits rapid lookup (tree, hash table, whatever).
Comparing three lists: L0, L1 and L2.
Read L0, placing each element in a set: S0.
Read L1, placing items that match an element of S0 into a new set: S1, and discarding others.
Discard S0.
Read L2, keeping items that match an element of S1 and discarding others.
Update
Just realised that the question was for "n" lists, not three. However the extension should be obvious. (I hope)
Update 2
Some untested C++ code to illustrate the algorithm
#include <algorithm> // std::swap
#include <string>
#include <vector>
#include <set>
#include <cassert>
typedef std::vector<std::string> strlist_t;
strlist_t GetMatches(std::vector<strlist_t> vLists)
{
    assert(vLists.size() > 1);
    std::set<std::string> s0, s1;
    std::set<std::string> *pOld = &s1;
    std::set<std::string> *pNew = &s0;
    // unconditionally load first list as "new"
    s0.insert(vLists[0].begin(), vLists[0].end());
    for (size_t i = 1; i < vLists.size(); ++i)
    {
        // swap recently read "new" to "old" now for comparison with new list
        std::swap(pOld, pNew);
        pNew->clear();
        // only keep new elements if they are matched in old list
        for (size_t j = 0; j < vLists[i].size(); ++j)
        {
            if (pOld->end() != pOld->find(vLists[i][j]))
            {
                // found match
                pNew->insert(vLists[i][j]);
            }
        }
    }
    return strlist_t(pNew->begin(), pNew->end());
}
You can use a trie, modified to record what lists each node belongs to.
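A compact sketch of that trie, assuming string items that are unique within each list (TrieNode, insertWord, and sharedItems are invented names): each terminal node stores the indices of the lists its word occurs in, so the matching items are exactly the nodes holding two or more list indices.
#include <map>
#include <memory>
#include <string>
#include <utility>
#include <vector>
using namespace std;
struct TrieNode {
    map<char, unique_ptr<TrieNode>> children;
    vector<size_t> listIds;    // lists in which the word ending at this node appears
};
void insertWord(TrieNode& root, const string& word, size_t listId) {
    TrieNode* node = &root;
    for (char ch : word) {
        auto& child = node->children[ch];
        if (!child) child = make_unique<TrieNode>();
        node = child.get();
    }
    node->listIds.push_back(listId);
}
// Collect (item, list indices) for every item stored in at least two lists.
void sharedItems(const TrieNode& node, string& prefix, vector<pair<string, vector<size_t>>>& out) {
    if (node.listIds.size() >= 2)
        out.push_back({prefix, node.listIds});
    for (const auto& kv : node.children) {
        prefix.push_back(kv.first);
        sharedItems(*kv.second, prefix, out);
        prefix.pop_back();
    }
}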
