Top-down vs. bottom-up recursive data definition? - data-structures

While reading "Essentials of Programming Languages" I came across top down and bottom up definitions for list of integers.While I understand what these definitions say. But I am not able to understand fine details of top down vs. bottom up approach. How do I look at a definition and say weather it is top down or bottom up?
top-down
A Scheme list is a list of integers
if and only if either
it is the empty list, or
it is a pair whose car is an integer and whose cdr is a list of integers.
bottom-up
The set List-of-Int is the smallest
set of Scheme lists satisfying the following two properties:
() ∈ List-of-Int, and
if n ∈ Int and l ∈ List-of-Int, then (n . l) ∈ List-of-Int.

These two concepts are related to the notion of induction and recursion. Both of these concepts are ways of describing infinitely large families of objects, though they differ in their approach.
When you're defining something bottom-up, you are defining it inductively. The idea is that you start out with a set of fixed elements and a way of combining those elements into new elements. In the bottom-up definition above, initially the only element in the set of all lists of integers is the empty list. You also have a rule that allows you to take a list in the set of lists of integers and grow it into something one step larger by prepending an integer.
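To make this concrete, here is a small Python sketch of the bottom-up reading (my own illustration, using nested pairs (n, rest) to stand in for Scheme pairs and None for the empty list):

# Bottom-up flavour: start from the one base element and grow by the one rule.
EMPTY = None                      # plays the role of ()

def cons(n, rest):
    """The only way to build a larger list of integers: prepend an integer."""
    assert isinstance(n, int)
    return (n, rest)              # plays the role of (n . l)

# The list (1 2 3) is obtained from EMPTY by three applications of the rule:
one_two_three = cons(1, cons(2, cons(3, EMPTY)))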
When you're defining something top-down, you are defining it recursively. The idea is that you're beginning with some very large family of objects - in this case, every possible list - and then describing just those lists that are composed solely of integers. Elements defined this way are usually characterized by taking an arbitrary object and ruling out the objects that don't match. For example, in the case of lists of integers, you decide whether something is a list of integers by taking any list you feel like and verifying that if you keep breaking it down and down and down, you eventually bottom out at an object that you already know is a list of integers (in this case, just the empty list).
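Again as an illustration (not from the book), a top-down check over the same pair representation would look like this in Python:

def is_list_of_int(x):
    """Top-down flavour: keep breaking the value apart and check that it
    bottoms out at the empty list (None)."""
    if x is None:                              # the empty list
        return True
    return (isinstance(x, tuple) and len(x) == 2
            and isinstance(x[0], int)          # the car is an integer
            and is_list_of_int(x[1]))          # the cdr is again a list of integers

# is_list_of_int((1, (2, None)))   -> True
# is_list_of_int(("a", (2, None))) -> False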
The two forms are actually equivalent to one another, but they serve different purposes. Induction builds up the entire set of valid objects and then declares that these are exactly the objects matching the description. Recursion doesn't define anything up front; instead, it checks whether a given object matches some criteria by taking it apart and verifying the pieces. Due to the magical way in which the two are mathematically defined, any inductive definition can be turned into a recursive definition and vice versa (assuming that all objects you're talking about are finite).
EDIT: If you're really up for a fun ride, you might want to check out the related concepts of coinduction and corecursion. These are a mathematical dual to induction and recursion and provide an entirely different way of thinking about how to define a data structure. In particular, they allow for infinitely large data structures, which can't normally be defined inductively. Interestingly, there's a connection between coinduction, corecursion, induction, and recursion in terms of fixed points. You can think of the inductive definition of a data structure as the smallest set meeting some property, while the coinductive definition is the largest set with that property. It's really cool!

Related

Find mutually compatible options from list of list of options

For purposes of this question, let us call a list of mutually incompatible options an "OptionS". I have a list of such OptionS, where each Option, apart from disqualifying all other Options in its own OptionS list, also disqualifies some Options from the other OptionS lists. These rules are symmetrical, so if A forbids B, B forbids A.
I want to pick exactly one Option from each list, such that no Options disqualify each other. There are too many Options (and OptionS) and too few disqualifications in each step to brute-force a backtracking solution.
It reminds me a bit of Sudoku, but it is not an exact analog. From certain external factors, I have a rough likelihood for the different Options, or at least an ordering.
Is there a known better solution to this problem? Is it in NP?
Currently, I plan to just take random "paths" through the solution space, weighted by likelihood. A sort of simulated annealing.
EDIT - Clarification
I have a number, let's say between 5 and 500, of vectors.
Each vector contains a number, between 10 and 10000, of elements
Each element rules out a number of elements in the other vectors
This relation is symmetric
I want to pick exactly one element from each vector in a way that no elements disqualify each other
If there is no way to choose one from each vector, I want to at least choose as many as possible. The nature of the data is such that there will always be at least one (and at most a few) solution (or almost-solution - with just a few misses).
I cannot share the real data, but an example would be that the elements are integers between 1 and 10e9 and that only elements whose pairwise sum has more than P prime factors are allowed. Some numbers are more likely than others to "fit" other numbers, since larger numbers tend to have more factors, which makes some choices more likely, just as in the real data.
Pick P and the sizes and number of vectors as needed to make it suitably challenging :).
My naive solution:
I order the elements by how many other elements they rule out and try those that rule out few first (because that gives a larger chance of being able to pick one from each).
Then I order the vectors by how many elements the "best" element rules out. Vectors that rule out many other elements are first. So the most constrained vector is tried first, even though the least constrained elements of that vector are tried first.
I then search depth first
The problem with this approach is that if the first choice is wrong, then the depth first search will never have time to reach the next choice.
A better way, which I try to explain in a comment below, would be to score each partial choice (node in the search tree) of elements according to how many you have chosen and how many elements are left. Then I could look deeper in the highest scoring node at each step, so the first choice is less rigid.
A similar way, which I might try first because it is slightly easier, is to do simulated annealing and take random paths, weighted by how many possibilities they keep, down the tree.
Depending on what constraints are allowed, I think you can reduce SAT to this.
Take a SAT expression e.g. (A|B|C)(~A|C|~D)...
Replace ~A by a and make a vector out of each clause, giving you {A,B,C} {a,C,d}...
You now have the problem of choosing one element from each vector, subject to the constraint that you cannot choose both versions of a variable - the constraints say that A is incompatible with a, B is incompatible with b, and so on.
If you can solve this instance of your problem, you can solve SAT: set to true the variables chosen as A, B, C, ..., set to false the variables chosen as a, b, c, ..., and make an arbitrary choice for anything not chosen - therefore your problem is at least as hard as SAT. (Unless you never encounter these sorts of constraints, in which case I have not proved that your problem is this hard.)
Conversely, given an instance of your problem, associate a variable with each element and write the constraints as boolean expressions (typically over only 2 variables), which gives something that looks like 2-SAT, except that you also need an expression of the form (A|B|C|D|...) for each vector to say that you must choose at least one element from it. So the exact-solution version of your problem, at least, might code up quite nicely as input for a SAT solver - it is therefore in NP, and since we have already shown it is NP-hard, it is NP-complete.
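As a sketch of that second direction (my own illustration; it assumes every element has a unique id and that conflicts are given as unordered pairs of ids), the encoding into CNF clauses with DIMACS-style integer literals could look like this in Python:

from itertools import combinations

def encode_as_cnf(vectors, conflicts):
    """vectors: list of lists of element ids; conflicts: iterable of
    frozenset({x, y}) pairs of mutually disqualifying ids.
    Returns (clauses, var) where clauses is a list of lists of DIMACS
    literals and var maps element id -> DIMACS variable number."""
    var = {}
    def var_of(elem):
        if elem not in var:
            var[elem] = len(var) + 1              # DIMACS variables start at 1
        return var[elem]

    clauses = []
    for vec in vectors:
        clauses.append([var_of(e) for e in vec])  # pick at least one per vector
        for a, b in combinations(vec, 2):         # elements of one vector exclude each other
            clauses.append([-var_of(a), -var_of(b)])
    for pair in conflicts:
        a, b = tuple(pair)
        clauses.append([-var_of(a), -var_of(b)])  # cross-vector disqualification
    return clauses, var

def to_dimacs(clauses, num_vars):
    lines = ["p cnf %d %d" % (num_vars, len(clauses))]
    lines += [" ".join(map(str, c)) + " 0" for c in clauses]
    return "\n".join(lines)

The output of to_dimacs can then be handed to any standard SAT solver.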
My first recommendation would be to find an off-the-shelf constraint solver and try that (request a maximum-weight solution with the log-likelihoods as weights), but if you're determined to implement a solver from scratch, then I would suggest that you start with something like WalkSAT. To summarize the link in the language of your question: at all times, keep a list of option choices (one from each option list, not necessarily compatible) and a list of conflicts (i.e., a set of pairs of indexes into the list of option lists). Repeatedly choose a conflict at random and resolve it by choosing differently for one half of the conflict or the other (most of the time) so as to decrease the number of conflicts afterward as much as possible or (some of the time) randomly, perhaps according to the likelihoods. Good data structures will be essential in making this run fast.
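A minimal sketch of that WalkSAT-style loop (my own illustration: incompatible(x, y) is a hypothetical symmetric predicate, and the conflict list is recomputed from scratch each iteration, which is exactly where the good data structures would come in):

import random

def local_search(vectors, incompatible, iterations=100000, noise=0.1):
    """Keep one (possibly conflicting) choice per vector and repeatedly
    repair a randomly chosen conflict, mostly greedily, sometimes randomly."""
    choice = [random.choice(v) for v in vectors]

    def conflicts_with(i, candidate):
        return sum(1 for j, other in enumerate(choice)
                   if j != i and incompatible(candidate, other))

    def conflicting_pairs():
        return [(i, j)
                for i in range(len(choice))
                for j in range(i + 1, len(choice))
                if incompatible(choice[i], choice[j])]

    for _ in range(iterations):
        pairs = conflicting_pairs()
        if not pairs:
            return choice                          # conflict-free assignment found
        i, j = random.choice(pairs)
        side = random.choice((i, j))               # re-choose one half of the conflict
        if random.random() < noise:                # noise step, WalkSAT style
            choice[side] = random.choice(vectors[side])
        else:                                      # greedy step: minimise resulting conflicts
            choice[side] = min(vectors[side], key=lambda c: conflicts_with(side, c))
    return None                                    # gave up; best effort only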

Optimized Algorithm: Fastest Way to Derive Sets

I'm writing a program for a competition and I need to be faster than all the other competitors. For this I need a little algorithm help; ideally I'd be using the fastest algorithm.
For this problem I am given 2 things. The first is a list of tuples, each of which contains exactly two elements (strings), each of which represents an item. The second is an integer, which indicates how many unique items there are in total. For example:
# of items = 3
[("ball","chair"),("ball","box"),("box","chair"),("chair","box")]
(The same tuples can be repeated; they are not necessarily unique.) My program is supposed to figure out the maximum number of tuples that can "agree" when the items are sorted into two groups. This means: if all the items are split into two ideal groups, group 1 and group 2, what is the maximum number of tuples that can have their first item in group 1 and their second item in group 2?
For example, the answer to my earlier example would be 2, with "ball" in group 1 and "chair" and "box" in group 2, satisfying the first two tuples. I do not necessarily need to know which items go in which group, I just need to know what the maximum number of satisfied tuples could be.
At the moment I'm trying a recursive approach, but it's running in O(n^2), which in my opinion is far too inefficient. Does anyone have a method that could produce a faster algorithm?
Thanks!!!!!!!!!!
Speed-up approaches for your task:
1. Use integers
Convert the strings to integers (store the strings in an array and use their positions in the tuples).
String[] words = {"ball", "chair", "box"};
In the tuples, ball now has number 0 (position 0 in the array), chair 1, box 2.
Comparing ints is faster than comparing Strings.
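The same interning idea in Python, just as an illustration (the snippet above is Java, but the trick is language-independent):

words = ["ball", "chair", "box"]
index = {w: i for i, w in enumerate(words)}              # "ball" -> 0, "chair" -> 1, "box" -> 2

tuples = [("ball", "chair"), ("ball", "box"), ("box", "chair"), ("chair", "box")]
int_tuples = [(index[a], index[b]) for a, b in tuples]   # [(0, 1), (0, 2), (2, 1), (1, 2)]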
2. Avoid recursion
Recursion is slow, due to the call overhead.
For example, look at the binary search algorithm in a recursive implementation, then look at how Java implements binarySearch() (with a while loop, iteratively).
Recursion is helpful when a problem is so complex that a non-recursive implementation would be too complex for a human brain.
Iteration is faster, but not when you merely mimic the recursive calls by implementing your own stack.
However, you can start with a recursive implementation; once it works and the algorithm is a suitable one, try to convert it to a non-recursive implementation.
3. If possible, avoid objects
If you want the fastest solution, this is where it becomes ugly!
A tuple array can be stored either as an array of a class Point(x,y) or, probably faster,
as an array of int:
Example:
(1,2), (2,3), (3,4) can be stored as the array: (1,2,2,3,3,4)
This needs much less memory, because an object needs at least 12 bytes (in Java).
Less memory means faster: when the arrays are really big, your structure will hopefully fit in the processor cache, while the object array will not.
4. Programming language
In C it will be faster than in Java.
Maximum cut is a special case of your problem, so I doubt you will find a quadratic algorithm for it. (Maximum cut is NP-complete, and it corresponds to the case where every tuple (A,B) also appears in reverse as (B,A) the same number of times.)
The best strategy for you to try here is "branch and bound." It's a variant of the straightforward recursive search you've probably already coded up. You keep track of the value of the best solution you've found so far. In each recursive call, you check whether it's even possible to beat the best known solution with the choices you've fixed so far.
One thing that may help (or may hurt) is to "probe": for each as-yet-unfixed item, see if putting that item on one of the two sides leads only to suboptimal solutions; if so, you know that item needs to be on the other side.
Another useful trick is to recurse on items that appear frequently both as the first element and as the second element of your tuples.
You should pay particular attention to the "bound" step --- finding an upper bound on the best possible solution given the choices you've fixed.
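A rough branch-and-bound sketch in Python (my own illustration, not tuned: items are assumed to already be integers 0..num_items-1, e.g. after interning the strings, and the bound is simply the number of tuples not yet violated by the partial assignment):

def max_agreeing_tuples(num_items, tuples):
    """Assign every item to group 1 or 2, maximising the number of tuples
    (a, b) with a in group 1 and b in group 2, pruning with a simple bound."""
    best = 0
    side = [0] * num_items                        # 0 = unassigned, else group 1 or 2

    def violated(a, b):
        return side[a] == 2 or side[b] == 1

    def satisfied(a, b):
        return side[a] == 1 and side[b] == 2

    def recurse(item):
        nonlocal best
        if item == num_items:
            best = max(best, sum(satisfied(a, b) for a, b in tuples))
            return
        # Bound: tuples not yet violated can, at best, all end up satisfied.
        if sum(not violated(a, b) for a, b in tuples) <= best:
            return
        for group in (1, 2):
            side[item] = group
            recurse(item + 1)
        side[item] = 0

    recurse(0)
    return best

# With ball=0, chair=1, box=2:
# max_agreeing_tuples(3, [(0, 1), (0, 2), (2, 1), (1, 2)])  -> 2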

Algorithm to cover set of k-combinations of M with subsets of M

I'm working on an application for which I want to take the set C of all possible k-combinations of the elements of M (with |M| = m), and cover C with the sets of k-combinations of subsets Q_i of M, where |Q_i| = n < m for every Q_i.
So there are (m choose k) combinations to cover, and each set Q_i of n elements covers (n choose k) of them.
What I'd like is an algorithm that constructs the sets Q_i so that their number q is minimized (i.e., as close to (m choose k) / (n choose k) as possible).
So, for example, if m=100, k=3, and n=10, I would want the smallest collection of 10-element sets such that their respective sets of 3-combinations cover all (100 choose 3) 3-combinations of M.
I'm not sure if this will help or not, but I have written a class to handle common functions for working with the binomial coefficient, which is the type of problem that your problem falls under. It performs the following tasks:
Outputs all the K-indexes in a nice format for any N choose K to a file. The K-indexes can be substituted with more descriptive strings or letters. This method makes solving this type of problem quite trivial.
Converts the K-indexes to the proper index of an entry in the sorted binomial coefficient table. This technique is much faster than older published techniques that rely on iteration. It does this by using a mathematical property inherent in Pascal's Triangle. My paper talks about this. I believe I am the first to discover and publish this technique, but I could be wrong.
Converts the index in a sorted binomial coefficient table to the corresponding K-indexes.
Uses Mark Dominus's method to calculate the binomial coefficient, which is much less likely to overflow and works with larger numbers.
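For reference, a common multiplicative way to compute the binomial coefficient while keeping intermediate values small looks like this in Python (a sketch only; I can't vouch that it is exactly the method the class uses):

def binom(n, k):
    """n choose k, interleaving multiplications and exact divisions so the
    intermediate values never grow much beyond the final result."""
    if k < 0 or k > n:
        return 0
    k = min(k, n - k)
    result = 1
    for i in range(1, k + 1):
        result = result * (n - k + i) // i     # the division is always exact here
    return result

# binom(100, 3) -> 161700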
The class is written in .NET C# and provides a way to manage the objects related to the problem (if any) by using a generic list. The constructor of this class takes a bool value called InitTable that when true will create a generic list to hold the objects to be managed. If this value is false, then it will not create the table. The table does not need to be created in order to perform the 4 above methods. Accessor methods are provided to access the table.
There is an associated test class which shows how to use the class and its methods. It has been extensively tested with 2 cases and there are no known bugs.
To read about this class and download the code, see Tablizing the Binomial Coefficient.
It should not be hard to convert this class to the language of your choice.
From your description of the problem, it looks like you should set up one loop for N (and another for K if it changes as well), and then create a binomial coefficient object (BC) for that N, K combination. Call the unsigned long version of GetBinCoeff() with the BC object to get the total number of combinations. Then set up another loop to go through the total number of combinations of each BC object. Inside that loop, call the BC GetKIndexes method to get the K-indexes for each index (i.e., combination) and then do your calculation. I'm not exactly sure what you are trying to minimize. If my suggestion is not clear or helpful enough, try posting a more detailed example that clearly shows the results you are looking for.
I cross-posted this question on Math Overflow
It turns out that this is a well-trodden problem in combinatorics called the covering design problem.
There is, in general, no algorithm that guarantees a minimum, although there are algorithms that get pretty close to it. You can find existing known coverings and related research here.
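As a starting point before consulting the known coverings, a simple greedy heuristic (no optimality guarantee; my own sketch) already produces reasonable covers for small parameters:

from itertools import combinations

def greedy_cover(m, n, k):
    """Cover every k-subset of range(m) with blocks of size n, greedily:
    seed each block with an uncovered k-subset, then grow it one element
    at a time, always adding the element that covers the most still-uncovered
    k-subsets."""
    uncovered = set(combinations(range(m), k))
    blocks = []
    while uncovered:
        block = set(next(iter(uncovered)))
        while len(block) < n:
            def gain(x):
                return sum(1 for c in combinations(sorted(block | {x}), k)
                           if x in c and c in uncovered)
            block.add(max((x for x in range(m) if x not in block), key=gain))
        blocks.append(sorted(block))
        uncovered -= set(combinations(sorted(block), k))
    return blocks

# len(greedy_cover(7, 4, 3))  -> a handful of 4-element blocks covering all 35 triples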

Quickly checking if set is superset of stored sets

The problem
I am given N arrays of C booleans. I want to organize these into a data structure that allows me to do the following operation as fast as possible: given a new array, return true if this array is a "superset" of any of the stored arrays. By superset I mean this: A is a superset of B if A[i] is true for every i where B[i] is true. If B[i] is false, then A[i] can be anything.
Or, in terms of sets instead of arrays:
Store N sets (each with C possible elements) in a data structure so you can quickly look up whether a given set is a superset of any of the stored sets.
Building the data structure can take as long as needed, but the lookup should be as efficient as possible, and the data structure can't take too much space.
Some context
I think this is an interesting problem on its own, but for the thing I'm really trying to solve, you can assume the following:
N = 10000
C = 1000
The stored arrays are sparse
The looked up arrays are random (so not sparse)
What I've come up with so far
For O(N*C) lookup: just iterate over all the arrays. This is just too slow, though.
For O(C) lookup: I had a long description here, but as Amit pointed out in the comments, it was basically a BDD. While this has great lookup speed, it has an exponential number of nodes. With N and C so large, this takes too much space.
I hope that somewhere in between this O(N*C) solution and the O(C) solution, there's maybe an O(log(N)*C) solution that doesn't require an exponential amount of space.
EDIT: A new idea I've come up with
For O(sqrt(N)C) lookup: Store the arrays as a prefix trie. When looking up an array A, go to the appropriate subtree if A[i]=0, but visit both subtrees if A[i]=1.
My intuition tells me that this should make the (average) complexity of the lookup O(sqrt(N)C), if you assume that the stored arrays are random. But: 1. they're not, the arrays are sparse. And 2. it's only intuition, I can't prove it.
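To make the idea concrete, a minimal sketch (arrays as Python lists of 0/1, all of the same length C; for C in the thousands an iterative walk or a higher recursion limit would be needed):

def build_trie(stored_arrays):
    """Binary trie over fixed-length 0/1 arrays; each node is a 2-slot list."""
    root = [None, None]
    for arr in stored_arrays:
        node = root
        for bit in arr:
            if node[bit] is None:
                node[bit] = [None, None]
            node = node[bit]
    return root

def contains_subset(root, query):
    """True if some stored array B satisfies B[i] <= query[i] for all i,
    i.e. the query is a superset of at least one stored array."""
    def walk(node, i):
        if node is None:
            return False
        if i == len(query):
            return True                            # a full stored array was matched
        if query[i] == 1:                          # stored bit may be 0 or 1 here
            return walk(node[0], i + 1) or walk(node[1], i + 1)
        return walk(node[0], i + 1)                # stored bit must be 0 here
    return walk(root, 0)

# trie = build_trie([[1, 0, 1], [0, 1, 0]])
# contains_subset(trie, [1, 1, 1])  -> True
# contains_subset(trie, [1, 0, 0])  -> False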
I will try out both this new idea and the BDD method, and see which of the two works out best.
But in the meantime, doesn't this problem occur more often? Doesn't it have a name? Hasn't there been previous research? It really feels like I'm reinventing the wheel here.
Just to add some background information to the prefix trie solution, recently I found the following paper:
I.Savnik: Index data structure for fast subset and superset queries. CD-ARES, IFIP LNCS, 2013.
The paper proposes the set-trie data structure (container) which provides support for efficient storage and querying of sets of sets using the trie data structure, supporting operations like finding all the supersets/subsets of a given set from a collection of sets.
For any python users interested in an actual implementation, I came up with a python3 package based partly on the above paper. It contains a trie-based container of sets and also a mapping container where the keys are sets. You can find it on github.
I think prefix trie is a great start.
Since your arrays are sparse, I would additionally test them in bulk. If (B1 ∪ B2) ⊆ A, both are included. So the idea is to OR-pack the arrays in pairs, and to reiterate until there is only one "root" array (this takes only twice as much space overall). It allows you to answer 'Yes' to your question earlier, which is mainly useful if you don't need to know which array is actually contained.
Independently, you can apply to each array a hash function that preserves the ordering,
i.e. B ⊆ A ⇒ h(B) ≼ h(A).
ORing the bits together is such a function, but you can also count the 1-bits in suitable partitions of the array. Here you can eliminate candidates faster (answering 'No' for a particular stored array).
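A quick sketch of that bit-count signature (an illustration; the number of partitions is arbitrary):

def signature(arr, parts=8):
    """Count the 1-bits in each of `parts` roughly equal slices of the array.
    If B is a subset of A, every slice count of B is <= the corresponding
    count of A, so a mismatching signature rules B out immediately."""
    step = -(-len(arr) // parts)                   # ceiling division
    return tuple(sum(arr[i:i + step]) for i in range(0, len(arr), step))

def may_be_subset(sig_b, sig_a):
    """Necessary (not sufficient) condition for B being a subset of A."""
    return all(b <= a for b, a in zip(sig_b, sig_a))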
You can simplify the problem by first reducing your list of sets to "minimal" sets: keep only those sets which are not supersets of any other ones. The problem remains the same because if some input set A is a superset of some set B you removed, then it is also a superset of at least one "minimal" subset C of B which was not removed. The advantage of doing this is that you tend to eliminate large sets, which makes the problem less expensive.
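A straightforward (quadratic) sketch of that preprocessing step; since building the data structure may take as long as needed, the one-off cost is acceptable:

def minimal_sets(list_of_sets):
    """Keep only the stored sets that are not proper supersets of another
    stored set; the removed supersets are redundant for the lookup."""
    sets = [frozenset(s) for s in list_of_sets]
    return [s for s in sets if not any(other < s for other in sets)]

# minimal_sets([{1, 2}, {1, 2, 3}, {4}])  -> [frozenset({1, 2}), frozenset({4})]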
From there I would use some kind of ID3 or C4.5 algorithm.
Building on the trie solution and the paper mentioned by @mmihaltz, it is also possible to implement a method to find subsets by using already existing efficient trie implementations for Python. Below I use the package datrie. The only downside is that the keys must be converted to strings, which can be done with "".join(chr(i) for i in myset). This, however, limits the range of elements to about 110000.
from datrie import BaseTrie, BaseState

def existsSubset(trie, setarr, trieState=None):
    # Returns True if some set stored in the trie is a subset of setarr.
    # setarr is the query set encoded as a string of characters, using the
    # same encoding as the trie keys.
    if trieState is None:
        trieState = BaseState(trie)
    trieState2 = BaseState(trie)
    trieState.copy_to(trieState2)
    for i, elem in enumerate(setarr):
        if trieState2.walk(elem):
            if trieState2.is_terminal() or existsSubset(trie, setarr[i:], trieState2):
                return True
            trieState.copy_to(trieState2)  # backtrack and try the next element
    return False
The trie can be used like dictionary, but the range of possible elements has to be provided at the beginning:
alphabet = "".join(chr(i) for i in range(100))
trie = BaseTrie(alphabet)
for subset in sets:
trie["".join(chr(i) for i in subset)] = 0 # the assigned value does not matter
Note that the trie implementation above works only with keys larger than (and not equal to) 0. Otherwise, the integer to character mapping does not work properly. This problem can be solved with an index shift.
A cython implementation that also covers the conversion of elements can be found here.

Tricky algorithm for sorting symbols in an array while preserving relationships via order

The problem
I have multiple groups which specify the relationships of symbols. For example:
[A B C]
[A D E]
[X Y Z]
What these groups mean is that, for the first group, the symbols A, B, and C are related to each other; for the second group, the symbols A, D, and E are related to each other; and so forth.
Given all this data, I need to put all the unique symbols into a one-dimensional array in which symbols that are somehow related to each other are placed closer to each other. Given the example above, the result should be something like:
[B C A D E X Y Z]
or
[X Y Z D E A B C]
In this resulting array, since the symbol A has multiple relationships (namely with B and C in one group and with D and E in another) it's now located between those symbols, somewhat preserving the relationship.
Note that the order is not important. In the result, X Y Z can be placed first or last since those symbols are not related to any other symbols. However, the closeness of the related symbols is what's important.
What I need help in
I need help in determining an algorithm that takes groups of symbol relationships and outputs the one-dimensional array using the logic above. I'm pulling my hair out over how to do this, since with real data the number of symbols in a relationship group can vary, there is no limit to the number of relationship groups, and a symbol can have relationships with any other symbol.
Further example
To further illustrate the trickiness of my dilemma, if you add another relationship group to the example above, let's say:
[C Z]
The result now should be something like:
[X Y Z C B A D E]
Notice that the symbols Z and C are now closer together since their relationship was reinforced by the additional data. All previous relationships are still retained in the result also.
The first thing you need to do is to precisely define the result you want.
You do this by defining how good a result is, so that you know which one is the best. Mathematically, you do this with a cost function. In this case one would typically choose the sum of the distances between related elements, the sum of the squares of these distances, or the maximal distance. A list with a small value of the cost function is then the desired result.
It is not clear whether in this case it is feasible to compute the best solution by some special method (maybe if you choose the maximal distance or the sum of the distances as the cost function).
In any case it should be easy to find a good approximation by standard methods.
A simple greedy approach would be to insert each element in the position where the resulting cost function for the whole list is minimal.
Once you have a good starting point you can try to improve it further by modifying the list towards better solutions, for example by swapping elements or rotating parts of the list (local search, hill climbing, simulated annealing, other).
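A small sketch of the cost-function-plus-greedy-insertion idea in Python (my own illustration; it uses the sum of squared distances as the cost and is far from optimized, so it is only meant to show the shape of the approach):

from itertools import combinations

def cost(order, groups):
    """Sum of squared distances between every pair of related symbols
    that are already placed in `order`."""
    pos = {sym: i for i, sym in enumerate(order)}
    total = 0
    for group in groups:
        placed = [s for s in group if s in pos]
        for a, b in combinations(placed, 2):
            total += (pos[a] - pos[b]) ** 2
    return total

def greedy_order(groups):
    """Insert each symbol at the position that minimises the cost so far;
    local search (swaps, rotations, simulated annealing) can then improve it."""
    symbols = []
    for group in groups:                           # unique symbols, first-seen order
        for sym in group:
            if sym not in symbols:
                symbols.append(sym)
    order = []
    for sym in symbols:
        candidates = [order[:i] + [sym] + order[i:] for i in range(len(order) + 1)]
        order = min(candidates, key=lambda o: cost(o, groups))
    return order

# Example: greedy_order([list("ABC"), list("ADE"), list("XYZ"), list("CZ")])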
I think that, with large amounts of data and a lack of additional criteria, it's going to be very, very difficult to make something that finds the best option. Have you considered a greedy algorithm (construct your solution incrementally in a way that gives you something close to the ideal solution)? Here's my idea:
Sort your sets of related symbols by size, and start with the largest one. Keep those all together, because without any other criteria, we might as well say their proximity is the most important since it's the biggest set. Consider every symbol in that first set an "endpoint", an endpoint being a symbol you can rearrange and put at either end of your array without damaging your proximity rule (everything in the first set is an endpoint initially because they can be rearranged in any way). Then go through your list and as soon as one set has one or more symbols in common with the first set, connect them appropriately. The symbols that you connected to each other are no longer considered endpoints, but everything else still is. Even if a bigger set only has one symbol in common, I'm going to guess that's better than smaller sets with more symbols in common, because this way, at least the bigger set stays together as opposed to possibly being split up if it was put in the array later than smaller sets.
I would go on like this, updating the list of endpoints as I went so that I could continue making matches while working through the sets. I would keep track of whether I had stopped making matches, and in that case I'd just go back to the top of the list and tack on the next biggest unmatched set (it doesn't matter if there are no more matches to be made, so go with the most valuable/biggest association). Ditch the old endpoints, since they have no matches, and then all the symbols of the set you just tacked on are the new endpoints.
This may not have a good enough runtime, I'm not sure. But hopefully it gives you some ideas.
Edit: Obviously, as part of the algorithm, ditch duplicates (trivial).
The problem as described is essentially the problem of drawing a graph in one dimension.
Using the relationships, construct a graph. Treat the unique symbols as the vertices of the graph. Place an edge between any two vertices that co-occur in a relationship; more sophisticated would be to construct a weight based on the number of relationships in which the pair of symbols co-occur.
Algorithms for drawing graphs place well-connected vertices closer to one another, which is equivalent to placing related symbols near one another. Since only an ordering is needed, the symbols can just be ranked based on their positions in the drawing.
There are a lot of algorithms for drawing graphs. In this case, I'd go with Fiedler ordering, which orders the vertices using a particular eigenvector (the Fiedler vector) of the graph Laplacian. Fiedler ordering is straightforward, effective, and optimal in a well-defined mathematical sense.
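A compact sketch of Fiedler ordering with NumPy (an illustration only; co-occurrence counts are used as edge weights, and a sparse eigensolver would be preferable for large graphs):

import numpy as np
from itertools import combinations

def fiedler_order(groups):
    """Order symbols by the Fiedler vector (eigenvector of the second-smallest
    eigenvalue) of the Laplacian of their co-occurrence graph."""
    symbols = sorted({s for g in groups for s in g})
    index = {s: i for i, s in enumerate(symbols)}
    n = len(symbols)
    A = np.zeros((n, n))
    for group in groups:
        for a, b in combinations(group, 2):
            A[index[a], index[b]] += 1.0           # weight = number of co-occurrences
            A[index[b], index[a]] += 1.0
    L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)           # eigenvalues in ascending order
    fiedler = eigvecs[:, 1]
    return [s for _, s in sorted(zip(fiedler, symbols))]

# Example: fiedler_order([list("ABC"), list("ADE"), list("XYZ"), list("CZ")])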
It sounds like you want to do topological sorting: http://en.wikipedia.org/wiki/Topological_sorting
Regarding the initial ordering, it seems like you are trying to enforce some kind of stability condition, but it is not really clear to me what this should be from your question. Could you try to be a bit more precise in your description?
