Problems with a simple dependency algorithm

In my webapp, we have many fields that sum up other fields, and those fields sum up more fields. I know that this is a directed acyclic graph.
When the page loads, I calculate values for all of the fields. What I'm really trying to do is to convert my DAG into a one-dimensional list which would contain an efficient order to calculate the fields in.
For example:
A = B + D, D = B + C, B = C + E
Efficient calculation order: E -> C -> B -> D -> A
Right now my algorithm just does simple inserts into a List iteratively, but I've run into situations where that breaks down. I'm thinking that what's needed instead is to work out all the dependencies into a tree structure, and from there convert that into the one-dimensional form. Is there a simple algorithm for converting such a tree into an efficient ordering?

Are you looking for a topological sort? It imposes an ordering (a sequence or list) on a DAG. It's used, for example, by spreadsheets to figure out the dependencies between cells for calculations.
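As a sketch of how this looks in practice, Python's standard library ships a topological sorter (`graphlib`, Python 3.9+); the field names below are taken from the question's example:

```python
from graphlib import TopologicalSorter

# Dependencies from the question: A = B + D, D = B + C, B = C + E.
# Each field maps to the set of fields it depends on.
deps = {"A": {"B", "D"}, "D": {"B", "C"}, "B": {"C", "E"}}

order = list(TopologicalSorter(deps).static_order())
# 'order' is a valid calculation order, e.g. E, C, B, D, A:
# every field appears after all of its dependencies.
```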

What you want is a depth-first search.
function ExamineField(Field F)
{
    if (F.already_in_list)
        return
    foreach C child of F
    {
        call ExamineField(C)
    }
    AddToList(F)
}
Then just call ExamineField() on each field in turn, and the list will be populated in an optimal ordering according to your spec.
Note that if the fields are cyclic (that is, you have something like A = B + C, B = A + D) then the algorithm must be modified so that it doesn't go into an endless loop.
For your example, the calls would go:
ExamineField(A)
    ExamineField(B)
        ExamineField(C)
            AddToList(C)
        ExamineField(E)
            AddToList(E)
        AddToList(B)
    ExamineField(D)
        ExamineField(B)
            (already in list, nothing happens)
        ExamineField(C)
            (already in list, nothing happens)
        AddToList(D)
    AddToList(A)
ExamineField(B)
    (already in list, nothing happens)
ExamineField(C)
    (already in list, nothing happens)
ExamineField(D)
    (already in list, nothing happens)
ExamineField(E)
    (already in list, nothing happens)
And the list ends up as C, E, B, D, A.
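A sketch of the pseudocode above in Python, with the cycle guard from the note added (three-state marking instead of a plain already_in_list flag); `children` maps each field to the fields it depends on, as in the question's example:

```python
def calculation_order(children):
    """children: dict mapping each field to the fields it depends on."""
    order, state = [], {}           # state[f] is 'visiting' or 'done'

    def examine(f):                 # ExamineField(F)
        if state.get(f) == "done":
            return                  # already in the list
        if state.get(f) == "visiting":
            raise ValueError(f"cycle involving {f}")
        state[f] = "visiting"
        for child in children.get(f, ()):
            examine(child)
        state[f] = "done"
        order.append(f)             # AddToList(F)

    for f in children:              # call ExamineField() on each field in turn
        examine(f)
    return order

# A = B + D, D = B + C, B = C + E
print(calculation_order({"A": ["B", "D"], "D": ["B", "C"], "B": ["C", "E"]}))
# -> ['C', 'E', 'B', 'D', 'A']
```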


Breadth-First Search (BFS) Using only a List of Available Functions

My question is mostly algorithm-related and not specific to any particular programming language.
Assuming we have a graph represented by a list of lists, where each internal list represents two nodes and a numbered edge, is it possible to implement a recursive BFS (breadth-first search) function using ONLY the following 12 functions? Our recursive BFS function is supposed to take 5 arguments:
the graph to be searched
the element to find
the search queue
list of visited nodes
current element we're looking at
An example of a graph:
      e1
     /  \
   e2    e3
   |  \  |
   e5  e4
((e1 1 e2) (e1 2 e3) (e2 3 e4) (e3 4 e4) (e2 5 e5))
In the following functions, I refer to each individual list in our graph as a graph element.
Here are the functions:
create-graph ; creates an empty list
is-graph-element ; check if a list is of the format above (for graph elements)
element-contains-node ; check if a graph element contains an atom representing a node (e.g., a)
is-member ; check if a [generic] list contains an atom
push-unique ; gets a list and an atom and inserts it at the end of the list if it's not already there
remove-visited ; gets a graph (list of list), removing all the graph elements containing the specified atom
remove-all-visited ; same as above, but we can pass a list of atoms to be removed
del-from-list ; remove all occurrences of an atom from a list
del-all-from-list ; same as above, but we can pass a list of atoms to be removed
first-member ; return the first member of a graph element (e.g., for [a, 1, b], return a)
third-member ; return third member of a graph element
graph-to-list ; receives a graph and returns a flat list of all first and third members listed in order
This is how I have implemented the recursive BFS function:
I actually have two base cases for my recursive calls: when the queue is empty (which means we couldn't reach the element we were searching for), and when the element popped from the queue is the element we were searching for.
In each call, I find the paths from the current node (to find the nodes we can go to), and then I push these nodes to the queue.
I then push the current node to the visited list and recur.
My problem is that I have defined two functions (one to find the paths and one to push the target node of those paths to the search queue) that are not in the function list above.
I was wondering if it's possible to do the recursive BFS using ONLY those functions?
PS: The list we have for the graph is supposed to represent an undirected graph, but I'm not sure how that would change the problem
Any help of any kind is sincerely appreciated...
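For what it's worth, the recursion described above can be sketched like this (in Python rather than the original Lisp-like setting, and using plain list operations as stand-ins for the helpers: the guarded append plays the role of push-unique, and the visited list plays the role of remove-visited):

```python
def bfs(graph, target, queue, visited, current):
    """graph: list of (node, edge-number, node) triples, read as undirected."""
    if current == target:
        return visited + [current]           # found: return the visit order
    # neighbours of 'current': the other endpoint of every touching element
    for a, _w, b in graph:
        for n in ([b] if a == current else [a] if b == current else []):
            if n not in visited and n not in queue and n != current:
                queue = queue + [n]          # push-unique
    if not queue:
        return None                          # queue exhausted: unreachable
    # mark current as visited, pop the queue head, and recur
    return bfs(graph, target, queue[1:], visited + [current], queue[0])

g = [("e1", 1, "e2"), ("e1", 2, "e3"), ("e2", 3, "e4"),
     ("e3", 4, "e4"), ("e2", 5, "e5")]
print(bfs(g, "e4", [], [], "e1"))
# -> ['e1', 'e2', 'e3', 'e4']
```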

Order objects in hierarchical lists according to their mutual dependence

I have some objects with mutual dependencies.
A
B (depends on A)
C (depends on B)
D
E (depends on B and F)
F (depends on C)
G
H (depends on B)
I want to create a hierarchical list of these objects, where an object is placed in a list that comes after the list containing its dependencies.
The previous object list would be placed like this:
A, D, G (these objects have no dependencies)
B (B depends on A)
C, H (C and H depend on B)
F (F depends on C)
E (E depends on B and F)
Which algorithm can solve this problem?
If you don't have any helping structure already, then brute force will do.
I would use a vertical list for the levels, with one horizontal list hanging off every node of the vertical list.
So, in your example, I would have 5 nodes in the vertical list. The first node would have a list with 3 nodes (the first holding A, the second D, and the third G); the second node of the vertical list would have a list with one node, holding B; and so on.
So the algorithm would be something like this:
1. for every item
2.     if item.dependency is in the list
3.         append the item to the correct node
4.     else  // item's dependency not in the list
5.         create a node in the list for the item's dependency
6.         append the item to the created node
In the pseudo-code above, "list" means the vertical one.
At step 1, we iterate over all the items you have.
At step 2, you check whether the item you are currently processing exists in the list (by searching the entire list and returning the position found, or something smarter).
At step 3, you go to the position found in step 2 and insert the value of your item at the end of the list located at that node (you don't need to store the dependency again). Note that if the node at that position has no entries in its list, you also need to create the list.
At step 4, we are in the case where our item is not found in the list.
At step 5, we create a new node in the list, which will have the item's dependency as its value.
At step 6, we insert the item into the node created at step 5.
Hope this helps!

How does the algorithm for recursively printing permutations of an array work exactly?

I just can't understand how this algorithm works. All the explanations I've seen say that if you have a set such as {A, B, C} and you want all the permutations, start with each letter distinctly, then find the permutations of the rest of the letters. So for example {A} + permutationsOf({B,C}).
But all the explanations seem to gloss over how you find the permutations of the rest. An example being this one.
Could someone try to explain this algorithm a little more clearly to me?
To understand recursion you need to understand recursion..
(c) Programmer's wisdom
Your question is about the fact that "permutations of the rest" is the recursive part. Recursion always consists of two parts: a trivial case and a recursive case. The trivial case is the point where the recursion cannot continue and something should be returned.
In your sample, the trivial part would be {A}: there's only one permutation of this set, itself. The recursive part is the union of the current element with the permutations of the "rest part", i.e., if you have more than one element, then your result is the union over all elements of that element joined with the permutations of the "rest part". In terms of permutations, the rest part is the current set without the selected element. I.e., for the set {A,B,C}, on the first recursion step that will be {A} with "rest part" {B,C}, then {B} with "rest part" {A,C}, and finally {C} with "rest part" {A,B}.
So your recursion will last until the moment when "the rest part" is a single element, and then it will end.
That is the whole point of a recursive implementation. You define the solution recursively, assuming you already have the solution for a simpler problem. With a little thought you will come to the conclusion that you can apply the very same reasoning to the simpler case, making it simpler still, going on until you reach a case that is simple enough to solve directly. This simple-enough case is known as the bottom (or base case) of the recursion.
Also, please note that you have to iterate over all the letters, not just A as the first element. Thus you get all the permutations as:
{{A} + permutationsOf({B,C})} +{{B} + permutationsOf({A,C})} + {{C} + permutationsOf({A,B})}
Take a minute and try to write down all the permutations of a set of four letters say {A, B, C, D}. You will find that the algorithm you use is close to the recursion above.
The answer to your question is in the halting-criterion (in this case !inputString.length).
http://jsfiddle.net/mzPpa/
function permutate(inputString, outputString) {
    if (!inputString.length) console.log(outputString);
    else for (var i = 0; i < inputString.length; ++i) {
        permutate(inputString.substring(0, i) +
                      inputString.substring(i + 1),
                  outputString + inputString[i]);
    }
}
var inputString = "abcd";
var outputString = "";
permutate(inputString, outputString);
So, let's analyze the example {A, B, C}.
First, you want to take a single element out of it and get the rest. So you would need to write a function that returns a list of pairs:
pairs = [ (A, {B, C}),
          (B, {A, C}),
          (C, {A, B}) ]
For each of these pairs, you get a separate list of permutations that can be made from it, like this:
for pair in pairs do
    head  <- pair.fst         // e.g. for the first pair it will be A
    tails <- perms(pair.snd)  // e.g. tails will be a list of permutations computed from {B, C}
You need to attach the head to each tail from tails to get a complete permutation. So the complete loop will be:
permutations <- []
for pair in pairs do
    head  <- pair.fst         // e.g. for the first pair it will be A
    tails <- perms(pair.snd)  // e.g. tails will be a list of permutations computed from {B, C}
    for tail in tails do
        permutations.add(head :: tail)  // here we create a complete permutation
head :: tail means that we attach one element head to the beginning of the list tail.
Well now, how do we implement the perms function used in the fragment tails <- perms(pair.snd)? We just did! That's what recursion is all about. :)
We still need a base case, so:
perms({X}) = [ {X} ] // return a list of one possible permutation
And the function for all other cases looks like that:
perms({X...}) =
    permutations <- []
    pairs <- createPairs({X...})
    for pair in pairs do
        head  <- pair.fst
        tails <- perms(pair.snd)
        for tail in tails do
            permutations.add(head :: tail)  // here we create a complete permutation
    return permutations
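The pseudocode above translates almost line for line into Python (a sketch; perms and createPairs are the names used above):

```python
def create_pairs(xs):
    """Pair each element with the rest of the list (the 'rest part')."""
    return [(xs[i], xs[:i] + xs[i + 1:]) for i in range(len(xs))]

def perms(xs):
    if len(xs) <= 1:                 # base case: one permutation, the list itself
        return [xs]
    permutations = []
    for head, rest in create_pairs(xs):
        for tail in perms(rest):     # recursive case
            permutations.append([head] + tail)
    return permutations

print(perms(["A", "B", "C"]))
# -> [['A', 'B', 'C'], ['A', 'C', 'B'], ['B', 'A', 'C'],
#     ['B', 'C', 'A'], ['C', 'A', 'B'], ['C', 'B', 'A']]
```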

How does maplist work in a program that inserts elements into an AVL Tree

I am studying Prolog and could not follow the lessons, so I have some doubts relating to a particular use of the maplist built-in SWI-Prolog predicate.
So let me explain my situation:
I have a personal predicate named addavl(Tree/Height, Element, NewTree/Height) that inserts the element Element into the AVL tree Tree (where Height is the height of the original tree) and generates a new AVL tree named NewTree that contains Element and has a new height Height.
Now I have a list of elements and I would like to add these elements to an AVL tree (which at the beginning is empty), so I have the following predicate (which works fine).
I have some doubts about the use of the maplist/4 SWI-Prolog built-in predicate, and I would also like to know whether my general interpretation of this predicate is correct, or whether I am missing something.
/* Predicate that builds an AVL tree from a given list of elements: */
buildTree(List, Tree) :-
    length(List, N),      % N: the number of elements in List
    /* Create L1 as a list with the same number of elements as List.
       For example, if List has N=4 elements, L1=[A,B,C,D] where A,B,C,D
       are variables that are not yet instantiated.
    */
    length(L1, N),
    /* If it were append(L1,[Tree],NewList) we would get NewList=[A,B,C,D,Tree];
       but since NewList=[_|L2], we get L2=[B,C,D,Tree].
    */
    append(L1, [Tree], [_|L2]),
    /* Put the pair nil/0 at the head of L1, so that A=nil/0, which
       represents an empty AVL tree:
    */
    L1 = [nil/0 |_],
    /* Call addavl passing as parameters the tree at the head of L1, the value
       to insert from the head of List, and the head of L2 as the current new
       tree. When the variable Tree is reached, it represents the final AVL tree.
    */
    maplist(addavl, L1, List, L2).
My interpretation of the entire predicate is the following:
First, the variable N contains the length of the original element list List that I want to insert into the AVL tree.
Then a new list L1 is created with the same number of elements as the original list List, but in this case L1 contains variables that are not yet instantiated.
So, for example, if the original element list is:
List = [5, 8, 3, 4], the L1 list will be something like L1 = [A, B, C, D],
where A, B, C, D are variables that are not yet instantiated.
Now there is this statement that must be satisfied:
append(L1,[Tree],[_|L2]),
which I read this way:
if I had append(L1,[Tree],NewList) instead of the previous statement, I would have:
NewList = [A,B,C,D,Tree], where A,B,C,D are the previously uninstantiated variables of the L1 list and Tree is a new uninstantiated variable.
But in my case I have NewList = [_|L2], so L2 = [B,C,D,Tree].
So the meaning of the previous append operation is the creation of the L2 list, which at the beginning contains n uninstantiated variables (in the previous example, 4 uninstantiated variables: B, C, D, Tree).
Each of these variables represents a tree into which a new element of the original list List has been inserted.
So, at the beginning the program puts the empty AVL tree at the head of this list (in my example, in the A variable) with this instruction: L1=[nil/0 |_].
So at the head of the L1 list there is the empty tree, which has height 0 (and an empty tree is a correct AVL tree).
And now I have my FIRST DOUBT: with the previous operation I have instantiated the head variable of the L1 list, but before this operation I created the NewList=[_|L2] list using this statement:
append(L1,[Tree],[_|L2])
Does this mean that the _ anonymous variable of the [_|L2] list matches the nil/0 AVL tree?
Does this work this way even though I instantiated the L1 head after creating the [_|L2] list by appending the L1 list to [Tree]?
OK, if my interpretation is correct, let's go on to my SECOND DOUBT, which is related to how exactly the maplist SWI-Prolog built-in predicate works.
I have:
maplist(addavl, L1, List, L2).
What exactly does this predicate do?
Reading on the official documentation: http://www.swi-prolog.org/pldoc/man?predicate=maplist%2F2
it seems to me that it works in the following way:
I have the addavl predicate, which is the GOAL that has to be satisfied on each element of the lists.
Remembering that the addavl predicate works this way: addavl(Tree/Height, Element, NewTree/Height).
So:
1) L1 is the list of AVL trees (the first is the empty AVL tree: nil/0)
2) List is the original list that contains the elements to insert
3) L2 is the list that contains the AVL trees that I will create.
So I think that it works in the following way:
First, take the empty AVL tree (nil/0) from the head of L1, take the first element to add from List, execute the GOAL (insert this element into the empty AVL tree), and put the result at the head of the L2 list (which, according to my previous example, is the B variable; so the B variable contains the AVL tree that contains the first element of the element list List).
Then repeat this procedure, inserting all the other elements of the element list List; finally, the last element of the L2 list (the Tree variable) will represent the final AVL tree into which all the elements were inserted.
Is my reasoning correct or am I missing something?
In Prolog we say "not yet instantiated variable" or "uninstantiated variable".
About L1=[nil/0 |_], we can call it "initializing it with an initial value".
The _ in [_|L2] does indeed match the initial value, and we don't care about it.
(This suggests calling append(L1, [Tree], [nil/0 | L2]) instead of the two calls in the original code.)
Yes, the order of operations is not important: X=t(A), A=2 and A=2, X=t(A) both result in the same substitution, A=2, X=t(2).
maplist(pred, ...Lists...) works so that pred must be satisfied on the elements of the lists, taken pair-wise (or by columns), e.g., maplist(plus, [1,2,3], [4,X,8], [5,6,Y]).
The lists L1 and L2 share structure:
nil/0   B   C   D   Tree
[------- L1 ------]
        [------- L2 ------]
maplist sees them, and processes them, by feeding them to addavl by columns:
nil/0   B    C    D      % L1
  E1    E2   E3   E4     % List
   B    C    D    Tree   % L2
so yes, it does it like you say.
I don't think your teacher will accept this as an answer; you should write a direct recursive solution instead. Both variants describe the same iterative computational process of progressively adding elements into a tree, using the output of the previous call as input to the next. But in any given implementation, one can be optimized far better than the other; using lists here will most probably be less efficient than a recursive variant.
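For reference, a direct recursive variant might look like this (a sketch, assuming the same addavl/3 predicate as above):

```prolog
% buildTree(+List, -Tree): fold the elements into an AVL tree,
% threading the growing tree through as an accumulator.
buildTree(List, Tree) :-
    buildTree(List, nil/0, Tree).

buildTree([], Tree, Tree).
buildTree([X|Xs], Acc, Tree) :-
    addavl(Acc, X, Acc1),
    buildTree(Xs, Acc1, Tree).
```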

Seeking algorithm to invert (reverse? mirror? turn inside-out) a DAG

I'm looking for an algorithm to "invert" (reverse? turn inside-out?) a
DAG:
        A            # I can't ascii-art the arrows, so just
       / \           # pretend the slashes are all pointing
      B   C          # "down" (south-east or south-west)
     /   / \         # e.g.
    G   E   D        # A -> (B -> G, C -> (E -> F, D -> F))
         \ /
          F
The representation I'm using is immutable and truly a DAG (there are no
"parent" pointers). I'd like to traverse the graph in some fashion
while building a "mirror image" graph with equivalent nodes, but with
the direction of the relations between nodes inverted.
          F          # F -> (E -> C -> A, D -> C -> A), G -> B -> A
         / \         #
    G   E   D        # Again, arrows point "down"
    |    \ /
    B     C
     \   /
       A
So the input is a set of "roots" (here, {A}). The output should be a
set of "roots" in the result graph: {G, F}. (By root I mean a node
with no incoming references. A leaf is a node with no outgoing
references.)
The roots of the input become the leaves of the output and vice
versa. The transformation should be its own inverse.
(For the curious, I'd like to add a feature to a library I'm using to
represent XML for structural querying by which I can map each node in
the first tree to its "mirror image" in the second tree (and back
again) to provide more navigational flexibility for my query rules.)
Traverse the graph building a set of reversed edges and a list of leaf nodes.
Perform a topological sort of the reversed edges using the leaf (which are now root) nodes to start with.
Construct the reversed graph based on the reversed edges starting from the end of the sorted list. As the nodes are constructed in reverse topological order, you are guaranteed to have constructed the children of a given node before constructing the node, so creating an immutable representation is possible.
This is either O(N) if you use intermediate structures that track all the links in both directions associated with a node, or O(N ln N) if you use sorting to find all the links of a node. For small graphs, or languages which don't suffer from stack overflows, you can just construct the graph lazily rather than explicitly performing the topological sort. So how different this would be depends a little on what you're implementing it all in.
A -> (B -> G, C -> (E -> F, D -> F))

original roots:       [ A ]
original links:       [ AB, BG, AC, CE, EF, CD, DF ]
reversed links:       [ BA, GB, CA, EC, FE, DC, FD ]
reversed roots:       [ G, F ]
reversed links:       [ BA, CA, DC, EC, FE, FD, GB ]   (in order of source)
topologically sorted: [ G, B, F, E, D, C, A ]
construction order:   A, C->A, D->C, E->C, F->(D,E), B->A, G->B
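The edge-reversal step itself is tiny; a sketch in Python over the example's edge list (node names taken from the question):

```python
def invert_dag(edges):
    """Reverse every edge; the old leaves become the new roots."""
    rev = [(b, a) for (a, b) in edges]
    sources = {a for a, _ in rev}
    targets = {b for _, b in rev}
    roots = sorted(sources - targets)      # nodes with no incoming reversed edge
    return rev, roots

# A -> (B -> G, C -> (E -> F, D -> F))
edges = [("A", "B"), ("B", "G"), ("A", "C"), ("C", "E"),
         ("E", "F"), ("C", "D"), ("D", "F")]
rev, roots = invert_dag(edges)
print(roots)
# -> ['F', 'G']
```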
Just do a depth-first search marking where you have already been, and each time you traverse an arrow you add the reverse to your result DAG. Add the leaves as roots.
My intuitive suggestion would be to perform a Depth First traversal of your graph, and construct your mirrored graph simultaneously.
When traversing each node, create a new node in the mirrored graph, and create an edge between it and its predecessor in the new graph.
If at any point you reach a node which has no children, mark it as a root.
I solved this with a simple graph traversal. Keep in mind topological sorting will only be useful for directed acyclic graphs.
I used an adjacency list, but you can do a similar thing with an adjacency matrix.
In Python it looks like this:
# Basic graph structure
g = {}
g[vertex] = [v1, v2, v3]  # each vertex maps to a list of its neighbours
To find all the edges for v, you then traverse the list g[v] and that will give you all (v, u) edges.
To build the reversed graph, make a new dictionary and build it something like this:
rev = {}                  # 'rev' avoids shadowing the built-in reversed()
for v in g:
    for e in g[v]:
        if e not in rev:
            rev[e] = []
        rev[e].append(v)
This is very memory intensive for large graphs (doubling your memory usage), but it is a very easy way to work with them and quite quick. There may be more clever solutions out there involving building a generator and using a dfs algorithm of some sort, but I have not put a lot of thought into it.
Depth-first search might be able to generate what you're after: note your path through the graph, and each time you traverse an edge, add the reverse to the resulting DAG (the leaves become roots).
