dequeue-min in Fibonacci heaps

I wanted to ask about Fibonacci heaps.
If I have this scenario:
A
|
B
Then, we add two more nodes C and D:
A
|\
B C
|
D
Now we delete B:
A
|
C
|
D
Now we add E and F.
I saw that it creates a tree like this:
E
|\
F A
|
C
|
D
But I don't understand why E and F are connected with the tree. From what I read, we connect trees with the same rank (for example, a tree of one node with another tree of one node), am I wrong?
Thank you very much.

If you add the nodes C and D into the first Fibonacci heap, you would not end up with the tree you drew. Instead, you'd have this heap:
A C D
|
B
Remember that Fibonacci heaps lazily add each newly added value to the top-level list of trees and only coalesce things on a deletion. If you were to delete B, you'd promote it to the top level, like this:
A B C D
You'd then remove B:
A C D
Now, you'd scan the root list and coalesce the trees of the same order together. Let's suppose you scan the nodes in the order A, C, D. First, you'll coalesce A and C together, like this:
A D
|
C
At this point, no further coalesces will happen, since there's exactly one tree of each order.
If you then add in E and F, you'd put them at the top level, like this:
A D E F
|
C
So yes, you're right, those nodes shouldn't get added into the tree.
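If it helps to see the coalescing rule in code, here is a minimal Python sketch of that consolidation step, assuming a toy Node class with just a value and a child list; it is not a full Fibonacci heap, only the "link trees of equal order" idea from the walkthrough above:

class Node:
    def __init__(self, value):
        self.value = value
        self.children = []

    @property
    def order(self):
        return len(self.children)

def consolidate(roots):
    """Repeatedly link roots of equal order until all orders differ."""
    by_order = {}
    for node in roots:
        while node.order in by_order:
            other = by_order.pop(node.order)
            if other.value < node.value:      # the smaller value becomes the parent
                node, other = other, node
            node.children.append(other)       # linking bumps node's order by one
        by_order[node.order] = node
    return list(by_order.values())

# Root list after deleting B: A, C, D (all of order 0).
roots = consolidate([Node("A"), Node("C"), Node("D")])
print([(r.value, r.order) for r in roots])    # [('A', 1), ('D', 0)]: A picked up C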

Related

remove a node in an XOR linked list

I found the below code in C++ that inserts into and traverses an XOR linked list.
How do we remove a node? It seems that when we remove a node, all of the addresses stored in the list would need to be updated?
Or is my intuition not correct this time?
https://en.wikipedia.org/wiki/XOR_linked_list
https://www.geeksforgeeks.org/xor-linked-list-a-memory-efficient-doubly-linked-list-set-1/
https://www.geeksforgeeks.org/xor-linked-list-a-memory-efficient-doubly-linked-list-set-2/
No, you don't update all of the addresses, merely the adjacent ones. Look at the example list; let's extend it by two more nodes:
node:   z    A    B    C    D    E    F
link:  ...  z⊕B  A⊕C  B⊕D  C⊕E  D⊕F  ...
The only values that use the address of C are those stored in B and D. If we remove C, we need only adjust those two values, XOR-ing out C and XOR-ing in the node on the other side:
B.link = B.link ⊕ C ⊕ D
D.link = D.link ⊕ C ⊕ B
This gives us
node:   z    A    B    D    E    F
link:  ...  z⊕B  A⊕D  B⊕E  D⊕F  ...
Do you see how that works? There's very little additional work involved: we have already traversed the list to C to find the item to remove; all we need to do is keep the back-pointer as we move along (to operate on B), and then take one more step to access D.
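To make the bookkeeping concrete, here is a small Python sketch that simulates XOR links by using integer indices as "addresses" (0 plays the role of the null address); the names and layout are my own, not taken from the C++ code in the question:

nodes = {}   # index -> [value, link], where link = prev_index XOR next_index

def build(values):
    """Build an XOR list from values; returns the head index."""
    prev = 0
    for i, v in enumerate(values, start=1):
        nodes[i] = [v, prev]        # store prev for now; next is folded in later
        if prev:
            nodes[prev][1] ^= i     # fold this node into its predecessor's link
        prev = i
    return 1

def remove(head, target):
    """Walk the list keeping (prev, cur); unlink the first node matching target."""
    prev, cur = 0, head
    while cur:
        nxt = nodes[cur][1] ^ prev
        if nodes[cur][0] == target:
            if prev:                              # only the two neighbours
                nodes[prev][1] ^= cur ^ nxt       # mention cur's "address"
            if nxt:
                nodes[nxt][1] ^= cur ^ prev
            del nodes[cur]
            return
        prev, cur = cur, nxt

def traverse(head):
    prev, cur, out = 0, head, []
    while cur:
        out.append(nodes[cur][0])
        prev, cur = cur, nodes[cur][1] ^ prev
    return out

head = build(["A", "B", "C", "D", "E", "F"])
remove(head, "C")
print(traverse(head))   # ['A', 'B', 'D', 'E', 'F']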

Restricted combinations (algorithm)

Consider the following example:
I have a list of 5 items, each with an occurrence of either 1 or 0:
{a, b, c, d, e}
The restricted combinations are as follows:
the occurrence of a, c, and e cannot be 1 at any given time.
the occurrence of b, d, and e cannot be 1 at any given time.
Basically, if the database shows that the occurrences of a and c are already 1, then a new input of e (which would give e an occurrence of 1) is not allowed (clause 1), and vice versa.
Another example: if d and e each have an occurrence of 1 in the database, a new input of b will not be allowed (following clause 2).
A more concrete example:
LETTER | COUNT(OCCURRENCE)
------------------------------
a | 1
b | 1
c | 1
d | 0
e | 0
Therefore, a new input of e would be rejected because of the violation of clause 1.
What is the best algorithm/practice for this solution?
I thought of having many if-else statements, but that doesn't seem efficient enough. What if I had a dynamic list of elements instead? Or at least something that gives this piece of the program better extensibility.
As mentioned by BKassem (I think) in the comments (removed for whatever reason).
The algorithm for this scenario:
(count(a) * count(c) * count(e)) == 0 //proceed to further actions
Worked flawlessly!
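One way to make this extensible is to keep the restricted groups in a list and apply the same product test to each of them. A rough sketch, assuming an in-memory dict stands in for the database counts:

counts = {"a": 1, "b": 1, "c": 1, "d": 0, "e": 0}
restricted_groups = [("a", "c", "e"), ("b", "d", "e")]

def allowed(new_item):
    """True if setting new_item to 1 leaves every restricted group with a zero in it."""
    trial = dict(counts, **{new_item: 1})
    for group in restricted_groups:
        product = 1
        for item in group:
            product *= trial[item]
        if product != 0:        # every member of this group would be 1
            return False
    return True

print(allowed("e"))   # False: a and c are already 1 (clause 1)
print(allowed("d"))   # True

Adding a new clause is then just a matter of appending another tuple to restricted_groups.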

Construction from many sets

I have four sets:
A={a,b,c}, B={d,e}, C={c,d}, D={a,b,c,e}
I want to find the sequences of sets that give me: a b c d
Example: the sequence A A A C can give me a b c d because "a" is an element of A, "b" is an element of A, "c" is an element of A and "d" is an element of C.
The same thing goes for: D A C B, etc.
I want an algorithm to enumerate all possible sequences, or a mathematical method to find the sequences.
You should really come up with some code of your own and then ask specific questions about problems with it. But it's interesting, so I'll share some thoughts.
You want a b c d.
a can come from A, D
b can come from A, D
c can come from A, C, D
d can come from B, C
So the problem reduces to finding all of the 2*2*3*2=24 ways to combine those options.
One way is recursion with backtracking. Build it from left to right, output when you have a complete set. Like the 8 queens problem, but much simpler since everything is independent.
Another way is to count through the integers 0 to 23 and map each one into a mixed-radix system: first digit base 2, then 2, 3, 2. So 0 becomes AAAB, 1 is AAAC, 2 is AACB, etc.; 23 is DDDC, and 24 would need a fifth digit, so you stop there.
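Here is a short sketch of both ideas for this concrete example; the variable names are mine, not part of the question:

from itertools import product

sets = {"A": {"a", "b", "c"}, "B": {"d", "e"},
        "C": {"c", "d"},      "D": {"a", "b", "c", "e"}}
target = ["a", "b", "c", "d"]

# For each target element, list the sets that contain it.
options = [[name for name, s in sets.items() if t in s] for t in target]
# options == [['A', 'D'], ['A', 'D'], ['A', 'C', 'D'], ['B', 'C']]

# Idea 1: enumerate every combination (the choices are independent,
# so the Cartesian product is the same as backtracking here).
sequences = [" ".join(combo) for combo in product(*options)]
print(len(sequences), sequences[0])   # 24 A A A B

# Idea 2: mixed-radix counting, mapping an integer to a sequence.
def unrank(index, options):
    digits = []
    for opts in reversed(options):    # least-significant digit last
        index, d = divmod(index, len(opts))
        digits.append(opts[d])
    return list(reversed(digits))

print(unrank(0, options))    # ['A', 'A', 'A', 'B']
print(unrank(23, options))   # ['D', 'D', 'D', 'C']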

algorithm for ant movement in 2 row table

Problem:
There is a table with the following constraints:
a) It has 2 rows only.
b) It has n columns, so it is basically a 2xN table, where N is a power of two.
c) Its short ends are joined together: you can move from the last element of a row to the first element of that row, provided the first element has not been visited.
Now you are given 2 initial positions i1 and i2 for the ants and 2 final destinations f1 and f2. The ants have to reach f1 and f2, but either ant can reach either point; for example, i1 may reach f2 while i2 reaches f1.
Allowed moves:
1) An ant can move horizontally and vertically only; no diagonal movement.
2) Each cell can be visited by at most one ant, and all cells must be visited in the end.
Output: the paths traced by the two ants if all cells can be marked visited, else -1. The complexity of the algorithm is also needed.
Max flow can be used to compute two disjoint paths, but it is not possible to express the constraint of visiting all squares in a generic fashion (it's possible that there's a one-off trick). This dynamic programming solution isn't the slickest, but the ideas behind it can be applied to many similar problems.
The first step is to decompose the instance recursively, bottoming out with little pieces. (The constraints on these pieces will become apparent shortly.) For this problem, the root piece is the whole 2-by-n array. The two child pieces of the root are 2-by-n/2 arrays. The four children of those pieces are 2-by-n/4 arrays. We bottom out with 2-by-1 arrays.
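As a tiny illustration of the decomposition alone (not the full DP), the pieces can be generated by splitting the column range in half recursively until single columns remain; representing a piece by its column range is my own assumption:

def decompose(lo, hi, pieces=None):
    """Split the column range [lo, hi) in half recursively, down to 2-by-1 pieces."""
    if pieces is None:
        pieces = []
    pieces.append((lo, hi))
    if hi - lo > 1:
        mid = (lo + hi) // 2
        decompose(lo, mid, pieces)
        decompose(mid, hi, pieces)
    return pieces

print(decompose(0, 4))
# [(0, 4), (0, 2), (0, 1), (1, 2), (2, 4), (2, 3), (3, 4)]

The DP tables described below are then filled in from the 2-by-1 pieces upward.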
Now, consider what a solution looks like from the point of view of a piece.
+---+-- ... --+---+
A | B | | C | D
+---+-- ... --+---+
E | F | | G | H
+---+-- ... --+---+
Squares B, C, F, G are inside the piece. Squares A, E are the left boundary. Squares D, H are the right boundary. If an ant enters this piece, it does so from one of the four boundary squares (or its initial position, if that's inside the piece). Similarly, if an ant leaves this piece, it does so to one of the four boundary squares (or its final position, if that's inside the piece). Since each square can be visited at most once, there is a small number of possible permutations for the comings and goings of both ants:
Ant 1: enters A->B, leaves C->D
Ant 2: enters E->F, leaves G->H
Ant 1: enters A->B, leaves G->H
Ant 2: does not enter
Ant 1: enters A->B, leaves C->D, enters H->G, leaves F->E
Ant 2: does not enter
Ant 1: enters A->B, leaves F->E, enters H->G, leaves C->D
Ant 2: does not enter
...
The key fact is that what the ants do strictly outside of this piece has no bearing on what happens strictly inside. For each piece in the recursive decomposition, we can compute the set of comings and goings that are consistent with all squares in the piece being covered. For the 2-by-1 arrays, this can be accomplished with brute force.
+---+
A | B | C
+---+
D | E | F
+---+
In general, neither ant starts nor ends inside this piece. Then some of the possibilities are:
Ant 1: A->B, B->E, E->F; Ant 2: none
Ant 1: A->B, B->C, F->E, E->D; Ant 2: none
Ant 1: A->B, B->C; Ant 2: D->E; E->F
Ant 1: A->B, B->C, D->E, E->F; Ant 2: none
Now, suppose we have computed the sets (DP tables hereafter) for two adjacent pieces that are the children of one parent piece.
1|2
+---+-- ... --+---+---+-- ... --+---+
A | B | | C | D | | E | F
+---+-- ... --+---+---+-- ... --+---+
G | H | | I | J | | K | L
+---+-- ... --+---+---+-- ... --+---+
1|2
Piece 1 is to the left of the dividing line. Piece 2 is to the right of the dividing line. Once again, there is a small number of possibilities for traffic on the named squares. The DP table for the parent piece is indexed by the traffic at A, B, E, F, G, H, K, L. For each entry, we try all possibilities for the traffic at C, D, I, J, using the children's DP tables to determine whether the combined comings and goings are feasible.

Seeking algorithm to invert (reverse? mirror? turn inside-out) a DAG

I'm looking for an algorithm to "invert" (reverse? turn inside-out?) a
DAG:
      A*       # I can't ascii-art the arrows, so just
     / \       # pretend the slashes are all pointing
    B   C      # "down" (south-east or south-west)
   /   / \     # e.g.
  G   E   D    # A -> (B -> G, C -> (E -> F, D -> F))
       \ /
        F
The representation I'm using is immutable and truly a DAG (there are no
"parent" pointers). I'd like to traverse the graph in some fashion
while building a "mirror image" graph with equivalent nodes, but with
the direction of relations between nodes inverted.
        F*
       / \
  G*  E   D    # F -> (E -> C -> A, D -> C -> A), G -> B -> A
   \   \ /     #
    B   C      # Again, arrows point "down"
     \ /       #
      A        #
So the input is a set of "roots" (here, {A}). The output should be a
set of "roots" in the result graph: {G, F}. (By root I mean a node
with no incoming references. A leaf is a node with no outgoing
references.)
The roots of the input become the leaves of the output and vice
versa. The transformation should be an inverse of itself.
(For the curious, I'd like to add a feature to a library I'm using to
represent XML for structural querying by which I can map each node in
the first tree to its "mirror image" in the second tree (and back
again) to provide more navigational flexibility for my query rules.)
Traverse the graph building a set of reversed edges and a list of leaf nodes.
Perform a topological sort of the reversed edges, using the leaf nodes (which are now roots) to start with.
Construct the reversed graph based on the reversed edges starting from the end of the sorted list. As the nodes are constructed in reverse topological order, you are guaranteed to have constructed the children of a given node before constructing the node, so creating an immutable representation is possible.
This is either O(N) if you use structures for your intermediate representation which track all links in both directions associated with a node, or O(N log N) if you use sorting to find all the links of a node. For small graphs, or languages which don't suffer from stack overflows, you can just construct the graph lazily rather than explicitly performing the topological sort. So how different this would be depends a little on what you're implementing it all in.
A -> (B -> G, C -> (E -> F, D -> F))
original roots: [ A ]
original links: [ AB, BG, AC, CE, EF, CD, DF ]
reversed links: [ BA, GB, CA, EC, FE, DC, FD ]
reversed roots: [ G, F ]
reversed links: [ BA, CA, DC, EC, FE, FD, GB ] (in order of source)
topologically sorted: [ G, B, F, E, D, C, A ]
construction order : A, C->A, D->C, E->C, F->(D,E), B->A, G->B
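A rough sketch of this recipe in Python, using a plain adjacency dict in place of the asker's immutable node structure (so the representation here is an assumption):

from graphlib import TopologicalSorter   # standard library, Python 3.9+

original = {"A": ["B", "C"], "B": ["G"], "C": ["E", "D"],
            "D": ["F"], "E": ["F"], "F": [], "G": []}

# Reverse every edge: reversed_edges[x] lists the nodes x points to in
# the inverted graph, i.e. x's parents in the original.
reversed_edges = {n: [] for n in original}
for parent, children in original.items():
    for child in children:
        reversed_edges[child].append(parent)

# Roots of the inverted graph are the leaves of the original.
print(sorted(n for n, kids in original.items() if not kids))    # ['F', 'G']

# TopologicalSorter takes {node: predecessors}; feeding it the inverted
# child lists gives an order in which every node's inverted-graph
# children come before the node itself, so an immutable inverted graph
# can be built bottom-up in that order (A comes first, F last).
print(list(TopologicalSorter(reversed_edges).static_order()))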
Just do a depth-first search marking where you have already been, and each time you traverse an arrow you add the reverse to your result DAG. Add the leaves as roots.
My intuitive suggestion would be to perform a Depth First traversal of your graph, and construct your mirrored graph simultaneously.
When traversing each node, create a new node in the mirrored graph, and create an edge between it and its predecessor in the new graph.
If at any point you reach a node which has no children, mark it as a root.
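A loose sketch of that traversal, again using a plain dict representation rather than the asker's XML node type; a mirror edge is added each time an original edge is crossed, and original leaves are collected as the new roots:

def mirror(original_roots, children_of):
    """children_of(node) -> iterable of node's children in the original DAG."""
    mirrored = {}      # node -> list of the node's children in the mirror image
    new_roots = set()  # leaves of the original become roots of the mirror
    seen = set()

    def dfs(node):
        if node in seen:
            return
        seen.add(node)
        mirrored.setdefault(node, [])
        kids = list(children_of(node))
        if not kids:
            new_roots.add(node)
        for child in kids:
            mirrored.setdefault(child, []).append(node)   # reverse this edge
            dfs(child)

    for root in original_roots:
        dfs(root)
    return mirrored, new_roots

graph = {"A": ["B", "C"], "B": ["G"], "C": ["E", "D"],
         "D": ["F"], "E": ["F"], "F": [], "G": []}
mirrored, roots = mirror(["A"], lambda n: graph[n])
print(sorted(roots))     # ['F', 'G']
print(mirrored["F"])     # ['E', 'D']: F -> E and F -> D in the mirror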
I solved this with a simple graph traversal. Keep in mind topological sorting will only be useful for directed acyclic graphs.
I used an adjacency list, but you can do a similar thing with an adjacency matrix.
In Python it looks like this:
# Basic Graph Structure
g = {}
g[vertex] = [v1, v2, v3] # Each vertex contains a list of its edges
To find all the edges for v, you then traverse the list g[v] and that will give you all (v, u) edges.
To build the reversed graph make a new dictionary and build it something like this:
reversed = {}
for v in g:
    for e in g[v]:
        if e not in reversed:
            reversed[e] = []
        reversed[e].append(v)
This is very memory intensive for large graphs (doubling your memory usage), but it is a very easy way to work with them and quite quick. There may be more clever solutions out there involving building a generator and using a dfs algorithm of some sort, but I have not put a lot of thought into it.
Depth-first search might be able to generate what you're after: Note your path through the tree and each time you traverse add the reverse to the resulting DAG (leaves are roots).
