I've been playing around with the forward-backward algorithm to find the most efficient path (as determined by a cost function that depends on how the current state differs from the next state) to go from State 1 to State N. In the picture below, a short version of the problem can be seen with just 3 States and 2 Nodes per State. I run the forward-backward algorithm on that and find the best path as normal. The red bits in the picture are the paths checked during the forward-propagation part of the code.
Now for the interesting bit: I want to find the best 3-State-length path (as before), but now only the Nodes in the first State are known. The other 4 are free-floating and can be considered to be in any State (State 2 or State 3). I want to know if you guys have a good idea of how to do this.
Picture: http://i.imgur.com/JrQ2tul.jpg
Note: Bear in mind the original problem consists of around 25 States and 100 Nodes per State. So you'll know the State of around 100 Nodes in State 1, but the other 24*100 Nodes are stateless. In this case, I want to find a 25-State-length path with minimum cost.
Addendum: Someone pointed out that a better algorithm would be the Viterbi algorithm. So here is a problem with more variables thrown in. Can you guys explain how that would be implemented? The same rules apply: the path should start from one of the Nodes in State 1 (Node a or Node b). Also, the cost function using the norm doesn't make much sense in this case, since we only have one property (the Size of a node), but in the actual problem I'm expecting a lot more properties.
A variation of Dijkstra's algorithm might be faster for your problem than the forward-backward algorithm, because it does not analyze all nodes at once. Dijkstra is a DP algorithm after all.
Let a node be specified by
Node:
    Predecessor   : Node
    Total cost    : Number
    Visited nodes : Set of nodes (e.g. a hash set or another performant set)
Initialize the algorithm with
open set : ordered (by total cost) set of nodes = set of possible start nodes (set visitedNodes to the one-element set with the current node)
( = {a, b} in your example)
Then execute the algorithm:
    do
        n := pop element from open set
        if n.visitedNodes.count == stepTarget
            we're done, backtrace the path from this node
        else
            for each n2 in available nodes
                if n2 not in n.visitedNodes
                    push copy of n2 to open set (the same node might appear multiple times in the set):
                        .totalCost    := n.totalCost + norm(n2 - n)
                        .visitedNodes := n.visitedNodes u { n2 }   // u = set union
                        .predecessor  := n
            next
    loop
If calculating the norm is expensive, you might want to calculate it on demand and store it in a map.
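For illustration, here is a minimal runnable sketch of the idea in Python, assuming each node carries a single numeric property (size, as in the picture) and that the step cost is the absolute difference of that property; both are placeholders for whatever cost function the real problem uses:

    import heapq

    def best_fixed_length_path(start_nodes, free_nodes, size, step_target):
        # start_nodes : node ids allowed as the first step (the known State-1 nodes)
        # free_nodes  : node ids that may occupy any later state
        # size        : dict mapping node id -> numeric property used by the cost
        # step_target : required path length (number of states)
        def cost(a, b):
            # Placeholder cost: absolute difference of the single property.
            return abs(size[a] - size[b])

        # Heap entries: (total cost, current node, path so far as a tuple).
        open_set = [(0.0, n, (n,)) for n in start_nodes]
        heapq.heapify(open_set)

        while open_set:
            total, node, path = heapq.heappop(open_set)
            if len(path) == step_target:
                return total, list(path)      # cheapest path of the required length
            for nxt in free_nodes:
                if nxt not in path:
                    heapq.heappush(open_set,
                                   (total + cost(node, nxt), nxt, path + (nxt,)))
        return None                           # no path of that length exists

    # The small 3-State example: a, b are the known State-1 nodes, c..f float freely.
    sizes = {'a': 1.0, 'b': 4.0, 'c': 2.0, 'd': 5.0, 'e': 3.0, 'f': 6.0}
    print(best_fixed_length_path(['a', 'b'], ['c', 'd', 'e', 'f'], sizes, step_target=3))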
Related
Given an undirected graph, a starting vertex and an ending vertex, find the number of walks (so a vertex can be visited more than once) from the source to the sink that involve exactly h hops. For example, if the graph is a triangle, the number of such walks with h hops is given by the h-th Jacobsthal number. This can be extended to a fully connected k-node graph, producing the recurrence (and closed-form solution) here.
When the graph is an n-sided polygon, the accepted answer here expresses the number of walks as a sum of binomial terms.
I assume there might be an efficient algorithm for finding this number for any given graph? We can assume the graph is provided as an adjacency matrix, an adjacency list, or any other convenient representation.
A solution to this would be to use a modified BFS with two alternating queues and a per-node counter for the number of walks of a given length that reach that node:

    from collections import defaultdict

    def paths(adj, start, end, n):
        q = {start}                       # nodes reachable in exactly i hops
        path_ct = defaultdict(int)        # node -> number of i-hop walks ending there
        path_ct[start] = 1
        for i in range(n):                # counting loop
            q_next = set()
            path_ct_next = defaultdict(int)
            for node in q:                # queue loop
                for a in adj[node]:       # neighbor loop
                    path_ct_next[a] += path_ct[node]
                    q_next.add(a)
            q = q_next
            path_ct = path_ct_next
        return path_ct[end]
The counting loop simply does exactly as many iterations as there are hops. The queue loop iterates over all nodes that can be reached using exactly i hops, and the neighbor loop then finds all nodes that can be reached in i + 1 hops, storing them in the queue for the next iteration of the counting loop. The number of walks reaching such a node is the sum of the number of walks reaching its predecessors. Once this has been done for every node of the current iteration, the queue and the counter table are replaced by fresh instances and the algorithm starts over. (The defaultdict(int) counters return zero for keys that haven't been set yet, which is what the summation relies on.)
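For example, on a triangle given as an adjacency dict, the counts from A to B for h = 1..5 come out as 1, 1, 3, 5, 11, matching the Jacobsthal numbers mentioned in the question:

    triangle = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
    for h in range(1, 6):
        print(h, paths(triangle, 'A', 'B', h))   # 1, 1, 3, 5, 11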
If you take the adjacency matrix of a graph and raise it to the nth power, the resulting matrix counts the number of walks from each node to every other node that use exactly n edges. That would provide one way of computing the number you want - plus many others you aren't all that interested in. :-)
Assuming the number of paths is "small" (say, something that fits into a 64-bit integer), you could use exponentiation by squaring to compute the matrix in O(log n) multiplies for a total cost of O(|V|^ω log n), where ω is the exponent of the fastest matrix multiplication algorithm. However, if the quantity you're looking for doesn't fit into a machine word, then the cost of this approach will depend on how big the answer is, as the multiplies will take variable amounts of time. For most graphs and small n this won't be an issue, but if n is large and there are other parts of the graph that are densely connected this will slow down a bit.
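As a minimal sketch of the matrix-power approach (plain Python with exact integers so the counts can't overflow; the triangle adjacency matrix is just an illustration):

    def mat_mul(A, B):
        n = len(A)
        return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
                for i in range(n)]

    def mat_pow(A, p):
        # Exponentiation by squaring: O(log p) matrix multiplications.
        n = len(A)
        result = [[int(i == j) for j in range(n)] for i in range(n)]   # identity
        while p:
            if p & 1:
                result = mat_mul(result, A)
            A = mat_mul(A, A)
            p >>= 1
        return result

    # Triangle graph: entry [0][1] of A^h counts the h-edge walks from node 0 to node 1.
    A = [[0, 1, 1],
         [1, 0, 1],
         [1, 1, 0]]
    print(mat_pow(A, 5)[0][1])   # 11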
Hope this helps!
You can write an algorithm that keeps searching all possible paths, but with a variable that holds the remaining number of hops.
For each candidate path, each hop decrements that variable; when it reaches zero without hitting the target, the algorithm backtracks and tries another path, and whenever a path reaches the target exactly as the variable reaches zero, that path is added to the list of desired paths.
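A minimal recursive sketch of that idea (counting the walks rather than storing them, same adjacency-dict convention as above):

    def count_walks(adj, node, target, hops_left):
        # Brute-force DFS: spend one hop per step and count the walks that
        # land on the target exactly when the hop budget runs out.
        if hops_left == 0:
            return 1 if node == target else 0
        return sum(count_walks(adj, nxt, target, hops_left - 1) for nxt in adj[node])

    triangle = {'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B']}
    print(count_walks(triangle, 'A', 'B', 5))   # 11; exponential time, but simple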
Question:
You are given a tree with n nodes (n can be up to 10^5) and n-1 bidirectional edges. Let's say each node contains two values:
Its index (just a unique label for the node), say from 1 to n.
Its value Vi, which can vary from 1 to 10^8.
Now there will be multiple queries of the same type (up to 10^5 of them) on this same tree, as follows:
You are given node1, node2 and a value P (which can vary from 1 to 10^8).
For each such query, you have to find the number of nodes on the path from node1 to node2 whose value is less than P.
NOTE: There is a unique path between any two nodes, and no two edges connect the same pair of nodes.
Required time complexity: O(n log n), or anything else that runs within 1 second under the given constraints.
What I have Tried:
(A) I could solve it easily if the value of P were fixed, using an LCA approach in O(n log n) by storing the following at each node:
the number of nodes with value less than P on the path from the root to that node.
But here P varies from query to query, so this does not help.
(B) The other approach I was considering is a plain DFS per query, but that takes O(nq), where q is the number of queries. Since n and q can both be up to 10^5, this won't fit the time limit either.
I could not think anything else. Any help would be appreciated. :)
Source:
I read this problem somewhere, on SPOJ I think, but I cannot find it now. I tried searching the web but could not find a solution for it anywhere (Codeforces, CodeChef, SPOJ, StackOverflow).
Let ans(v, P) be the answer for the vertical path from the root to v with the given value P.
How can we compute it? There's a simple offline solution: we can store all queries for a given node in a vector associated with it, then run a depth-first search, keeping all values on the current root-to-node path in a data structure that can do the following:
add a value
delete a value
count the number of elements smaller than X
Any balanced binary search tree would do. You can make it even simpler: if you know all the queries beforehand, you can compress the values so that they're in the [0..n - 1] range and use a binary indexed tree.
Back to the original problem: the answer to a (u, v, P) query is ans(v, P) + ans(u, P) - 2 * ans(LCA(u, v), P), plus one more if the value stored at LCA(u, v) is itself less than P (the LCA node is counted in both ans(u, P) and ans(v, P) but subtracted twice).
That's it. The time complexity is O((N + Q) log N).
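A rough sketch of the offline ans(v, P) part in Python (the function name and interface are made up for illustration; values and query thresholds are compressed together, and a binary indexed tree holds the values currently on the root-to-node path):

    from bisect import bisect_left

    def vertical_path_counts(n, values, adj, queries):
        # values[v] : value of node v (0-based indices)
        # adj[v]    : list of neighbours of v
        # queries   : list of (v, P); returns ans(v, P) for each query, i.e. the
        #             number of nodes with value < P on the root(0)..v path.
        coords = sorted(set(values) | {p for _, p in queries})   # coordinate compression
        idx = {x: i for i, x in enumerate(coords)}
        m = len(coords)

        bit = [0] * (m + 1)                   # binary indexed tree over compressed values
        def bit_add(i, delta):
            i += 1
            while i <= m:
                bit[i] += delta
                i += i & -i
        def bit_prefix(i):                    # sum over compressed indices < i
            s = 0
            while i > 0:
                s += bit[i]
                i -= i & -i
            return s

        by_node = [[] for _ in range(n)]      # queries attached to their node
        for qi, (v, p) in enumerate(queries):
            by_node[v].append((qi, p))
        answers = [0] * len(queries)

        # Iterative DFS from the root; the BIT always holds the current path's values.
        stack = [(0, -1, False)]
        while stack:
            v, parent, leaving = stack.pop()
            if leaving:
                bit_add(idx[values[v]], -1)   # value leaves the current path
                continue
            bit_add(idx[values[v]], +1)       # value enters the current path
            for qi, p in by_node[v]:
                answers[qi] = bit_prefix(bisect_left(coords, p))   # values strictly < p
            stack.append((v, parent, True))
            for w in adj[v]:
                if w != parent:
                    stack.append((w, v, False))
        return answers

Combining this with an LCA structure (e.g. binary lifting) and attaching each (u, v, P) query to u, v and LCA(u, v) gives the full O((N + Q) log N) solution described above.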
I have a question about the beam search algorithm.
Let's say that n = 2 (the number of nodes we are going to expand from every node). So, at the beginning, we only have the root, and we expand 2 nodes from it. Now, from each of those two nodes, we expand two more, so at the moment we have 4 leaves. We will continue like this till we find the answer.
Is this how beam search works? Does it expand only n = 2 children of every node, or does it keep 2 leaf nodes at all times?
I used to think that n = 2 meant we should have at most 2 active nodes per expanded node, not two for the whole tree.
In the "standard" beam search algorithm, at every step, the total number of the nodes you currently "know about" is limited - and NOT the number of nodes you will follow from each node.
Concretely, if n = 2, it means that the "beam" will be of size at most 2, at all times. So, initially, you start from one node, then you discover all nodes that are reachable from it, but discard all of them but two, and finish step 1 with 2 nodes. At step 2, you have two nodes, and you will expand both, and discard all nodes again, except exactly 2 nodes (total, not from each!). In the next steps, similarly, you will keep 2 nodes after each step.
Choosing which node to keep is usually done by some heuristic function that evaluates which node is closest to the target.
Note that the beam search algorithm is not complete (i.e., it may not find a solution even if one exists) nor optimal (i.e., it may not find the best solution). The easiest way to see this is to note that when n = 1, it basically reduces to greedy best-first search.
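A small Python sketch of that behaviour (the successors, heuristic and is_goal functions are placeholders; the point is that the beam holds the k best nodes of the whole level, not k per parent):

    def beam_search(start, successors, heuristic, is_goal, k=2, max_steps=100):
        beam = [start]                          # the entire frontier: at most k nodes
        for _ in range(max_steps):
            candidates = []
            for node in beam:
                for child in successors(node):
                    if is_goal(child):
                        return child
                    candidates.append(child)
            # Keep the k most promising candidates across ALL parents.
            beam = sorted(candidates, key=heuristic)[:k]
            if not beam:
                return None                     # dead end: beam search is not complete
        return None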
In beam search, instead of choosing the best token to generate at each timestep, we keep k possible tokens at each step. This fixed-size memory footprint k is called the beam width, on the metaphor of a flashlight beam that can be parameterized to be wider or narrower.
Thus at the first step of decoding, we compute a softmax over the entire vocabulary, assigning a probability to each word. We then select the k best options from this softmax output. These initial k outputs are the search frontier, and these k initial words are called hypotheses. A hypothesis is an output sequence, a translation-so-far, together with its probability.
At subsequent steps, each of the k best hypotheses is extended incrementally by being passed to distinct decoders, which each generate a softmax over the entire vocabulary to extend the hypothesis to every possible next token. Each of these k * V hypotheses is scored by P(y_i | x, y_<i): the probability of the current word choice multiplied by the probability of the path that led to it. We then prune the k * V hypotheses down to the k best hypotheses, so there are never more than k hypotheses at the frontier of the search, and never more than k decoders.
The beam size (or beam width) is the k mentioned above.
Source: https://web.stanford.edu/~jurafsky/slp3/ed3book.pdf
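As a rough sketch of one such decoding step (log probabilities; score_next is a hypothetical function returning the softmax distribution over the vocabulary given the prefix):

    import math

    def beam_step(hypotheses, score_next, vocab, k):
        # hypotheses: list of (token_list, log_prob) pairs, at most k of them.
        candidates = []
        for prefix, logp in hypotheses:
            probs = score_next(prefix)                  # softmax over the whole vocabulary
            for tok in vocab:
                candidates.append((prefix + [tok], logp + math.log(probs[tok])))
        # Prune the k * V candidates back down to the k best hypotheses.
        candidates.sort(key=lambda h: h[1], reverse=True)
        return candidates[:k]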
I have two sets of points plotted in a coordinate system. Each point in a set must be matched to at least one point in the other set, in such a way that the sum of the lengths of the lines drawn by joining the matched points is as low as possible. To make it clear, line drawing is just an abstraction; the actual output is just the pairs of points that must be matched.
I've seen this question about a similar problem, except that in my case there's no single-link restriction since the sets may have different sizes. Is there any kind of problem that describes this situation? More specifically, what algorithm could I use to solve this, assuming each set may have a maximum of 10 points?
Algorithm
You can model this as a network flow problem.
By having a source of 1 at each point in the first set, and a sink of 1 at each point in the second set, plus an extra node 'dest' for any left over capacity, any valid flow will always connect every point.
Make edges between the points with cost according to the distance between the points.
So far we have a network whose solution will be the lowest cost matching of set 1 to set 2 (i.e. each point will have a single link).
To allow multiple links you can simply make the following additions:
add 0 weight edges between each point in set2 and 'dest' (this allows points in set 2 to be multiply connected)
add 0 weight edges between 'dest' and each point in set1 (this allows points in set 1 to be multiply connected)
Example Python code using Networkx
    import networkx as nx
    import random

    G = nx.DiGraph()

    set1 = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']
    set2 = ['a', 'b', 'c']

    # Assume set1 >= set2 (or swap sets)
    assert len(set1) >= len(set2)

    G.add_node('dest', demand=len(set1) - len(set2))

    for person in set1:
        G.add_node(person, demand=-1)
        G.add_edge('dest', person, weight=0)
        for project in set2:
            cost = random.randint(1, 10)               # Assign appropriate costs here
            G.add_edge(person, project, weight=cost)   # Edge taken if person does this project

    for project in set2:
        G.add_node(project, demand=1)
        G.add_edge(project, 'dest', weight=0)

    flowdict = nx.min_cost_flow(G)

    for person in set1:
        for project, flow in flowdict[person].items():
            if flow:
                print(person, '->', project)
You can use a discrete optimization approach (Integer Programming).
We have two sets A, of size X, and B, of size Y. This means a maximum of X*Y links, each described by a boolean variable: L(i,j) = L(Y*i+j) is 1 if nodes A(i) and B(j) are linked, 0 if not. If X = Y = 10, we can write link L(7,3) as L73.
We can rewrite the problem like this:
Node A(i) has at least one link: X (say, ten) criteria with i from 0 to X-1, each of them comprised of Y components:
L(i,0)+L(i,1)+L(i,2)+...+L(i,Y-1) >= 1
Node B(j) has at least one link, and there are Y criteria made up of X components:
L(0,j)+L(1,j)+L(2,j)+...+L(X-1,j) >= 1
The minimal cost requirement becomes:
cost = SUM(C(0,0)*L(0,0) + C(0,1)*L(0,1) + ... + C(9,9)*L(9,9))
With these conventions, we can easily build the matrices for an ILP problem, that can be passed to our favorite ILP solving package or library (C, Java, Python, even PHP).
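For instance, a minimal sketch of that formulation with PuLP (the cost matrix C[i][j], holding the distance between A(i) and B(j), is assumed to be given):

    import pulp

    def min_cost_links(C):
        X, Y = len(C), len(C[0])
        prob = pulp.LpProblem("min_cost_matching", pulp.LpMinimize)
        # One boolean variable per possible link L(i, j).
        L = pulp.LpVariable.dicts("L", [(i, j) for i in range(X) for j in range(Y)],
                                  cat="Binary")
        # Objective: total length of the chosen links.
        prob += pulp.lpSum(C[i][j] * L[(i, j)] for i in range(X) for j in range(Y))
        # Every A(i) has at least one link, and so does every B(j).
        for i in range(X):
            prob += pulp.lpSum(L[(i, j)] for j in range(Y)) >= 1
        for j in range(Y):
            prob += pulp.lpSum(L[(i, j)] for i in range(X)) >= 1
        prob.solve()
        return [(i, j) for i in range(X) for j in range(Y) if L[(i, j)].value() == 1]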
====
A self-contained "greedy" algorithm which is not guaranteed to find a minimum, but is reasonably quick and should give reasonable results unless you feed it a pathological data set, is:
- connect all points in the smaller set, each to its nearest point in the other set.
- connect all unconnected points remaining in the larger set, each to its nearest point in the first set, whether that point is already connected or not.
As an optimization, you can then enumerate the points in the larger data set; if one of them (say A) is singly connected to a point B in the smaller set which is multiply connected, and B is not A's nearest neighbour (say that is C), you can switch the link from A-B to A-C. This takes care of one of the simplest problems that may arise from the "greediness" of the algorithm.
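A quick Python sketch of the greedy pass (Euclidean distance on 2-D tuples; the re-linking optimisation above is left out):

    from math import dist   # Python 3.8+

    def greedy_match(set_a, set_b):
        # Returns (point in smaller set, point in larger set) links covering every point.
        small, large = (set_a, set_b) if len(set_a) <= len(set_b) else (set_b, set_a)
        links = []
        # 1. Every point in the smaller set links to its nearest point in the larger set.
        for p in small:
            links.append((p, min(large, key=lambda q: dist(p, q))))
        # 2. Every still-unconnected point in the larger set links to its nearest
        #    point in the smaller set, even if that point is already connected.
        connected = {q for _, q in links}
        for q in large:
            if q not in connected:
                links.append((min(small, key=lambda p: dist(p, q)), q))
        return links

    print(greedy_match([(0, 0), (5, 5)], [(1, 0), (4, 4), (10, 10)]))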
I'm trying to find the width of a directed acyclic graph... as represented by an arbitrarily ordered list of nodes, without even an adjacency list.
The graph/list is for a parallel GNU Make-like workflow manager that uses files as its criteria for execution order. Each node has a list of source files and target files. We have a hash table in place so that, given a file name, the node which produces it can be determined. In this way, we can figure out a node's parents by examining the nodes which generate each of its source files using this table.
That is the ONLY ability I have at this point, without changing the code severely. The code has been in public use for a while, and the last thing we want to do is to change the structure significantly and have a bad release. And no, we don't have time to test rigorously (I am in an academic environment). Ideally we're hoping we can do this without doing anything more dangerous than adding fields to the node.
I'll be posting a community-wiki answer outlining my current approach and its flaws. If anyone wants to edit that, or use it as a starting point, feel free. If there's anything I can do to clarify things, I can answer questions or post code if needed.
Thanks!
EDIT: For anyone who cares, this will be in C. Yes, I know my pseudocode is in some horribly botched Python look-alike. I'm sort of hoping the language doesn't really matter.
I think the "width" you're considering here isn't really what you want - the width depends on how you assign levels to each node where you have some choice. You noticed this when you were deciding whether to assign all sources to level 0 or all sinks to the max level.
Instead, you just want to count the number of nodes and divide by the "critical path length", which is the longest path in the dag. This gives the average parallelism for the graph. It depends only on the graph itself, and it still gives you an indication of how wide the graph is.
To compute the critical path length, just do what you're doing - the critical path length is the maximum level you end up assigning.
In my opinion, when you're doing this type of last-minute development, it's best to keep the new structures separate from the ones you are already using. At this point, if I were pressed for time I would go for a simpler solution.
Create an adjacency matrix for the graph using the parent data (should be easy)
Perform a topological sort using this matrix. (or even use tsort if pressed for time)
Now that you have a topological sort, create an array level, one element for each node.
For each node:
If the node has no parents set its level to 0
Otherwise set it to one more than the maximum level among its parents (so a node is always placed after all of its parents).
Find the level containing the most nodes; that count is the maximum level width.
The question is, as Keith Randall asked, whether this is the right measurement for what you need.
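A small Python sketch of steps 2-5 (using a hypothetical parents mapping built from the file hash table; levels are assigned in topological order so every parent is levelled before its children):

    from collections import Counter
    from graphlib import TopologicalSorter   # Python 3.9+

    def dag_width(parents):
        # parents: dict mapping each node to the list of nodes it depends on.
        order = TopologicalSorter(parents).static_order()   # parents before children
        level = {}
        for node in order:
            deps = parents.get(node, [])
            level[node] = 0 if not deps else 1 + max(level[d] for d in deps)
        counts = Counter(level.values())      # number of nodes on each level
        return max(counts.values())           # the widest level

    # Tiny example: two independent chains feeding one final target.
    print(dag_width({'target': ['obj1', 'obj2'], 'obj1': ['src1'], 'obj2': ['src2'],
                     'src1': [], 'src2': []}))   # 2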
Here's what I (Platinum Azure, the original author) have so far.
Preparations/augmentations:
Add "children" field to linked list ("DAG") node
Add "level" field to "DAG" node
Add "children_left" field to "DAG" node. This is used to make sure that all children are examined before a parent is examined (in a later stage of the algorithm).
Algorithm:
Find the number of immediate children for all nodes; also, determine the leaves by adding nodes with children == 0 to a list.
    for l in L:
        l.children = 0
    for l in L:
        l.level = 0
        for p in l.parents:
            ++p.children

    Leaves = []
    for l in L:
        l.children_left = l.children
        if l.children == 0:
            Leaves.append(l)
Assign every node a "reverse depth" level. Normally by depth, I mean topologically sort and assign depth=0 to nodes with no parents. However, I'm thinking I need to reverse this, with depth=0 corresponding to leaves. Also, we want to make sure that no node is added to the queue without all its children "looking at it" first (to determine its proper "depth level").
    max_level = 0
    while !Leaves.empty():
        l = Leaves.pop()
        for p in l.parents:
            --p.children_left
            p.level = Max(p.level, l.level + 1)
            if p.children_left == 0:
                /* we only want to append parents whose level is known to be correct */
                Leaves.append(p)
                if p.level > max_level:
                    max_level = p.level
Now that every node has a level, simply create an array and then go through the list once more to count the number of nodes in each level.
    level_count = new int[max_level + 1]
    for l in L:
        ++level_count[l.level]
    width = Max(level_count)
So that's what I'm thinking so far. Is there a way to improve on it? It's linear time all the way, but it's got like five or six linear scans and there will probably be a lot of cache misses and the like. I have to wonder if there isn't a way to exploit some locality with a better data structure-- without actually changing the underlying code beyond node augmentation.
Any thoughts?