Optimized algorithm to schedule tasks with dependency? - algorithm

There are tasks that read from a file, do some processing and write to a file. These tasks are to be scheduled based on their dependencies. Tasks can also be run in parallel, so the algorithm needs to be optimized to run dependent tasks serially and independent tasks in parallel as much as possible.
eg:
A -> B
A -> C
B -> D
E -> F
So one way to run this would be to run 1, 2 & 4 in parallel, followed by 3 (the numbers refer to the four dependency lines above: 1 = A -> B, 2 = A -> C, 3 = B -> D, 4 = E -> F).
Another way could be to run 1 and then run 2, 3 & 4 in parallel.
Another could be to run 1 and 3 in serial, and 2 and 4 in parallel.
Any ideas?

Let each task (e.g. A, B, ...) be a node in a directed acyclic graph and define the arcs between the nodes based on your dependencies 1, 2, ....
You can then topologically order your graph (or use a search-based method like BFS). In your example, C <- A -> B -> D and E -> F, so A & E have a depth of 0 and need to be run first. Then you can run F, B and C in parallel, followed by D.
Also, take a look at PERT.
Update:
How do you know whether B has a higher priority than F?
This is the intuition behind the topological sort used to find the ordering.
It first finds the root nodes (those with no incoming edges), since at least one must exist in a DAG. In your case, those are A & E. This settles the first round of jobs which need to be completed. Next, the children of the root nodes (B, C and F) need to be finished; these are easily obtained by querying your graph. The process is then repeated until there are no more nodes (jobs) left to finish.
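
As an aside, recent Python versions ship exactly this level-by-level process in the standard library. A minimal sketch using graphlib (Python 3.9+) with the question's dependencies:
from graphlib import TopologicalSorter

# task -> set of prerequisites
deps = {'B': {'A'}, 'C': {'A'}, 'D': {'B'}, 'F': {'E'}}

ts = TopologicalSorter(deps)
ts.prepare()                     # raises CycleError if the graph has a cycle
while ts.is_active():
    batch = ts.get_ready()       # every task in `batch` can run in parallel
    print(batch)                 # e.g. ('A', 'E'), then ('B', 'C', 'F'), then ('D',)
    ts.done(*batch)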

Given a mapping between items, and items they depend on, a topological sort orders items so that no item precedes an item it depends upon.
This Rosetta code task has a solution in Python which can tell you which items are available to be processed in parallel.
Given your input the code becomes:
try:
    from functools import reduce
except:
    pass

data = { # From: http://stackoverflow.com/questions/18314250/optimized-algorithm-to-schedule-tasks-with-dependency
    # This <- This (Reverse of how shown in question)
    'B': set(['A']),
    'C': set(['A']),
    'D': set(['B']),
    'F': set(['E']),
    }

def toposort2(data):
    for k, v in data.items():
        v.discard(k)   # Ignore self dependencies
    extra_items_in_deps = reduce(set.union, data.values()) - set(data.keys())
    data.update({item: set() for item in extra_items_in_deps})
    while True:
        ordered = set(item for item, dep in data.items() if not dep)
        if not ordered:
            break
        yield ' '.join(sorted(ordered))
        data = {item: (dep - ordered) for item, dep in data.items()
                if item not in ordered}
    assert not data, "A cyclic dependency exists amongst %r" % data

print('\n'.join(toposort2(data)))
Which then generates this output:
A E
B C F
D
Items on one line of the output could be processed in any sub-order or, indeed, in parallel; just so long as all items on an earlier line are processed before the items on later lines, to preserve the dependencies.
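
To actually execute such a schedule, one option is to run each output line as a batch on a thread pool and wait for the batch to finish before starting the next one. A minimal sketch using the standard library (run_task is a hypothetical placeholder for the real read/process/write work):
from concurrent.futures import ThreadPoolExecutor

def run_task(name):
    print('running', name)        # stand-in for: read file, process, write file

levels = [['A', 'E'], ['B', 'C', 'F'], ['D']]   # the three output lines above, split into lists

with ThreadPoolExecutor() as pool:
    for level in levels:
        # tasks within a level are independent; block until the whole
        # level is done before starting the next level
        list(pool.map(run_task, level))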

Your tasks form a directed graph with (hopefully) no cycles.
It contains sources and wells (sources being tasks that depend on nothing (no inbound edge), wells being tasks that unlock no other task (no outbound edge)).
A simple solution would be to give priority to your tasks based on their usefulness (let's call it U).
Typically, starting with the wells, they have a usefulness U = 1, because we want them to finish.
Put all the wells' predecessors in a list L of nodes currently being assessed.
Then, taking each node in L, its U value is the sum of the U values of the nodes that depend on it, plus 1. Put all parents of the current node into L.
Loop until all nodes have been processed.
Then, start the task that can be started and has the biggest U value, because it is the one that will unlock the largest number of tasks.
In your example,
U(C) = U(D) = U(F) = 1
U(B) = U(E) = 2
U(A) = 4
Meaning you'll start A first with E if possible, then B and C (if possible), then D and F
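
A minimal recursive sketch of that scoring (my own illustration; the dictionary input format and the names are assumptions, not part of the answer):
from collections import defaultdict

def usefulness(dependencies):
    """Score each task as 1 plus the U values of the tasks that depend on it directly."""
    # children[x] = tasks that directly depend on x
    children = defaultdict(set)
    nodes = set(dependencies)
    for task, prereqs in dependencies.items():
        nodes |= prereqs
        for p in prereqs:
            children[p].add(task)

    memo = {}
    def u(node):
        if node not in memo:
            memo[node] = 1 + sum(u(child) for child in children[node])
        return memo[node]

    return {n: u(n) for n in sorted(nodes)}

# task -> set of prerequisites, the example from the question
print(usefulness({'B': {'A'}, 'C': {'A'}, 'D': {'B'}, 'F': {'E'}}))
# {'A': 4, 'B': 2, 'C': 1, 'D': 1, 'E': 2, 'F': 1}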

First generate a topological ordering of your tasks and check for cycles at this stage. Thereafter you can exploit parallelism by looking at maximal antichains; roughly speaking, these are task sets without dependencies between their elements.
For a theoretical perspective, this paper covers the topic.

Without considering the serial/parallel aspect of the problem, this code can at least determine the overall serial solution:
def order_tasks(num_tasks, task_pair_list):
    #initialize the dict of per-task dependency hashes
    #(task_pair_list is assumed to hold (task, dependency) pairs)
    task_deps = {}
    for i in range(0, num_tasks):
        task_deps[i] = {}
    #store the dependencies
    for task, dep in task_pair_list:
        task_deps[task].update({dep: 1})
    #loop through the tasks to determine an order
    while len(task_deps) > 0:
        delete_task = None
        #find a task with no outstanding dependencies
        for task in task_deps:
            if len(task_deps[task]) == 0:
                delete_task = task
                print(task)
                break
        if delete_task is None:
            return -1    #every remaining task still has dependencies: cycle
        task_deps.pop(delete_task)
        #check each remaining task's hash of dependencies for delete_task
        for task in task_deps:
            if delete_task in task_deps[task]:
                del task_deps[task][delete_task]
    return 0
If you update the loop that checks for dependencies that have been fully satisfied to loop through the entire list and execute/remove tasks that no longer have any dependencies all at the same time, that should also allow you to take advantage of completing the tasks in parallel.
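
A minimal sketch of that batched variant (my own adaptation of the code above; the names and the (task, dependency) tuple format are assumptions):
def order_tasks_batched(num_tasks, task_pair_list):
    """Yield batches of task ids; tasks within one batch can run in parallel."""
    task_deps = {i: set() for i in range(num_tasks)}
    for task, dep in task_pair_list:
        task_deps[task].add(dep)
    while task_deps:
        # every task whose dependencies are all satisfied
        ready = [t for t, deps in task_deps.items() if not deps]
        if not ready:
            raise ValueError("cyclic dependency")
        yield ready
        for t in ready:
            del task_deps[t]
        for deps in task_deps.values():
            deps.difference_update(ready)

# e.g. with A..F numbered 0..5 and the edges from the question:
for batch in order_tasks_batched(6, [(1, 0), (2, 0), (3, 1), (5, 4)]):
    print(batch)    # [0, 4] then [1, 2, 5] then [3]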

Related

Cartesian product using spark

I have two sequences A and B. I want to generate a Boolean sequence where each element indicates whether the corresponding element of A contains some element of B as a substring. For example:
a = ["abababab", "ccffccff", "123123", "56575656"]
b = ["ab", "55", "adfadf", "123", "5656"]
output = [True, False, True, True]
A and B do not fit in memory. One solution may be as follows:
val a = sc.parallelize(List("abababab", "ccffccff", "123123", "56575656"))
val b = sc.parallelize(List("ab", "55", "adfadf", "123", "5656"))
a.cartesian(b)
  .map({ case (x, y) => (x, x contains y) })
  .reduceByKey(_ || _)
  .map(w => w._1 + "," + w._2)
  .saveAsTextFile("./output.txt")
One can see that there is no need to compute the full cartesian product, because once we find the first element of B that meets our condition we can stop the search. Take for example the first element of A: if we start iterating over B from the beginning, the very first element of B is already a substring of it, and therefore the output is True. In that case we have been very lucky, but in general there is no need to verify all combinations.
The question is: is there any other way to optimize this computation?
I believe the short answer is 'NO' :)
I also don't think it's fair to compare what Spark does with plain iteration. You have to remember that Spark is for huge data sets where sequential processing is not an option. It runs your function in parallel, with potentially thousands of tasks executed concurrently on many different machines, and it does this to ensure that processing finishes in a reasonable time even if the first element of A matches only the very last element of B.
In contrast, iterating or looping is a sequential operation comparing two elements at a time. It is well suited to small data sets, but not to huge data sets and definitely not to distributed processing.
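
For data that does fit in memory, the short-circuiting check the question describes is a one-liner in plain Python (shown only for contrast with the distributed version; it is not a Spark job):
a = ["abababab", "ccffccff", "123123", "56575656"]
b = ["ab", "55", "adfadf", "123", "5656"]

# any() stops at the first element of b that is a substring of x
output = [any(y in x for y in b) for x in a]
print(output)   # [True, False, True, True]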

Spark example program runs very slow

I tried to use Spark to work on a simple graph problem. I found an example program in the Spark source folder: transitive_closure.py, which computes the transitive closure in a graph with no more than 200 edges and vertices. But on my own laptop it runs for more than 10 minutes and doesn't terminate. The command line I use is: spark-submit transitive_closure.py.
I wonder why Spark is so slow even when computing just such a small transitive closure result? Is it a common case? Is there any configuration I missed?
The program is shown below, and can be found in the Spark install folder on their website.
from __future__ import print_function

import sys
from random import Random

from pyspark import SparkContext

numEdges = 200
numVertices = 100
rand = Random(42)

def generateGraph():
    edges = set()
    while len(edges) < numEdges:
        src = rand.randrange(0, numEdges)
        dst = rand.randrange(0, numEdges)
        if src != dst:
            edges.add((src, dst))
    return edges

if __name__ == "__main__":
    """
    Usage: transitive_closure [partitions]
    """
    sc = SparkContext(appName="PythonTransitiveClosure")
    partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
    tc = sc.parallelize(generateGraph(), partitions).cache()

    # Linear transitive closure: each round grows paths by one edge,
    # by joining the graph's edges with the already-discovered paths.
    # e.g. join the path (y, z) from the TC with the edge (x, y) from
    # the graph to obtain the path (x, z).

    # Because join() joins on keys, the edges are stored in reversed order.
    edges = tc.map(lambda x_y: (x_y[1], x_y[0]))

    oldCount = 0
    nextCount = tc.count()
    while True:
        oldCount = nextCount
        # Perform the join, obtaining an RDD of (y, (z, x)) pairs,
        # then project the result to obtain the new (x, z) paths.
        new_edges = tc.join(edges).map(lambda __a_b: (__a_b[1][1], __a_b[1][0]))
        tc = tc.union(new_edges).distinct().cache()
        nextCount = tc.count()
        if nextCount == oldCount:
            break

    print("TC has %i edges" % tc.count())

    sc.stop()
There can be many reasons why this code doesn't perform particularly well on your machine, but most likely this is just another variant of the problem described in Spark iteration time increasing exponentially when using join. The simplest way to check whether that is indeed the case is to provide the spark.default.parallelism parameter on submit:
bin/spark-submit --conf spark.default.parallelism=2 \
examples/src/main/python/transitive_closure.py
If not limited otherwise, SparkContext.union, RDD.join and RDD.union set the number of partitions of the child RDD to the total number of partitions in the parents. Usually that is the desired behavior, but it can become extremely inefficient if applied iteratively.
The usage says the command line is
transitive_closure [partitions]
Setting default parallelism will only help with the joins in each partition, not the initial distribution of work.
I'm going to argue that MORE partitions should be used. Setting the default parallelism may still help, but the code you posted sets the number explicitly (the argument passed, or 2 if none is given). The absolute minimum should be the number of cores available to Spark, otherwise you're always working at less than 100%.
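
For example (same example script path as above; 8 is an assumed core count), the partition count can also be passed as the script's positional argument:
bin/spark-submit --conf spark.default.parallelism=8 \
    examples/src/main/python/transitive_closure.py 8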

Algorithm for topological sorting if cycles exist

Some programming languages (like Haskell) allow cyclic dependencies between modules. Since the compiler needs to know all definitions of all modules imported while compiling one module, it usually has to do some extra work if some modules import each other mutually or any other kind of cycle occurs. In that case, the compiler may not be able to optimize code as much as in modules that have no import cycles, since imported functions may not yet have been analyzed. Usually only one module of a cycle has to be compiled that way, as a binary object has no dependencies. Let's call such a module a loop-breaker.
Especially if the import cycles are interleaved, it is interesting to know how to minimize the number of loop-breakers when compiling a big project composed of hundreds of modules.
Is there an algorithm that, given a set of dependencies, outputs a minimal set of modules that need to be compiled as loop-breakers to compile the program successfully?
Example
I try to clarify what I mean in this example.
Consider a project with the four modules A, B, C and D. This is a list of dependencies between these modules; an entry "X Y" means Y depends on X:
A C
A D
B A
C B
D B
The same relation visualized as an ASCII-diagram:
D ---> B
^     /^
|    / |
|   /  |
|  L   |
A ---> C
There are two cycles in this dependency-graph: ADB and ACB. To break these cycles one could for instance compile modules C and D as loop-breakers. Obviously, this is not the best approach. Compiling A as a loop-breaker is completely sufficient to break both loops and you need to compile one less module as a loop-breaker.
This is the NP-hard (and APX-hard) problem known as minimum feedback vertex set. An approximation algorithm due to Demetrescu and Finocchi (pdf, "Combinatorial Algorithms for Feedback Problems in Directed Graphs" (2003)) works well in practice when there are no long simple cycles, as I would expect for your application.
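
A naive greedy heuristic (not the algorithm by Demetrescu and Finocchi, just my own sketch of the idea of removing vertices until the graph is acyclic; names are assumptions):
from collections import defaultdict

def find_cycle(succ, nodes):
    """Return some cycle as a list of nodes, or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    parent = {}

    def dfs(u):
        color[u] = GREY
        for v in succ[u]:
            if color[v] == GREY:            # back edge: reconstruct the cycle
                cycle, x = [v], u
                while x != v:
                    cycle.append(x)
                    x = parent[x]
                return cycle
            if color[v] == WHITE:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        color[u] = BLACK
        return None

    for n in nodes:
        if color[n] == WHITE:
            found = dfs(n)
            if found:
                return found
    return None

def greedy_loop_breakers(edges):
    """Repeatedly remove the highest-degree node on some remaining cycle."""
    nodes = {x for e in edges for x in e}
    succ = defaultdict(set)
    degree = defaultdict(int)
    for a, b in edges:                      # "a b" means b depends on a: edge a -> b
        succ[a].add(b)
        degree[a] += 1
        degree[b] += 1
    breakers = []
    while True:
        cycle = find_cycle(succ, nodes)
        if cycle is None:
            return breakers
        worst = max(cycle, key=lambda n: degree[n])
        breakers.append(worst)
        nodes.discard(worst)
        succ.pop(worst, None)
        for successors in succ.values():
            successors.discard(worst)

# The example from the question:
edges = [('A', 'C'), ('A', 'D'), ('B', 'A'), ('C', 'B'), ('D', 'B')]
print(greedy_loop_breakers(edges))   # ['A'] or ['B']; either one breaks both cycles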
Here is how to do it in Python:
from collections import defaultdict

def topological_sort(dependency_pairs):
    'Sort values subject to dependency constraints'
    num_heads = defaultdict(int)   # num arrows pointing in
    tails = defaultdict(list)      # list of arrows going out
    for h, t in dependency_pairs:
        num_heads[t] += 1
        tails[h].append(t)

    ordered = [h for h in tails if h not in num_heads]
    for h in ordered:
        for t in tails[h]:
            num_heads[t] -= 1
            if not num_heads[t]:
                ordered.append(t)
    cyclic = [n for n, heads in num_heads.items() if heads]
    return ordered, cyclic

def is_toposorted(ordered, dependency_pairs):
    '''Return True if all dependencies have been honored.
    Raise KeyError for missing tasks.
    '''
    rank = {t: i for i, t in enumerate(ordered)}
    return all(rank[h] < rank[t] for h, t in dependency_pairs)

print(topological_sort('aa'.split()))
ordered, cyclic = topological_sort('ah bg cf ch di ed fb fg hd he ib'.split())
print(ordered, cyclic)
print(is_toposorted(ordered, 'ah bg cf ch di ed fb fg hd he ib'.split()))
print(topological_sort('ah bg cf ch di ed fb fg hd he ib ba xx'.split()))
The runtime is linearly proportional to the number of edges (dependency pairs).
The algorithm is organized around a lookup table called num_heads that keeps a count of the number of predecessors (incoming arrows) for each node. In the ah bg cf ch di ed fb fg hd he ib example, the counts are:
node   number of incoming edges
----   ------------------------
 a     0
 b     2
 c     0
 d     2
 e     1
 f     1
 g     2
 h     2
 i     1
The algorithm works by "visiting" nodes with no predecessors. For example, nodes a and c have no incoming edges, so they are visited first.
Visiting means that the nodes are output and removed from the graph. When a node is visited, we loop over its successors and decrement their incoming count by one.
For example, in visiting node a, we go to its successor h and decrement its incoming count by one (so that h's count of 2 becomes 1).
Likewise, when visiting node c, we loop over its successors f and h, decrementing their counts by one (so that f's count of 1 becomes 0 and h's count of 1 becomes 0).
The nodes f and h no longer have incoming edges, so we repeat the process of outputting them and removing them from the graph until all the nodes have been visited. In the example, the visitation order (the topological sort) is:
a c f h e d i b g
If num_heads ever reaches a state where there are no remaining nodes without incoming edges, it means there is a cycle that cannot be topologically sorted; the algorithm stops and reports those nodes as cyclic.

what is the best algorithm to traverse a graph with negative nodes and looping nodes

I have a really difficult problem to solve, and I'm just wondering what algorithm can be used to find the quickest route. The undirected graph consists of positive and negative adjustments; these adjustments affect a bot or object which navigates the maze. The problem I have is mazes which contain loops that can be + or -. An example might help:
node A gives 10 points to the object
node B takes 15 from the object
node C gives 20 points to the object
route=""
the starting node is A, and the ending node is C
given the graph structure as:-
a(+10)-----b(-15)-----c+20
node() means the node loops to itself; - and + are the adjustments.
Nodes written without parentheses have no loop, so c+20 means node c has a positive adjustment of 20 but does not loop to itself.
If the bot or object has 10 points in its resource, then the best path would be:
a > b > c; the object would have 25 points when it arrives at c.
route="a,b,c"
This is quite easy to implement. The next challenge is knowing how to backtrack to a good node; let's assume that at each node you can find out any of its neighbouring nodes and their adjustment levels. Here is the next example:
If the bot started with only 5 points, then the best path would be
a > a > b > c; the bot would have 30 points when arriving at c.
route="a,a,b,c"
This was a very simple graph, but when you have many more nodes it becomes very difficult for the bot to know whether to loop at a good node or to go from one good node to another, while keeping track of a possible route.
Such a route would be a backtrack queue.
A harder example would result in lots of going back and forth:
bot has 10 points
a(+10)-----b(-5)-----c-30
a > b > a > b > a > b > a > b > a > b > c, having 5 pts left.
Another way the bot could do it is:
a > a > a > b > c
This also arrives at c with 5 pts but in far fewer moves; how you can program this is partly my question.
Does anyone know of a good algorithm to solve this? I've already looked into Bellman-Ford and Dijkstra, but these only give a simple path, not a looping one.
Could it be recursive in some way, or use some form of heuristic?
Referring to your analogy:
I think I get what you mean; a bit of pseudocode would be clearer. So far route() looks like:
q.add(v)
best = v
hash visited(v, true)
while (q is not empty)
    q.remove(v)
    for each u of v in G
        if u not visited before
            visited(u, true)
            best = u => v.dist
        else
            best = v => u.dist
This is a straightforward dynamic programming problem.
Suppose that for a given length of path, for each node, you want to know the best cost ending at that node, and where that route came from. (The data for that length can be stored in a hash, the route in a linked list.)
Suppose we have this data for paths of length n. Then for length n+1 we start with a clean slate, take each answer for length n, and move it one node forward. If we land on a node we don't yet have data for, or we beat the best score found so far for that node, we update that node's data with our improved score and extend the route (just this node linking back to the previous linked list).
Once we have this for the number of steps you want, find the node with the best existing route, and then you have your score and your route as a linked list.
========
Here is actual code implementing the algorithm:
class Graph:
    def __init__(self, nodes=[]):
        self.nodes = {}
        for node in nodes:
            self.insert(node)

    def insert(self, node):
        self.nodes[node.name] = node

    def connect(self, name1, name2):
        node1 = self.nodes[name1]
        node2 = self.nodes[name2]
        node1.neighbors.add(node2)
        node2.neighbors.add(node1)

    def node(self, name):
        return self.nodes[name]

class GraphNode:
    def __init__(self, name, score, neighbors=[]):
        self.name = name
        self.score = score
        self.neighbors = set(neighbors)

    def __repr__(self):
        return self.name

def find_path(start_node, start_score, end_node):
    prev_solution = {start_node: [start_score + start_node.score, None]}
    room_to_grow = True
    while end_node not in prev_solution:
        if not room_to_grow:
            # No point looping endlessly...
            return None
        room_to_grow = False
        solution = {}
        for node, info in prev_solution.items():
            score, prev_path = info
            for neighbor in node.neighbors:
                new_score = score + neighbor.score
                if neighbor not in prev_solution:
                    room_to_grow = True
                if 0 < new_score and (neighbor not in solution or solution[neighbor][0] < new_score):
                    solution[neighbor] = [new_score, [node, prev_path]]
        prev_solution = solution
    path = prev_solution[end_node][1]
    answer = [end_node]
    while path is not None:
        answer.append(path[0])
        path = path[1]
    answer.reverse()
    return answer
And here is a sample of how to use it:
graph = Graph([GraphNode('A', 10), GraphNode('B', -5), GraphNode('C', -30)])
graph.connect('A', 'A')
graph.connect('A', 'B')
graph.connect('B', 'B')
graph.connect('B', 'B')
graph.connect('B', 'C')
graph.connect('C', 'C')
print(find_path(graph.node('A'), 10, graph.node('C')))
Note that I explicitly connected each node to itself. Depending on your problem you might want to make that automatic.
(Note, there is one possible infinite loop left. Suppose that the starting node has a score of 0 and there is no way off of it. In that case we'll loop forever. It would take work to add a check for this case.)
I'm a little confused by your description; it seems like you are just looking for shortest-path algorithms, in which case Google is your friend.
In the example you've given, you have -ve adjustments which should really be +ve costs in the usual parlance of graph traversal. I.e. you want to find a path with the lowest cost, so you want more +ve adjustments.
If your graph has loops that are beneficial to traverse (i.e. decrease cost or increase points through adjustments) then the best path is undefined because going through the loop one more time will improve your score.
Here's some pseudocode:
steps = []
steps[0] = [None] * graph.#nodes
step = 1
while True:
    steps[step] = [None] * graph.#nodes
    for node in graph:
        for node2 in graph:
            steps[step][node2.index] = max(steps[step-1][node.index] + node2.cost,
                                           steps[step][node2.index])
    if steps[step][lastnode] >= 0:
        break
    step += 1

Dynamic programming question

I am stuck on one of my algorithm homework problems. Can anyone give me some hints to solve it? Here is the question:
Consider a chain-structured computation represented by a weighted graph G = (V, E), where
V = {v1, v2, ..., vn} and E = {(vi, vi+1) : 1 <= i <= n-1}. We are also given a chain of m identical processors P = {P1, ..., Pm} (i.e., there exists a communication link between Pk and Pk+1 for 1 <= k <= m-1).
The set of vertices V represents computation modules, and the set of edges E represents
communication between adjacent modules. Each node vi is assigned a weight wi denoting the
execution time of the module on a single processor. Each edge (vi, vi+1) is assigned a weight ci denoting the communication time between the two modules if they are assigned to different processors. If multiple modules are assigned to the same processor, they must be consecutive. Suppose modules va, va+1, ..., vb are assigned to processor Pk. Then the time taken by Pk, denoted Tk, is the time to compute the assigned modules plus the time to communicate with the neighbouring processors. Hence, Tk = wa + ... + wb + c(a-1) + cb, where c(a-1) = 0 if a = 1 and cb = 0 if b = n.
The objective of the problem is to find an assignment of V to P such that max over 1 <= k <= m of Tk
is minimized, where we assume that each processor must take at least one module. (This
assumption can be relaxed by adding m dummy modules with zero computation and
communication time.)
Develop a dynamic programming algorithm to solve this problem in polynomial time (i.e., O(mn)).
I tried to find the minimum execution time for each Pk and then find the max, but I doubt my solution is dynamic programming since there is no recursive formula. Please give me some hints!
Thanks!
I think you might be able to modify the Viterbi algorithm to solve this problem.
OK, this is easy.
Decompose your problem into a function you need to minimise, say F(n, k): the minimum achievable value of the maximum processor time when the first n modules are assigned to the first k processors.
Then derive the recurrence by choosing how many modules go on the k-th processor; if modules i+1, ..., n go on processor k, the first i modules must be handled by the first k-1 processors:
F(n, k) = min over i = k-1 .. n-1 of max( F(i, k-1), w[i+1] + ... + w[n] + c[i] + c[n] )
with
c[0] = c[n] = 0
F(0, 0) = 0
F(j, 0) = infinity for j > 0, and F(0, k) = infinity for k > 0
The answer is F(n, m).
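
A direct O(m·n²) evaluation of that recurrence in Python (my own sketch; the function and variable names are assumptions, and it does not attempt the O(mn) refinement the assignment asks for):
def min_makespan(w, c, m):
    """w[0..n-1] are module weights, c[0..n-2] are edge communication times,
    m is the processor count; returns the minimum possible max over k of Tk."""
    n = len(w)
    prefix = [0] * (n + 1)                  # prefix[i] = w[0] + ... + w[i-1]
    for i, wi in enumerate(w):
        prefix[i + 1] = prefix[i] + wi

    def cc(i):                              # c_i with c_0 = c_n = 0
        return c[i - 1] if 0 < i < n else 0

    INF = float('inf')
    # F[k][j] = best max-load when the first j modules go to the first k processors
    F = [[INF] * (n + 1) for _ in range(m + 1)]
    F[0][0] = 0
    for k in range(1, m + 1):
        for j in range(k, n + 1):           # each processor takes at least one module
            F[k][j] = min(
                max(F[k - 1][i], prefix[j] - prefix[i] + cc(i) + cc(j))
                for i in range(k - 1, j)
            )
    return F[m][n]

# Tiny sanity check: 4 modules, 2 processors; the best split is {v1, v2} | {v3, v4}.
print(min_makespan(w=[2, 3, 4, 1], c=[5, 1, 5], m=2))   # 6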
