I don’t understand why the paths printed here would be correct.
How does the path parameter not permanently get changed after child nodes are added to the path here?
For example, say I have a graph like
1 with child nodes 2 3, where 2 has child nodes 4 5.
The path upon hitting node 4 will be [1,2,4]. But, from the attached code, why isn't the path variable still [1,2,4] before entering node 5? Nowhere in this code are nodes popped off the path on exit, so I just don't see how the code makes it such that the path becomes [1,2,5]. Does the answer involve recursive stack frames?
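Without seeing the attached code it's hard to be certain, but the pattern the question describes usually looks like this sketch, where each recursive call passes `path + [node]`. That expression builds a brand-new list, so every stack frame keeps its own copy: when the call for node 4 returns, node 2's frame still holds [1, 2], and node 5 extends that, not [1, 2, 4].

```python
def all_paths(tree, node, path, out):
    """tree: made-up dict mapping node -> list of children."""
    path = path + [node]   # a NEW list; the caller's path is untouched
    children = tree.get(node, [])
    if not children:       # leaf: record the finished path
        out.append(path)
    for child in children:
        all_paths(tree, child, path, out)

out = []
all_paths({1: [2, 3], 2: [4, 5]}, 1, [], out)
print(out)  # -> [[1, 2, 4], [1, 2, 5], [1, 3]]
```

The alternative style mutates one shared list with `path.append(node)` and therefore does need an explicit `path.pop()` on exit; if the code you have shows no pop, it is almost certainly building fresh lists as above.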
I'm using Castalia and my topology has only two nodes (node 0 and node 1). I need to make node 0 boot in a time between 0 to 91 randomly.
I tried using SN.node[1].startupRandomizations = 91; however, this parameter only adds a delay rather than drawing a random value.
I looked for something like this in the Castalia and OMNeT++ manuals but couldn't find it. Could you suggest a solution?
The correct parameter name is SN.node[0].startupRandomization (without the s at the end). Also note that you used node index 1 in your example above, while you say you want node 0.
I am not sure what you mean by "only adds a delay and not a drawing of the value". If you set this parameter to 91, it will draw a random value in the interval [0, 91] and add it to any startupOffset the node already has. This will indeed randomise the startup time the way you want.
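For example, in the simulation's omnetpp.ini (assuming the usual Castalia configuration layout, with node index 0 as intended):

```ini
# node 0 boots at a uniformly random offset in [0, 91] seconds
SN.node[0].startupRandomization = 91
```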
Is there an option in CPLEX that allows the addition of cuts only at the root?
I would expect so, but I can't find the name of the option.
There are several ways:
Set the node limit to 1 (or 0?) so that CPLEX only works on the root node. Add your cuts, then relax the node limit and solve.
When you try to add a cut, first query the node count (or something similar) using the query callback, and only add the cut when the node count is 0 (or 1?).
Drop all the integrality constraints to turn the problem into an LP. Add your cuts, then add the integrality constraints back and solve.
I'm trying to write an algorithm which will propagate values from a starting node to the entire connected component. Basically, if A receives 5 requests, and A sends 5 requests to B for each request A receives, B will receive 25 requests.
So basically, I'm trying to go from this
to this
I've written the following snippet in neo4j:
MATCH (a:Loc)-[r:ROAD]->(b:Loc)
SET b.volume = b.volume + a.volume * r.cost
RETURN a,r,b
But, what I don't know is how I am supposed to specify a starting point for this algorithm to start? It appears as if neo4j is updating the values correctly in this case, but I don't think this will work for a larger graph. I want to explicitly make the algorithm start propagating values from the START node.
Thanks.
I'm sure there will be a better answer, and this approach has some limitations since some assumptions are made about the graph, but this works for your example.
Note that I added an id property to the :Loc nodes, but I only used it to select the start (and for printing the node id at the end).
MATCH p=(n:Loc)<-[:ROAD*]-(:Loc {id: 0})
WITH DISTINCT n, max(length(p)) as maxLp
ORDER BY maxLp // order the nodes by their maximum distance from start
MATCH (n)<-[r:ROAD]-(p:Loc)
SET n.volume = n.volume + r.cost * p.volume
RETURN DISTINCT n.id, n.volume
And here's the result:
n.id n.volume
1 4000
2 200000
3 200000
4 16400000
5 508000000
6 21632000000
The idea here was to get the longest paths to each node from the starting node. These are ordered by "closeness" and then the volumes are updated in order of "closeness".
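The same idea can be sketched in plain Python: on a DAG, ordering nodes by maximum distance from the start is one way of obtaining a topological order, so a Kahn-style pass gives the same guarantee. The graph and costs below are made up for illustration, not the asker's data:

```python
from collections import defaultdict, deque

# made-up DAG: (src, dst, cost); the start node seeds the volumes
edges = [(0, 1, 5), (0, 2, 3), (1, 3, 2), (2, 3, 4)]
volume = defaultdict(int, {0: 5})   # start node 0 receives 5 requests

indeg, adj = defaultdict(int), defaultdict(list)
for s, d, c in edges:
    adj[s].append((d, c))
    indeg[d] += 1

# Kahn's algorithm: a node is processed only once every predecessor is
# done, so its volume is final before it is pushed downstream
nodes = {s for s, _, _ in edges} | {d for _, d, _ in edges}
queue = deque(n for n in nodes if indeg[n] == 0)
while queue:
    n = queue.popleft()
    for d, c in adj[n]:
        volume[d] += volume[n] * c
        indeg[d] -= 1
        if indeg[d] == 0:
            queue.append(d)

print(dict(volume))  # -> {0: 5, 1: 25, 2: 15, 3: 110}
```

Each node's volume is final before it is pushed downstream, which is exactly what ordering by maxLp arranges in the Cypher version.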
In this case the planner will use the labels to find starting places for the query (you can run an EXPLAIN of the query to see the query plan), so it's going to match to all :Loc nodes and expand the pattern and modify the properties accordingly.
This will be for all :Loc nodes, is that what you want, or do you only want this to apply for some smaller portion of your graph reachable from some starting node?
Question: Divide the set of vertices of the graph in Problem 1 into strongly connected components
(SCC). Namely, specify which vertices are in the first strongly connected component, which
in the second, and so on.
Is anyone able to confirm I've done this correctly? Namely, when I reach vertex 4 I have the option to make the first SCC either 1,7,2,4,3 (as shown) or 1,7,2,4,6,5, depending on which way I choose to travel. Is there a method to this, or can I simply choose?
order:
1,2,7,3,4,5,8,6
SCC:
1,7,2,4,3
5
8
6
The strongly connected component is {1,2,3,4,5,6,7}. If you don't get that, your algorithm (or your implementation) has a bug. There is a definition of Strongly Connected Component, and several well-known algorithms; both can be found easily in Wikipedia (and many other internet resources) and, most likely, in your textbook and/or course notes. (If you don't have course notes, you'll easily find some for similar courses.)
You can add 5 and 6 to 1,7,2,4,3, since both are reachable from the others via 4.
In DFS
you have to keep visiting nodes and building the tree while the stack is not empty;
when it empties, restart from the lowest-id vertex that is still white.
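For reference, here is a generic Kosaraju-style SCC sketch in Python. The edge list at the bottom is hypothetical (the graph from Problem 1 isn't shown in the question), but the algorithm applies unchanged:

```python
from collections import defaultdict

def sccs(edges):
    """Kosaraju: post-order DFS on G, then DFS on the reversed graph."""
    adj, radj = defaultdict(list), defaultdict(list)
    nodes = set()
    for s, d in edges:
        adj[s].append(d)
        radj[d].append(s)
        nodes |= {s, d}
    order, seen = [], set()
    def dfs(n):                      # first pass: record finish order
        seen.add(n)
        for m in adj[n]:
            if m not in seen:
                dfs(m)
        order.append(n)
    for n in sorted(nodes):
        if n not in seen:
            dfs(n)
    comps, seen = [], set()
    def rdfs(n, comp):               # second pass: collect one component
        seen.add(n)
        comp.add(n)
        for m in radj[n]:
            if m not in seen:
                rdfs(m, comp)
    for n in reversed(order):        # latest finisher first
        if n not in seen:
            comp = set()
            rdfs(n, comp)
            comps.append(comp)
    return comps

# hypothetical graph: 1 -> 2 -> 3 -> 1 forms a cycle, 3 -> 4 is a tail
print(sccs([(1, 2), (2, 3), (3, 1), (3, 4)]))  # -> [{1, 2, 3}, {4}]
```

Note there is no choice involved: two vertices are in the same SCC exactly when each can reach the other, so the partition is unique.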
I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually.
alt text http://img179.imageshack.us/img179/4315/pon.jpg
Problem I am trying to solve:
1. Constructing the dependency graph
Given the connectivity of the graph and a metric that determines how strongly one node depends on another, order the dependencies. For instance, I could add a few rules saying that
node 3 depends on node 4
node 2 depends on node 3
node 3 depends on node 5
But because the final rule is not "valuable" (again based on the same metric), I will not add the rule to my system.
2. Execute the request order
Once I have built the dependency graph, execute the list in an order that maximizes the final connectivity. I am not sure if this is really a problem, but I somehow have a feeling that there might exist more than one order, in which case it is required to choose the best one.
First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem but I am a little confused at the moment. Any suggestions?
PS: I am a little confused about the problem myself so please don't flame on me for that. If any clarifications are needed, I will try to update the question.
It looks like you are trying to determine an ordering (a "partial ordering", to give Google a search term) on requests you send to nodes with dependencies between them.
If you google "partial order dependency graph", you get a link to here, which should give you enough information to figure out a good solution.
In general, you want to sort the nodes in such a way that nodes come after their dependencies; AKA topological sort.
I'm a bit confused by your ordering constraints vs. the graphs that you picture: nothing matches up. That said, it sounds like you have soft ordering constraints (A should come before B, but doesn't have to) with costs for violating a constraint. Finding an optimal schedule is NP-hard, but I bet you could get a pretty good one using a DFS biased towards large-weight edges, then deleting all the back edges.
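A rough sketch of that heuristic, with made-up edge data (each edge (a, b, w) reads "a should come before b, at cost w if violated"); skipping already-visited targets during the DFS is what implicitly deletes the back edges:

```python
from collections import defaultdict

def weighted_order(nodes, edges):
    """edges: (a, b, w) = soft constraint 'a before b', cost w."""
    adj = defaultdict(list)
    for a, b, w in edges:
        adj[a].append((w, b))
    for lst in adj.values():
        lst.sort(reverse=True)     # follow heavier constraints first
    order, seen = [], set()
    def dfs(n):
        seen.add(n)
        for _, m in adj[n]:
            if m not in seen:      # a visited target here is a back edge
                dfs(m)
        order.append(n)            # post-order
    for n in nodes:
        if n not in seen:
            dfs(n)
    order.reverse()                # reverse post-order = schedule
    return order

# the three rules from the question: 3 needs 4, 2 needs 3, 3 needs 5
print(weighted_order([2, 3, 4, 5], [(4, 3, 2), (3, 2, 1), (5, 3, 3)]))
# -> [5, 4, 3, 2]
```

On acyclic constraints this is an exact topological sort; when the soft constraints contain a cycle, the edges dropped as back edges are the ones whose cost you pay.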
If you know in advance the dependencies of each node, you can easily build layers.
It's amusing, but I faced the very same problem when organizing... the compilation of the different modules of my application :)
The idea is simple:
def buildLayers(nodes):
    layers = []
    remaining = nodes[:]  # copy the list
    placed = set()        # nodes already assigned to a layer
    while remaining:
        # next layer: nodes whose dependencies have all been placed
        layer = [n for n in remaining
                 if all(d in placed for d in n.dependencies)]
        if not layer:
            raise RuntimeError('Cyclic dependency')
        for n in layer:
            remaining.remove(n)
            placed.add(n)
        layers.append(layer)
    return layers
Then you can pop the layers one at a time, and each time you'll be able to send the request to each of the nodes of this layer in parallel.
If you keep a set of the already-selected nodes, and the dependencies are also represented as sets, the check is more efficient. Other implementations would use event propagation to avoid all those nested loops...
Note that the worst case is O(n³), but I only had some thirty components and they are not THAT related :p
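The layer-by-layer dispatch described above could be sketched like this; send_request and the layer contents are placeholders, not real endpoints:

```python
from concurrent.futures import ThreadPoolExecutor

def send_request(node):
    # placeholder for the real request sent to one node
    return f"sent to {node}"

layers = [[4, 5], [3], [2]]   # e.g. the output of buildLayers

results = []
with ThreadPoolExecutor() as pool:
    for layer in layers:
        # every node in a layer can be contacted in parallel; consuming
        # map() blocks until the whole layer is done before moving on
        results.extend(pool.map(send_request, layer))

print(results)  # -> ['sent to 4', 'sent to 5', 'sent to 3', 'sent to 2']
```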