Is it possible to fix both the tree topology and the branch lengths in MrBayes at the same time?
I have seen previous posts about fixing the tree topology: first define the tree, then use the "fixed" prior (assuming we have already defined species_topology):
Prset topologypr=fixed(species_topology);
propset eTBR(Tau)$prob=0;
The first line fixes the tree topology, and the second sets the probability of proposing moves to other topologies to zero.
Can I do something similar to fix the branch lengths as well?
For the proposal moves on the branch lengths, I think we should use
propset nslider(V)$prob=0;
But I am not sure what to put in:
Prset brlenspr=
Thanks in advance! I have searched many posts in MrBayes's mailing list, but none asked about fixing both at the same time; usually they only fix the topology.
After going through a lot of MrBayes tutorials, I found a way to achieve this myself. MrBayes 3.2 allows us to fix both the tree topology and the branch lengths: set the prior of both to a fixed tree given in Newick format, and then set the probability of moving away from the initial state to zero. Remember that you must define the fixed tree, with its branch lengths, in the same NEXUS file where you save your sequence data.
For example, you can define your tree in this way:
begin trees;
tree tree_1 = ((t6:0.3279207193,t9:0.9545036491):0.04205953353,t5:0.8895393161,(((t4:0.6557057991,t10:0.7085304682):0.9942697766,t3:0.5440660247):0.6405068138,((t8:0.1471136473,(t2:0.9022990451,t1:0.6907052784):0.9630242325):0.2891597373,t7:0.7954674177):0.5941420204):0.9388911405);
end;
Then in your MrBayes template you can use the following commands to fix the tree topology and branch length:
prset topologypr=fixed(tree_1);
prset brlenspr=fixed(tree_1);
Tau represents the topology and V denotes the branch length.
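Putting the pieces together, the full set of commands might look like the block below (a sketch: tree_1 is the tree defined above, and the move names eTBR(Tau) and nslider(V) are the ones discussed here; run showmoves to see the exact move names in your MrBayes version):

```
begin mrbayes;
    prset topologypr=fixed(tree_1);
    prset brlenspr=fixed(tree_1);
    propset eTBR(Tau)$prob=0;
    propset nslider(V)$prob=0;
end;
```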
I have been studying the suffix automaton string-matching algorithm for a few days. I watched videos and read documents, but I really can't get why we need to make a new node (under a special condition) and clone it. I know how it works now, but I am eager to learn the reason behind it. What would be the problem if we kept the previous nodes? For example, in the picture below we have a new node (red circle) for the 'b' character. Can someone explain it to me? I'd appreciate it.
There's no difference for your test case.
Take another test case, abbcbb. Which node should the string bb belong to? It occurs as a suffix of both abb and abbcbb, so its set of end positions differs from that of abb; without cloning, one node would have to represent both.
So cloning a node is necessary to guarantee that the node corresponding to each substring (more precisely, each class of substrings with the same end positions) is unique.
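For reference, here is a minimal Python sketch of the standard construction including the clone step; one way to check that the states partition the substrings correctly is to count distinct substrings via the length differences along suffix links:

```python
# Minimal suffix automaton; `link` is the suffix link, `length` the
# longest substring recognized by the state, `next` the transitions.
class SamState:
    def __init__(self, length=0, link=-1):
        self.length = length
        self.link = link
        self.next = {}

def build_sam(s):
    sa = [SamState()]          # state 0 is the initial state
    last = 0
    for c in s:
        cur = len(sa)
        sa.append(SamState(sa[last].length + 1))
        p = last
        while p != -1 and c not in sa[p].next:
            sa[p].next[c] = cur
            p = sa[p].link
        if p == -1:
            sa[cur].link = 0
        else:
            q = sa[p].next[c]
            if sa[p].length + 1 == sa[q].length:
                sa[cur].link = q
            else:
                # q also represents longer substrings with different end
                # positions, so clone it: the clone keeps q's transitions
                # and suffix link but gets the shorter length.
                clone = len(sa)
                sa.append(SamState(sa[p].length + 1, sa[q].link))
                sa[clone].next = dict(sa[q].next)
                while p != -1 and sa[p].next.get(c) == q:
                    sa[p].next[c] = clone
                    p = sa[p].link
                sa[q].link = clone
                sa[cur].link = clone
        last = cur
    return sa
```

For abbcbb, walking the transitions for "bb" and for "abb" ends in two different states, which is exactly what the clone makes possible.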
Is there an option in CPLEX that allows the addition of cuts only at the root?
I would expect so, but I can't find the name of the option.
There are several ways:
1. Set the node limit to 1 (or 0?) so that CPLEX only works on the root node. Add your cuts, then relax the node limit and solve again.
2. When you try to add a cut, first query the node count (or something like that) via the query callback, and only add the cut when the node count is 0 (or 1?).
3. Drop all the integrality constraints and turn the model into an LP. Add your cuts, then add the integrality constraints back and solve it.
I am trying to write a compiler/code editor.
To speed up the process I want a red-black tree that returns a node, which I can then use to get the strings under it and its position value, and whose parent node I can use as a place to store a token (such as alphanumeric_word or left_parenthesis).
I am having trouble finding the best way to go about this.
I basically want something that can do the following:
tree.insert("01234567890123456789",0);
node = tree.at(10);
tree.insert("string",5);
node.index(); //should be 10+length("string")
node.value(); //should be '0'
node.tokenPtr.value; //should point to a token with the value of NUMBER
I am looking for the simplest implementation of such a tree that I could modify since these can be frustrating to build and debug from scratch.
The following code is sort of what I am looking for (it has parent nodes), but it lacks an indexing feature for index lookup. This is needed because I want to create a map that uses the node as its key and node.index() as its sorting value, so that I don't have to update the keys in that map.
[[archive.gamedev.net/archive/reference/programming/features/TStorage/page2.html]]
I have tried to look at SGI's rope implementation, but the code is overwhelming and difficult to understand.
This tutorial seems helpful; however, it also doesn't provide a tree with parent links, which I think could be used to find the index of a node:
[[eternallyconfuzzled.com/tuts/datastructures/jsw_tut_rbtree.aspx]]
Update:
I have found an implementation that has a parent node, however it still lacks an index count property:
[[web.mit.edu/~emin/Desktop/ref_to_emin/www.old/source_code/red_black_tree/index.html]]
I have found one solution and another that might work.
You have to use the SGI STL rope's mutable_begin() + index iterator.
There is also this function; however, I am still having trouble analyzing the SGI rope code to see what it does:
mutable_reference_at(index)
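If rolling your own is acceptable, the index bookkeeping itself is small: store a subtree size in each node, descend by sizes for at(), and climb parent pointers for index(). Here is a minimal Python sketch (unbalanced for brevity; a production version would add red-black rebalancing while maintaining the same size fields; the names CharTree, at, and index mirror the pseudocode in the question but are otherwise my own):

```python
class CharNode:
    def __init__(self, ch, parent=None):
        self.ch = ch                  # the character stored here
        self.parent = parent
        self.left = None
        self.right = None
        self.size = 1                 # number of nodes in this subtree

def _size(n):
    return n.size if n is not None else 0

class CharTree:
    def __init__(self):
        self.root = None

    def insert(self, s, pos):
        """Insert the characters of s so that s starts at index pos."""
        for i, ch in enumerate(s):
            self._insert_one(ch, pos + i)

    def _insert_one(self, ch, pos):
        if self.root is None:
            self.root = CharNode(ch)
            return
        n = self.root
        while True:
            n.size += 1                  # the new node lands in this subtree
            if pos <= _size(n.left):     # it goes into the left subtree
                if n.left is None:
                    n.left = CharNode(ch, n)
                    return
                n = n.left
            else:                        # skip the left subtree and this node
                pos -= _size(n.left) + 1
                if n.right is None:
                    n.right = CharNode(ch, n)
                    return
                n = n.right

    def at(self, i):
        """Return the node currently at in-order index i."""
        n = self.root
        while True:
            if i < _size(n.left):
                n = n.left
            elif i == _size(n.left):
                return n
            else:
                i -= _size(n.left) + 1
                n = n.right

def index(node):
    """Recover a node's current in-order index by walking to the root."""
    i = _size(node.left)
    while node.parent is not None:
        if node is node.parent.right:
            i += _size(node.parent.left) + 1
        node = node.parent
    return i
```

This matches the requested behavior: after insert("string", 5), a node previously found with at(10) reports index(node) == 16 without any key updates, because the sizes on the path to the root changed instead.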
If I got this right, then it is possible that there is more than one maximum value in the local alignment matrix. So in order to get all optimal local alignments, instead of only one, I would have to find the locations of all these maximum values in the matrix and trace each of them back individually, right?
Example:
XGTCXXGTCX
 |||
AGTCA

XGTCXXGTCX
      |||
     AGTCA
There is no such thing as ALL optimal alignments; there should only be one optimal alignment. I guess there could be multiple paths for the same alignment, but they would have the same overall score, and it doesn't look like that's the kind of question you are asking.
What the diagram in your post shows is multiple (primer?) hits. In such a case, what I do is run Smith-Waterman once and get the optimal alignment. Then I generate a new alignment where the subject sequence has been trimmed to include only the downstream sequence. The advantage of this approach is that I don't have to modify any S-W code or dig into the internals of third-party code.
So it would look like this:
Alignment 1:

XGTCXXGTCX
 |||
AGTCA

Delete the upstream subject sequence:

XGTCXXGTCX => XGTCX

Alignment 2:

XGTCX
 |||
AGTCA
The only tricky part is you have to keep track of how many bases have been deleted from the alignment so you can correctly adjust the match coordinates.
I know this post is pretty old, but since I found it, other people might also find it while looking for help, and in my opinion the correct answer has not been given yet. So:
Clearly, there can be MULTIPLE optimal local alignments; you've just shown an example of that. Yet there is EXACTLY ONE optimal local alignment SCORE. In the original paper presenting the Smith-Waterman algorithm, Smith and Waterman already indicate how to find the second-best alignment, third-best alignment, and so on.
Here's a reprint of that paper (for your problem, check page 196):
https://pdfs.semanticscholar.org/40c5/441aad96b366996e6af163ca9473a19bb9ad.pdf
So (in contrast to other answers here), the Smith-Waterman algorithm also gives the second-best local alignment and so on.
Just check for the next-best score in your scoring matrix that is not associated with your best local alignment (in your case there will be several entries sharing the same best score), do the usual backtracking from each of them, and you have solved your problem. :)
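To make the tracing concrete, here is a small Python sketch (the scoring values are my own, not from the paper): it fills the usual Smith-Waterman matrix, finds every cell holding the maximum score, and traces one optimal path back from each such cell. Co-optimal paths within a single cell are not enumerated here.

```python
def smith_waterman_all(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score and one traceback per
    matrix cell that attains that score."""
    n, m = len(a), len(b)
    H = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
    best = max(max(row) for row in H)
    alignments = []
    if best == 0:
        return best, alignments
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if H[i][j] != best:
                continue
            # trace one optimal path back from this maximum cell
            x, y, top, bot = i, j, [], []
            while x > 0 and y > 0 and H[x][y] > 0:
                s = match if a[x - 1] == b[y - 1] else mismatch
                if H[x][y] == H[x - 1][y - 1] + s:
                    top.append(a[x - 1]); bot.append(b[y - 1])
                    x -= 1; y -= 1
                elif H[x][y] == H[x - 1][y] + gap:
                    top.append(a[x - 1]); bot.append('-')
                    x -= 1
                else:
                    top.append('-'); bot.append(b[y - 1])
                    y -= 1
            alignments.append((''.join(reversed(top)),
                               ''.join(reversed(bot))))
    return best, alignments
```

On the example from the question, smith_waterman_all("XGTCXXGTCX", "AGTCA") finds the two co-optimal GTC/GTC alignments, one per occurrence of GTC in the subject.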
I was wondering if someone could help me understand this problem. I prepared a small diagram because it is much easier to explain it visually.
Diagram: http://img179.imageshack.us/img179/4315/pon.jpg
Problem I am trying to solve:
1. Constructing the dependency graph
Given the connectivity of the graph and a metric that determines how well one node depends on another, order the dependencies. For instance, I could put in a few rules saying that
node 3 depends on node 4
node 2 depends on node 3
node 3 depends on node 5
But because the final rule is not "valuable" (again based on the same metric), I will not add the rule to my system.
2. Executing the requests in order
Once I have built the dependency graph, execute the list in an order that maximizes the final connectivity. I am not sure if this is really a problem, but I somehow have a feeling that there might exist more than one order, in which case it is required to choose the best one.
First and foremost, I am wondering if I constructed the problem correctly and if I should be aware of any corner cases. Secondly, is there a closely related algorithm that I can look at? Currently, I am thinking of something like Feedback Arc Set or the Secretary Problem but I am a little confused at the moment. Any suggestions?
PS: I am a little confused about the problem myself so please don't flame on me for that. If any clarifications are needed, I will try to update the question.
It looks like you are trying to determine an ordering on requests you send to nodes with dependencies (or a "partial ordering", to use the searchable term) between nodes.
If you google "partial order dependency graph", the results should give you enough information to figure out a good solution.
In general, you want to sort the nodes in such a way that nodes come after their dependencies; AKA topological sort.
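In Python, for instance, the standard library's graphlib module gives you such an order directly (the node names below are just the ones from the question's rules):

```python
from graphlib import TopologicalSorter

# Map each node to the set of nodes it depends on:
# node 3 depends on nodes 4 and 5, node 2 depends on node 3.
graph = {"2": {"3"}, "3": {"4", "5"}}
order = list(TopologicalSorter(graph).static_order())
print(order)  # dependencies come first, e.g. ['4', '5', '3', '2']
```

TopologicalSorter also raises CycleError when the rules contain a cycle, which is a useful sanity check before sending any requests.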
I'm a bit confused by your ordering constraints vs. the graphs you picture: nothing matches up. That said, it sounds like you have soft ordering constraints (A should come before B, but doesn't have to) with costs for violating a constraint. Finding an optimal schedule under such constraints is NP-hard, but I bet you could get a pretty good schedule using a DFS biased toward large-weight edges and then deleting all the back edges.
If you know in advance the dependencies of each node, you can easily build layers.
It's amusing, but I faced the very same problem when organizing... the compilation of the different modules of my application :)
The idea is simple:
def buildLayers(nodes):
    layers = []
    remaining = nodes[:]   # copy the list
    placed = set()         # nodes already assigned to a layer
    while remaining:
        layer = _buildRec(placed, remaining)
        if not layer:
            raise RuntimeError('Cyclic dependency')
        for node in layer:
            remaining.remove(node)
        placed.update(layer)
        layers.append(layer)
    return layers

def _buildRec(placed, nodes):
    """Build the next layer by selecting the nodes whose dependencies
    have all been placed in previous layers."""
    return [n for n in nodes if all(d in placed for d in n.dependencies)]
Then you can pop the layers one at a time, and each time you'll be able to send the request to each of the nodes of this layer in parallel.
If you keep a set of the already selected nodes, and the dependencies are also represented as sets, the membership check is more efficient. Other implementations would use event propagation to avoid all those nested loops...
Note that in the worst case this is O(n³), but I only had some thirty components, and they are not THAT interrelated :p