Phylogenetic tree comparison - algorithm

I developed a new algorithm for phylogenetic tree comparison (a phylogenetic tree is simply a rooted binary tree). As input we have two trees, and we want to calculate their similarity percentage. One example of this type of algorithm is here.
But most of these algorithms (as far as I know, all of them) do not offer a good way to check their own accuracy. For example, if you look at the following figure, you can see there is more similarity between T1 and T3 than between T1 and T2.
I need a method for checking the accuracy of the similarity measure, to be sure that my algorithm is better than the previous ones (in most cases this is not difficult by eye, but I don't know how to extend that to my application).
The validity measure should be independent of the algorithm.

Take a look at "Graph similarity scoring and matching" and "A Method for Comparing Two Hierarchical Clusterings". Maybe they (or the references they cite) will be helpful.
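One common, algorithm-independent baseline for rooted trees is to compare their clade sets (the sets of leaves below each internal node), in the spirit of the Robinson-Foulds distance. A minimal sketch, where the tree encoding (leaves as strings, internal nodes as tuples of subtrees) is my own assumption:

```python
def clades(tree):
    """Return (set of clades, leaf set) for a rooted tree.
    A clade is the frozenset of leaf names below an internal node."""
    if isinstance(tree, str):            # leaf
        return set(), frozenset([tree])
    found, leaves = set(), frozenset()
    for child in tree:                   # internal node: a tuple of subtrees
        f, l = clades(child)
        found |= f
        leaves |= l
    found.add(leaves)
    return found, leaves

def similarity_percentage(t1, t2):
    """Jaccard overlap of the two clade sets, as a percentage."""
    c1, c2 = clades(t1)[0], clades(t2)[0]
    return 100.0 * len(c1 & c2) / len(c1 | c2)

t1 = (("a", "b"), ("c", "d"))
t2 = (("a", "c"), ("b", "d"))
print(similarity_percentage(t1, t2))     # 20.0: same leaves, different topology
```

A reference measure like this could serve as the independent yardstick against which a new algorithm's rankings (e.g. "T1 is closer to T3 than to T2") are checked.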

Related

Are there standard metrics to describe trees?

If I have a set of numbers, there are statistics I can take on them: average, std. deviation, min, max, etc. I can imagine (and know of) several similar measures I could use on trees.
I could calculate breadth, depth, average number of children per (non-leaf) node, etc. But are there "standard" metrics used to give a quick description of a tree?
If you wanted to give a brief characterization of a tree, what metrics would you use to describe whether it is bushy, deep, sparse, etc.? This sounds like something that should have been studied mathematically, but my searches haven't been fruitful. (The existence of a structure called metric trees hasn't helped.)
(The end goal is to be able to take a very sparse representation of the user's data without actually capturing any of the underlying data. If we know that their file system, say, is a tree with metrics X, Y, Z (plus some metrics about average file size and other items in the tree), then we could build a similar tree in a test bed and run tests on it.)
Thanks!
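For what it's worth, the ad-hoc statistics mentioned above are cheap to compute in a single traversal. A minimal sketch, assuming each node is a dict with a "children" list (the encoding is illustrative):

```python
def tree_stats(tree):
    """Summary statistics for an n-ary tree, in one iterative traversal."""
    leaf_depths, fanouts, nodes = [], [], 0
    stack = [(tree, 0)]
    while stack:
        node, depth = stack.pop()
        nodes += 1
        kids = node["children"]
        if kids:
            fanouts.append(len(kids))
            stack.extend((c, depth + 1) for c in kids)
        else:
            leaf_depths.append(depth)    # depth of each leaf
    return {
        "nodes": nodes,
        "leaves": len(leaf_depths),
        "max_depth": max(leaf_depths),
        "avg_leaf_depth": sum(leaf_depths) / len(leaf_depths),
        "avg_fanout": sum(fanouts) / len(fanouts) if fanouts else 0.0,
    }
```

Intuitively, a "bushy" tree has a high average fan-out and small depth relative to its node count, while a "deep" tree is the opposite; ratios of these numbers make reasonable dimensionless descriptors.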

Indexing strategy for finding similar strings

I am working on devising an indexing strategy for finding similar hashes. The hashes are generated for images, i.e.:
String A = "00007c3fff1f3b06738f390079c627c3ffe3fb11f0007c00fff07ff03f003000" //Image 1
String B = "6000fc3efb1f1b06638f1b0071c667c7fff3e738d0007c00fff03ff03f803000" //Image 2
These two hashes are similar (based on Hamming distance and Levenshtein distance), and hence the images are similar. I have more than 190 million such hashes and have to select a suitable indexing data structure so that the worst-case complexity of finding a similar hash is not O(n). A plain hash or ordered data structure won't work, because it only supports exact matches or <, =, > comparisons (or will it?). I can compute the Hamming distance or another distance to measure similarity, but in the worst case I would end up computing it 190 million times.
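(For reference, the bit-level Hamming distance between two such hex hashes can be computed by XOR-ing their integer values; the helper name below is mine:)

```python
def hamming_distance_hex(a: str, b: str) -> int:
    """Hamming distance in bits between two equal-length hex strings."""
    return bin(int(a, 16) ^ int(b, 16)).count("1")

a = "00007c3fff1f3b06738f390079c627c3ffe3fb11f0007c00fff07ff03f003000"
b = "6000fc3efb1f1b06638f1b0071c667c7fff3e738d0007c00fff03ff03f803000"
print(hamming_distance_hex(a, b))   # small count relative to 256 bits -> similar
```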
This is my strategy now:
Currently I am working on a B-tree where I rank all the keys in a node by the number of consecutive identical characters and traverse the highest-ranked key; if the child's keys rank lower than some other key in the parent node, I go back and traverse that key in the parent instead. If all the parent's keys have the same rank, I do a normal B-tree traversal (givenKey < nodeKey --> go to the child node of nodeKey, using ASCII comparison), and that is where my issue is.
This would lead to a lot of false negatives in the search: in the worst case I traverse only one part of the tree, while potentially similar keys can be found along other paths. Otherwise I have to search the entire tree, which is again O(n), in which case I might as well not have a tree at all.
I feel there has to be a better way; right now I am stuck, and it would be great to hear any input on breaking down the problem. Please share your thoughts.
P.S.: I cannot use any external database.
First, this is a very difficult problem. Don't expect neat, tidy answers.
One approximate data structure I have seen is Spatial Approximation Sample Hierarchy (SASH).
A SASH (Spatial Approximation Sample Hierarchy) is a general-purpose data structure for efficiently computing approximate answers for similarity queries. Similarity queries naturally arise in a number of important computing contexts, in particular content-based retrieval on multimedia databases, and nearest-neighbor methods for clustering and classification.
SASH uses only a distance function to build a data structure, so the distance function (and in your case, the image hash function as well) needs to be "good". The basic intuition is roughly that if A ~ B (image A is close to image B) and B ~ C, then usually A ~ C. The data structure creates links between items that are relatively close, and you prune your search by only looking for things that are closer to your query. Whether this strategy actually works depends on the nature of your data and the distance function.
It has been 10 years or so since I looked at SASH, so there are probably newer developments as well. Michael Houle's page seems to indicate he has newer research on something called Rank Cover Trees, which seem similar in purpose to SASH. This should at least get you started on research in the area; read some papers and follow the reference trail.
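If SASH proves hard to get hold of, a simpler structure built on the same distance-based-pruning idea is the BK-tree, which uses the triangle inequality of the metric (Hamming distance satisfies it) to discard whole subtrees during a radius search. A rough sketch, reusing the hamming_distance_hex helper from the question above; treat it as an illustration, not a tuned solution for 190 million entries:

```python
class BKTree:
    """BK-tree over an integer-valued metric. Children hang off their
    parent keyed by their distance to the parent's item."""

    def __init__(self, distance):
        self.distance = distance
        self.root = None                      # (item, {dist: child_node})

    def add(self, item):
        if self.root is None:
            self.root = (item, {})
            return
        node = self.root
        while True:
            d = self.distance(item, node[0])
            if d in node[1]:
                node = node[1][d]             # descend along the matching edge
            else:
                node[1][d] = (item, {})       # attach as a new child
                return

    def search(self, query, radius):
        """All stored items within `radius` of `query`."""
        results, stack = [], ([self.root] if self.root else [])
        while stack:
            item, children = stack.pop()
            d = self.distance(query, item)
            if d <= radius:
                results.append(item)
            # Triangle inequality: a child at edge distance e can only
            # contain matches if |e - d| <= radius.
            for e, child in children.items():
                if d - radius <= e <= d + radius:
                    stack.append(child)
        return results

# tree = BKTree(hamming_distance_hex)
# tree.add(...)                                 # build once, offline
# matches = tree.search(query_hash, radius=24)  # radius in bits, tuned empirically
```

How much of the tree gets pruned depends heavily on the distance distribution of the data; for large radii the search degrades toward a full scan, so it is worth measuring on a sample of the real hashes first.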

Decision Tree clarification

I just want to ask/clarify: are decision trees essentially binary trees where each node is a boolean, and the process continues down until a desired result is reached?
Not necessarily. Some nodes may share children, which is not the case in binary trees. However, the essence of the decision tree is what you mentioned.
It's a tree where, based on the probability of an outcome, you move down the graph until you hit an outcome.
See Wikipedia's page on decision trees for more info.
As mentioned by Ares, not all decision trees are binary (they can be "n-ary"), although most implementations I have seen are binary trees.
For instance, if you have a categorical color variable that can take three values (red, blue, or green), you might want to split three ways directly at a node instead of splitting in two and then in two again (or more); a small sketch of such a split appears at the end of this answer.
The choice between binary and "n-ary" usually depends on your data. I suspect that most people use binary trees anyway because they are relatively easier to implement and more flexible.
Then, as you said, the tree is developed until the desired outcome is reached. Decision trees suffer from major drawbacks such as overfitting, and there exist many ways to tackle this issue (pruning, boosting, etc.), but that is beyond the scope of this question/answer.
I recommend having a look at this great visualization that explains decision trees well: http://www.r2d3.us/visual-intro-to-machine-learning-part-1/
I'll be happy to give more details about decision trees.
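To make the n-ary point concrete, here is a toy sketch of a decision node that splits three ways on the color variable; the class and the data are made up purely for illustration:

```python
class DecisionNode:
    """A node that branches on one categorical attribute; leaves are labels."""

    def __init__(self, attribute, branches, default=None):
        self.attribute = attribute   # e.g. "color"
        self.branches = branches     # attribute value -> child node or leaf label
        self.default = default       # fallback for unseen values

    def predict(self, sample):
        child = self.branches.get(sample[self.attribute], self.default)
        if isinstance(child, DecisionNode):
            return child.predict(sample)
        return child                 # reached a leaf label

# One node splits three ways on "color" instead of two chained binary splits.
root = DecisionNode("color", {
    "red": "stop",
    "green": "go",
    "blue": DecisionNode("brightness", {"dark": "wait", "light": "go"}),
}, default="unknown")

print(root.predict({"color": "green"}))                        # -> go
print(root.predict({"color": "blue", "brightness": "dark"}))   # -> wait
```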

Similarities Between Trees

I am working on a problem of clustering the results of keyword search on a graph. The results are in the form of trees, and I need to cluster those trees into groups based on their similarities. Every node of a tree has two keys: one is the table name in the SQL database (semantic form), and the second is the actual values of a record in that table (label).
I have used the Zhang and Shasha, Klein, Demaine, and RTED algorithms to find the tree edit distance between the trees based on these two keys. All of these algorithms use the number of deletion/insertion/relabel operations needed to make the trees look the same.
I want some more metrics for checking the similarity between two trees, e.g. number of nodes, average fan-out, and more, so that I can take a weighted average of these metrics to arrive at a very good similarity measure that takes into account both the semantic form of the tree (structure) and the information contained in the tree (labels at the nodes).
Can you please suggest an approach, or some literature (e.g. a good paper) that could help?
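For concreteness, the kind of weighted combination the question describes might look like the sketch below; the weights, the normalizer, and the dict-based tree encoding are all illustrative placeholders, not a recommendation:

```python
def node_count(tree):
    """Number of nodes in a tree encoded as {"label": ..., "children": [...]}."""
    return 1 + sum(node_count(c) for c in tree["children"])

def combined_similarity(t1, t2, ted, w_ted=0.7, w_size=0.3):
    """Weighted average of a normalized tree-edit-distance score and a
    node-count ratio, both in [0, 1]. `ted` is the edit distance already
    computed by e.g. RTED; deleting all of one tree and inserting all of
    the other bounds it by the total node count, used here as normalizer."""
    n1, n2 = node_count(t1), node_count(t2)
    ted_score = 1.0 - ted / (n1 + n2)
    size_score = min(n1, n2) / max(n1, n2)
    return w_ted * ted_score + w_size * size_score
```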
Even if you had the (pseudo-)distances between each pair of possible trees, this is actually not what you're after. You actually want to do unsupervised learning (clustering) in which you combine structure learning with parameter learning. The data structures you want to perform inference on are trees. By postulating "some metric space" for your clustering method, you introduce something that is not really necessary, and finding the proper distance measure is a very difficult problem. I'll point in a few different directions in the following paragraphs and hope they help you on your way.
The following is not the only way to represent this problem, but you can see it as Bayesian inference over all possible trees with all possible values at the tree nodes. You probably have some prior knowledge about which kinds of trees are more likely than others and/or which kinds of values are more likely than others. The Bayesian approach allows you to define priors for both.
One article you might like to read is "Learning with Mixtures of Trees" by Meila and Jordan, 2000 (pdf). It explains that it is possible to use a decomposable prior: the tree structure has a different prior from the values/parameters (which of course assumes some independence).
I know you were hinting at heuristics such as the average fan-out, etc., but you might find it worthwhile to check out these newer applications of Bayesian inference. Note, for example, that within nonparametric Bayesian methods it is even feasible to reason about infinite trees, as done e.g. by Hutter, 2004 (pdf)!

Performance of an A* search implemented in Clojure

I have implemented an A* search algorithm for finding the shortest path between two states.
The algorithm uses one hash map for storing the best known distances for visited states, and another hash map for storing the child-parent relationships needed to reconstruct the shortest path.
Here is the code. The implementation of the algorithm is generic (states only need to be "hashable" and "comparable"), but in this particular case states are pairs (vectors) of ints [x y] representing one cell in a given heightmap (the cost of jumping to a neighboring cell depends on the difference in heights).
The question is whether it's possible to improve performance, and how? Maybe by using some features from 1.2 or future versions, by changing the logic of the algorithm implementation (e.g. using a different way to store the path), or by changing the state representation in this particular case?
A Java implementation runs in an instant for this map, while the Clojure implementation takes about 40 seconds. Of course, there are some natural and obvious reasons for the difference: dynamic typing, persistent data structures, unnecessary (un)boxing of primitive types...
Using transients didn't make much difference.
Update: using priority-map instead of sorted-set
I first used a sorted-set for storing the open nodes (the search frontier); switching to priority-map improved performance: it now takes 15-20 seconds for this map (down from 40 seconds).
This blog post was very helpful, and "my" new implementation is pretty much the same.
The new a*-search can be found here.
I don't know Clojure, but I can give you some general advice about improving the performance of Vanilla A*.
Consider implementing IDA*, which is a variant of A* that uses less memory, if it's suitable for your domain.
Try a different heuristic. A good heuristic can have a significant impact on the number of node expansions required.
Use a cache, often called a "transposition table" in search algorithms. Since search graphs are usually directed acyclic graphs rather than true trees, you can end up searching the same state more than once; a cache that remembers previously searched nodes reduces node expansions.
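Here is a minimal, generic A* skeleton in Python (not the asker's Clojure code) showing where such a cache fits: the best map records the cheapest known cost per state, so states are never re-expanded along a worse path.

```python
import heapq
from itertools import count

def a_star(start, goal, neighbors, heuristic):
    """Generic A*. `neighbors(s)` yields (next_state, step_cost) pairs;
    `heuristic(s)` estimates the remaining cost to the goal."""
    tie = count()                     # tie-breaker so states need not be comparable
    open_heap = [(heuristic(start), 0, next(tie), start)]
    best = {start: 0}                 # cheapest known g-cost: the "transposition table"
    parent = {start: None}            # child -> parent, for path reconstruction
    while open_heap:
        f, g, _, state = heapq.heappop(open_heap)
        if g > best[state]:
            continue                  # stale heap entry: a cheaper path was found
        if state == goal:
            path = []
            while state is not None:
                path.append(state)
                state = parent[state]
            return list(reversed(path))
        for nxt, cost in neighbors(state):
            ng = g + cost
            if ng < best.get(nxt, float("inf")):
                best[nxt] = ng
                parent[nxt] = state
                heapq.heappush(open_heap, (ng + heuristic(nxt), ng, next(tie), nxt))
    return None                       # goal unreachable
```

The same shape carries over to Clojure with clojure.data.priority-map as the open set and plain maps for best and parent, which matches the update in the question.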
Dr. Jonathan Schaeffer has some slides on this subject:
http://webdocs.cs.ualberta.ca/~jonathan/Courses/657/Notes/10.Single-agentSearch.pdf
http://webdocs.cs.ualberta.ca/~jonathan/Courses/657/Notes/11.Evaluations.pdf
