Approximate tree edit distance - exact edit path - algorithm

I've been looking for an algorithm to efficiently compute an edit path between two trees; the path does not have to correspond to the shortest edit distance, but it should preferably be a relatively short one.
The case is that I have two directory trees, consisting of directories and files, and want to compute a sequence of deletes, inserts and renames that will transform one to the other.
I have tried searching both Stack Overflow and the wider web, but all I find are algorithms for computing the shortest edit distance, and they all scale poorly.
So my question is: is there any more efficient way than, for example, "Zhang and Shasha" when I don't need the optimum distance?
Kind regards

There is Klein's algorithm, which performs slightly better than Zhang and Shasha's, but it remains of very high complexity in both space and time for practical purposes.
There is an algorithm here that is in fact a heuristic, since the authors misuse the term approximation.
It reduces the problem to a series of maximum weighted cliques, for which several approximation algorithms and heuristics exist; even a greedy approach could perform reasonably well here.
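To give an idea of how cheap the greedy route can be, here is a minimal sketch of a generic greedy maximum-weight clique heuristic on an arbitrary weighted graph (my own illustration, not the specific reduction from the paper; the toy graph and weights are made up):

```python
# Hedged sketch: a generic greedy heuristic for maximum-weight clique.
# Not the paper's reduction, just an illustration of the greedy idea.
def greedy_max_weight_clique(vertices, weight, adjacent):
    """vertices: iterable of vertex ids
    weight: dict vertex -> non-negative weight
    adjacent: dict vertex -> set of neighbouring vertices"""
    clique = []
    # Consider vertices in order of decreasing weight.
    for v in sorted(vertices, key=lambda v: weight[v], reverse=True):
        # Keep v only if it is adjacent to everything already in the clique.
        if all(v in adjacent[u] for u in clique):
            clique.append(v)
    return clique, sum(weight[v] for v in clique)

# Toy graph: triangle {1, 2, 3} plus a heavy vertex 4 attached only to 1.
adj = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
w = {1: 2.0, 2: 3.0, 3: 4.0, 4: 5.0}
# Greedy returns ([4, 1], 7.0) and misses the optimal triangle of weight 9.0,
# which is exactly the kind of trade-off a heuristic makes.
print(greedy_max_weight_clique(adj, w, adj))
```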
What is true for graphs is true for trees, so you could also use a graph convolution kernel approach.
If you are looking for an off-the-shelf implementation (of an unspecified algorithm, I would guess Zhang and Shasha or Klein), you can check here.

Related

Weighted bipartite matching

I'm aware there are a lot of similar topics, but most of them left me with some doubts in my case. What I want to do is find a perfect matching (or as close to perfect as possible if no perfect matching exists, of course), and then, among all matchings that match k out of n vertices (where k is the highest possible), choose the one with the highest total weight.
So, simply put, what I want is the following priority:
Match as many vertices as possible.
Because an (unweighted) maximum matching is in most cases not unique, choose among them the one with the biggest sum of edge weights. If there are several with the same weight, it doesn't matter which one is chosen.
I've heard about the Ford-Fulkerson algorithm. Does it work the way I describe, or do I need a different algorithm?
If you're implementing this yourself, you probably want to use the Hungarian algorithm. Faster algorithms exist but aren't as easy to understand or implement.
Ford-Fulkerson is a maximum flow algorithm; you can use it easily to solve unweighted matching. Turning it into a weighted matching algorithm requires an additional trick; with that trick, you wind up with the Hungarian algorithm.
You can also use a min-cost flow algorithm to do weighted bipartite matching, but it might not work quite as well. There's also the network simplex method, but it seems to be mostly of historical interest.
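If a library is acceptable, a minimal sketch of the "cardinality first, then weight" priority using SciPy's Hungarian-style solver (linear_sum_assignment) could look like the following; the weight-boosting trick and the toy data are my own assumptions, not something from the question:

```python
# Hedged sketch: maximize cardinality first, then total weight, via the
# Hungarian method as implemented in scipy.optimize.linear_sum_assignment
# (the maximize flag needs a reasonably recent SciPy).
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_cardinality_max_weight(weights):
    """weights[i][j] > 0 if left vertex i and right vertex j share an edge
    (with that weight), 0 otherwise. Returns the matched (i, j) pairs."""
    w = np.asarray(weights, dtype=float)
    # Boost every real edge by more than the total weight, so matching one
    # extra edge always beats any possible gain in summed edge weight.
    big = w.sum() + 1.0
    boosted = np.where(w > 0, w + big, 0.0)
    rows, cols = linear_sum_assignment(boosted, maximize=True)
    return [(int(i), int(j)) for i, j in zip(rows, cols) if w[i, j] > 0]

# Toy instance with 3 vertices on each side.
print(max_cardinality_max_weight([[5, 0, 0],
                                  [2, 3, 0],
                                  [0, 4, 1]]))  # [(0, 0), (1, 1), (2, 2)]
```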

Is A* really better than Dijkstra in real-world path finding?

I'm developing a path-finding program. Theoretically, A* is said to be better than Dijkstra; in fact, the latter is a special case of the former. However, when testing it in the real world, I'm beginning to doubt whether A* is really better.
I used data of New York City, from 9th DIMACS Implementation Challenge - Shortest Paths, in which each node's latitude and longitude is given.
When applying A*, I need to calculate the spherical distance between two points using the Haversine formula, which involves sin, cos, arcsin and a square root, all of which are very time-consuming.
The results are:
Using Dijkstra: 39.953 ms, expanded 256,540 nodes.
Using A*: 108.475 ms, expanded 255,135 nodes.
Notice that A* expanded 1,405 fewer nodes; however, the time spent computing the heuristic was far more than the time saved.
My understanding is that in a very large real-world graph the benefit of the heuristic is very small and its effect can be ignored, while the time spent computing it dominates.
I think you're more or less missing the point of A*. It is intended to be a very performant algorithm, partly by deliberately doing a bit more work with cheap heuristics, and you're tearing that to bits by burdening it with a heavy, extremely accurate prediction heuristic.
For node selection in A* you should just use an approximation of the distance. Simply using (latdiff^2)+(lngdiff^2) as the approximate distance should make your algorithm much more performant than Dijkstra, and not give much worse results in any real-world scenario. In fact the results should even be exactly the same if you calculate the travelled distance of a selected node properly with the Haversine formula. Just use a cheap estimate for selecting potential next traversals.
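To make the cost difference concrete, here is a small sketch (mine, with made-up coordinates) comparing the Haversine distance with the cheap squared-difference estimate suggested above; note the cheap value is in squared degrees, so to keep A* admissible against metric edge weights it would still need suitable scaling:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres: sin, cos, asin and a sqrt per call."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def cheap_estimate(lat1, lon1, lat2, lon2):
    """(latdiff^2 + lngdiff^2): two subtractions, two multiplications, no
    trigonometry. The unit is squared degrees, so it only ranks candidates
    unless it is scaled into the same unit as the edge weights."""
    return (lat2 - lat1) ** 2 + (lon2 - lon1) ** 2

# Two made-up points roughly in New York City.
print(haversine_m(40.7128, -74.0060, 40.7306, -73.9352))    # roughly 6 km
print(cheap_estimate(40.7128, -74.0060, 40.7306, -73.9352))
```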
A* can be reduced to Dijkstra by setting some trivial parameters. There are three possible ways in which it does not improve on Dijkstra:
The heuristic used is incorrect: it is not a so-called admissible heuristic. A* should use a heuristic which does not overestimate the distance to the goal as part of its cost function.
The heuristic is too expensive to calculate.
The real-world graph structure is special.
In the case of the latter you should try to build on existing research, e.g. "Highway Dimension, Shortest Paths, and Provably Efficient Algorithms" by Abraham et al.
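To make the "A* reduces to Dijkstra" remark above concrete, here is a minimal sketch (my own, with a toy graph) of an A* routine whose default heuristic of zero turns it into plain Dijkstra; the graph representation is an assumption made for the example:

```python
import heapq

def a_star(graph, source, target, h=lambda v: 0.0):
    """graph: dict node -> list of (neighbour, edge_weight).
    h: heuristic giving a lower bound on the remaining distance to target.
    With the default h == 0 this is exactly Dijkstra's algorithm."""
    dist = {source: 0.0}
    pq = [(h(source), source)]
    done = set()
    while pq:
        _, u = heapq.heappop(pq)
        if u in done:
            continue
        if u == target:
            return dist[u]
        done.add(u)
        for v, w in graph.get(u, ()):
            nd = dist[u] + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float('inf')

# Toy graph; with the default (zero) heuristic this call runs plain Dijkstra.
g = {'a': [('b', 1.0), ('c', 4.0)], 'b': [('c', 1.0)], 'c': []}
print(a_star(g, 'a', 'c'))  # 2.0
```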
Like everything else in the universe, there's a trade-off. You can use Dijkstra's algorithm to precisely calculate the heuristic, but that would defeat the purpose, wouldn't it?
A* is a great algorithm in that it makes you lean towards the goal, having a general idea of which direction to expand first. That said, you should keep the heuristic as simple as possible, because all you need is a general direction.
In fact, more precise geometric calculations that are not based on the actual data do not necessarily give you a better direction. As long as they are not based on the data, all those heuristics give you just a direction, and they are all equally (in)correct.
In general A* is more performant than Dijkstra's, but it really depends on the heuristic function you use in A*. You'll want an h(n) that's optimistic so that it still finds the lowest-cost path: h(n) should be less than or equal to the true cost. If h(n) overestimates the cost, you'll end up in a situation like the one you've described.
Could you post your heuristic function?
Also bear in mind that the performance of these algorithms highly depends on the nature of the graph, which is closely related to the accuracy of the heuristic.
If you compare Dijkstra vs A* when navigating out of a labyrinth where each passage corresponds to a single graph edge, there is probably very little difference in performance. On the other hand, if the graph has many edges between "far-away" nodes, the difference can be quite dramatic. Think of a robot (or an AI-controlled computer game character) navigating in almost open terrain with a few obstacles.
My point is that, even though the New York dataset you used is definitely a good example of a "real-world" graph, it is not representative of all real-world path-finding problems.

Why do these maze generation algorithms produce mazes with different properties?

I was browsing the Wikipedia entry on maze generation algorithms and found that the article strongly insinuated that different maze generation algorithms (randomized depth-first search, randomized Kruskal's, etc.) produce mazes with different characteristics. This seems to suggest that the algorithms produce random mazes with different probability distributions over the set of all single-solution mazes (spanning trees on a rectangular grid).
My questions are:
Is this correct? That is, am I reading this article correctly, and is the article correct?
If so, why? I don't see an intuitive reason why the different algorithms would produce different distributions.
Uh, well, I think it's pretty obvious that different algorithms generate different mazes. Let's just talk about spanning trees of a grid. Suppose you have a grid G and two algorithms for generating a spanning tree of the grid:
Algorithm A:
Pick any edge of the grid: with 99% probability choose a horizontal one, otherwise a vertical one
Add the edge to the maze, unless adding it would create a cycle
Stop when every vertex is connected to every other vertex (spanning tree complete)
Algorithm B:
As algorithm A, but set the probability to 1% instead of 99%
"Obviously" algorithm A produces mazes with lots of horizontal passages and algorithm B mazes with lots of vertical passages. That is, there is a statistical correlation between the number of horizontal passages in a maze and the maze being produced by algorithm A.
Of course the differences between the Wikipedia algorithms are more intricate but the principle is the same. The algorithms sample the space of possible mazes for a given grid in a non-uniform, structured way.
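As a toy demonstration of that bias (my own sketch, with union-find standing in for the cycle check and assuming a grid of at least 2x2 cells), "Algorithm A" could be implemented roughly like this; running it shows that almost all passages come out horizontal:

```python
import random

def biased_spanning_tree(width, height, p_horizontal=0.99):
    """Sketch of 'Algorithm A': repeatedly pick a random grid edge, horizontal
    with probability p_horizontal, and keep it unless it would close a cycle
    (checked with union-find). Stops once the tree spans every cell."""
    horizontal = [((x, y), (x + 1, y)) for y in range(height) for x in range(width - 1)]
    vertical = [((x, y), (x, y + 1)) for y in range(height - 1) for x in range(width)]
    parent = {(x, y): (x, y) for x in range(width) for y in range(height)}

    def find(v):                      # union-find root with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = []
    while len(tree) < width * height - 1:
        edges = horizontal if random.random() < p_horizontal else vertical
        u, v = random.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:                  # no cycle: keep the passage
            parent[ru] = rv
            tree.append((u, v))
    return tree

maze = biased_spanning_tree(10, 10, p_horizontal=0.99)
horiz = sum(1 for a, b in maze if a[1] == b[1])
print(horiz, "horizontal passages out of", len(maze))   # typically ~90 of 99
```

Setting p_horizontal to 0.01 gives "Algorithm B" and flips the statistic.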
LOL I remember a scientific conference where a researcher presented her results about her algorithm that did something "for graphs". The results were statistical and presented for "random graphs". Someone asked from the audience "which distribution of random graphs did you draw the graphs from?" The answer: "uh... they were produced by our graph generation program". Duh!
Interesting question. Here are my random 2c.
Comparing Prim's to, say, DFS, the latter seems to have a proclivity for producing deeper trees, simply because the first 'runs' have more space in which to create deep trees with fewer branches. Prim's algorithm, on the other hand, appears to create trees with more branching, because any open branch can be selected at each iteration.
One way to ask this would be to look at the probability that each algorithm produces a tree of depth > N; I have a hunch they would be different. A more formal approach to proving this might be to assign weights to parts of the tree and show that some parts are more likely to be taken, or to characterize the space in some other way, but I'll be hand-wavy and guess it's correct :). I'm interested in what led you to think it wouldn't be, because my intuition was the opposite. And no, the Wiki article doesn't give a convincing argument.
EDIT
One simple way to see this is to consider an initial tree with two children and a total of k nodes
e.g.,
*---* ... *
\--* ... *
Choose a random node as the start and another as the end. DFS will produce one of two mazes: either the entire tree, or the part of it containing the direct path from start to end. Prim's algorithm will produce the 'maze' with the direct path from start to end plus secondary paths of lengths 1 ... k.
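A quick way to test the depth hunch empirically (again a sketch of my own, comparing randomized DFS with randomized Prim on an n x n grid and measuring tree height from the start cell):

```python
import random
from collections import deque

def grid_neighbours(x, y, n):
    return [(a, b) for a, b in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
            if 0 <= a < n and 0 <= b < n]

def randomized_dfs_tree(n):
    """Spanning tree of an n x n grid by randomized depth-first search."""
    start = (0, 0)
    seen, stack, edges = {start}, [start], []
    while stack:
        v = stack[-1]
        nbrs = [u for u in grid_neighbours(*v, n) if u not in seen]
        if nbrs:
            u = random.choice(nbrs)
            seen.add(u)
            edges.append((v, u))
            stack.append(u)
        else:
            stack.pop()
    return edges

def randomized_prim_tree(n):
    """Spanning tree by randomized Prim: grow from a random frontier edge."""
    start = (0, 0)
    seen = {start}
    frontier = [(start, u) for u in grid_neighbours(*start, n)]
    edges = []
    while frontier:
        v, u = frontier.pop(random.randrange(len(frontier)))
        if u in seen:
            continue
        seen.add(u)
        edges.append((v, u))
        frontier.extend((u, w) for w in grid_neighbours(*u, n) if w not in seen)
    return edges

def depth(edges, root=(0, 0)):
    """Height of the (parent, child)-edge tree measured from the root, via BFS."""
    children = {}
    for v, u in edges:
        children.setdefault(v, []).append(u)
    best, queue = 0, deque([(root, 0)])
    while queue:
        v, d = queue.popleft()
        best = max(best, d)
        queue.extend((u, d + 1) for u in children.get(v, []))
    return best

n = 20
print("DFS depth: ", sum(depth(randomized_dfs_tree(n)) for _ in range(20)) / 20)
print("Prim depth:", sum(depth(randomized_prim_tree(n)) for _ in range(20)) / 20)
```

On a 20 x 20 grid the DFS trees typically come out several times deeper than the Prim trees, which matches the intuition above.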
It is not statistical until you request that each algorithm produce every solution it can.
What you are perceiving as statistical bias is only a bias towards the preferred, first solution.
That bias may not be algorithmic (set-theory-wise) but implementation dependent (like the bias in the choice of the pivot in quicksort).
Yes, it is correct. You can produce different mazes by starting the process in different ways. Some algorithms start with a fully closed grid and remove walls to generate a path through the maze, while others start with an empty grid and add walls, leaving behind a path. This alone can produce different results.

Search for a K-shortest-paths algorithm

I have found many algorithms and approaches that talk about finding the shortest path or the best/optimal solution to a problem. However, what I want is an algorithm that finds the first K shortest paths from one point to another. The problem I'm facing is more like searching through a tree: at each step you take there are multiple options, each with its own weight. What kinds of algorithms are used for this kind of problem?
There is a 2006 paper by Jose Santos comparing three different K-shortest-path finding algorithms.
Yen's algorithm implementation:
http://code.google.com/p/k-shortest-paths/
Easier algorithm & discussion:
Suggestions for KSPA on undirected graph
EDIT: apparently I clicked on a link, because I thought I was answering a new question; ignore this if, as is very likely, this question isn't important to you anymore.
Given the restricted version of the problem you're dealing with, this becomes a lot simpler to implement. The most important thing to notice is that in a tree, the shortest path between two nodes is the only path between them. So what you do is solve all-pairs shortest paths, which takes O(n²) time in trees by doing n BFS traversals, and then take the k minimal values. This can probably be optimized in some way, but the naive approach is to sort the O(n²) distances in O(n² log n) time and take the k smallest; with some bookkeeping you can keep track of which distance corresponds to which path without any extra time complexity. This gives you better complexity than running a KSPA algorithm for all O(n²) possible s-t pairs.
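A minimal sketch of that approach (assuming unit edge weights for brevity; with weighted edges you would simply accumulate the weights along the unique tree path, and the toy tree here is made up):

```python
from collections import deque
from heapq import nsmallest

def k_shortest_tree_paths(adj, k):
    """adj: dict node -> list of neighbours of an unweighted tree.
    Runs one BFS per node (O(n^2) total) and returns the k smallest
    positive distances together with their endpoint pairs."""
    dists = []
    for s in adj:
        seen = {s: 0}
        queue = deque([s])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in seen:
                    seen[u] = seen[v] + 1
                    queue.append(u)
        # Keep each unordered pair only once.
        dists.extend((d, s, t) for t, d in seen.items() if s < t)
    return nsmallest(k, dists)

# Toy tree: a path 1-2-3-4 with a leaf 5 hanging off node 2.
tree = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}
print(k_shortest_tree_paths(tree, 3))  # [(1, 1, 2), (1, 2, 3), (1, 2, 5)]
```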
If what you actually meant is fixing a source and getting the k nodes with the smallest distance from that source, one BFS will do. If you meant fixing both the source and the target, one BFS is enough as well.
I don't see how you can use the fact that all edges going from a node to the nodes in the level below have the same weight without knowing more about the structure of the tree.

Viterbi algorithm in linear time

I have a problem where, given a hidden Markov model and its set of states S, I need to find an algorithm that returns the most probable path through the model for a given sequence X in O(|S|) time.
I was thinking of building a graph with all the different states at the different positions of X and running a shortest-path algorithm on this graph. However, I will have n|S|^2 edges (where n is the length of X) and n|S| vertices.
The best algorithm I have found is the shortest-path algorithm for acyclic graphs, which runs in O(|E|+|V|) time; in my case that is O(|S|^2) per position of X.
Is there an algorithm I can develop that runs in O(|S|) time? All I need is the general idea.
Thanks
I think that if you want to retrieve the exact most likely sequence, you cannot do it in linear time on all instances. However, if your symbol space is discrete, the average-case time complexity may be reduced. Take a look at Ukkonen's optimization for computing edit distances and its generalizations, and also at compression-based techniques; these are also based on Ukkonen's work.
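For reference, a minimal sketch of the standard Viterbi dynamic program the question describes, i.e. the O(n·|S|²) trellis formulation rather than the linear-time variant being asked for; the two-state toy model is made up:

```python
import math

def viterbi(obs, states, log_start, log_trans, log_emit):
    """Most probable state path for an observation sequence.
    log_start[s], log_trans[s][t], log_emit[s][o] are log-probabilities.
    Time O(n * |S|^2) for n observations: the trellis has n*|S| vertices
    and n*|S|^2 edges, exactly as the question sets it up."""
    # dp[s] = best log-probability of a path ending in state s so far.
    dp = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    back = []
    for o in obs[1:]:
        prev = dp
        dp, ptr = {}, {}
        for t in states:
            best_s = max(states, key=lambda s: prev[s] + log_trans[s][t])
            dp[t] = prev[best_s] + log_trans[best_s][t] + log_emit[t][o]
            ptr[t] = best_s
        back.append(ptr)
    # Trace the best final state back through the stored pointers.
    last = max(states, key=lambda s: dp[s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path)), dp[last]

# Tiny made-up two-state model.
lg = math.log
states = ['Rain', 'Sun']
start = {'Rain': lg(0.6), 'Sun': lg(0.4)}
trans = {'Rain': {'Rain': lg(0.7), 'Sun': lg(0.3)},
         'Sun': {'Rain': lg(0.4), 'Sun': lg(0.6)}}
emit = {'Rain': {'walk': lg(0.1), 'umbrella': lg(0.9)},
        'Sun': {'walk': lg(0.8), 'umbrella': lg(0.2)}}
print(viterbi(['umbrella', 'walk', 'umbrella'], states, start, trans, emit))
```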

Resources