Finding the fastest path at a cost less than or equal to a specified budget - algorithm

Here's a visualisation of my problem.
I've been trying to use Dijkstra's algorithm on it; however, it hasn't worked.

The complication, as I see it, is that Dijkstra's algorithm throws away information that you need to keep around: if you are trying to get from A to E in
  B
 / \
A   D - E
 \ /
  C
And ABD is shorter than ACD, Dijkstra's will forget that ACD was ever a possibility (it uses ACD as the canonical route from A to D). But if ABD has a higher cost than ACD, and ABDE is above the quota while ACDE is below, the now eliminated ACD was correct. The problem is that Dijkstra's algorithm assumes that if one path is at least as long as another, it is weakly dominated: there is no reason to prefer it. And in one dimension of comparison, paths are weakly ordered: given any two paths, one weakly dominates the other.
But here we have two dimensions of comparison, and so that ordering does not hold: one path can be shorter, the other cheaper. Since we can only discard dominated paths, we must keep all paths that do not already exceed the budget and are not dominated. I have put a bit of work into implementing this approach; it looks doable, but I cannot find an argument for a worst-case bound below exponential complexity (although typical performance should be much better, since in a sane graph most paths are dominated).
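To make the labelling idea concrete, here is a minimal sketch (my own illustration; the graph format, function name and budget handling are assumptions, not taken from the question). It expands (length, cost) labels in order of length and discards any label that exceeds the budget or is dominated at its node:

# Sketch of the "keep all non-dominated paths" idea. Each label at a node is a
# (length, cost) pair; a label is dropped if another label at the same node is
# both no longer and no more expensive, or if its cost exceeds the budget.
import heapq

def cheapest_fast_path(graph, source, target, budget):
    """graph: dict node -> list of (neighbor, length, cost) edges."""
    frontier = [(0, 0, source, [source])]      # expanded in order of length
    labels = {source: [(0, 0)]}                # non-dominated labels per node

    def dominated(node, length, cost):
        return any(l <= length and c <= cost for (l, c) in labels.get(node, []))

    while frontier:
        length, cost, node, path = heapq.heappop(frontier)
        if node == target:
            return length, cost, path          # first target label = shortest within budget
        for neighbor, edge_len, edge_cost in graph.get(node, []):
            new_len, new_cost = length + edge_len, cost + edge_cost
            if new_cost > budget or dominated(neighbor, new_len, new_cost):
                continue
            labels.setdefault(neighbor, []).append((new_len, new_cost))
            heapq.heappush(frontier, (new_len, new_cost, neighbor, path + [neighbor]))
    return None

As noted above, the number of labels can still grow exponentially in the worst case, even though most labels are dominated on typical graphs.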
You can also, as Billiska notes, use a k-shortest-paths algorithm and then proceed through its results until you find one below the budget. That takes time O(m + Kn log(m/n)); but unless someone sees an upper bound on K such that K is guaranteed to include a path under the budget (if one exists), we need to set K to the total number of paths, again yielding exponential complexity (although, again, a strategy of incrementally increasing K would likely yield a reasonable average runtime, at least if length and cost are reasonably correlated).
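A sketch of that incremental strategy, assuming the graph is held in a NetworkX DiGraph whose edges carry 'length' and 'cost' attributes (the attribute names are illustrative, not from the question); shortest_simple_paths yields simple paths in order of increasing length, so the first path that fits the budget is the fastest feasible one:

# Try the 1st, 2nd, 3rd, ... shortest path until one fits the cost budget.
import networkx as nx

def shortest_path_within_budget(G, source, target, budget):
    for path in nx.shortest_simple_paths(G, source, target, weight="length"):
        cost = sum(G[u][v]["cost"] for u, v in zip(path, path[1:]))
        if cost <= budget:
            return path        # shortest path that respects the budget
    return None                # every simple path exceeds the budget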
EDIT:
Complicating (perhaps fatally) the implementation of my proposed modification is that Dijkstra's algorithm relies on an ordering of the accessibility of nodes, such that we know that if we take the unexplored node to which we have the shortest path, we will never find a better route to it (since all other routes are already known to be longer). If that shortest route is also expensive, that need not hold; even after exploring a node, we must be prepared to update paths out of it on the basis of longer but cheaper routes into it. I suspect that this will prevent it from reaching polynomial time in the worst case.

Basically you need to find the first shortest-path, check if it works, then find the second shortest-path, check if it works, and so on...
Dijkstra's algorithm isn't designed for such a task.
A quick Google search on this reformulation of the problem leads to a Stack Overflow question on finding the k-th shortest paths.
I haven't read into it yet, so don't ask me.
I hope this helps.

I think you can do it with Dijkstra, but you have to change the way you calculate the tentative distance in each step. Instead of taking only the distance into account, also consider the cost: the new tentative distance is a 2-d value (dist, cost), and when choosing the minimal distance you take the one with minimal dist AND cost <= 6, that's it.
I hope this is correct.

Related

Is Dijkstra's algorithm deterministic?

I think that Dijkstra's algorithm is determined, in that if you choose the same starting vertex, you will get the same result (the same distances to every other vertex). But I don't think that it is deterministic (that each operation has a single defined next operation), because that would mean that it wouldn't have to search for the shortest distances in the first place.
Am I correct? If I'm wrong, could you please explain why it is deterministic, and maybe give an example?
I'm not sure there is a universal definition of determinism, but Wikipedia defines it as...
... an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states.
So this requires both determinism of the output and determinism of the execution. The output of Dijkstra's algorithm is deterministic no matter how you look at it, because it's the length of the shortest path, and there is only one such length.
The execution of Dijkstra's algorithm in the abstract sense is not deterministic, because the final step is:
Otherwise, select the unvisited node that is marked with the smallest tentative distance, set it as the new "current node", and go back to step 3.
If there are multiple nodes with the same smallest tentative distance, the algorithm is free to select one arbitrarily. This doesn't affect the output, but it does affect the order of operations within the algorithm.
A particular implementation of Dijkstra's algorithm, however, probably is deterministic, because the nodes will be stored in a deterministic data structure like a min heap. This will result in the same node being selected each time the program is run. Although things like hashtable salts may also affect determinism even here.
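For illustration, here is a minimal Dijkstra sketch (my own, not from the answer) in which the tie-breaking rule is fixed: heapq compares whole (distance, node) tuples, so equal distances are resolved by the node values themselves, and the program visits nodes in the same order on every run:

import heapq

def dijkstra(graph, source):
    """graph: dict node -> list of (neighbor, weight); returns a dict of distances."""
    dist = {source: 0}
    heap = [(0, source)]                 # ties on distance broken by node ordering
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        for neighbor, weight in graph.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist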
Allow me to expand on Thomas's answer.
If you look at an implementation of Dijkstra, such as this example: http://graphonline.ru/en/?graph=NnnNwZKjpjeyFnwx you'll see a graph in which the paths 0→1→5, 0→2→5, 0→3→5 and 0→4→5 all have the same length. "The shortest path" is therefore not necessarily unique, as that example shows.
Using the wording on Wikipedia, at some point the algorithm instructs us to:
select the unvisited node that is marked with the smallest tentative distance.
The problem here is the word "the", suggesting that it is somehow unique. It may not be. For an implementation to actually pick one node from many equal candidates, the algorithm must be further specified with a rule for how to select it. But any selected candidate that has the required property will determine a path of the shortest length, so the algorithm doesn't commit to one. The modern approach to wording this algorithm would be to say:
select any unvisited node that is marked with the smallest tentative distance.
From a mathematical graph theory algorithm standpoint, that algorithm would technically proceed with all such candidates simultaneously in a sort of multiverse. All answers it may arrive at are equally valid. And when proving the algorithm works, we would prove it for all such candidates in all the multiverses and show that all choices arrive at a path of the same distance, and that the distance is the shortest distance possible.
Then, if you want to use the algorithm to just compute one such answer because you want to either A) find one such path, or B) determine the distance of such a path, then it is left up to you to select one specific branch of the multiverse to explore. All such selections made according to the algorithm as defined will yield a path whose length is the shortest length possible. You can define any additional non-conflicting criteria you wish to make such a selection.
The reason the implementation I linked is deterministic and always gives the same answer (for the same start and end nodes, obviously) is because the nodes themselves are ordered in the computer. This additional information about the ordering of the nodes is not considered for graph theory. The nodes are often labelled, but not necessarily ordered. In the implementation, the computer relies on the fact that the nodes appear in an ordered array of nodes in memory and the implementation uses this ordering to resolve ties. Possibly by selecting the node with the lowest index in the array, a.k.a. the "first" candidate.
If an implementation resolved ties by randomly (not pseudo-randomly!) selecting a winner from equal candidates, then it wouldn't be deterministic either.
Dijkstra's algorithm as described on Wikipedia just defines an algorithm to find the shortest paths (note the plural paths) between nodes. Any such path that it finds (if it exists) it is guaranteed to be of the shortest distance possible. You're still left with the task of deciding between equivalent candidates though at step 6 in the algorithm.
As the tag says, the usual term is "deterministic". And the algorithm is indeed deterministic. For any given input, the steps executed are always identical.
Compare it to a simpler algorithm like adding two multi-digit numbers. The result is always the same for two given inputs, the steps are also the same, but you still need to add the numbers to get the outcome.
By deterministic I take it you mean that it will give the same answer to the same query for the same data every time, and give only one answer; if so, then it is deterministic. If it were not deterministic, think of the problems it would cause for those who use it. I write in Prolog all day, so I know non-deterministic answers when I see them.
Below I introduce a simple mistake in Prolog that makes the answer non-deterministic, and then a simple fix that makes it deterministic.
Non-deterministic
% Leaves a useless choice point: after the first answer, Prolog backtracks into
% the second clause, succ(Length, 0) fails, and you get a spurious "false".
spacing_rec(0,[]).
spacing_rec(Length0,[' '|T]) :-
    succ(Length,Length0),
    spacing_rec(Length,T).

?- spacing_rec(0,L).
L = [] ;
false.
Deterministic
% The cut commits to the first clause when the length is 0, so no choice point
% is left behind and the answer is deterministic.
spacing_rec(0,[]) :- !.
spacing_rec(Length0,[' '|T]) :-
    succ(Length,Length0),
    spacing_rec(Length,T).

?- spacing_rec(0,L).
L = [].
I will try to keep this short and simple; there are many great explanations of this here and elsewhere online, if some research is done.
Dijkstra's algorithm is a greedy algorithm; its main goal is to find the shortest path between two nodes of a weighted graph.
Wikipedia does a great job of explaining what deterministic and non-deterministic algorithms are and how you can 'determine' which category an algorithm falls into:
From Wikipedia Source:
Deterministic Algorithm:
In computer science, a deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. Deterministic algorithms are by far the most studied and familiar kind of algorithm, as well as one of the most practical, since they can be run on real machines efficiently.
Formally, a deterministic algorithm computes a mathematical function; a function has a unique value for any input in its domain, and the algorithm is a process that produces this particular value as output.
Nondeterministic Algorithm
In computer science, a nondeterministic algorithm is an algorithm that, even for the same input, can exhibit different behaviors on different runs, as opposed to a deterministic algorithm. There are several ways an algorithm may behave differently from run to run. A concurrent algorithm can perform differently on different runs due to a race condition.
So, going back to the goal of Dijkstra's algorithm: it is like saying I want to get from location X to location Z, and to do that I have the option of going through shorter nodes that will get me to my end a lot quicker and more efficiently than other, longer routes or nodes...
Thinking through cases where Dijkstra's algorithm could be deterministic would be a good idea as well.

Why do you need to pick a heuristic close to the actual path costs?

I'm a computing science student and I currently have a subject about artificial intelligence. This subject covers various best-first search pathfinding algorithms, such as A* search.
My study material claims that it is usual to pick a heuristic following the next rule:
0 ≤ h(n) ≤ h*(n)
I understand this bit, but after that the material claims your heuristic should be optimistic, or close to h*(n). According to the material this should result in fewer nodes being expanded.
Why (and should you at all) pick a heuristic as close as possible to h*(n), and why does this result in (a lot) fewer nodes being expanded?
Thanks in advance for replying, you'll be helping me out so much for my next exam!
Example:
Find the shortest way to a specific square on a chessboard-like field with obstacles.
Let's say you can only go left, right, up or down.
A heuristic gives you a guess of how many steps to the goal you will need, for each of the four possible fields you can move to at every iteration.
Now your heuristic can be:
always optimal: if that's the case you will always go to the correct next field and find your best path straight away.
always lower or optimal: here it may happen that you go to the wrong field at some point, but once you reach your goal (or a few fields later) your algorithm will notice that the path you found is longer than the estimate for the field you should have visited earlier, so it goes back and explores that one.
In other words: your heuristic always gives you a number lower than or equal to the actual number of steps you have to make. Hence, if you find a path shorter than or equal to all heuristics of fields you didn't visit, you can be sure your path is optimal.
sometimes higher: if your heuristic sometimes gives you more steps than you would actually need, you can never be sure that the path you have found is optimal!
So the worst thing that can happen is an overestimation of the way by your heuristic, because then you might not find the optimal path. Therefore you have the condition 0 ≤ h(n) ≤ h*(n).
And the closer you are to the optimal heuristic, the fewer "wrong fields" you visit in your search. And the fewer wrong fields you visit, the faster you are.
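A minimal A* sketch for exactly this kind of 4-directional grid, using the Manhattan distance as the admissible heuristic (my own illustration; the grid encoding with 0 = free and 1 = obstacle is an assumption):

# A* on a 4-directional grid with obstacles. Manhattan distance never
# overestimates when you can only move left/right/up/down, so it is admissible.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    def h(cell):                                   # Manhattan distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])
    open_heap = [(h(start), 0, start, [start])]    # (f, g, cell, path)
    best_g = {start: 0}
    while open_heap:
        f, g, cell, path = heapq.heappop(open_heap)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    heapq.heappush(open_heap, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None

# Example: astar([[0,0,0],[1,1,0],[0,0,0]], (0,0), (2,0)) walks around the wall.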

Optimal search algorithm without admissible heuristic

Please forgive me if I'm not using the correct terms or have overlooked an existing solution. I'm not experienced in search algorithms and the theories behind it. I just would like to solve a problem.
I've previously used what I was told to be the A* algorithm to solve a different problem. But reading up on it I've realized that what I learned is not quite what wikipedia tells me.
What I learned was:
Start at your origin node
Open a new solution for each path you can take
Recursively create a new subsolution for each path you can take from there
When you arrive at the same place with multiple solutions, drop those that took longer than the fastest
Now if I understand wikipedia correctly, this is what I was supposed to do:
Start at your origin node
Open a new solution for each path you can take
Order the solutions by "cost of path taken" + "estimated cost to target"
Take cheapest solution and create subsolutions for each possible path
Order those solutions in with the others, then rinse and repeat
I can see how this would help with not calculating quite as many solutions, but my problem is that I see no possibility of creating an "optimistic" estimate.
I'm not searching for a path on a geographical map. I'm trying to find the best sequence of actions. There's a minimum sequence of - say - ABCDEFGH. You cannot do F before E, but repeating previous actions in a particular ordering might make later actions more efficient.
Do I need a different search algorithm? Do I do what I originally learned and just live with the fact that doing more work is the price for not having a good heuristic function?
I believe my teacher recognized this problem. And what I learned was simply A* with a heuristic function of h(n) = 0.
I'm not searching for a path on a geographical map. I'm trying to find
the best sequence of actions. There's a minimum sequence of - say -
ABCDEFGH. You cannot do F before E but repeating previous actions in
particular ordering might make later actions more efficient.
It is not clear to me whether you can repeat one action, i.e., a solution is ABCDEFGH, but would ABBBBCDEFGH be possible?
If not, then you might be able to use the A* algorithm, implemented like this:
1. At some stage (say the first, "empty"), you have one of several actions available.
2. The cost of going from Empty City to A City is the cost of action A.
3. The cost of going from Empty City to B City is the cost of action B.
When you've reached B, the cost of doing C is constant (if it is not, then you can't use A* as is) and you insert the cost of going from B City to C City as the cost of C.
So you can handle the case in which an action has different costs, provided that this difference is completely described by the previous state. For example, if you can only do C if you have done A or B, and the cost of C is 5 after A and 8 after B, you enter the "distance" between A and C as 5, and between B and C as 8.
If the cost of, say, D depends on the two previous states, you can still use a more complicated A* implementation where you define the virtual "cities" BC, AB and AC, and the distance from BC to D is "the cost of D having done B and C", and so on. The cost of reaching BC from A is "the cost of B given A, and the cost of C given A and B". So if these costs depend on the previous states, things get even more complicated.
In the end, the complexity of this revised A* will grow until it becomes your algorithm, where every state depends potentially on the sequence of all preceding states. The more this is true, the more your algorithm is convenient; the more every state is a cost unto itself, the more A* is convenient.
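A minimal sketch of the simplest variant of this idea (my own illustration; the action names and costs are made up): if the cost of an action depends only on the immediately preceding action, the actions themselves become the graph nodes and the conditional costs become edge weights, so plain Dijkstra applies:

import heapq

# cost[prev][next] = cost of doing `next` right after `prev`; "start" is the empty state.
cost = {
    "start": {"A": 3, "B": 2},
    "A": {"C": 5},            # C costs 5 if done after A ...
    "B": {"C": 8},            # ... but 8 if done after B
    "C": {"D": 1},
}

def cheapest_sequence(cost, start, goal):
    heap = [(0, start, [start])]
    done = set()
    while heap:
        total, action, seq = heapq.heappop(heap)
        if action == goal:
            return total, seq
        if action in done:
            continue
        done.add(action)
        for nxt, c in cost.get(action, {}).items():
            heapq.heappush(heap, (total + c, nxt, seq + [nxt]))
    return None

# cheapest_sequence(cost, "start", "D") -> (9, ['start', 'A', 'C', 'D'])

When the cost depends on the last two (or more) actions, the nodes become pairs (or longer tuples) of actions, exactly the "virtual cities" BC, AB, AC described above.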
And of course the possibility of closed loops (visiting the same state/action twice, making this a cyclic graph) blows A* straight out of the water.

A* heuristic, overestimation/underestimation?

I am confused about the terms overestimation/underestimation. I perfectly understand how the A* algorithm works, but I am unsure of the effects of having a heuristic that overestimates or underestimates.
Is overestimation when you take the square of the direct birdview line? And why would it make the algorithm incorrect? The same heuristic is used for all nodes.
Is underestimation when you take the square root of the direct birdview line? And why is the algorithm still correct?
I can't find an article which explains it nice and clear so I hope someone here has a good description.
You're overestimating when the heuristic's estimate is higher than the actual final path cost. You're underestimating when it's lower (you don't have to underestimate, you just have to not overestimate; correct estimates are fine). If your graph's edge costs are all 1, then the examples you give would provide overestimates and underestimates, though the plain coordinate distance also works peachy in a Cartesian space.
Overestimating doesn't exactly make the algorithm "incorrect"; what it means is that you no longer have an admissible heuristic, which is a condition for A* to be guaranteed to produce optimal behavior. With an inadmissible heuristic, the algorithm can wind up doing tons of superfluous work examining paths that it should be ignoring, and possibly finding suboptimal paths because of exploring those. Whether that actually occurs depends on your problem space. It happens because the path cost is 'out of joint' with the estimate cost, which essentially gives the algorithm messed up ideas about which paths are better than others.
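As a concrete made-up illustration of that risk: take edges S→A with cost 1, A→G with cost 1, and S→G with cost 3, with h(G) = 0. The optimal path is S→A→G with cost 2. If h(A) overestimates, say h(A) = 5 while the true remaining cost is 1, then f(A) = 1 + 5 = 6 while f(G) = 3 via the direct edge, so A* pops the goal first and returns S→G with cost 3. Any admissible h(A) ≤ 1 would have expanded A first and found the cheaper path.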
I'm not sure whether you will have found it, but you may want to look at the Wikipedia A* article. I mention (and link) mainly because it's almost impossible to Google for it.
From the Wikipedia A* article, the relevant part of the algorithm description is:
The algorithm continues until a goal node has a lower f value than any node in the queue (or until the queue is empty).
The key idea is that, with underestimation, A* will only stop exploring a potential path to the goal once it knows that the total cost of the path will exceed the cost of a known path to the goal. Since the estimate of a path's cost is always less than or equal to the path's real cost, A* can discard a path as soon as the estimated cost exceeds the total cost of a known path.
With overestimation, A* has no idea when it can stop exploring a potential path as there can be paths with lower actual cost but higher estimated cost than the best currently known path to the goal.
Intuitive Answer
For A* to work correctly (always finding the 'best' solution, not just any), your estimation function needs to be optimistic.
Optimism here means that your expectations are always better than reality.
An optimist will try many things that might disappoint in the end, but they will find all the good opportunities.
A pessimist expects bad results, and so will not try many things. Because of this, they may miss some golden opportunities.
So for A*, being optimistic means always underestimating the costs (i.e. "it's probably not that far"). When you do that, then once you have found a path you might still feel excited about several unexplored options that could be even better.
That means you won't stop at the first solution, and still try those other ones. Most will probably disappoint (not be better), but it guarantees you will always find the best solution. Of course trying out more options takes more work (time).
A pessimistic A* will always overestimate cost (e.g. "that option is probably pretty bad"). Once it has found a solution and it knows the true cost of the path, every other path will seem worse (because estimates are always worse than reality), and it will never try any alternative once the goal is found.
The most effective A* is one that never overestimates but comes as close to the true cost as possible: it estimates either perfectly or only slightly optimistically. Then you'll not be naive and try too many bad options.
A nice lesson for everyone. Always be slightly optimistic!
Short answer
@chaos's answer is a bit misleading, IMO ("can" should be highlighted):
Overestimating doesn't exactly make the algorithm "incorrect"; what it means is that you no longer have an admissible heuristic, which is a condition for A* to be guaranteed to produce optimal behavior. With an inadmissible heuristic, the algorithm can wind up doing tons of superfluous work
as @AlbertoPL is hinting:
You can find an answer quicker by overestimating, but you may not find the shortest path.
In the end (besides the mathematical optimum), the optimal solution strongly depends on whether you consider computing resources, runtime, special types of "maps"/state spaces, etc.
Long answer
As an example, I can think of a realtime application where a robot gets to the target faster by using an overestimating heuristic, because the time advantage of starting earlier is bigger than the time advantage of taking the shortest path but waiting longer for that solution to be computed.
To give you a better impression, I share some exemplary results that I quickly created with Python. The results stem from the same A* algorithm; only the heuristic differs. Each node (grid cell) has edges to all eight neighbors except walls. Diagonal edges cost sqrt(2) ≈ 1.41.
The first picture shows the returned paths of the algorithm for a simple example case. You can see some suboptimal paths from overestimating heuristics (red and cyan). On the other hand there are multiple optimal paths (blue, yellow, green), and it depends on the heuristic which one is found first.
The different images show all expanded nodes when the target is reached. The color shows the estimated path cost using this node (considering the "already taken" path from start to this node as well; mathematically it's the cost so far plus the heuristic for this node). At any time the algorithm expands the node with lowest estimated total cost (described before).
1. Zero (blue)
Corresponds to the Dijkstra algorithm
Nodes expanded: 2685
Path length: 89.669
2. As the crow flies (yellow)
Nodes expanded: 658
Path length: 89.669
3. Ideal (green)
Shortest path without obstacles (if you follow the eight directions)
Highest possible estimate without overestimating (hence "ideal")
Nodes expanded: 854
Path length: 89.669
4. Manhattan (red)
Shortest path without obstacles (if you don't move diagonally; in other words: cost of "moving diagonally" is estimated as 2)
Overestimates
Nodes expanded: 562
Path length: 92.840
5. As the crow flies times ten (cyan)
Overestimates
Nodes expanded: 188
Path length: 99.811
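For reference, the five heuristics compared above could be written roughly like this (a sketch; the exact scaling used in the original experiment may differ, and the function names are mine):

from math import hypot, sqrt

def zero(x, y, gx, gy):               # 1. Zero -> behaves like Dijkstra
    return 0

def crow_flies(x, y, gx, gy):         # 2. Euclidean distance ("as the crow flies")
    return hypot(gx - x, gy - y)

def ideal(x, y, gx, gy):              # 3. Octile distance: shortest path without obstacles
    dx, dy = abs(gx - x), abs(gy - y)  #   on an 8-connected grid with diagonal cost sqrt(2)
    return max(dx, dy) + (sqrt(2) - 1) * min(dx, dy)

def manhattan(x, y, gx, gy):          # 4. Manhattan: counts a diagonal as 2, so it overestimates
    return abs(gx - x) + abs(gy - y)

def crow_flies_x10(x, y, gx, gy):     # 5. Euclidean times ten: overestimates heavily
    return 10 * hypot(gx - x, gy - y)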
As far as I know, you want to typically underestimate so that you may still find the shortest path. You can find an answer quicker by overestimating, but you may not find the shortest path. Hence why overestimation is "incorrect", whereas underestimating can still provide the best solution.
I'm sorry that I cannot provide any insight as to the birdview lines...
Consider the evaluation function f(x) = g(x) + h(x), where g(x) is the real cost from the start node to the current node, and h(x) is the predicted cost from the current node to the goal. Assume the optimal cost is R. Then:
The h(x) makes a difference in the early stage of the search. Given three nodes A, B, C:
(*) => current pos: A
A -------> B - ... -> C
|_______________________| => the prediction range of h(x)
Once you step onto B, the cost from A to B is known exactly, so the prediction h(x) no longer includes it:
(*) => current pos: B
A -------> B - ... -> C
           |__________| => the prediction range of h(x)
When we say under-estimate, it means that your h(x) makes f(x) ≤ R for every x along the optimal way to the goal.
Over-estimation indeed makes the algorithm incorrect:
Assume R is 19, and that the two costs 20 and 21 at the front of the queue are the costs of paths that already reach the goal:
Front                  Rear
---------------------------     => This is a priority queue PQ.
| 20 | 21 | 30 | ... | 99 |
  ^--------                     => This is the "fake" optimal.
But say f(y) = g(y) + h(y), where y is indeed on the right path to achieve the optimal cost R. Since h(y) is over-estimated, f(y) is currently 30 in the PQ; so before we can update 30 to 19, the algorithm will already have popped 20 from the PQ and wrongly assumed it was an "optimal" solution.

Matching algorithm

Odd question here, not really code but logic; hope it's OK to post it here. Here it is:
I have a data structure that can be thought of as a graph.
Each node can support many links, but is limited to a per-node maximum.
All links are bidirectional, and each link has a cost. The cost depends on the Euclidean distance between the nodes, the minimum of a parameter of the two nodes, and a global modifier.
I wish to find the maximum total cost for the graph.
I'm wondering if there is a clever way to find such a matching, rather than going through it by brute force... which is ugly... and I'm not sure how I'd even do that without spending 7 million years running it.
To clarify:
Global variable = T
Many nodes N, each with E, X, Y, L.
L is the max number of links each node can have.
cost of link A,B = sqrt( min([a].e, [b].e) ) ×
                   ( 1 + sqrt( sqrt( sqr([a].x-[b].x) + sqr([a].y-[b].y) ) )/75 + sqrt(T)/10 )
Total cost = sum over all links... and we wish to maximize this.
Average values for nodes are 40-50, and can range over (20..600).
Average node linking factor is 3, with a range of 0-10.
For the sake of completeness, for anybody else that looks at this article, I would suggest revisiting your graph theory algorithms:
Dijkstra
Astar
Greedy
Depth / Breadth First
Even dynamic programming (in some situations)
etc., etc.
In there somewhere is the correct solution for your problem. I would suggest looking at Dijkstra first.
I hope this helps someone.
If I understand the problem correctly, there is likely no polynomial solution. Therefore I would implement the following algorithm:
Find some solution by being greedy. To do that, sort all edges by cost and then go through them starting with the highest, adding an edge to your graph when possible and skipping it when either endpoint can't accept more edges (a sketch of this greedy step is below).
Look at your edges and try to change them to achieve a higher cost by using a heuristic. The first that comes to my mind: cycle through all 4-tuples of nodes (A, B, C, D), and if your current graph has edges AB and CD but AC and BD would be better, then make the change.
Optionally do the same thing with 6-tuples, or use other genetic algorithms (they are called that way because they work by mutations).
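A minimal sketch of the greedy first step (my own illustration; the edge and capacity representations are assumptions based on the question, where L is the per-node link limit):

def greedy_links(edges, capacity):
    """edges: list of (cost, a, b); capacity: dict node -> max number of links (L)."""
    used = {node: 0 for node in capacity}
    chosen = []
    for cost, a, b in sorted(edges, reverse=True):      # most expensive edges first
        if used[a] < capacity[a] and used[b] < capacity[b]:
            chosen.append((a, b, cost))
            used[a] += 1
            used[b] += 1
    return chosen, sum(c for _, _, c in chosen)

The local-improvement step described above (swapping chosen edges AB, CD for AC, BD) would then be applied repeatedly whenever a swap raises the total and respects the capacities.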
This is equivalent to the traveling salesman problem (and is therefore NP-Complete) since if you could solve this problem efficiently, you could solve TSP simply by replacing each cost with its reciprocal.
This means you can't expect to solve it exactly. On the other hand, it means that you can do exactly as I said (replace each cost with its reciprocal) and then use any of the known TSP approximation methods on this problem.
Seems like a max flow problem to me.
Is it possible to just greedily select the next most expensive option from any given start point (omitting jumps to visited nodes) and stop once all nodes are visited? If you get to a dead end, backtrack to the previous spot where you are not at a dead end and greedily select again. It would require some work, and probably something like a stack to keep your paths in. I think this would work quite effectively, provided the costs are well ordered and non-negative.
Use genetic algorithms. They are designed to solve problems like the one you state, rapidly reducing time complexity. Check for an AI library in your language of choice.
