Which algorithm is appropriate?

Consider that the agent is a robot that moves physically inside a labyrinth. It does not know the map of the labyrinth, but it can sense its own orientation and which ways are open out of its current square, and it knows that the exit is in the lower-right corner. The robot wants to reach the exit as fast as possible. Which algorithm would you use?
My professor said that Depth First Search with modifications would be the solution. The modifications would be: remember what we have already searched to avoid an infinite loop, and give higher priority to moves that go right or down.
My understanding of DFS, however, is that it is not optimal and is not guaranteed to find a solution. I know that A* would find the least-cost path, but I am unsure about its speed.
If it helps the grid is 10x10.

Depth First Search, if implemented correctly with the suggested modifications, will find a solution to the problem you described. You mention that the agent should reach the exit as fast as possible, which probably means via the shortest path. In that sense DFS is not optimal, because it returns the first path it finds that reaches the exit. A* will find the shortest path. A* and its variants are the preferred way to solve path-finding problems like the one you describe, and have been used in real-time computer games for decades.

First, DFS will find a solution if one exists (assuming it doesn't go in circles, which your professor's modification addresses). Your assertion that this is not guaranteed is wrong.
Second, the priority directions are essentially what A* is about: it expands the path that is most likely to lead to the solution.

Personally, I would use A* with an appropriate heuristic function adapted to the characteristics of the labyrinth. DFS, BFS, and A* will all find a solution, but with a proper heuristic function A* will find it faster.
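To make the comparison concrete, here is a minimal A* sketch for a grid labyrinth like the one described (the function name and grid representation are my own, not from the question: a 2-D list where 0 is an open square and 1 is a wall). Manhattan distance to the exit is an admissible heuristic on a 4-connected grid, and trying down/right moves first plays the same role as the professor's priority directions:

```python
import heapq

def astar(grid, start, goal):
    """A* on a grid labyrinth: grid[r][c] == 0 is an open square, 1 a wall.
    Manhattan distance to the exit is an admissible heuristic here."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]   # (f = g + h, g, cell, path)
    best_g = {start: 0}
    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path                          # shortest path, start to exit
        for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):  # down/right first
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0:
                ng = g + 1
                if ng < best_g.get((r, c), float('inf')):
                    best_g[(r, c)] = ng
                    heapq.heappush(
                        frontier, (ng + h((r, c)), ng, (r, c), path + [(r, c)]))
    return None                                  # no route to the exit
```

On a 10x10 grid this explores at most 100 cells, so any of the algorithms discussed is fast in absolute terms; the heuristic mainly reduces how many of those cells get expanded.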

Related

What is the difference between Hill Climbing Search and Best First Search?

I am trying to learn some search concepts but ran into a wall in the process. Can anyone explain to me what the difference is between hill climbing search and best first search? To me, they both look like expanding the nodes with the heuristic value closest to the goal. If anyone can explain the difference to me, it'd be greatly appreciated. Thanks!
You can view a search algorithm as maintaining a queue of remaining nodes to search. This answer demonstrates that principle.
In depth-first search, you add the current node's children to the front of the queue (a stack). In breadth-first search, you add the current node's children to the back of the queue. Think for a moment about how this leads to the right behaviour for those algorithms.
Now, in hill-climbing search, you sort[1] the current node's children before adding them to the queue. In best-first search, you add the current node's children to the queue in any old order, then sort[1] the entire queue. If you think about the effect that might have on the order in which nodes are searched, you should get an idea of the practical difference.
I found this concept too tangly to understand from purely abstract terms, but if you work through a couple of examples with a pencil it becomes simple.
[1]: sort according to some problem-specific evaluation of the solution node, for example "distance from destination" in a path-finding search.
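The queue picture above can be sketched in code. This is a toy illustration, not a production search (the function and strategy names are my own); `score` is the problem-specific evaluation from [1]:

```python
def generic_search(start, goal, children, strategy, score=None):
    """One skeleton, four behaviours: `strategy` only changes how the
    current node's children enter the frontier queue."""
    frontier = [start]
    visited = set()
    while frontier:
        node = frontier.pop(0)          # always take from the front
        if node == goal:
            return node
        if node in visited:
            continue
        visited.add(node)
        kids = [c for c in children(node) if c not in visited]
        if strategy == 'dfs':           # children to the front -> stack
            frontier = kids + frontier
        elif strategy == 'bfs':         # children to the back -> queue
            frontier = frontier + kids
        elif strategy == 'hill':        # sort only the children, put them in front
            frontier = sorted(kids, key=score) + frontier
        else:                           # 'best': add children, sort the whole queue
            frontier = sorted(frontier + kids, key=score)
    return None
```

Note how hill climbing only ever reorders siblings, while best-first search can jump to a promising node anywhere in the queue.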
A little late, but here goes.
In best-first search, it's about finding the goal, so it's about picking the best node (the one we hope will take us to the goal) among the possible ones. We keep trying to move towards the goal.
But in hill climbing, it's about maximizing the target function. We pick the node which provides the highest ascent, so unlike best-first search, the 'value' of the parent node is also taken into account. If we can't go higher, we just give up; in that case we may never reach the goal, because we might be stuck at a local maximum.
The difference lies in what the search is most concerned with when looking for the goal state.
Ask the question: what is our aim?
The final goal state itself?
Or the best path to reach the goal state?
Best First Search is a systematic search algorithm, where systematicity is achieved by iteratively moving forward on the basis of the best heuristic value among the adjacent nodes of the current node. Here the evaluation function (heuristic function) estimates the best possible path to the goal state, so Best First Search is concerned with the best PATH to reach the goal state.
However, there are many problems where the path to the goal is not a concern; the only concern is to reach the final state by any possible path (for example, the 8-queens problem). For these, local search algorithms are used.
Local search algorithms operate using a single current node and generally move only to neighbours of that node. The hill climbing algorithm is a local search algorithm. So when thinking about hill climbing, we need to understand it as an approach for getting to the goal state, not for finding the best path to reach it.
(As stated in Artificial Intelligence: A Modern Approach, Russell & Norvig)
Basically, to understand local search we need to consider the state-space landscape.
A landscape has both
(i) location (defined by the state) and
(ii) elevation (defined by the value of the heuristic function or objective function).
We need to understand the two types of elevation:
(i) If elevation corresponds to an objective function, then the aim is to find the highest peak, i.e. a global maximum. (This type of elevation is useful in scenarios that are not concerned with path cost, only with finding the best immediate moves.)
(ii) If elevation corresponds to cost, then the aim is to find the lowest valley, i.e. a global minimum.
(Here is the common ground: steepest-ascent hill climbing, which always steps to the best-estimated neighbour, is similar to Best First Search. The elevation function is the heuristic function that provides the minimum cost. Hill climbing is concerned only with the current node: it iterates through the adjacent nodes looking for the minimum value and expands the best one, which is similar to Best First Search.)
Note:
The hill climbing algorithm does not look ahead beyond the immediate neighbours of the current state; it is only concerned with the best neighbouring node to expand, and the best neighbour is decided by the evaluation functions above.
The Best First Search algorithm, by contrast, looks ahead of the immediate neighbours to find the best path to the goal (using heuristic evaluation) and then proceeds with the best one.
So the difference lies in the approaches of local search versus systematic search algorithms. Understand the difference in approach, and you will see why the two are named differently.
Let me Wiki that for you:
"In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the closest to the solution is chosen. ... Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one."
So basically, steepest-ascent hill climbing compares all successors at each step and greedily commits to the best one, hoping that each locally optimal move leads directly to the answer.
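As a concrete illustration of the greedy behaviour described in these answers, here is a minimal steepest-ascent hill-climbing sketch (the function names are mine, and the objective used in the usage note is an arbitrary toy function with a single peak):

```python
def hill_climb(start, neighbors, value):
    """Steepest-ascent hill climbing: compare all successors, move to the
    best one, and stop when no neighbour improves on the current state."""
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current     # a peak -- possibly only a local maximum
        current = best
```

For example, with integer states, neighbours `x - 1` and `x + 1`, and the objective `-(x - 3)**2`, the climb from any start converges to `x = 3`, because that objective has a single peak. On a bumpier objective it would stop at whichever local maximum it reached first, which is exactly where it differs from best-first search.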

Heuristic and A* algorithm

I was reading about Dijkstra's algorithm and the A* algorithm. I know that the difference is the heuristic used, but what is a heuristic, and how does it influence the algorithms? Is a heuristic just a way to measure distance? But Dijkstra considers distance too? Sorry, but my question is about what a heuristic means and why to use one... (I had read about it, but don't understand.)
Another question: when should each one be used?
Thank you
In this context, a heuristic is a way of providing the algorithm with some form of extra evaluative information, so that the algorithm can find a 'good enough' solution, without exhaustively searching every possible solution.
Dijkstra's Algorithm does not use a heuristic. It expands outwards from the start node, and examines every node in the graph in order to find the shortest path. While this is accurate, it can be computationally expensive.
By comparison, the A* algorithm uses a distance + cost heuristic, to guide the algorithm in its choice of the next node to explore. This means that the algorithm finds a possible search solution without examining every node on the graph. It is therefore much cheaper to run, but at a loss of complete accuracy. It works because the result is usually close enough to the optimal solution, and is found cheaper than an exhaustive search of the entire graph.
As to when you should use each, it really depends on the application. However, using the A* algorithm requires an admissible heuristic, so this may not be applicable in situations where such information is unavailable to the algorithm.
A heuristic basically means an idea or an intuition! Any strategy that you use to solve a hard problem is a heuristic. In some fields (like combinatorial optimization) it refers to a strategy that can help you solve an NP-hard problem suboptimally in polynomial time.
Dijkstra solves the single-source shortest-path problem, i.e. it gives you the cost of going from a single point to any other point in the space.
A* solves a single-source, single-target problem: it gives you a minimum-distance path from one given point to another.
A* is usually faster than Dijkstra provided you give it an admissible heuristic, that is, an estimate of the distance to the goal that never overestimates the true distance. Contrary to a previous answer given here, if the heuristic used is admissible, A* is complete and will give you the optimal answer.
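One way to see the relationship these answers describe: Dijkstra's algorithm is exactly A* with a heuristic that is identically zero. A minimal sketch (the names and the adjacency-list representation of `(neighbour, weight)` pairs are my own):

```python
import heapq

def shortest_path_cost(graph, start, goal, h=lambda n: 0):
    """A* over a weighted graph given as {node: [(neighbour, weight), ...]}.
    With the default heuristic (always 0) this is exactly Dijkstra's
    algorithm; an admissible h only changes which nodes get explored first."""
    frontier = [(h(start), 0, start)]   # (f = g + h, g, node)
    best = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g                    # optimal cost if h never overestimates
        for nxt, w in graph.get(node, []):
            ng = g + w
            if ng < best.get(nxt, float('inf')):
                best[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None                         # goal unreachable
```

The only difference between the two algorithms is the priority used to order the frontier; that is what a heuristic "influences".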

Fastest path to walk over all given nodes

I'm coding a simple game and currently doing the AI part. NPC gets a list of his 'interest points' which he needs to visit. Each point has a coordinate on the map. I need to find a fastest path for the character to visit all of the given points.
As far as I understand it, the task could be described as 'finding fastest traverse path in a strongly connected weighted undirected graph'.
I'd like to get either the name of some algorithm to calculate that or if there is no name - some keypoints on programming it myself.
Thanks in advance.
This is very similar to the Travelling Salesman problem, although I'm not going to try to prove equivalency offhand. The TSP is NP-complete, which means that solving the problem exactly may be impractical, depending on the number of interest points. There are approximation algorithms that you may find more useful.
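For a game NPC, a common cheap compromise is the greedy nearest-neighbour approximation: always walk to the closest unvisited interest point. It is not optimal in general, but it is trivial to implement and fast. A minimal sketch (the function name is mine, assuming points are 2-D coordinates):

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy nearest-neighbour tour over all points: not optimal in
    general, but cheap enough to run per-NPC in a game loop."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

If the tour quality matters, a 2-opt pass over the greedy tour is a standard next step; for a handful of interest points, exact search over all permutations is also feasible.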
See previous post regarding tree traversals:
Tree traversal algorithm for directory structures with a lot of files
I would use an algorithm like ant colony optimization.
Not directly on point, but what I did in an MMO emulator was to store waypoint indices along with the rest of the pathing data. If your requirement is to demonstrate solutions to the TSP, then ignore this; if not, it's worth consideration IMO.
In my case it was the best solution as otherwise the server could have potentially hundreds of mobs (re)spawning and along with all the other AI logic, would have to burn cycles computing route logic.

solve a maze by using DFS, BFS, A*

I want to know how the results change when we use an open maze versus a closed maze with the DFS, BFS, and A* search algorithms. Is there any big difference in the output, such as an increase in the number of expanded nodes, cost, etc.?
A naive DFS can go into an infinite loop on certain open mazes, whereas on a closed maze it will always finish. I don't think BFS or A* can fall into that trap. (By "naive DFS" I mean one that doesn't mark nodes as "visited" as it traverses them.)
Edit: Daniel's comment has forced me to rethink this answer in the light of day rather than the sleepy moments before I went to bed. I will concede that A* marks nodes as visited as part of its basic functioning. However, I still think BFS can solve even open mazes without marking nodes. It won't be efficient, but if there is a solution to the maze, BFS will find it. By definition, it is trying all possible paths at a certain depth before moving onto the next depth. So if a solution exists with length 10, BFS will find it before trying any solutions of depth 11.
Yes, there is a big difference, as the different strategies traverse the maze in totally different orders.
A* can be quite efficient compared to naive DFS and BFS, but you need to find a good function to estimate the cost from your current position to the target.
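A small experiment makes the difference visible: solve the same maze with each strategy and count the expanded nodes. This is a toy sketch (the function name and maze representation are my own); BFS is guaranteed to return a shortest path on a unit-cost grid, DFS may return a longer one, and A* guided by the Manhattan-distance heuristic usually expands fewer nodes:

```python
import heapq

def solve(grid, start, goal, method):
    """Solve a grid maze (0 = open, 1 = wall) and return
    (path_length, nodes_expanded) so 'dfs', 'bfs' and 'astar' can be compared."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), start, 1)]     # (priority, cell, path length in cells)
    seen = {start}
    expanded = 0
    while frontier:
        if method == 'astar':
            _, cell, length = heapq.heappop(frontier)   # lowest f = g + h first
        elif method == 'bfs':
            _, cell, length = frontier.pop(0)           # oldest first (queue)
        else:                                           # dfs
            _, cell, length = frontier.pop()            # newest first (stack)
        expanded += 1
        if cell == goal:
            return length, expanded
        for dr, dc in ((1, 0), (0, 1), (-1, 0), (0, -1)):
            r, c = cell[0] + dr, cell[1] + dc
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] == 0 and (r, c) not in seen:
                seen.add((r, c))
                item = (length + h((r, c)), (r, c), length + 1)
                if method == 'astar':
                    heapq.heappush(frontier, item)
                else:
                    frontier.append(item)
    return None, expanded                 # goal unreachable
```

Because the `seen` set marks cells as visited when they are generated, all three variants terminate even on an open maze, which is exactly the trap the naive DFS above falls into.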

Astar-like algorithm with unknown endstate

A* is used to find the shortest path between a start node and an end node in a graph. What algorithm is used to solve a problem where the target state isn't specifically known and we instead only have a criterion for the target state?
For example, can a sudoku puzzle be solved with an A*-like algorithm? We don't know what the end state will look like (which number goes where), but we do know the rules of sudoku, i.e. a criterion for a winning state. So I have a start node and only a criterion for the end node; which algorithm should I use?
A* requires a graph, a cost function for traversal of that graph, a heuristic as to whether a node in the graph is closer to the goal than another, and a test whether the goal is reached.
Searching a Sudoku solution space doesn't really have a traversal cost to minimize, only a global cost (the number of unsolved squares), so all traversals would be equal cost, so A* doesn't really help - any cell you could assign costs one move and moves you one closer to the goal, so A* would be no better than choosing the next step at random.
It might be possible to apply an A* search based on the estimated/measured cost of applying the different techniques at each point, which would then try to find a path through the solution space with the least computational cost. In that case the graph would not just be the solution states of the puzzle, but you'd be choosing between the techniques to apply at that point - you'd know an estimate of the cost of a transition, but not where that transition 'goes', except that if successful, it's one step closer to the goal.
Yes, A* can be used when a specific goal state cannot be identified. (Pete Kirkham's answer implies this, but doesn't emphasise it much.)
When a specific goal state can't be identified, it's sometimes harder to come up with a useful heuristic lower bound on the remaining cost needed to complete a partial solution -- and the efficiency of A* depends on choosing an effective heuristic. But it doesn't mean it can't be applied. Any problem that can be solved on a computer can be solved using a breadth-first search, plus an array of flags indicating whether a state has been seen before; which is the same as A* with a heuristic lower bound that is always zero. (Of course, this is not the most efficient algorithm for solving many problems.)
You don't have to know the exact target end state. It all comes down to the heuristic function: when it returns 0, you can assume you have found (at least) one of the valid end states.
So during A*, instead of checking whether current_node == target_node, check whether current_node.h() returns 0. If so, the current state satisfies the goal criterion.
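A sketch of that idea: a best-first search whose stopping test is `h(state) == 0` rather than equality with a known goal node, demonstrated on the tiny 4-queens problem instead of full sudoku (all names are mine; the heuristic counts attacking pairs plus queens still to place, so it is zero exactly at a valid complete board):

```python
import heapq

def astar_predicate(start, neighbors, h):
    """Best-first search whose goal is a criterion: stop when h(state) == 0,
    i.e. the state satisfies the winning condition, rather than comparing
    against a known target node."""
    frontier = [(h(start), 0, start)]
    seen = {start}
    while frontier:
        f, g, state = heapq.heappop(frontier)
        if h(state) == 0:
            return state
        for nxt, cost in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt))
    return None

def queens_neighbors(state):
    # extend a partial 4-queens board: place a queen in the next row,
    # one child per column, unit cost per placement
    return [(state + (col,), 1) for col in range(4)]

def conflicts(state):
    # attacking pairs among placed queens, plus queens still to place;
    # zero exactly at a complete, conflict-free board
    attacks = sum(1 for i in range(len(state)) for j in range(i + 1, len(state))
                  if state[i] == state[j] or abs(state[i] - state[j]) == j - i)
    return attacks + (4 - len(state))
```

The same shape applies to sudoku: states are partial boards, neighbours fill one cell, and the heuristic counts rule violations plus empty cells.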
