I am trying to solve a problem using different algorithms; Steepest Ascent Hill Climbing (SAHC) and Best First Search are two of the algorithms I need to implement.
According to Wikipedia:
"In steepest ascent hill climbing all successors are compared and the closest to the solution is chosen..."
"Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one."
SAHC: All successors are compared and the closest to the solution is chosen.
BestFS: Tries all possible extensions of the current path instead of only one.
I don't really understand how these are different. Could someone please help me with this, preferably with some sort of explanation using trees?
They are quite similar. The difference is that best-first search considers all paths from the start node to the end node, whereas steepest ascent hill climbing only remembers one path during the search.
For example, say we have a graph like
start ---- A ---- B ---- end
   \                    /
    \                  /
     ------- C --------
where each edge has the same weight (ignore my crappy ASCII art skills :)).
Also suppose that in our heuristic function, A is considered closer to the end than C. (This could still be an admissible heuristic – it just underestimates the true distance of A.)
Then steepest-ascent hill climbing would choose A first (because it has the lowest heuristic value), then B (A's only successor), and then the end node.
A best-first search, on the other hand, would:
1) Add A and C to the open list.
2) Take A, the node with the best value, off the open list, and add B. Then OPEN = {B, C}.
3) Take the best node off the open list (either B or C, more on that in a second), and add its successor, the goal state.
4) Take the goal state off the open list and return the path that got there.
The choice of B or C in step 3) depends on exactly which best-first search algorithm you're using. A* would evaluate a node as the cost to get there plus the estimate of the cost to get to the end, in which case C would win (and in fact, with an admissible heuristic, A* is guaranteed to always get you the optimal path). A "greedy best-first search" would choose between the two options arbitrarily. In any case, the search maintains a list of possible places to expand from rather than a single one.
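To make the contrast concrete, here is a minimal Python sketch of both searches on the example graph above. The heuristic numbers are hypothetical; they only encode the assumption that A looks closer to the end than C.

    import heapq

    # The example graph: start-A-B-end (three edges) and start-C-end (two edges).
    graph = {
        'start': ['A', 'C'],
        'A': ['B'],
        'B': ['end'],
        'C': ['end'],
        'end': [],
    }

    # Hypothetical heuristic values: A looks closer to the end than C,
    # even though the route through C is actually shorter.
    h = {'start': 3, 'A': 1, 'B': 1, 'C': 2, 'end': 0}

    def hill_climb(start, goal):
        # Steepest-ascent hill climbing: remember only the current node.
        node, path = start, [start]
        while node != goal:
            succs = graph[node]
            if not succs:
                return None                   # dead end; no backtracking
            node = min(succs, key=h.get)      # greedily take the best successor
            path.append(node)
        return path

    def greedy_best_first(start, goal):
        # Greedy best-first search: keep an open list of all frontier nodes.
        open_list = [(h[start], start, [start])]
        while open_list:
            _, node, path = heapq.heappop(open_list)
            if node == goal:
                return path
            for succ in graph[node]:
                heapq.heappush(open_list, (h[succ], succ, path + [succ]))
        return None

    print(hill_climb('start', 'end'))        # ['start', 'A', 'B', 'end']
    print(greedy_best_first('start', 'end'))

An A* variant would order the open list by g + h (cost so far plus heuristic) instead of h alone; with unit edge weights it would return the optimal start-C-end route.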
Is it clearer how the two are different now?
SAHC is going to choose a single (possibly non-optimal) path greedily: it'll simply take the best node at each step until it hits the target.
Best-first, on the other hand, generates an entire search tree. Often (in the case of A*) it will find the optimal solution; this is not guaranteed for SAHC.
In a tower defense game, you have an NxM grid with a start, a finish, and a number of walls.
Enemies take the shortest path from start to finish without passing through any walls (they aren't usually constrained to the grid, but for simplicity's sake let's say they are; in either case, they can't move through diagonal "holes").
The problem (for this question at least) is to place up to K additional walls to maximize the path the enemies have to take. For example, for K=14
My intuition tells me this problem is NP-hard if (as I'm hoping to do) we generalize this to include waypoints that must be visited before moving to the finish, and possibly also without waypoints.
But, are there any decent heuristics out there for near-optimal solutions?
[Edit] I have posted a related question here.
I present a greedy approach that may be close to optimal (though I couldn't derive an approximation factor). The idea is simple: we should block the cells at critical places in the maze. These places can be found by measuring the connectivity of the maze: we consider vertex connectivity and find a minimum vertex cut that disconnects the start and finish, (s,f). After that we remove some critical cells.
To turn the maze into a graph, take the dual of the maze. Find a minimum (s,f) vertex cut in this graph. Then examine each vertex in the cut: we remove a vertex if its deletion increases the length of all s-f paths, or if it lies on the minimum-length path from s to f. After eliminating a vertex, recursively repeat the above process, k times in all.
But there is an issue: we might remove a vertex whose deletion cuts every path from s to f. To prevent this we can weight such cut vertices as high as possible: first compute the minimum (s,f) cut; if the cut is just a single node, make the graph weighted and give that vertex a high weight like n^3, then compute the minimum (s,f) cut again. The single cut vertex from the previous calculation won't belong to the new cut because of its weight.
But if (after some iterations) there is just one path between s and f, we can't improve it this way. In that case we can use a normal greedy step, like removing a node from one of the shortest paths from s to f that doesn't belong to any cut; after that we can go back to working with minimum vertex cuts.
The algorithm running time in each step is:
min-cut + path finding for all nodes in min-cut
O(min-cut) + O(n^2) * O(number of nodes in min-cut)
And because the number of nodes in the min cut cannot be greater than O(n^2) in a very pessimistic situation, the algorithm is O(k*n^4), but normally it shouldn't take more than O(k*n^3), because normally the min-cut algorithm dominates the path finding, and the path finding normally doesn't take O(n^2).
I guess the greedy choice is a good starting point for simulated annealing type algorithms.
P.S.: minimum vertex cut is similar to minimum edge cut, and a similar max-flow/min-cut approach can be applied to minimum vertex cut: just split each vertex into two vertices, one Vi and one Vo (input and output, joined by an internal edge). Converting the undirected graph to a directed one is also not hard.
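A sketch of that vertex-splitting trick, assuming the networkx library is available (the function and variable names are mine, not from the answer above):

    import networkx as nx

    def min_vertex_cut_via_splitting(G, s, t):
        # Find a minimum s-t vertex cut by splitting each vertex v into
        # v_in -> v_out with unit capacity, then running max-flow/min-cut.
        # G is an undirected networkx graph.
        D = nx.DiGraph()
        for v in G.nodes:
            # The internal edge carries the vertex "capacity"; s and t
            # themselves must not be cuttable.
            cap = float('inf') if v in (s, t) else 1
            D.add_edge((v, 'in'), (v, 'out'), capacity=cap)
        for u, v in G.edges:
            # Original edges get infinite capacity in both directions.
            D.add_edge((u, 'out'), (v, 'in'), capacity=float('inf'))
            D.add_edge((v, 'out'), (u, 'in'), capacity=float('inf'))
        cut_value, (S, T) = nx.minimum_cut(D, (s, 'out'), (t, 'in'))
        # A vertex is in the cut if its in-node and out-node ended up on
        # opposite sides of the partition.
        return {v for v in G.nodes if (v, 'in') in S and (v, 'out') in T}

Any s-t path has to pass through a unit-capacity internal edge, so the max flow (and hence the cut) is always finite even with the infinite edge capacities.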
It can easily be shown (proof left as an exercise for the reader) that it is enough to search for solutions in which every one of the K blockades is placed on the current minimum-length route. Note that if there are multiple minimum-length routes then all of them have to be considered. The reason is that if you don't put any of the remaining blockades on the current minimum-length route, then it does not change; hence you can put the first available blockade on it immediately during the search. This speeds up even a brute-force search.
But there are more optimizations. You can also always decide to place the next blockade so that it becomes the FIRST blockade on the current minimum-length route, i.e. if you place the blockade on the 10th square of the route, you mark squares 1..9 as "permanently open" until you backtrack. This again saves an exponential number of squares to consider during the backtracking search.
You can then apply heuristics to cut down the search space or to reorder it, e.g. first try those blockade placements that increase the length of the current minimum-length route the most. You can then run the backtracking algorithm for a limited amount of real time and pick the best solution found thus far.
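A minimal sketch of that pruned backtracking search (without the "permanently open" refinement), assuming a grid of '.'/'#' characters; the names are illustrative:

    from collections import deque

    def shortest_path(grid, start, goal):
        # BFS shortest path on a 4-connected grid; list of cells, or None.
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        q = deque([start])
        while q:
            cell = q.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and grid[nxt[0]][nxt[1]] == '.' and nxt not in prev):
                    prev[nxt] = cell
                    q.append(nxt)
        return None

    def best_blockade(grid, start, goal, k):
        # Backtracking search: each new wall goes on the current shortest
        # path, which prunes placements that cannot change the route.
        path = shortest_path(grid, start, goal)
        if path is None:
            return -1            # start and goal disconnected: disallowed
        best = len(path)         # option: place no further walls
        if k == 0:
            return best
        for r, c in path:
            if (r, c) in (start, goal):
                continue
            grid[r][c] = '#'     # place a wall and recurse
            best = max(best, best_blockade(grid, start, goal, k - 1))
            grid[r][c] = '.'     # undo
        return best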
I believe we can reduce the contained maximum manifold problem to boolean satisfiability and show NP-completeness through any dependency on this subproblem. Because of this, the algorithms spinning_plate provided are reasonable as heuristics, precomputation and machine learning are reasonable, and the trick becomes finding the best heuristic solution if we wish to blunder forward here.
Consider a board like the following:
..S........
#.#..#..###
...........
...........
..........F
This has many of the problems that cause greedy and gate-bound solutions to fail. If we look at that second row:
#.#..#..###
Our logic gates are, in a 0-based 2D array ordered as [row][column]:
[1][4], [1][5], [1][6], [1][7], [1][8]
We can re-render this as an equation to satisfy the block:
if ([1][9] AND ([1][10] AND [1][11]) AND ([1][12] AND [1][13])):
    traversal_cost = INFINITY; longest = False  # Infinity does not qualify
Excepting infinity as an unsatisfiable case, we backtrack and re-render this as:
if ([1][14] AND ([1][15] AND [1][16]) AND [1][17]):
    traversal_cost = 6; longest = True
And our hidden boolean relationship falls amongst all of these gates. You can also show that geometric proofs can't fractalize recursively, because we can always create a wall that's exactly N-1 width or height long, and this represents a critical part of the solution in all cases (therefore, divide and conquer won't help you).
Furthermore, because perturbations across different rows are significant:
..S........
#.#........
...#..#....
.......#..#
..........F
We can show that, without a complete set of computable geometric identities, the complete search space reduces itself to N-SAT.
By extension, we can also show that this is trivial to verify and non-polynomial to solve as the number of gates approaches infinity. Unsurprisingly, this is why tower defense games remain so fun for humans to play. Obviously, a more rigorous proof is desirable, but this is a skeletal start.
Do note that you can significantly reduce the n term in your n-choose-k relation. Because we can recursively show that each perturbation must lie on the critical path, and because the critical path is always computable in O(V+E) time (with a few optimizations to speed things up for each perturbation), you can significantly reduce your search space at a cost of a breadth-first search for each additional tower added to the board.
Because we may tolerably assume O(n^k) for a deterministic solution, a heuristic approach is reasonable. My advice thus falls somewhere between spinning_plate's answer and Soravux's, with an eye towards machine learning techniques applicable to the problem.
The 0th solution: Use a tolerable but suboptimal AI, in which spinning_plate provided two usable algorithms. Indeed, these approximate how many naive players approach the game, and this should be sufficient for simple play, albeit with a high degree of exploitability.
The 1st-order solution: Use a database. Given the problem formulation, you haven't quite demonstrated the need to compute the optimal solution on the fly. Therefore, if we relax the constraint of approaching a random board with no information, we can simply precompute the optimum for all K tolerable for each board. Obviously, this only works for a small number of boards: with V! potential board states for each configuration, we cannot tolerably precompute all optimums as V becomes very large.
The 2nd-order solution: Use a machine-learning step. Promote each step as you close a gap that results in a very high traversal cost, running until your algorithm converges or no more optimal solution can be found than greedy. A plethora of algorithms are applicable here, so I recommend chasing the classics and the literature for selecting the correct one that works within the constraints of your program.
The best heuristic may be a simple heat map generated by a locally state-aware, recursive depth-first traversal, sorting the results by most to least commonly traversed after the O(V^2) traversal. Proceeding through this output greedily identifies all bottlenecks, and doing so without making pathing impossible is entirely possible (checking this is O(V+E)).
Putting it all together, I'd try an intersection of these approaches, combining the heat map and critical path identities. I'd assume there's enough here to come up with a good, functional geometric proof that satisfies all of the constraints of the problem.
At the risk of stating the obvious, here's one algorithm:
1) Find the shortest path
2) Test blocking every node on that path and see which one results in the longest path
3) Repeat K times
Naively, this will take O(K*(V + E log E)^2), but with a little work you could improve step 2) by only recalculating partial paths.
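A rough sketch of that greedy loop, assuming walls are kept as a set of (row, col) cells; this is the naive version without the partial-path optimization:

    from collections import deque

    def bfs_path(walls, start, goal, rows, cols):
        # Shortest path as a list of cells on a 4-connected grid, or None.
        prev, q = {start: None}, deque([start])
        while q:
            cell = q.popleft()
            if cell == goal:
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                        and nxt not in walls and nxt not in prev):
                    prev[nxt] = cell
                    q.append(nxt)
        return None

    def greedy_walls(walls, start, goal, rows, cols, k):
        # Place k walls one at a time, each time blocking whichever cell
        # of the current shortest path lengthens the route the most.
        for _ in range(k):
            path = bfs_path(walls, start, goal, rows, cols)
            if path is None:
                break
            best_cell, best_len = None, len(path)
            for cell in path[1:-1]:              # never block start/goal
                cand = bfs_path(walls | {cell}, start, goal, rows, cols)
                if cand is not None and len(cand) > best_len:
                    best_cell, best_len = cell, len(cand)
            if best_cell is None:
                break                            # no single wall helps
            walls.add(best_cell)
        return walls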
As you mention, simply trying to break the path is difficult, because if most breaks simply add a length of 1 (or 2), it's hard to find the choke points that lead to big gains.
If you take the minimum vertex cut between the start and the end, you will find the choke points for the entire graph. One possible algorithm is this:
1) Find the shortest path
2) Find the min-cut of the whole graph
3) Find the maximal contiguous node set that intersects one point on the path, and block those nodes
4) Wash, rinse, repeat
Step 3) is the big part, and why this algorithm may perform badly, too. You could also try:
- the smallest node set that connects with other existing blocks;
- finding all groupings of contiguous vertices in the vertex cut and testing each of them for the longest path, a la the first algorithm.
The last one is probably the most promising.
Here is a thought. In your grid, group adjacent walls into islands and treat every island as a graph node. The distance between nodes is the minimal number of walls needed to connect them (to block the enemy).
In that case you can start maximizing the path length by blocking the cheapest arcs.
I have no idea if this would work, because you could create new islands with the walls you place, but it could help work out where to put them.
I suggest using a modified breadth first search with a K-length priority queue tracking the best K paths between each island.
I would, for every island of connected walls, pretend that it is a light (a special light that can only send out horizontal and vertical rays).
Use ray-tracing to see which other islands the light can hit.
Say island 1 (i1) hits i2, i3, i4 and i5, but doesn't hit i6, i7, ...
Then you would have line(i1,i2), line(i1,i3), line(i1,i4) and line(i1,i5).
Mark the distance of all grid points as infinity. Set the start point to 0.
Now use breadth-first search from the start: for every grid point, mark its distance as one more than the minimum distance of its neighbors.
But... here is the catch...
Every time you get to a grid point that is on a line() between two islands, instead of recording the distance as the minimum of its neighbors, you need to make it a priority queue of length K, recording the K shortest paths to that line() from any of the other line()s.
This priority queue then stays the same until you get to the next line(), where it aggregates all priority queues going into that point.
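For reference, here is the base BFS layer described above (the K-length priority queue bookkeeping at the line() crossings is omitted); the '.'/'#' grid format is an assumption:

    from collections import deque

    def distance_map(grid, start):
        # Mark every reachable grid point with its BFS distance from
        # start; unreachable cells stay at infinity.
        rows, cols = len(grid), len(grid[0])
        dist = [[float('inf')] * cols for _ in range(rows)]
        dist[start[0]][start[1]] = 0
        q = deque([start])
        while q:
            r, c = q.popleft()
            for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] != '#'
                        and dist[nr][nc] == float('inf')):
                    dist[nr][nc] = dist[r][c] + 1  # one more than neighbor
                    q.append((nr, nc))
        return dist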
You haven't shown the need for this algorithm to be real-time, but I may be wrong about this premise. If it doesn't have to be, you could precalculate the block positions.
If you can do this beforehand and then simply make the AI build the maze rock by rock as if it were a kind of tree, you could use genetic algorithms to ease your need for heuristics. You would need to load some kind of genetic algorithm framework, start with a population of non-movable blocks (your map) and randomly-placed movable blocks (blocks that the AI would place). Then you evolve the population by performing crossovers and mutations over the movable blocks, and evaluate the individuals by giving more reward to the longest path calculated. You would then simply have to write a resource-efficient path calculator, without the need for heuristics in your code. In the last generation of your evolution, you would take the highest-ranking individual, which would be your solution, thus your desired block pattern for this map.
Genetic algorithms are proven to take you, under ideal conditions, to a local maximum (or minimum) in reasonable time, which may be impossible to reach with analytic solutions on a sufficiently large data set (i.e. a big enough map in your situation).
You haven't stated the language in which you are going to develop this algorithm, so I can't propose frameworks that may perfectly suit your needs.
Note that if your map is dynamic, meaning that the map may change over tower defense iterations, you may want to avoid this technique, since it may be too expensive to re-evolve an entirely new population every wave.
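Setting that caveat aside, here is a hand-rolled, framework-free Python sketch of the idea; path_length stands in for the resource-efficient path calculator mentioned above, free_cells is a list of open (row, col) cells, and all names and parameters are illustrative:

    import random

    def evolve(free_cells, k, path_length, pop_size=50, generations=200,
               mutation_rate=0.2):
        # An individual is a set of k wall positions drawn from free_cells.
        # path_length(walls) is your path calculator: the enemies' shortest
        # path length, or 0 if the walls disconnect start from finish.
        def crossover(a, b):
            return set(random.sample(list(a | b), k))   # mix parents' walls

        def mutate(ind):
            if random.random() < mutation_rate and ind:
                ind = set(ind)
                ind.remove(random.choice(list(ind)))    # drop one wall...
                spare = [c for c in free_cells if c not in ind]
                if spare:
                    ind.add(random.choice(spare))       # ...and add a fresh one
            return ind

        pop = [set(random.sample(free_cells, k)) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=path_length, reverse=True)     # longest path = fittest
            survivors = pop[:pop_size // 2]
            children = [mutate(crossover(*random.sample(survivors, 2)))
                        for _ in range(pop_size - len(survivors))]
            pop = survivors + children
        return max(pop, key=path_length)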
I'm not at all an algorithms expert, but looking at the grid makes me wonder if Conway's Game of Life might somehow be useful for this. With a reasonable initial seed and well-chosen rules about the birth and death of towers, you could try many seeds and subsequent generations thereof in a short period of time.
You already have a measure of fitness in the length of the creeps' path, so you could pick the best one accordingly. I don't know how well (if at all) it would approximate the best path, but it would be an interesting thing to use in a solution.
I am trying to find an optimal solution for the following problem.
The numbers denoted inside each node are represented as (x,y).
The adjacent nodes to a node always have a y value that is the current node's y value + 1.
There is a cost of 1 for a change in the x value when we go from one node to an adjacent one.
There is no cost for going from a node to an adjacent one if there is no change in the value of x.
No two nodes with the same y value are considered adjacent.
The optimal solution is the one with the lowest cost. I'm thinking of using the A* path-finding algorithm to find an optimal solution.
My question: is A* a good choice for this kind of problem, or should I look at another algorithm? Also, I was thinking of using a recursive method to calculate the heuristic cost, but I get the feeling that it is not a good idea.
This is how I'm thinking the heuristic function would work:
The heuristic weight of a node = min(heuristic weight of its child nodes)
The same goes for the child nodes too.
But as far as my knowledge goes, a heuristic is meant to be an approximation, so I think I'm going in the wrong direction as far as the heuristic function is concerned.
A* is guaranteed to find the lowest-cost path in a graph with non-negative edge costs, provided that you use an appropriate heuristic. What makes a heuristic function appropriate?
First, it must be admissible, i.e. it should, for any node, produce either an underestimate or a correct estimate of the cost of the cheapest path from that node to any of the goal nodes. This means the heuristic should never overestimate the cost to get from the node to the goal.
Note that if your heuristic computes an estimated cost of 0 for every node, then A* just turns into uniform-cost (Dijkstra-style) exhaustive search. So h(n)=0 is still an admissible heuristic, only the worst possible one. Of all admissible heuristics, the more tightly one estimates the cost to the goal, the better it is.
Second, it must be cheap to compute. It should certainly be O(1), and should preferably look at the current node alone. Recursively evaluating the cost as you propose will make your search significantly slower, not faster!
The question of A* applicability is thus whether you can come up with a reasonably good heuristic. From your problem description, it is not clear whether you can easily come up with one.
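For concreteness, here is a textbook A* sketch; h would be whatever cheap admissible heuristic you manage to design for your graph, and the neighbors/cost callback signatures are my assumptions, not part of the question:

    import heapq
    import itertools

    def a_star(start, goal, neighbors, cost, h):
        # h must be admissible (never overestimate) and cheap: an O(1)
        # formula on the node itself, not a recursive scan of its subtree.
        tick = itertools.count()              # tie-breaker for the heap
        open_heap = [(h(start), next(tick), 0, start, [start])]
        best_g = {start: 0}
        while open_heap:
            _, _, g, node, path = heapq.heappop(open_heap)
            if node == goal:
                return path
            if g > best_g.get(node, float('inf')):
                continue                      # stale queue entry
            for nxt in neighbors(node):
                ng = g + cost(node, nxt)
                if ng < best_g.get(nxt, float('inf')):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap,
                                   (ng + h(nxt), next(tick), ng, nxt,
                                    path + [nxt]))
        return None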
Depending on the problem domain, A* may be very useful if the requirements are relaxed. If the heuristic becomes inadmissible, then you lose the guarantee of finding the best path. Depending on the degree of overestimation of the distance, however, the solution might still be good enough (for a problem-specific definition of "good enough"). The advantage is that sometimes you can compute that "good enough" path much faster. In some cases, a probabilistic estimate of the heuristic works well (it can have additional constraints on it to stay in the admissible range).
So, in general, you have breadth-first search for tractable problems, and, faster, A* for tractable problems with an admissible heuristic. If your problem is intractable for breadth-first exhaustive search and does not admit a heuristic, then your only option is to settle for a "good enough" suboptimal solution. Again, A* may still work with an inadmissible heuristic here, or you should look at beam search varieties. The difference is that beam searches have a hard limit on the number of ways the graph is currently being explored, while A* limits them indirectly by choosing some subset of less costly ones. There are practical cases not solvable by A* even with relaxed admissibility, when the difference in cost among different search paths is slight. Beam search, with its hard limit on the number of paths, works more efficiently on such problems.
It seems like overkill to use A* when something like Dijkstra's would work. Dijkstra's only works with non-negative cost transitions, which seems to be the case in this example. If you can have negative-cost transitions, then Bellman-Ford should work.
Say we have a circular list representing a solution of the traveling salesman problem. This list is initially empty.
If the user is allowed to enter cities and their coordinates one by one, what heuristics could be used to insert those coordinates into the already-existing tour?
An example uses the nearest neighbor heuristic: it inserts the new coordinate after the nearest coordinate already in the tour.
What are some other options (pseudo-code if possible)?
There are plenty of construction heuristics you can use, such as First Fit, First Fit Decreasing, Best Fit, Best Fit Decreasing and Cheapest Insertion.
Those construction heuristics are normally applied to bin packing, but they can be adapted to TSP too. Documentation about those heuristics is here.
Since you're only inserting one unassigned entity at a time, all of these basically revert to what you call the nearest neighbor heuristic (with a slight variation on ties), but note that that is not what is usually called Nearest Neighbor: Nearest Neighbor always adds the nearest neighbor among all unassigned entities to the end of the line.
Now, what you really want is a decent solution without having to restart your entire construction heuristic. That's harder: welcome to repeated planning and real-time planning (and this documentation). I am working on an open source example for TSP and vehicle routing that does real-time planning.
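As a concrete example, here is a minimal sketch of Cheapest Insertion for a single new city in Euclidean coordinates (names are mine):

    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def cheapest_insertion(tour, city):
        # Insert city into the circular tour at the position that adds the
        # least extra length: minimize d(a, city) + d(city, b) - d(a, b)
        # over every consecutive pair (a, b) in the tour.
        if len(tour) < 2:
            tour.append(city)
            return tour
        best_i, best_extra = 0, float('inf')
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]   # wrap: circular list
            extra = dist(a, city) + dist(city, b) - dist(a, b)
            if extra < best_extra:
                best_i, best_extra = i + 1, extra
        tour.insert(best_i, city)
        return tour

Nearest-neighbor insertion as you describe it would instead pick the tour city closest to the new one and splice the new city in right after it; Cheapest Insertion minimizes the actual detour instead.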
You can of course generalize the idea you have mentioned:
Define kth_path(v) = the minimum weight of a path including min{k, number of not-yet-visited cities} cities.
Note that calculating the kth path is O(|V|^k) [this bound is not tight].
Special cases:
For k=1 you get the nearest neighbor heuristic, as you suggested.
For k=|V| you get an optimal solution [note it will be very expensive to calculate].
There is no fundamentally different insertion heuristic, because inserting into a tour always comes down to finding the nearest coordinates. At least I don't know an algorithm that inserts a coordinate differently, but there are plenty of algorithms to find a good tour. A good heuristic is, for example, the Christofides algorithm. It works only in Euclidean space, but it gives you a guarantee that the solution is within 3/2 of the optimum. It's not very easy to code; the Edmonds blossom V algorithm in particular requires expert skill. The importance of such a guarantee shouldn't be underestimated: how else would you explain that your method can deliver nonsense in some rare situations?
I am trying to learn some search concepts but ran into a wall in the process. Can anyone explain to me what the difference is between hill climbing search and best first search? To me, they both look like expanding the nodes with the heuristic value closest to the goal. If anyone can explain the difference to me, it'd be greatly appreciated. Thanks!
You can view a search algorithm as having a queue of remaining nodes to search. This answer demonstrates this principle.
In depth-first search, you add the current node's children to the front of the queue (a stack). In breadth-first search, you add the current node's children to the back of the queue. Think for a moment about how this leads to the right behaviour for those algorithms.
Now, in hill-climbing search, you sort[1] the current node's children before adding them to the queue. In best-first search, you add the current node's children to the queue in any old order, then sort[1] the entire queue. If you think about the effect that might have on the order in which nodes are searched, you should get an idea of the practical difference.
I found this concept too tangly to understand from purely abstract terms, but if you work through a couple of examples with a pencil it becomes simple.
[1]: sort according to some problem-specific evaluation of the solution node, for example "distance from destination" in a path-finding search.
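A small sketch of the queue view described above; the children and score callbacks are placeholders for your problem, and there is no visited-set bookkeeping, so it assumes a tree rather than a graph with cycles:

    def search(start, goal, children, score, best_first):
        # One queue serves both algorithms, as described above.
        queue = [start]
        while queue:
            node = queue.pop(0)
            if node == goal:
                return node
            kids = sorted(children(node), key=score)  # hill climbing: sort only
            queue = kids + queue                      # the children, push in front
            if best_first:
                queue.sort(key=score)                 # best-first: sort everything
        return None

With best_first=False the search commits to the best-looking child and only falls back to siblings when it backtracks; with best_first=True the globally best frontier node is expanded next, wherever it sits in the tree.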
A little late, but here goes.
In best-first search, it's about finding the goal. So it's about picking the best node (the one we hope will take us to the goal) among the possible ones, and we keep trying to move towards the goal.
But in hill climbing, it's about maximizing the target function. We pick the node which provides the highest ascent, so unlike best-first search, the 'value' of the parent node is also taken into account. If we can't go higher, we just give up. In that case we may not even reach the goal; we might be stuck at a local maximum.
The difference lies in understanding what matters more when searching for the goal state.
Ask the question: what is our aim?
The final goal state itself?
Or the best path to reach the goal state?
Best First Search is a systematic search algorithm, where systematicity is achieved by moving forward iteratively on the basis of finding the best heuristic value among the adjacent nodes of the current node.
Here the evaluation function (heuristic function) estimates the best possible path to achieve the goal state. So Best First Search is concerned with the best PATH to reach the goal state.
However, there are many problems where the "path to the goal" is not a concern; the only concern is to achieve the final state by any possible path
(e.g. the 8-queens problem).
Hence, for such problems, local search algorithms are used.
Local search algorithms operate using a single current node and generally move only to neighbors of that node.
The Hill Climbing algorithm is a local search algorithm.
So when thinking about hill climbing, we need to understand the approach taken to get to the goal state, not the best path to reach it.
(As stated in AI: A Modern Approach, by S. Russell & P. Norvig)
Basically, to understand local search we need to consider the state-space landscape.
A landscape has both
(i) location (defined by the state) and
(ii) elevation (defined by the value of the heuristic function or objective function).
We need to understand the two types of elevation:
(i) If elevation corresponds to an objective function, then the aim is to find the highest peak, i.e. a global maximum.
(This type of elevation is useful in scenarios which are not concerned with cost, only with finding the best immediate moves.)
(ii) If elevation corresponds to cost, then the aim is to find the lowest valley, i.e. a global minimum.
(Here is the common ground: steepest-ascent hill climbing (always stepping to a better estimate, so no plateau problem) is similar to Best First Search. Here the elevation function is the heuristic function, which provides the best (minimum) cost. Hill climbing is only concerned with the current node: it iterates through the adjacent nodes looking for the minimum value and proceeds by expanding the best node, which is similar to Best First Search.)
Note:
The Hill Climbing algorithm does not look ahead beyond the immediate neighbors of the current state. It is only concerned with the best neighboring node to expand, and the best neighbor is decided by the evaluation functions above.
Best First Search, on the other hand, looks ahead of the immediate neighbors to find the best path to the goal (using heuristic evaluation) and then proceeds with the best one.
So the difference lies in the approaches of local search versus systematic search algorithms.
Understand the difference in approaches, and you will see why the two have different names.
Let me Wiki that for you:
"In simple hill climbing, the first closer node is chosen, whereas in steepest ascent hill climbing all successors are compared and the closest to the solution is chosen. ... Steepest ascent hill climbing is similar to best-first search, which tries all possible extensions of the current path instead of only one."
So basically, steepest-ascent HC heads for the answer quickly because at every step it compares all successors and its move is the locally most optimal one, leading towards the answer.
A* is used to find the shortest path between a start node and an end node in a graph. What algorithm is used to solve something where the target state isn't specifically known and we instead only have a criterion for the target state?
For example, can a Sudoku puzzle be solved with an A*-like algorithm? We don't know what the end state will look like (which number is where), but we do know the rules of Sudoku, a criterion for a winning state. So I have a start node and just a criterion for the end node; which algorithm should I use?
A* requires a graph, a cost function for traversal of that graph, a heuristic as to whether a node in the graph is closer to the goal than another, and a test for whether the goal is reached.
Searching a Sudoku solution space doesn't really have a traversal cost to minimize, only a global cost (the number of unsolved squares), so all traversals are of equal cost and A* doesn't really help: any cell you could assign costs one move and moves you one closer to the goal, so A* would be no better than choosing the next step at random.
It might be possible to apply an A* search based on the estimated/measured cost of applying the different techniques at each point, which would then try to find a path through the solution space with the least computational cost. In that case the graph would not just be the solution states of the puzzle, but you'd be choosing between the techniques to apply at that point - you'd know an estimate of the cost of a transition, but not where that transition 'goes', except that if successful, it's one step closer to the goal.
Yes, A* can be used when a specific goal state cannot be identified. (Pete Kirkham's answer implies this, but doesn't emphasise it much.)
When a specific goal state can't be identified, it's sometimes harder to come up with a useful heuristic lower bound on the remaining cost needed to complete a partial solution -- and the efficiency of A* depends on choosing an effective heuristic. But it doesn't mean it can't be applied. Any problem that can be solved on a computer can be solved using a breadth-first search, plus an array of flags indicating whether a state has been seen before; which is the same as A* with a heuristic lower bound that is always zero. (Of course, this is not the most efficient algorithm for solving many problems.)
You don't have to know the exact target end state. It all comes down to the heuristic function: when it returns 0, you can assume you have found (at least) one of the valid end states.
So during the A* search, instead of checking whether current_node == target_node, check whether current_node.h() returns 0. If so, it is as close as possible to (or overlapping) the goal/end state.
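A sketch of that idea; the state representation and the successors callback are up to you, h counts remaining constraint violations (e.g. empty or conflicting Sudoku cells), states must be hashable, and each move is assumed to cost 1:

    import heapq
    import itertools

    def a_star_by_criteria(start, h, successors):
        # A* where the goal isn't a known state: a state is accepted as a
        # goal as soon as its heuristic (violation count) drops to 0.
        tick = itertools.count()              # tie-breaker for the heap
        heap = [(h(start), next(tick), 0, start)]
        seen = {start}
        while heap:
            f, _, g, state = heapq.heappop(heap)
            if h(state) == 0:                 # goal criteria met
                return state
            for nxt in successors(state):
                if nxt not in seen:
                    seen.add(nxt)
                    heapq.heappush(heap, (g + 1 + h(nxt), next(tick),
                                          g + 1, nxt))
        return None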