I have a linked list A->B->C->D.
My head pointer is on A.
I want to delete node C with only the head pointer.
I don't want any code, just an explanation.
Delete node C and make B->next point to D.
When traversing the list, you probably want to store the previous node in a variable, so that when you hit C you can set the previous node's (which is B) next to D.
You walk the list, keeping track of both the node you are currently looking at and the previous one. When you find the node you want to delete, you change the link in the previous node to point to the next node.
You need to special-case deleting the head node as well.
Two steps:
1. Update the next link of the previous node to point to the node after the removed one; in your case, set B's next link to D.
2. Dispose of the removed node.
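That said, if a later reader does want to see it spelled out, here is a minimal sketch in Python (the Node class and function name are hypothetical, not from the question):

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def delete(head, target):
        """Delete the first node whose value equals target; return the (possibly new) head."""
        if head is None:
            return None
        if head.value == target:       # special case: deleting the head itself
            return head.next
        prev, curr = head, head.next
        while curr is not None:
            if curr.value == target:
                prev.next = curr.next  # unlink curr; B now points straight to D
                return head
            prev, curr = curr, curr.next
        return head                    # target not found; list unchanged

    # A -> B -> C -> D
    head = Node('A', Node('B', Node('C', Node('D'))))
    head = delete(head, 'C')           # now A -> B -> D

In a garbage-collected language the unlinked node is reclaimed automatically; in C or C++ you would free it explicitly after unlinking.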
There is one thing about the A* pathfinding algorithm that I do not understand. In the pseudocode, if the current node's (the node being analysed) g cost is less than the adjacent node's g cost, then you recalculate the adjacent node's g, h and f costs and reassign its parent node.
Why do you do this?
Why do you need to re-evaluate the adjacent node's costs and parent if its g cost is greater than the current node's g cost? In what instance would you need to do this?
Edit: I am watching this video:
https://www.youtube.com/watch?v=C0qCR18gXdU
At 8:19 he says: when you come across blocks (nodes) that have already been analysed, the question is, should we change the properties of the block?
First, a tip: you can add the time you want as a bookmark to get a video that starts right where you want. In this case, https://www.youtube.com/watch?v=C0qCR18gXdU#t=08m19s is the bookmarked time link.
Now the quick answer to your question. We fill in a node the first time we find a path to it, but the first path we find might not be the cheapest one. We want the cheapest one, and if we find a cheaper one later, we take it.
Here is a visual metaphor. Imagine a running path with a fence next to it. The spot we want is on the other side of the fence. Actually draw this out, it will help.
The first path that our algorithm finds to it is run down the path, jump over the fence. The second path that we find is run part way down the path, go through the gate, then get to that spot. We don't want to throw away the idea of using the gate just because we already figured out that we could get there by jumping the fence!
Now in your picture put costs of moving from one spot to another that are reasonable for moving along a running path, an open field, through a gate, and jumping a fence. Run the algorithm by hand and you'll see that you figure out first that you can jump the fence, and then later that you really wanted to use the gate.
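To make the cheaper-path case concrete, here is a hedged Python sketch of the main loop; the neighbors, cost and h callables are assumptions of mine, not from the video, and nodes are assumed comparable so the heap can break ties:

    import heapq

    def a_star(start, goal, neighbors, cost, h):
        """Sketch of A*; the key part is the 'found a cheaper path' branch."""
        g = {start: 0}
        parent = {start: None}
        open_heap = [(h(start), start)]
        while open_heap:
            _, current = heapq.heappop(open_heap)
            if current == goal:
                return g[goal], parent
            for nxt in neighbors(current):
                tentative = g[current] + cost(current, nxt)
                # The step the video describes: we already reached nxt (over the
                # fence), but the path through current (the gate) is cheaper,
                # so we adopt it, reassign the parent, and re-queue the node.
                if nxt not in g or tentative < g[nxt]:
                    g[nxt] = tentative
                    parent[nxt] = current
                    heapq.heappush(open_heap, (tentative + h(nxt), nxt))
        return None, parent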
The guy in the video is wrong here, I believe, because he says to change the parent node; but your successors are based on your parent node, and if you change the parent node then you can no longer recover a valid path, because the path is recovered simply by moving from parent to child.
Instead of changing the parent, use the pathmax function. It says that if a parent node A has a child node whose f cost is lower than A's, then set the cost of the child equal to the cost of the parent: f(child) = max(f(child), f(parent)).
Pathmax is used to ensure monotonicity:
Monotonicity: every child node has a cost greater than or equal to the cost of its parent node, i.e. f never decreases along a path.
A* has a property: if the cost is monotonically increasing, then the first (sub)path that A* finds to a node is always part of the final path. More precisely: under monotonicity, each node is first reached through the best path to it.
Do you see why?
Suppose you have a graph: (A,B), (B,C), (A,E), (E,D), where every tuple means the two nodes are connected. Suppose the cost is monotonically increasing and your algorithm chooses (A,B), (B,C). At this point you know your algorithm has chosen the best path so far, and every other path that can reach this node must have a higher cost. But if the cost is not monotonically increasing, it can be the case that (A,E) costs more than your current path while (E,D) costs zero, so the better path is over there.
This algorithm relies on its heuristic function: if the heuristic underestimates, the error gets corrected by the accumulated cost, but if it over-estimates then it can explore extra nodes, and I leave it to you to work out why this happens.
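For concreteness, the pathmax adjustment reduces to a one-liner (a sketch of mine; f is the usual g + h):

    def pathmax(parent_f, child_f):
        # Never let a child's f drop below its parent's f; this restores
        # monotonicity when the heuristic is inconsistent.
        return max(parent_f, child_f)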
Why do you need to re-evaluate an adjacent node that's already in the open list if it has a lower g cost than the path through the current node?
You don't; that would just be extra work.
Corollary: if you later reach the same node from some node p at the same (or higher) cost, simply remove that entry from the queue and do not expand it.
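One common way to implement "do not expand it" (a sketch of mine, not from the answer) is lazy deletion: leave stale entries in the priority queue and discard them as they are popped:

    import heapq

    def pop_unexpanded(open_heap, closed):
        """Pop until we find a node not yet expanded; None if the heap runs dry."""
        while open_heap:
            f, node = heapq.heappop(open_heap)
            if node not in closed:   # first (and, with a consistent h, cheapest) pop
                closed.add(node)
                return node
            # else: a stale duplicate entry with equal or worse cost; skip it
        return None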
In a breadth-first search of a directed graph (cycles possible), when a node is dequeued, all of its children that have not yet been visited are enqueued, and the process continues until the queue is empty.
Once, I implemented it the other way around: all of a node's children are enqueued, and the visitation status is checked when a node is dequeued instead. If a node being dequeued has been visited before, it is discarded and the process continues with the next node in the queue.
But the result was wrong. Wikipedia also says:
depth-first search ...... The non-recursive implementation is similar to breadth-first search but differs from it in two ways: it uses a stack instead of a queue, and it delays checking whether a vertex has been discovered until the vertex is popped from the stack rather than making this check before pushing the vertex.
However, I cannot wrap my head around what exactly the difference is. Why does depth-first search check when popping items off, while breadth-first search must check before enqueuing?
DFS
Suppose you have a graph:
A---B---E
|   |
|   |
C---D
And you search DFS from A.
You would expect it to visit the nodes in the order A, B, D, C, E when using a depth-first search (assuming a certain ordering of the children).
However, if you mark nodes as visited before placing them on the stack, then you will visit A, B, D, E, C, because C was already marked as visited when we examined A.
In some applications where you just want to visit all connected nodes this is a perfectly valid thing to do, but it is not technically a depth first search.
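Here is a sketch of the two variants in Python (the adjacency dict mirrors the drawing above); only the first produces true DFS order:

    def dfs_check_on_pop(graph, start):
        """True DFS: mark a node only when it is popped."""
        visited, order, stack = set(), [], [start]
        while stack:
            node = stack.pop()
            if node in visited:
                continue
            visited.add(node)
            order.append(node)
            for child in reversed(graph[node]):  # reversed so children pop in listed order
                stack.append(child)
        return order

    def dfs_mark_on_push(graph, start):
        """Marks when pushed: visits every reachable node, but not in DFS order."""
        visited, order, stack = {start}, [], [start]
        while stack:
            node = stack.pop()
            order.append(node)
            for child in reversed(graph[node]):
                if child not in visited:
                    visited.add(child)
                    stack.append(child)
        return order

    graph = {'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'D'],
             'D': ['B', 'C'], 'E': ['B']}
    print(dfs_check_on_pop(graph, 'A'))   # ['A', 'B', 'D', 'C', 'E']
    print(dfs_mark_on_push(graph, 'A'))   # ['A', 'B', 'D', 'E', 'C']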
BFS
In breadth-first search you can mark the nodes as visited either before or after pushing them to the queue. However, it is more efficient to check before, as you do not end up with lots of duplicate nodes in the queue.
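For comparison, a BFS sketch that marks before enqueueing, so no node ever enters the queue twice:

    from collections import deque

    def bfs(graph, start):
        """BFS; marking on enqueue keeps the queue free of duplicates."""
        visited, order, queue = {start}, [], deque([start])
        while queue:
            node = queue.popleft()
            order.append(node)
            for child in graph[node]:
                if child not in visited:
                    visited.add(child)
                    queue.append(child)
        return order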
I don't understand why your BFS code failed in this case, perhaps if you post the code it will become clearer?
DFS checks whether a node has been visited when popping it, because a node that was discovered early may still need to be visited later, at a "deeper" level. For example:
A--B--C--E
|     |
-------
If we start at A, then B and C will be put on the stack; assume we put them on the stack so that B is processed first. When B is processed, we want to go down to C and finally to E, which would not happen if we had marked C as visited when we discovered it from A. Once we proceed from B, we find the yet-unvisited C and put it on the stack a second time. After we have finished processing E, all remaining C entries on the stack need to be ignored, which marking C as visited takes care of for us.
As @PeterdeRivaz said, for BFS it's not a matter of correctness but of efficiency whether we check nodes for having been visited when enqueuing or when dequeuing.
I've managed to implement A* and I get the correct path.
The problem is how to reconstruct the path from start to end, given that every node (from end to start) has a parent link but the first one does not, so my character doesn't know where to go first.
What I'm doing is returning the closed list and walking it from index 0 to x until I reach the end. This usually works well, but I know there has got to be another way.
Also, what is the best way to check neighbouring nodes?
I've made each node create a rectangle and test whether it intersects the others; that's how I know which nodes are connected. I also use this technique with the player to know when a node has been reached.
Thanks!!
You have your target node (you can simply cache it once it is found).
Go up the tree (using the parent field) until you find a node without a parent; this node is your root. The path you found by going up the links is the shortest path, in reversed order.
I once addressed a similar question, regarding BFS instead of A*
EDIT: A simple way to reverse the path back to its original order is to push the nodes onto a stack as you go from target to source; when you reach the source, start popping elements off the stack.
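A sketch of that walk in Python (assuming each node carries the parent attribute described in the question):

    def reconstruct_path(goal):
        """Follow parent links from the goal back to the root, then reverse."""
        path = []
        node = goal
        while node is not None:   # the root/start is the one node with no parent
            path.append(node)
            node = node.parent
        path.reverse()            # now runs start -> ... -> goal
        return path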
From what I understand:
Add the current node to the closed list.
Find the nodes adjacent to the current node. If they are walkable and not on the closed list, add them to the open list with the current node as their parent and calculate their F, G and H values. If an adjacent node is already on the open list, check whether going to it through the current node results in a lower G value; if yes, make the current node its parent.
Find the node on the open list with the lowest F value and make that the current node.
Repeat until you end up at your destination; then go through the parents of the destination node and you will come back to your start node. That is the best path.
So, this makes sense to my brain, but when I actually try it out on a diagram, I think I'm not understanding it correctly.
(From the picture below) Go down from the starting green tile to the one with an F value of 60. That tile is on the open list and has a lower F value than the bottom-right one with 74. Why is the 74 one picked instead of the 60?
In my opinion, you should take a look at Amit's A* Pages. They really are great at explaining how the algorithm works and how to make it work.
As for your case, the diagram shows the G score from the first node on the open list. When you look at the website, the whole diagram is built first for the evaluation of the first node, and the author shows that the first best node is the one to the right. Moving forward, the G score is then based on the score of the current node plus the movement cost of the next one, which is not shown on the diagram.
It is said on the website, though:
And the last square, to the immediate left of the current square, is checked to see if the G score is any lower if you go through the current square to get there. No dice.
Its G score would actually be 24 (14, the current cost, plus 10, the horizontal movement cost), just like the square below it, if I remember correctly.
The reason that you do not move to the square with an F value of 60 is that it is in the 'closed list', whereas the squares with F values of 88 and 74 are in the 'open list' and can be examined for the next move.
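To make the rule concrete, a minimal sketch of the selection step (the .f attribute and the two lists are my assumptions, not from the tutorial):

    def pick_next(open_list, closed_list):
        """Pick the open node with the lowest F; closed nodes are never candidates."""
        candidates = [n for n in open_list if n not in closed_list]
        return min(candidates, key=lambda n: n.f) if candidates else None

In practice the open list is usually a priority queue, so the minimum pops out directly instead of being searched for.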
I am reading through the binary tree delete-node algorithm used in the book Data Structures and Algorithms: Annotated Reference with Examples.
On page 34, case 4 (deleting a node which has both left and right subtrees), the algorithm described in the book doesn't seem to work. I may be wrong, so could someone help me see what I am missing?
//Case 4
get largestValue from nodeToRemove.Left
FindParent(largestValue).Right <- 0
nodeToRemove.Value<-largestValue.Value
How does the line FindParent(largestValue).Right <- 0 delete the largest value from the subtree?
When deleting a node with two children, you can choose either its in-order successor node or its in-order predecessor node. In this case the algorithm finds the largest value in the left sub-tree (meaning the right-most child of the left sub-tree), so it is finding the node's in-order predecessor.
Once you find the replacement node, you don't actually delete the node that is to be removed. Instead you take the value from the replacement node and store that value in the node you want to delete. Then you delete the replacement node. In doing so you preserve the binary search-tree property, since you can be sure that the node you selected has a value greater than the values of all the remaining children in the original node's left sub-tree, and lower than the values of all the children in the original node's right sub-tree.
EDIT
After reading your question a little more, I think I have found the problem.
Typically what you have in addition to the delete function is a replace function that replaces the node in question. I think you need to change this line of code:
FindParent(largestValue).Right <- 0
to:
FindParent(largestValue).Right <- largestValue.Left
If the largestValue node doesn't have a left child, you simply get null or 0. If it does have a left child, that child becomes a replacement for the largestValue node. So you're right; the code doesn't take into account the scenario that the largestValue node might have a left child.
Another EDIT
Since you've only posted a snippet, I'm not sure what the context of the code is. But the snippet as posted does seem to have the problem you suggest (replacing the wrong node). Usually, there are three cases, but I notice that the comment in your snippet says //Case 4 (so maybe there is some other context).
Earlier, I alluded to the fact that delete usually comes with a replace. So if you find the largestValue node, you delete it according to the two simple cases (node with no children, and node with one child). So if you're looking at pseudo-code to delete a node with two children, this is what you'll do:
get largestValue from nodeToRemove.Left
nodeToRemove.Value <- largestValue.Value
//now replace largestValue with largestValue.Left
if largestValue = largestValue.Parent.Left then //is largestValue a left child?
    largestValue.Parent.Left <- largestValue.Left
else //largestValue must be a right child
    largestValue.Parent.Right <- largestValue.Left
if largestValue.Left is not null then
    largestValue.Left.Parent <- largestValue.Parent
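Here is a runnable Python version of that replacement step, for concreteness (the Node class with a parent pointer is my assumption, mirroring the pseudo-code's fields):

    class Node:
        def __init__(self, value, left=None, right=None, parent=None):
            self.value, self.left, self.right, self.parent = value, left, right, parent

    def delete_with_two_children(node_to_remove):
        """Case 4: node_to_remove has both a left and a right subtree."""
        # largestValue: the right-most node of the left subtree.
        largest = node_to_remove.left
        while largest.right is not None:
            largest = largest.right
        node_to_remove.value = largest.value  # copy the value; don't unlink node_to_remove
        # Splice out 'largest', reattaching its left child (it cannot have a right child).
        if largest is largest.parent.left:    # is largest a left child?
            largest.parent.left = largest.left
        else:                                 # largest must be a right child
            largest.parent.right = largest.left
        if largest.left is not None:
            largest.left.parent = largest.parent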
I find it strange that a Data Structures And Algorithms book would leave out this part, so I am inclined to think that the book has further split up the deletion into a few more cases (since there are three standard cases) to make it easier to understand.
To prove that the above code works, consider the following tree:
  8
 / \
7   9
Let's say that you want to delete 8. You try to find largestValue from nodeToRemove.Left. This gives you 7, since the left sub-tree has only that one node.
Then you do:
nodeToRemove.Value <- largestValue.Value
Which means:
8.value <- 7.Value
or
8.Value <- 7
So now your tree looks like this:
  7
 / \
7   9
You need to get rid of the replacement node and so you're going to replace largestValue with largestValue.Left (which is null). So first you find out what kind of child 7 is:
if largestValue = largestValue.Parent.Left then
Which means:
if 7 = 7.Parent.Left then
or:
if 7 = 8.Left then
Since 7 is 8's left child, we need to replace 8.Left with 7.Left (largestValue.Parent.Left <- largestValue.Left). Since 7 has no children, 7.Left is null, so largestValue.Parent.Left gets assigned null (which effectively removes the left child). This means that you end up with the following tree:
7
 \
  9
The idea is to simply take the value from the largest node on the left-hand side and move it to the node that is being deleted, i.e., don't delete the node at all, just replace its contents. Then you prune out the node whose value you moved into the "deleted" node. This maintains the tree ordering, with every node's value larger than all of its left children and smaller than all of its right children.
I think you may need to clarify what doesn't work.
I will try and explain the concept of deletion in a binary tree in case this helps.
Let's assume that you wish to delete a node in the tree that has two child nodes.
In the tree below, let's say that you want to delete node b:
     a
   /   \
  b     c
 / \   / \
d   e f   g
When we delete a node we need to reattach its dependent nodes.
I.e., when we delete b we need to reattach nodes d and e.
We know that the left nodes are less than the right nodes in value, and that the parent nodes are between the left and right nodes in value. In this case d < b and b < e. This is part of the definition of a binary search tree.
What is slightly less obvious is that e < a. So this means that we can replace b with e. Now that we have reattached e, we need to reattach d.
As stated before, d < e, so we can attach d as the left node of e.
The deletion is now complete.
(By the way, the process of moving a node up the tree and rearranging its dependent nodes in this fashion is known as promoting a node. You can also promote a node without deleting other nodes.)
     a
   /   \
  e     c
 /     / \
d     f   g
Note that there is another perfectly legitimate outcome of deleting node b.
If we had chosen to promote node d instead of node e, the tree would look like this:
     a
   /   \
  d     c
   \   / \
    e f   g
If I understand the pseudo-code, it works in the general case, but fails in the "one node in the left subtree" case. Nice catch.
It effectively replaces nodeToRemove with largestValue from its left subtree (and also nulls out the old largestValue node).
Note that in a BST, the nodes in the left subtree of nodeToRemove will all be smaller than nodeToRemove, and the nodes in the right subtree will all be larger. So if you take the largest node in the left subtree, the invariant is preserved.
If the left subtree consists of a single node, then FindParent(largestValue) is nodeToRemove itself, so the assignment nulls nodeToRemove.Right and destroys the right subtree instead. Lame :(
As Vivin points out, it also fails to reattach the left child of largestValue.
It may make more sense when you look at Wikipedia's take on that part of the algorithm:
Deleting a node with two children: Call the node to be deleted "N". Do not delete N. Instead, choose either its in-order successor node or its in-order predecessor node, "R". Replace the value of N with the value of R, then delete R. (Note: R itself has up to one child.)
Note that the given algorithm chooses the in-order predecessor node.
Edit: what appears to be missing is the case where R (to use Wikipedia's terminology) has a child. A recursive delete might work better.
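Along those lines, a recursive sketch (Python; the node shape and names are mine, not Wikipedia's). Deleting R falls out naturally, because the recursion lands in the zero-or-one-child case:

    def delete(root, value):
        """Delete 'value' from the BST rooted at 'root'; returns the new root."""
        if root is None:
            return None
        if value < root.value:
            root.left = delete(root.left, value)
        elif value > root.value:
            root.right = delete(root.right, value)
        else:
            if root.left is None:    # zero or one child: splice the node out
                return root.right
            if root.right is None:
                return root.left
            # Two children: copy the in-order predecessor R, then delete R.
            r = root.left
            while r.right is not None:
                r = r.right
            root.value = r.value
            root.left = delete(root.left, r.value)
        return root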