'pop' method in Ruby

I've created a 'pop' method to take the last node off of a linked list. However, the issue I'm running into is that it's not removing the node; it's simply returning the data of the node that should be removed.
I should preface this by saying I'm using test-driven development, and the test asserts assert_equal "blop", list.pop ("blop" is the value of the last node). It's great that my method returns that, but it still doesn't remove the node.
def pop
  @count -= 1
  return_string = ""
  current_node = @head
  until current_node.next_node == nil
    current_node = current_node.next_node
  end
  return_string << current_node.data + " "
  return_string.strip
  current_node.next_node = Node.new(data)
end
My question is: how do I return the value of the node being removed while also removing that node from the linked list?

until current_node.next_node == nil
  current_node = current_node.next_node
end
When that loop finishes, current_node is pointing to the last node (the node whose next_node is nil).
That's the node you should remove, but to remove it you need to point the previous node's next_node to nil.
However, at that point you don't have a reference to the previous node.
You could keep another variable that tracks the previous node, so that once you exit the loop you can detach the last node and return its data.
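For illustration, here is a minimal sketch of that approach, assuming the same Node class with data/next_node and the @head/@count instance variables from your snippet:
def pop
  return nil if @head.nil?

  # Only one node: empty the list and return its data.
  if @head.next_node.nil?
    value = @head.data
    @head = nil
    @count -= 1
    return value
  end

  # Walk the list, keeping a reference to the node before current_node.
  previous_node = @head
  current_node = @head.next_node
  until current_node.next_node.nil?
    previous_node = current_node
    current_node = current_node.next_node
  end

  # Detach the last node and return its data.
  previous_node.next_node = nil
  @count -= 1
  current_node.data
end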
You can check this related question (not Ruby specific) for ideas on the algorithm.
Linked List implementation for a stack
As a side note, one of the answers on that one implements this as making pop remove the first node of the list (and push add the node to the beginning), which makes it easier (and faster, since you don't go over the whole list each time).
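For comparison, here is a rough sketch of that head-based variant. It assumes a hypothetical two-argument Node.new(data, next_node) constructor, unlike the one-argument Node.new in the question:
# Push and pop both work on @head, so pop is O(1) and no list walk is needed.
def push(data)
  @head = Node.new(data, @head)
  @count += 1
end

def pop
  return nil if @head.nil?
  node = @head
  @head = node.next_node
  @count -= 1
  node.data
end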

Related

I tried solving "reverse a linked list" my own way; please spot the problem with my linked-list reversal

Hi everyone. Can you tell me why this code is not working for reversing a linked list? I tried to solve it my own way, but I don't see what I'm doing wrong.
def reverselist(self):
    temp=self.start
    cur=None
    prev=None
    nxt=None
    while(temp!=None):
        nxt=temp.next
        cur=temp
        cur.next=prev
        prev=cur
        temp=temp.next
Look at these assignments:
cur=temp
cur.next=prev
prev=cur
temp=temp.next
The one that really mutates the list is cur.next=prev: there you make the next pointer point backwards. But then, after that change, you do temp=temp.next, so you are reading the modified .next value here.
If that does not clarify it, then realise that temp == cur (the first assignment), so also cur.next == temp.next. If you assign a new value to cur.next, then temp.next will be that new value.
The solution is to perform temp=temp.next before you change the next pointer:
cur=temp
temp=temp.next
cur.next=prev
prev=cur
Or, which I find a bit clearer, but really is the same thing:
cur=temp
temp=cur.next
cur.next=prev
prev=cur
Here you clearly see the pattern: what you have at the right side of the assignment, becomes the left side in the next assignment. This shows clearly that you first read the current value of a variable or property, and then write a new value to it.
NB: you never read nxt, so you don't need that variable.
Assign all at once
In Python you can make multiple assignments in one go, which saves you a variable (prev) because all expressions at the right side are evaluated before any of the assignments happen:
temp.next, cur, temp = cur, temp, temp.next
Try this below:
def reverselist(self):
    temp=self.start
    cur=None
    prev=None
    nxt=None
    while(temp!=None):
        nxt=temp.next
        temp.next = prev
        prev = temp
        temp = nxt
    self.start = prev

Extending (Monkey Patching) a Binary Search for Array Class and Syntactic Sugar

I've been studying a few searching algorithms, and my last problem comes down to binary searching. I watched a few YouTube videos to understand the concept and then tried to solve the problem, but I keep getting an endless-loop error. I've looked through Stack Overflow, Reddit, and wherever Google would lead me, but I can't quite find a solution that fits my way of coding. Also, please excuse the term 'monkey patching'; it's been brought to my attention that the technical term is 'extending', so the fault lies with my instructors for teaching it to us as 'monkey patching'.
Here's my code:
class Array
  def my_bsearch(target)
    return nil if self.empty?
    middle_idx = self.length/2
    left = self.take(middle_idx)
    right = self.drop(middle_idx + 1)
    return middle_idx if self[middle_idx] == target
    until self[middle_idx] == target || self.nil? == nil
      if self[middle_idx] < target
        right.my_bsearch(target)
      elsif self[middle_idx] > target
        left.my_bsearch(target)
      end
    end
  end
end
I have a solution, but I don't want to just memorize it, and I'm having trouble understanding it. I'm trying to translate it, learn from it, and work what I'm missing into my own code.
class Array
  def my_bsearch(target)
    return nil if size == 0
    mid = size/2
    case self[mid] <=> target
    when 0
      return mid
    when 1
      return self.take(mid).my_bsearch(target)
    else
      search_res = self.drop(mid+1).my_bsearch(target)
      search_res.nil? ? nil : mid + 1 + search_res
    end
  end
end
I think I understand case/when despite not being used to it. I've tried following the solution with the debugger, but I'm hung up on what's going on in the ELSE branch. The syntactic sugar, while obviously making this more concise than my logic, isn't straightforward to someone at my level of Ruby literacy. So, yeah, my ignorance is most of the problem, I guess.
Is there someone a little more literate, and patient, who can help me break this down into something I can understand a bit better, so I can learn from it?
First, take and drop have sufficiently similar interfaces that you don't actually want your + 1 for drop. It will disregard one element in the array if you do.
Next, self.nil? will always be false (and never nil) for instances of this class. In fact, .nil? is a method exactly to avoid having to ever compare against nil with ==.
You want self.empty?. Furthermore, with the exception of setters, in Ruby messages are sent to self by default. In other words, the only time self. is a useful prefix is when the message ends in = and operates as an lvalue, as in self.instance_var = 'a constant', since without the self., the tokens instance_var = would be interpreted as a local variable rather than an instance variable setting. That's not the case here, so empty? will suffice just as well as self.empty?
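A quick irb-style illustration of the take/drop boundary being described here, on a throwaway array:
a = [1, 2, 3, 4, 5]
mid = a.length / 2    # => 2
a.take(mid)           # => [1, 2]      everything left of the middle element
a.drop(mid + 1)       # => [4, 5]      skips the middle element as well
a.drop(mid)           # => [3, 4, 5]   keeps the middle element
[].empty?             # => true        preferred over comparing against nil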
So I figured it out, and I decided to answer my own post in the hope of helping someone else who runs into this issue.
So, if I have an Array and the target is the middle_element, then it will report middle_element_idx. That's fine. What if the target is less than middle_element? It recursively searches the left-side of the original Array. When it finds it, it reports the left_side_idx. There's no problem with that because elements in an array are sequentially counted left to right. So, it starts at 0 and goes up.
But what if the target is on the right side of the middle element?
Well, searching the right side is easy; it's essentially the same logic as searching the left, done recursively, and it will return a target_idx if the target is found on that right side. However, that's the target's index as found in the right-side array! So you need to take that returned target_idx and add to it 1 plus the original middle_element_idx. See below:
class Array
  def my_bsearch(target)
    return nil if self.empty?
    middle_idx = self.length/2
    left = self.take(middle_idx)
    right = self.drop(middle_idx + 1)
    if self[middle_idx] == target
      return middle_idx
    elsif self[middle_idx] > target
      return left.my_bsearch(target)
    else
      searched_right_side = right.my_bsearch(target)
      return nil if searched_right_side.nil?
      return searched_right_side + middle_idx + 1
    end
  end
end
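To make the index bookkeeping concrete, here is a hypothetical trace of the version above on a small array:
# [10, 20, 30, 40, 50].my_bsearch(50)
#   middle_idx = 2, self[2] = 30 < 50, so search right = drop(3) = [40, 50]
#     [40, 50].my_bsearch(50): middle_idx = 1, self[1] = 50, returns 1
#   1 is the index inside the right-hand sub-array; adding middle_idx + 1
#   (the number of elements to its left in the original array) gives 1 + 2 + 1 = 4
[10, 20, 30, 40, 50].my_bsearch(50)  # => 4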
Notice how many more lines this solution is? The spaceship operator, used in conjunction with case/when and a ternary expression, reduces the number of lines significantly.
Based on suggestions/feedback from Tim, I updated it to:
class Array
  def my_bsearch(target)
    return nil if empty?
    middle_idx = self.length/2
    left = self.take(middle_idx)
    right = self.drop(middle_idx)
    if self[middle_idx] == target
      return middle_idx
    elsif self[middle_idx] > target
      return left.my_bsearch(target)
    else
      searched_right_side = right.my_bsearch(target)
      return nil if searched_right_side.nil?
      return searched_right_side + middle_idx
    end
  end
end

O(n) time non-recursive procedure to traverse a binary tree

I'm reading a book called "Introduction to algorithms". I think many of you know it. I just bumped into a question which seems rather difficult:
Write an O(n)-time nonrecursive procedure that, given an n-node binary tree,
prints out the key of each node. Use no more than constant extra space outside of the tree itself and do not modify the tree, even temporarily, during the procedure.
I saw that there is another question like this: How to traverse a binary tree in O(n) time without extra memory. The main difference is that I can't modify the tree. I was thinking of using some kind of visited flag, but I haven't come up with a correct solution yet. Maybe it's something obvious I don't see. How would you devise an algorithm that solves this problem? Even some pointers to the answer would be appreciated.
If the tree is linked in both directions, you can do this:
assert root.parent is null
now, old = root, null
while now != null:
    if old == now.parent:
        # first visit, coming down from the parent: print, then go left
        # if possible, otherwise right, otherwise back up
        print now.label
        now, old = (now.left or now.right or now.parent), now
    else if old == now.left:
        # finished the left subtree: go right if possible, otherwise back up
        now, old = (now.right or now.parent), now
    else:
        # old == now.right: finished this subtree, go back up
        now, old = now.parent, now
This prints in root, left, right order, but you can get any order you like.
I don't think you can do it if the tree is only linked in one direction. You could have a look at Deforestation.
Here is the complete code (in Ruby).
As an example, I have reproduced the same "n-node binary tree" as in the "Introduction to algorithms" book.
class Node
  attr_accessor :key, :parent, :left, :right

  def initialize(key, parent)
    @key = key
    @parent = parent
  end
end

class Tree
  def initialize(root)
    @root = root
  end

  def print_with_constant_space
    current, old = @root, nil
    while current do
      # Going UP
      if old && old.parent == current
        # Came back from the left child: go right if it exists,
        # otherwise continue up.
        # Came back from the right child: continue up.
        next_node = (old == current.left && current.right) ? current.right : current.parent
        current, old = next_node, current
      # Going DOWN
      else
        puts current.key
        # Go left if it exists,
        # otherwise go right if it exists,
        # otherwise go up
        next_node = (current.left || current.right || current.parent)
        current, old = next_node, current
      end
    end
  end
end
root = Node.new(0, nil)
root.left = (node_1 = Node.new(1, root))
node_1.left = (node_2 = Node.new(2, node_1))
node_2.right = (node_3 = Node.new(3, node_2))
node_3.left = (node_4 = Node.new(4, node_3))
node_1.right = (node_5 = Node.new(5, node_1))
node_5.left = (node_6 = Node.new(6, node_5))
node_6.right = (node_7 = Node.new(7, node_6))
node_7.right = (node_8 = Node.new(8, node_7))
node_8.left = (node_9 = Node.new(9, node_8))
node_9.right = (node_10 = Node.new(10, node_9))
node_8.right = (node_11 = Node.new(11, node_8))
node_5.right = (node_12 = Node.new(12, node_5))
node_12.left = (node_13 = Node.new(13, node_12))
tree = Tree.new(root)
tree.print_with_constant_space
I hope it helps...
You will have to do an in-order walk of the tree. In the same book, there is a hint in the first set of exercises on the chapter dealing with Binary Search Trees. To quote:
There is an easy solution that uses a stack as an auxiliary data structure and a more complicated but elegant solution that uses no stack but assumes that two pointers can be tested for equality.
You can find an implementation here: http://tech.technoflirt.com/2011/03/04/non-recursive-tree-traversal-in-on-using-constant-space/
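For reference, the "easy solution that uses a stack" from that hint is only a few lines. Here is a minimal Ruby sketch, assuming a node object with key, left and right like the Node class above:
# Iterative in-order walk with an explicit stack: O(n) time, but O(h) extra
# space for the stack, so it does not meet the constant-space requirement.
def print_in_order(root)
  stack = []
  current = root
  while current || !stack.empty?
    # Slide down to the leftmost unvisited node, remembering the path taken.
    while current
      stack.push(current)
      current = current.left
    end
    current = stack.pop
    puts current.key
    current = current.right
  end
end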

Right Threading a Binary Tree

I'm having a hell of a time trying to figure this one out. Everywhere I look, I seem to run only into explanations of how to traverse the tree non-recursively (the part I actually understand). Can anyone out there hammer in how exactly I can go through the tree initially and find the actual predecessor/successor nodes, so I can flag them in the node class? I need to be able to create a simple binary search tree, go through it, and reroute the null links to the predecessor/successor. I've had some luck with a solution somewhat like the following:
thread(node n, node p) {
    if (n.left != null)
        thread(n.left, n);
    if (n.right != null) {
        thread(n.right, p);
    }
    n.right = p;
}
From your description, I'll assume you have a node with a structure looking something like:
Node {
    left
    right
}
... and that you have a binary tree of these set up using left and right, and that you want to re-assign values to left and right such that it creates a doubly-linked list from a depth-first traversal of the tree.
The root (no pun intended) problem with what you've got so far is that the "node p" (short for previous?) that is passed during the traversal needs to be independent of where in the tree you currently are - it always needs to contain the previously visited node. To do that, each time thread is run it needs to reference the same "previous" variable. I've done some Python-ish pseudo code with one C-ism - if you're not familiar, '&' means "reference to" (or "ref" in C#), and '*' means "dereference and give me the object it is pointing to".
Node lastVisited
thread(root, &lastVisited)

function thread(node, lastVisitedRef)
    if (node.left)
        thread(node.left, lastVisitedRef)
    if (node.right)
        thread(node.right, lastVisitedRef)
    // visit this node, reassigning left and right
    if (*lastVisitedRef)
        node.right = *lastVisitedRef
        (*lastVisitedRef).left = node
    // update the shared lastVisited reference
    *lastVisitedRef = node
If you were going to implement this in C, you'd actually need a double pointer to hold the reference, but the idea is the same - you need to persist the location of the "last visited node" during the entire traversal.
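To make the "persistent last-visited reference" idea concrete in a language without explicit pointers, here is a rough Ruby sketch of the same post-order relinking; it uses a one-element array as the shared reference, and the Node struct is hypothetical rather than taken from the question:
Node = Struct.new(:value, :left, :right)

def thread(node, last_visited_ref)
  return if node.nil?
  thread(node.left, last_visited_ref)
  thread(node.right, last_visited_ref)
  # Visit this node, linking it to whatever was visited just before it.
  previous = last_visited_ref[0]
  if previous
    node.right = previous
    previous.left = node
  end
  last_visited_ref[0] = node   # every recursive call sees and updates this slot
end

# Usage:
#   last_visited = [nil]
#   thread(root, last_visited)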

How can I modify the breadth-first search algorithm to also include the solution path?

I have the following pseudo-code in my book for a breadth-first search:
function breadth_first_search:
begin
    open := [Start];
    closed := [];
    while open != [] do
    begin
        remove leftmost state from open, call it X;
        if X is a goal then return SUCCESS
        else begin
            generate children of X;
            put X on closed;
            discard children of X if already on open or closed;
            put remaining children on right end of open;
        end
    end
    return FAIL;
end
I've implemented a similar algorithm myself following these pseudo-code instructions. My question is, what is the simplest way to modify it so it maintains the solution path?
Simply knowing I can reach a solution isn't nearly as useful as having a list of transitions to get to the solution.
Set Parent[childNode] = currentNode as you enqueue each node (at the same point where you set Visited[node] = 1).
Then walk up the Parent array, starting at the goal node, and append each node you see to the path. Parent[root] is nil, so that is where the walk stops.
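Here is a minimal Ruby sketch of that idea, assuming the state space is given as a hypothetical Hash mapping each state to its children:
def breadth_first_search(graph, start, goal)
  parent = { start => nil }            # doubles as the "already seen" set
  open_list = [start]
  until open_list.empty?
    x = open_list.shift                # remove leftmost state from open
    return build_path(parent, goal) if x == goal
    (graph[x] || []).each do |child|   # generate children of X
      next if parent.key?(child)       # discard if already on open or closed
      parent[child] = x                # remember how we reached it
      open_list << child               # put on right end of open
    end
  end
  nil                                  # FAIL
end

def build_path(parent, goal)
  path = []
  node = goal
  until node.nil?
    path.unshift(node)
    node = parent[node]
  end
  path
end
The returned path is ordered from the start state to the goal.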
Is there any possibility for you to change the tree structure? If so, you might want to add a parent pointer to each node/leaf, so that when you find the solution you can follow the parent pointers up to the root.
