Method is slower after memoization - ruby

I'm working on the following algorithm problem on Leetcode:
Given a binary tree, determine if it is height-balanced. For this
problem, a height-balanced binary tree is defined as a binary tree in
which the depth of the two subtrees of every node never differ by more
than 1.
Here's the link: https://leetcode.com/problems/balanced-binary-tree/
I've written two solutions--both of which pass Leetcode's tests. The first one should be more expensive than the second one because it repeatedly scans each subtree's nodes to calculate the heights for each node. The second solution should collect all the node heights into a cache so that the calculation doesn't have to repeat. However, it turns out my second solution is actually slower than my first one, and I have no idea why. Any input?
Solution 1:
def is_balanced(root) # runs at 139 ms
  return true if root.nil?
  left_height = height(root.left)
  right_height = height(root.right)
  if !equal?(left_height, right_height)
    return false
  else
    is_balanced(root.left) && is_balanced(root.right)
  end
end

def height(node)
  return 0 if node.nil?
  [height(node.left), height(node.right)].max + 1
end

def equal?(num1, num2)
  return true if (num1 - num2).abs <= 1
  false
end
Solution 2: (edited to include cobaltsoda's suggestion)
@memo = {}
def is_balanced(root)
  @memo = {}
  return true if root.nil?
  left_height = height(root.left)
  right_height = height(root.right)
  if !equal?(left_height, right_height)
    return false
  else
    is_balanced(root.left) && is_balanced(root.right)
  end
end

def height(node)
  @memo[node] = 0 if node.nil?
  @memo[node] ||= [height(node.left), height(node.right)].max + 1
end

def equal?(num1, num2)
  return true if (num1 - num2).abs <= 1
  false
end
Because the second solution uses memoization, shouldn't it cut out the redundancies and lower the time expenditure of the program? Not sure what I'm missing.
Also, with memoization this goes from O(NlogN) algorithm to O(N) correct?
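For comparison, the repeated height scans can be avoided entirely without a cache: compute the height and check balance in a single post-order pass, using -1 as an "unbalanced" sentinel so every node is visited exactly once. This is a sketch, not code from the original post; the `Node` struct and the method names `balanced?`/`check_height` are stand-ins:

```ruby
# Minimal stand-in for Leetcode's TreeNode (value omitted; only shape matters here)
Node = Struct.new(:left, :right)

def balanced?(root)
  check_height(root) != -1
end

# Post-order traversal: returns the subtree height, or -1 as soon as any
# subtree is unbalanced, so heights are never recomputed.
def check_height(node)
  return 0 if node.nil?
  left = check_height(node.left)
  return -1 if left == -1
  right = check_height(node.right)
  return -1 if right == -1
  return -1 if (left - right).abs > 1
  [left, right].max + 1
end
```

Because each node contributes one constant-time step, this runs in O(N) with no memo hash at all.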

Related

algo class question: compare n no. of sequence showing their comparison which leads to the particular sequence

Comparing 3 numbers ends up in 6 possible results, and similarly 4 numbers gives 24: the permutations of the number of inputs.
The task is to compare n numbers, showing the comparisons which lead to each particular sequence.
For example your input is a,b,c
If a < b
    If b < c
        abc
    Else
        If a < c
            acb
        Else
            cab
Else
    If b < c
        If a < c
            bac
        Else
            bca
    Else
        cba
The task is to print all the comparisons which took place to lead to that sequence for n numbers, and confirm that there is no duplication.
Here is Python code that outputs valid Python code assigning the sorted values to answer.
The sorting algorithm here is mergesort, which is not going to give the smallest possible decision tree, but it will be pretty good.
#! /usr/bin/env python
import sys

class Tree:
    def __init__(self, node_type, value1=None, value2=None, value3=None):
        self.node_type = node_type
        self.value1 = value1
        self.value2 = value2
        self.value3 = value3

    def output(self, indent='', is_continue=False):
        rows = []
        if self.node_type == 'answer':
            rows.append("{}answer = [{}]".format(indent, ', '.join(self.value1)))
        elif self.node_type == 'comparison':
            if is_continue:
                rows.append('{}elif {} < {}:'.format(indent, self.value1[0], self.value1[1]))
            else:
                rows.append('{}if {} < {}:'.format(indent, self.value1[0], self.value1[1]))
            rows = rows + self.value2.output(indent + '    ')
            if self.value3.node_type == 'answer':
                rows.append('{}else:'.format(indent))
                rows = rows + self.value3.output(indent + '    ')
            else:
                rows = rows + self.value3.output(indent, True)
        return rows

# This call captures a state in the merging.
def _merge_tree(chains, first=None, second=None, output=None):
    if first is None and second is None and output is None:
        if len(chains) < 2:
            return Tree('answer', chains[0])
        else:
            return _merge_tree(chains[2:], chains[0], chains[1], [])
    elif first is None:
        return _merge_tree(chains + [output])
    elif len(first) == 0:
        return _merge_tree(chains, second, None, output)
    elif second is None:
        return _merge_tree(chains + [output + first])
    elif len(second) < len(first):
        return _merge_tree(chains, second, first, output)
    else:
        subtree1 = _merge_tree(chains, first[1:], second, output + [first[0]])
        subtree2 = _merge_tree(chains, first, second[1:], output + [second[0]])
        return Tree('comparison', [first[0], second[0]], subtree1, subtree2)

def merge_tree(variables):
    # Turn the list into a list of 1 element merges.
    return _merge_tree([[x] for x in variables])

# This captures the moment when you're about to compare the next
# variable with the already sorted variable at position 'position'.
def insertion_tree(variables, prev_sorted=None, current_variable=None, position=None):
    if prev_sorted is None:
        prev_sorted = []
    if current_variable is None:
        if len(variables) == 0:
            return Tree('answer', prev_sorted)
        else:
            return insertion_tree(variables[1:], prev_sorted, variables[0], len(prev_sorted))
    elif position < 1:
        return insertion_tree(variables, [current_variable] + prev_sorted)
    else:
        position = position - 1
        subtree1 = insertion_tree(variables, prev_sorted, current_variable, position)
        subtree2 = insertion_tree(variables, prev_sorted[0:position] + [current_variable] + prev_sorted[position:])
        return Tree('comparison', [current_variable, prev_sorted[position]], subtree1, subtree2)

args = ['a', 'b', 'c']
if 1 < len(sys.argv):
    args = sys.argv[1:]

for line in merge_tree(args).output():
    print(line)
For giggles and grins, you can get insertion sort by switching the final call to merge_tree to insertion_tree.
In principle you could repeat the exercise for any sort algorithm, but it gets really tricky, really fast. (For quicksort you have to do continuation passing. For heapsort and bubble sort you have to insert fancy logic to only consider parts of the decision tree that you could actually arrive at. It is a fun exercise if you want to engage in it.)

Speeding up solution to algorithm

Working on the following algorithm:
Given an array of non-negative integers, you are initially positioned
at the first index of the array.
Each element in the array represents your maximum jump length at that
position.
Determine if you are able to reach the last index.
For example:
A = [2,3,1,1,4], return true.
A = [3,2,1,0,4], return false.
Below is my solution. It tries every single potential step, and then memoizes accordingly. So if the first element is three, the code takes three steps, two steps, and one step, and launches three separate functions from there. I then memoized with a hash. My issue is that the code works perfectly fine, but it's timing out for very large inputs. Memoizing helped, but only a little bit. Am I memoizing correctly or is backtracking the wrong approach here?
def can_jump(nums)
  @memo = {}
  avail?(nums, 0)
end

def avail?(nums, index)
  return true if nums.nil? || nums.empty? || nums.length == 1 || index >= nums.length - 1
  current = nums[index]
  true_count = 0
  until current == 0 # try every jump from the val to 1
    @memo[index + current] ||= avail?(nums, index + current)
    true_count += 1 if @memo[index + current] == true
    current -= 1
  end
  true_count > 0
end
Here's an O(n) algorithm:

Initialize max to 0.
For each number n_i in N:
  - If i is greater than max, neither n_i nor any subsequent number can be reached, so return false.
  - If n_i + i is greater than max, set max to n_i + i.
  - If max is greater than or equal to the last index in N, return true.
Otherwise return false.
Here's a Ruby implementation:

def can_jump(nums)
  max_reach = 0
  nums.each_with_index do |num, idx|
    return false if idx > max_reach
    max_reach = [idx + num, max_reach].max
  end
  max_reach >= nums.size - 1
end

p can_jump([2,3,1,1,4]) # => true
p can_jump([3,2,1,0,4]) # => false
See it on repl.it: https://repl.it/FvlV/1
Your code is O(n^2), but you can produce the result in O(n) time and O(1) space. The idea is to work backwards through the array keeping the minimum index found so far from which you can reach index n-1.
Something like this:
def can_jump(nums)
  min_index = nums.length - 1
  for i in (nums.length - 2).downto(0)
    if nums[i] + i >= min_index
      min_index = i
    end
  end
  min_index == 0
end

print can_jump([2, 3, 1, 1, 4]), "\n"
print can_jump([3, 2, 1, 0, 4]), "\n"

Depth First Search Efficiency

I have implemented a DFS method which takes in a search value and a Binary Search Tree as arguments. The method then searches the tree for the given value, returning it when found.
When I call the method, there appears to be a duplicate node visit which may be affecting the efficiency. Is this visit in fact a duplication or just a reflection of the nature of DFS?
For instance, if my binary tree looks like the one below and I'm looking for the value 3, the search is popping the 5 node off the stack, then the 2, then the 1, then retrieving the 2 node from the stack again before finding the 3. Is this stack retrieval of the 2 duplicative? Is it a proper DFS?
        5
       / \
      /   \
     2     7
    / \   / \
   1   3 6   8
        \     \
         4     9
Binary Tree
class Node
  attr_accessor :value, :left, :right

  def initialize(value)
    @value = value
  end
end

def build_tree(array, *indices)
  array.sort.uniq!
  mid = (array.length - 1) / 2
  first_element = indices[0]
  last_element = indices[1]
  if !first_element.nil? && first_element > last_element
    return nil
  end
  root = Node.new(array[mid])
  root.left = build_tree(array[0..mid-1], 0, mid-1)
  root.right = build_tree(array[mid+1..-1], mid+1, array.length-1)
  return root
end
Depth First Search Method
def depth_first_search(search_value, tree)
  stack = [tree]
  visited = [tree]
  while !stack.empty?
    current = stack.last
    visited << current
    puts current
    p current
    if current.value == search_value
      puts current
      exit
    elsif !current.left.nil? && !visited.include?(current.left)
      if current.left.value == search_value
        puts current.left
        exit
      else
        visited << current.left
        stack << current.left
      end
    elsif !current.right.nil? && !visited.include?(current.right)
      if current.right.value == search_value
        puts current.right
        exit
      else
        visited << current.right
        stack << current.right
      end
    else
      stack.pop
    end
  end
  puts "nil"
end
Method Call
binary_tree = build_tree([1,2,3,4,5,6,7,8,9])
depth_first_search(3, binary_tree)
Now, since it is DFS, it works that way. DFS in a binary tree works exactly like a pre-order traversal of the tree. So, for the example tree in the figure, DFS would visit:
5-2-1-(2)-3-4-(3)-(2)-(5)-7-6-(7)-8-9
Here, the values in brackets are the "second visits" you are describing, but the algorithm does not actually visit those nodes again; it only pops them off the stack on the way back up. So, it is alright.
Also, I'd recommend using binary search (not DFS) if the input tree is a BST.
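Since build_tree produces a binary search tree, a lookup can discard half of the remaining tree at each comparison instead of touching every node. Here is a sketch of that idea; the `Node` struct and the method name `bst_search` are stand-ins for the asker's class, not code from the post:

```ruby
# Stand-in for the asker's Node class (value, left, right accessors)
Node = Struct.new(:value, :left, :right)

# Binary search on a BST: O(height) steps instead of visiting every node.
def bst_search(node, search_value)
  return nil if node.nil?
  return node if node.value == search_value
  if search_value < node.value
    bst_search(node.left, search_value)  # everything larger is pruned
  else
    bst_search(node.right, search_value) # everything smaller is pruned
  end
end
```

On the example tree, finding 3 takes three comparisons (5, 2, 3) rather than a full traversal.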

Going from a top-down to a bottom-up algorithm (DP)

I have created this algorithm to compute the longest palindrome subsequence (a word that is the same when mirrored, e.g. "aba", "racecar"), and have done so using a recursive top-down approach. I know that it's possible to turn these into iterative algorithms working from the bottom up, but I am having trouble seeing how this could be accomplished.
My code
def palindrome(string, r = {})
  return 1 if string.length == 1
  return 2 if string[0] == string[1] and string.length == 2
  return r[string] if r.include?(string)
  n = string.length
  if string[0] == string[n-1]
    r[string] = palindrome(string[1..n-2], r) + 2
  else
    r[string] = [palindrome(string[0..n-2], r), palindrome(string[1..n-1], r)].max
  end
end
When you use negative numbers when fetching an item from an array, the array counts the elements from the end, so you don't have to keep the n variable:

if string[0] == string[-1]                    # <= same as string[n-1]
  r[string] = palindrome(string[1..-2], r) + 2 # <= same as string[1..n-2]
I don't know how performant this is, but here is a top-down suggestion:
def palindrome(string)
  chars = string.chars
  chars.length.downto(1).find do |length|
    chars.each_cons(length).any? { |cons| cons == cons.reverse }
  end
end
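To answer the bottom-up part of the question directly: the memoized recursion can be replaced by a table filled in order of increasing substring length, so every subproblem is computed before it is needed. This is my own translation of the recursion above into iterative form, not code from either answer:

```ruby
def palindrome_bottom_up(string)
  n = string.length
  return 0 if n.zero?
  # dp[i][j]: length of the longest palindromic subsequence of string[i..j]
  dp = Array.new(n) { Array.new(n, 0) }
  (0...n).each { |i| dp[i][i] = 1 }  # every single character is a palindrome
  (2..n).each do |len|               # solve shorter spans before longer ones
    (0..n - len).each do |i|
      j = i + len - 1
      dp[i][j] = if string[i] == string[j]
                   # dp[i+1][i] is 0 by initialization, so len == 2 works too
                   dp[i + 1][j - 1] + 2
                 else
                   [dp[i + 1][j], dp[i][j - 1]].max
                 end
    end
  end
  dp[0][n - 1]
end
```

The two branches mirror the recursive cases exactly: matching end characters extend the inner span by 2, otherwise take the better of dropping either end. Runtime is O(n^2) time and space, with no recursion depth to worry about.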

Is this a faithful rendition of the selection sort algorithm?

I've been reading an elementary book about sort algorithms. To get my head around it, I tried to write a simple program to implement the algorithm.
EDIT: I had omitted an important line that was in a previous version - see comment below.
This is my selection sort:
class SelectionSorter
  attr_reader :sorted

  def initialize(list)
    @unsorted = list
    @sorted = []
  end

  def select(list)
    smallest = list.first
    index = 0
    list.each_with_index do |e, i|
      if e < smallest
        smallest = e
        index = i
      end
    end
    @sorted << list.delete_at(index)
  end

  def sort
    @unsorted.length.times { self.select(@unsorted) }
  end
end
Here's a test:
require 'minitest/autorun'
require_relative 'sort'

class SelectionSortTest < MiniTest::Test
  describe SelectionSorter do
    it 'sorts a randomly generated list' do
      list = (1..20).map { rand(100 - 1) + 1 }
      sorted_list = list.sort
      sorter = SelectionSorter.new(list)
      sorter.sort
      sorter.sorted.must_equal sorted_list
    end
  end
end
I'd love comments, particularly around whether this is actually a faithful implementation of the algorithm.
EDIT 2:
OK - here's my in-place code. This is the sort of thing I wanted to avoid, as it feels nastily procedural, with nested loops. However, I think it's a faithful implementation.
class SelectionSorter
  def sort(list)
    sorted_boundary = (0..(list.length) - 1)
    sorted_boundary.each do |sorted_index|
      smallest_index = sorted_index
      smallest_value = list[smallest_index]
      comparison = sorted_index + 1
      (comparison..(list.length - 1)).each do |next_index|
        if list[next_index] < smallest_value
          smallest_index = next_index
          smallest_value = list[smallest_index]
        end
      end
      unless sorted_index == smallest_index
        list[smallest_index] = list[sorted_index]
        list[sorted_index] = smallest_value
      end
    end
    list
  end
end
I'd love to do this in a more recursive fashion, with less stored state, and without nested loops. Any suggestions?
Try adding smallest = e immediately after index = i, so you are keeping a running tally of the smallest value found so far.
I'd also note that selection sort is usually implemented in-place, i.e., scan locations 1-N of your list for the min and then swap it with the first element, then repeat the process with elements 2-N, 3-N, etc. There's no need for a second array or the expense of removing elements from the middle of an array.
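The in-place scheme described above can also be written recursively, which addresses the asker's wish for less nested-loop bookkeeping: sort position start, then recurse on the rest of the array. This is a sketch; the method name `selection_sort` is mine:

```ruby
# In-place selection sort, expressed recursively: find the minimum of
# list[start..-1], swap it into position start, then recurse on start + 1.
def selection_sort(list, start = 0)
  return list if start >= list.length - 1
  min_index = start
  ((start + 1)...list.length).each do |i|
    min_index = i if list[i] < list[min_index]
  end
  list[start], list[min_index] = list[min_index], list[start]
  selection_sort(list, start + 1)
end
```

This keeps no state beyond the start index, needs no second array, and never deletes from the middle of the array; the inner scan remains, since selection sort is defined by it.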
I don't know the selection sort algorithm, but I can tell that your code does not do sorting. In this part:
list.each_with_index do |e, i|
  if e < smallest
    index = i
  end
end

you end up having as index the index of the last element of @unsorted that is smaller than the first element of @unsorted (if there is no such element, then index is 0). Then, by:

@sorted << list.delete_at(index)

you take that element from @unsorted, and push it into @sorted. And you repeat this process. That does not give you a sort.