Update a list so that its negative elements become positive.
For example, initial:
List(1, -2, -3)
final:
List(1, 2, 3)
Scala solution:
def fpUpdateList(el: List[Int]): List[Int] = el.map(e => if (e < 0) e - e * 2 else e)
Not sure if this is a question or what, but here's a more concise way:
def fpUpdateList(el: List[Int]) = el.map(Math.abs)
I was playing around with some implementations of Quicksort in Ruby. After implementing some of the in-place algorithms, I felt that using Ruby's partition method, even though it would not provide an in-place solution, would give a very readable one.
My first solution was this, which, other than always using the last element of the array as the pivot, seemed pretty nice.
def quick_sort3(ary)
  return ary if ary.size <= 1
  left, right = ary.partition { |v| v < ary.last }
  pivot_value = right.pop
  quick_sort3(left) + [pivot_value] + quick_sort3(right)
end
After some searching I found this answer which had a very similar solution with a better choice of the initial pivot, reproduced here using the same variable names and block passed to partition.
def quick_sort6(*ary)
  return ary if ary.empty?
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  return *quick_sort6(*left), pivot_value, *quick_sort6(*right)
end
I felt I could improve my solution by using the same method to select a random pivot.
def quick_sort4(ary)
  return ary if ary.size <= 1
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  quick_sort4(left) + [pivot_value] + quick_sort4(right)
end
The downside of this version, quick_sort4, versus the linked answer's quick_sort6, is that quick_sort4 changes the input array, while quick_sort6 does not. I am assuming this is why Jorg chose to receive a splat argument rather than an array?
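To make the difference concrete, here is a small check (assuming the quick_sort4 and quick_sort6 definitions above have already been loaded):

data = [3, 1, 2]
quick_sort4(data)  # => [1, 2, 3]
data               # delete_at removed the pivot from the caller's array, e.g. [3, 2]

data = [3, 1, 2]
quick_sort6(*data) # => [1, 2, 3]
data               # => [3, 1, 2], untouched: the splat collected the elements
                   #    into a brand-new array inside the method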
My fix for this was to simply duplicate the passed in array and then perform the delete_at on the copied array rather than the original array.
def quick_sort5(ary_in)
  return ary_in if ary_in.size <= 1
  ary = ary_in.dup
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  quick_sort5(left) + [pivot_value] + quick_sort5(right)
end
My question: are there any significant differences between quick_sort6, which uses the splats, and quick_sort5, which uses dup? I am assuming the splats were used to avoid changing the input array, but is there something else I am missing?
In terms of performance, quick_sort6 is your best bet. Using some random data:
require 'benchmark'

def quick_sort3(ary)
  return ary if ary.size <= 1
  left, right = ary.partition { |v| v < ary.last }
  pivot_value = right.pop
  quick_sort3(left) + [pivot_value] + quick_sort3(right)
end

def quick_sort6(*ary)
  return ary if ary.empty?
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  return *quick_sort6(*left), pivot_value, *quick_sort6(*right)
end

def quick_sort4(ary)
  return ary if ary.size <= 1
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  quick_sort4(left) + [pivot_value] + quick_sort4(right)
end

def quick_sort5(ary_in)
  return ary_in if ary_in.size <= 1
  ary = ary_in.dup
  pivot_value = ary.delete_at(rand(ary.size))
  left, right = ary.partition { |v| v < pivot_value }
  quick_sort5(left) + [pivot_value] + quick_sort5(right)
end

random_arrays = Array.new(5000) do
  Array.new(500) { rand(1...500) }.uniq
end

Benchmark.bm do |benchmark|
  benchmark.report("quick_sort3") do
    random_arrays.each do |ra|
      quick_sort3(ra.dup)
    end
  end
  benchmark.report("quick_sort6") do
    random_arrays.each do |ra|
      quick_sort6(ra.dup)
    end
  end
  benchmark.report("quick_sort4") do
    random_arrays.each do |ra|
      quick_sort4(ra.dup)
    end
  end
  benchmark.report("quick_sort5") do
    random_arrays.each do |ra|
      quick_sort5(ra.dup)
    end
  end
end
This gives the following result:
                  user     system      total        real
quick_sort3   1.389173   0.019380   1.408553  (  1.411771)
quick_sort6   0.004399   0.000022   0.004421  (  0.004487)
quick_sort4   1.208003   0.002573   1.210576  (  1.214131)
quick_sort5   1.458327   0.000867   1.459194  (  1.459882)
The problem with splat style in this case is that it would create an awkward API.
Most of the time, the consumer code will have an array of things that need to be sorted:
stuff = [1, 2, 3]
sort(stuff)
The splat style makes the consumers do this instead:
stuff = [1, 2, 3]
sort(*stuff)
The two calls might end up doing the same thing, but as a user I am sorting an array, therefore I expect to pass the array to the method, not pass each array element individually to the method.
Another label for this phenomenon is abstraction leakage - you are allowing the implementation of the sort method to define its interface. In Ruby this is usually frowned upon.
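As a side note, the splat is also what keeps quick_sort6 from mutating its caller's data: a splat parameter collects the arguments into a brand-new array, so destructive calls inside the method never touch the caller's object. A minimal sketch (the method name is made up for this example):

def drop_first(*items)
  items.delete_at(0)  # mutates the array the splat built, not the caller's array
  items
end

stuff = [1, 2, 3]
drop_first(*stuff)  # => [2, 3]
stuff               # => [1, 2, 3]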
I'm solving LeetCode #139 (Word Break) and for some reason I'm getting Time Limit Exceeded. Am I using memoization improperly?
class Solution:
    memo = set()

    def wordBreak(self, s: str, wordDict: List[str]) -> bool:
        self.memo = set(wordDict)
        return self.word_break(s)

    def word_break(self, s):
        if s in self.memo:
            return True
        for i in range(1, len(s)):
            head = s[:i]
            tail = s[i:]
            head_possible = self.word_break(head)
            tail_possible = self.word_break(tail)
            if head_possible:
                self.memo.add(head)
            if tail_possible:
                self.memo.add(tail)
            if head_possible and tail_possible:
                return True
        return False
Thank you!
Your solution passes without TLE if we just enable lru_cache(). The memo set only ever records substrings that are known to be breakable, so every failing substring gets re-explored again and again; lru_cache also remembers the False results:
from functools import lru_cache  # lru_cache needs to be imported

class Solution:
    memo = set()

    def wordBreak(self, s: str, wordDict: List[str]) -> bool:
        self.memo = set(wordDict)
        return self.word_break(s)

    @lru_cache(None)
    def word_break(self, s):
        if s in self.memo:
            return True
        for i in range(1, len(s)):
            head = s[:i]
            tail = s[i:]
            head_possible = self.word_break(head)
            tail_possible = self.word_break(tail)
            if head_possible:
                self.memo.add(head)
            if tail_possible:
                self.memo.add(tail)
            if head_possible and tail_possible:
                return True
        return False
This'd also pass without Time Limit Exceeded:
class Solution:
    def wordBreak(self, s, words):
        # dp[i] is True when the prefix s[0..i] can be segmented into words
        dp = [False] * len(s)
        for i in range(len(s)):
            for word in words:
                k = i - len(word)
                # word would occupy s[k+1..i]; it must match, and the prefix
                # before it must itself be breakable (or empty, when k == -1)
                if word == s[k + 1:i + 1] and (dp[k] or k == -1):
                    dp[i] = True
        return dp[-1]
I'm currently going over Robert Sedgewick's Algorithms book. The book's priority queue implementation relies on Comparable. While working through the Top K Frequent Elements LeetCode problem, I noticed an error in my Ruby implementation.
def top_k_frequent(nums, k)
  ans = []
  h = Hash.new(0)
  nums.each do |num|
    h[num] += 1
  end
  heap = Heap.new
  h.each do |k, v|
    heap.insert({k => v})
  end
  k.times do
    a = heap.del_max
    ans.push(a.keys[0])
  end
  ans
end
class Heap
  def initialize
    @n = 0
    @pq = []
  end

  def insert(v)
    @pq[@n += 1] = v
    swim(@n)
  end

  def swim(k)
    while k > 1 && less((k / 2).floor, k)
      swap((k / 2).floor, k)
      k = k / 2
    end
  end

  def swap(i, j)
    temp = @pq[i]
    @pq[i] = @pq[j]
    @pq[j] = temp
  end

  def less(i, j)
    @pq[i].values[0] < @pq[j].values[0]
  end

  def del_max
    max = @pq[1]
    swap(1, @n)
    @n -= 1
    @pq[@n + 1] = nil
    sink(1)
    max
  end

  def sink(k)
    while 2 * k <= @n
      j = 2 * k
      if !@pq[j + 1].nil?
        j += 1 if j > 1 && @pq[j].values[0] < @pq[j + 1].values[0]
      end
      break if !less(k, j)
      swap(k, j)
      k = j
    end
  end
end
The Heap class above follows the book's Java priority queue implementation.
Ruby's comparison operator is <=>, which returns one of -1, 0, 1, or nil (nil means the two objects could not be compared).
In order to compare two objects, both need to implement a def <=>(other) method. Object does not provide a meaningful default, so the operator is not useful on objects whose class does not implement it (or inherit it from a class that does). Numbers and Strings, for example, do have an implementation; Hashes do not.
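For example:

1 <=> 2                    # => -1
"apple" <=> "banana"       # => -1
{ 1 => 3 } <=> { 2 => 2 }  # => nil (no ordering is defined for Hash)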
I think in your case, the issue is slightly different.
When you call queue.insert(my_hash), you're expecting the algorithm to break up my_hash and build from that. Instead, the algorithm takes the hash as a single, atomic object and inserts that.
If you add something like:
class Tuple
  attr_accessor :key, :value

  def initialize(key, value)
    @key = key
    @value = value
  end

  def <=>(other)
    return nil unless other.is_a?(Tuple)
    value <=> other.value
  end
end
then this will allow you to do something like:
hsh = { 1 => 3, 2 => 2, 3 => 1}
tuples = hsh.map { |k, v| Tuple.new(k, v) }
tuples.each { |tuple| my_heap.insert(tuple) }
you will have all of your data in the heap.
When you retrieve an item, it will be a tuple, so you can just call item.key and item.value to access the data.
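To tie this back to the original code, here is a sketch of how top_k_frequent and the heap's less method might be adapted (assuming the Heap and Tuple classes above; note the inline .values[0] comparison in sink would need the same treatment):

def top_k_frequent(nums, k)
  counts = Hash.new(0)
  nums.each { |num| counts[num] += 1 }

  heap = Heap.new
  counts.each { |num, count| heap.insert(Tuple.new(num, count)) }

  Array.new(k) { heap.del_max.key }
end

class Heap
  # compare whole elements via Tuple#<=> instead of peeking into hashes
  def less(i, j)
    (@pq[i] <=> @pq[j]) == -1
  end
end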
I am trying to implement the CYK algorithm in Ruby, following the pseudocode from Wikipedia. My implementation fails to produce the correct parse table. In the method given below, grammar is an instance of my own grammar class. Here is the code:
# checks whether a grammar accepts given string
# assumes input grammar to be in CNF
def self.parse(grammar, string)
  n = string.length
  r = grammar.nonterminals.size
  # create n x n x r matrix
  tbl = Array.new(n) { |_| Array.new(n) { |_| Array.new(r, false) } }
  (0...n).each { |s|
    grammar.rules.each { |rule|
      # check if rule is unit production: A -> b
      next unless rule.rhs.size == 1
      unit_terminal = rule.rhs[0]
      if unit_terminal.value == string[s]
        v = grammar.nonterminals.index(rule.lhs)
        tbl[0][s][v] = true
      end
    }
  }
  (1...n).each { |l|
    (0...n - l + 1).each { |s|
      (0..l - 1).each { |p|
        # enumerate over A -> B C rules, where A, B and C are
        # indices in array of NTs
        grammar.rules.each { |rule|
          next unless rule.rhs.size == 2
          a = grammar.nonterminals.index(rule.lhs)
          b = grammar.nonterminals.index(rule.rhs[0])
          c = grammar.nonterminals.index(rule.rhs[1])
          if tbl[p][s][b] and tbl[l - p][s + p][c]
            tbl[l][s][a] = true
          end
        }
      }
    }
  }
  v = grammar.nonterminals.index(grammar.start_sym)
  return tbl[n - 1][0][v]
end
I tested it with this simple example:
grammar:
A -> B C
B -> 'x'
C -> 'y'
string: 'xy'
The parse table tbl was the following:
[[[false, true, false], [false, false, true]],
[[false, false, false], [false, false, false]]]
The problem definitely lies in the second part of the algorithm - substrings of length larger than 1. The first layer (tbl[0]) contains correct values.
Help much appreciated.
The problem lies in the translation from the 1-based arrays in the pseudocode to the 0-based arrays in your code.
It becomes obvious when you look at the first indices in the condition tbl[p][s][b] and tbl[l-p][s+p][c] in the very first run of the loop. The pseudocode checks tbl[1] and tbl[1] and your code checks tbl[0] and tbl[1].
I think you have to make the 0-based correction when you access the array and not in the ranges for l and p. Otherwise the calculations with the indices are wrong.
This should work:
(2..n).each do |l|
  (0...n - l + 1).each do |s|
    (1..l - 1).each do |p|
      grammar.rules.each do |rule|
        next unless rule.rhs.size == 2
        a = grammar.nonterminals.index(rule.lhs)
        b = grammar.nonterminals.index(rule.rhs[0])
        c = grammar.nonterminals.index(rule.rhs[1])
        if tbl[p - 1][s][b] and tbl[l - p - 1][s + p][c]
          tbl[l - 1][s][a] = true
        end
      end
    end
  end
end
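With that change, the 'xy' example from the question works end to end. The stand-ins below (Rule, Term, Grammar, and a Parser class holding the parse method) are only illustrative names, since the original grammar classes aren't shown:

Rule    = Struct.new(:lhs, :rhs)
Term    = Struct.new(:value)
Grammar = Struct.new(:nonterminals, :rules, :start_sym)

grammar = Grammar.new(
  %w[A B C],
  [
    Rule.new('A', %w[B C]),
    Rule.new('B', [Term.new('x')]),
    Rule.new('C', [Term.new('y')])
  ],
  'A'
)

Parser.parse(grammar, 'xy')  # => true; tbl[1][0] now has A marked as well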
I am attempting to memoize my implementation of a Pascal's triangle generator, as a Ruby learning experiment. I have the following working code:
module PascalMemo
  @cache = {}

  def PascalMemo::get(r, c)
    if @cache[[r, c]].nil? then
      if c == 0 || c == r then
        @cache[[r, c]] = 1
      else
        @cache[[r, c]] = PascalMemo::get(r - 1, c) + PascalMemo::get(r - 1, c - 1)
      end
    end
    @cache[[r, c]]
  end
end

def pascal_memo(r, c)
  PascalMemo::get(r, c)
end
Can this be made more concise? Specifically, can I create a globally-scoped function with a local closure more simply than this?
def pascal_memo
  cache = [[1]]
  get = lambda { |r, c|
    (cache[r] or cache[r] = [1] + [nil] * (r - 1) + [1])[c] or
      cache[r][c] = get.(r - 1, c) + get.(r - 1, c - 1)
  }
end
p = pascal_memo
p.( 10, 7 ) #=> 120
Please note that the above construct does achieve memoization, it is not just a simple recursive method.
Can this be made more concise?
It seems pretty clear, IMO, and using a module is usually a good instinct.
can I create a globally-scoped function with a local closure more simply than this?
Another option would be a recursive lambda:
memo = {}
pascal_memo = lambda do |r, c|
  if memo[[r, c]].nil?
    if c == 0 || c == r
      memo[[r, c]] = 1
    else
      memo[[r, c]] = pascal_memo[r - 1, c] + pascal_memo[r - 1, c - 1]
    end
  end
  memo[[r, c]]
end
pascal_memo[10, 2]
# => 45
I have found a way to accomplish what I want that is slightly more satisfactory, since it produces a function rather than a lambda:
class << self
  cache = {}
  define_method :pascal_memo do |r, c|
    cache[[r, c]] or
      (if c == 0 or c == r then cache[[r, c]] = 1 else nil end) or
      cache[[r, c]] = pascal_memo(r - 1, c) + pascal_memo(r - 1, c - 1)
  end
end
This opens up the metaclass/singleton class for the main object, then uses define_method to add a new method that closes over the cache variable, which then falls out of scope for everything except the pascal_memo method.
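After running the snippet above at the top level, pascal_memo behaves like a regular method and the cache persists between calls, for example:

pascal_memo(10, 7)  # => 120
pascal_memo(10, 2)  # => 45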