I'm having a little trouble with closures and I'd like to know what
the equivalent code for the canonical make-adder procedure would be in
Ruby.
In Scheme it would be like:
(define (make-adder n)
  (lambda (x) (+ x n)))
It's actually very close...
def make_adder n
lambda { |x| x + n }
end
t = make_adder 100
t.call 1  #=> 101
In Ruby 1.9 you can use the -> lambda literal:
def make_adder n
->(x) { x + n }
end
One difference is that while Scheme has only one kind of procedure, Ruby has four callable kinds: blocks, Procs, lambdas, and Methods. Most of the time they behave similarly enough to a standard lambda, but the differences (in argument checking and in how return behaves) are worth understanding in depth.
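For example, lambdas and Procs differ in how strictly they check arguments and in what return does. A minimal sketch (assuming Ruby 1.9 or later; the names are just for illustration):
add = lambda { |x, y| x + y }
add.call(1, 2)        #=> 3
# add.call(1, 2, 3) would raise ArgumentError: lambdas check arity strictly
loose_add = Proc.new { |x, y, z| x + y }
loose_add.call(1, 2)  #=> 3 (the missing z is just nil; Procs don't check arity)
def lambda_return_demo
  l = lambda { return 10 }
  l.call   # return exits only the lambda...
  20       # ...so the method still reaches this line
end
lambda_return_demo    #=> 20 (with a Proc, the return would exit the method and give 10)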
Here's another way to do it in 1.9:
make_adder = -> n, x { n + x }
hundred_adder = make_adder.curry[100] # curry turns the two-argument lambda into nested one-argument lambdas
hundred_adder[4] # => 104
Here is a pretty nice screencast explaining blocks and closures in Ruby:
http://www.teachmetocode.com/screencasts/8
I just want some help writing a recursive factorial function in Ruby. I have the following Lisp code, and I want to do the same thing in Ruby.
(defun factorial (N)
(if (= N 1) 1
(* N (factorial (- N 1)))))
Here's how to write your code in Ruby:
def factorial(n)
return 1 if n == 1
n * factorial(n - 1)
end
factorial(5)
#=> 120
factorial(7)
#=> 5040
Edit for Stefan's comment:
To avoid a SystemStackError with large values of n, use a tail-recursive version. Ruby's tail-call optimization must also be enabled.
# before edit
factorial(100_000).to_s.size
#=> stack level too deep (SystemStackError)
To avoid the SystemStackError:
RubyVM::InstructionSequence.compile_option = {
tailcall_optimization: true,
trace_instruction: false
}
RubyVM::InstructionSequence.new(<<-CODE).eval
def factorial(n, acc = 1)
return acc if n == 1
factorial(n - 1, n * acc)
end
CODE
puts factorial(100_000).to_s.size
#=> 456574
I was trying to implement a pure functional Sieve of Eratosthenes' algorithm, based on this paper: https://www.cs.hmc.edu/~oneill/papers/Sieve-JFP.pdf
Following all the steps, I ended up with very performant Haskell code, and I tried to port it to Clojure. The problem is that the Clojure version is very slow: as slow as testing every number for divisibility. The code I ended up with was the following:
(defn- sieve2 [[x & xs] table]
(let [reinsert (fn [table prime]
; (merge-with concat table {(+ x prime) [prime]})
(update table (+ x prime) #(cons prime %)))] ;(vec %) prime)))]
(if x
(if-let [facts (get table x)]
(recur xs (reduce reinsert (dissoc table x) facts))
(lazy-seq (cons x (sieve2 xs (assoc table (* x x) [x])))))
'())))
(defn real-sieve [xs] (sieve2 xs {}))
(merge-with concat is commented out because that was the Haskell way, but it's even slower.)
With 30000 prime numbers, the Haskell version ran in 39 ms, and Clojure's in 483 ms. So I ported my Clojure version to Scala:
val primes2 = {
def sieve(xs: Stream[Int], table: Map[Int, Vector[Int]]): Stream[Int] =
xs match {
case Stream() => xs
case x #:: xs => table get x match {
case Some(facts) =>
sieve(xs, facts.foldLeft(table - x) { (table, prime) =>
val key = x + prime
val value = table.getOrElse(key, Vector()) :+ x
table + (key -> value)
})
case None => x #:: sieve(xs, table + (x*x -> Vector(x)))
}
}
sieve(Stream.from(2), Map())
}
And it ran in 39 ms. Then I downloaded VisualVM and sampled my code. Most of the time went to the HashMap key lookup and to assoc, which are the performance killers. Is there some problem with my code?
Trying out the OP's code, I indeed saw that the Scala implementation was taking around 30 ms, while Clojure's was about 500 ms. That was odd.
So I compared the results, and found that the Scala implementation was giving me lots of even numbers as primes. After some digging I learned that there were two bugs in the Scala implementation.
The first:
val value = table.getOrElse(key, Vector()) :+ x // bug
val value = table.getOrElse(key, Vector()) :+ prime // corrected
This bug caused the evaluation to finish much quicker, since lots of non-prime numbers were included in the result.
The second bug in the Scala version is the use of Int. Well before the 30000th prime is reached, an overflow occurs:
scala> 92683*92683
res1: Int = 203897 // an odd square??
So I fixed that as well, and since Scala does not have a Stream.from(Long), I had to write that too (I don't speak fluent Scala, so there might be a better way):
object Test {
def sieve(xs: Stream[Long], table: Map[Long, Vector[Long]]): Stream[Long] =
xs match {
case Stream() => xs
case x #:: xs => {
table get x match {
case Some(facts) =>
sieve(xs, facts.foldLeft(table - x) { (table, prime) =>
val key = x + prime
val value = table.getOrElse(key, Vector()) :+ prime
table + (key -> value)
})
case None => {
x #:: sieve(xs, table + (x*x -> Vector(x)))
}}}}
def fromLong(start:Long) : Stream[Long] = Stream.cons(start, fromLong(start+1))
def main(args: Array[String]) {
sieve(fromLong(2), Map())
}
}
Running this again gave me comparable elapsed times for both Scala and Clojure:
scala> Test.time {Test.sieve(Test.fromLong(2), Map()).take(30000).last}
Elapsed time: 583 msecs
res14: Long = 350377
And clojure's version:
(time (last (take 30000 (real-sieve a))))
"Elapsed time: 536.646696 msecs"
350377
And this is, in fact, the 30000th prime!
I wonder why the first approach to factorial below does not work (it loops forever) in Ruby, while the second does.
def fac (x)
if x == 0
return 1
else
return (fac (x-1) * x)
end
end
def fact( num )
return 1 if num == 0
fact(num - 1) * num
end
The difference is the space after the method name, not the way you structured your if-else.
fac (x-1) * x is parsed as fac((x-1) * x). Basically, if a method name is followed by a space (or any token that is not an opening parenthesis), Ruby assumes you're calling the method without parentheses, so it interprets the parentheses around x-1 as grouping rather than as part of the method call syntax.
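A quick way to see the two parses (a hypothetical shout method, just to make the grouping visible):
def shout(n)
  puts "called with #{n}"
  n + 1
end
x = 5
shout(x - 1) * x    # no space: prints "called with 4", result is (4 + 1) * 5 = 25
shout (x - 1) * x   # space: prints "called with 20", because Ruby passes (x - 1) * x as the single argument
In the question's fac, this is exactly what goes wrong: the recursive call receives (x-1) * x, which grows instead of shrinking toward 0, so the recursion never terminates.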
I've been learning functional programming for some time, but I haven't read anything about sorting in functional programming languages.
I know that sorting algorithms based on exchanging values are hard to express functionally, but I want to know: are there sorting algorithms suited to functional programming? What are they?
Thank you.
In a functional language you write a function that, given a list, returns a sorted list without (of course) touching the input.
Consider merge sort, for example. First you write a function that, given two already sorted lists, returns a single sorted list containing the elements of both:
def merge(a, b):
if len(a) == 0:
return b
elif len(b) == 0:
return a
elif a[0] < b[0]:
return [a[0]] + merge(a[1:], b)
else:
return [b[0]] + merge(a, b[1:])
Then you can write a function that sorts a list by merging the results of sorting the first and second halves of the list:
def mergesort(x):
if len(x) < 2:
return x
else:
h = len(x) // 2
return merge(mergesort(x[:h]), mergesort(x[h:]))
About Python syntax:
L[0] is the first element of list L
L[1:] is the list of all remaining elements
More generally, L[:n] is the list of the first n elements and L[n:] is the rest
A + B if A and B are both lists is the list obtained by concatenation
[x] is a list containing just the single element x
PS: Note that the Python code above is just to show the concept; in Python this is NOT a reasonable approach (all the slicing and concatenation keeps copying lists). I used Python because I think it's the easiest to read if you know any other common imperative language.
Here are some links to sorting algorithms implemented in Haskell:
Quicksort
Insertion sort
Merge sort
Selection sort
Counting sort
Merge sort is often the best choice for sorting linked lists. Functional languages usually operate on lists, although I have little knowledge of how most functional languages implement them; in Common Lisp they are implemented as linked lists, and I presume most functional languages do the same.
While quicksort can be written for linked lists, it will suffer from poor pivot selection because linked lists lack efficient random access. This does not matter on completely random input, but on partially or completely sorted input pivot selection becomes very important. Other sorting algorithms may also suffer from the slow random access of linked lists.
Merge sort, on the other hand, works well with linked lists, and it is possible to implement the algorithm so that it requires only a constant amount of extra space.
Here's the classic (pseudo-?)quicksort in Haskell:
sort [] = []
sort (p:xs) = sort [x | x<- xs, x <= p]
++ [p]
++ sort [x | x <- xs, x > p]
See, e.g., c2.com or LiteratePrograms.org. Merge sort isn't much harder to write and is more reliable in practice. The same can be done in Scheme with:
(define (sort xs)
(if (null? xs)
'()
(let* ((p (car xs)) (xs (cdr xs)))
(call-with-values (lambda () (partition (lambda (x) (<= x p)) xs))
(lambda (l r)
(append (sort l) (list p) (sort r)))))))
with partition from SRFI-1 (untested code). See also chapter 4 of R6RS libraries.
You certainly can implement imperative, side-effecting sort algorithms in functional languages.
I've implemented a sorting algorithm that operates in-place in a functional programming language called ATS; all mutation is handled by linear types. If you're interested in this kind of thing, drop me a line.
I'm probably raising this question from the grave, but I think a generic comparison approach might be useful to some people (in case we're not sorting numbers, for example). Here is a TypeScript version using ES6:
TL;DR
type Comparator<T> = (itemA: T, itemB: T) => number;
const mergeSort = <T>(list: T[], compare: Comparator<T>): T[] => {
if (list.length <= 1) return list;
const middleIndex = Math.floor(list.length / 2);
const listA = mergeSort(list.slice(0, middleIndex), compare);
const listB = mergeSort(list.slice(middleIndex), compare);
return merge(listA, listB, compare);
};
const merge = <T>(listA: T[], listB: T[], compare: Comparator<T>): T[] => {
if (listA.length === 0) return listB;
if (listB.length === 0) return listA;
return compare(listA[0], listB[0]) <= 0
? [listA[0], ...merge(listA.slice(1), listB, compare)]
: [listB[0], ...merge(listA, listB.slice(1), compare)];
};
Explanation
We can now pass an extra compare function to the mergeSort function. This compare function has the same signature as the comparator parameter of JavaScript's Array.prototype.sort() method.
For instance, the number comparator would be:
const compareNumbers: Comparator<number> = (numberA, numberB) =>
numberA - numberB;
...while a User object comparator could be:
const compareUsersByAge: Comparator<User> = (userA, userB) =>
userA.age - userB.age;
...or anything more complicated if need be (e.g. string comparison).
I'm learning more about Scala, and I'm having a little trouble understanding the example of anonymous functions in http://www.scala-lang.org/node/135. I've copied the entire code block below:
object CurryTest extends Application {
def filter(xs: List[Int], p: Int => Boolean): List[Int] =
if (xs.isEmpty) xs
else if (p(xs.head)) xs.head :: filter(xs.tail, p)
else filter(xs.tail, p)
def modN(n: Int)(x: Int) = ((x % n) == 0)
val nums = List(1, 2, 3, 4, 5, 6, 7, 8)
println(filter(nums, modN(2)))
println(filter(nums, modN(3)))
}
I'm confused with the application of the modN function
def modN(n: Int)(x: Int) = ((x % n) == 0)
In the example, it's called with one argument:
modN(2) and modN(3)
What does the syntax of modN(n: Int)(x: Int) mean?
Since it's called with one argument, I'm assuming they're not both arguments, but I can't really figure out how the values from nums get used by the mod function.
This is a fun thing in functional programming called currying. Basically, Moses Schönfinkel and later Haskell Curry ("Schönfinkeling" would sound weird, though...) came up with the idea that calling a function of multiple arguments, say f(x, y), is the same as the chain of calls {g(x)}(y), or g(x)(y), where g is a function that produces another function as its output.
As an example, take the function f(x: Int, y: Int) = x + y. A call to f(2, 3) would produce 5, as expected. But what happens when we curry this function, redefining it as f(x: Int)(y: Int) and calling it as f(2)(3)? The first call, f(2), produces a function that takes an integer y and adds 2 to it; therefore f(2) has type Int => Int and is equivalent to the function g(y) = 2 + y. The second call, f(2)(3), calls the newly produced function g with the argument 3, therefore evaluating to 5, as expected.
Another way to view it is by stepping through the reduction (functional programmers call this beta-reduction - it's like the functional way of stepping line by line) of the f(2)(3) call (note, the following is not really valid Scala syntax).
f(2)(3) // Same as x => {y => x + y}
|
{y => 2 + y}(3) // The x in f gets replaced by 2
|
2 + 3 // The y gets replaced by 3
|
5
So, after all this talk, f(x)(y) can be viewed as just the following lambda expression (x: Int) => {(y: Int) => x + y} - which is valid Scala.
I hope this all makes sense - I tried to give a bit of a background of why the modN(3) call makes sense :)
You are partially applying the modN function. Partial function application is one of the main features of functional languages. For more information, look into currying and point-free style.
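For comparison, the same partial application can be sketched in Ruby (the language of the first question in this thread) using Proc#curry; the names here are just for illustration:
mod_n = ->(n, x) { (x % n) == 0 }
mod2  = mod_n.curry[2]   # partially apply n = 2, leaving a one-argument predicate
mod3  = mod_n.curry[3]
(1..8).select(&mod2)     #=> [2, 4, 6, 8]
(1..8).select(&mod3)     #=> [3, 6]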
In that example, modN returns a function that mods by the particular N. It saves you from having to do this:
def mod2(x:Int): Boolean = (x%2) == 0
def mod3(x:Int): Boolean = (x%3) == 0
The two pairs of parentheses delimit where you can stop passing arguments to the method. Of course, you can also use a placeholder to achieve the same thing, even when the method has only a single argument list:
def modN(n: Int, x: Int): Boolean = (x % n) == 0
val nums = List(1, 2, 3, 4, 5)
println(nums.filter(modN(2, _)))
println(nums.filter(modN(3, _)))