Dynamic Programming on Racket - algorithm

I have an exercise where I must count the ways of climbing a ladder with n steps, under the following restriction: you can only go up by 1, 3 or 5 steps at a time.
This is the code I came up with to solve the problem:
(define (climb n)
  (cond [(< n 0) 0]
        [(<= n 2) 1]
        [(= n 3) 2]
        [else (+ (climb (- n 1)) (climb (- n 3)) (climb (- n 5)))]))
But the problem is that this version is far too slow: the naive recursion recomputes the same subproblems over and over, so getting a result for larger n takes an enormous number of calls.
Is it possible to optimize this in Racket? What would the code look like? If I could save the previous results and add them up it would work better, but I don't know how.

The trick with dynamic programming is to store values that have already been computed, so that we don't have to redo expensive recursive calls. Usually we store the values in an array, a matrix or a hash table, but for this case we can simply use a handful of parameters and tail recursion. This works, and it's pretty fast!
(define (climb n)
  (if (negative? n)
      0
      ; a through f hold six consecutive values climb(k) .. climb(k+5),
      ; seeded with climb(0) .. climb(5) = 1 1 1 2 3 5.
      (let loop ([n n] [a 1] [b 1] [c 1] [d 2] [e 3] [f 5])
        (if (zero? n)
            a
            ; climb(k+6) = climb(k+5) + climb(k+3) + climb(k+1); substituting
            ; climb(k+5) = climb(k+4) + climb(k+2) + climb(k) shows that
            ; climb(k+6) = a + b + c + d + e, so the window just slides forward.
            (loop (sub1 n) b c d e f (+ a b c d e))))))
For example:
(climb 100)
=> 20285172757012753619
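The answer above mentions hash tables as the usual store. Since most of the snippets further down this page are Clojure, here is the same memoization idea sketched in Clojure rather than Racket (my sketch, not the answerer's code; note +' for arbitrary-precision addition, since the counts outgrow 64-bit longs near n = 100):
(def climb-memo
  (memoize
   (fn [n]
     (cond
       (neg? n) 0
       (<= n 2) 1
       (= n 3)  2
       :else (+' (climb-memo (- n 1))
                 (climb-memo (- n 3))
                 (climb-memo (- n 5)))))))
(climb-memo 100)
;=> 20285172757012753619N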

Related

In clojure[script], how to return nearest elements between 2 sorted vectors

In clojure[script], how to write a function nearest that receives two sorted vectors a, b and returns for each element of a the nearest element of b?
As an example,
(nearest [1 2 3 101 102 103] [0 100 1000]) ;=> [0 0 0 100 100 100]
I would like the solution to be both idiomatic and to have good performance: O(n^2) is not acceptable!
Using a binary search or a sorted-set incurs an O(n log m) time complexity, where n is (count a) and m is (count b).
However, leveraging the fact that a and b are both sorted, the time complexity can be brought down to O(max(n, m)).
(defn nearest [a b]
  (if-let [more-b (next b)]
    (let [[x y] b
          m (/ (+ x y) 2)
          [<=m >m] (split-with #(<= % m) a)]
      (lazy-cat (repeat (count <=m) x)
                (nearest >m more-b)))
    (repeat (count a) (first b))))
(nearest [1 2 3 101 102 103 501 601] [0 100 1000])
;=> (0 0 0 100 100 100 100 1000)
Let n be (count a) and m be (count b). If a and b are both ordered, this can be done in what I believe ought to be O(n log(log m)) time, in other words very close to linear in n.
First, let's re-implement abs and a binary search (improvements here) to be host-independent (leveraging a native version, e.g. Java's, ought to be significantly faster):
(defn abs [x]
  (if (neg? x) (- 0 x) x))

(defn binary-search-nearest-index [v x]
  (if (> (count v) 1)
    (loop [lo 0, hi (dec (count v))]
      (if (== hi (inc lo))
        (min-key #(abs (- x (v %))) lo hi)
        (let [m (quot (+ hi lo) 2)]
          (case (compare (v m) x)
            1  (recur lo m)
            -1 (recur m hi)
            0  m))))
    0))
If b is sorted, a binary search in b takes log m steps. So mapping this over a gives an O(n log m) solution, which for the pragmatist is likely good enough.
(defn nearest* [a b]
  (map #(b (binary-search-nearest-index b %)) a))
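For instance, on the example from the question (my own check; note that b must be a vector here, since the code calls it as a function for indexed access):
(nearest* [1 2 3 101 102 103] [0 100 1000])
;=> (0 0 0 100 100 100)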
However, we can also use the fact that a is sorted to divide and conquer a.
(defn nearest [a b]
  (if (< (count a) 3)
    (nearest* a b)
    (let [m (quot (count a) 2)
          i (binary-search-nearest-index b (a m))]
      (lazy-cat
        (nearest (subvec a 0 m) (subvec b 0 (inc i)))
        [(b i)]
        (nearest (subvec a (inc m)) (subvec b i))))))
I believe this ought to be O(n log(log m)). We start with the median of a and find its nearest element in b in log m time. Then we recurse on each half of a with the corresponding split portions of b. If an m-proportional factor of b is split off each time, you get O(n log log m). If only a constant portion is split off, then the half of a working on that portion runs in linear time. If that continues (successive halves of a working on constant-size portions of b), the whole thing is O(n).
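As a quick sanity check (mine, not the answerer's), the divide-and-conquer version agrees with the plain mapping on the larger example above:
(= (nearest  [1 2 3 101 102 103 501 601] [0 100 1000])
   (nearest* [1 2 3 101 102 103 501 601] [0 100 1000]))
;=> true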
Inspired by @amalloy, I found this interesting idea by Chouser and wrote this solution:
(defn abs [x]
  (max x (- x)))

(defn nearest-of-ss [ss x]
  (let [greater (first (subseq ss >= x))
        smaller (first (rsubseq ss <= x))]
    (apply min-key #(abs (- % x)) (remove nil? [greater smaller]))))

;; Build the sorted set from b and map over a, so that each element of a
;; is matched with its nearest element of b, as the question asks (the
;; original snippet had the two arguments the wrong way around).
(defn nearest [a b]
  (map (partial nearest-of-ss (apply sorted-set b)) a))
Remark: it's important to create the sorted-set only once, in order to avoid a performance penalty!
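With the set built once up front, this version reproduces the behaviour asked for in the question (my own check):
(nearest [1 2 3 101 102 103] [0 100 1000])
;=> (0 0 0 100 100 100)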

How to go about composing core functions, rather than using imperative style?

I have translated this code, the snippet below, from Python to Clojure. I replaced Python's while construct with Clojure's loop-recur here. But this doesn't look idiomatic.
(loop [d 2 [n & more] (list 256)]
  (if (> n 1)
    (recur (inc d)
           (loop [x n sublist more]
             (if (= (rem x d) 0)
               (recur (/ x d) (conj sublist d))
               (conj sublist x))))
    (sort more)))
This routine gives me (3 3 31), the prime factors of 279. For 256 it gives (2 2 2 2 2 2 2 2), that is, 2^8.
Moreover, it performs poorly for large values, say 987654123987546 instead of 279, whereas Python's counterpart works like a charm.
How do I start composing core functions, rather than translating imperative code as-is? And specifically, how can this bit be improved?
Thanks.
[Edited]
Here is the Python code I referred to above:
def prime_factors(n):
    factors = []
    d = 2
    while n > 1:
        while n % d == 0:
            factors.append(d)
            n /= d
        d = d + 1
    return factors
A straight translation of the Python code in Clojure would be:
(defn prime-factors [n]
  (let [n (atom n)          ;; The Python code makes use of mutability, which
        factors (atom [])   ;; isn't idiomatic in Clojure but can be emulated
        d (atom 2)]         ;; using atoms
    (loop []
      (when (< 1 @n)
        (loop []
          (when (== (rem @n @d) 0)
            (swap! factors conj @d)
            (swap! n quot @d)
            (recur)))
        (swap! d inc)
        (recur)))
    @factors))
(prime-factors 279)             ;; => [3 3 31]
(prime-factors 987654123987546) ;; => [2 3 41 14389 279022459]
(time (prime-factors 987654123987546))
;; "Elapsed time: 13993.984 msecs"
;; same performance on my machine as the Rosetta Code solution
You can improve this code to make it more idiomatic:
from nested loops to a single loop:
(loop []
  (cond
    (<= @n 1) @factors
    (not= (rem @n @d) 0) (do (swap! d inc)
                             (recur))
    :else (do (swap! factors conj @d)
              (swap! n quot @d)
              (recur))))
get rid of the atoms:
(defn prime-factors [n]
  (loop [n n
         factors []
         d 2]
    (cond
      (<= n 1) factors
      (not= (rem n d) 0) (recur n factors (inc d))
      :else (recur (quot n d) (conj factors d) d))))
replace == 0 by zero?:
(not (zero? (rem n d))) (recur n factors (inc d))
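Putting the three improvements together gives this version (same behaviour as above):
(defn prime-factors [n]
  (loop [n n, factors [], d 2]
    (cond
      (<= n 1) factors
      (not (zero? (rem n d))) (recur n factors (inc d))
      :else (recur (quot n d) (conj factors d) d))))
(prime-factors 279) ;=> [3 3 31]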
You can also overhaul it completely to make a lazy version of it:
(defn prime-factors [n]
  ((fn step [n d]
     (lazy-seq
       (when (< 1 n)
         (cond
           (zero? (rem n d)) (cons d (step (quot n d) d))
           ;; recur can't be used here: inside lazy-seq it would target the
           ;; generated thunk, so we call step by name instead
           :else (step n (inc d))))))
   n 2))
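Because the result is now lazy, the first factor of a large number comes back immediately, without computing the rest (my own check):
(first (prime-factors 987654123987546))
;=> 2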
I planned to have a section on optimization here, but I'm no specialist. The only thing I can say is that you can trivially make this code faster by interrupting the loop when d is greater than the square root of n:
(defn prime-factors [n]
  (if (< 1 n)
    (loop [n n
           factors []
           d 2]
      (let [q (quot n d)]
        (cond
          (< q d) (conj factors n)
          (zero? (rem n d)) (recur q (conj factors d) d)
          :else (recur n factors (inc d)))))
    []))
(time (prime-factors 987654123987546)) ;; "Elapsed time: 7.124 msecs"
Not every loop unrolls cleanly into an elegant "functional" decomposition.
The Rosetta Code solution suggested by @edbond is pretty simple and concise; I would say it's idiomatic since no obvious "functional" solution is apparent. That solution runs noticeably faster on my machine than your Python version for 987654123987546.
More generally, if you're looking to expand your understanding of functional idioms, Bedra and Halloway's "Programming Clojure" (pp.90-95) presents an excellent comparison of different versions of the Fibonacci sequence, using loop, lazy seqs, and an elegant "functional" version. Chouser and Fogus's "Joy of Clojure" (MEAP version) also has a nice section on function composition.

Sort prime factors in ascending order Scheme

I am new to Scheme and I want to sort the prime factors of a number into ascending order. I found this code, but it does not sort.
(define (primefact n)
  (let loop ([n n] [m 2] [factors (list)])
    (cond [(= n 1) factors]
          [(= 0 (modulo n m)) (loop (/ n m) 2 (cons m factors))]
          [else (loop n (add1 m) factors)])))
Can you please help?
Thank you.
I would say it sorts, but descending. If you want to sort the other way, just reverse the result:
(cond [(= n 1) (reverse factors)]
Usually, when you find the results in the order you want them, you can cons them up on the way back from the recursion, like this:
(define (primefact-asc n)
  (let recur ((n n) (m 2))
    (cond ((= n 1) '())
          ((= 0 (modulo n m)) (cons m (recur (/ n m) m))) ; replaced 2 with m
          (else (recur n (+ 1 m))))))
Note that this is not tail-recursive, since it conses onto the result after the recursive call returns, but since the number of factors in an answer is small (thousands, perhaps) it won't matter much.
Also, since the factors are found in order, you don't need to restart at 2 on every round; you can continue from the divisor you just found, which is what passing m instead of 2 does.
Which dialect of Scheme is used?
Three hints:
You only need to test divisors less than or equal to the square root of your number (a short derivation follows these hints).
a * b = N ; a <= b ---> a <= sqrt(N).
If you need all prime numbers below some bound, you should use the Sieve of Eratosthenes. See Wikipedia.
Before you start to write a program, look the problem up on Wikipedia.
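To spell out the first hint (my derivation, not part of the original answer): if $N = a \cdot b$ with $a \le b$, then
$$a^2 \;\le\; a\,b = N \quad\Longrightarrow\quad a \le \sqrt{N},$$
so every composite N has a divisor no greater than $\sqrt{N}$, and trial division can stop once the divisor passes that bound.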

map part of the vector efficiently in clojure

I wonder how this can be done in Clojure idiomatically and efficiently:
1) Given a vector containing n integers in it: [A0 A1 A2 A3 ... An]
2) Increase the last x items by 1 (let's say x is 100) so the vector will become: [A0 A1 A2 A3 ... (An-99 + 1) (An-98 + 1)... (An-1 + 1) (An + 1)]
One naive implementation looks like:
(defn inc-last [x nums]
  (let [n (count nums)]
    (map #(if (>= % (- n x)) (inc %2) %2)
         (range n)
         nums)))

(inc-last 2 [1 2 3 4])
;=> (1 2 4 5)
In this implementation, you basically map the entire vector to another one, examining each item to see whether it needs to be increased.
However, this is an O(n) operation, while I only want to change the last x items in the vector. Ideally, this should be done in O(x) instead of O(n).
I am considering using some functions like split-at/concat to implement it like below:
(defn inc-last [x nums]
  ;; split off everything but the last x items (the original snippet split
  ;; at x, which only works when x happens to be exactly half the count)
  (let [[nums1 nums2] (split-at (- (count nums) x) nums)]
    (concat nums1 (map inc nums2))))
However, I am not sure if this implementation is O(n) or O(x). I am new to Clojure and not really sure what the time complexity will be for operations like concat/split-at on persistent data structures in Clojure.
So my questions are:
1) What the time complexity here in second implementation?
2) If it is still O(n), is there any idiomatic and efficient implementation that takes only O(x) in Clojure for solving this problem?
Any comment is appreciated. Thanks.
Update:
noisesmith's answer told me that split-at will convert the vector into a seq, a fact I had not realised previously. Since I will do random access on the result (calling nth after processing the vector), I would like an efficient solution (O(x) time) that keeps the result a vector rather than a list; otherwise nth would slow my program down as well.
Concat and split-at both turn the input into a seq, effectively a linked-list representation, so you lose the vector's fast indexed access. Here is how to do it with a vector, updating only the last n items:
user> (defn inc-last-n [n x]
        (let [count (count x)
              update (fn [x i] (update-in x [i] inc))]
          (reduce update x (range (- count n) count))))
#'user/inc-last-n
user> (inc-last-n 3 [0 1 2 3 4 5 6])
[0 1 2 3 5 6 7]
This will fail on input that is not associative (like seq / lazy-seq) because there is no O(1) access time in non-associative types.
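For example (my illustration, not from the answer): a range is not associative, so it has to be poured into a vector first:
(inc-last-n 3 (range 7))       ; throws ClassCastException: not Associative
(inc-last-n 3 (vec (range 7))) ;=> [0 1 2 3 5 6 7]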
inc-last below is an implementation using a transient, which lets you get a modifiable "in-place" vector in constant time and get back a persistent! vector, also in constant time, so the updates can be made in O(x). The original implementation used an imperative doseq loop but, as mentioned in the comments, transient operations can return a new object, so it's better to keep doing things in a functional way.
I added a doall to the call to inc-last-2 since it returns a lazy seq, while inc-last and inc-last-3 return vectors; the doall is needed to be able to compare them all.
According to some quick tests I made, inc-last and inc-last-3 don't actually differ much in performance, not even for huge vectors (10,000,000 elements). For the inc-last-2 implementation, though, there's quite a difference even for a vector of 1000 elements: modifying only the last 10, it's ~100x slower. For smaller vectors, or when n is close to (count nums), the difference is not that large.
(Thanks to Michał Marczyk for his useful comments)
(def x (vec (range 1000)))

(defn inc-last [n x]
  (let [x (transient x)
        l (count x)]
    (->> (range (- l n) l)
         (reduce #(assoc! %1 %2 (inc (%1 %2))) x)
         persistent!)))

(defn inc-last-2 [x nums]
  (let [n (count nums)]
    (map #(if (>= % (- n x)) (inc %2) %2)
         (range n)
         nums)))

(defn inc-last-3 [n x]
  (let [l (count x)]
    (reduce #(assoc %1 %2 (inc (%1 %2))) x (range (- l n) l))))
(time (dotimes [i 100] (inc-last 50 x)))
;=> "Elapsed time: 49.7965 msecs"

(time (dotimes [i 100] (doall (inc-last-2 10 x))))
;=> "Elapsed time: 1751.964501 msecs"

(time (dotimes [i 100] (inc-last-3 50 x)))
;=> "Elapsed time: 67.651 msecs"

Help understanding Sieve of Eratosthenes implementation

This is boring, I know, but I need a little help understanding an implementation of the Sieve of Eratosthenes. It's the solution to this Programming Praxis problem.
(define (primes n)
  (let* ((max-index (quotient (- n 3) 2))
         (v (make-vector (+ 1 max-index) #t)))
    (let loop ((i 0) (ps '(2)))
      (let ((p (+ i i 3)) (startj (+ (* 2 i i) (* 6 i) 3)))
        (cond ((>= (* p p) n)
               (let loop ((j i) (ps ps))
                 (cond ((> j max-index) (reverse ps))
                       ((vector-ref v j)
                        (loop (+ j 1) (cons (+ j j 3) ps)))
                       (else (loop (+ j 1) ps)))))
              ((vector-ref v i)
               (let loop ((j startj))
                 (if (<= j max-index)
                     (begin (vector-set! v j #f)
                            (loop (+ j p)))))
               (loop (+ 1 i) (cons p ps)))
              (else (loop (+ 1 i) ps)))))))
The part I'm having trouble with is startj. Now, I can see that p is going to step through the odd numbers starting at 3, being defined as (+ i i 3). But I don't understand the relationship between p and startj, which is (+ (* 2 i i) (* 6 i) 3).
Edit: I understand that the idea is to skip previously sifted numbers. The puzzle definition states that when sifting a number x, sifting should start at the square of x. So, when sifting 3, start by eliminating 9, etc.
However, what I don't understand is how the author came up with that expression for startj (algebraically speaking).
From the puzzle comments:
In general, when sifting by n, sifting starts at n-squared because all the previous multiples of n have already been sieved.
The rest of the expression has to do with the cross-reference between numbers and sieve indexes. There’s a 2 in the expression because we eliminated all the even numbers before we ever started. There’s a 3 in the expression because Scheme vectors are zero-based, and the numbers 0, 1 and 2 aren’t part of the sieve. I think the 6 is actually a combination of the 2 and the 3, but it’s been a while since I looked at the code, so I’ll leave it to you to figure out.
If anyone could help me with this, that'd be great. Thanks!
I think I've figured it out, with the assistance of programmingpraxis' comments on their website.
To restate the problem, startj is defined in the listing as (+ (* 2 i i) (* 6 i) 3), i.e. 2i^2 + 6i + 3.
I didn't initially understand how this expression related to p - since 'sifting' for a number p should start at p^2, I figured that startj should be something relating to 4i^2 + 12i + 9.
However, startj is an index into the vector v, whose slots represent the odd numbers starting from 3. Therefore, the index corresponding to p^2 is actually (p^2 - 3) / 2.
Expanding the equation: (p^2 - 3) / 2 = ([4i^2 + 12i + 9] - 3) / 2 = (4i^2 + 12i + 6) / 2 = 2i^2 + 6i + 3, which is exactly the value of startj.
I feel it might've been clearer to define startj as (quotient (- (* p p) 3) 2), but anyway - I think that solves it :)
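As a quick arithmetic sanity check (mine; written in Clojure to match the other snippets on this page), the two ways of computing the start index agree:
(every? (fn [i]
          (let [p (+ i i 3)]
            (= (quot (- (* p p) 3) 2)
               (+ (* 2 i i) (* 6 i) 3))))
        (range 1000))
;=> true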
David Seiler: perhaps not the clearest, but in addition to implementing the basic Sieve it also has to implement the three optimizations described in the exercise.
Harto: that was my second exercise. I was still experimenting with my writing style.
Ephemient: correct.
See a more complete explanation in my comment at Programming Praxis.
EDIT: I've added an additional comment at Programming Praxis. And when I actually looked at the code, I was wrong about the derivation of the number 6 in the formula; sorry I misled you.
