Let f be a function that transforms one value into another; I'm writing a function that repeats the transformation n times.
I have come up with two different ways:
One is the obvious way: literally apply the function n times, so repeat(f, 4) means x → f(f(f(f(x)))).
The other is inspired by the fast algorithm for exponentiation: divide the problem into two problems that are half as large whenever n is even. So repeat(f, 4) means x → g(g(x)) where g(x) = f(f(x)).
At first I thought the second method wouldn't improve efficiency much. At the end of the day, we would still need to apply f n times, wouldn't we? In the above example, g would still be translated into f ∘ f without any further simplification, right?
However, when I tried out the two methods, the latter was noticeably faster.
;; computes the composite of two functions
(define (compose f g)
  (lambda (x) (f (g x))))

;; identity function
(define (id x) x)

;; repeats the application of a function, naive way
(define (repeat1 f n)
  (define (iter k acc)
    (if (= k 0)
        acc
        (iter (- k 1) (compose f acc))))
  (iter n id))
;; repeats the application of a function, divide-and-conquer way
(define (repeat2 f n)
  (define (iter f k acc)
    (cond ((= k 0) acc)
          ((even? k) (iter (compose f f) (/ k 2) acc))
          (else (iter f (- k 1) (compose f acc)))))
  (iter f n id))
;; increment function used for testing
(define (inc x) (+ x 1))
In fact, ((repeat2 inc 1000000) 0) was much faster than ((repeat1 inc 1000000) 0). My question is: in what respect was the second method more efficient than the first? Does re-using the same function object save storage and reduce the time spent creating new objects?
After all, the application has to be repeated n times, or saying it another way, x→((x+1)+1) cannot be automatically reduced to x→(x+2), right?
I'm running on DrScheme 4.2.1.
Thank you very much.
You're right that both versions make the same number of calls to inc -- but there's more overhead than that in your code. Specifically, the first version creates N closures, whereas the second one creates only log(N) closures -- and if the closure creation is most of the work then you'll see a big difference in performance.
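If you want to see the closure counts directly, here is a small sketch of my own (compose-count and counting-compose are made-up names, not part of the original code) that wraps compose with a counter:

;; hypothetical helper: count how many closures get created
(define compose-count 0)
(define (counting-compose f g)
  (set! compose-count (+ compose-count 1))  ; one new closure per call
  (lambda (x) (f (g x))))

Substitute it for compose in both versions: for n = 1000000, repeat1 bumps the counter a million times, while repeat2 bumps it at most about 2·log2(n) ≈ 40 times.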
There are three things that you can use to see this in more detail:
Use DrScheme's time special form to measure the speed. In addition to the time that it
took to perform some computation, it will also tell you how much time was spent in GC.
You will see that the first version is doing some GC work, while the second doesn't.
(Well, it does, but it's so little that it will probably not show.)
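For example, a minimal sketch using the definitions above:

(time ((repeat1 inc 1000000) 0))   ; prints cpu, real, and gc time; gc work is visible
(time ((repeat2 inc 1000000) 0))   ; gc time should be at or near zero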
Your inc function is doing so little that you're measuring only the looping overhead. For example, when I use this bad version:
(define (slow-inc x)
  (define (plus1 x)
    (/ (if (< (random 10) 5)
           (* (+ x 1) 2)
           (+ (* x 2) 2))
       2))
  (- (plus1 (plus1 (plus1 x))) 2))
the difference between the two versions drops from a factor of ~11 to ~1.6.
Finally, try this version out:
(define (repeat3 f n)
  (lambda (x)
    (define (iter n x)
      (if (zero? n) x (iter (sub1 n) (f x))))
    (iter n x)))
It doesn't do any compositions, and it runs at roughly the same speed as your second version.
The first method performs n compositions to build the resulting function, so constructing it is O(n). The second method halves the problem whenever n is even, so it performs only O(log(n)) compositions. Note that applying the finished function still calls f n times either way; the saving is in building the composition, not in running it.
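To see the halving concretely, here is how repeat2's loop evolves for n = 12 (schematically; f² stands for f composed with itself, and so on):

;; (iter f   12 id)   ; even: square f
;; (iter f²   6 id)   ; even: square again
;; (iter f⁴   3 id)   ; odd: compose one copy into acc
;; (iter f⁴   2 f⁴)   ; even
;; (iter f⁸   1 f⁴)   ; odd
;; (iter f⁸   0 f¹²)  ; k = 0: return acc = f¹²

That is 5 calls to compose instead of the 12 that repeat1 would make; for n = 1000000 it is a few dozen compositions instead of a million.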
As Martinho Fernandez suggested, the Wikipedia article on exponentiation by squaring explains it very clearly.
Related
Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a·b^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
So I've tried really hard and came up with this solution:
(define (exp b n)
  (exp-iter b n 1))

(define (square p) (* p p))

(define (even? k)
  (= (remainder k 2) 0))

(define (exp-iter b counter product)
  (define (smash counter)
    (if (even? counter)
        (square (exp-iter b (/ counter 2) product))
        (* b (exp-iter b (- counter 1) product))))
  (if (= counter 0) product (smash counter)))

(exp 4 3) ; test
This runs perfectly but I'm not sure if this is what the author asked me to do. Are there any problems with this? Is my solution really iterative?
Your solution is not iterative. An iterative process is one that does no further work after the recursive call returns, and that's not the case in these two lines:
(square (exp-iter b (/ counter 2) product))
(* b (exp-iter b (- counter 1) product))
After invoking exp-iter, in the first line you're passing the result to square, and in the second line you're multiplying the result by b. Compare it with this, a tail recursive solution:
(define (exp-iter b counter product)
  (cond ((= counter 0)
         product)
        ((even? counter)
         (exp-iter (square b) (/ counter 2) product))
        (else
         (exp-iter b (- counter 1) (* b product)))))
Notice that after invoking exp-iter there's nothing left to do, and the procedure simply returns its value. A smart compiler will detect this and transform the recursive call into a loop that uses a constant amount of stack memory (instead of growing with every recursive call).
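As a sanity check, here is how the tail-recursive version evolves for (exp-iter 2 10 1); note the invariant product·b^counter = 1024 at every step:

;; (exp-iter 2   10 1)     ; 1·2^10  = 1024, counter even
;; (exp-iter 4    5 1)     ; 1·4^5   = 1024, counter odd
;; (exp-iter 4    4 4)     ; 4·4^4   = 1024, counter even
;; (exp-iter 16   2 4)     ; 4·16^2  = 1024, counter even
;; (exp-iter 256  1 4)     ; 4·256^1 = 1024, counter odd
;; (exp-iter 256  0 1024)  ; counter = 0: return product = 1024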
I am studying SICP and wrote two procedures to compute the sum of 1/n², the first generating a recursive process and the second generating an iterative process:
(define (sum-rec a b)
  (if (> a b)
      0
      (exact->inexact (+ (/ 1 (* a a)) (sum-rec (1+ a) b)))))

(define (sum-it a b)
  (define (sum_iter a tot)
    (if (> a b)
        tot
        (sum_iter (1+ a) (+ (/ 1 (* a a)) tot))))
  (exact->inexact (sum_iter a 0)))
I tested that both procedures give exactly the same results when called with small values of b, and that the result approaches $\pi^2/6$ as b gets larger, as expected.
But surprisingly, calling (sum-rec 1 250000) is almost instantaneous whereas calling (sum-it 1 250000) takes forever.
Is there an explanation for that?
As was mentioned in the comments, sum-it in its present form is adding numbers using exact arithmetic, which is slower than the inexact arithmetic being used in sum-rec. To do an equivalent comparison, this is how you should implement it:
(define (sum-it a b)
  (define (sum_iter a tot)
    (if (> a b)
        tot
        (sum_iter (1+ a) (+ (/ 1.0 (* a a)) tot))))
  (sum_iter a 0))
Notice that replacing the 1 with a 1.0 forces the interpreter to use inexact arithmetic. Now this will return immediately:
(sum-it 1 250000)
=> 1.6449300668562465
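The underlying issue is that exact rational sums carry ever-growing numerators and denominators, while a single inexact operand makes the whole result inexact ("contagion"). A quick REPL illustration:

(+ 1/4 1/9)    ; => 13/36  (exact; the fractions keep growing as the sum proceeds)
(+ 1/4 0.25)   ; => 0.5    (one inexact number makes the whole result inexact)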
You can reframe both of these versions so that they do exact or inexact arithmetic appropriately, simply by controlling what value they use for zero and relying on the contagion rules. These two are in Racket, which doesn't have 1+ by default but does have a nice syntax for optional arguments with defaults:
(define (sum-rec low high (zero 0.0))
  (let recurse ([i low])
    (if (> i high)
        zero
        (+ (/ 1 (* i i)) (recurse (+ i 1))))))

(define (sum-iter low high (zero 0.0))
  (let iterate ([i low] [accum zero])
    (if (> i high)
        accum
        (iterate (+ i 1) (+ (/ 1 (* i i)) accum)))))
The advantage of this is that you can easily see the performance difference for both versions. The disadvantage is that you'd need a really smart compiler to optimize the numerical operations here (I think that even if it knew low and high were machine integers, it would have to infer that zero will be some numerical type and generate copies of the function body for all the possible types).
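For example (the exact fraction shown is the partial sum up to 10, included to illustrate the exact path):

(sum-iter 1 250000)   ; => 1.6449300668562465  (inexact, returns immediately)
(sum-iter 1 10 0)     ; => 1968329/1270080     (exact rational)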
In school, I have been learning about runtime and writing more efficient algorithms using tail recursion and the like, and a little while back an assignment asked us to consider this function for calculating powers:
(define (exp x n)
  (if (zero? n) 1 (* x (exp x (- n 1)))))
and we were tasked with writing an exp function with a runtime of O(log n). This was my answer:
(define (exp x n)
  (cond ((zero? n) 1)
        ((= 1 n) x)
        ((even? n) (exp (* x x) (/ n 2)))
        (else (* x (exp (* x x) (/ (- n 1) 2))))))
which simply comes from x^(2n) = (x^2)^n and x^(2n+1) = x·(x^2)^n.
So I have been trying to think of a way to implement tail recursion to optimize this function even further, but I can't really think of a way to do it. Back to my question: is there any sort of rule of thumb for knowing when you can rewrite a polynomial-runtime algorithm as a logarithmic one?
I ask this because, as easy as it was to write this in such a way that its runtime is logarithmic, I never would have thought to do it without having been specifically asked to.
Regarding the first part of your question: it's relatively simple to turn the procedure into tail-recursive form; we just have to pass an additional parameter to accumulate the answer. To avoid creating an additional procedure, I'll use a named let:
(define (exp x n)
  (let loop ([x x] [n n] [acc 1])
    (cond ((zero? n) acc)
          ((= n 1) (* x acc))
          ((even? n) (loop (* x x) (/ n 2) acc))
          (else (loop (* x x) (/ (- n 1) 2) (* x acc))))))
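A few spot checks against the original definition:

(exp 2 10)   ; => 1024
(exp 2 0)    ; => 1
(exp 7 3)    ; => 343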
And for the second question: the rule of thumb would be, if there's a way to halve the size of the problem at some point when making the recursive call (in such a way that the result can be computed accordingly), that's a good sign that a logarithmic solution might exist. Of course, it's not always so obvious.
(define (checksum-2 ls)
(if (null? ls)
0
(let ([n 0])
(+ (+ n 1))(* n (car ls))(checksum-2 (cdr ls)))))
OK, I have this code. It's supposed to (if I wrote it right) increase the number n by one every time it goes through the list, so n would in effect be 1 2 3 4, but I want n to be multiplied by the car of the list.
Everything loads, but when the answer is returned I get 0.
Thanks!
If you format your code differently, you might have an easier time seeing what is going on:
(define (checksum-2 ls)
  (if (null? ls)
      0
      (let ([n 0])
        (+ (+ n 1))
        (* n (car ls))
        (checksum-2 (cdr ls)))))
Inside the let form, the expressions are evaluated in sequence, but you're not using the results of any of them (except the last one). The results of the addition and multiplication are simply discarded.
What you need to do in this case is define a new helper function that uses an accumulator and performs the recursive call. I'm going to guess this is homework or a learning exercise, so I'm not going to give away the complete answer.
UPDATE: As a demonstration of the sort of thing you might need to do, here is a similar function in Scheme to sum the integers from 1 to n:
(define (sum n)
  (define (sum-helper n a)
    (if (<= n 0)
        a
        (sum-helper (- n 1) (+ a n))))
  (sum-helper n 0))
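For example, (sum 3) evaluates as a simple chain of tail calls:

(sum 3) ; => 6: (sum-helper 3 0) → (sum-helper 2 3) → (sum-helper 1 5) → (sum-helper 0 6) → 6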
You should be able to use a similar framework to implement your checksum-2 function.
I remember once going to see [Srinivasa Ramanujan] when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways." [G. H. Hardy, as told in "1729 (number)"]
In "Math Wrath" Joseph Tartakovsky says about this feat, "So what?
Give me two minutes and my calculator watch, and I'll do the same
without exerting any little gray cells." I don't know how
Mr. Tartakovsky would accomplish that proof on a calculator watch, but
the following is my scheme function that enumerates numbers starting
at 1 and stops when it finds a number that is expressable in two
seperate ways by summing the cubes of two positive numbers. And it
indeeds returns 1729.
There are two areas where I would appreciate suggestions for improvement. One area is, being new to Scheme, style and idiom. The other area is around the calculations. SISC does not return exact numbers for roots, even when it could. For example, (expt 27 1/3) yields 2.9999999999999996, yet I do get exact results when cubing an exact number: (expt 3 3) yields 27. My solution was to take the exact floor of a cube root and then test the cube of the floor and the cube of the floor plus one, counting a match if either equals the original number. This solution seems messy and hard to reason about. Is there a more straightforward way?
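To make the failure mode concrete, here is the asymmetry described above, with the values reported for SISC:

(expt 27 1/3)   ; => 2.9999999999999996  (roots come back inexact)
(expt 3 3)      ; => 27                  (cubing an exact number stays exact)
(floor (inexact->exact (expt 27 1/3)))   ; => 2, not 3 -- hence testing both the floor and the floor plus one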
; Find the Hardy-Ramanujan number, which is the smallest positive
; integer that is the sum of the cubes of two positive integers in
; two separate ways.
(define (hardy-ramanujan-number)
  (let ((how-many-sum-of-2-positive-cubes
         ; while i^3 + 1 < n/2
         ;   tmp := exact_floor(cube-root(n - i^3))
         ;   if n = i^3 + tmp^3 or n = i^3 + (tmp + 1)^3 then count := count + 1
         ; return count
         (lambda (n)
           (let ((cube (lambda (n) (expt n 3)))
                 (cube-root (lambda (n) (inexact->exact (expt n 1/3)))))
             (let iter ((i 1) (count 0))
               (if (> (+ (expt i 3) 1) (/ n 2))
                   count
                   (let* ((cube-i (cube i))
                          (tmp (floor (cube-root (- n cube-i)))))
                     (iter (+ i 1)
                           (+ count
                              (if (or (= n (+ cube-i (cube tmp)))
                                      (= n (+ cube-i (cube (+ tmp 1)))))
                                  1
                                  0))))))))))
    (let iter ((n 1))
      (if (= (how-many-sum-of-2-positive-cubes n) 2)
          n
          (iter (+ 1 n))))))
Your code looks mostly fine, I see a few very minor things to comment on:
There's no need to define cube and cube-root at the innermost scope,
Using define for internal functions makes it look a little clearer,
This is related to the second part of your question: you're using inexact->exact on a floating-point number, which can lead to large rationals (in the sense that you allocate a pair of two big integers) -- it would be better to avoid this,
Doing that still doesn't solve the extra test that you do -- but that's only because you're not certain whether you have the right number or whether you missed by one. Given that it should be close to an integer, you can just use round and then do one check, saving you one test.
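To see the large-rational point (the exact digits may vary by implementation):

(inexact->exact 2.9999999999999996)
; => 6755399441055743/2251799813685248  (a pair of big integers)
(inexact->exact (round (expt 27 1/3)))
; => 3  (rounding first keeps the conversion small and exact)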
Fixing the above, and doing it in one function that returns the number when it's found, and using some more "obvious" identifier names, I get this:
(define (hardy-ramanujan-number n)
  (define (cube n) (expt n 3))
  (define (cube-root n) (inexact->exact (round (expt n 1/3))))
  (let iter ([i 1] [count 0])
    (if (> (+ (cube i) 1) (/ n 2))
        (hardy-ramanujan-number (+ n 1))
        (let* ([i^3 (cube i)]
               [j^3 (cube (cube-root (- n i^3)))]
               [count (if (= n (+ i^3 j^3)) (+ count 1) count)])
          (if (= count 2) n (iter (+ i 1) count))))))
I'm running this on Racket, and it looks like it's about 10 times faster (50ms vs 5ms).
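Calling it with a starting point of 1 searches upward from there:

(hardy-ramanujan-number 1) ; => 1729 (= 1³ + 12³ = 9³ + 10³)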
Different Schemes behave differently when it comes to exact exponentiation: some return an exact result when possible, some an inexact result in all cases. You can look at ExactExpt, one of my set of implementation contrasts pages, to see which Schemes do what.