SICP Exercise 1.16 - Is my solution correct? - scheme

Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product ab^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
So I tried really hard and came up with this solution:
(define (exp b n)
  (exp-iter b n 1))

(define (square p) (* p p))

(define (even? k)
  (= (remainder k 2) 0))

(define (exp-iter b counter product)
  (define (smash counter)
    (if (even? counter)
        (square (exp-iter b (/ 2 counter) product))
        (* b (exp-iter b (- counter 1) product))))
  (if (= counter 0) product (smash counter)))
(exp 4 3) ;test
This runs perfectly, but I'm not sure it's what the author asked me to do. Are there any problems with it? Is my solution really iterative?

Your solution is not iterative. An iterative process is one where nothing remains to be done after the recursive call returns, and that's not the case in these two lines:
(square (exp-iter b (/ 2 counter) product))
(* b (exp-iter b (- counter 1) product))
After invoking exp-iter, the first line passes the result to square, and the second multiplies the result by b. Compare it with this tail-recursive solution:
(define (exp-iter b counter product)
  (cond ((= counter 0)
         product)
        ((even? counter)
         (exp-iter (square b) (/ counter 2) product))
        (else
         (exp-iter b (- counter 1) (* b product)))))
Notice that after invoking exp-iter there's nothing left to do, so the procedure simply returns its value. A smart compiler will detect this and transform the recursive call into a loop that uses a constant amount of stack memory (instead of growing with every recursive call).
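One way to convince yourself that this version is iterative, and to see the invariant from the exercise's hint at work, is to trace the states. This is a sketch, with a hand-worked trace of (exp-iter 2 5 1) written as comments; note that product * b^counter stays equal to 2^5 = 32 at every step:

```scheme
;; State trace for (exp-iter 2 5 1):
;;
;;     b    counter   product   product * b^counter
;;     2       5         1              32
;;     2       4         2              32   (odd step:  product := b * product)
;;     4       2         2              32   (even step: b := b^2, counter := counter/2)
;;    16       1         2              32
;;    16       0        32              32   (counter = 0: return product)

(define (square p) (* p p))

(define (exp-iter b counter product)
  (cond ((= counter 0) product)
        ((even? counter) (exp-iter (square b) (/ counter 2) product))
        (else (exp-iter b (- counter 1) (* b product)))))

(exp-iter 2 5 1) ; => 32
```

Each row of the trace is a complete state: no pending multiplications are stacked up, which is exactly what "iterative process" means in SICP's sense.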

Related

Scheme : recursive process much faster than iterative

I am studying SICP and wrote two procedures to compute the sum of 1/n^2, the first generating a recursive process and the second generating an iterative process:
(define (sum-rec a b)
  (if (> a b)
      0
      (exact->inexact (+ (/ 1 (* a a)) (sum-rec (1+ a) b)))))

(define (sum-it a b)
  (define (sum_iter a tot)
    (if (> a b)
        tot
        (sum_iter (1+ a) (+ (/ 1 (* a a)) tot))))
  (exact->inexact (sum_iter a 0)))
I tested that both procedures give exactly the same results when called with small values of b, and that the result approaches π²/6 as b gets larger, as expected.
But surprisingly, calling (sum-rec 1 250000) is almost instantaneous whereas calling (sum-it 1 250000) takes forever.
Is there an explanation for that?
As was mentioned in the comments, sum-it in its present form is adding numbers using exact arithmetic, which is slower than the inexact arithmetic being used in sum-rec. To do an equivalent comparison, this is how you should implement it:
(define (sum-it a b)
  (define (sum_iter a tot)
    (if (> a b)
        tot
        (sum_iter (1+ a) (+ (/ 1.0 (* a a)) tot))))
  (sum_iter a 0))
Notice that replacing the 1 with a 1.0 forces the interpreter to use inexact arithmetic. Now this will return immediately:
(sum-it 1 250000)
=> 1.6449300668562465
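The contagion rule behind this fix is easy to check at a REPL; a quick sketch:

```scheme
;; All-exact operands keep the arithmetic exact: rationals accumulate,
;; and every addition pays for GCD reductions on ever-growing
;; numerators and denominators.
(/ 1 4)               ; => 1/4   (exact rational)
(+ (/ 1 4) (/ 1 9))   ; => 13/36 (still exact)

;; One inexact (floating-point) operand makes the whole result inexact,
;; and from then on each addition is a cheap constant-time flonum operation.
(/ 1.0 4)             ; => 0.25
(+ (/ 1.0 4) (/ 1 9)) ; => an inexact number close to 0.3611
```

That is why a single 1.0 deep inside sum_iter is enough to switch the entire accumulation over to floating point.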
You can reframe both of these versions so that they do exact or inexact arithmetic appropriately, simply by controlling what value they use for zero and relying on the contagion rules. These two are in Racket, which doesn't have 1+ by default but does have a nice syntax for optional arguments with defaults:
(define (sum-rec low high (zero 0.0))
  (let recurse ([i low])
    (if (> i high)
        zero
        (+ (/ 1 (* i i)) (recurse (+ i 1))))))

(define (sum-iter low high (zero 0.0))
  (let iterate ([i low] [accum zero])
    (if (> i high)
        accum
        (iterate (+ i 1) (+ (/ 1 (* i i)) accum)))))
The advantage of this is you can see the performance difference easily for both versions. The disadvantage is that you'd need a really smart compiler to be able to optimize the numerical operations here (I think, even if it knew low and high were machine integers, it would have to infer that zero is going to be some numerical type and generate copies of the function body for all the possible types).

I'm trying to figure out how to incorporate 3 variables into my tail recursion code for racket

Write a tail recursive function called popadd that models a population with P people at time t = 0 and adds d people per year.
(define (popadd t P)
  (if (= t 0)
      P
      (+ (popadd (- t 1) P) d)))
but, of course, I get the error that d hasn't been defined yet, which is true. I tried adding it as an input, but then the return value is just the number I passed in for d.
You can simply pass along another parameter to the recursion:
(define (popadd t P d)
  (if (= t 0)
      P
      (+ d (popadd (- t 1) P d))))
Or you can define the value, to avoid passing it around - assuming it doesn't need to change:
(define d 100)

(define (popadd t P)
  (if (= t 0)
      P
      (+ d (popadd (- t 1) P))))
Notice that you could do the same with P, if that's acceptable. It really depends on what the expected contract for the procedure is.
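Either way, a quick sanity check of the three-parameter version (the numbers here are made up): after t years the population should be P + t·d.

```scheme
(define (popadd t P d)
  (if (= t 0)
      P
      (+ d (popadd (- t 1) P d))))

;; Start with 1000 people, add 100 per year, run for 10 years:
(popadd 10 1000 100) ; => 2000, i.e. 1000 + 10 * 100
```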
Note that neither your code nor the code in the other answer is tail-recursive: in a recursive call like (+ (f ...)), f is not in tail position. To make the code tail-recursive you need the result of the recursive call be the result of the overall call (so in the above example, + is in tail position). To do this you need an auxiliary function. Here is a way of doing it which relies only on local define:
(define (popadd t P d)
  (define (popadd-loop tau pop)
    (if (zero? tau)
        pop
        (popadd-loop (- tau 1) (+ pop d))))
  (popadd-loop t P))
Here is essentially the same thing using named-let, which is nicer:
(define (popadd t P d)
  (let popadd-loop ([tau t] [pop P])
    (if (zero? tau)
        pop
        (popadd-loop (- tau 1) (+ pop d)))))
Finally note that this problem has a closed-form solution:
(define (popadd t P d)
  (+ P (* t d)))
I really wish that people trying to teach programming knew enough maths to not set problems which have trivial closed-form answers, as doing so encourages people to write inefficient (in the complexity-class sense) code. Obviously this is not your fault: it's your teacher's.

Multiplication as repeated addition?

I am new to Scheme. I am attempting to write a program that defines (integer) multiplication as repeated addition. In Python the program would look something like:
a = int(raw_input('please enter a number to be multiplied'))
b = int(raw_input('please enter a number to multiply by'))
y = a
print y
for i in range(b - 1):
    y += a
    print y
There are two problems I have when attempting to write in Scheme, one 'hard' and one 'soft':
The 'hard' problem: I cannot find an equivalent of the range function in Scheme. How should I implement this?
The 'soft' problem: At this point in the book, for loops have not been introduced for Scheme, which leads me to believe the solution does not contain a for loop; however, I am fine with using a for loop if that is easier/better.
You use recursion in place of iteration. The general idea is:
mult(a, b):
    if b == 0, return 0
    return a + mult(a, b-1)
Now, can you code that in Scheme on your own?
In Racket (a Scheme derivative) there is "named let", with which one can keep adding a in each loop, b times (this makes the concept easier to understand):
(define (mult a b)
  (let loop ((n 0)
             (s 0))
    (cond ((= n b) s)
          (else (loop (add1 n) (+ s a))))))
In DrRacket, the Scheme code looks like this:
(define mult
  (lambda (a b)
    (if (= b 0)
        b
        (+ a (mult a (- b 1))))))

(mult 4 5)

Error: for: undefined (Scheme)

I have to write a function in Scheme (R5RS) that is called as follows:
(power-close-to b n)
It has to return an integer, which I call e, such that:
b^e > n
with b, e and n integers.
So if we do:
(power-close-to 2 10)
it has to return 4, because 4 is the first integer e for which b^e > n.
I've written this function in an iterative way, but I have to write it in a recursive form.
So this is my code:
(define e 0)

(define (power-close-to b n)
  (for ((e (< (expt b e) n))
        (+ e 1))
    e))
But when I try it, Scheme gives the following error: "for: undefined". So it seems my Scheme doesn't know the procedure for, but I've seen it in multiple Scheme examples on the internet, so I don't understand why in my case it says for is undefined.
Thanks for your help!
EDIT: I tried making it recursive; this is how I did it, but I think it is still iterative, and I really don't have any idea how I could make it recursive.
(define e 0)

(define (power-close-to b n)
  (if (< (expt b e) n)
      (and (set! e (+ e 1)) (power-close-to b n))
      e))
I also tried this, but when I try it, it never prints anything and never ends (but this is recursive, I think):
(define e 0)

(define (power-close-to b n)
  (if (< (expt b e) n)
      (* b (power-close-to b n))
      e))
When someone asks you to transform a recursive procedure in Scheme into an iterative one, it generally means that you have to use tail recursion, not that you should use the looping constructs of the language.
Notice that not all Scheme interpreters provide a for loop (most will provide a do loop, but I don't think that's the point of the exercise). The error you're reporting means that your interpreter doesn't have a for construct, so it's quite possible that you're expected to rewrite the procedure in a tail recursive fashion. I'll give you an example of what I mean, this is a recursive factorial:
(define (fac n)
  (if (zero? n)
      1
      (* n (fac (sub1 n)))))

(fac 10)
=> 3628800
Now the same procedure can be written in such a way that it generates an iterative process (even though syntactically, it still uses recursion):
(define (fac n acc)              ; now the result is stored in the accumulator parameter
  (if (zero? n)                  ; when recursion ends
      acc                        ; return accumulator
      (fac (sub1 n) (* n acc)))) ; else update accumulator in each iteration

(fac 10 1) ; initialize the accumulator with the right value
=> 3628800
What's the point, you ask? The second version of the procedure is written in tail-recursive form (notice that there's nothing left to do after the recursive call returns), so a compiler trick called tail-call optimization kicks in and the procedure runs in constant stack space, just as efficiently as a loop in other, non-functional languages; this makes recursive calls very cheap. Now try to write your power-close-to implementation so that it uses a tail call.
What comes closest (and handiest) to the traditional loop is the named let. That would look like this:
(define (power-close-to b n)
  (let loop ((e 0))
    (if (<= (expt b e) n)
        (loop (+ e 1))
        e)))

(display (power-close-to 2 10))
The loop variable is defined at the let level, so it is local to the loop (and not global as in your example). Other than that, the code looks pretty similar to yours.
A named let creates an inner function, so you could also express it as follows:
(define (power-close-to b n)
  (define (loop e)
    (if (<= (expt b e) n)
        (loop (+ e 1))
        e))
  (loop 0))
Unfortunately, R5RS does not support default values for arguments, but if you want to avoid the inner function you could go for:
(define (power-close-to b n e)
  (if (<= (expt b e) n)
      (power-close-to b n (+ e 1))
      e))
but then you'd have to call it with an additional 0 like
(power-close-to 2 10 0)

two methods of composing functions, how different in efficiency?

Let f transform one value to another; I'm writing a function that repeats the transformation n times.
I have come up with two different ways:
One is the obvious way, which literally applies the function n times, so repeat(f, 4) means x → f(f(f(f(x)))).
The other way is inspired by the fast method for powering, which means dividing the problem into two problems that are half as large whenever n is even. So repeat(f, 4) means x → g(g(x)), where g(x) = f(f(x)).
At first I thought the second method wouldn't improve efficiency much. At the end of the day, we would still need to apply f n times, wouldn't we? In the above example, g would still be translated into f ∘ f without any further simplification, right?
However, when I tried out the methods, the latter was noticeably faster.
;; computes the composite of two functions
(define (compose f g)
  (lambda (x) (f (g x))))

;; identity function
(define (id x) x)

;; repeats the application of a function, naive way
(define (repeat1 f n)
  (define (iter k acc)
    (if (= k 0)
        acc
        (iter (- k 1) (compose f acc))))
  (iter n id))

;; repeats the application of a function, divide-and-conquer way
(define (repeat2 f n)
  (define (iter f k acc)
    (cond ((= k 0) acc)
          ((even? k) (iter (compose f f) (/ k 2) acc))
          (else (iter f (- k 1) (compose f acc)))))
  (iter f n id))

;; increment function used for testing
(define (inc x) (+ x 1))
In fact, ((repeat2 inc 1000000) 0) was much faster than ((repeat1 inc 1000000) 0). My question is: in what respect is the second method more efficient than the first? Does re-using the same function object save storage and reduce the time spent creating new objects?
After all, the application has to be repeated n times; or, to put it another way, x → ((x+1)+1) cannot be automatically reduced to x → (x+2), right?
I'm running on DrScheme 4.2.1.
Thank you very much.
You're right that both versions do the same number of calls to inc -- but there's more overhead than that in your code. Specifically, the first version creates N closures, whereas the second one creates only log(N) closures -- and if closure creation is most of the work, then you'll see a big difference in performance.
There are three things that you can use to see this in more detail:
Use DrScheme's time special form to measure the speed. In addition to the time that it took to perform some computation, it will also tell you how much time was spent in GC. You will see that the first version is doing some GC work, while the second doesn't. (Well, it does, but it's so little that it will probably not show.)
Your inc function is doing so little that you're measuring only the looping overhead. For example, when I use this bad version:
(define (slow-inc x)
  (define (plus1 x)
    (/ (if (< (random 10) 5)
           (* (+ x 1) 2)
           (+ (* x 2) 2))
       2))
  (- (plus1 (plus1 (plus1 x))) 2))
the difference between the two uses drops from a factor of ~11 to 1.6.
Finally, try this version out:
(define (repeat3 f n)
  (lambda (x)
    (define (iter n x)
      (if (zero? n) x (iter (sub1 n) (f x))))
    (iter n x)))
It doesn't do any compositions, and it runs at roughly the same speed as your second version.
The first method performs one compose step for each decrement of n, so building the result function takes O(n) steps and creates O(n) closures. The second method does not: every time iter runs with an even k, it halves the problem instead of merely decreasing it by 1, so the build takes only O(log(n)) steps and closures -- even though the final composed function still calls f n times when it is applied.
As Martinho Fernandez suggested, the wikipedia article on exponentiation by squaring explains it very clearly.
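If you'd rather verify the closure counts than take them on faith, you can instrument compose with a counter. This is a sketch: repeat1 and repeat2 are copied from the question, and only compose is changed to bump a global count.

```scheme
(define compose-count 0)

;; same as the question's compose, but counts how often it runs
(define (compose f g)
  (set! compose-count (+ compose-count 1))
  (lambda (x) (f (g x))))

(define (id x) x)

(define (inc x) (+ x 1))

(define (repeat1 f n)                 ; naive: one compose per step
  (define (iter k acc)
    (if (= k 0) acc (iter (- k 1) (compose f acc))))
  (iter n id))

(define (repeat2 f n)                 ; divide and conquer
  (define (iter f k acc)
    (cond ((= k 0) acc)
          ((even? k) (iter (compose f f) (/ k 2) acc))
          (else (iter f (- k 1) (compose f acc)))))
  (iter f n id))

(set! compose-count 0)
(repeat1 inc 1000000)
(display compose-count) (newline)     ; prints 1000000

(set! compose-count 0)
(repeat2 inc 1000000)
(display compose-count) (newline)     ; prints 26: 19 halvings + 7 decrements
```

The decrement count is the number of 1-bits in n (here 1000000 has seven), and the halving count is floor(log2 n), which is where the O(log n) construction cost comes from.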
