Does call/cc simulate goto this way?

In the book Lisp in Small Pieces, there is the following example, intended to demonstrate that call/cc can simulate goto.
(define (fact n)
  (let ((r 1) (k 'void))
    (call/cc (lambda (c) (set! k c) 'void))
    (set! r (* r n))
    (set! n (- n 1))
    (if (= n 1) r (k 'recurse))))
However, I'm not sure if I'm misunderstanding something, but I cannot see how this simulates goto. When k is applied in the last line, the restored continuation has the r and n of the original continuation, whose values are not changed by the two set! applications. So the loop will never terminate.
Is the book wrong in this example? Or did I miss anything?

the restored continuation has the r and n of the original continuation, whose values are not changed by the two set! applications.
Nope; that's the important part: the changes to the values are visible, and they're not reset. I'm not sure whether the question should be considered a duplicate or not, but this came up in call-with-current-continuation - state saving concept as well, where the asker noted (look at the question for the whole context):
Calling next 3 times produces 0, 1 and 'done. That means when state used the function k given by generator it didn't restore the state of the program.
You could test this very simply by printing the values of r and n after saving the continuation. You'll see that the updated values are there. For instance:
(define (fact n)
  (let ((r 1) (k 'void))
    (call-with-current-continuation (lambda (c) (set! k c) 'void))
    (display "r: ") (display r) (newline)
    (display "n: ") (display n) (newline)
    (set! r (* r n))
    (set! n (- n 1))
    (if (= n 1) r (k 'recurse))))
> (fact 6)
r: 1
n: 6
r: 6
n: 5
r: 30
n: 4
r: 120
n: 3
r: 360
n: 2
720
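To see the same pattern in isolation, here is a minimal sketch of my own (not from the book or the answer) where the captured continuation plays the role of a goto label:

(define (count-to n)
  (let ((i 0) (label 'void))
    (call-with-current-continuation
     (lambda (c) (set! label c) 'void))
    (display i)
    (newline)
    (set! i (+ i 1))
    (if (< i n)
        (label 'again)
        'done)))

(count-to 3) ; prints 0, 1 and 2, then returns done

Each invocation of label jumps back to the point just after the call/cc expression, and the mutated i is still visible there, which is exactly why the fact example terminates.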
Also see:
Explaining different behavior of variables referenced in continuations? (not so useful in explaining the behavior, but it's related)


SICP Exercise 1.16 - Is my solution correct?

Exercise 1.16: Design a procedure that evolves an iterative exponentiation process that uses successive squaring and uses a logarithmic number of steps, as does fast-expt. (Hint: Using the observation that (b^(n/2))^2 = (b^2)^(n/2), keep, along with the exponent n and the base b, an additional state variable a, and define the state transformation in such a way that the product a*b^n is unchanged from state to state. At the beginning of the process a is taken to be 1, and the answer is given by the value of a at the end of the process. In general, the technique of defining an invariant quantity that remains unchanged from state to state is a powerful way to think about the design of iterative algorithms.)
So I've tried really hard and come up with this solution:
(define (exp b n)
  (exp-iter b n 1))

(define (square p) (* p p))

(define (even? k)
  (= (remainder k 2) 0))

(define (exp-iter b counter product)
  (define (smash counter)
    (if (even? counter)
        (square (exp-iter b (/ 2 counter) product))
        (* b (exp-iter b (- counter 1) product))))
  (if (= counter 0) product (smash counter)))
(exp 4 3) ;test
This runs perfectly but I'm not sure if this is what the author asked me to do. Are there any problems with this? Is my solution really iterative?
Your solution is not iterative. An iterative process is one where nothing remains to be done after the recursive call, and that's not the case in these two lines:
(square (exp-iter b (/ 2 counter) product))
(* b (exp-iter b (- counter 1) product))
After invoking exp-iter, in the first line you're passing the result to square, and in the second line you're multiplying the result by b. Compare it with this tail-recursive solution:
(define (exp-iter b counter product)
  (cond ((= counter 0)
         product)
        ((even? counter)
         (exp-iter (square b) (/ counter 2) product))
        (else
         (exp-iter b (- counter 1) (* b product)))))
Notice that after invoking exp-iter there's nothing left to do and the procedure simply returns its value. A smart compiler will detect this and transform the recursive call into a loop that uses a constant amount of stack memory (instead of an amount that grows with every recursive call).
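To connect this with the exercise's hint, here is a hand trace of my own showing that the invariant product * b^counter stays equal to the original b^n, here 2^10 = 1024:

(exp-iter 2 10 1)     ; 1    * 2^10  = 1024
(exp-iter 4 5 1)      ; 1    * 4^5   = 1024
(exp-iter 4 4 4)      ; 4    * 4^4   = 1024
(exp-iter 16 2 4)     ; 4    * 16^2  = 1024
(exp-iter 256 1 4)    ; 4    * 256^1 = 1024
(exp-iter 256 0 1024) ; => 1024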

How does the named let in the form of a loop work?

In an answer which explains how to convert a number to a list the number->list procedure is defined as follows:
(define (number->list n)
  (let loop ((n n)
             (acc '()))
    (if (< n 10)
        (cons n acc)
        (loop (quotient n 10)
              (cons (remainder n 10) acc)))))
Here a "named let" is used. I don't understand how this named let works.
I see that a loop is defined where the variable n is bound to (the outer) n, and the variable acc to the empty list. Then, if n is smaller than 10, n is consed onto acc. Otherwise, "the loop" is applied with n equal to the quotient of n and 10, and acc equal to the cons of the remainder of n divided by 10 onto the previously accumulated list.
I don't understand why loop is called loop (what is looping?), how it can automatically execute and call itself, and how it will actually add each number multiplied by its appropriate multiplier to form a number in base 10.
I hope someone can shed some light on the procedure and the above questions so I can better understand it. Thanks.
The basic idea behind a named let is that it allows you to create an internal function, that can call itself, and invoke it automatically. So your code is equivalent to:
(define (number->list n)
  (define (loop n acc)
    (if (< n 10)
        (cons n acc)
        (loop (quotient n 10)
              (cons (remainder n 10) acc))))
  (loop n '()))
Hopefully, that is easier for you to read and understand.
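For example, here is how (number->list 123) unfolds, written as a hand trace (not part of the original answer):

(loop 123 '())   ; 123 >= 10: peel off (remainder 123 10) = 3
(loop 12 '(3))   ; 12 >= 10: peel off (remainder 12 10) = 2
(loop 1 '(2 3))  ; 1 < 10: cons it on and stop
; => (1 2 3)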
You might, then, ask why people tend to use a named let rather than defining an internal function and invoking it. It's the same rationale people have for using (unnamed) let: it turns a two-step process (define a function and invoke it) into one single, convenient form.
It's called a loop because the function calls itself in tail position. This is known as tail recursion. With tail recursion, the recursive call returns directly to your caller, so there's no need to keep the current call frame around. You can do tail recursion as many times as you like without causing a stack overflow. In that way, it works exactly like a loop.
If you'd like more information about named let and how it works, I wrote a blog post about it. (You don't need to read it to understand this answer, though. It's just there if you're curious.)
A normal let usage can be considered an anonymous procedure call:
(let ((a 10) (b 20))
  (+ a b))

;; is the same as

((lambda (a b)
   (+ a b))
 10
 20)
A named let just binds that procedure to a name that is visible within the procedure's own body, so it is equivalent to a single-procedure letrec:
(let my-chosen-name ((n 10) (acc '()))
  (if (zero? n)
      acc
      (my-chosen-name (- n 1) (cons n acc)))) ; ==> (1 2 3 4 5 6 7 8 9 10)

;; Is the same as:

((letrec ((my-chosen-name
           (lambda (n acc)
             (if (zero? n)
                 acc
                 (my-chosen-name (- n 1) (cons n acc))))))
   my-chosen-name)
 10
 '()) ; ==> (1 2 3 4 5 6 7 8 9 10)
Notice that the body of the letrec just evaluates to the named procedure so that the name isn't in the environment of the first call. Thus you could do this:
(let ((loop 10))
  (let loop ((n loop))
    (if (zero? n)
        '()
        (cons n (loop (- n 1)))))) ; ==> (10 9 8 7 6 5 4 3 2 1)
The procedure loop is in scope only in the body of the inner let, so in the binding (n loop) the name loop still refers to the variable of the outer let rather than being shadowed.
In your example, the name loop is just a name. In Scheme every loop is ultimately done with recursion, but usually the name is used when it's tail recursion and thus an iterative process.

Find the Hardy–Ramanujan number using R5RS scheme. Please suggest improvements in idiom and calculations.

I remember once going to see [Srinivasa Ramanujan] when he was ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways." [G. H. Hardy as told in "1729 (number)"]
In "Math Wrath" Joseph Tartakovsky says about this feat, "So what?
Give me two minutes and my calculator watch, and I'll do the same
without exerting any little gray cells." I don't know how
Mr. Tartakovsky would accomplish that proof on a calculator watch, but
the following is my scheme function that enumerates numbers starting
at 1 and stops when it finds a number that is expressable in two
seperate ways by summing the cubes of two positive numbers. And it
indeeds returns 1729.
There are two areas where I would appreciate suggestions for improvement. One area is, being new to Scheme, style and idiom. The other is the calculations themselves. SISC does not return exact numbers for roots, even when it could. For example, (expt 27 1/3) yields 2.9999999999999996, but I do get exact results when cubing an exact number: (expt 3 3) yields 27. My solution was to take the exact floor of a cube root and then test against the cube of the floor and the cube of the floor plus one, counting a match if either matches. This solution seems messy and hard to reason about. Is there a more straightforward way?
; Find the Hardy-Ramanujan number, which is the smallest positive
; integer that is the sum of the cubes of two positive integers in
; two separate ways.
(define (hardy-ramanujan-number)
  (let ((how-many-sum-of-2-positive-cubes
         ; while i^3 + 1 <= n/2
         ;   tmp := exact_floor(cube-root(n - i^3))
         ;   if n = i^3 + tmp^3 or n = i^3 + (tmp + 1)^3 then count := count + 1
         ; return count
         (lambda (n)
           (let ((cube (lambda (n) (expt n 3)))
                 (cube-root (lambda (n) (inexact->exact (expt n 1/3)))))
             (let iter ((i 1) (count 0))
               (if (> (+ (expt i 3) 1) (/ n 2))
                   count
                   (let* ((cube-i (cube i))
                          (tmp (floor (cube-root (- n cube-i)))))
                     (iter (+ i 1)
                           (+ count
                              (if (or (= n (+ cube-i (cube tmp)))
                                      (= n (+ cube-i (cube (+ tmp 1)))))
                                  1
                                  0))))))))))
    (let iter ((n 1))
      (if (= (how-many-sum-of-2-positive-cubes n) 2)
          n
          (iter (+ 1 n))))))
Your code looks mostly fine; I see a few very minor things to comment on:
There's no need to define cube and cube-root at the innermost scope,
Using define for internal functions makes it look a little clearer,
This is related to the second part of your question: you're using inexact->exact on a floating point number which can lead to large rationals (in the sense that you allocate a pair of two big integers) -- it would be better to avoid this,
Doing that still doesn't remove the extra test that you do -- but that's only because you're not certain whether you have the right number or whether you missed by 1. Given that it should be close to an integer, you can just use round and then do one check, saving you one test.
Fixing the above, and doing it in one function that returns the number when it's found, and using some more "obvious" identifier names, I get this:
(define (hardy-ramanujan-number n)
  (define (cube n) (expt n 3))
  (define (cube-root n) (inexact->exact (round (expt n 1/3))))
  (let iter ([i 1] [count 0])
    (if (> (+ (cube i) 1) (/ n 2))
        (hardy-ramanujan-number (+ n 1))
        (let* ([i^3 (cube i)]
               [j^3 (cube (cube-root (- n i^3)))]
               [count (if (= n (+ i^3 j^3)) (+ count 1) count)])
          (if (= count 2) n (iter (+ i 1) count))))))
I'm running this on Racket, and it looks like it's about 10 times faster (50ms vs 5ms).
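For reference, calling it with a starting point of 1 searches upward from there. Assuming, as discussed above, that your Scheme's (expt n 1/3) is close enough to the true cube root for round to land on the right integer, this should yield:

(hardy-ramanujan-number 1) ; => 1729 = 1^3 + 12^3 = 9^3 + 10^3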
Different Schemes behave differently when it comes to exact exponentiation: some return an exact result when possible, some an inexact result in all cases. You can look at ExactExpt, one of my set of implementation contrasts pages, to see which Schemes do what.

weirdness in scheme

I was trying to implement Fermat's primality test in Scheme.
I wrote a procedure fermat2 (initially called fermat1) which returns true when a^(p-1) is congruent to 1 (mod p); every prime number p should satisfy the procedure for any a (hence Fermat's little theorem).
But when I tried to count the number of times this procedure yields true for a fixed number of trials (using the countt procedure, described in the code), I got shocking results.
So I changed the procedure slightly (I don't see any logical change; maybe I'm blind) and named it fermat1, renaming the old fermat1 to fermat2, and it worked: the prime numbers passed the test every time.
Why on earth is the inner procedure in fermat2 called fewer times? What is actually wrong? And if it is wrong, why don't I get an error? Instead the computation just seems to be skipped!
All you have to do to see what I mean is evaluate
(countt fermat2 19 100)
(countt fermat1 19 100)
and compare the results for yourself.
Code:
;; Guys, this is really weird; I might not be able to explain this.
;; Just try out
;;   (countt fermat2 19 100)
;;   (countt fermat1 19 100)
;; and compare both values.
;; Did you get any error using countt with fermat2? If yes, please specify why.
;; If it was because of the remainder procedure, please tell your Scheme version.
;; Created on 6 Mar 2011 by fedvasu, using MIT Scheme 9.0 (compiled from source/microcode).

;; fermat-test, based on Fermat's little theorem: a^(p-1) is congruent to 1 (mod p) for prime p.
;; See MIT-SICP, Algorithms by Vazirani, or any other number theory book.
;; This is the correct logic of the fermat-test (note the way it handles 0):
(define (fermat1 n)
  (define (tryout a x)
    ;; (display "I've been called\n")
    (= (remainder (fast-exp a (- x 1)) x) 1))
  ;; this exercises the algorithm
  ;; 1+ to avoid 0
  (define temp (random n))
  (if (= temp 0)
      (tryout (1+ temp) n)
      (tryout temp n)))
;; The old fermat-test, which is wrong. It doesn't produce any error!
;; The inner procedure is called only some of the time; I don't know when exactly.
;; Uncomment the display line to see how many times tryout is called (using countt).
;; I didn't put any condition on when it should be called; rather, it should run
;; every time fermat2 is called. How is it so? (Is it to avoid an error?)
(define (fermat2 n)
  (define (tryout a x)
    ;; (display "I've been called\n")
    (= (remainder (fast-exp a (- x 1)) x) 1))
  ;; this exercises the algorithm
  ;; 1+ to avoid 0
  (tryout (1+ (random n)) n))
;; This is the dependency procedure for fermat1 and fermat2.
;; It calculates base^exp (the parameter is named nexp because exp is already
;; bound to a primitive). And it is correct :)
(define (fast-exp base nexp)
  ;; iterative procedure where a * b^n = base^nexp stays constant throughout
  ;; a bit tricky though
  (define (logexp a b n)
    (cond ((= n 0) a) ;; at the last stage b^n = 1, so a alone holds base^nexp
          ((even? n) (logexp a (square b) (/ n 2)))
          (else (logexp (* a b) b (- n 1)))))
  (logexp 1 base nexp))
;; Utility procedure which takes a procedure, its argument, and an extra
;; argument `times` which tells the number of times to call it.
;; Returns the number of times applying proc to num yielded true
;; (counting how many times the procedure yields true for a fixed input
;; when called a fixed number of times).
;; Uncommenting the display line will help.
(define (countt proc num times)
  (define (pcount p n t c)
    (cond ((= t 0) c)
          ((p n) ;; (display "I'm passed by fermat1\n")
           (pcount p n (- t 1) (+ c 1))) ;; increasing the count
          (else c)))
  (pcount proc num times 0))
I had real pain figuring out what it actually does. Please follow the code and tell me why these discrepancies occur.
Even (countt fermat2 19 100) called twice returns different results.
Let's fix your fermat2 since it's shorter. The definition is: "If n is a prime number and a is any positive integer less than n, then a raised to the nth power is congruent to a modulo n." That means f(a, n) = a^n mod n == a mod n. Your code tests f(a, n) = a^(n-1) mod n == 1, which is different. If we rewrite this according to the definition:
(define (fermat2 n)
  (define (tryout a x)
    (= (remainder (fast-exp a x) x)
       (remainder a x)))
  (tryout (1+ (random n)) n))
This is not correct yet. (1+ (random n)) returns numbers from 1 to n inclusive, while we need [1..n):
(define (fermat2 n)
  (define (tryout a x)
    (= (remainder (fast-exp a x) x)
       (remainder a x)))
  (tryout (+ 1 (random (- n 1))) n))
This is the correct version, but we can improve its readability. Since you're using tryout only in the scope of fermat2, there is no need for the parameter x to pass n along: the latter is already bound in the scope of tryout. So the final version is:
(define (fermat n)
  (define (tryout a)
    (= (remainder (fast-exp a n) n)
       (remainder a n)))
  (tryout (+ 1 (random (- n 1)))))
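With this version the test body runs on every call, so for a prime input every trial passes. Assuming fast-exp and countt from the question are loaded, you would expect:

(countt fermat 19 100) ; => 100, since a^19 is congruent to a (mod 19) for every a in [1, 19)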
Update: I said that the formula used in fermat2 was incorrect. That claim was wrong: if a*k = b*k (mod n) and k is coprime to n, then a = b (mod n), so the two formulations agree. The error, as Vasu pointed out, was in generating the random number for the test.
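To see the failure concretely (my own worked example): when (random 19) happens to return 18, fermat2 tests a = 19 = n, and n always divides a^(n-1) in that case, so the prime fails the test:

(remainder (expt 19 18) 19) ; => 0, not 1

And because countt's pcount returns the count as soon as the predicate is false, a single unlucky draw stops the counting early, which is exactly why tryout appeared to be called fewer times in fermat2.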

two methods of composing functions, how different in efficiency?

Let f transform one value to another; I'm writing a function that repeats the transformation n times.
I have come up with two different ways:
One is the obvious way that literally applies the function n times, so repeat(f, 4) means x → f(f(f(f(x)))).

The other way is inspired by the fast method for powering: divide the problem into two problems half as large whenever n is even, so repeat(f, 4) means x → g(g(x)) where g(x) = f(f(x)).
At first I thought the second method wouldn't improve efficiency that much. At the end of the day, we would still need to apply f n times, wouldn't we? In the above example, g would still be translated into f o f without any further simplification, right? However, when I tried out the methods, the latter was noticeably faster.
;; computes the composite of two functions
(define (compose f g)
  (lambda (x) (f (g x))))

;; identity function
(define (id x) x)

;; repeats the application of a function, naive way
(define (repeat1 f n)
  (define (iter k acc)
    (if (= k 0)
        acc
        (iter (- k 1) (compose f acc))))
  (iter n id))

;; repeats the application of a function, divide-and-conquer way
(define (repeat2 f n)
  (define (iter f k acc)
    (cond ((= k 0) acc)
          ((even? k) (iter (compose f f) (/ k 2) acc))
          (else (iter f (- k 1) (compose f acc)))))
  (iter f n id))

;; increment function used for testing
(define (inc x) (+ x 1))
In fact, ((repeat2 inc 1000000) 0) was much faster than ((repeat1 inc 1000000) 0). My question is: in what respect is the second method more efficient than the first? Does re-using the same function object save storage and reduce the time spent creating new objects?
After all, the application has to be repeated n times; or, to put it another way, x→((x+1)+1) cannot be automatically reduced to x→(x+2), right?
I'm running on DrScheme 4.2.1.
Thank you very much.
You're right that both versions do the same number of calls to inc -- but there's more overhead than that in your code. Specifically, the first version creates N closures, whereas the second one creates only log(N) closures -- and if the closure creation is most of the work then you'll see a big difference in performance.
There are three things that you can use to see this in more detail:
Use DrScheme's time special form to measure the speed. In addition to the time it took to perform some computation, it will also tell you how much time was spent in GC. You will see that the first version is doing some GC work, while the second doesn't. (Well, it does, but it's so little that it will probably not show.)
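For instance, a minimal comparison along those lines (my own sketch; time is the DrScheme form mentioned above):

(time ((repeat1 inc 1000000) 0)) ; reports cpu/real time plus GC time for the million closures
(time ((repeat2 inc 1000000) 0)) ; only ~log(N) closures, so the GC figure should be near zero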
Your inc function is doing so little that you're measuring only the looping overhead.
For example, when I use this bad version:
(define (slow-inc x)
  (define (plus1 x)
    (/ (if (< (random 10) 5)
           (* (+ x 1) 2)
           (+ (* x 2) 2))
       2))
  (- (plus1 (plus1 (plus1 x))) 2))
the difference between the two uses drops from a factor of ~11 to 1.6.
Finally, try this version out:
(define (repeat3 f n)
  (lambda (x)
    (define (iter n x)
      (if (zero? n) x (iter (sub1 n) (f x))))
    (iter n x)))
It doesn't do any compositions, and it works at roughly the same speed as your second version.
The first method essentially applies the function n times, so it is O(n). But the second method does not actually apply the function n times: every time its iter runs, it halves k whenever k is even, so much of the time the size of the problem is halved rather than merely decreased by 1. This gives an overall runtime of O(log(n)).
As Martinho Fernandez suggested, the Wikipedia article on exponentiation by squaring explains it very clearly.
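One way to check the closure-count claim directly is to instrument compose with a counter (my own sketch; compose-calls is a name I made up):

;; Redefine compose to count how many closures it creates.
(define compose-calls 0)
(define (compose f g)
  (set! compose-calls (+ compose-calls 1))
  (lambda (x) (f (g x))))

Building (repeat1 inc 1000000) leaves compose-calls at 1000000, while (repeat2 inc 1000000) leaves it at only 26 (19 halving steps plus 7 odd steps for n = 1000000), matching the N versus log(N) analysis above.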
