MIT-Scheme: Different Behavior on Infinite Loop Depending on Numbers - scheme

I was working on the solution to exercise 1.6 of the SICP book when I noticed two different behaviors depending on the numbers I used.
If I call the sqrt-iter procedure with natural numbers, the interpreter just never stops, but when I force decimal division by using floating-point numbers, the interpreter responds: Aborting!: maximum recursion depth exceeded.
Does anyone know the reason for the different behavior?
I made a gist with my answer to help anyone that wants to run the code, just copy & paste: http://bit.ly/Qv1wru. The mit-scheme version is 9.1.1.

Your good-enough? procedure seems wrong; try this one:
(define (good-enough? guess x)
  (< (abs (- (square guess) x)) 0.001))   ; square as defined in SICP section 1.1.4


How do I load my file at DrRacket

I am an undergraduate who wants to go through "The Scheme Programming Language" as self-study.
Here is a simple program, which I named "reciprocal.ss":
(define reciprocal
  (lambda (n)
    (if (= n 0)
        "oops!"
        (/ 1 n))))
Then I wanted to load my procedure:
(load "reciprocal.ss")
It produces this error:
reciprocal.ss:1:0: #%top-interaction: unbound identifier;
also, no #%app syntax transformer is bound in: #%top-interaction
I did each part just as the book says. Perhaps I am just making a rookie mistake. Any insight would be appreciated.
Since load uses eval, using it outside of a REPL generally will not work, for reasons described in the Namespaces documentation.
Using racket/load can work for you here, however:
loader.ss
#lang racket/load
(load "reciprocal.ss")
(display (reciprocal 10))
reciprocal.ss
(define reciprocal
  (lambda (n)
    (if (= n 0)
        "oops!"
        (/ 1 n))))
Racket (and Scheme at large) has a more complex story than the average language regarding running external code. In general, you should use include when you want to directly 'include' a file, you should use provide/require when you want to establish module boundaries, and you should use load when you are sophisticated enough to be stretching the limits of either.
The simplest approach is not to use load at all.
In "reciprocal.ss" make the first lines:
#lang racket
(provide (all-defined-out))
(define reciprocal
  (lambda (n)
    (if (= n 0)
        "oops!"
        (/ 1 n))))
Then use (require "reciprocal.ss") in the file where you need to use the function reciprocal.
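For example, a small file that uses it might look like this (the file name use-reciprocal.rkt is just illustrative):
;; use-reciprocal.rkt  (hypothetical file name)
#lang racket
(require "reciprocal.ss")    ; brings in everything reciprocal.ss provides

(display (reciprocal 10))    ; prints 1/10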
The load mechanism was used back in the good old days before module systems arrived. Writing (load "foo.ss") basically works as if you manually pasted the contents of foo.ss into the REPL and executed it. This means that the result of your program depends on the order in which files are loaded (if you are using side effects). Module systems handle this (and other things too) much better.
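A made-up illustration of that order dependence:
;; config.ss (made-up contents)
(define verbose? #t)

;; main.ss (made-up contents)
(when verbose? (display "starting up"))

;; (load "config.ss") followed by (load "main.ss") prints the message;
;; loading main.ss first is an error, because verbose? is not defined yet.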

how is the efficiency and cost of nested function in scheme language

In SICP, I have learned to use functions, and it's amazing and useful. But I am confused about the cost of nested functions, as in the code below:
(define (sqrt x)
  (define (good-enough? guess)
    (< (abs (- (square guess) x)) 0.001))
  (define (improve guess)
    (average guess (/ x guess)))
  (define (sqrt-iter guess)
    (if (good-enough? guess)
        guess
        (sqrt-iter (improve guess))))
  (sqrt-iter 1.0))
It defines three child functions. What are the efficiency and cost? What if I use more function calls like this?
UPDATE:
look at the code below, from the Searching for divisors section:
(define (smallest-divisor n)
  (find-divisor n 2))
(define (find-divisor n test-divisor)
  (cond ((> (square test-divisor) n) n)
        ((divides? test-divisor n) test-divisor)
        (else (find-divisor n (+ test-divisor 1)))))
(define (divides? a b)
  (= (remainder b a) 0))
(define (prime? n)
  (= n (smallest-divisor n)))
Q1: divides? and smallest-divisor are not strictly necessary; they are there just for clarity. What are the efficiency and cost? Does the Scheme compiler optimize for this situation? I think I should learn something about compilers. (͡๏̯͡๏)
Q2: How about in an interpreter?
It's implementation dependent. The standard does not say anything about how much a closure should cost, but since Scheme is designed around closures, any implementation should strive to make closures cheap. Many implementations do CPS conversion, which introduces a closure per evaluation operation.
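For instance, a hand-written CPS sketch of a small expression makes those extra closures visible (*& and +& are made-up names, not any particular compiler's output):
;; Direct style:  (display (+ 1 (* 2 3)))
;; CPS style: each step takes an explicit continuation, and each
;; (lambda (v) ...) below is one of those extra closures.
(define (*& x y k) (k (* x y)))
(define (+& x y k) (k (+ x y)))

(*& 2 3 (lambda (v)
          (+& 1 v display)))   ; displays 7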
There is a compiler technique called lambda lifting where local functions are transformed into global ones by turning free variables that are not in global scope into bound parameters, and lifting the procedure out of the procedure it was defined in. The SICP code might be translated into something like:
(define (lift:good-enough? x guess)
  (< (abs (- (square guess) x)) 0.001))
(define (lift:improve x guess)
  (average guess (/ x guess)))
(define (lift:sqrt-iter x guess)
  (if (lift:good-enough? x guess)
      guess
      (lift:sqrt-iter x (lift:improve x guess))))
(define (sqrt x)
  (lift:sqrt-iter x 1.0))
Here all the lift:-prefixed identifiers are unique, so that they do not collide with existing global bindings.
The efficiency question breaks down to a question of compiler optimization. Generally, any time you have a lexical procedure, such as your improve procedure, that references a free variable, such as your x, a closure needs to be created. The closure has an 'environment' that must be allocated to store all free variables. Thus there is some space overhead and some time overhead (to allocate the memory and to fill it).
Where does compiler optimization come into play? When a lexical procedure does not 'escape' its lexical block, as is the case for all of yours, the compiler can inline the procedure. In that case a closure and its environment need not be created.
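A minimal sketch of what 'escaping' means here (make-good-enough? is a made-up name; square and the 0.001 tolerance are from the question's code):
;; The inner lambda is returned from make-good-enough?, so it outlives the
;; call and a closure capturing x must be allocated:
(define (make-good-enough? x)
  (lambda (guess)
    (< (abs (- (square guess) x)) 0.001)))

;; By contrast, the good-enough? inside the question's sqrt is only ever
;; called from sqrt-iter within the same body, never returned or stored,
;; so a compiler may inline it and skip the closure entirely.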
But, importantly, in everyday use you shouldn't worry about the use of lexical procedures.
In one word: negligible. Using nested functions instead of top-level defined functions won't have a noticeable effect on performance. We use nested functions ("block structure" in SICP's terms) for clarity and to better structure a procedure, not for efficiency:
Such nesting of definitions, called block structure, is basically the right solution to the simplest name-packaging problem
There might be a small difference in the time it takes to look up a function depending on where it was defined, but that will depend on implementation details of the interpreter. It's not worth worrying about it.
Not relevant.
One of the important aspects of designing a programming language is often choosing between efficiency on one side and expressiveness on the other. In most situations these two aspects define the characteristics of a low-level and a high-level language, respectively.
Scheme is a small but powerful high-level language from the family of Lisp languages. One of the most powerful features of Scheme is its expressiveness and ability to abstract. As a Scheme programmer you use block structure inside procedures because it encapsulates related behaviour and solves your problem in a structured way, but you don't consider low-level properties of this behaviour, such as the runtime cost of calling procedures or allocating lists. This is part of the joy of programming in a high-level language such as Scheme.
As you say: it's amazing and useful, so continue your work and create something nice. Until the program becomes considerably slow in operation, I wouldn't worry about these things; just concentrate on the harder problems, like defining the concepts and behaviour of your program.

P-NP problems solved? FindBugs solves the halting prob?

There is a tool called FindBugs that can detect infinite, never-ending loops in a given program / code base.
This implies FindBugs can detect whether a program will end or not by analyzing the code.
The halting problem is the problem of deciding:
Given a description of an arbitrary computer program, decide whether
the program finishes running or continues to run forever
So does this imply that the halting problem is solved or a subset of the halting problem is solved?
No, it's not solved. FindBugs only finds some cases of infinite loops, such as this one:
public void myMethod() {
    int a = 0;
    while (true) {
        a++;
    }
}
IIRC, the only false positive it suffers from is if the above method myMethod is never called, in which case you'll still want to delete it as it's dead code.
It does suffer from false negatives: there are many cases of non-ending programs that FindBugs will not detect.
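As a sketch of the kind of loop such a pattern-based check misses, here is a procedure whose termination for every input is exactly the open Collatz conjecture (written in Scheme purely for illustration; FindBugs itself analyzes Java):
;; Nobody knows whether this halts for all positive integers n, so no
;; current static analyzer can flag or clear it in general.
(define (collatz n)
  (cond ((= n 1) 'done)
        ((even? n) (collatz (/ n 2)))
        (else (collatz (+ (* 3 n) 1)))))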
Imagine that you have a tool that always detects infinite loops.
Suppose there exists a universal procedure HALT(CODE, INPUT) that decides whether CODE halts on INPUT. Now consider this program:
if HALT(CODE, CODE), loop forever
else halt
Now run this program with its own description as CODE: whether it halts on its own code or not, you get a contradiction. Why?
Assume CODE halts on CODE: then HALT says so, the program loops forever, and it doesn't stop. Now assume CODE doesn't halt on CODE: then HALT says so, the program halts, and it does stop. Either way, the answer HALT gave is wrong.
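The same argument as a Scheme sketch, assuming a hypothetical (halts? code code) that always answers correctly; halts? is not a real procedure, it is the one being proved impossible:
(define (paradox code)
  (if (halts? code code)
      (let loop () (loop))   ; halts? says it halts, so loop forever
      'halted))              ; halts? says it loops, so halt immediately

;; (paradox paradox) now contradicts whatever halts? answered, so no
;; correct halts? can exist.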
If you were to make a program that analyzes programs for the same platform, with the same limits as the analyzing program itself, it's impossible for such an analyzer to exist. This is known as the halting problem.
That said, the halting problem is solvable for programs that have a lot less memory and code length available than the analyzing program. E.g. I can make a halt? procedure for all 2-byte BrainFuck programs like this:
;; takes a valid 2 byte BF-program
;; and returns if it will halt
(define (halt? x)
  (cond ((equal? x "[]") #f)
        (else #t)))
A larger example is to make an interpreter and hash the memory states and PC location. If a previous state is found, it's an infinite loop. Even with a very good data model, the memory used by the interpreter must be considerably larger than that of what it interprets.
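A minimal sketch of that idea, assuming the tiny machine being interpreted provides initial-state, final-state?, step and state->key (all made-up names); a plain list of seen states stands in for the hash, for portability:
(define (runs-forever? program)
  (let loop ((state (initial-state program))
             (seen '()))
    (cond ((final-state? state) #f)                ; the program halted
          ((member (state->key state) seen) #t)    ; repeated state: infinite loop
          (else (loop (step state)
                      (cons (state->key state) seen))))))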
I have been thinking about constant folding programs, and there the halting problem becomes an issue. My idea is to have a data structure that counts how many times a particular branch in the AST has been seen, with a very large cutoff limit. Thus, if the interpreter has been at a branch more times than the cutoff, that branch ends up in the compiled program instead of being replaced by its computed result. This takes a lot less memory and will establish that some or all parts of a program certainly do return (halt).
Imagine this code:
(define (make-list n f)
  (if (zero? n)
      '()
      (cons (f) (make-list (- n 1) f))))
(define (a)
  (b))
(define (b)
  (c))
(define (c)
  (b))
(display (make-list 4 read))
(display (make-list 4 a))
It's actually pretty bad code, since you don't know in which order the input will be read. The compiler gets to choose what's best, and it might turn it into:
(display-const "(")
(display (read))
(display-const " ")
(display (read))
(display-const " ")
(display (read))
(display-const " ")
(display (read))
(display-const ")")
(display (cons (b) (cons (b) (cons (b) (cons (b) '()))))) ; gave up on (b)

I can't seem to wrap my mind around call/cc in Scheme

Does anyone have a good guide to how it works? Something with visual aids would be nice; every guide I've come across seems to say the same thing, and I need a fresh take on it.
Here's the diagram that was left on our CS lab's whiteboard. So you're going to fetch some apples, and you grab a continuation before you begin. You wander through the forest, collecting apples, and at the end you apply your continuation to your apples. Suddenly, you find yourself where you were before you went into the forest, except with all of your apples.
(display
  (call/cc (lambda (k)
             (begin
               (call-with-forest
                 (lambda (f)
                   (k (collect-apples f))))
               (get-eaten-by-a-bear)))))
=> some apples (and you're not eaten by a bear)
I think a Bar Mitzvah and buried gold might have been involved.
Have a look at the continuation part of PLAI -- it's very "practically oriented", and it uses a "black-hole" visualization for continuations that can help you understand them.
There is no shortcut in learning call/cc. Read the chapters in The Scheme Programming Language or Teach Yourself Scheme in Fixnum Days.
I found that it helps to visualize the call stack. When evaluating an expression, keep track of the call stack at every step. (See for example http://4.flowsnake.org/archives/602) This may be non-intuitive at first, because in most languages the call stack is implicit; you don't get to manipulate it directly.
Now think of a continuation as a function that saves the call stack. When that function is called (with a value X), it restores the saved call stack, then passes X to it.
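A tiny sketch of that mental model (saved-k is just an illustrative name):
;; The continuation captured here is "take the value, add 1, display it".
(define saved-k #f)

(display (+ 1 (call/cc
                (lambda (k)
                  (set! saved-k k)   ; save the "call stack" for later
                  1))))              ; first pass: displays 2

;; Calling (saved-k 41) afterwards restores that saved context and
;; displays 42, as if we were back inside the original (+ 1 ...).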
I never liked visual representations of call/cc, as I can't reflect them back to the code (yes, poor imagination) ;)
Anyway, I think it is easier to start not with call/cc but with call/ec (escape continuation), if you are already familiar with exceptions in other languages.
Here is some code which should evaluate to a value:
(lambda (x) (/ 1 x))
What if x is equal to 0? In other languages we can throw an exception; what about Scheme?
We can throw one too!
(lambda (x)
  (call/ec (lambda (cont)
             (if (= x 0)
                 (cont "Oh noes!")
                 (/ 1 x)))))
call/ec (as well as call/cc) works like "try" here. In imperative languages you can easily jump out of a function simply by returning a value or throwing an exception.
In a functional language you can't just jump out; you have to evaluate something. And call/* comes to the rescue.
What it does is make the escape point available as a function of one argument (named cont in my case). When this function is called, its argument replaces the WHOLE call/* form.
So (cont "Oh noes!") replaces the whole (call/ec (lambda (cont) (if (= x 0) (cont "Oh noes!") (/ 1 x)))) form with the string "Oh noes!".
call/cc and call/ec are almost equal to each other, except that ec is simpler to implement: it only allows jumping up (out of the expression), while with cc you may also jump back down into it from outside.

reduce, or explicit recursion?

I recently started reading through Paul Graham's On Lisp with a friend, and we realized that we have very different opinions of reduce: I think it expresses a certain kind of recursive form very clearly and concisely; he prefers to write out the recursion very explicitly.
I suspect we're each right in some context and wrong in another, but we don't know where the line is. When do you choose one form over the other, and what do you think about when making that choice?
To be clear about what I mean by reduce vs. explicit recursion, here's the same function implemented twice:
(defun my-remove-if (pred lst)
  (fold (lambda (left right)
          (if (funcall pred left)
              right
              (cons left right)))
        lst :from-end t))
(defun my-remove-if (pred lst)
  (if lst
      (if (funcall pred (car lst))
          (my-remove-if pred (cdr lst))
          (cons (car lst) (my-remove-if pred (cdr lst))))
      '()))
I'm afraid I started out a Schemer (now we're Racketeers?) so please let me know if I've botched the Common Lisp syntax. Hopefully the point will be clear even if the code is incorrect.
If you have a choice, you should always express your computational intent in the most abstract terms possible. This makes it easier for a reader to figure out your intentions, and it makes it easier for the compiler to optimize your code. In your example, when the compiler trivially knows you are doing a fold operation by virtue of you naming it, it also trivially knows that it could possibly parallelize the leaf operations. It would be much harder for a compiler to figure that out when you write extremely low level operations.
I'm going to take a slightly-subjective question and give a highly-subjective answer, since Ira already gave a perfectly pragmatic and logical one. :-)
I know writing things out explicitly is highly valued in some circles (the Python guys make it part of their "zen"), but even when I was writing Python I never understood it. I want to write at the highest level possible, all the time. When I want to write things out explicitly, I use assembly language. The point of using a computer (and a HLL) is to get it to do these things for me!
For your my-remove-if example, the reduce one looks fine to me (apart from the Scheme-isms like fold and lst :-)). I'm familiar with the concept of reduce, so all I need to do to understand it is figure out your f(x,y) -> z. For the explicit variant, I had to think about it for a second: I have to figure out the loop myself. Recursion isn't the hardest concept out there, but I think it is harder than "a function of two arguments".
I also don't care for a whole line being repeated -- (my-remove-if pred (cdr lst)). I think I like Lisp in part because I'm absolutely ruthless at DRY, and Lisp allows me to be DRY on axes that other languages don't. (You could put in another LET at the top to avoid this, but then it's longer and more complex, which I think is another reason to prefer the reduction, though at this point I might just be rationalizing.)
I think maybe the contexts in which the Python guys, at least, dislike implicit functionality would be:
when no-one could be expected to guess the behavior (like frobnicate("hello, world", True) -- what does True mean?), or:
cases when it's reasonable for implicit behavior to change (like when the True argument gets moved, or removed, or replaced with something else, since there's no compile-time error in most dynamic languages)
But reduce in Lisp fails both of these criteria: it's a well-understood abstraction that everybody knows, and that isn't going to change, at least not on any timescale I care about.
Now, I absolutely believe there are some cases where it'd be easier for me to read an explicit function call, but I think you'd have to be pretty creative to come up with them. I can't think of any offhand, because reduce and mapcar and friends are really good abstractions.
In Common Lisp one prefers higher-order functions over recursion for data structure traversal, filtering, and other related operations. That can also be seen from the many provided functions like REDUCE, REMOVE-IF, MAP and others.
Tail recursion is a) not guaranteed by the standard, b) may be handled differently by different CL compilers, and c) using tail recursion may have side effects on the machine code generated for surrounding code.
Often, for certain data structures, many of the above operations are implemented with LOOP or ITERATE and provided as higher-order functions. There is a tendency to prefer these language extensions (like LOOP and ITERATE) for iterative code over using recursion for iteration.
(defun my-remove-if (pred list)
  (loop for item in list
        unless (funcall pred item)
        collect item))
Here is also a version that uses the Common Lisp function REDUCE:
(defun my-remove-if (pred list)
  (reduce (lambda (left right)
            (if (funcall pred left)
                right
                (cons left right)))
          list
          :from-end t
          :initial-value nil))
