Is a recursive function in Scheme always tail-call optimized? - scheme

I've read something about tail-call optimization in Scheme. But I'm not sure whether I understand the concept of tail calls. If I have code like this:
(define (fac n)
  (if (= n 0)
      1
      (* n (fac (- n 1)))))
Can this be optimized so that it won't take stack memory?
Or can tail-call optimization only be applied to a function like this:
(define (factorial n)
  (let fact ([i n] [acc 1])
    (if (zero? i)
        acc
        (fact (- i 1) (* acc i)))))

A useful way to think about tail calls is to ask "what must happen to the result of the recursive procedure call?"
The first function cannot be tail-optimised, because the result of the internal call to fac must be used, and multiplied by n to produce the result of the overall call to fac.
In the second case, however, the result of the 'outer' call to fact is... the result of the inner call to fact. Nothing else has to be done to it, and the latter value can simply be passed back directly as the value of the function. That means that no other function context has to be retained, so it can simply be discarded.
The R5RS standard defines 'tail call' by using the notion of a tail context, which is essentially what I've described above.
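One way to see the difference is to expand a small call by hand (a sketch of the evaluation, not interpreter output). The first definition accumulates pending multiplications:
(fac 3)
(* 3 (fac 2))
(* 3 (* 2 (fac 1)))
(* 3 (* 2 (* 1 (fac 0))))
(* 3 (* 2 (* 1 1)))
6
whereas the internal fact loop of the second definition carries the partial product in acc, so nothing is left pending:
(fact 3 1)
(fact 2 3)
(fact 1 6)
(fact 0 6)
6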

No, the first fac cannot be optimized.
When a function is called, the implementation needs to remember where it was called from, so that it can return to that location once the call is complete and use the result in the rest of the computation (here, the pending multiplication by n inside fac).
fact is implemented differently: the last thing fact does is call itself. There is no need to remember the place we are calling from, so we can perform tail-call elimination; there are no other actions to perform after the recursive call to fact returns.

Related

Order of growth in mixed functions

The summation procedure of section 1.3.1 of SICP produces a linear recursive process with order of N space and time complexity. The code for this procedure is:
(define (sum-integers a b)
  (if (< a b)
      0
      (+ a (sum-integers (+ a 1) b))))
What I would like to know is, if I decided that I want to sum a range of Fibonacci numbers using the analogous procedure:
(define (sum-fib a b)
  (if (< a b)
      0
      (+ (fib a) (sum-fib (+ a 1) b))))
with fib defined as:
(define (fib n)
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))
How would I analyse the space and time complexity of sum-fib? Would I ignore the linear recursive flavor of the overall procedure and prioritize the tree recursion of fib within it as a worst case scenario? Would I have to somehow combine the space/time complexities of fib and sum-fib, and if so, how? Also, say I got sum-fib from another programmer and I was using it as a component in a larger system. If my program slowed down because of how fib was implemented, how would I know?
This is my first question on this platform so please also advise on how to better post and find answers to questions. Your contribution is appreciated.
There is a slight error in your code. After checking SICP, I am assuming you meant to use a > instead of a < in both sum-integers and sum-fib. That is the only modification I made, please correct me if it was done erroneously.
Note: I do not have a formal background, but this question has been unanswered for quite a while, so I thought I would share my thoughts for anyone else who happens across it.
Time
When dealing with the time complexity, we care about how many iterations are performed as n grows larger. Here, we can assume n to be the distance between a and b (inclusive) in sum-fib. The function sum-fib itself will only recurse n times in this case. If a was 0 and b was 9, then the function will run 10 times. This is completely linear, or O(n), but it isn't so simple: the next question to ask is what happens for each of these iterations?
We know that the summation part is linear, so all that's left is the Fibonacci function. Inside, you see that it either immediately terminates (O(1)) or branches off into two recursive calls to itself. Big-O notation is concerned with the worst case, meaning the branch. We'll have 1 call turn into 2, which turn into 4, which turn into 8, etc., n times. This behavior is O(2^n).
Don't forget that this is called n times as part of the overarching O(n) summation loop, so the total function will be O(n(2^n)).
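If you want to see that growth concretely, one option is to count the calls with a mutable counter (a rough sketch using set!; fib-calls is a name I made up):
(define fib-calls 0)

(define (fib n)
  (set! fib-calls (+ fib-calls 1))   ; count every call to fib
  (cond ((= n 0) 0)
        ((= n 1) 1)
        (else (+ (fib (- n 1))
                 (fib (- n 2))))))

(fib 20)    ; => 6765
fib-calls   ; => 21891 calls for this single (fib 20)
Increasing n by one grows the count by a factor of roughly 1.6, i.e. exponentially, consistent with the analysis above.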
Space
The space requirements of a function are a bit different. By writing out what's going on by hand, you can start to see the shape of the function form. This is what is shown early on in SICP, where a "pyramid" function is compared to a linear one.
One thing to keep in mind is that Scheme is tail-call optimized. This means that, if a recursive call is at the end of a function (meaning that there are no instructions which take place after the recursive call), then the frame can be reused, and no extra space is required. For example:
(define (loop n)
  (if (> n 2)
      0
      (loop (+ n 1))))
Drawing out (loop 0) would be:
(loop 0)
(loop 1)
(loop 2)
(loop 3)
0
You can see that the space required stays constant: each recursive call replaces the previous frame. Compare this to:
(define (loop n)
  (if (> n 2)
      0
      (+ n (loop (+ n 1)))))
With (loop 0):
(loop 0)
(0 + (loop 1))
(0 + (1 + (loop 2)))
(0 + (1 + (2 + (loop 3))))
(0 + (1 + (2 + 0)))
3
You can see that the space required grows as the number of iterations required grows in this case.
In your case, the space required still grows as n increases, since neither fib nor sum-fib is tail-recursive, but it grows far more slowly than the time does. Space depends on the maximum depth of deferred operations at any one moment, not on the total number of calls. The sum-fib part keeps up to n deferred additions, or O(n). Each fib call, despite making O(2^n) calls in total, only keeps one root-to-leaf path of its call tree on the stack at a time, so its depth is proportional to its argument. Combining them, the peak stack depth grows roughly linearly rather than exponentially. Whether I have every detail right, I am not certain.
How to Test for Slow Functions
What you are looking for is called a profiler. It will watch your code while it runs, and report back to you with information on which functions took the most time, which functions were called most often, etc. For Scheme, DrRacket is an IDE with a built-in profiler.
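If you prefer working from the REPL rather than the GUI, Racket also ships a profile library; here is a minimal sketch (assuming the standard profile collection and the fib from above):
(require profile)

; Run the expression under the statistical profiler and print a
; report of where the time was spent.
(profile (fib 25))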
A word of advice: get your software working first, then worry about profiling and optimization. Many programmers get stuck hyper-optimizing their code before it even works, without knowing where the true bottlenecks lie. You can spend weeks gaining a 1% performance boost with arcane algorithms when a 5-minute tweak could net you a 50% boost.

scheme - why does this function take much longer to run

I have these two functions, foldl and foldr. After writing a chain of function definitions I tested two alternatives, and the only difference I could find between the two chains of function calls and definitions was between these two functions. For some reason the chain that calls foldr takes exceptionally longer (tested with large input).
Here is foldl:
(define (foldl op z ls)
  (if (null? ls)
      z
      (foldl op (op z (car ls)) (cdr ls))))
and here is foldr:
(define (foldr op z ls)
  (if (null? ls)
      z
      (op (car ls) (foldr op z (cdr ls)))))
My question is why does the chain that calls foldr take a ridiculously longer time to run compared to the chain that calls foldl?
Your implementation of foldl is tail recursive because foldl itself is the last function called each time through. Your implementation of foldr is not tail recursive because op is the last thing called each time through.
Ok, so what does that mean?
When foldl calls itself each time through, op has already been applied and returned a value. The compiler can optimize this into an equivalent loop. In contrast, when foldr calls itself, op still needs to be applied and so the program must remember to apply op after the recursive call to foldr returns a value. Unfortunately, the recursive call to foldr cannot return a value until op is applied to the next recursive call to foldr and so on until the end of the list. Then at the end of the list, each of the pending applications of op must be applied one by one.
Remembering all the applications of op that are pending takes time and memory space.
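As a concrete illustration, here is a hand expansion of both folds over a small list (a sketch of the evaluation, not interpreter output). foldl finishes each addition before recursing:
(foldl + 0 '(1 2 3))
(foldl + 1 '(2 3))
(foldl + 3 '(3))
(foldl + 6 '())
6
while foldr piles up pending additions until it reaches the end of the list:
(foldr + 0 '(1 2 3))
(+ 1 (foldr + 0 '(2 3)))
(+ 1 (+ 2 (foldr + 0 '(3))))
(+ 1 (+ 2 (+ 3 (foldr + 0 '()))))
(+ 1 (+ 2 (+ 3 0)))
6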

Why are `not-equal?` and similar negated comparisons not built into Racket?

In Racket (and other Schemes, from what I can tell), the only way I know of to check whether two things are not equal is to explicitly apply not to the test:
(not (= num1 num2))
(not (equal? string1 string2))
It's obviously (not (that-big-of-deal?)), but it's such a common construction that I feel like I must be overlooking a reason why it's not built in.
One possible reason, I suppose, is that you can frequently get rid of the not by using unless instead of when, or by switching the order of the true/false branches in an if statement. But sometimes that just doesn't mimic the reasoning that you're trying to convey.
Also, I know the negated functions are easy to define, but so is <=, for example, and that is built in.
What are the design decisions for not having things like not-equal?, not-eqv?, not-eq? and != in the standard library?
First, you are correct that it is (not (that-big-of-a-deal?))1
The reason Racket doesn't include it out of the box is likely just because it adds a lot of extra primitives without much benefit. I will admit that a lot of languages do have != for not equal, but even in Java, if you want to do a deep equality check using equals() (analogous to equal? in Racket), you have to manually invert the result with a ! yourself.
Having both <= and > (as well as >= and <) was almost certainly just convenient enough to cause the original designers of the language to include it.
So no, there isn't any deep reason why there is not any shortcut for having a not-eq? function built into Racket. It just adds more primitives and doesn't happen to add much benefit. Especially as you still need not to exist on its own anyway.
1I love that pun by the way. Have some imaginary internet points.
I do miss not having a not= procedure (or ≠ as mentioned in #soegaard's comment), but not for the reasons you think.
All the numeric comparison operators are variadic. For example, (< a b c d) is the same as (and (< a b) (< b c) (< c d)). In the case of =, it checks whether all arguments are numerically equal. But there is no procedure to check whether all arguments are all unequal—and that is a different question from whether not all arguments are equal (which is what (not (= a b c d)) checks).
Yes, you can simulate that procedure using a fold. But still, meh.
Edit: Actually, I just answered my own question in this regard: the reason for the lack of a variadic ≠ procedure is that you can't just implement it using n-1 pairwise comparisons, unlike all the other numeric comparison operators. The straightforward approach of doing n-1 pairwise comparisons would mean that (≠ 1 2 1 2) would return true, and that's not really helpful.
I'll leave my original musings in place for context, and for others who wonder similar things.
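For completeness, here is roughly what a correct variadic all-distinct check has to do (all-unequal? is a name I made up; note that it compares each argument against every later one, which is exactly why n-1 neighbour comparisons are not enough):
(define (all-unequal? . xs)
  (cond ((null? xs) #t)
        ((ormap (lambda (y) (= (car xs) y)) (cdr xs)) #f)
        (else (apply all-unequal? (cdr xs)))))

(all-unequal? 1 2 3)   ; => #t
(all-unequal? 1 2 1 2) ; => #f, even though every neighbouring pair differs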
Almost all of the predicates are inherited from Scheme, the standard that #!racket originally followed. Scheme kept the number of procedures to a minimum as a design principle and left it to the user to build more complex structures and code. Feel free to define the ones you'd like:
(define not-equal? (compose1 not equal?))
(define != (compose1 not =))
; and so on
You can put them in a module and require it. Stick to this convention, so that people who read your code know after a minute that every not-<known-predicate> and !<known-predicate> is just (compose not <known-predicate>).
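A sketch of such a module, saved as e.g. negations.rkt (the file name is mine):
#lang racket
(provide not-equal? !=)
(define not-equal? (compose1 not equal?))
(define != (compose1 not =))
and then (require "negations.rkt") wherever you need the predicates.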
If you want less work and you are not after using the result in filter then making a special if-not might suffice:
(define-syntax-rule (if-not p c a) (if p a c))
(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if-not (pair? tree)
            (+ acc 1) ; base case first
            (aux (cdr tree)
                 (aux (car tree) acc)))))
But it's micro optimizations compared to what I would have written:
(define (leafs tree)
  (let aux ((tree tree) (acc 0))
    (if (not (pair? tree))
        (+ acc 1) ; base case first
        (aux (cdr tree)
             (aux (car tree) acc)))))
To be honest, if I were trying to squeeze out a not I would just have switched the branches manually, since at that point optimizing for speed trumps optimal readability.
I find that one can easily define != (for numbers) using the following macro:
(define-syntax-rule (!= a b)
  (not (= a b)))
(define x 25)
(!= x 25)
(!= x 26)
Output:
#f
#t
That may be the reason why it is not defined in the language; it can easily be created, if needed.

Why closure use seems so "chicken or egg"

I've read and somewhat understand Use of lambda for cons/car/cdr definition in SICP. My problem is understanding the why behind it. My first problem was staring and staring at
(define (cons x y)
  (lambda (m) (m x y)))
and not understanding how this function actually did any sort of consing. Consing as I learned it from various Lisp/Scheme books is putting stuff in lists, i.e.,
(cons 1 ()) => (1)
how does
(define (cons x y)
  (lambda (m) (m x y)))
do anything like consing? But as the light went on in my head: cons was only sort of a placeholder for the eventual definitions of car and cdr. So car is
(define (car z)
  (z (lambda (p q) p)))
and it anticipates an incoming z. But what is this z? When I saw this use:
(car (cons 1 2))
it finally dawned on me that, yes, the closure that cons returns is z, i.e., we're passing (cons 1 2) to car! How weird!
((lambda (m) (m 1 2)) (lambda (p q) p)) ; and then
((lambda (p q) p) 1 2)
which results in grabbing the first expression, since the basic car operation can be thought of as an if statement whose boolean is always true: grab the first one.
Yes, all lists can be thought of as cons-ed together expressions, but what have we won by this strangely backward definition? It's as if any initial, stand-alone definition of cons is not germane. It's as if uses of something define that something, as if there's no something until its uses circumscribe it. Is this the primary use of closures? Can someone give me some other examples?
but what have we won by this strangely backward definition?
The point of the exercise is to demonstrate that data structures can be defined completely in terms of functions; that data structures are not necessary as a primitive construct in a language -- if you have functions (that are closures), that's sufficient. This shows the power of functions, and is probably mind-boggling to someone from outside of functional programming.
It's not that in a real project we would actually define data structures this way. It would be more efficient to use language-provided data structure constructs. But it's important to know that we can do it this way. In computer science, it's useful to be able to "reduce" one construct (data structures) into another construct (functions) so that if we prove something about the second construct, it applies to the first one too.
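To make the reduction complete, here is the whole trio with a use (a small sketch; I renamed them my-cons, my-car and my-cdr only to avoid shadowing the built-ins):
(define (my-cons x y)
  (lambda (m) (m x y)))

(define (my-car z)
  (z (lambda (p q) p)))   ; select the first of the two captured values

(define (my-cdr z)
  (z (lambda (p q) q)))   ; select the second

(my-car (my-cons 1 2)) ; => 1
(my-cdr (my-cons 1 2)) ; => 2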

SICP Exercise 1.5

Exercise 1.5. Ben Bitdiddle has invented a test to determine whether the interpreter he is faced with is using applicative-order
evaluation or normal-order evaluation. He defines the following two
procedures:
(define (p) (p))
(define (test x y)
  (if (= x 0)
      0
      y))
Then he evaluates the expression
(test 0 (p))
What behavior will Ben observe with an interpreter that uses
applicative-order evaluation? What behavior will he observe with an
interpreter that uses normal-order evaluation?
I understand the answer to the exercise; my question lies in how (p) is interpreted versus p. For example, (test 0 (p)) causes the interpreter to hang (which is expected), but (test 0 p) with the above definition immediately evaluates to 0. Why?
Moreover, suppose we changed the definition to (define (p) p). With this new definition, (test 0 (p)) and (test 0 p) both evaluate to 0. Why does this occur? Why doesn't the interpreter hang? I am using DrRacket with the SICP package.
p is a function. (p) is a call to a function.
In your interpreter evaluate p.
p <Return>
==> p : #function
Now evaluate (p). Make sure you know how to kill your interpreter! (There is probably a “Stop” button in DrRacket.)
(p)
Note that nothing happens. Or, at least, nothing visible. The interpreter is spinning away, eliminating tail calls (so, using near 0 memory), calling p.
As p and (p) evaluate to different things, you should expect different behaviour.
As to your second question: you are defining p to be a function that returns itself. Again, try evaluating p and (p) with your (define (p) p) and see what you get. My guess (I am using a computer on which I cannot install anything and which has no Scheme) is that they will evaluate to the same thing. (I might even bet that (eq? p (p)) will evaluate to #t.)
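For the second definition you can check this directly; a sketch of what I would expect at a Racket prompt:
(define (p) p)
p           ; => #<procedure:p>
(p)         ; => #<procedure:p>, the call simply returns p and terminates
(eq? p (p)) ; => #t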
