Beta reduction in lambda calculus

I am trying to reduce the following using beta reduction:
(λx.x x) (λx. λy.x x)
I am getting stuck after the first substitution since it seems to be giving (λx. λy.x x)(λx. λy.x x) which would end in kind of a loop. What am I doing wrong?

Here's an illustration of the evaluation
beta reduction 1
(λx.x x) (λx.λy.x x) →β x x [x := (λx.λy.x x)]
(λx.λy.x x) (λx.λy.x x)
beta reduction 2
(λx.λy.x x) (λx.λy.x x) →β λy.x x [x := (λx.λy.x x)]
λy.(λx.λy.x x) (λx.λy.x x)
result
λy.(λx.λy.x x) (λx.λy.x x)
Now we have reached Weak Head Normal Form – ie, we have a lambda λy without any arguments to apply it to.
To get to Head Normal Form, we can attempt to reduce under the lambda ...
reduction 1
λy.(λx.λy.x x) (λx.λy.x x) →β λy.λy.x x [x := (λx.λy.x x)]
λy.λy.(λx.λy.x x) (λx.λy.x x)
reduction 2 ...
λy.λy.λy.(λx.λy.x x) (λx.λy.x x)
Ok, we can immediately see that this pattern will repeat itself. Each time we try to reduce under the lambda, the result gets wrapped in another λy.
So, this particular lambda expression does not have a Head Normal Form – ie, the evaluation of this expression (when applied to an argument) will never terminate; it will never reach Normal Form.
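For intuition, here is a Scheme analogue (my own illustration, not part of the original answer; any standard Scheme should behave this way). Like most strict languages, Scheme only evaluates to weak head normal form and never reduces under a lambda, so the corresponding Scheme expression happily produces a procedure and stops there:

(define whnf
  ((lambda (x) (x x))
   (lambda (x) (lambda (y) (x x)))))   ; terminates: the result is a closure

; whnf  => #<procedure>   ; this is the λy from the reduction above
; Scheme never tries to normalize under that λy, which is exactly where
; the non-terminating reduction lives.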

You are doing nothing wrong.
The expression
(λx.x x) (λx. λy.x x) beta-reduces in one step to (λx. λy.x x)(λx. λy.x x), which beta-reduces to λy.(λx. λy.x x)(λx. λy.x x) and then to λy.λy.(λx. λy.x x)(λx. λy.x x).
In every step, each new expression is the same as before, but contained in a new abstraction.
In Lambda Calculus, the reduction process may not terminate. In other words, programs may not terminate (as in any Turing-complete programming language).
Another example of this is the term Ω = (λx.x x)(λx.x x)
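As an aside (my own Scheme illustration, not from the answer): Ω can be written directly in Scheme, and there even call-by-value evaluation diverges, because the self-application itself is the redex that keeps reappearing:

(define (self-apply x) (x x))   ; λx.x x

; (self-apply self-apply)       ; this call never returns: it is Ω in Scheme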

Related

how to use the if statement in scheme programming?

I just started learning the Scheme language and below is a question that I'm stuck on a little bit (is there anything wrong with my code, because the error message is kind of weird?).
Prompt: Define a procedure over-or-under which takes in a number x and a number y and returns the following:
-1 if x is less than y
0 if x is equal to y
1 if x is greater than y
What I've tried so far is:
(define (over-or-under x y)
  (if (< x y)
      -1)
  (if (= x y)
      0)
  (if (> x y)
      1))
The error message is :
scm> (load-all ".")
Traceback (most recent call last):
0 (adder 8)
Error: str is not callable: your-code-here
scm> (over-or-under 5 5)
# Error: expected
# 0
# but got
The syntax of if is:
(if condition expression1 expression2)
and its value is the value of expression1 when the condition is true, otherwise it is the value of expression2.
In your function instead you use:
(if condition expression1)
and this is not allowed. Note, moreover, that the three ifs, one after the other, are executed sequentially, and only the value of the last one is actually used as the value returned by the function call.
A way of solving this problem is using a “cascade” of if:
(define (over-or-under x y)
  (if (< x y)
      -1
      (if (= x y)
          0
          1)))
Note that the proper alignment makes clear the order of execution of the different expressions. If (< x y) is true, then the value -1 is the result of the if, and, since the if is the last expression of the function, it is also the value of the function call. If it is not true, we evaluate the “inner” if, checking whether x is equal to y, and so on. Note also that in the third case it is not necessary to check whether x is greater than y, since that is surely true, given that x is neither less than y nor equal to y.
Finally, note that this “cascade” of ifs is so common that Scheme has a more syntactically convenient way of expressing it, the cond expression:
(cond (condition1 expression1)
      (condition2 expression2)
      ...
      (else expressionN))
so you could rewrite the function in this way:
(define (over-or-under x y)
  (cond ((< x y) -1)
        ((= x y) 0)
        (else 1)))

Reduce Lambda Term to Normal Form

I just learned about lambda calculus and I'm having issues trying to reduce
(λx. (λy. y x) (λz. x z)) (λy. y y)
to its normal form. I get to (λy. y (λy. y y)) (λz. (λy. y y) z), then get kind of lost. I don't know where to go from here, or if it's even correct.
(λx. (λy. y x) (λz. x z)) (λy. y y)
As @ymonad notes, one of the y parameters needs to be renamed to avoid capture (conflating different variables that only coincidentally share the same name). Here I rename the latter instance (using α-equivalence):
(λx. (λy. y x) (λz. x z)) (λm. m m)
Next step is to β-reduce. In this expression we can do so in one of two places: we can either reduce the outermost application (λx) or the inner application (λy). I'm going to do the latter, mostly on arbitrary whim / because I thought ahead a little bit and think it will result in shorter intermediate expressions:
(λx. (λz. x z) x) (λm. m m)
Still more β-reduction to do. Again I'm going to choose the inner expression because I can see where this is going, but it doesn't actually matter in this case, I'll get the same final answer regardless:
(λx. x x) (λm. m m)
Side note: these two lambda expressions (which are also known as the "Mockingbird" (as per Raymond Smullyan)) are actually α-equivalent, and the entire expression is the (in)famous Ω-combinator. If we ignore all that however, and apply another β-reduction:
(λm. m m) (λm. m m)
Ah, that's still β-reducible. Or is it? This expression is α-equivalent to the previous. Oh dear, we appear to have found ourselves stuck in an infinite loop, as is always possible in Turing-complete (or should we say Lambda-complete?) languages. One might denote this as our original expression equalling "bottom" (in Haskell parlance), denoted ⊥:
(λx. (λy. y x) (λz. x z)) (λy. y y) = ⊥
Is this a mistake? Well, some good LC theory to know is:
if an expression has a β-normal form, then it will be the same β-normal form no matter what order of reductions was used to reach it, and
if an expression has a β-normal form, then normal order evaluation is guaranteed to find it.
So what is normal order? In short, it is β-reducing the outermost expression at every step. Let's take this expression for another spin!
(λx. (λy. y x) (λz. x z)) (λm. m m)
(λy. y (λm. m m)) (λz. (λm. m m) z)
(λz. (λm. m m) z) (λm. m m)
(λm. m m) (λm. m m)
Darn. Looks like this expression has no normal form – it diverges (doesn't terminate).
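A side note in Scheme (my own example, not from the answer above), illustrating why the two theory points about reduction order matter: Scheme is applicative-order, so a diverging argument spoils a call even when the function ignores it, while delaying the argument behind a thunk (normal-order style) lets the call finish:

(define (self-apply x) (x x))
(define (ignore-arg x) 'done)

; (ignore-arg (self-apply self-apply))              ; never returns in Scheme

(define (ignore-thunk t) 'done)
(ignore-thunk (lambda () (self-apply self-apply)))  ; => done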

Scheme procedure with 2 arguments

Learned to code C, long ago; wanted to try something new and different with Scheme. I am trying to make a procedure that accepts two arguments and returns the greater of the two, e.g.
(define (larger x y)
  (if (> x y)
      x
      (y)))
(larger 1 2)
or,
(define larger
  (lambda (x y)
    (if (> x y)
        x
        (y))))
(larger 1 2)
I believe both of these are equivalent i.e. if x > y, return x; else, return y.
When I try either of these, I get errors e.g. 2 is not a function or error: cannot call: 2
I've spent a few hours reading over SICP and TSPL, but nothing is jumping out (perhaps I need to use a "list" and reference the two elements via car and cdr?)
Any help appreciated. If I am mis-posting, missed a previous answer to the same question, or am otherwise inappropriate, my apologies.
The reason is that, unlike in C and many other languages, in Scheme and in all Lisp languages parentheses are an important part of the syntax.
For instance they are used for function call: (f a b c) means apply (call) function f to arguments a, b, and c, while (f) means apply (call) function f (without arguments).
So in your code (y) means: call the number 2 (the current value of y) as a function, but 2 is a number, not a function (exactly as the error message says).
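A quick illustration of this (my own example; any standard Scheme will do):

(define (hello) 'hi)   ; a procedure of no arguments

(hello)   ; => hi                  ; parentheses mean "call it"
hello     ; => #<procedure hello>  ; no parentheses, no call
; (2)     ; error: 2 is not a procedure (the same kind of error you saw)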
Simply change the code to:
(define (larger x y)
  (if (> x y)
      x
      y))
(larger 1 2)

Unable to evaluate a lambda expression as argument in SICP ex-1.37

The problem can be found at http://mitpress.mit.edu/sicp/full-text/book/book-Z-H-12.html#%_thm_1.37
The problem is to expand a continuing fraction in order to approximate phi. It suggests that your procedure should be able to calculate phi by evaluating:
(cont-frac (lambda (i) 1.0)
           (lambda (i) 1.0)
           k)
My solution is as follows:
(define (cont-frac n d k)
  (if (= k 1) d
      (/ n (+ d (cont-frac n d (- k 1))))))
This solution works when calling (cont-frac 1 1 k), but not when using the lambda expressions as the problem suggests. I get what looks like a type error
;ERROR: "ex-1.37.scm": +: Wrong type in arg1 #<CLOSURE <anon> (x) 1.0>
; in expression: (##+ ##d (##cont-frac ##n ##d (##- ##k 1)))
; in scope:
; (n d k) procedure cont-frac
; defined by load: "ex-1.37.scm"
;STACK TRACE
1; ((##if (##= ##k 1) ##d (##/ ##n (##+ ##d (##cont-frac ##n ##d ...
My question is two-part:
Question 1. Why am I getting this error when using the lambda arguments? I (mistakenly, for sure) thought that (lambda (x) 1) should evaluate to 1. It clearly does not. I'm not sure I understand what it DOES evaluate to: I presume that it doesn't evaluate to anything (i.e., "return a value" -- maybe the wrong term for it) without being passed an argument for x.
It still leaves unanswered why you would have a lambda that returns a constant. If I understand correctly, (lambda (x) 1.0) will always evaluate to 1.0, regardless of what the x is. So why not just put 1.0? This leads to:
Question 2. Why should I use them? I suspect that this will be useful in ex-1.38, which I've glanced at, but I can't understand why using (lambda (x) 1.0) is any different than using 1.0.
In Scheme, a lambda expression creates a function, therefore an expression such as:
(lambda (i) 1.0)
really does have a result: it is a function object.
But if you apply it to an argument (wrap it in another pair of parentheses together with some argument), it will indeed evaluate to 1.0 as you expected:
((lambda (i) 1.0) 42)
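To make the difference concrete, here is a tiny illustration (mine, not part of the original answer; the name g is arbitrary):

(define k 1.0)                ; a number
(define g (lambda (i) 1.0))   ; a function of one argument

; k       => 1.0
; g       => #<procedure>
; (g 42)  => 1.0, whatever argument you pass; g ignores it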
Using lambdas in that exercise is necessary for building a general solution: as you've correctly noticed, in exercise 1.38 you'll be using the same implementation of the cont-frac function, but with different numerator and denominator functions, and you'll see an example where you have to calculate one of them at runtime using the loop counter.
You could compare your exercise solutions with mine, e.g. 1.37 and 1.38
(/ n (+ d (cont-frac n d (- k 1))))))
In this case d is the lambda expression, so it doesn't make any sense to + it; the same goes for n and /. Try something like
(/ (n k) (+ (d k) (cont-frac n d (- k 1))))))
You'll see why in the next exercise. You can also make this tail-recursive.
I named my variables F-d and F-n instead of d and n, because they accept a function that calculates the numerator and denominator terms. (lambda (i) 1.0) is a function that accepts one argument and returns 1.0; 1.0 is just a number. In other continued fractions, the value may vary with the depth (which is why you need to pass k to the numerator and denominator functions to calculate the proper term).
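Putting the two answers together, here is a sketch of a cont-frac that passes the index to the term functions. This is my own version, not necessarily the book's intended one; the helper name term and the count-up recursion are my choices:

(define (cont-frac n d k)
  ;; term i builds N_i / (D_i + rest), counting i up from 1 to k
  (define (term i)
    (if (= i k)
        (/ (n i) (d i))
        (/ (n i) (+ (d i) (term (+ i 1))))))
  (term 1))

; With all terms equal to 1.0 this approaches 1/phi, roughly 0.618:
(cont-frac (lambda (i) 1.0) (lambda (i) 1.0) 12)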

Why do function calls slow things down in clojure?

I've been playing around with the Is Clojure Still Fast? (and prequel Clojure is Fast) code. It seemed unfortunate that inlining the differential equation (f) is one of the steps taken to improve performance. The cleanest/fastest thing I've been able to come up with without doing this is the following:
; As in the referenced posts, for giving a rough measure of cycles/iteration (I know this is a very rough
; estimate...)
(def cpuspeed 3.6) ;; My computer runs at 3.6 GHz
(defmacro cyclesperit [expr its]
  `(let [start# (. System (nanoTime))
         ret# (~@expr (/ 1.0 ~its) ~its)
         finish# (. System (nanoTime))]
     (println (int (/ (* cpuspeed (- finish# start#)) ~its)))))
;; My solution
(defn f [^double t ^double y] (- t y))

(defn mysolveit [^double t0 ^double y0 ^double h ^long its]
  (if (> its 0)
    (let [t1 (+ t0 h)
          y1 (+ y0 (* h (f t0 y0)))]
      (recur t1 y1 h (dec its)))
    [t0 y0 h its]))
; => 50-55 cycles/it
; The fastest solution presented by the author (John Aspden) is
(defn faster-solveit [^double t0 ^double y0 ^double h ^long its]
  (if (> its 0)
    (let [t1 (+ t0 h)
          y1 (+ y0 (* h (- t0 y0)))]
      (recur t1 y1 h (dec its)))
    [t0 y0 h its]))
; => 25-30 cycles/it
The type hinting in my solution helps quite a bit (it's 224 cycles/it without type hinting on either f or solveit), but it's still nearly 2x slower than the inlined version. Ultimately this performance is still pretty decent, but this hit is unfortunate.
Why is there such a performance hit for this? Is there a way around it? Are there plans to find ways of improving this? As pointed out by John in the original post, it seems funny/unfortunate for function calls to be inefficient in a functional language.
Note: I'm running Clojure 1.5 and have :jvm-opts ^:replace [] in a project.clj file so that I can use lein exec/run without it slowing things down (and it will if you don't do this I discovered...)
Benchmarking in the presence of a JIT compiler is tricky; you really must allow for a warm-up period, but then you also can't just run it all in a loop, since it may then be proved a no-op and optimized away. In Clojure, the usual solution is to use Hugo Duncan's Criterium.
Running a Criterium benchmark for (solveit 0.0 1.0 (/ 1.0 1000000) 1000000) for both versions of solveit results in pretty much exactly the same timings on my machine (mysolveit ~3.44 ms, faster-solveit ~3.45 ms). That's in a 64-bit JVM run with -XX:+UseConcMarkSweepGC, using Criterium 0.4.2 (criterium.core/bench). Presumably HotSpot just inlines f. In any case, there's no performance hit at all.
Adding to the already good answers, the JVM JIT most often does inline the primitive function calls when warmed up, and in this case, when you bench it with a warmed JIT you see the same results. Just wanted to say Clojure also has an inlining feature though for cases where that yields benefits.
(defn f
  {:inline-arities #{2}
   :inline (fn [t y] `(- (double ~t) (double ~y)))}
  ^double [^double t ^double y]
  (- t y))
Now Clojure will compile away the calls to f, inlining the function at compile time. Whereas the JIT will inline the function at runtime as needed otherwise.
Also note that I added a ^double type hint to the return of f. If you don't do that, it gets compiled to return Object and a cast needs to be added. I'm not sure if that really affects performance much, but if you want a fully primitive function that takes primitives and returns primitives, you need to type hint the return as well.
