What is the leftmost-innermost term in a lambda expression? - lambda-calculus

Assume there is a lambda term like this:
If you are reducing it by the applicative strategy (leftmost-innermost), the first step is the delta-reduction of len:
What is the next step? Do I beta-reduce the outer lambda term?
Or do I delta-reduce zero?
The latter looks right to me, because the outer lambda term is not in normal form and zero is its leftmost-innermost term.

Pure lambda calculus doesn't recognize function names (in other words: all functions are anonymous), so delta-reductions are not really applicable to the process of beta-reduction and they don't influence the evaluation (i.e. beta-reduction) order.
In any case you don't need to delta-reduce zero yet, as the left-hand side of the expression can't be beta-reduced on its own - it is just clearer if you first proceed with (cons one nil) zero (λxr.succ r).
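For intuition, here is a small worked example of leftmost-innermost order on a term of my own (not the one from the question):
(λx.x x) ((λy.y) z)
→ (λx.x x) z    (the innermost redex (λy.y) z is reduced first)
→ z z
Only once no inner redex remains is the outer application itself reduced.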

Related

Is having only one-argument functions efficient? - haskell

I have started learning Haskell and have read that every function in Haskell takes only one argument. I can't understand what magic happens under the hood that makes this possible, and I am wondering whether it is efficient.
Example
>:t (+)
(+) :: Num a => a -> a -> a
The signature above means that the (+) function takes one Num and returns another function, which takes another Num and returns a Num.
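As a quick illustration (addOne is my own name, not from the question), the arrow associates to the right, so a -> a -> a really means a -> (a -> a), and partial application just fixes the first argument:
addOne :: Integer -> Integer
addOne = (+) 1    -- (+) applied to one argument: a function awaiting the second
-- addOne 41 == 42, and (+) 1 41 == ((+) 1) 41 == 42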
That much is relatively easy, but I started wondering what happens when functions are a little more complex.
My Questions
For the sake of the example I have written a zipWith function and executed it in two ways: once passing one argument at a time and once passing all arguments at once.
zipwithCustom f (x:xs) (y:ys) = f x y : zipwithCustom f xs ys
zipwithCustom _ _ _ = []
zipWithAdd = zipwithCustom (+)
zipWithAddTo123 = zipWithAdd [1,2,3]
test1 = zipWithAddTo123 [1,1,1]
test2 = zipwithCustom (+) [1,2,3] [1,1,1]
>test1
[2,3,4]
>test2
[2,3,4]
1. Is passing one argument at a time (scenario_1) as efficient as passing all arguments at once (scenario_2)?
2. Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2 (except that scenario_1 probably takes more memory, as it needs to save zipWithAdd and zipWithAddTo123)?
3. Is this correct, and why? In scenario_1 I iterate over [1,2,3] and then over [1,1,1].
4. Is this correct, and why? In scenario_1 and scenario_2 I iterate over both lists at the same time.
I realise that I have asked a lot of questions in one post, but I believe they are connected and will help me (and other people who are new to Haskell) better understand what is actually happening in Haskell to make both scenarios possible.
You ask about "Haskell", but Haskell the language specification doesn't care about these details. It is up to implementations to choose how evaluation happens -- the only thing the spec says is what the result of the evaluation should be, and carefully avoids giving an algorithm that must be used for computing that result. So in this answer I will talk about GHC, which, practically speaking, is the only extant implementation.
For (3) and (4) the answer is simple: the iteration pattern is exactly the same whether you apply zipWithCustom to arguments one at a time or all at once. (And that iteration pattern is to iterate over both lists at once.)
Unfortunately, the answer for (1) and (2) is complicated.
The starting point is the following simple algorithm:
When you apply a function to an argument, a closure is created (allocated and initialized). A closure is a data structure in memory, containing a pointer to the function and a pointer to the argument. When the function body is executed, any time its argument is mentioned, the value of that argument is looked up in the closure.
That's it.
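As a rough sketch of that naive model (toy types of my own, not GHC's actual representation):
-- One node per application: a pointer to the function and a pointer
-- to the argument. Applying a function to n arguments one at a time
-- builds an n-long chain of these nodes.
data Node = App Node Node   -- function, argument
          | Leaf String     -- a function or value at the end of the chain

chain :: Node
chain = App (App (Leaf "f") (Leaf "x")) (Leaf "y")   -- f applied to x, then to y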
However, this algorithm kind of sucks. It means that if you have a 7-argument function, you allocate 7 data structures, and when you use an argument, you may have to follow a 7-long chain of pointers to find it. Gross. So GHC does something slightly smarter. It uses the syntax of your program in a special way: if you apply a function to multiple arguments, it generates just one closure for that application, with as many fields as there are arguments.
(Well... that might not be quite true. Actually, it tracks the arity of every function -- defined, again syntactically, as the number of arguments used to the left of the = sign when that function was defined. If you apply a function to more arguments than its arity, you might get multiple closures or something, I'm not sure.)
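For instance (my example), these two definitions have the same type but different arity under that syntactic rule:
-- arity 2: two arguments appear to the left of the = sign
f :: Int -> Int -> Int
f x y = x + y
-- arity 1: only one argument appears to the left of the = sign
g :: Int -> Int -> Int
g x = \y -> x + y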
So that's pretty nice, and from that you might think that your test1 would then allocate one extra closure compared to test2. And you'd be right... when the optimizer isn't on.
But GHC also does lots of optimization stuff, and one of those is to notice "small" definitions and inline them. Almost certainly with optimizations turned on, your zipWithAdd and zipWithAddTo123 would both be inlined anywhere they were used, and we'd be back to the situation where just one closure gets allocated.
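Roughly, the inlining would rewrite test1 like this (my sketch of what the optimizer might produce, not actual GHC output):
test1
  = zipWithAddTo123 [1,1,1]
  = zipWithAdd [1,2,3] [1,1,1]           -- inline zipWithAddTo123
  = zipwithCustom (+) [1,2,3] [1,1,1]    -- inline zipWithAdd: identical to test2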
Hopefully this explanation gets you to where you can answer questions (1) and (2) yourself, but just in case it doesn't, here are explicit answers:
Is passing one argument at a time as efficient as passing all arguments at once?
Maybe. It's possible that passing arguments one at a time will be converted via inlining to passing all arguments at once, and then of course they will be identical. In the absence of this optimization, passing one argument at a time has a (very slight) performance penalty compared to passing all arguments at once.
Are those scenarios any different in terms of what Haskell is actually doing to compute test1 and test2?
test1 and test2 will almost certainly be compiled to the same code -- possibly even to the point that only one of them is compiled and the other is an alias for it.
If you want to read more about the ideas in the implementation, the Spineless Tagless G-machine paper is much more approachable than its title suggests, and only a little bit out of date.

Find indexes of sublists that contain a value

Let's say we have the following list :
'( (1 2 3) (3 4 5) (7 8 9) (2 9 9) )
I need to create a list with all the indexes of the sublists that contain a given value, e.g. for 2, the result will be '(0 3). It is a homework assignment and we are not allowed to use loops. It is simple to solve this using recursion, but I would like to use functionals, and I don't know if that's possible (without using global variables, set! or other such side-effecting functions). Any hints / suggestions are welcome!
Yes, you can definitely solve this problem with foldl. foldl can be used to model any loop-like traversal of a list.
Honestly, the clearest way to understand how to use foldl for a problem like this is first to write it in a simple recursive way, following the design recipe. You should be careful to use the rest of the list only in the recursive call.
Next, you'd want to refactor the program so that it makes only tail calls; that is, if you're defining a function called f, the "cons" case should be (f ...), with nothing "outside" of it. In order to make this possible, you're allowed to introduce one extra argument for your function, often called the 'accumulator'. The result of the function in the empty case should probably just be the accumulator. At this point, changing to a use of foldl is just a matter of calling foldl with the list, the initial value of the accumulator, and the function that takes the first element of the list and the accumulator and produces the new accumulator.
Whew!
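To make that recipe concrete, here is the same accumulator-passing fold sketched in Haskell (names and code are mine; the Racket version with foldl is structurally identical):
-- The accumulator pairs the current index with the indexes found so far.
indexesOf :: Eq a => a -> [[a]] -> [Int]
indexesOf v xss = reverse (snd (foldl step (0, []) xss))
  where
    step (i, found) xs
      | v `elem` xs = (i + 1, i : found)
      | otherwise   = (i + 1, found)
-- indexesOf 2 [[1,2,3],[3,4,5],[7,8,9],[2,9,9]] == [0,3]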

Eagerly evaluating all predicate calls in Prolog

Reading the SWI-Prolog documentation on meta-predicates, I initially assumed that call(f, ...) is equivalent to f(...), where f is some predicate. But I observe that the behavior of the two actually diverge in certain cases. For instance, suppose a knowledge base includes the clause f(g(x)). Then the query call(f, g(x)) succeeds, whereas f(call(g, x)) does not. This is problematic, because I sometimes need to use clauses whose bodies include nested predicate calls. I'd like Prolog to evaluate all predicate calls eagerly (I wonder if this is the right word?), such that the query f(call(g, x)) reduces to f(g(x)) before unification begins, and succeeds. Is this possible?

applicative-order/call-by-value and normal-order/call-by-name differences

Background
I am learning SICP through an online course and got confused by its lecture notes. In the lecture notes, applicative order seems to equal cbv, and normal order to equal cbn.
Confusion
But the wiki points out that, besides evaluation order (left to right, right to left, or simultaneous), there is a difference between applicative order and cbv:
Unlike call-by-value, applicative order evaluation reduces terms within a function body as much as possible before the function is applied.
I don't understand what it means by "reduced". Aren't applicative order and cbv both going to get the exact value of a variable before going into the function evaluation?
And for normal order and cbn, I am even more confused by the wiki:
In contrast, a call-by-name strategy does not evaluate inside the body of an unapplied function.
I guess this means that normal order would evaluate inside the body of an unapplied function. How could that be?
Question
Could someone give me some more concrete definitions of the four strategies?
Could someone show an example for each strategy, using whatever programming language?
Thanks a lot!
Applicative order (without taking into account the order of evaluation, which in Scheme is undefined) would be equivalent to cbv. All arguments of a function call are fully evaluated before entering the function's body. This is the example given in SICP:
(define (try a b)
  (if (= a 0) 1 b))
If you define the function, and call it with these arguments:
(try 0 (/ 1 0))
When using applicative-order evaluation (the default in Scheme) this will produce an error: it will evaluate (/ 1 0) before entering the body. With normal-order evaluation, it would return 1 instead. The arguments are passed unevaluated to the function's body, and (/ 1 0) is never evaluated because (= a 0) is true, avoiding the error.
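As an aside (my example, not from SICP): Haskell's lazy evaluation behaves like normal order here, so the analogous program returns 1 instead of crashing:
-- b is never demanded when a == 0, so the division by zero never runs
try :: Int -> Int -> Int
try a b = if a == 0 then 1 else b
-- try 0 (1 `div` 0) evaluates to 1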
In the article you link to, they are talking about the lambda calculus when they mention applicative- and normal-order evaluation. In this other wiki article it is explained more clearly, I think. "Reduced" means applying reduction rules to the expression (also covered in the link):
α-conversion: changing bound variables (alpha);
β-reduction: applying functions to their arguments (beta);
Normal order:
The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
Call-by-name
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
A normal form is an equivalent expression that cannot be reduced any further under the rules imposed by the form.
In the same article, they say about call-by-value:
Only the outermost redexes are reduced: a redex is reduced only when its right hand side has reduced to a value (variable or lambda abstraction).
And Applicative order:
The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself.
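A classic example where the two strategies differ (mine, not from the article): write Ω for (λx.x x) (λx.x x), a redex that reduces to itself forever.
Normal order:      (λx.y) Ω → y                  (the argument is never reduced)
Applicative order: (λx.y) Ω → (λx.y) Ω → ...     (reducing the argument first never terminates)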
You can read the article I linked for more information about the lambda calculus. See also Programming Language Foundations.

Declarative interpretation of list inversion in Prolog

I have some trouble with declarative reasoning, so I am here to ask you whether my reasoning is correct, whether there is something wrong, or whether I am missing something...
I have the following problem: write a Prolog program that inverts the elements of a list.
For example, if I call something like:
myreverse([a,b,c,d,e],X). I have to obtain X=[e,d,c,b,a].
I have the following solution:
naiverev([], []).
naiverev([H|T], R) :-
    naiverev(T, RevT),
    append(RevT, [H], R).
This is my interpretation:
I have a fact that says that the inverse of an empty list is an empty list.
If the first list is not empty, the fact is not true: it does not match, so we move on to the next rule.
The rule says that the program proves that the list R is the inverse of the list [H|T].
I can read the logic implication from right to left in the following way:
IF naiverev(T, RevT) is TRUE AND append(RevT, [H], R) is TRUE ---> THEN naiverev([H|T], R) is TRUE
So (in the body of the rule) I am assuming that the "function" naiverev(T, RevT) responds TRUE if RevT is the inverse of T, FALSE otherwise.
At the same time I am assuming that append(RevT, [H], R) responds TRUE if R is RevT with H appended at the end, FALSE otherwise.
Then, if both parts of the rule body are TRUE, the program can infer that the HEAD is TRUE (in this case, that R is the inverse of [H|T]).
Is this reasoning good, or am I missing something?
Like the last two times, you have again intermixed Prolog's engine of computation with the purely declarative reading. Whenever you say, procedurally "the rule is not matched so move on" or anything like that, you're invoking Prolog's algorithm to explain what's going on, not logic. But you're getting better, because the rest of your stuff is much closer than before.
Rule 1: the reverse of the empty list is the empty list. (Same as you have.)
Rule 2: the reverse of [H|T] is the reverse of T, called RevT, followed by the one-element list [H].
What makes this work, of course, is that Prolog will try rule 1, when it doesn't match, it will try rule 2, and recursively build up the result (in a tremendously inefficient way, as you probably realize). But this "making it go" of checking rules and choosing between them and how that process is performed, is what Prolog adds to a declarative or logical statement.
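As an aside (a sketch of mine, in Haskell since that is what the rest of this page uses): the standard linear-time fix is the same accumulator refactoring described in the foldl answer above.
-- Each step conses onto the accumulator instead of appending a
-- singleton at the end, so the reverse is O(n) rather than O(n^2).
revAcc :: [a] -> [a]
revAcc = go []
  where
    go acc []     = acc
    go acc (x:xs) = go (x : acc) xs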
Your reading of the logical implication is correct: If naiverev(T, RevT) is true and append(RevT, [H], R) is true, then naiverev([H|T], R) is true. This is just as #false explained in your previous question, so I would say you're definitely starting to get it. Your remarks about the body being true leading to the head being true are spot on.
So good job, it looks like you're getting it. :)
