What does postcompose mean when talking about lambda calculus?

When reading various papers about the lambda calculus, ISWIM and a number of other things, I have come across the word "postcompose" a lot (e.g. in https://en.m.wikipedia.org/wiki/J_operator). However, after a lot of research, I could not find anything (except one definition that was mathematical and unrelated to what I was looking for). So, what does "postcompose" mean?

The term postcompose is not specific to the lambda calculus. It really just means compose, in the context of two functions, operators, functors or whatever objects on which composition is defined. The "post" prefix is used to disambiguate the order of composition: postcomposing f with g gives g ∘ f, i.e. (g ∘ f)(x) = g(f(x)).
In the context of the lambda calculus you have, e.g., functions built by abstraction, and these are exactly the kind of objects on which composition is defined.
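For a concrete feel, here is a tiny hedged sketch in Haskell, where composition is the (.) operator; the name postcomposeWith is ours, invented for illustration:

-- "Postcompose f with g": f runs first, g runs after ("post") it.
postcomposeWith :: (b -> c) -> (a -> b) -> (a -> c)
postcomposeWith g f = g . f

main :: IO ()
main = print (postcomposeWith (* 2) (+ 1) 10)  -- (10 + 1) * 2 == 22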

Related

Practical usage of lambda calculus

I have recently started self-learning lambda calculus. One thing I am unable to visualize is how this language can be used to build practical applications. One simple use case I could think of is the following: assume we have records of test scores of multiple students in a class, like
name = John, math=30, science = 40 , english = 60
name = Jane, math=22, science = 80, english = 45
name = Mark, math=77, science = 43, english = 83
How can we write a program in lambda calculus (untyped or simply typed) that computes the average of test scores for each student? Please note that my question is not about the parsing of the above text, but about the actual core computation.
Expected output:
name = John, average = 43
name = Jane, average = 49
name = Mark, average = 68
Can you please share how this simple computation can be implemented using lambda calculus?
Even though I only have little knowledge of Haskell, I am not looking for Haskell implementations; I am curious about how this would be done in the lambda calculus itself.
Best regards.
Even though lambda expressions are used in lots of programming languages like JavaScript and C# (and of course all functional languages), the pure lambda calculus (and I assume this is what you refer to) is not meant to be used in practice, just as Turing machines are not meant for any practical application.
The purpose of the lambda calculus is to model and reason about the nature of computation. This includes foundational questions like computability, equivalences, and encodings.
So, while it would be possible to write a lambda calculus expression that does what you are asking for, this expression would be huge and in itself it wouldn't be particularly enlightening. The interesting bit is what the basic building blocks of such an expression would look like: how do you encode booleans, boolean operators, conditional branches, integers, arithmetic operations, lists and list operations - and finally, recursion - in pure lambda calculus? Once you know the answer to these questions, you could in principle build the expression that you are asking for.
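To give a taste of those building blocks, here is a hedged sketch of Church booleans and numerals, written in Haskell purely as executable notation for plain lambda terms (the RankNTypes extension is only needed to give the numerals a name):

{-# LANGUAGE RankNTypes #-}

-- A Church boolean selects one of two alternatives.
true, false :: a -> a -> a
true  t _ = t
false _ f = f

-- The Church numeral n applies a function n times.
type Church = forall a. (a -> a) -> a -> a

zero :: Church
zero _ z = z

suc :: Church -> Church
suc n s z = s (n s z)

add :: Church -> Church -> Church
add m n s z = m s (n s z)

-- Decode to an ordinary Int, just to inspect results.
toInt :: Church -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (add (suc zero) (suc (suc zero))))  -- 3

Lists, predecessor, division and recursion (via a fixed-point combinator) are built in the same spirit, which is why the full average-computing expression would be huge.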

How is predicate logic represented in Prolog?

This may be a strange and broad question and not a 100% programming question, but I hope that is OK. I recently had a discussion about the fact that a lot of programs in Prolog don't follow strict predicate logic (of Frege) but are often "object oriented", which I am trying to grasp.
I know that Prolog is based on first-order predicate logic, especially Horn clauses, and that they are a special form of modus ponens. A fact and a rule if they occur solo are simply clauses, but as soon as I add more than one occurrence they become a predicate.
How are the quantifiers of first-order predicate logic represented and related to fact, rule, predicate or the Prolog concept in general? What does the functor express, and what do the arguments express, in relation to predicate logic? How is (first-order) predicate logic reflected in Prolog, and where does Prolog leave its concepts behind? E.g., how would I define a point, a line and a vertical line in predicate logic and in first-order predicate logic?
How do I formulate this in (first-order) predicate logic, and what is the semantic and logical difference between
vertical(line).
line(vertical).
Or a line and a point in this example. Are point and line not predicate logic?
For me, point(X) is "the set of all points", and when I pick a concrete point, "there exists one point(110, 12)".
point(X,Y).
line(point(W,X), point(Y,Z)).
vertical(line(point(X,Y), point(X,Z))).
horizontal(line(point(X,Y), point(Z,Y))).
Any info helps! Many thanks, H
A chapter of Programming in Prolog by W. Clocksin and C. Mellish is devoted to explaining the relation of Prolog to logic. Quoting from there:
If we wish to discuss how Prolog is related to logic, we must first establish what we mean by logic. Logic was originally devised as a way of representing the form of arguments, so that it would be possible to check in a formal way whether or not they are valid. Thus we can use logic to express propositions, the relations between propositions and how one can validly infer some propositions from others. The particular form of logic that we will be talking about here is called the Predicate Calculus. We will only be able to say a few words about it here. There are scores of good basic introductions to logic you can turn to for background reading.
If we wish to express propositions about the world, we must be able to describe the objects that are involved in them. In Predicate Calculus, we represent objects by terms. A term is of one of the following forms:
A constant symbol. This is a symbol that stands for a single individual or concept. We can think of this as a Prolog atom, and we will use the Prolog syntax. So greek, agatha, and peace are constant symbols.
A variable symbol. This is a symbol that we may want to stand for different individuals at different times. Variables are really only introduced in conjunction with quantifiers, which are discussed below. We can think of them as Prolog variables and will use the Prolog syntax. Thus X, Man, and Greek are variable symbols.
A compound term. A compound term consists of a function symbol, together with an ordered set of terms as its arguments. The idea is that the compound term represents some individual that depends on the individuals represented by the arguments. The function symbol represents how the first depends on the second. For instance, we could have a function symbol standing for the notion of "distance" and two arguments. In this case, the compound term stands for the distance between the objects represented by the arguments. We can think of a compound term as a Prolog structure with the function symbol as the functor. We will write Predicate Calculus compound terms using the Prolog syntax, so that, for instance, wife(henry) might mean Henry's wife, distance(point1, X) might mean the distance between some particular point and some other place to be specified, and classes(mary, dayafter(W)) might mean the classes that Mary teaches on the day after some day W to be specified.
Thus in Predicate Calculus the ways of representing objects are just like the ways available in Prolog.
It doesn't seem appropriate to put the entire chapter here... There is also a very explanatory program in appendix B that performs an automatic translation of WFFs into clauses.
The book is very readable; it's just a pity it's not among the titles in the Free Prolog Programming Books section.
I know that Prolog is based on first-order predicate logic, especially Horn clauses, and that they are a special form of modus ponens.
In a sense, inverse "modus ponens":
a :- b
You want to show "a true", and to do so, you have to show "b true"
A fact and a rule if they occur solo are simply clauses, but as soon as I add more than one occurrence they become a predicate.
No, they are all predicates. A "predicate" is an object/agent/program/platonic phenomenon which expresses that there (objectively) is some "relationship" between "things", and you can ask the Prolog processor about that relationship. There is no direct meaning associated with any of that, though; it's "strings related to strings via strings". We are working with syntactic machines, after all (i.e. computers).
Enter this logic program:
p(x,y). % Predicate p/2 states that there is a relationship p between x and y
And now, you can query the database about what the program is saying:
?- p(x,y).
true. % a p relationship exists (fact, but could also be rule)
?- p(x,A).
A = y. % the thing related to x via p is y
?- p(A,y).
A = x. % the thing related to y via p is x
?- p(A,B).
A = x, % things related via p are x and y
B = y.
?- p(c,d).
false. % not REALLY "false" but "as far as I can tell, there
% is no relationship p between c and d"
Note the interpretation of "false", which is not the "strong false" of classical logic. Even though it is traditionally stated that Prolog works in classical logic, this is not really the case:
From "Logic Programming with Strong Negation" (David Pearce, Gerd Wagner, FU Berlin, 1991), appears in Springer LNAI 475: Extensions of Logic Programming, International Workshop Tübingen, FRG, December 8–10, 1989 Proceedings):
According to the standard view, a logic program is a set of definite Horn clauses. Thus, logic programs are regarded as syntactically restricted first-order theories within the framework of classical logic. Correspondingly, the proof theory of logic programs is considered as the specialized version of classical resolution, known as SLD-resolution. This view, however, neglects the fact that a program clause, a_0 <- a_1, a_2, ..., a_n, is an expression of a fragment of positive logic (a subsystem of intuitionistic logic) rather than an implicational formula of classical logic. The classical interpretation of logic programs, therefore, seems to be a semantical overkill.
It should be clear that in order to explain the deduction mechanism of Prolog one does not have to refer to the indirect method of SLD-resolution which checks for the refutability of the contrary. It is certainly more natural to view Prolog's proof procedure as a kind of natural deduction, as, for example, in [Hallnäs & Schroeder-Heister 1987] and [Miller 1989]. This also is more in line with the intuitions of a Prolog programmer. Since Prolog is the paradigm, logic programming semantics should take it as a point of departure.
Now:
How are the quantifiers of first-order predicate logic represented and related
to fact, rule, predicate or the Prolog concept in general?
That is a long story. Note that Prolog is primarily about "programming using logic", and also about "modeling using logic". The two aspects certainly overlap well for problems that can be solved by explicit enumeration, but Prolog is not made for specifying general FOL constraints describing a sought-for solution. In fact, certain FOL constraints cannot be represented at all, and others have to be transformed into nominally equivalent expressions that are agreeable to the machine. Look up "skolemization". For example: https://www.cs.toronto.edu/~sheila/384/w11/Lectures/csc384w11-KR-tutorial.pdf
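For a flavour of what skolemization does, here is a textbook example (not taken from the linked slides): the existential quantifier is replaced by a fresh function symbol naming the witness,

\forall x\, \exists y\; \mathit{likes}(x, y) \;\rightsquigarrow\; \forall x\; \mathit{likes}(x, f(x))

after which the formula is universally quantified only, which is the shape a Prolog clause such as likes(X, f(X)) can express.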
On the flip side, Prolog provides "meta-predicates" which generate solutions by calling other predicates, so it makes forays into second-order logic. As it must; nobody can survive in the FOL desert for long.
What does the functor express
Nothing. It just stands for itself. Pure syntax. Look up "Herbrand Universe".
How do I formulate this in (first-order) predicate logic, and what is the
semantic and logical difference between
vertical(line).
line(vertical).
It's you who imbues vertical and line with meaning. So, feelings. You want a "vertical line", so you would say the "thing" is the "line" and "vertical" is an attribute of the "line". So vertical(line) sounds appropriate. Or maybe attribute(line, vertical). It depends.
Here:
point(X,Y).
line(point(W,X), point(Y,Z)).
There are two aspects here:
Predicates express "relationships". "Function symbols" are used to construct "things with structure": you can form trees of stuff, with function symbols on the nodes and integers/strings/variables on the leaves. These are called "terms". But terms can appear as predicates or as things, depending on the context; it's quite fluid. So you can, for example, construct a Prolog program with another Prolog program.
point(X,Y)
line(point(W,X), point(Y,Z))
These are terms!
If you type this into a file program.pl:
point_on_line(point(X,Y),line(point(W,X), point(Y,Z))).
The terms appear as "things" related by predicate point_on_line/2. The whole line is itself a term.
If you type this into a file program.pl:
point(X,Y).
line(point(W,X), point(Y,Z)).
The terms appear as "predicates", and point appears both as predicate point/2 and as "thing" about which predicate line/2 is talking.
This is actually a vast subject, and it takes some time to get used to, much more so than functional programming. I had some Prolog and logic courses at uni, but 20 years later I found out that I had badly misunderstood a lot of aspects.

What is Eta abstraction in lambda calculus used for?

Eta abstraction in lambda calculus means the following:
A function f can be written as \x -> f x
Is Eta abstraction of any use while reducing lambda expressions?
Is it only an alternate way of writing certain expressions?
Practical use cases would be appreciated.
The eta reduction/expansion is just a consequence of the law that says that given
f = g
it must be, that for any x
f x = g x
and vice versa.
Hence given:
f x = (\y -> f y) x
we get, by beta reducing the right hand side
f x = f x
which must be true. Thus we can conclude
f = \y -> f y
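A quick sanity check of this in Haskell (the names f and g here are ours, for illustration):

f :: Int -> Int
f = (+ 1)

g :: Int -> Int
g = \y -> f y  -- the eta-expansion of f

main :: IO ()
main = print (map f [1, 2, 3] == map g [1, 2, 3])  -- True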
First, to clarify the terminology, paraphrasing a quote from the Eta conversion article in the Haskell wiki (also incorporating Will Ness' comment above):
Converting from \x -> f x to f would
constitute an eta reduction, and moving in the opposite way
would be an eta abstraction or expansion. The term eta conversion can refer to the process in either direction.
Extensive use of η-reduction can lead to Pointfree programming.
It is also typically used in certain compile-time optimisations.
Summary of the use cases found:
Point-free (style of) programming
Allow lazy evaluation in languages using strict/eager evaluation strategies
Compile-time optimizations
Extensionality
1. Point-free (style of) programming
From the Tacit programming Wikipedia article:
Tacit programming, also called point-free style, is a programming
paradigm in which function definitions do not identify the arguments
(or "points") on which they operate. Instead the definitions merely
compose other functions
Borrowing a Haskell example from sth's answer (which also shows composition, which I chose to ignore here):
inc x = x + 1
can be rewritten as
inc = (+) 1
This is because (following yatima2975's reasoning) inc x = x + 1 is just syntactic sugar for \x -> (+) x 1, which (since addition commutes) is the same function as \x -> ((+) 1) x, so by the pattern
\x -> f x => f
we get
\x -> ((+) 1) x => (+) 1
(Check Ingo's answer for the full proof.)
There is a good thread on Stackoverflow on its usage. (See also this repl.it snippet.)
2. Allow lazy evaluation in languages using strict/eager evaluation strategies
Makes it possible to use lazy evaluation in eager/strict languages.
Paraphrasing from the MLton documentation on Eta Expansion:
Eta expansion delays the evaluation of f until the surrounding function/lambda is applied, and will re-evaluate f each time the function/lambda is applied.
Interesting Stackoverflow thread: Can every functional language be lazy?
2.1 Thunks
I could be wrong, but I think the notion of thunking, or thunks, belongs here. From the Wikipedia article on thunks:
In computer programming, a thunk is a subroutine used to inject an
additional calculation into another subroutine. Thunks are primarily
used to delay a calculation until its result is needed, or to insert
operations at the beginning or end of the other subroutine.
Section 4.2, Variations on a Scheme — Lazy Evaluation, of Structure and Interpretation of Computer Programs (PDF) has a very detailed introduction to thunks (and even though it contains not one occurrence of the phrase "lambda calculus", it is worth reading).
(This paper also seemed interesting but didn't have the time to look into it yet: Thunks and the λ-Calculus.)
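To make the thunk idea concrete, here is a hedged sketch in Haskell; note that Haskell is lazy already, so this merely models the pattern a strict language obtains from eta expansion (wrapping a computation in a lambda so nothing happens until it is applied):

type Thunk a = () -> a

force :: Thunk a -> a
force t = t ()

main :: IO ()
main = do
  -- Building the thunk is cheap; in a strict language the sum would
  -- only be computed at each force (this simple model does not memoize).
  let t = \() -> sum [1 .. 1000000 :: Int]
  print (force t)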
3. Compile-time optimizations
I am completely ignorant on this topic, so I am just presenting sources:
From Georg P. Loczewski's The Lambda Calculus:
In 'lazy' languages like Lambda Calculus, A++, SML, Haskell, Miranda etc., eta conversion, abstraction and reduction alike, are mainly used within compilers. (See [Jon87] page 22.)
where [Jon87] expands to
Simon L. Peyton Jones
The Implementation of Functional Programming Languages
Prentice Hall International, Hertfordshire, HP2 7EZ, 1987.
ISBN 0 13 453325 9.
search results for "eta" reduction abstraction expansion conversion "compiler" optimization
4. Extensionality
This is another topic that I know little about, and it is more theoretical, so here it goes:
From the Lambda calculus wikipedia article:
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
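In symbols (a standard formulation, not quoted from the article):

(\forall x.\; f\,x = g\,x) \iff f = g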
Some other sources:
nLab entry on Eta-conversion, which goes deeper into its connection with extensionality and its relationship with beta-conversion
a ton of info in the thread What's the point of η-conversion in lambda calculus? on the Theoretical Computer Science StackExchange (but beware: the author of the accepted answer seems to have a beef with the commonly held belief about the relationship between eta reduction and extensionality, so make sure to read the entire page; most of it was over my head, so I have no opinions)
The question above has been cross-posted to the Mathematics StackExchange as well.
Speaking of "over my head" stuff: here's Conor McBride's take; the only thing I understood were that eta conversions can be controversial in certain context, but reading his reply was that of trying to figure out an alien language (couldn't resist)
I saved this page recursively in the Internet Archive, so if any of the links are no longer live, that snapshot may have saved them too.

How to avoid using assert and retractall in Prolog to implement global (or state) variables

I often end up writing Prolog code that involves some arithmetic calculation (or state information important throughout the program): I first obtain the value stored in a predicate, then recalculate it, and finally store the new value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not good practice in Prolog. In this regard I would like to ask:
Why is it bad practice in Prolog (though I myself don't like going through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).

adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications the global-variable method shown above can become unavoidable. Sometimes the list of declarations like assert(value(0))... grows very long, with many more assert goals defining more variables. This is done to make communication of values between different predicates possible and to store the states of variables during the program's run.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable, in spite of the various solutions suggested to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction: constraints are much easier to use and are a more general, modern alternative to low-level arithmetic.
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
1 - It's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
The programmer must then think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to decide about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried that (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, maybe not very intuitive way, is using grammar rules (a DCG) instead of plain predicates. Grammar rules are translated adding two list arguments, normally hidden, and we can use those arguments to pass around the state implicitly, and reference/change it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for "Implicitly passing states around"; you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
More generally, and for further practical examples, see Thinking in States, from the same author.
Edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X, G, _) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_, _, N) :-
    collect_found([], N),
    !.

collect_found(S, L) :-
    getnext(X),
    !,
    collect_found([X|S], L).
collect_found(L, L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be the very same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not reentrant, because the '$mark' marker is shared: nested calls would interfere with each other.

What is a unification algorithm?

Well, I know it might sound a bit strange, but yes, my question is: "What is a unification algorithm?"
Well, I am trying to develop an application in F# to act like Prolog. It should take a series of facts and process them when making queries.
It was suggested that I get started by implementing a good unification algorithm, but I did not have a clue about this.
Please refer to this question if you want to dig a bit deeper into what I want to do.
Thank you very much and Merry Christmas.
If you have two expressions with variables, then a unification algorithm tries to match the two expressions and gives you an assignment for the variables that makes the two expressions the same.
For example, if you represented expressions in F#:
type Expr =
  | Var of string              // Represents a variable
  | Call of string * Expr list // Call named function with arguments
And had two expressions like this:
Call("foo", [ Var("x"), Call("bar", []) ])
Call("foo", [ Call("woo", [ Var("z") ], Call("bar", []) ])
Then the unification algorithm should give you an assignment:
"x" -> Call("woo", [ Var("z") ]
This means that if you replace all occurrences of the "x" variable in the two expressions, the results of the two replacements will be the same expression. If you had expressions calling different functions (e.g. Call("foo", ...) and Call("bar", ...)) then the algorithm will tell you that they are not unifiable.
There is also some explanation on Wikipedia, and if you search the internet you'll surely find some useful descriptions (and perhaps even an implementation in some functional language similar to F#).
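For concreteness, here is a hedged sketch of the naive recursive algorithm for syntactic unification, written in Haskell with a Term type mirroring the F# Expr above (including the occurs check; production implementations use more efficient variants):

import Control.Monad (foldM)
import qualified Data.Map as M

data Term = Var String | Call String [Term]
  deriving (Eq, Show)

type Subst = M.Map String Term

-- Apply a substitution, fully dereferencing bound variables.
apply :: Subst -> Term -> Term
apply s t@(Var v)   = maybe t (apply s) (M.lookup v s)
apply s (Call f ts) = Call f (map (apply s) ts)

-- Occurs check: refuse to bind v to a term containing v.
occurs :: String -> Term -> Bool
occurs v (Var w)     = v == w
occurs v (Call _ ts) = any (occurs v) ts

unify :: Term -> Term -> Subst -> Maybe Subst
unify l r s = case (apply s l, apply s r) of
  (Var v, Var w) | v == w -> Just s
  (Var v, t)
    | occurs v t -> Nothing                 -- would build an infinite term
    | otherwise  -> Just (M.insert v t s)
  (t, Var v)     -> unify (Var v) t s
  (Call f as, Call g bs)
    | f == g && length as == length bs ->
        foldM (\s' (a, b) -> unify a b s') s (zip as bs)
    | otherwise -> Nothing                  -- clash: different functions

main :: IO ()
main = print $ unify
  (Call "foo" [Var "x", Call "bar" []])
  (Call "foo" [Call "woo" [Var "z"], Call "bar" []])
  M.empty
-- Just (fromList [("x", Call "woo" [Var "z"])])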
I found Baader and Snyder's work to be most informative. In particular, they describe several unification algorithms (including Martelli and Montanari's near-linear algorithm using union-find), and describe both syntactic unification and various kinds of semantic unification.
Once you have unification, you'll also need backtracking. Kiselyov/Shan/Friedman's LogicT framework will help here.
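As a minimal taste of the backtracking idea (the list monad is Haskell's simplest backtracking monad; LogicT generalizes it with fairer search):

-- Each bind introduces a choice point; the empty list prunes a branch.
pairsSummingTo :: Int -> [(Int, Int)]
pairsSummingTo n = do
  x <- [1 .. n]
  y <- [x .. n]
  if x + y == n then return (x, y) else []

main :: IO ()
main = print (pairsSummingTo 5)  -- [(1,4),(2,3)]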
Obviously, destructive unification would be much more efficient than a purely functional one, but much less F-sharpish as well. If it's performance you're after, you will probably end up implementing a subset of the WAM anyway:
https://en.wikipedia.org/wiki/Warren_Abstract_Machine
And probably this could help: Andre Marien, Bart Demoen: A new Scheme for Unification in WAM.
