Call by value in the lambda calculus - lambda-calculus

I'm working my way through Types and Programming Languages, and Pierce, for the call by value reduction strategy, gives the example of the term id (id (λz. id z)). The inner redex id (λz. id z) is reduced to λz. id z first, giving id (λz. id z) as the result of the first reduction, before the outer redex is reduced to the normal form λz. id z.
But call by value order is defined as 'only outermost redexes are reduced', and 'a redex is reduced only when its right-hand side has already been reduced to a value'. In the example id (λz. id z) appears on the right-hand side of the outermost redex, and is reduced. How is this squared with the rule that only outermost redexes are reduced?
Is the answer that 'outermost' and 'innermost' only refers to lambda abstractions? So for a term t in λz. t, t can't be reduced, but in a redex s t, t is reduced to a value v if this is possible, and then s v is reduced?

Short answer: yes. You can never reduce inside a lambda abstraction; you can only reduce outside one, and in an application the argument must be reduced to a value before the application itself is reduced.
The set of evaluation contexts for the call-by-value lambda calculus is defined as follows:
E = [ ] | (λx.t)E | Et
The hole [ ] marks where the next reduction may happen; note that no production places the hole under a λ, so you never reduce inside an abstraction.
For example, in the call-by-name lambda calculus the evaluation contexts are:
E = [ ] | Et | fE
since you can reduce an application even if the argument is not a value.
For example, (λx.x)(z λx.x) is stuck in call by value, but in call by name it reduces to (z λx.x), which is a normal form.
In the context grammar, f is a normal form (in call by name), defined as:
f = λx.t | L
L = x | L f
You can see another definition of contexts in section 19.5.3 of Pierce.
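To make the "never under a λ" point concrete, here is a minimal call-by-value small-step evaluator sketch in Python. The term encoding and the naive substitution are my own assumptions, not from Pierce; substitution is not capture-avoiding, which is safe here because the substituted terms are closed.

```python
# Terms: ('var', name) | ('lam', name, body) | ('app', fun, arg)
def is_value(t):
    return t[0] == 'lam'          # in call by value, the only values are abstractions

def subst(t, x, v):
    # Naive substitution t[x := v]; not capture-avoiding, but safe for closed v.
    if t[0] == 'var':
        return v if t[1] == x else t
    if t[0] == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def step(t):
    """One call-by-value step; returns None if t is a value or stuck.
    Mirrors the contexts E = [ ] | (λx.t)E | Et: never descends under a λ."""
    if t[0] != 'app':
        return None
    fun, arg = t[1], t[2]
    if not is_value(fun):                 # Et: reduce the function part first
        s = step(fun)
        return None if s is None else ('app', s, arg)
    if not is_value(arg):                 # (λx.t)E: then reduce the argument
        s = step(arg)
        return None if s is None else ('app', fun, s)
    return subst(fun[2], fun[1], arg)     # beta-reduce once the argument is a value

identity = ('lam', 'x', ('var', 'x'))                    # id = λx. x
inner = ('lam', 'z', ('app', identity, ('var', 'z')))    # λz. id z
term = ('app', identity, ('app', identity, inner))       # id (id (λz. id z))

t1 = step(term)   # id (λz. id z): the argument is reduced first
t2 = step(t1)     # λz. id z: now the outer redex fires; id z stays un-reduced
```

Note that `step(t2)` returns None: the redex id z sits under the binder λz, so call by value never touches it, exactly as in Pierce's example.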

Is the answer that 'outermost' and 'innermost' only refers to lambda abstractions? So for a term t in λz. t, t can't be reduced, but in a redex s t, t is reduced to a value v if this is possible, and then s v is reduced?
Yes, that's exactly right.

Related

How can I subtract a multiset from a set with a given multiset?

So I'm trying to define a function apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset"
It takes a function C that may convert an 'a multiset into a single element of type 'a. Here we assume that the elements of the domain of C are pairwise disjoint and that none is the empty multiset (I already have another function that checks these things). apply_C also takes another multiset inp. What I'd like the function to do is check whether there is at least one element in the domain of C that is completely contained in inp. If so, perform the multiset difference inp - s, where s is that element of the domain of C, and add the element the (C s) to the resulting multiset. Then keep running the function until no more elements of the domain of C are completely contained in the given inp multiset.
What I tried was the following:
fun apply_C :: "('a multiset ⇒ 'a option) ⇒ 'a multiset ⇒ 'a multiset" where
"apply_C C inp = (if ∃s ∈ (domain C). s ⊆# inp then apply_C C (add_mset (the (C s)) (inp - s)) else inp)"
However, I get this error:
Variable "s" occurs on right hand side only:
⋀C inp s.
apply_C C inp =
(if ∃s∈domain C. s ⊆# inp
then apply_C C
(add_mset (the (C s)) (inp - s))
else inp)
I have been thinking about this problem for days now, and I haven't been able to find a way to implement this functionality in Isabelle. Could I please have some help?
After thinking more about it, I don't believe there is a simple solution for that in Isabelle.
Do you need that?
You have not said why you want that. Maybe you can weaken your assumptions? Do you really need a function to calculate the result?
How to express the definition?
I would use an inductive predicate that expresses one step of rewriting and prove that the solution is unique. Something along the lines of:
context
fixes C :: ‹'a multiset ⇒ 'a option›
begin
inductive apply_CI where
‹apply_CI (M + M') (add_mset (the (C M)) M')›
if ‹M ∈ dom C›
context
assumes
distinct: ‹⋀a b. a ∈ dom C ⟹ b ∈ dom C ⟹ a ≠ b ⟹ a ∩# b = {#}› and
strictly_smaller: ‹⋀a. a ∈ dom C ⟹ size a > 1›
begin
lemma apply_CI_determ:
assumes
‹apply_CI⇧*⇧* M M⇩1› and
‹apply_CI⇧*⇧* M M⇩2› and
‹⋀M⇩3. ¬apply_CI M⇩1 M⇩3›
‹⋀M⇩3. ¬apply_CI M⇩2 M⇩3›
shows ‹M⇩1 = M⇩2›
sorry
lemma apply_CI_smaller:
‹apply_CI M M' ⟹ size M' ≤ size M›
apply (induction rule: apply_CI.induct)
subgoal for M M'
using strictly_smaller[of M]
by auto
done
lemma wf_apply_CI:
‹wf {(x, y). apply_CI y x}›
(*trivial but very annoying because not enough useful lemmas on wf*)
sorry
end
end
I have no clue how to prove apply_CI_determ (no idea whether the conditions I wrote down are sufficient), but I did not spend much time thinking about it.
After that you can define your definitions with:
definition apply_C where
‹apply_C M = (SOME M'. apply_CI⇧*⇧* M M' ∧ (∀M⇩3. ¬apply_CI M' M⇩3))›
and prove the property in your definition.
How to execute it
I don't see how to write an executable function on multisets directly. The problem you face is that one step of apply_C is nondeterministic.
If you can use lists instead of multisets, you get an order on the elements for free, and you can use subseqs, which gives you all possible subsequences. Rewrite using the first element of subseqs that is in the domain of C, and iterate as long as any rewriting is possible.
Link that to the inductive predicate to prove termination and that it calculates the right thing.
Remark that in general you cannot extract a list out of a multiset, but it is possible to do so in some cases (e.g., if you have a linorder over 'a).
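As a sanity check of the intended behaviour (not of the Isabelle development itself), here is a Python sketch of apply_C on multisets via collections.Counter. The encoding of C's domain as sorted tuples is my own assumption; the termination argument mirrors the strictly_smaller assumption above.

```python
from collections import Counter

def contained(small, big):
    """True iff multiset small is contained in multiset big."""
    return all(big[k] >= n for k, n in small.items())

def apply_C(C, inp):
    """C maps a multiset, encoded as a sorted tuple of its elements, to the
    single element it rewrites to.  Repeatedly pick some key s of C contained
    in inp and rewrite inp := (inp - s) + {C s}.  Terminates when every key
    has size > 1 (strictly_smaller), since each step strictly shrinks inp."""
    inp = Counter(inp)
    progress = True
    while progress:
        progress = False
        for s, r in C.items():
            cs = Counter(s)
            if contained(cs, inp):
                inp = inp - cs        # multiset difference
                inp[r] += 1           # add the replacement element
                progress = True
                break
    return inp

# Example rules: {1,1} rewrites to 2, {2,2} rewrites to 3
C = {(1, 1): 2, (2, 2): 3}
```

Under the disjointness assumption the result should not depend on which fitting key is chosen first, which is exactly what the apply_CI_determ lemma is meant to establish.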

How to set mutually exclusive probabilities in Problog?

A person X can either be inpatient or outpatient.
Given the fact location(X,outpatient) how can Problog infer that the probability of location(X,inpatient) is 0?
For example I want a side effect of:
person(1).
location(1,inpatient).
dependent(1,opioids).
receive(1,clonidine).
query(detoxification(1,opioids,success)).
to be an inference that location(1,outpatient) has zero probability.
If I write location(X,outpatient);location(X,inpatient)., all queries return both with a probability of 1.
If I write P::location(X,outpatient);(1-P)::location(X,inpatient). that gives an error because I haven't specified a value for P. If I specify a value for P, that value is never updated (as expected, because Problog treats variables as algebraic variables and I haven't told Problog to update P).
If I write location(X,outpatient) :- \+ location(X,inpatient). I create a negative cycle, which I have to if I am to specify the inverse goal.
One solution:
P::property(X,location,inpatient);(1-P)::property(X,location,outpatient) :-
    inpatient(X),
    P is 1.
P::property(X,location,outpatient);(1-P)::property(X,location,inpatient) :-
    outpatient(X),
    P is 1.
P::inpatient(X);(1-P)::outpatient(X) :-
    property(X,location,inpatient),
    P is 1.
P::outpatient(X);(1-P)::inpatient(X) :-
    property(X,location,outpatient),
    P is 1.
This binds inpatient/1 to property/3 for the property of location with value inpatient.

Efficient Union Find with Existential Quantifier?

Is there a classical algorithm to solve the following problem?
Assume the union find algorithm without existential quantifiers
has the following input:
x1 = y1 /\ .. /\ xn = yn
It will then build some datastructure u, so that I can check
u.root(x)==u.root(y), to decide whether x and y are in the same
subgraph.
The input can be characterized by the following grammar:
Input :== Var = Var | Input /\ Input
Assume now we also allow existential quantifiers:
Input :== Var = Var | Input /\ Input | exists Var Input
What union find algorithm could deal with such an input?
I am still assuming that the algorithm builds some datastructure
u, where I can check via u.root(x)==u.root(y) whether x and
y are in the same subgraph.
Additionally, u.root(x) should throw an exception when called on a bound variable. These variables should all have been eliminated and should no longer be part of the data structure; that is, the subgraph should have been reduced accordingly, without changing the validity of the result.
Here is a sketch of an algorithm. It will traverse the AST, and
feed a special union find algorithm. First the traversal:
traverse((X = Y)) :- add_conn(X, Y).
traverse(exists(X,I)) :- push_var(X), traverse(I), pop_var_remove_conn(X).
traverse((A /\ B)) :- traverse(A), traverse(B).
The special union find algorithm works with a list. This list defines
the weight of the nodes: the head of the list has weight 0, the second element
weight 1, and so on. add_conn(X,Y) first computes X'=root(X) and Y'=root(Y); the
less weighted of X' and Y' is then connected to the more weighted one.
push_var(X) adds X to the front of the list, making it the least weighted
node. pop_var_remove_conn(X) removes X from the list again, and also removes any
connection established from X to some other node.
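Here is a Python sketch of that weighted union find. The class and method names follow the traversal above; the list-index weighting and the distinction between free and bound variables are my reading of the description, not a definitive implementation.

```python
class ScopedUnionFind:
    """Union find driven by a weight list: smaller index = lighter node.
    Exists-bound variables are pushed to the front (lightest), so they are
    never chosen as the parent of a heavier node; popping them therefore
    only removes their own outgoing link."""
    def __init__(self):
        self.order = []           # list position = weight; index 0 is lightest
        self.parent = {}

    def add_var(self, x):         # free variables: appended, hence heavier
        if x not in self.parent:
            self.order.append(x)
            self.parent[x] = None

    def push_var(self, x):        # exists-bound variable: front of the list
        self.order.insert(0, x)
        self.parent[x] = None

    def pop_var_remove_conn(self, x):
        self.order.remove(x)
        del self.parent[x]        # drops the possible link from x to its root

    def root(self, x):            # raises KeyError for eliminated variables
        while self.parent[x] is not None:
            x = self.parent[x]
        return x

    def add_conn(self, x, y):
        rx, ry = self.root(x), self.root(y)
        if rx == ry:
            return
        # the lighter root is connected to the heavier one
        if self.order.index(rx) < self.order.index(ry):
            self.parent[rx] = ry
        else:
            self.parent[ry] = rx
```

For exists z (x = z /\ z = y): after push_var('z'), add_conn('x','z'), add_conn('z','y'), and pop_var_remove_conn('z'), the free variables x and y still share a root, while root('z') raises an exception, as the question requires.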

What's the formal term for a function that can be written in terms of `fold`?

I use the LINQ Aggregate operator quite often. Essentially, it lets you "accumulate" a function over a sequence by repeatedly applying the function on the last computed value of the function and the next element of the sequence.
For example:
int[] numbers = ...
int result = numbers.Aggregate(0, (result, next) => result + next * next);
will compute the sum of the squares of the elements of an array.
After some googling, I discovered that the general term for this in functional programming is "fold". This got me curious about functions that could be written as folds. In other words, the f in f = fold op.
I think that a function that can be computed with this operator only needs to satisfy (please correct me if I am wrong):
f(x1, x2, ..., xn) = f(f(x1, x2, ..., xn-1), xn)
This property seems common enough to deserve a special name. Is there one?
An Iterated binary operation may be what you are looking for.
You would also need to add some stopping conditions like
f(x) = something
f(x1,x2) = something2
They define a binary operation f and another function F in the link I provided to handle what happens when you get down to f(x1,x2).
To clarify the question: 'sum of squares' is a special function because it has the property that it can be expressed in terms of the fold functional plus a lambda, ie
sumSq = fold ((result, next) => result + next * next) 0
Which functions f have this property, where dom f = { A tuples }, ran f :: B?
Clearly, due to the mechanics of fold, the statement that f is foldable is the assertion that there exists an h :: A * B -> B such that for any n > 0, x1, ..., xn in A, f ((x1,...xn)) = h (xn, f ((x1,...,xn-1))).
The assertion that the h exists says almost the same thing as your condition that
f((x1, x2, ..., xn)) = f((f((x1, x2, ..., xn-1)), xn)) (*)
so you were very nearly correct; the difference is that you are requiring A = B, which is a bit more restrictive than being a general fold-expressible function. More problematically, though, fold in general also takes a starting value a, which is set to a = f nil. The main reason your formulation (*) is wrong is that it assumes that h is whatever f does on pair lists, but that is only true when h(x, a) = a. In your example of sum of squares, the starting value you gave to Aggregate was 0, which does nothing when you add it; but there are fold-expressible functions where the starting value does something, in which case we have a fold-expressible function that does not satisfy (*).
For example, take this fold-expressible function lengthPlusOne:
lengthPlusOne = fold ((result, next) => result + 1) 1
f (1) = 2, but f(f(), 1) = f(1, 1) = 3.
Finally, let's give an example of a function on lists that is not expressible in terms of fold. Suppose we had a black box function and tested it on these inputs:
f (1) = 1
f (1, 1) = 1 (1)
f (2, 1) = 1
f (1, 2, 1) = 2 (2)
Such a function on tuples (=finite lists) obviously exists (we can just define it to have those outputs above and be zero on any other lists). Yet, it is not foldable because (1) implies h(1,1)=1, while (2) implies h(1,1)=2.
I don't know if there is other terminology than just saying 'a function expressible as a fold'. Perhaps a (left/right) context-free list function would be a good way of describing it?
In functional programming, fold is used to aggregate results over collections like lists, arrays, and sequences. Your formulation of fold is incorrect, which leads to confusion. A correct formulation could be:
fold f e [x1, x2, x3,..., xn] = f((...f(f(f(e, x1),x2),x3)...), xn)
The requirement for f is actually very loose. Let's say the type of the elements is T and the type of e is U. So the function f takes two arguments, the first of type U and the second of type T, and returns a value of type U (because this value will be supplied as the first argument of f again). In short, we have an "accumulate" function with the signature f: U * T -> U. For this reason, I don't think there is a formal term for these kinds of functions.
In your example, e = 0, T = int, U = int, and your lambda function (result, next) => result + next * next has the signature f: int * int -> int, which satisfies the condition for "foldable" functions.
In case you want to know, another variant of fold is foldBack, which accumulates results with the reverse order from xn to x1:
foldBack f [x1, x2,..., xn] e = f(x1, f(x2, ..., f(xn, e)...))
There are interesting cases with commutative and associative functions, which satisfy f(x, y) = f(y, x); for these, fold and foldBack return the same result. About fold itself, it is a specific instance of catamorphism in category theory. You can read more about catamorphism here.
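The fold / foldBack pair can be sketched in Python with functools.reduce (foldBack is just a fold over the reversed list); the sumSq and lengthPlusOne examples from the earlier answers come out as follows. The helper names are mine, not from any library.

```python
from functools import reduce

def fold(f, e, xs):
    """fold f e [x1,...,xn] = f(... f(f(e, x1), x2) ..., xn)"""
    return reduce(f, xs, e)

def fold_back(f, xs, e):
    """foldBack f [x1,...,xn] e = f(x1, f(x2, ... f(xn, e) ...))"""
    return reduce(lambda acc, x: f(x, acc), reversed(xs), e)

def sum_sq(xs):
    # sumSq = fold ((result, next) => result + next * next) 0
    return fold(lambda result, nxt: result + nxt * nxt, 0, xs)

def length_plus_one(xs):
    # lengthPlusOne = fold ((result, next) => result + 1) 1
    return fold(lambda result, _nxt: result + 1, 1, xs)
```

For a commutative and associative f such as addition, fold f e xs equals fold_back f xs e; for a non-commutative f such as subtraction, the two generally differ.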

DPLL algorithm definition

I am having some problems understanding the DPLL algorithm and I was wondering if anyone could explain it to me because I think my understanding is incorrect.
The way I understand it is: I take some set of literals, and if every clause is true under the assignment then the model is true, but if some clause is false then the model is false.
I recursively check the model by looking for a unit clause; if there is one, I set the value of its literal to make it true, then update the model, removing all clauses that are now true and removing all literals that are now false.
When there are no unit clauses left, I choose any other literal and try assigning the values that make it true and that make it false; then again I remove all clauses that are now true and all literals that are now false.
DPLL requires a problem to be stated in conjunctive normal form, that is, as a set of clauses, each of which must be satisfied.
Each clause is a set of literals {l1, l2, ..., ln}, representing the disjunction of those literals (i.e., at least one literal must be true for the clause to be satisfied).
Each literal l asserts that some variable is true (x) or that it is false (~x).
If any literal is true in a clause, then the clause is satisfied.
If all literals in a clause are false, then the clause is falsified under the current assignment, and that assignment cannot be extended to a solution.
A solution is an assignment of true/false values to the variables such that every clause is satisfied. The DPLL algorithm is an optimised search for such a solution.
DPLL is essentially a depth first search that alternates between three tactics. At any stage in the search there is a partial assignment (i.e., an assignment of values to some subset of the variables) and a set of undecided clauses (i.e., those clauses that have not yet been satisfied).
(1) The first tactic is Pure Literal Elimination: if an unassigned variable x only appears in its positive form in the set of undecided clauses (i.e., the literal ~x doesn't appear anywhere) then we can just add x = true to our assignment and satisfy all the clauses containing the literal x (similarly if x only appears in its negative form, ~x, we can just add x = false to our assignment).
(2) The second tactic is Unit Propagation: if all but one of the literals in an undecided clause are false, then the remaining one must be true. If the remaining literal is x, we add x = true to our assignment; if the remaining literal is ~x, we add x = false to our assignment. This assignment can lead to further opportunities for unit propagation.
(3) The third tactic is to simply choose an unassigned variable x and branch the search: one side trying x = true, the other trying x = false.
If at any point we end up with an unsatisfiable clause then we have reached a dead end and have to backtrack.
There are all sorts of clever further optimisations, but this is the core of almost all SAT solvers.
Hope this helps.
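Two of the three tactics (unit propagation and branching; pure literal elimination is omitted for brevity) can be sketched in Python. This is a minimal solver of my own, using integer literals in the DIMACS style, where 3 means x3 is true and -3 means x3 is false:

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL sketch.  Clauses are lists of non-zero ints.
    Returns a partial satisfying assignment {var: bool}, or None."""
    if assignment is None:
        assignment = {}

    def simplify(clauses, lit):
        # Set literal lit to true: drop satisfied clauses, shrink the rest.
        out = []
        for c in clauses:
            if lit in c:
                continue                      # clause satisfied
            reduced = [l for l in c if l != -lit]
            if not reduced:
                return None                   # empty clause: dead end
            out.append(reduced)
        return out

    # Unit propagation: a one-literal clause forces that literal.
    unit = next((c[0] for c in clauses if len(c) == 1), None)
    while unit is not None:
        assignment[abs(unit)] = unit > 0
        clauses = simplify(clauses, unit)
        if clauses is None:
            return None
        unit = next((c[0] for c in clauses if len(c) == 1), None)

    if not clauses:
        return assignment                     # every clause satisfied

    # Branch: try both values of the first literal of the first clause.
    lit = clauses[0][0]
    for choice in (lit, -lit):
        simplified = simplify(clauses, choice)
        if simplified is not None:
            branch = dict(assignment)
            branch[abs(choice)] = choice > 0
            result = dpll(simplified, branch)
            if result is not None:
                return result
    return None
```

Backtracking is implicit in the recursion: when a simplification yields an empty clause, that branch returns None and the other value is tried.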
The Davis–Putnam–Logemann–Loveland (DPLL) algorithm is a backtracking-based search algorithm for deciding the satisfiability of propositional logic formulae in conjunctive normal form, i.e., for solving the satisfiability (SAT) problem.
Any boolean formula can be expressed in conjunctive normal form (CNF) which means a conjunction of clauses i.e. ( … ) ^ ( … ) ^ ( … )
where a clause is a disjunction of boolean variables i.e. ( A v B v C’ v D)
an example of boolean formula expressed in CNF is
(A v B v C) ^ (C’ v D) ^ (D’ v A)
and solving the SAT problem means finding a combination of values for the variables in the formula that satisfies it, like A=1, B=0, C=0, D=0
This is an NP-complete problem. In fact, it was the first problem proven to be NP-complete, by Stephen Cook and Leonid Levin.
A particular type of SAT problem is 3-SAT, a SAT problem in which every clause has three literals.
The DPLL algorithm is a way to solve the SAT problem (whose practical difficulty depends on the hardness of the input) by recursively creating a tree of potential solutions
Suppose you want to solve a 3-SAT problem like this
(A v B v C) ^ (C’ v D v B) ^ (B v A’ v C) ^ (C’ v A’ v B’)
if we enumerate the variables as A=1, B=2, C=3, D=4 and use negative numbers for negated variables (e.g., A’ = -1), then the same formula can be written in Python like this
[[1,2,3],[-3,4,2],[2,-1,3],[-3,-1,-2]]
now imagine creating a tree in which each node consists of a partial solution. In our example we also depict a vector of the clauses satisfied by the solution
the root node is [-1,-1,-1,-1], which means no values have yet been assigned to the variables, neither 0 nor 1
at each iteration:
we take the first unsatisfied clause, then
if there are no more unassigned variables we can use to satisfy that clause, then there can’t be valid solutions in this branch of the search tree and the algorithm shall return None
otherwise we take the first unassigned variable, set it so that it satisfies the clause, and start recursively from step 1. If the inner invocation of the algorithm returns None, we flip the value of the variable so that it does not satisfy the clause, and set the next unassigned variable so as to satisfy the clause. If all three variables have been tried, or there are no more unassigned variables for that clause, there are no valid solutions in this branch and the algorithm shall return None
See the following example:
from the root node we choose the first variable (A) of the first clause (A v B v C) and set it so that it satisfies the clause, so A=1 (second node of the search tree)
then we continue with the second clause and pick the first unassigned variable (C), setting it so that it satisfies the clause, which means C=0 (third node on the left)
we do the same thing for the clause (B v A’ v C) and set B to 1
when we try to do the same for the last clause, we realize we no longer have unassigned variables and the clause is always false. We then have to backtrack to the previous position in the search tree. We change the value assigned to B, setting B to 0. Then we look for another unassigned variable that can satisfy the third clause, but there is none. So we have to backtrack again, to the second node
once there, we have to flip the assignment of the variable C so that it no longer satisfies the clause, and set the next unassigned variable (D) so as to satisfy it (i.e., C=1 and D=1). This also satisfies the third clause, which contains C.
the last clause to satisfy, (C’ v A’ v B’), has one unassigned variable, B, which can then be set to 0 to satisfy the clause.
At this link http://lowcoupling.com/post/72424308422/a-simple-3-sat-solver-using-dpll you can also find the Python code implementing it
