We are trying to translate a very simple pseudo-code program into predicate logic.
The program is straightforward and sequential: it contains no loops, only variable assignments and if-else statements.
Unfortunately we do not have any good material to work from. It would be great if someone has some
example "conversions" of simple five-line code snippets, or
links to free sources that describe the topic at a surface level. (We only cover predicate and propositional logic and do not want to dive much deeper into the logic space.)
Kind regards
UPDATE:
After enough research I found a solution and can share it, including an example.
The trick is to think of the program state as a tuple of all our variables, plus a program counter which stands for the current instruction to be executed. Take this program:
x = input;
x = x*2;
if (y > 0)
    x = x*y;
else
    x = y;
We will form the predicate P(x, i, y, pc), where i is the input value, x and y are the program variables, and pc is the program counter.
From here we can build premises, e.g.:
∀i∀x∀y (P(x, i, y, 1) ⇒ P(i, i, y, 2))
∀i∀x∀y (P(x, i, y, 2) ⇒ P(x*2, i, y, 3))
∀i∀x∀y ((P(x, i, y, 3) ∧ y > 0) ⇒ P(x*y, i, y, 4))
∀i∀x∀y ((P(x, i, y, 3) ∧ ¬(y > 0)) ⇒ P(y, i, y, 4))
By incrementing the program counter we make sure that the premises are applied in order. Now we are able to construct a proof when given a premise for the input, e.g. P(x, 4, 7, 1).
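For example, from the premise P(x, 4, 7, 1) (input 4 and y = 7), the premises above give, step by step:

P(4, 4, 7, 2)    (after x = input)
P(8, 4, 7, 3)    (after x = x*2)
P(56, 4, 7, 4)   (since 7 > 0, the branch x = x*y is taken)

so the final state has x = 56.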
Is it possible to randomly pick an item from a list of tuples, where each tuple carries its own probability of being chosen?
For example:
X = [(0.60, test1), (0.20, test2), (0.20, test3)]
In this case test1 has a 60% probability of getting chosen over the other ones.
I tried using maybe/1, but that gives me a "binary chance" for each one, whereas I want a fair chance for each member of the list, if that makes sense.
You can easily adapt the solution from this answer (from which I copied the predicates choice/3 and choice/5), in this way:
solve :-
    X = [(0.60, test1), (0.20, test2), (0.20, test3)],
    findall(P, member((P,_), X), LP),
    findall(T, member((_,T), X), LT),
    choice(LT, LP, V),
    writeln(V).
choice([X|_], [P|_], Cumul, Rand, X) :-
    Rand < Cumul + P.
choice([_|Xs], [P|Ps], Cumul, Rand, Y) :-
    Cumul1 is Cumul + P,
    Rand >= Cumul1,
    choice(Xs, Ps, Cumul1, Rand, Y).
choice([X], [P], Cumul, Rand, X) :-
    Rand < Cumul + P.
choice(Xs, Ps, Y) :- random(R), choice(Xs, Ps, 0, R, Y), !.
Then call ?- solve. This is a basic solution; it can be improved, for instance by not calling findall/3 twice... An interesting alternative is to use probabilistic logic programming; check it out.
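A minimal sketch of that improvement (solve2 and split_pairs are just names made up here; it reuses the same choice/3 from above):

solve2 :-
    X = [(0.60, test1), (0.20, test2), (0.20, test3)],
    split_pairs(X, LP, LT),
    choice(LT, LP, V),
    writeln(V).

% split the (Probability, Term) pairs in a single pass
split_pairs([], [], []).
split_pairs([(P,T)|Rest], [P|Ps], [T|Ts]) :-
    split_pairs(Rest, Ps, Ts).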
This is my first logic programming course, so this is a really dumb question, but I cannot for the life of me figure out how this power predicate works. I've tried making a search tree to trace it, but I still cannot understand how it works.
mult(_, 0, 0).
mult(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    mult(X, Y1, Z1),
    Z is Z1 + X.
exp2(_, 0, 1).
exp2(X, Y, Z) :-
    Y > 0,
    Y1 is Y - 1,
    exp2(X, Y1, Z1),
    mult(X, Z1, Z).
So far I get that the exp2 predicate is called until Y reaches zero and the multiplying starts from there, but at the last call, when it's at exp2(2, 1, Z), what is the value of Z and how does the predicate work from there?
Thank you very much =)
EDIT: I'm really sorry for the late reply; I had some problems and couldn't access my PC.
I'll walk through mult/3 in more detail here, but I'll leave exp2/3 to you as an exercise. It's similar.
As I mentioned in my comment, you want to read a Prolog predicate as a rule.
mult(_, 0, 0).
This rule says 0 is the result of multiplying anything (_) by 0. The _ is an anonymous variable, meaning it is a variable whose value you don't care about.
mult(X, Y, Z) :-
This says, Z is the result of multiplying X by Y if....
Y > 0,
Establish that Y is greater than 0.
Y1 is Y - 1,
And that Y1 has the value of Y minus 1.
mult(X, Y1, Z1),
And that Z1 is the result of multiplying X by Y1.
Z is Z1 + X.
And Z is the value of Z1 plus X.
Or reading the mult(X, Y, Z) rule altogether:
Z is the result of multiplying X by Y if Y is greater than 0, and Y1 is Y-1, and Z1 is the result of multiplying X by Y1, and Z is the result of adding Z1 to X.
Now digging a little deeper, you can see this is a recursive definition, as in the multiplication of two numbers is being defined by another multiplication. But what is being multiplied is important. Mathematically, it's using the fact that x * y is equal to x * (y - 1) + x. So it keeps reducing the second multiplicand by 1 and calling itself on the slightly reduced problem. When does this recursive reduction finally end? Well, as shown above, the second rule says Y must be greater than 0. If Y is 0, then the first rule, mult(_, 0, 0) applies and the recursion finally comes back with a 0.
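To see the recursion concretely, here is a hand expansion of mult(2, 3, Z):

mult(2, 3, Z)  needs mult(2, 2, Z1), then Z  is Z1 + 2
mult(2, 2, Z1) needs mult(2, 1, Z2), then Z1 is Z2 + 2
mult(2, 1, Z2) needs mult(2, 0, Z3), then Z2 is Z3 + 2
mult(2, 0, 0)  matches the first rule, so Z3 = 0

Unwinding gives Z2 = 2, Z1 = 4, Z = 6.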
If you are not sure how recursion works or are unfamiliar with it, I highly recommend Googling it to understand it. That is, indeed, a concept that applies to many computer languages. But you need to be careful about learning Prolog via comparison with other languages. Prolog is fundamentally different in its behavior from procedural/imperative languages like Java, Python, C, C++, etc. It's best to get used to interpreting Prolog rules and facts as I have described above.
Say you want to compute 2^3 and assign the result to R.
For that you will call exp2(2, 3, R).
It will recursively call exp2(2, 2, R1), then exp2(2, 1, R2), and finally exp2(2, 0, R3).
At this point exp2(_, 0, 1) will match and R3 will be bound to 1.
Then, as the call stack unwinds, 1 will be multiplied by 2 three times.
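In detail, for exp2(2, 3, R):

exp2(2, 3, R)  needs exp2(2, 2, R1), then mult(2, R1, R)
exp2(2, 2, R1) needs exp2(2, 1, R2), then mult(2, R2, R1)
exp2(2, 1, R2) needs exp2(2, 0, R3), then mult(2, R3, R2)
exp2(2, 0, 1)  matches the base case, so R3 = 1

Unwinding gives R2 = 2, R1 = 4, R = 8. So at the exp2(2, 1, Z) call from the question, Z gets bound to 2: the 1 from the base case multiplied by 2 once.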
In Java this logic would be encoded as follows. Execution would go pretty much the same route.
public static int Exp2(int X, int Y) {
    if (Y == 0) {             // exp2(_, 0, 1).
        return 1;
    }
    if (Y > 0) {              // Y > 0
        int Y1 = Y - 1;       // Y1 is Y - 1
        int Z1 = Exp2(X, Y1); // exp2(X, Y1, Z1)
        return X * Z1;        // mult(X, Z1, Z)
    }
    return -1;                // this should never happen
}
I have defined a function f(x, y, z) in Julia and I want to parallely compute f for many values of x, holding y and z fixed. What is the "best practices" way to do this using pmap?
It would be nice if it was something like pmap(f, x, y = 5, z = 8), which is how the apply family handles fixed arguments in R, but it doesn't appear to be as simple as that. I have devised solutions, but I find them inelegant and I doubt that they will generalize nicely for my purposes.
I can wrap f in a function g where g(x) = f(x, y = 5, z = 8). Then I simply call pmap(g, x). This is less parsimonious than I would like.
I can set 5 and 8 as default values for y and z when f is defined and then call pmap(f, x). This makes me uncomfortable in the case where I want to fix y at the value of some variable a, where a has (for good reason) not been defined at the time that f is defined, but will be by the time f is called. It works, but it kind of spooks me.
A good solution, which turns your apparently inflexible first option into a flexible one, is to use an anonymous function, e.g.
g(y, z) = x -> f(x, y, z)
pmap(g(5, 8), x)
or just
pmap(x -> f(x, 5, 8), x)
In Julia 0.4, anonymous functions have a performance penalty, but this will be gone in 0.5.
I'm really desperate about this question (I'm not very good with Prolog).
I'm asked to implement a theosophic reduction (digit sum),
in other words I have to do the following:
If I'm given, let's say, 123, I'll need to return the sum, so: 1+2+3 = 6.
This is what I got so far.
reduction(X,R) :-
    X >= 0,
    K is (K + (X mod 10)),
    T is (X//10),
    reduction(T,R),
    R is K.
Also, the reduction should repeat until a single digit is left: 65 = 6+5 = 11 = 1+1 = 2 :(
I'm still working on it... Thank you!
Does this work for you?
reduction(0, 0) :- !.
reduction(X, R) :-
    X2 is X // 10,
    reduction(X2, R2),
    R is (X mod 10) + R2.
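For example, with the 123 from the question:

?- reduction(123, R).
R = 6.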
Part of the problem with your code is that you wrote K is (K + (X mod 10)) and that could only ever be true when K is already a number and X is 0.
Here's a version that continues to reduce:
reduction(0, 0) :- !.
reduction(X, R) :-
    X2 is X // 10,
    reduction(X2, R2),
    R is (X mod 10) + R2,
    R < 10.
reduction(X, R) :-
    X2 is X // 10,
    reduction(X2, R2),
    R1 is (X mod 10) + R2,
    R1 >= 10,
    reduction(R1, R).
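With this version the 65 case from the question reduces all the way down:

?- reduction(65, R).
R = 2.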
It certainly could be reduced down a bit more, but that would require a bit more thinking than I'm two glasses of red past being able to do. :-)
It could be a bit simpler (I'm on my second cup of coffee, no wine ;)):
reduce(N, N) :- N < 10, !.
reduce(N, R) :-
    N >= 10,
    Y is N // 10,
    reduce(Y, R1),
    R2 is (N mod 10) + R1,
    reduce(R2, R).
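For example:

?- reduce(99, R).
R = 9.

(9 + 9 = 18, then 1 + 8 = 9.)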
Examining your original attempt:
reduction(X,R) :-
X >= 0,
You could check X >= 10 here instead of checking >= 0, and include a needed base case for X < 10, as I have done above. Without the base case, your predicate will ultimately either fail or loop, since X // 10 never drops below 0, so the X >= 0 check never stops the recursion, and there are no other clauses.
K is (K + (X mod 10)),
The is/2 predicate is for evaluating a fully instantiated arithmetic expression on the right-hand side (second argument of is/2) and instantiating the variable on the left (first argument of is/2) with that value. Here, K doesn't have a value yet, so you'll get an instantiation error from Prolog. If K did have a value, it would still necessarily fail unless X mod 10 happens to be zero, because you are saying, in Prolog, that the value of K is the value of that same K plus X mod 10, which, of course, is impossible if X mod 10 is not zero.
T is (X//10),
This seems OK, since X is known.
reduction(T, R),
R is K.
These two together are a problem. Assuming that reduction(T, R) succeeded as you wish it to (which it wouldn't, due to the above problems), R would be instantiated, and then the expression R is K would fail unless K was instantiated and already had the same value as R. This is also a common mistake among Prolog beginners: trying to "assign" one variable to another, as in imperative languages. You really were trying to unify R and K, which is done with R = K.
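A quick way to see the difference at the Prolog toplevel (the comments describe the outcome, not literal output):

?- R = K.        % succeeds, unifying R and K (both still unbound)
?- R is 2 + 3.   % succeeds, binding R to 5
?- R is K.       % instantiation error: K is not bound to a number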
Is there an extensible, efficient way to write existential statements in Haskell without implementing an embedded logic programming language? Oftentimes when I'm implementing algorithms, I want to express existentially quantified first-order statements like
∃x.∃y.x,y ∈ xs ∧ x ≠ y ∧ p x y
where ∈ is overloaded on lists. If I'm in a hurry, I might write perspicuous code that looks like
find p [] = False
find p (x:xs) = any (\y -> x /= y && (p x y || p y x)) xs || find p xs
or
find p xs = or [ x /= y && (p x y || p y x) | x <- xs, y <- xs]
But this approach doesn't generalize well to queries returning values or predicates or functions of multiple arities. For instance, even a simple statement like
∃x.∃y.∃z. x,y,z ∈ xs ∧ x ≠ y ≠ z ∧ f x y z = g x y z
requires writing another search procedure. And this means a considerable amount of boilerplate code. Of course, languages like Curry or Prolog that implement narrowing or a resolution engine allow the programmer to write statements like:
find(p,xs,z) = x ∈ xs & y ∈ xs & x =/= y & f x y =:= g x y =:= z
to abuse the notation considerably, which performs both a search and returns a value. This problem arises often when implementing formally specified algorithms, and is often solved by combinations of functions like fmap, foldr, and mapAccum, but mostly explicit recursion. Is there a more general and efficient, or just general and expressive, way to write code like this in Haskell?
There's a standard transformation that allows you to convert
∃x ∈ xs : P
to
exists (\x -> P) xs
If you need to produce a witness you can use find instead of exists.
The real nuisance of doing this kind of abstraction in Haskell as opposed to a logic language is that you really must pass the "universe" set xs as a parameter. I believe this is what brings in the "fuss" to which you refer in your title.
Of course you can, if you prefer, stuff the universal set (through which you are searching) into a monad. Then you can define your own versions of exists or find to work with the monadic state. To make it efficient, you can try Control.Monad.Logic, but it may involve breaking your head against Oleg's papers.
Anyway, the classic encoding is to replace all binding constructs, including existential and universal quantifiers, with lambdas, and proceed with appropriate function calls. My experience is that this encoding works even for complex nested queries with a lot of structure, but that it always feels clunky.
Maybe I don't understand something, but what's wrong with list comprehensions? Your second example becomes:
[ (x, y, z) | x <- xs, y <- xs, z <- xs
            , x /= y && y /= z && x /= z
            , f x y z == g x y z ]
This allows you to return values; to check whether the formula is satisfied, just test that the result is non-empty with null (it won't evaluate more than needed because of laziness).