I'm a beginner at Prolog and I'm trying to make Pacman move by itself using NetLogo and Prolog. This is part of my code:
walkfront(_,_,_,_,_,_,Pacman,DirP,Lab,Ghost,_,Pellet,_,_,Decision) :-
    findall(Dir,
            ( member(Dir,[0,90,180,270]),
              \+ ( member((G,false),Ghost), dangerous(Pacman,G,2,Dir,_) ) ),
            L),
    findall(Dir,(member(Dir,[0,90,180,270]),(member(P,Pellet))),T),
    chooseNotDangerous(L,Pacman,DirP,Lab,Dir,T).
The head walkfront(_,_,_,_,_,_,Pacman,DirP,Lab,Ghost,_,Pellet,_,_,Decision) carries all the lists of information I'm getting from NetLogo: Pacman is the position of Pacman (x,y), DirP is the direction Pacman is facing, Lab is the list of free spaces in the maze, Ghost is the list of ghost positions (x,y,eaten?), Pellet is a list of all the pellet positions (x,y), and Decision is the output chosen by Pacman.
The first findall is supposed to give me all the directions (Dir) that don't have ghosts and that aren't dangerous and save them in a list called L.
The second findall, I wanted it to give me all the directions that have pellets and save them in a list called T.
My question is whether these findall/3 calls are correct, because my code isn't working for some reason and I think it might be because of the second findall.
Thank you for helping me :).
Technically, findall/3 never fails, as it will complete with an empty result list if none of the calls succeed (well, exceptions apart, if your Prolog implements them).
Of course, it's impossible to answer your question without all the code. And probably, even with all the code available, you will get little - if any - help, because the structure of your program seems more complex than what could be advisable.
Prolog is a language with a relational data model, and such a data model is better employed when it's possible to keep the relations clean, best if normalized. Now you have a predicate with 16 arguments. How are you going to ensure all of them play correctly together?
I would say: don't change the structure of your program now, if you manage to debug it successfully. But for the next program, if any, use another style, and the facilities that your Prolog offers to implement data hiding.
Plain old Prolog 'only' had compound terms: your code should likely look like
packman(CurrPackManState, CurrGhostsState, NextPackManState, NextGhostsState) :-
    ...
where CurrGhostsState should be a list of CurrGhostState, and each element of this list should unify with an appropriate structure, hiding information about position, color, shape, etc...
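For instance, one possible shape for such terms, just a sketch with made-up functor and predicate names, could be:
% Each ghost is a single term that hides its details behind one functor.
% CurrGhostsState would then be a list such as:
%   [ghost(pos(3,4), false), ghost(pos(7,1), true)]
ghost_position(ghost(pos(X, Y), _Eaten), X, Y).
ghost_eaten(ghost(_Pos, Eaten), Eaten).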
SWI-Prolog now has dicts, and any Prolog will let you use DCGs to reduce the complexity of the code. See this page from Markus Triska, look for 'Implicitly passing states around'.
Also, you can always choose to put some less frequently updated info - like for instance the maze structure - in the global database, with assert/retract.
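For example, a minimal sketch (the predicate names here are made up, and it assumes Lab is a list of (X,Y) positions as in your question):
:- dynamic free_cell/2.

% Store the rarely-changing maze layout once, instead of threading it
% through every call.
store_maze(Lab) :-
    retractall(free_cell(_, _)),
    forall(member((X, Y), Lab), assertz(free_cell(X, Y))).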
Related
So I am currently learning Prolog and I can't get my head around how this language works.
"It tries all the possible solutions until it finds one, if it doesn't it returns false" is what I've read that this language does. You just describe the solution and it finds it for you.
With that in mind, I am trying to solve the 8 queens problem (how to place 8 queens on a chessboard without any of them threatening the others).
I have this predicate, 'safe', that gets a list of pairs (the positions of all the queens) and succeeds when they are not threatening each other.
When I enter in the terminal
?- safe([(1,2),(3,5)]).
true ?
| ?- safe([(1,3),(1,7)]).
no
| ?- safe([(2,2),(3,3)]).
no
| ?- safe([(2,2),(3,4),(8,7)]).
true ?
it recognizes the correct from the wrong answers, so it knows if something is a possible solution
BUT
when I enter
| ?- safe(L).
L = [] ? ;
L = [_] ? ;
it gives me the default answers, even though it recognizes a solution for 2 queens when I enter them.
here is my code
threatens((_,Row),(_,Row)).
threatens((Column,_),(Column,_)).
threatens((Column1,Row1),(Column2,Row2)) :-
    Diff1 is Column1 - Row1,
    Diff2 is Column2 - Row2,
    abs(Diff1) =:= abs(Diff2).

safe([]).
safe([_]).
safe([A,B|T]) :-
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).
One solution I found to the problem is to create a predicate 'position' and modify the 'safe' one:
position((0,0)).
position((1,0)).
...
...
position((6,7)).
position((7,7)).

safe([A,B|T]) :-
    position(A),
    position(B),
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).

safe(L,X) :-
    length(L,X),
    safe(L).
but this is just stupid, as you have to type everything explicitly, and it is really, really slow, even for 6 queens.
My real problem here is not with the code itself but with Prolog. I am trying to think in Prolog, but all I read is
Describe what the solution would look like and let it work out what it would be
Well, that's what I have been doing, but it does not seem to work.
Could somebody point me to some resources that don't just teach you the semantics, but how to think in Prolog?
Thank you
but this is just stupid, as you have to type everything explicitly, and it is really, really slow, even for 6 queens.
Regarding listing the positions, the two coordinates are independent, so you could write something like:
position((X, Y)) :-
    coordinate(X),
    coordinate(Y).

coordinate(1).
coordinate(2).
...
coordinate(8).
This is already much less typing. It's even simpler if your Prolog has a between/3 predicate:
coordinate(X) :-
    between(1, 8, X).
Regarding the predicate being very slow, this is because you are asking it to do too much duplicate work:
safe([A,B|T]) :-
    ...
    safe([A|T]),
    safe(T).
Once you know that [A|T] is safe, T must be safe as well. You can remove the last goal and will get an exponential speedup.
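That is, the clause can be reduced to something like this (the same clause from the question, with the redundant goal dropped):
safe([A,B|T]) :-
    \+ threatens(A,B),
    safe([A|T]).   % if [A|T] is safe, its tail T is necessarily safe too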
Describe what the solution would look like and let it work out what it would be
demands that the AI be very strong in general. We are not there yet.
You are on the right track though. Prolog essentially works by enumerating possible solutions and testing them, rejecting those that don't fit the conditions encoded in the program. The skill resides in performing a "good enumeration" (traversing the domain in certain ways, exploiting domain symmetries and overlaps etc) and subsequent "fast rejection" (quickly throwing away whole sectors of the search space as not promising). The basic pattern:
findstuff(X) :- generate(X),test(X).
And evidently the program must first generate X before it can test X, which may not always be evident to beginners.
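As a concrete sketch of the pattern for this problem (queens/2 is a made-up name; it reuses position/1 from above and safe/1 from the question):
% Generate-and-test: first enumerate a candidate placement,
% then reject it if any two queens threaten each other.
queens(N, Qs) :-
    length(Qs, N),           % generate: a list of N positions
    maplist(position, Qs),   % generate: enumerate candidate (Column,Row) pairs
    safe(Qs).                % test: reject placements containing threats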
Logic-wise,
findstuff(X) :- x_fulfills_test_conditions(X), x_fulfills_domain_conditions(X).
which is really another way of writing
findstuff(X) :- test(X),generate(X).
would be the same, but for Prolog, as a concrete implementation, there would be nothing to work with.
The X in the program always stands for a particular value (which may be uninstantiated at a given moment, but becomes more and more instantiated going "to the right"). This is unlike logic, where the X really stands for an unknown object onto which we pile constraints until, ideally, we can resolve X to a set of concrete values by applying a lot of thinking to reformulate the constraints.
Which brings us to the approach of "Constraint Logic Programming (over finite domains)", aka CLP(FD), which is far more elegant and nearer to what's going on when thinking mathematically or actually doing theorem proving; see here:
https://en.wikipedia.org/wiki/Constraint_logic_programming
and the ECLiPSe logic programming system
http://eclipseclp.org/
and
https://www.metalevel.at/prolog/clpz
https://github.com/triska/clpfd/blob/master/n_queens.pl
and N-Queens in Prolog on YouTube as a must-watch.
This is still technically Prolog (in fact, implemented on top of Prolog) but allows you to work on a more abstract level than raw generate-and-test.
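For illustration, here is a sketch in the spirit of the linked n_queens.pl, using library(clpfd) as available in SWI-Prolog (Qs is a list of column numbers, one queen per row):
:- use_module(library(clpfd)).

n_queens(N, Qs) :-
    length(Qs, N),
    Qs ins 1..N,
    safe_queens(Qs).

safe_queens([]).
safe_queens([Q|Qs]) :-
    safe_queens(Qs, Q, 1),
    safe_queens(Qs).

% Q0 must differ in column and in both diagonals from each queen
% that lies D0, D0+1, ... rows further down the list.
safe_queens([], _, _).
safe_queens([Q|Qs], Q0, D0) :-
    Q0 #\= Q,
    abs(Q0 - Q) #\= D0,
    D1 #= D0 + 1,
    safe_queens(Qs, Q0, D1).

% Example query: ?- n_queens(8, Qs), label(Qs).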
Prolog is radically different in its approach to computing.
Arithmetic often is not required at all. But the complexity inherent in a solution to a problem shows up somewhere, namely in the way we control how the relevant pieces of information are related.
place_queen(I,[I|_],[I|_],[I|_]).
place_queen(I,[_|Cs],[_|Us],[_|Ds]):-place_queen(I,Cs,Us,Ds).
place_queens([],_,_,_).
place_queens([I|Is],Cs,Us,[_|Ds]):-
place_queens(Is,Cs,[_|Us],Ds),
place_queen(I,Cs,Us,Ds).
gen_places([],[]).
gen_places([_|Qs],[_|Ps]):-gen_places(Qs,Ps).
qs(Qs,Ps):-gen_places(Qs,Ps),place_queens(Qs,Ps,_,_).
goal(Ps):-qs([0,1,2,3,4,5,6,7,8,9,10,11],Ps).
No arithmetic at all: columns/rows are encoded in a clever choice of symbols (the numbers are indeed just that, identifiers), and the diagonals in two additional arguments.
The whole program just requires a (very) small subset of Prolog, namely a pure two-clause interpreter.
If you take the time to understand what place_queens/4 does (operationally, maybe, if you have above average attention capabilities), you'll gain a deeper understanding of what (pure) Prolog actually computes.
I'm trying to write Prolog logic for the first time, but I'm having trouble. I am to write logic that takes two lists and checks for common elements between the two. For example, consider the predicate similarity/2:
?- similarity([2,4,5,6,8], [1,3,5,6,9]).
true.
?- similarity([1,2,3], [5,6,8]).
false.
The first query will return true as those two lists have 5 and 6 in common. The second returns false as there are no common elements between the two lists in that query.
I CANNOT use built-in predicates such as member, disjoint, intersection, etc. I am thinking of iterating through the first list provided and checking whether each element matches an element in the second list. Is this an efficient approach to this problem? I would appreciate any advice and help. Thank you so much.
Writing Prolog for the first time can be really daunting, since it is unlike many traditional programming languages that you will most likely encounter; however it is a very rewarding experience once you've got a grasp on this new style of programming! Since you mention that you are writing Prolog for the first time I'll give some general tips and tricks about writing Prolog, and then move onto some hints to your problem, and then provide what I believe to be a solution.
Think Recursively
You can think of every Prolog program that you write as being intrinsically recursive in nature, i.e. you can provide it with a series of "base cases" which take the following form:
human(john). or wildling(ygritte). In my opinion, these rules should always be the first ones that you write. Try to break down the problem into its simplest case and then work from there.
On the other hand, you can also provide it with more complex rules which will look something like this: contains(X, [H|T]) :- contains(X, T). The key bit is that writing a rule like this is very much equivalent to writing a recursive function in, say, Python. This rule does a lot of the heavy lifting in looking to see whether a value is contained in a list, but it isn't complete without a "base case". A complete contains predicate would actually be two clauses put together:
contains(X, [X|_]).
contains(X, [H|T]) :- contains(X, T).
The big takeaway from this is to try and identify the simple cases of your problem, which can act like base cases in a recursive function, and then try to identify how you want to "recurse" and actually do work on the problem at hand.
Pattern Matching
Part of the great thing about Prolog is the pattern matching system that it has in place. You should 100% use this to your advantage whenever you can -- it is especially helpful when trying to do anything with lists. For example:
head(X, [X|T]).
Will evaluate to true when called thusly: head(1, [1, 2, 3]) because intrinsic in the rule is the matching of X. This sort of pattern matching on the first element of a list is incredibly important and really the key way that you will do any work on lists in Prolog. In my experience, pattern matching on the head of a list will often be one of the "base-cases" that I mentioned beforehand.
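For example, the same rule can also be run with the first argument unbound, and unification extracts the head (a small usage sketch):
?- head(X, [1, 2, 3]).
X = 1.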
Understand The Flow of the Program
Another key component of how Prolog works is that it takes a "top-down" approach to reading code. What I mean by that is that every time a goal is called, Prolog starts at the first clause of that predicate and works downwards until it reaches a clause that succeeds or the end of the file. Therefore, the ordering of your rules is incredibly important. I'm assuming that you know that you can combine goals via a comma to indicate logical AND, but what is maybe more subtle is that writing one rule above another acts as a logical OR: the earlier rule is tried first, and on backtracking the later one can be tried, which can potentially cause the program to recurse.
Specific Example
Now that I've gotten all of my general advice out of the way, I'll actually reference the given problem. First, I'd write my "base case". What would happen if you are given two lists whose first elements are the same? If the first elements are not the same, you have to look through the rest of the second list to see whether the first element of the first list is contained anywhere in it. What kind of rule would this produce? Or it could be the case that the first element of the first list is not contained within the second list at all, in which case you have to advance once in the first list and start again with the second list. What kind of rule would this produce?
In the end, I would say that your approach is the correct one to take, and I have provided my own solution below:
similarity([H|_], [H|_]).
similarity([H1|T1], [_|T2]) :-
    similarity([H1|T1], T2).
similarity([_|T1], [H2|T2]) :-
    similarity(T1, [H2|T2]).
Hope all of this helps in some way!
What is the Prolog predicate that helps to show wasteful representations of Prolog terms?
Supplement
In an aside of an earlier Prolog SO answer, IIRC by mat, a Prolog predicate was used to analyze a Prolog term and show how it was overly complicated.
Specifically for a term like
[op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]]
it revealed that this has too many [].
I have searched my Prolog questions and looked at the answers twice and still can't find it. I also recall that it was not in SWI-Prolog but in another Prolog, so instead of installing the other Prolog I was able to use the predicate with an online version of Prolog.
If you read along in the comments you will see that mat identified the post I was seeking.
What I was seeking
I have one final note on the choice of representation. Please try out the following, using for example GNU Prolog or any other conforming Prolog system:
| ?- write_canonical([op(add),[Left,Right]]).
'.'(op(add),'.'('.'(_18,'.'(_19,[])),[]))
This shows that this is a rather wasteful representation, and at the same time prevents uniform treatment of all expressions you generate, combining several disadvantages.
You can make this more compact for example using Left+Right, or make all terms uniformly available using for example op_arguments(add, [Left,Right]), op_arguments(number, [1]) etc.
Evolution of a Prolog data structure
If you don't know it already, the question is related to writing a term rewriting system in Prolog that does symbolic math, and I am mostly concentrating on simplification rewrites at present.
Most people only see math expressions in a natural representation
x + 0 + sin(y)
and computer programmers realize that most programming languages have to parse the math expression and convert it into an AST before using it:
add(add(X,0),sin(Y))
but most programming languages cannot work with the AST as written above and have to create data structures. See: Compiler/lexical analyzer, Compiler/syntax analyzer, Compiler/AST interpreter.
Now, if you have ever done more than dip your toe in the water when learning about Prolog, you will have come across Program 3.30, Derivative rules, which is included in this, but the person did not give attribution.
If you try to roll your own code to do symbolic math with Prolog, you might try using is/2, quickly find that it doesn't work, and then find that Prolog can read the following as a compound term:
add(add(X,0),sin(Y))
This starts to work, until you need to access the name of the functor and find functor/3; then we are getting back to having to parse the input. However, as noted by mat and in "The Art of Prolog", if one makes the name of the structure accessible,
op(add,(op(add,X,0),op(sin,Y)))
now one can access not only the terms of the expression but also the operator in a Prolog-friendly way.
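For instance, with that representation the operator name can be picked out by plain unification (a small sketch; the helper predicate name is made up):
% The operator is now an ordinary argument, so no functor/3 call is needed.
expression_operator(op(Op, _Arguments), Op).

% ?- expression_operator(op(add,(op(add,X,0),op(sin,Y))), Op).
% Op = add.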
If it were not for the aside mat made, the code would still be using the nested list data structure; now it is being converted to use compound terms that expose the name of the structure. I wonder if there is a common phrase to describe that; if not, there should be one.
Anyway, the new, simpler data structure worked on the first set of tests; now to see if it holds up as the project is further developed.
Try it for yourself online
Using GNU Prolog at tutorialspoint.com enter
:- initialization(main).
main :- write_canonical([op(add),[Left,Right]]).
then click Execute and look at the output
sh-4.3$ gprolog --consult file main.pg
GNU Prolog 1.4.4 (64 bits)
Compiled Aug 16 2014, 23:07:54 with gcc
By Daniel Diaz
Copyright (C) 1999-2013 Daniel Diaz
compiling /home/cg/root/main.pg for byte code...
/home/cg/root/main.pg:2: warning: singleton variables [Left,Right] for main/0
/home/cg/root/main.pg compiled, 2 lines read - 524 bytes written, 9 ms
'.'(op(add),'.'('.'(_39,'.'(_41,[])),[]))| ?-
Clean vs. defaulty representations
From The Power of Prolog by Markus Triska
When representing data with Prolog terms, ask yourself the following question:
Can I distinguish the kind of each component from its outermost functor and arity?
If this holds, your representation is called clean. If you cannot distinguish the elements by their outermost functor and arity, your representation is called defaulty, a wordplay combining "default" and "faulty". This is because reasoning about your data will need a "default case", which is applied if everything else fails. In addition, such a representation prevents argument indexing, and is considered faulty due to this shortcoming. Always aim to avoid defaulty representations! Aim for cleaner representations instead.
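As an illustration of why a clean representation is pleasant to work with, here is a small sketch of an evaluator over terms in the op(Name, Arguments) style discussed above (eval/2 is a made-up name):
% Each clause is selected by the outermost functor and arity alone;
% no "default case" is needed.
eval(number(N), N).
eval(op(add, [A, B]), V) :-
    eval(A, VA),
    eval(B, VB),
    V is VA + VB.

% ?- eval(op(add, [number(0), op(add, [number(1), number(1)])]), V).
% V = 2.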
Please see the last part of:
https://stackoverflow.com/a/42722823/1613573
It uses write_canonical/1 to display the canonical representation of a term.
This predicate is very useful when learning Prolog and helps to clear several misconceptions that are typical for beginners. See for example the recent question about hyphens, where it would have helped too.
Note that in SWI, the output deviates from canonical Prolog syntax in general, so I am not using SWI when explaining Prolog syntax.
You could also programmatically count how many subterms are a single-element list, using something like this (not optimized):
single_element_list_subterms(Term, Index) :-
    Term =.. [Functor|Args],
    (   Args = []
    ->  Index = 0
    ;   maplist(single_element_list_subterms, Args, Indices),
        sum_list(Indices, SubIndex),
        (   Functor = '.', Args = [_, []]
        ->  Index is SubIndex + 1
        ;   Index = SubIndex
        )
    ).
Trying it on the example compound term:
| ?- single_element_list_subterms([op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]], Count).
Count = 7
yes
| ?-
Indicating that there are 7 subterms consisting of a single-element list. Here is the result of write_canonical:
| ?- write_canonical([op(add),[[number(0)],[op(add),[[number(1)],[number(1)]]]]]).
'.'(op(add),'.'('.'('.'(number(0),[]),'.'('.'(op(add),'.'('.'('.'(number(1),[]),'.'('.'(number(1),[]),[])),[])),[])),[]))
yes
| ?-
I often end up writing code in Prolog which involves some arithmetic calculation (or state information important throughout the program): I first obtain the value stored in a predicate, then recalculate the value, and finally store the new value using retractall and assert, because in Prolog we cannot assign a value to a variable twice using is (thus making almost every variable that needs modification global). I have come to know that this is not a good practice in Prolog. In this regard I would like to ask:
Why is it a bad practice in Prolog (though I myself don't like to go through the above-mentioned steps just to have a kind of flexible (modifiable) variable)?
What are some general ways to avoid this practice? Small examples will be greatly appreciated.
P.S. I just started learning Prolog. I do have programming experience in languages like C.
Edited for further clarification
A bad example (in win-prolog) of what I want to say is given below:
:- dynamic(value/1).
:- assert(value(0)).

adds :-
    value(X),
    NewX is X + 4,
    retractall(value(_)),
    assert(value(NewX)).

mults :-
    value(Y),
    NewY is Y * 2,
    retractall(value(_)),
    assert(value(NewY)).

start :-
    retractall(value(_)),
    assert(value(3)),
    adds,
    mults,
    value(Q),
    write(Q).
Then we can query like:
?- start.
Here it is very trivial, but in real programs and applications the above method of global variables becomes unavoidable. Sometimes the list of declarations like assert(value(0)) ... grows very long, with many more assert directives for defining more variables. This is done to make communication of the values between different predicates possible and to store the states of variables during the runtime of the program.
Finally, I'd like to know one more thing:
When does the practice mentioned above become unavoidable in spite of various solutions suggested by you to avoid it?
The general way to avoid this is to think in terms of relations between states of your computations: You use one argument to hold the state that is relevant to your program before a calculation, and a second argument that describes the state after some calculation. For example, to describe a sequence of arithmetic operations on a value V0, you can use:
state0_state(V0, V) :-
    operation1_result(V0, V1),
    operation2_result(V1, V2),
    operation3_result(V2, V).
Notice how the state (in your case: the arithmetic value) is threaded through the predicates. The naming convention V0 -> V1 -> ... -> V scales easily to any number of operations and helps to keep in mind that V0 is the initial value, and V is the value after the various operations have been applied. Each predicate that needs to access or modify the state will have an argument that allows you to pass it the state.
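For example, the adds/mults program from your question could be written with the value threaded through arguments instead of the dynamic database (a sketch; the predicate names are made up):
add4(V0, V) :- V is V0 + 4.
mult2(V0, V) :- V is V0 * 2.

start(Q) :-
    V0 = 3,
    add4(V0, V1),    % 3 -> 7
    mult2(V1, Q).    % 7 -> 14

% ?- start(Q).
% Q = 14.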
A huge advantage of threading the state through like this is that you can easily reason about each operation in isolation: You can test it, debug it, analyze it with other tools etc., without having to set up any implicit global state. As another huge benefit, you can then use your programs in more directions provided you are using sufficiently general predicates. For example, you can ask: Which initial values lead to a given outcome?
?- state0_state(V0, given_outcome).
This is of course not readily possible when using the imperative style. You should therefore use constraints instead of is/2, because is/2 only works in one direction. Constraints are much easier to use and a more general modern alternative to low-level arithmetic.
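For instance, assuming SWI-Prolog's library(clpfd), the same arithmetic relation can be queried in both directions, which is not possible with is/2:
?- use_module(library(clpfd)).
true.

?- X + 4 #= 11.
X = 7.

?- 7 + 4 #= Y.
Y = 11.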
The dynamic database is also slower than threading states through in variables, because it performs indexing etc. on each assertz/1.
1 - it's bad practice because it destroys the declarative model that (pure) Prolog programs exhibit.
Then the programmer must think in procedural terms, and the procedural model of Prolog is rather complicated and difficult to follow.
Specifically, we must be able to reason about the validity of asserted knowledge while the program backtracks, i.e. follows alternative paths to those already tried, paths that (maybe) caused the assertions.
2 - We need additional variables to keep the state. A practical, maybe not very intuitive, way is to use grammar rules (DCGs) instead of plain predicates. Grammar rules are translated by adding two list arguments, normally hidden, and we can use those arguments to pass the state around implicitly, and reference/change it only where needed.
A really interesting introduction is here: DCGs in Prolog by Markus Triska. Look for Implicitly passing states around: you'll find this enlightening small example:
num_leaves(nil), [N1] --> [N0], { N1 is N0 + 1 }.
num_leaves(node(_,Left,Right)) -->
    num_leaves(Left),
    num_leaves(Right).
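It can be called with phrase/3, which supplies the two normally hidden list arguments; for example, counting the nil leaves of a small tree (a usage sketch):
?- phrase(num_leaves(node(a, node(b, nil, nil), nil)), [0], [N]).
N = 3.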
More generally, and for further practical examples, see Thinking in States, from the same author.
edit: generally, assert/retract are required only if you need to change the database, or to keep track of computation results across backtracking. A simple example from my (very) old Prolog interpreter:
findall_p(X,G,_) :-
    asserta(found('$mark')),
    call(G),
    asserta(found(X)),
    fail.
findall_p(_,_,N) :-
    collect_found([],N),
    !.

collect_found(S,L) :-
    getnext(X),
    !,
    collect_found([X|S],L).
collect_found(L,L).

getnext(X) :-
    retract(found(X)),
    !,
    X \= '$mark'.
findall/3 can be seen as the basic all-solutions predicate. That code should be the very same as in the Clocksin-Mellish textbook, Programming in Prolog. I used it while testing the 'real' findall/3 I implemented. You can see that it's not 'reentrant', because the '$mark' marker is shared between invocations.
In the book we are asked to define the predicates left_of, right_of, above, and below using the following layout.
% bike camera
% pencil hourglass butterfly fish
left_of(pencil, hourglass).
left_of(hourglass, butterfly).
left_of(butterfly, fish).
above(bike, pencil).
above(camera, butterfly).
right_of(Obj1, Obj2) :-
    left_of(Obj2, Obj1).

below(Obj1, Obj2) :-
    above(Obj2, Obj1).
This seems to find correct solutions.
Later in the book we are asked to add a recursive rule for left_of. The only solution I could find is to use a different functor name: left_of2. So I've basically reimplemented the ancestor relationship.
left_of2(Obj1, Obj2) :-
    left_of(Obj1, Obj2).
left_of2(Obj1, Obj2) :-
    left_of(Obj1, X),
    left_of2(X, Obj2).
In my attempts to reuse left_of, I can get all the correct solutions, but on the final redo a stack overflow occurs. I'm guessing that's because I don't have a correct base case defined. Can this be coded using left_of for the facts and a recursive procedure?
As mentioned in the comments, it is an unfortunate fact that in Prolog you must have separately named predicates to do this. If you don't, you'll wind up with something that looks like this:
left_of(X,Z) :- left_of(X,Y), left_of(Y,Z).
which gives you unbounded recursion twice. There's nothing wrong in principle with facts and predicates sharing the same name--in fact, it's pretty common for a base case rule to look like a fact. It's just that handling a transitive closure situation like this results in stack overflows unless one of the two steps is finite, and there's no other way to ensure that in Prolog than by naming them separately.
This is far from the only case in Prolog where you are compelled to break work down into separate predicates. Other commonly-occurring cases include computational loops with initializers or finalizers.
Conventionally one would wind up naming the predicate differently from the fact. For instance, directly_left_of for the facts and left_of for the predicate. Using the module system or Logtalk you can easily hide the "direct" version and encourage your users to use the transitive version. You can also make the intention more explicit without disallowing it by using an uncomfortable name for the hidden one, like left_of_.
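Using the facts from the question, that convention would look something like this (a sketch):
% Facts: only the immediate neighbours.
directly_left_of(pencil, hourglass).
directly_left_of(hourglass, butterfly).
directly_left_of(butterfly, fish).

% Transitive closure: always recurse through the finite fact base.
left_of(Obj1, Obj2) :-
    directly_left_of(Obj1, Obj2).
left_of(Obj1, Obj2) :-
    directly_left_of(Obj1, X),
    left_of(X, Obj2).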
In other languages, a function is a more opaque, larger sort of abstraction and there are facilities for hiding considerable work behind one. Prolog's predicates, by comparison, are "simpler," which used to bother me. Nowadays I'm glad that they're simpler because there's enough other stuff going on that I'm glad I don't also have to figure out variable-arity predicates or keyword arguments (though you can easily simulate both with lists, if you need to).