Prolog predicate arguments: readability vs. efficiency

I want to ask about the pros and cons of different ways of writing predicate arguments in Prolog.
For example, in Exercise 4.3: Write a predicate second(X,List) which checks whether X is the second element of List. The solution can be:
second(X,List):- [_,X|_]=List.
Or,
second(X,[_,X|_]).
Both predicates behave similarly. The first one is more readable than the second, at least to me. But the first one takes more steps during execution (I checked this with trace).
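For instance, both definitions answer the same queries (a quick check; output as in SWI-Prolog's toplevel):

?- second(b, [a,b,c]).
true.

?- second(X, [a,b,c]).
X = b.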
A more complicated example is Exercise 3.5: Binary trees are trees where all internal nodes have exactly two children. The smallest binary trees consist of only one leaf node. We will represent leaf nodes as leaf(Label). For instance, leaf(3) and leaf(7) are leaf nodes, and therefore small binary trees. Given two binary trees B1 and B2 we can combine them into one binary tree using the functor tree/2 as follows: tree(B1,B2). So, from the leaves leaf(1) and leaf(2) we can build the binary tree tree(leaf(1),leaf(2)). And from the binary trees tree(leaf(1),leaf(2)) and leaf(4) we can build the binary tree tree(tree(leaf(1),leaf(2)),leaf(4)). Now, define a predicate swap/2, which produces the mirror image of the binary tree that is its first argument. The solution would be:
A2.1:
swap(T1,T2):- T1=tree(leaf(L1),leaf(L2)), T2=tree(leaf(L2),leaf(L1)).
swap(T1,T2):- T1=tree(tree(B1,B2),leaf(L3)), T2=tree(leaf(L3),T3), swap(tree(B1,B2),T3).
swap(T1,T2):- T1=tree(leaf(L1),tree(B2,B3)), T2=tree(T3,leaf(L1)), swap(tree(B2,B3),T3).
swap(T1,T2):- T1=tree(tree(B1,B2),tree(B3,B4)), T2=tree(T4,T3), swap(tree(B1,B2),T3),swap(tree(B3,B4),T4).
Alternatively,
A2.2:
swap(tree(leaf(L1),leaf(L2)), tree(leaf(L2),leaf(L1))).
swap(tree(tree(B1,B2),leaf(L3)), tree(leaf(L3),T3)):- swap(tree(B1,B2),T3).
swap(tree(leaf(L1),tree(B2,B3)), tree(T3,leaf(L1))):- swap(tree(B2,B3),T3).
swap(tree(tree(B1,B2),tree(B3,B4)), tree(T4,T3)):- swap(tree(B1,B2),T3),swap(tree(B3,B4),T4).
The second solution takes far fewer steps than the first (again, I checked with trace). But regarding readability, the first one is easier to understand, I think.
Probably readability depends on one's level of Prolog skill. I am at a learner level in Prolog, and am used to programming in C++, Python, etc. So I wonder whether skillful Prolog programmers agree with the above assessment of readability.
Also, I wonder whether the number of steps is a good measure of computational efficiency.
Could you give me your opinions or guidelines on designing predicate arguments?
EDIT:
Following the advice from @coder, I made a third version that consists of a single rule:
A2.3:
swap(T1,T2):-
    ( T1=tree(leaf(L1),leaf(L2)), T2=tree(leaf(L2),leaf(L1)) );
    ( T1=tree(tree(B1,B2),leaf(L3)), T2=tree(leaf(L3),T3), swap(tree(B1,B2),T3) );
    ( T1=tree(leaf(L1),tree(B2,B3)), T2=tree(T3,leaf(L1)), swap(tree(B2,B3),T3) );
    ( T1=tree(tree(B1,B2),tree(B3,B4)), T2=tree(T4,T3), swap(tree(B1,B2),T3), swap(tree(B3,B4),T4) ).
I compared the number of steps in trace of each solution:
A2.1: 36 steps
A2.2: 8 steps
A2.3: 32 steps
A2.3 (the readable single-rule version) seems to be better than A2.1 (the readable four-rule version), but A2.2 (the less readable four-rule version) still outperforms both.
I'm not sure whether the number of steps in trace reflects the actual computational efficiency.
A2.2 takes fewer steps, but each step spends more effort on pattern matching the arguments.
So, I compared the execution time for 40000 queries, each query being the complicated swap(tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),leaf(4)),leaf(5))),tree(tree(leaf(3),tree(tree(leaf(3),leaf(4)),leaf(5))),tree(tree(tree(tree(leaf(3),leaf(4)),leaf(5)),leaf(4)),leaf(5)))), _). The results were almost the same (0.954 sec, 0.944 sec, and 0.960 sec respectively), which suggests that the three representations A2.1, A2.2, and A2.3 have very close computational efficiency.
Do you agree with this result? (This is probably case specific; I need to vary the experimental setup.)

This question is a very good example of a bad question for a forum like Stack Overflow. I am writing an answer because I feel you could use some advice, which, again, is very subjective. I wouldn't be surprised if the question gets closed as "opinion based". But first, an opinion on the exercises and the solutions:
Second element of list
Definitely, second(X, [_,X|_]). is to be preferred. It just looks more familiar. But you should be using the standard library anyway: nth1(2, List, Element).
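For example (nth1/3 comes with SWI-Prolog's library(lists); other implementations provide it too):

?- nth1(2, [a,b,c], X).
X = b.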
Mirroring a binary tree
The tree representation that the textbook suggests is a bit... unorthodox? A binary tree is almost invariably represented as a nested term, using two functors, for example:
t/3 which is a non-empty tree, with t(Value_at_node, Left_subtree, Right_subtree)
nil/0 which is an empty tree
Here are some binary trees:
The empty tree: nil
A binary search tree holding {1,2,3}: t(2, t(1, nil, nil), t(3, nil, nil))
A degenerate left-leaning binary tree holding the list [1,2,3] (if you traversed it pre-order): t(1, t(2, t(3, nil, nil), nil), nil)
So, to "mirror" a tree, you would write:
mirror(nil, nil).
mirror(t(X, L, R), t(X, MR, ML)) :-
    mirror(L, ML),
    mirror(R, MR).
The empty tree, mirrored, is the empty tree.
A non-empty tree, mirrored, has its left and right sub-trees swapped, and mirrored.
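For example (a quick check; the output is schematic):

?- mirror(t(1, t(2, nil, nil), nil), M).
M = t(1, nil, t(2, nil, nil)).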
That's all. No need for swapping, really, or anything else. It is also efficient: for any argument, only one of the two clauses will be evaluated, because the first arguments are different functors, nil/0 and t/3 (look up "first argument indexing" for more information). If you would have instead written:
mirror_x(T, MT) :-
    (   T = nil
    ->  MT = nil
    ;   T = t(X, L, R),
        MT = t(X, MR, ML),
        mirror_x(L, ML),
        mirror_x(R, MR)
    ).
Then not only is this less readable (well...) but it is probably less efficient, too.
On readability and efficiency
Code is read by people and evaluated by machines. If you want to write readable code, you still might want to address it to other programmers and not to the machines that are going to evaluate it. Prolog implementations have gotten better and better at being efficient at evaluating code that is also more readable to people who have read and written a lot of Prolog code (do you recognize the feedback loop?). You might want to take a look at Coding Guidelines for Prolog if you are really interested in readability.
A first step towards getting used to Prolog is trying to solve the 99 Prolog Problems (there are other sites with the same content). Follow the suggestion to avoid using built-ins. Then, look at the solutions and study them. Then, study the documentation of a Prolog implementation to see how many of these problems have been solved with built-in predicates or standard libraries. Then, study the implementations. You might find some real gems there: one of my favorite examples is the library definition of nth0/3. Just look at this beauty ;-).
There is also a whole book written on the subject of good Prolog code: "The Craft of Prolog" by Richard O'Keefe. The efficiency measurements are quite outdated, though. Basically, if you want to know how efficient your code is, you end up with a matrix with at least three dimensions:
Prolog implementation (SWI-Prolog, SICStus, YAP, GNU Prolog...)
Data structure and algorithm used
Facilities provided by the implementation
You will end up having some holes in the matrix. Example: what is the best way to read line-based input, do something with each line, and output it? Read line by line, do the thing, output? Read all at once, do everything in memory, output at once? Use a DCG? In SWI-Prolog, since version 7, you can do:
read_string(In_stream, _, Input),
split_string(Input, "\n", "", Lines),
maplist(do_x, Lines, Xs),
atomics_to_string(Xs, "\n", Output),
format(Out_stream, "~s\n", Output)
This is concise and very efficient. Caveats:
The available memory might be a bottleneck
Strings are not standard Prolog, so you are stuck with implementations that have them
This is a very basic example, but it demonstrates at least the following difficulties in answering your question:
Differences between implementations
Opinions on what is readable or idiomatic Prolog
Opinions on the importance of standards
The example above doesn't even go into details about your problem, as for example what you do with each line. Is it just text? Do you need to parse the lines? Why are you not using a stream of Prolog terms instead? and so on.
On efficiency measurements
Don't use the number of steps in the tracer, or even the reported number of inferences. You really need to measure time, with a realistic input. Sorting with sort/2, for example, always counts as exactly one inference, no matter the length of the list being sorted. On the other hand, sort/2 in any Prolog is about as efficient as a sort on your machine will ever get, so is that an issue? You can't know until you have measured the performance.
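In SWI-Prolog the simplest tool for this is time/1, possibly wrapped so that a goal is repeated enough times to give a stable measurement (a minimal sketch; the helper name benchmark/2 is mine):

% Run Goal N times and let time/1 report the CPU/wall time and inferences.
benchmark(Goal, N) :-
    time(forall(between(1, N, _), Goal)).

% e.g. ?- benchmark(swap(tree(leaf(1),leaf(2)), _), 100000).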
And of course, as long as you make an informed choice of an algorithm and a data structure, you can at the very least know the complexity of your solution. Doing an efficiency measurement is interesting only if you notice a discrepancy between what you expect and what you measure: obviously, there is a mistake. Either your complexity analysis is wrong, or your implementation is wrong, or even the Prolog implementation you are using is doing something unexpected.
On top of this, there is the inherent problem of high-level libraries. With some of the more complex approaches, you might not be able to easily judge what the complexity of a given solution might be (constraint logic programming, as in CHR and CLPFD, is a prime example). Most real problems that fit nicely to the approach will be much easier to write, and more efficient than you could ever do without considerable effort and very specific code. But get fancy enough, and your CHR program might not even want to compile any more.
Unification in the head of the predicate
This is not opinion-based any more. Just do the unifications in the head if you can. It is more readable to a Prolog programmer, and it is more efficient.
PS
"Learn Prolog Now!" is a good starting point, but nothing more. Just work your way through it and move on.

In the first version of Exercise 3.5, for example, you define the rule swap(T1,T2) four times, which means that Prolog will examine all four rules and return true or fail for each of these calls. Because these rules can't all be true together (each time only one of them succeeds), for every input you waste three calls that will not succeed (that's why it demands more steps and more time). The only advantage of the first way of writing it is that it is more readable. In general, when you have such cases of pattern matching, it's better to write the rules in a way that is well defined, so that no two (or more) rules match the same input, if of course you require only one answer; the second way of writing the above example does exactly that.
Finally, one example where it is required that more than one rule matches an input is the predicate member/2, which is written:
member(H,[H|_]).
member(H,[_|T]):- member(H,T).
because in this case you want more than one answer.
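For example:

?- member(X, [1,2,3]).
X = 1 ;
X = 2 ;
X = 3.

Each ; asks for another answer, and each answer comes from a different combination of the two clauses.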
In the third version you just write the first version without pattern matching. It has the form (condition1);...;(condition4), and if condition1 does not succeed it examines the next condition. Most of the time the fourth condition is the one that succeeds, but by then conditions 1-3 have been called, tested, and failed. So it is almost the same as the first way of writing the solution, except that in the third solution, if condition1 succeeds, the remaining conditions are not tested, so you save some wasted calls (compared to solution 1).
As for the running time, it was expected to be almost the same, because in the worst case solutions 1 and 3 do four times the tests/calls that solution 2 does. So if solution 2 has O(g) complexity (for some function g), then solutions 1 and 3 are O(4g), which is still O(g), so the running times will be very close.

Related

misconception about how prolog works

So I am currently learning Prolog and I can't get my head around how this language works.
"It tries all the possible solutions until it finds one; if it doesn't, it returns false" is what I've read that this language does. You just describe the solution and it finds it for you.
With that in mind, I am trying to solve the 8-queens problem (how to place 8 queens on a chess board without any queen threatening the others).
I have this predicate, safe, that gets a list of pairs (the positions of all the queens) and succeeds when they are not threatening each other.
When I enter in the terminal
?- safe([(1,2),(3,5)]).
true ?
| ?- safe([(1,3),(1,7)]).
no
| ?- safe([(2,2),(3,3)]).
no
| ?- safe([(2,2),(3,4),(8,7)]).
true ?
it recognizes the correct from the wrong answers, so it knows if something is a possible solution
BUT
when I enter
| ?- safe(L).
L = [] ? ;
L = [_] ? ;
it gives me the default answers, even though it recognizes a solution for 2 queens when I enter them.
here is my code
threatens((_,Row), (_,Row)).
threatens((Column,_), (Column,_)).
threatens((Column1,Row1), (Column2,Row2)) :-
    Diff1 is Column1 - Row1,
    Diff2 is Column2 - Row2,
    abs(Diff1) =:= abs(Diff2).

safe([]).
safe([_]).
safe([A,B|T]) :-
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).
One solution I found to the problem is to create a predicate position and modify safe:
position((0,0)).
position((1,0)).
...
...
position((6,7)).
position((7,7)).

safe([A,B|T]) :-
    position(A),
    position(B),
    \+ threatens(A,B),
    safe([A|T]),
    safe(T).

safe(L,X) :-
    length(L,X),
    safe(L).
but this is just stupid, as you have to type everything explicitly, and it is really, really slow, even for 6 queens.
My real problem here is not with the code itself but with Prolog. I am trying to think in Prolog, but all I read is
Describe what the solution looks like and let it work out what it would be
Well, that's what I have been doing, but it does not seem to work.
Could somebody point me to some resources that don't teach you the semantics but how to think in Prolog?
Thank you
but this is just stupid, as you have to type everything explicitly and really really slow, even for 6 queens.
Regarding listing the positions, the two coordinates are independent, so you could write something like:
position((X, Y)) :-
    coordinate(X),
    coordinate(Y).
coordinate(1).
coordinate(2).
...
coordinate(8).
This is already much less typing. It's even simpler if your Prolog has a between/3 predicate:
coordinate(X) :-
    between(1, 8, X).
Regarding the predicate being very slow, this is because you are asking it to do too much duplicate work:
safe([A,B|T]) :-
    ...
    safe([A|T]),
    safe(T).
Once you know that [A|T] is safe, A has already been checked against every element of T; the doubled recursion (safe([A|T]) and then safe(T) again) is what makes the running time exponential. Restructure the predicate so that each pair of queens is checked exactly once and you get an exponential speedup.
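A minimal sketch of such a restructuring (the helper name no_threat/2 is mine):

safe([]).
safe([Q|Qs]) :-
    no_threat(Q, Qs),   % Q against every later queen
    safe(Qs).           % and the rest among themselves

no_threat(_, []).
no_threat(Q, [Q1|Qs]) :-
    \+ threatens(Q, Q1),
    no_threat(Q, Qs).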
Describe what the solution looks like and let it work out what it would be
demands that the AI be very strong in general. We are not there yet.
You are on the right track though. Prolog essentially works by enumerating possible solutions and testing them, rejecting those that don't fit the conditions encoded in the program. The skill resides in performing a "good enumeration" (traversing the domain in certain ways, exploiting domain symmetries and overlaps etc) and subsequent "fast rejection" (quickly throwing away whole sectors of the search space as not promising). The basic pattern:
findstuff(X) :- generate(X),test(X).
And evidently the program must first generate X before it can test X, which may not always be evident to beginners.
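To make the pattern concrete for this problem, here is a naive generate-and-test sketch (the helper names queens/2 and candidate/3 are mine; it assumes SWI-Prolog's numlist/3 and between/3 and reuses safe/1 from the question):

% Pick a column for each row (generate), then check all pairs (test).
queens(N, Queens) :-
    numlist(1, N, Rows),
    candidate(Rows, N, Queens),   % generate
    safe(Queens).                 % test

candidate([], _, []).
candidate([Row|Rows], N, [(Col,Row)|Queens]) :-
    between(1, N, Col),
    candidate(Rows, N, Queens).

This enumerates N^N candidate boards, so it is hopelessly slow for 8 queens; that is exactly why the quality of the enumeration and of the rejection matters.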
Logic-wise,
findstuff(X) :- x_fulfills_test_conditions(X), x_fulfills_domain_conditions(X).
which is really another way of writing
findstuff(X) :- test(X),generate(X).
would be the same, but for Prolog, as a concrete implementation, there would be nothing to work with.
That X in the program always stands for a particular value (which may be uninstantiated at a given moment, but becomes more and more instantiated going "to the right"). Unlike in logic, where the X really stands for an unknown object onto which we pile constraints until, ideally, we can resolve X to a set of concrete values by applying a lot of thinking to reformulate constraints.
Which brings us to the approach of "Constraint Logic Programming (over finite domains)", aka CLP(FD), which is far more elegant and nearer to what's going on when thinking mathematically or actually doing theorem proving; see here:
https://en.wikipedia.org/wiki/Constraint_logic_programming
and the ECLiPSe logic programming system
http://eclipseclp.org/
and
https://www.metalevel.at/prolog/clpz
https://github.com/triska/clpfd/blob/master/n_queens.pl
N-Queens in Prolog on YouTube, as a must-watch.
This is still technically Prolog (in fact, implemented on top of Prolog) but allows you to work on a more abstract level than raw generate-and-test.
Prolog is radically different in its approach to computing.
Arithmetic often is not required at all. But the complexity inherent in a solution to a problem shows up somewhere, namely in the place where we control how the relevant pieces of information are related.
% place_queen(I, Cs, Us, Ds): queen I sits at the same position in the
% column list Cs, the up-diagonal list Us and the down-diagonal list Ds.
place_queen(I, [I|_], [I|_], [I|_]).
place_queen(I, [_|Cs], [_|Us], [_|Ds]) :- place_queen(I, Cs, Us, Ds).

% Each row shifts the diagonal lists by one cell.
place_queens([], _, _, _).
place_queens([I|Is], Cs, Us, [_|Ds]) :-
    place_queens(Is, Cs, [_|Us], Ds),
    place_queen(I, Cs, Us, Ds).

gen_places([], []).
gen_places([_|Qs], [_|Ps]) :- gen_places(Qs, Ps).

qs(Qs, Ps) :- gen_places(Qs, Ps), place_queens(Qs, Ps, _, _).

goal(Ps) :- qs([0,1,2,3,4,5,6,7,8,9,10,11], Ps).
No arithmetic at all: columns/rows are encoded in a clever choice of symbols (the numbers really are just identifiers), and the diagonals in two additional arguments.
The whole program requires only a (very) small subset of Prolog, namely a pure two-clause interpreter.
If you take the time to understand what place_queens/4 does (operationally, maybe, if you have above-average attention capabilities), you'll gain a deeper understanding of what (pure) Prolog actually computes.

Prolog: Looping through elements of list A and comparing to members of list B

I'm trying to write Prolog logic for the first time, but I'm having trouble. I am to write logic that takes two lists and checks for common elements between the two. For example, consider the predicate similarity/2:
?- similarity([2,4,5,6,8], [1,3,5,6,9]).
true.
?- similarity([1,2,3], [5,6,8]).
false.
The first query will return true as those two lists have 5 and 6 in common. The second returns false as there are no common elements between the two lists in that query.
I CANNOT use built-in predicates, such as member, disjoint, intersection, etc. I am thinking of iterating through the first list and checking whether each element matches some element of the second list. Is this an efficient approach to this problem? I will appreciate any advice and help. Thank you so much.
Writing Prolog for the first time can be really daunting, since it is unlike many traditional programming languages that you will most likely encounter; however, it is a very rewarding experience once you've got a grasp of this new style of programming! Since you mention that you are writing Prolog for the first time, I'll give some general tips and tricks about writing Prolog, then move on to some hints for your problem, and then provide what I believe to be a solution.
Think Recursively
You can think of every Prolog program that you write as intrinsically recursive in nature, i.e. you can provide it with a series of "base cases" which take the following form:
human(john). or wildling(ygritte). (note the lowercase initials: in Prolog, names starting with a capital letter are variables, not constants). In my opinion, these rules should always be the first ones that you write. Try to break down the problem into its simplest case and then work from there.
On the other hand, you can also provide it with more complex rules, which will look something like this: contains(X, [_|T]) :- contains(X, T). The key bit is that writing a rule like this is very much equivalent to writing a recursive function in, say, Python. This rule does a lot of the heavy lifting in looking to see whether a value is contained in a list, but it isn't complete without a "base case". A complete contains/2 is actually two rules put together:
contains(X, [X|_]).
contains(X, [_|T]) :- contains(X, T).
The big takeaway from this is to try and identify the simple cases of your problem, which can act like base cases in a recursive function, and then try to identify how you want to "recurse" and actually do work on the problem at hand.
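For example, the completed contains/2 behaves like this (output schematic):

?- contains(2, [1,2,3]).
true.

?- contains(5, [1,2,3]).
false.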
Pattern Matching
Part of the great thing about Prolog is the pattern matching system that it has in place. You should 100% use this to your advantage whenever you can -- it is especially helpful when trying to do anything with lists. For example:
head(X, [X|_]).
will evaluate to true when called thusly: head(1, [1, 2, 3]), because intrinsic to the rule is the matching of X. This sort of pattern matching on the first element of a list is incredibly important and really the key way that you will do any work on lists in Prolog. In my experience, pattern matching on the head of a list will often be one of the "base cases" that I mentioned beforehand.
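As another small sketch of how head matching drives a list predicate (the name list_length/2 is mine):

list_length([], 0).
list_length([_|T], N) :-
    list_length(T, N0),
    N is N0 + 1.

The first clause matches the empty list directly; the second matches any non-empty list and binds its tail, and that is the entire "loop".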
Understand The Flow of the Program
Another key component of how Prolog works is that it takes a "top-down" approach to reading code. What I mean by that is that every time a rule is called (except for definitions of the form king(james).), Prolog starts at line 1 and continues until it reaches a rule that is true or the end of the file. Therefore, the ordering of your rules is incredibly important. I'm assuming that you know that you can combine rules via a comma to indicate logical AND, but what is maybe more subtle is that if you order one rule above another, it can act as a logical OR, simply because it will be evaluated before the other rule and can potentially cause the program to recurse.
Specific Example
Now that I've gotten all of my general advice out of the way, I'll actually address the given problem. First, I'd write my "base case". What would happen if you are given two lists whose first elements are the same? If the first elements are not the same, you have to look through the second list to see if the first element of the first list is contained anywhere in the rest of it. What kind of rule would this produce? OR it could be the case that the first element of the first list is not contained within the second list at all, in which case you have to advance once in the first list and start again with the whole second list. What kind of rule would this produce?
In the end, I would say that your approach is the correct one to take, and I have provided my own solution below:
similarity([H|_], [H|_]).
similarity([H1|T1], [_|T2]) :- similarity([H1|T1], T2).
similarity([_|T1], [H2|T2]) :- similarity(T1, [H2|T2]).
Hope all of this helps in some way!

Prolog: How to tell if a predicate is deterministic or not

So from what I understand about deterministic predicates:
Deterministic predicate = 1 solution
Non-deterministic predicate = multiple solutions
Are there any rules for detecting whether a predicate is one or the other, like looking at the search tree, etc.?
There is no clear, generally accepted consensus about these notions. However, they are usually based on the observed answers rather than on the number of solutions. In certain contexts the notions are very implementation-related. Non-determinate may mean: leaves a choice point open. And sometimes determinate means: never even creates a choice point.
Answers vs. solutions
To see the difference, consider the goal length(L, 1). How many solutions does it have? L = [a] is one, L = [23] another... but all of these solutions are compactly represented with a single answer substitution: L = [_] which thus contains infinitely many solutions.
In any case, in all implementations I know of, length(L, 1) is a determinate goal.
Now consider the goal repeat which has exactly one solution, but infinitely many answers. This goal is considered non-determinate.
In case you are interested in constraints, things become even more involved. In library(clpfd), the goal X #> Y, Y #> X has no solution, but still one answer. Combine this with repeat: infinitely many answers and no solution.
Further, the goal append(Xs, Ys, []) has exactly one solution and also exactly one answer; nevertheless, it is considered non-determinate in many implementations, since those implementations leave a choice point open.
In an ideal implementation, there would be no answers without solutions (except false), and there would be non-determinism only when there is more than one answer. But then, all of this is mostly undecidable in the general case.
So, whenever you are using these notions make sure on what level things are meant. Rather explicitly say: multiple answers, multiple solutions, leaves no (unnecessary) choice point open.
You need to understand the difference between det, semidet and undet; it is about more than just the number of solutions.
Because there is no loop control operator in Prolog, looping (as opposed to recursion) is constructed as a 'sequence generating' predicate (undet) followed by the loop body. Alternatively, you can store solutions as a list with one of the findall-family predicates and loop over it later with member/2.
So, any piece of your program is either part of a loop construction or part of the usual flow. So, there is a difference in designing det and undet predicates almost by intended usage. If you can work with a sequence, you always make the predicate undet and comment it as such. There is a nice unit-test extension in SWI-Prolog which can check whether your predicate always behaves the same in terms of det/semidet/undet (semidet is used the same way as undet, but as the head of an 'if' construction).
So, the difference is settled at design time, and this question should not arise for already existing predicates. It is good practice to always document the intended usage with a comment like:
% member(?El, ?List) is undet.
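For instance, with SWI-Prolog's plunit you can encode such expectations in tests (a sketch; the test names are mine):

:- begin_tests(modes).

% semidet usage: exactly one answer, checked with once/1
test(lookup, true(X == b)) :-
    once(member(X, [b, c])).

% undet usage: enumerate and check all answers
test(enumerate, all(X == [1, 2, 3])) :-
    member(X, [1, 2, 3]).

:- end_tests(modes).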
Deterministic: always succeeds with a single answer that is the same for the same input. Think of a static list of three items, and you tell your function to return value one: you will get the same answer every time. Additionally, arithmetic functions: 1 + 1 = 2. X + Y = Z.
Semi-deterministic: succeeds with a single answer that is always the same for the same input, but it can fail. Think of a function that takes a list of numbers, and you ask whether some number exists in the list. It either does or it doesn't, based on the contents of the list given and the number asked about.
Non-deterministic: succeeds with a single answer, but can exhibit different behaviors on different runs, even for the same input. Think of any kind of math.random(min,max) function, like random/3.
In essence, this is entirely separate from the concept of choice points, as choice points are a feature of Prolog's execution. Where I think the Prolog confusion over these terms comes from is that Prolog can find a single answer, then go back and try for another solution, and you have to use the cut operator ! to tell it explicitly that you want to discard your choice points.
This is very useful to know when working with Prolog unit testing.

most readable way in XPath to write "is value X a member of sequence S"?

XPath 2.0 has some new functions and syntax, relative to 1.0, that work with sequences. Some of these don't really add to what the language could already do in 1.0 (with node-sets), but they make it easier to express the desired logic in ways that are more readable. This increases the chances of the programmer getting the code correct, and keeping it that way. For example,
empty(s) is equivalent to not(s), but its intent is much clearer when you want to test whether a sequence is empty.
Correction: the effective boolean value of a sequence is in general more complicated than that. E.g. empty((0)) != not((0)). This applies to exists(s) vs. s in a boolean context as well. However, there are domains of s where empty(s) is equivalent to not(s), so the two could be used interchangeably within those domains. But this goes to show that the use of empty() can make a non-trivial difference in making code easier to understand.
Similarly, exists(s) is equivalent to boolean(s) that already existed in XPath 1.0 (or just s in a boolean context), but again is much clearer about the intent.
Quantified expressions; e.g. "some $x in expression satisfies test($x)" would be equivalent to boolean(expression[test(.)]) (although the new syntax is more flexible, in that you don't need to worry about losing the context item because you have the variable to refer to it by).
Similarly, "every $x in expression satisfies test($x)" would be equivalent to not(expression[not(test(.))]) but is more readable.
These functions and syntax were evidently added at no small cost, solely to serve the goal of writing XPath that is easier to map to how humans think. This implies, as experienced developers know, that understandable code is significantly superior to code that is difficult to understand.
Given all that ... what would be a clear and readable way to write an XPath test expression that asks
Does value X occur in sequence S?
Some ways to do it: (Note: I used X and S notation here to indicate the value and the sequence, but I don't mean to imply that these subexpressions are element name tests, nor that they are simple expressions. They could be complicated.)
X = S: This would be one of the most unreadable, since it requires the reader to (a) think about which of X and S are sequences vs. single values, and (b) understand general comparisons, which are not obvious from the syntax.
However, one advantage of this form is that it allows us to put the topic (X) before the comment ("is a member of S"), which, I think, helps in readability.
See also CMS's good point about readability, when the syntax or names make the "cardinality" of X and S obvious.
index-of(S, X): This one is clear about what's intended as a value and what as a sequence (if you remember the order of arguments to index-of()). But it expresses more than we need to: it asks for the index, when all we really want to know is whether X occurs in S. This is somewhat misleading to the reader. An experienced developer will figure out what's intended, with some effort and with understanding of the context. But the more we rely on context to understand the intent of each line, the more understanding the code becomes a circular (spiral) and potentially Sisyphean task! Also, since index-of() is designed to return a list of all the indexes of occurrences of X, it could be more expensive than necessary: a smart processor, in order to evaluate X = S, wouldn't necessarily have to find all the contents of S, nor enumerate them in order; but for index-of(S, X), correct order would have to be determined, and all contents of S must be compared to X. One other drawback of using index-of() is that it's limited to using eq for comparison; you can't, for example, use it to ask whether a node is identical to any node in a given sequence.
Correction: This form, used as a conditional test, can result in a runtime error: Effective boolean value is not defined for a sequence of two or more items starting with a numeric value. (But at least we won't get wrong boolean values, since index-of() can't return a zero.) If S can have multiple instances of X, this is another good reason to prefer form 3 or 6.
exists(index-of(X, S)): makes the intent clearer, and would help the processor eliminate the performance penalty if the processor is smart enough.
some $m in S satisfies $m eq X: This one is very clear, and matches our intent exactly. It seems long-winded compared to 1, and that in itself can reduce readability. But maybe that's an acceptable price for clarity. Keep in mind that X and S could potentially be complex expressions themselves -- they're not necessarily just variable references. An advantage is that since the eq operator is explicit, you can replace it with is or any other comparison operator.
S[. eq X]: clearer than 1, but shares the semantic drawbacks of 2: it computes all members of S that are equal to X. Actually, this could return a false negative (incorrect effective boolean value), if X is falsy. E.g. (0, 1)[. eq 0] returns 0 which is falsy, even though 0 occurs in (0, 1).
exists(S[. eq X]): Clearer than 1, 2, 3, and 5. Not as clear as 4, but shorter. Avoids the drawbacks of 5 (or at least most of them, depending on the processor smarts).
I'm kind of leaning toward the last one, at this point: exists(S[. eq X])
What about you... As a developer coming to a complex, unfamiliar XSLT or XQuery or other program that uses XPath 2.0, and wanting to figure out what that program is doing, which would you find easiest to read?
Apologies for the long question. Thanks for reading this far.
Edit: I changed = to eq wherever possible in the above discussion, to make it easier to see where a "value comparison" (as opposed to a general comparison) was intended.
For what it's worth, if names or context make clear that X is a singleton, I'm happy to use your first form, X = S -- for example when I want to check an attribute value against a set of possible values:
<xsl:when test="#type = ('A', 'A+', 'A-', 'B+')" />
or
<xsl:when test="#type = $magic-types"/>
If I think there is a risk of confusion, then I like your sixth formulation. The less frequently I have to remember the rules for calculating an effective boolean value, the less frequently I make a mistake with them.
I prefer this one:
count(distinct-values($seq)) eq count(distinct-values(($x, $seq)))
When $x is itself a sequence, this expression implements the (value-based) subset-of relation between two sets of values that are represented as sequences. This implementation of subset-of has just linear time complexity, vs. many other ways of expressing it that have O(N^2) time complexity.
To summarize, the question whether a single value belongs to a set of values is a special case of the question whether one set of values is a subset of another. If we have a good implementation of the latter, we can simply use it for answering the former.
The functx library has a nice implementation of this function, so you can use
functx:is-node-in-sequence($X, $Y)
(this particular function can be found at http://www.xqueryfunctions.com/xq/functx_is-node-in-sequence.html)
The whole functx library is available for both XQuery (http://www.xqueryfunctions.com/) and XSLT (http://www.xsltfunctions.com/)
MarkLogic ships the functx library with their core product; other vendors may also.
Another possibility, when you want to know whether node X occurs in sequence S, is
exists((X) intersect S)
I think that's pretty readable, and concise. But it only works when X and the values in S are nodes; if you try to ask
exists(('bob') intersect ('alice', 'bob'))
you'll get a runtime error.
In the program I'm working on now, I need to compare strings, so this isn't an option.
As Dimitri notes, the occurrence of a node in a sequence is a question of identity, not of value comparison.

Mathematica's pattern matching poorly optimized?

I recently inquired about why PatternTest was causing a multitude of needless evaluations: PatternTest not optimized? Leonid replied that it is necessary for what seems to me a rather questionable method. I can accept that, though I would prefer a more efficient alternative.
I now realize, which I believe Leonid has been saying for some time, that this problem runs much deeper in Mathematica, and I am troubled. I cannot understand why this is not or cannot be better optimized.
Consider this example:
list = RandomReal[9, 20000];
Head /@ list; // Timing
MatchQ[list, {x__Integer, y__}] // Timing
{0., Null}
{1.014, False}
Checking the heads of the list is essentially instantaneous, yet checking the pattern takes over a second. Surely Mathematica could recognize that since the first element of the list is not an Integer, the pattern cannot match, and unlike the case with PatternTest I cannot see how there is any mutability in the pattern. What is the explanation for this?
There appears to be some confusion regarding packed arrays, which as far as I can tell have no bearing on this question. Rather, I am concerned with the O(n^2) time complexity on all lists, packed or unpacked.
MatchQ unpacks for these kinds of tests. The reason is that no special case for this has been implemented. In principle it could contain anything.
On["Packing"]
MatchQ[list, {x_Integer, y__}] // Timing
MatchQ[list, {x__Integer, y__}] // Timing
Improving this is very tricky - if you break the pattern matcher you have a serious problem.
Edit 1:
It is true that the unpacking is not the cause of the O(n^2) complexity. It does, however, show that for the MatchQ[list, {x__Integer, y__}] part the code goes to another part of the algorithm (which needs the lists to be unpacked). Some other things to note: this complexity arises only if both patterns are __; if either of them is _, the algorithm has better complexity.
The algorithm then goes through all n*n potential matches, and there seems to be no early bailout. Presumably this is because other patterns could be constructed that would need this complexity; the issue is that the above pattern forces the matcher into a very general algorithm.
I was then hoping for MatchQ[list, {Shortest[x__Integer], __}] and friends, but to no avail.
So, my two cents: either use a different pattern (and turn On["Packing"] to see if it goes to the general matcher) or do a pre-check: Developer`PackedArrayQ[expr] && Head[expr[[1]]] === Integer or some such.
@the author of the first answer: As far as I know from reverse-engineering and reading of available information, it may be due to different ways the patterns are checked. In fact, as they say, a special hash code is used for pattern matching. This hash (basically an FNV-1 round) makes it very easy to check for particular patterns related to the type of expression involved (a matter of a few XOR operations). The hashing algorithm cycles inside the expression, and each subpart is XORed with the output of the previous one. Special XOR values are used for each atomic expression: machine ints, machine reals, bignums, rationals and so on. Hence, for example, _Integer is easy to check, because the hash of any integer is formed with the integer's XOR value, so all we need to do is apply the inverse operation and see if it matches, i.e. if we get some particular value or something like that (sorry if I'm vague on actual implementation details; it's WIP). For general or uncommon patterns the check may not be able to take advantage of this hash and may require something different.
@the OP: Head[] simply acts on the internal expression, taking the value of the first pointer of the expression (expressions are implemented as arrays of pointers). So doing this is as easy as copying and printing a string: very, very fast. The pattern-matching engine is not even called in this case.
