How do I show that the language {w|M_w accepts 0x iff it accepts 1x} is not recursive? - computation-theory

I need to show that L = {w|M_w accepts 1x iff it accepts 0x} is not recursive.
I believe this should be a simple application of Rice's Theorem, which states that for any nontrivial property P of recursively enumerable languages, {w | M_w is a Turing Machine and L(M_w) is in P} is not recursive. I am a bit unsure of this because I am not too sure what constitutes a property; moreover, I do not know how to show that it is specifically a property of recursively enumerable languages, either.

I am a bit unsure of this because I am not too sure what constitutes a property;
For the purposes of Rice's theorem, a property is any logical statement (proposition, predicate, etc.) about some language which is either true or false for that language. A property is nontrivial if the statement is neither tautological nor contradictory: that is, it is nontrivial if it is true of some languages but not others. The property "|L| < 0" is contradictory and is therefore a trivial property; "|L| >= 0" is tautological and therefore a trivial property; "|L| = 0" is a nontrivial property.
moreover, I do not know how to show that it is specifically a property of recursively enumerable languages, either.
Recursively enumerable languages are also "just languages" and their properties are also the properties of "just languages". They are sets of strings and their properties all involve what strings are in them. Do not get hung up on whether a particular property of languages is a property of recursively-enumerable languages: it is. Even if it were not, a proof using Rice's Theorem isn't affected: if the property is known not to apply to recursively-enumerable languages and to be nontrivial, then it is known the language is not recursive anyway - Rice's Theorem isn't even necessary.
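To see that your particular property is nontrivial, note that it holds of some recursively enumerable languages and not of others: for example, the language of all strings has it (a machine accepting everything accepts 0x and 1x for every x), whereas the language {01} does not (a machine accepting exactly {01} accepts 01 but not 11). And since membership of w in L depends only on L(M_w), it is indeed a property of the language, so Rice's Theorem applies and L is not recursive.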

Related

How is predicate logic represented in Prolog?

This may be a strange and broad question and not 100% a programming question, but I hope this is OK. I recently had a discussion about the fact that a lot of programs in Prolog don't follow strict predicate logic (of Frege) but often are "object oriented", which I am trying to grasp.
I know that Prolog is based on first-order predicate logic, especially Horn Clauses, and that they are a special form of modus ponens. A fact and a rule, if they occur solo, are simply clauses, but as soon as I add more than one occurrence they become a predicate.
How are the quantifiers of first-order predicate logic represented and related to fact, rule, predicate or the Prolog concept in general? What does the functor express, and what do the arguments express, in relation to predicate logic? How are predicate logic and first-order predicate logic reflected in Prolog, and where does Prolog depart from their concepts? E.g. how would I define a point, a line and a vertical line in predicate logic and first-order predicate logic?
How do I formulate this in predicate logic and first-order predicate logic? What is the semantic and logical difference between
vertical(line).
line(vertical).
Or a line and point in this example. Are point and line not predicate logic?
For me it is "point(X), the set of all points", and when I pick a concrete point, "there exists one point(110, 12)".
point(X,Y).
line(point(W,X), point(Y,Z)).
vertical(line(point(X,Y), point(X,Z))).
horizontal(line(point(X,Y), point(Z,Y))).
Any info helps! Many thanks, H
A chapter of Programming in Prolog by W. Clocksin and C. Mellish is devoted to explaining the relation of Prolog to logic. Citing from there:
If we wish to discuss how Prolog is related to logic, we must first establish what we mean by logic. Logic was originally devised as a way of representing the form of arguments, so that it would be possible to check in a formal way whether or not they are valid. Thus we can use logic to express propositions, the relations between propositions and how one can validly infer some propositions from others. The particular form of logic that we will be talking about here is called the Predicate Calculus. We will only be able to say a few words about it here. There are scores of good basic introductions to logic you can turn to for background reading.

If we wish to express propositions about the world, we must be able to describe the objects that are involved in them. In Predicate Calculus, we represent objects by terms. A term is of one of the following forms:

A constant symbol. This is a symbol that stands for a single individual or concept. We can think of this as a Prolog atom, and we will use the Prolog syntax. So greek, agatha, and peace are constant symbols.

A variable symbol. This is a symbol that we may want to stand for different individuals at different times. Variables are really only introduced in conjunction with quantifiers, which are discussed below. We can think of them as Prolog variables and will use the Prolog syntax. Thus X, Man, and Greek are variable symbols.

A compound term. A compound term consists of a function symbol, together with an ordered set of terms as its arguments. The idea is that the compound term represents some individual that depends on the individuals represented by the arguments. The function symbol represents how the first depends on the second. For instance, we could have a function symbol standing for the notion of "distance" and two arguments. In this case, the compound term stands for the distance between the objects represented by the arguments. We can think of a compound term as a Prolog structure with the function symbol as the functor. We will write Predicate Calculus compound terms using the Prolog syntax, so that, for instance, wife(henry) might mean Henry's wife, distance(point1, X) might mean the distance between some particular point and some other place to be specified, and classes(mary, dayafter(W)) might mean the classes that Mary teaches on the day after some day W to be specified.
Thus in Predicate Calculus the ways of representing objects are just like the ways available in Prolog.
It doesn't seem appropriate to put the entire chapter here... there is also a very explanatory program in Appendix B that performs an automatic translation of WFFs into clauses.
The book is very readable; it's just a pity it's not among the titles in the Free Prolog Programming Books section.
I know that Prolog is based on first-order predicate logic, especially Horn Clauses, and that they are a special form of modus ponens.
In a sense, inverse "modus ponens":
a :- b
You want to show "a true", and to do so, you have to show "b true"
A fact and a rule, if they occur solo, are simply clauses, but as soon as I add more than one occurrence they become a predicate.
No, they are all predicates. The "predicate" is an object/agent/program/platonic-phenomenon which expresses that there (objectively) is some "relationship" between "things", and you can ask the Prolog Processor about that relationship. There is no direct meaning associated to all of that though, it's "strings related to strings via strings". We are working with syntactic machines after all (i.e. computers).
Enter this logic program:
p(x,y). % Predicate p/2 states that there is a relationship p between x and y
And now, you can query the database about what the program is saying:
?- p(x,y).
true. % a p relationship exists (fact, but could also be rule)
?- p(x,A).
A = y. % the thing related to x via p is y
?- p(A,y).
A = x. % the thing related to y via p is x
?- p(A,B).
A = x, % things related via p are x and y
B = y.
?- p(c,d).
false. % not REALLY "false" but "as far as I can tell, there
% is no relationship p between c and d"
Note the interpretation of "false", which is not the "strong false" of classical logic. Even though it is traditionally stated that Prolog works in classical logic, this is not really the case:
From "Logic Programming with Strong Negation" (David Pearce, Gerd Wagner, FU Berlin, 1991), appears in Springer LNAI 475: Extensions of Logic Programming, International Workshop Tübingen, FRG, December 8–10, 1989 Proceedings):
According to the standard view, a logic program is a set of definite Horn clauses. Thus, logic programs are regarded as syntactically restricted first-order theories within the framework of classical logic. Correspondingly, the proof theory of logic programs is considered as the specialized version of classical resolution, known as SLD-resolution. This view, however, neglects the fact that a program clause, a_0 <- a_1, a_2, ..., a_n, is an expression of a fragment of positive logic (a subsystem of intuitionistic logic) rather than an implicational formula of classical logic. The classical interpretation of logic programs, therefore, seems to be a semantical overkill.
It should be clear that in order to explain the deduction mechanism of Prolog one does not have to refer to the indirect method of SLD-resolution which checks for the refutability of the contrary. It is certainly more natural to view Prolog's proof procedure as a kind of natural deduction, as, for example, in [Hallnäs & Schroeder-Heister 1987] and [Miller 1989]. This also is more in line with the intuitions of a Prolog programmer. Since Prolog is the paradigm, logic programming semantics should take it as a point of departure.
Now:
How are the quantifiers of first-order predicate logic represented and related
to fact, rule, predicate or the Prolog concept in general?
That is a long story. Note that Prolog is primarily about "programming using logic", and also about "modeling using logic". The two aspects certainly overlap well for problems that can be solved using explicit enumeration, but Prolog is not made for specifying general FOL constraints describing a sought-for solution. In fact, certain FOL constraints cannot be represented, and others have to be transformed into nominally equivalent expressions that are agreeable to the machine. Look up "skolemization". For example: https://www.cs.toronto.edu/~sheila/384/w11/Lectures/csc384w11-KR-tutorial.pdf
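To give a flavour of what skolemization does (a standard textbook example, not taken from those notes): the FOL sentence ∀X ∃Y parent(Y, X) cannot be written directly as a Prolog clause; skolemization replaces the existentially quantified Y with a fresh function symbol applied to the universally quantified variables, giving ∀X parent(f(X), X), which Prolog can state as the fact parent(f(X), X).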
On the flip side, Prolog provides "meta-predicates" which generate solutions by calling other predicates, so it's making forays into second-order logic. As it must - nobody can survive in the FOL desert for long.
What does the functor express
Nothing. It just stands for itself. Pure syntax. Look up "Herbrand Universe".
How do I formulate this in predicate logic and first-order predicate logic?
What is the semantic and logical difference between
vertical(line).
line(vertical).
It's you who imbues vertical and line with meaning. So, feelings. You want a "vertical line", so you would say, the "thing" is the "line" and "vertical" is an attribute of the "line". So vertical(line) sounds appropriate. Or maybe attribute(line,vertical). It depends.
Here:
point(X,Y).
line(point(W,X), point(Y,Z)).
You have two aspects:
Predicates express "relationships". "Function symbols" are used to construct "things with structure": you can form trees of stuff with function symbols on nodes and integers/strings/variables on leaves. These are called "terms". But terms can appear as predicates or as things, depending on the context; it's quite fluid. So you can, for example, construct a Prolog program with another Prolog program.
point(X,Y)
line(point(W,X), point(Y,Z))
These are terms!
If you type this into a file program.pl:
point_on_line(point(X,Y),line(point(W,X), point(Y,Z))).
The terms appear as "things" related by predicate point_on_line/2. The whole line is itself a term.
If you type this into a file program.pl:
point(X,Y).
line(point(W,X), point(Y,Z)).
The terms appear as "predicates", and point appears both as predicate point/2 and as "thing" about which predicate line/2 is talking.
This is actually a vast subject and it takes some time getting used to it, much more than functional programming. I had some Prolog and Logic courses at uni but 20 years later I found out that I had badly misunderstood a lot of aspects.

Equality vs. Bicondition in Z3

I'm trying to understand the difference in Z3 between equality testing and biconditional. My understanding is that = is used to express biconditional, but how is equality tested?
For example. I am trying to write something similar to the following (toy) statement in z3:
on_table(o, a) ↔ (in_hand(o) ∧ a != pickup(o)) ∨ a = put_on_table(o)
Note: I am aware the above statement can be factored into a set of implications, but I am interested in expressing it as a single biconditional.
For the Bool type, equality and biconditional are the same operations. For any other type, biconditional doesn't really make sense.
All logics in SMT come equipped with the notion of equality, which is essentially term-level equality of objects. The standard explicitly states:
Version 2.6 of the SMT-LIB format adopts as its underlying logic a
version of many-sorted first-order logic with equality [Man93, Gal86,
End01].
See Section 2.2 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
The same document also says (Section 3.7.1):
Note the absence of a symbol for double implication. Such a connective
is superfluous because the equality symbol = can be used in its place.
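For instance, your toy statement could be modelled in Z3's Python API roughly as follows; the sorts and function declarations below are my own illustrative choices, not the only reasonable way to set it up:
from z3 import DeclareSort, Function, BoolSort, Const, ForAll, And, Or, Solver

Obj = DeclareSort('Obj')   # hypothetical sort of objects
Act = DeclareSort('Act')   # hypothetical sort of actions
on_table = Function('on_table', Obj, Act, BoolSort())
in_hand = Function('in_hand', Obj, BoolSort())
pickup = Function('pickup', Obj, Act)
put_on_table = Function('put_on_table', Obj, Act)
o, a = Const('o', Obj), Const('a', Act)

s = Solver()
# '==' between Bool-sorted terms is the biconditional; between Act-sorted
# terms it is plain term equality, so both readings of '=' occur here.
s.add(ForAll([o, a],
             on_table(o, a) == Or(And(in_hand(o), a != pickup(o)),
                                  a == put_on_table(o))))
print(s.check())  # expected: sat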
I suspect though, perhaps, you are trying to ask for something else. Some further examples would definitely help.

Validity and satisfiability

I am having problems understanding the difference between validity and satisfiability.
Given the following:
(i) For all F, F is satisfiable or ~F is satisfiable.
(ii) For all F, F is valid or ~F is valid.
How do I prove which is true and which is false?
Statement (i) is true, as for all F, F will either be satisfiable, or ~F will be satisfiable (truth table). However, how do I go about solving for statement (ii)?
Any help is highly appreciated!
Aprilrocks92,
I don't blame you for being confused, because actually logicians, mathematicians, heck even those philosopher types sometimes use the word validity differently.
Trying not to overcomplicate, I'll give you a thin definition: a conclusion is valid when it is true whenever the premises are true. We also say, given a suitably defined logic, that the conclusion follows as a "logical consequence" of the premises.
On the other hand, satisfiability means that there exists a valuation of the non-logical symbols in the formula F that makes the formula true in the logic.
So I should probably mention the difference between semantics and syntax to explain. The syntax of your logic is all those logical and non-logical symbols, and the deductive rules that enable you to make "steps" towards proof in the logic. My definition of satisfiability above mentioned the word "valuation" - now how does that fit? Well, the answer is that you need to supply a semantics: in short, this is the structure that the formulas of the logic are expressions of, usually given in set theory, and a valuation of a given F is a function that maps all the non-logical symbols in F to sets and members of sets, which a given semantics for the logic composes into a truth value.
Hmm. I'm not sure that's the best explanation, but I don't have much time.
Either way, that should help you understand the difference. To answer your question about the difference between (i) and (ii) without giving too much away, think: what's the relationship between the two? Well, as above, a formula F is true given a valuation that sends the sentence to true. So you could "rewrite" my definition of validity as: a conclusion is valid iff whenever the premises are satisfiable the conclusion is satisfiable.
Now, with regard to your requirement to prove these things, I strongly suspect you've got a lot more context about your logic that you're not telling us, and your teacher or textbook has intimated a context in which to answer these; taken in the general sense, your question doesn't make complete sense.

Using a binary decision diagram for the conceptual representation of simple rules

I'm attempting to express rules more formally than plain English sentences and was hoping for some direction in using a propositional approach and some sort of binary decision tree for illustration of the rules.
Suppose that objects outside a specified zone are required to be in a particular state (say RedState) in order to be considered safe. Expressed as plain English sentences:
if an object is outside of ZoneA and is in a RedState, then it is Safe.
However, in some instances objects may be exempt from this restriction:
if an object is outside of ZoneA, is not in a RedState and is Exempt, then it is Safe.
if an object is outside of ZoneA, is not in a RedState and is not Exempt, then it is not Safe.
Whether or not an object in zone A is in a red state is unimportant. The remaining rule is:
if an object is contained in zone A, then it is safe.
Using a propositional formulation, I thought these rules could be expressed as
¬InZoneA ∧ RedState ⇒ Safe
¬InZoneA ∧ ¬RedState ∧ Exempt ⇒ Safe
¬InZoneA ∧ ¬RedState ∧ ¬Exempt ⇒ ¬Safe
InZoneA ⇒ Safe
I've consulted system specification approaches (such as Z) but am more interested in conveying a concise conceptual idea of the rules and less so in ensuring their functioning within a larger system. I therefore thought to represent them as a type of binary decision tree (diagram). I've read some notes on the subject but am a little unsure as to whether their use is the best approach or if I am butchering them. The representation I arrive at for these rules is presented in the figure, where solid lines indicate True and dashed lines indicate False.
I would greatly appreciate your input as to whether or not this representation is correct or if my approach/thinking is flawed. Many Thanks!
What you have shown is fine, as far as I can tell.
However, there are some other things you might want to consider.
For example, if Exempt is available at any level, you might want to evaluate it before anything else, since it depends on nothing and nothing depends on it. This could save you time by not having to evaluate other properties that will not affect the outcome.
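If it helps to sanity-check the diagram, here is a small Python sketch (the function name and argument order are my own choices) that reads your four rules as nested decisions, in the same order as your formulation, and prints the full truth table:
from itertools import product

def safe(in_zone_a, red_state, exempt):
    # InZoneA => Safe
    if in_zone_a:
        return True
    # not InZoneA and RedState => Safe
    if red_state:
        return True
    # not InZoneA and not RedState: Exempt decides
    return exempt

# Enumerate all eight assignments to confirm each case is covered exactly once.
for in_zone_a, red_state, exempt in product([False, True], repeat=3):
    print(in_zone_a, red_state, exempt, "->", safe(in_zone_a, red_state, exempt))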

most readable way in XPath to write "is value X a member of sequence S"?

XPath 2.0 has some new functions and syntax, relative to 1.0, that work with sequences. Some of these don't really add to what the language could already do in 1.0 (with node sets), but they make it easier to express the desired logic in ways that are more readable. This increases the chances of the programmer getting the code correct -- and keeping it that way. For example,
empty(s) is equivalent to not(s), but its intent is much clearer when you want to test whether a sequence is empty.
Correction: the effective boolean value of a sequence is in general more complicated than that. E.g. empty((0)) != not((0)). This applies to exists(s) vs. s in a boolean context as well. However, there are domains of s where empty(s) is equivalent to not(s), so the two could be used interchangeably within those domains. But this goes to show that the use of empty() can make a non-trivial difference in making code easier to understand.
Similarly, exists(s) is equivalent to boolean(s) that already existed in XPath 1.0 (or just s in a boolean context), but again is much clearer about the intent.
Quantified expressions; e.g. "some $x in expression satisfies test($x)" would be equivalent to boolean(expression[test(.)]) (although the new syntax is more flexible, in that you don't need to worry about losing the context item because you have the variable to refer to it by).
Similarly, "every $x in expression satisfies test($x)" would be equivalent to not(expression[not(test(.))]) but is more readable.
These functions and syntax were evidently added at no small cost, solely to serve the goal of writing XPath that is easier to map to how humans think. This implies, as experienced developers know, that understandable code is significantly superior to code that is difficult to understand.
Given all that ... what would be a clear and readable way to write an XPath test expression that asks
Does value X occur in sequence S?
Some ways to do it: (Note: I used X and S notation here to indicate the value and the sequence, but I don't mean to imply that these subexpressions are element name tests, nor that they are simple expressions. They could be complicated.)
X = S: This would be one of the most unreadable, since it requires the reader to think about which of X and S are sequences vs. single values, and to understand general comparisons, which are not obvious from the syntax.
However, one advantage of this form is that it allows us to put the topic (X) before the comment ("is a member of S"), which, I think, helps in readability.
See also CMS's good point about readability, when the syntax or names make the "cardinality" of X and S obvious.
index-of(S, X): This one is clear about what's intended as a value and what as a sequence (if you remember the order of arguments to index-of()). But it expresses more than we need to: it asks for the index, when all we really want to know is whether X occurs in S. This is somewhat misleading to the reader. An experienced developer will figure out what's intended, with some effort and with understanding of the context. But the more we rely on context to understand the intent of each line, the more understanding the code becomes a circular (spiral) and potentially Sisyphean task! Also, since index-of() is designed to return a list of all the indexes of occurrences of X, it could be more expensive than necessary: a smart processor, in order to evaluate X = S, wouldn't necessarily have to find all the contents of S, nor enumerate them in order; but for index-of(S, X), correct order would have to be determined, and all contents of S must be compared to X. One other drawback of using index-of() is that it's limited to using eq for comparison; you can't, for example, use it to ask whether a node is identical to any node in a given sequence.
Correction: This form, used as a conditional test, can result in a runtime error: Effective boolean value is not defined for a sequence of two or more items starting with a numeric value. (But at least we won't get wrong boolean values, since index-of() can't return a zero.) If S can have multiple instances of X, this is another good reason to prefer form 3 or 6.
exists(index-of(S, X)): makes the intent clearer, and would help the processor eliminate the performance penalty if the processor is smart enough.
some $m in S satisfies $m eq X: This one is very clear, and matches our intent exactly. It seems long-winded compared to 1, and that in itself can reduce readability. But maybe that's an acceptable price for clarity. Keep in mind that X and S could potentially be complex expressions themselves -- they're not necessarily just variable references. An advantage is that since the eq operator is explicit, you can replace it with is or any other comparison operator.
S[. eq X]: clearer than 1, but shares the semantic drawbacks of 2: it computes all members of S that are equal to X. Actually, this could return a false negative (incorrect effective boolean value), if X is falsy. E.g. (0, 1)[. eq 0] returns 0 which is falsy, even though 0 occurs in (0, 1).
exists(S[. eq X]): Clearer than 1, 2, 3, and 5. Not as clear as 4, but shorter. Avoids the drawbacks of 5 (or at least most of them, depending on the processor smarts).
I'm kind of leaning toward the last one, at this point: exists(S[. eq X])
What about you... As a developer coming to a complex, unfamiliar XSLT or XQuery or other program that uses XPath 2.0, and wanting to figure out what that program is doing, which would you find easiest to read?
Apologies for the long question. Thanks for reading this far.
Edit: I changed = to eq wherever possible in the above discussion, to make it easier to see where a "value comparison" (as opposed to a general comparison) was intended.
For what it's worth, if names or context make clear that X is a singleton, I'm happy to use your first form, X = S -- for example when I want to check an attribute value against a set of possible values:
<xsl:when test="#type = ('A', 'A+', 'A-', 'B+')" />
or
<xsl:when test="#type = $magic-types"/>
If I think there is a risk of confusion, then I like your sixth formulation. The less frequently I have to remember the rules for calculating an effective boolean value, the less frequently I make a mistake with them.
I prefer this one:
count(distinct-values($seq)) eq count(distinct-values(($x, $seq)))
When $x is itself a sequence, this expression implements the (value-based) subset-of relation between two sets of values that are represented as sequences. This implementation of subset-of has just linear time complexity -- vs. many other ways of expressing this that have O(N^2) time complexity.
To summarize, the question whether a single value belongs to a set of values is a special case of the question whether one set of values is a subset of another. If we have a good implementation of the latter, we can simply use it for answering the former.
The functx library has a nice implementation of this function, so you can use
functx:is-node-in-sequence($X, $Y)
(this particular function can be found at http://www.xqueryfunctions.com/xq/functx_is-node-in-sequence.html)
The whole functx library is available for both XQuery (http://www.xqueryfunctions.com/) and XSLT (http://www.xsltfunctions.com/)
Marklogic ships the functx library with their core product; other vendors may also.
Another possibility, when you want to know whether node X occurs in sequence S, is
exists((X) intersect S)
I think that's pretty readable, and concise. But it only works when X and the values in S are nodes; if you try to ask
exists(('bob') intersect ('alice', 'bob'))
you'll get a runtime error.
In the program I'm working on now, I need to compare strings, so this isn't an option.
As Dimitri notes, the occurrence of a node in a sequence is a question of identity, not of value comparison.
