Is a valid argument always true?

In discrete mathematics, some people say that if an argument is valid, then all of its premises and its conclusion are true (in reality). But others say that an argument may be valid without all of its premises and its conclusion being true in reality. Who is right?

If "the earth exploded today," then "every human would be dead".
A valid argument, but neither the hypothesis nor the conclusion is true (in reality...).

People's terminology can vary. But in my academic community ...
An argument is "valid" if it is impossible for the premises to be true without the conclusion being true. That definition makes no claims at all about the truth of the premises or conclusion.
An argument is "sound" if it is valid and all the statements (premises and conclusion) are true ("in reality", as the OP says).


Reporting *why* a query failed in Prolog in a systematic way

I'm looking for an approach, pattern, or built-in feature in Prolog that I can use to return why a set of predicates failed, at least as far as the predicates in the database are concerned. I'm trying to be able to say more than "That is false" when a user poses a query in a system.
For example, let's say I have two predicates. blue/1 is true if something is blue, and dog/1 is true if something is a dog:
blue(X) :- ...
dog(X) :- ...
If I pose the following query to Prolog and foo is a dog, but not blue, Prolog would normally just return "false":
?- blue(foo), dog(foo).
false.
What I want is to find out why the conjunction of predicates was not true, even if it is an out of band call such as:
?- getReasonForFailure(X).
X = not(blue(foo))
I'm OK if the predicates have to be written in a certain way, I'm just looking for any approaches people have used.
The way I've done this to date, with some success, is by writing the predicates in a stylized way and using some helper predicates to find out the reason after the fact. For example:
blue(X) :-
    recordFailureReason(not(blue(X))),
    isBlue(X).
And then implementing recordFailureReason/1 such that it always remembers the "reason" that happened deepest in the stack. If a query fails, whatever failure happened the deepest is recorded as the "best" reason for failure. That heuristic works surprisingly well for many cases, but does require careful building of the predicates to work well.
Any ideas? I'm willing to look outside of Prolog if there are predicate logic systems designed for this kind of analysis.
As long as you remain within the pure monotonic subset of Prolog, you may consider generalizations as explanations. To take your example, the following generalizations are conceivable, depending on your precise definition of blue/1 and dog/1.
?- blue(foo), * dog(foo).
false.
In this generalization, the entire goal dog(foo) was removed. The prefix * is actually a predicate defined like :- op(950, fy, *). *(_).
Informally, the above can be read as: not only does this query fail, even this generalized query fails. There is no blue foo at all (provided there is none). But maybe there is a blue foo, just no blue dog at all...
?- blue(_X/*foo*/), dog(_X/*foo*/).
false.
Now we have generalized the program by replacing foo with the new variable _X. In this manner the sharing between the two goals is retained.
There are more such generalizations possible like introducing dif/2.
This technique can be applied both manually and automatically. For more, there is a collection of example sessions. Also see Declarative program development in Prolog with GUPU.
Some thoughts:
Why did the logic program fail: The answer to "why" is of course "because there is no variable assignment that fulfills the constraints given by the Prolog program".
This is evidently rather unhelpful, but it is exactly the case of the "blue dog": there is no such thing (at least in the problem you model).
In fact the only acceptable answer to the blue dog problem is obtained when the system goes into full theorem-proving mode and outputs:
blue(X) <=> ~dog(X)
or maybe just
dog(X) => ~blue(X)
or maybe just
blue(X) => ~dog(X)
depending on assumptions. "There is no evidence of blue dogs". Which is true, as that's what the program states. So a "why" in this question is a demand to rewrite the program...
There may not be a good answer: "Why is there no x such that x² < 0" is ill-posed and may have as answer "just because" or "because you are restricting yourself to the reals" or "because that 0 in the equation is just wrong" ... so it depends very much.
To make a "why" more helpful, you will have to qualify this "why" somehow. This may be done by structuring the program and extending the query so that additional information collected during proof-tree construction bubbles up, but you will have to decide beforehand what information that is:
query(Sought, [Info1, Info2, Info3])
And this query will always succeed (for query/2, "success" no longer means "success in finding a solution to the modeled problem" but "success in finishing the computation").
Variable Sought will be the reified answer of the actual query you want answered, i.e. one of the atoms true or false (and maybe unknown if you have had enough of two-valued logic), and Info1, Info2, Info3 will be additional details to help you answer a "why" in case Sought is false.
Note that much of the time, the desire to ask "why" is down to the mix-up between the two distinct failures: "failure in finding a solution to the modeled problem" and "failure in finishing the computation". For example, you want to apply maplist/3 to two lists and expect this to work but erroneously the two lists are of different length: You will get false - but it will be a false from computation (in this case, due to a bug), not a false from modeling. Being heavy-handed with assertion/1 may help here, but this is ugly in its own way.
In fact, compare with imperative or functional languages w/o logic programming parts: In the event of failure (maybe an exception?), what would be a corresponding "why"? It is unclear.
Addendum
This is a great question, but the more I reflect on it, the more I think it can only be answered in a task-specific way: You must structure your logic program to be why-able, and you must decide what kind of information why should actually return. It will be something task-specific: something about missing information, "if only this or that were true" indications, where "this or that" are chosen from a dedicated set of predicates. This is of course expected, as there is no general way to make imperative or functional programs explain their results (or lack thereof) either.
I have looked a bit for papers on this (including IEEE Xplore and ACM Library), and have just found:
Reasoning about Explanations for Negative Query Answers in DL-Lite which is actually for Description Logics and uses abductive reasoning.
WhyNot: Debugging Failed Queries in Large Knowledge Bases which discusses a tool for Cyc.
I also took a random look at the documentation for Flora-2 but they basically seem to say "use the debugger". But debugging is just debugging, not explaining.
There must be more.

Representing FOL in english language

I have the following FOL formula: ∀e(S(e)) → ∃d(P(d))
And the vocabulary:
variables: e:'exam', d:'day'
functions: S:'successful', P:'party'
I initially translated that formula into:
For every successful exam, there will be a day of party
While apparently the correct translation would be something of the sort of:
You party at least one day after all exams were successful.
Why does the correct one say that we only party after ALL exams were successful?
Does ∀e(S(e)) mean: "For all exams, they will all be successful"?
And ∃d(P(d)) mean: "there exists at least one day where we party"?
And doesn't the implication translate to "if a then b"?
I think I can somehow see the logic of the correct translation, but there's something about the implication that makes me unsure...
Careful here. This formula:
∀e(S(e)) → ∃d(P(d))
Really only has one precise sense, the one you acknowledge as apparently correct:
If all exams are successful, then there will be a party.
Your translation is wrong for a subtle but significant reason. Your translation:
For every successful exam, there will be a day of party
Corresponds to this formula:
∀e.∃d(S(e) → P(d))
These formulae are not logically equivalent, that is, the following is not a tautology:
(∀e(S(e)) → ∃d(P(d))) <=> (∀e.∃d(S(e) → P(d)))
To see this, consider what happens when you pass ten exams and fail one exam. The original formula is vacuously true regardless of whether any party is had, since ∀e(S(e)) is not satisfied. However, your statement is only true if you have at least one party, since you did pass at least one exam.
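The counterexample can also be checked mechanically by evaluating both formulas over a small finite model. A sketch (the scenario of ten passed exams, one failed exam, and no parties is the one from the argument above; the variable names are my own):

```ruby
# Finite model: 11 exams (10 passed, 1 failed), 3 days, no party on any day.
exams = Array.new(10, true) + [false]  # S(e): whether exam e was successful
days  = [false, false, false]          # P(d): whether there is a party on day d

# Original formula: (∀e S(e)) → (∃d P(d))
f1 = !exams.all? { |s| s } || days.any? { |p| p }

# Proposed translation: ∀e ∃d (S(e) → P(d))
f2 = exams.all? { |s| days.any? { |p| !s || p } }

puts f1  # true  (vacuously: not all exams were passed)
puts f2  # false (some exam was passed, but there is no party day)
```

Since the two formulas disagree on this model, they cannot be logically equivalent.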

What's the difference between "false" and "no" in Prolog

I started to learn Prolog following the book Programming in Prolog: Using the ISO Standard. At page 7 of the intro to the language they make the assertion: "In Prolog the answer no is used to mean nothing unifies with the question. It is important to remember that no is not the same as false". So why does SWI-Prolog use the statements true and false instead of yes or no?
To begin with, the ISO standard (ISO/IEC 13211-1:1995) does not define
a toplevel loop. In 1 Scope it reads:
NOTE — This part of ISO/IEC 13211 does not specify:
...
f) the user environment (top level loop, debugger, library system, editor, compiler etc.) of a Prolog processor.
Traditionally, a query has been answered with yes or no. In case of yes, answer substitutions were shown, if present.
Today, with more and more constraints present in answers, the traditional toplevel loop becomes a bit cumbersome to use. What is the correct answer to ?- dif(X,a).? It cannot be yes; it might be maybe, which was first used by Jaffar et al.'s CLP(R). But very frequently one wants to reuse the answer.
?- dif(X,a).
dif(X,a).
?- dif(b,a).
true.
?- true.
true.
Following Prolog IV's pioneering toplevel, the idea in SWI is to produce text as an answer such that you can paste it back to get the very same result. In this manner the syntax of answers is specified to some degree - it has to be valid Prolog text.
So if there is no longer yes, why should there be no? For this reason SWI gives false. as an answer. Prior to SWI, Prolog IV did respond false. Note for example the following fixpoint in SWI:
?- true ; false.
true
; false.
So even this tiny detail is retained in answers. Whereas in Prolog IV this is collapsed into true because Prolog IV shows all answers in one fell swoop.
?- true ; false.
true.
For more on answers, see this.
I came across this question recently and took a look at an old (third) edition of Clocksin and Mellish which discusses the difference between yes, no and true, false. From my reading of the text this is what I understand:
yes and no are returned after Edinburgh Prolog evaluates a query using its 'database' of facts and rules. yes means the result is provable from the facts and rules in the database; no means it's not provable from those rules and facts.
true and false refer to the real world. It's possible for Prolog to return no to the query isAmerican(obama) simply because this fact is not in the database; whereas in fact (in the real world) Obama is an American and so this fact is true in reality.
Edinburgh Prolog returns yes and no to queries; however, later implementations, like SWI-Prolog, return true and false. Clearly later implementors didn't consider this distinction very important, but in fact it is a crucial distinction. When Edinburgh Prolog returns no, it means "not provable from the database"; and when SWI-Prolog returns false, it also means "not provable from the database". They mean the same thing (semantically) but they look different (syntactically) because SWI-Prolog doesn't conform entirely to Edinburgh Prolog conventions.

Ruby: target-less 'case', compared to 'if'

(I have asked this question already at Ruby Forum, but it didn't draw any answer, so I'm crossposting it now)
From my understanding, the following pieces of code are equivalent under
Ruby 1.9 and higher:
# (1)
case
when x < y
  foo
when a > b
  bar
else
  baz
end

# (2)
if x < y
  foo
elsif a > b
  bar
else
  baz
end
So far I would have always used (2), out of habit. Could someone think of a particular reason why either (1) or (2) is "better", or is it just a matter of taste?
CLARIFICATION: Some users have objected that this question would just be "opinion-based", and hence not suited to this forum. I therefore think that I did not make myself clear enough: I don't want to start a discussion on personal programming style. The reason why I brought up this topic is this:
I was surprised that Ruby offers two very different syntaxes (target-less case, and if-elsif) for, as it seems to me, exactly the same purpose, in particular since the if-elsif syntax is the one virtually every programmer is familiar with. I wouldn't even consider 'target-less case' as "syntactic sugar", because it doesn't allow me to express the programming logic more concisely than 'if-elsif'.
So I wonder in what situation I might want to use the 'target-less case' construct. Does it give a performance advantage? Is it different from if-elsif in some subtle way which I just don't notice?
ADDITIONAL FINDINGS regarding the implementation of target-less case:
Olivier Poulin has pointed out that a target-less case statement would explicitly use the === operator against the value "true", which would cause a (tiny) performance penalty for 'case' compared to 'if' (and one more reason not to use it).
However, when checking the documentation of the case statement for Ruby 1.9 and Ruby 2.0, I found that they describe it differently, but both at least suggest that === might NOT be used in this case. In the case of Ruby 1.9:
Case statements consist of an optional condition, which is in the position of an argument to case, and zero or more when clauses. The first when clause to match the condition (or to evaluate to Boolean truth, if the condition is null) “wins”
Here it says that if the condition (i.e. what comes after 'case') is null (i.e. does not exist), the first 'when' clause that evaluates to true is the one being executed. No reference to === here.
In Ruby 2.0, the wording is completely different:
The case expression can be used in two ways. The most common way is to compare an object against multiple patterns. The patterns are matched using the === method [.....]. The other way to use a case expression is like an if-elsif expression: [example of target-less case is given here].
It hence says that === is used in the "first" way (case with target), while the target-less case "is like" if-elsif. No mention of === here.
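The documentation's suggestion that === is not involved in a target-less case can be sanity-checked directly. In the sketch below, Probe is a hypothetical class written only for this illustration; its === method raises if it is ever called, yet the target-less case matches it purely by truthiness:

```ruby
# Probe is a hypothetical class whose === raises if it is ever invoked.
class Probe
  def ===(_other)
    raise "=== was called"
  end
end

result = case                         # no case target
         when false then :skipped     # falsy: clause is passed over
         when Probe.new then :matched # truthy object; === never called
         else :none
         end

puts result  # matched
```

If === were applied here the way it is in a targeted case, the Probe clause would raise instead of matching.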
Midwire ran a few benchmarks and concluded that if/elsif is faster
than case because case “implicitly compares using the more expensive
=== operator”.
Here is where I got this quote. It compares if/elsif statements to case.
It is very thorough and explores the differences in the instruction sequence, definitely will give you an idea on which is better.
The main thing I pulled from the post, though, is that there are no huge differences between if/elsif and case; both can usually be used interchangeably.
Some major differences can present themselves depending on how many cases you have.
n = 1 (last clause matches)
if: 7.4821e-07
threequal_if: 1.6830500000000001e-06
case: 3.9176999999999997e-07
n = 15 (first clause matches)
if: 3.7357000000000003e-07
threequal_if: 5.0263e-07
case: 4.3348e-07
As you can see, if the last clause is matched, if/elsif runs much slower than case, while if the first clause is matched, it's the other way around.
This difference comes from the fact that if/elsif uses branchunless, while case uses branchif in their instruction sequences.
Here is a test I did on my own with a target-less case vs if/elsif statements (using "=="). The first time is case, while the second time is if/elsif.
First test, 5 when statements, 5 if/elsif, the first clause is true for both.
Time elapsed 0.052023 milliseconds
Time elapsed 0.031467999999999996 milliseconds
Second test, 5 when statements, 5 if/elsif, the last(5th) clause is true for both.
Time elapsed 0.001224 milliseconds
Time elapsed 0.028578 milliseconds
As you can see, just as we saw before, when the first clause is true, if/elsif performs better than case, while case has a massive performance advantage when the last clause is true.
CONCLUSION
Running more tests has shown that it probably comes down to probability. If you think the answer is going to come earlier in your list of clauses, use if/elsif, otherwise case seems to be faster.
The main thing this has shown is that case and if/elsif are comparably efficient, and that using one over the other comes down to probability and taste.
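The interchangeability claim is easy to verify mechanically: for any inputs, the two forms select the same branch. A minimal sketch (the helper methods via_case and via_if are hypothetical, written for this comparison):

```ruby
# Hypothetical helpers: the same three-way branch, once as a
# target-less case and once as if/elsif.
def via_case(x, y, a, b)
  case
  when x < y then :foo
  when a > b then :bar
  else :baz
  end
end

def via_if(x, y, a, b)
  if x < y then :foo
  elsif a > b then :bar
  else :baz
  end
end

# One sample per branch: first clause, second clause, else.
samples = [[1, 2, 0, 0], [5, 2, 9, 1], [5, 2, 0, 9]]
puts samples.all? { |x, y, a, b| via_case(x, y, a, b) == via_if(x, y, a, b) }
# true
```

Timing them (e.g. with Benchmark.realtime from the standard library) is what distinguishes them; the results themselves are identical.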

Validity and satisfiability

I am having problems understanding the difference between validity and satisfiability.
Given the following:
(i) For all F, F is satisfiable or ~F is satisfiable.
(ii) For all F, F is valid or ~F is valid.
How do I prove which is true and which is false?
Statement (i) is true, as for all F, F will either be satisfiable, or ~F will be satisfiable (truth table). However, how do I go about solving for statement (ii)?
Any help is highly appreciated!
Aprilrocks92,
I don't blame you for being confused, because logicians, mathematicians, heck, even those philosopher types sometimes use the word "validity" differently.
Trying not to overcomplicate, I'll give you a thin definition: a conclusion is valid when it is true whenever the premises are true. We also say, given a suitably defined logic, that the conclusion follows as a "logical consequence" of the premises.
On the other hand, satisfiability means that there exists a valuation of the non-logical symbols in the formula F that makes the formula true in the logic.
So I should probably mention the difference between semantics and syntax to explain. The syntax of your logic is all those logical and non-logical symbols, plus the deductive rules that enable you to make "steps" towards a proof in the logic. My definition of satisfiability above mentioned the word "valuation"; now how does that fit? Well, the answer is that you need to supply a semantics: in short, this is the structure that the formulas of the logic are expressions about, usually given in set theory, and a valuation of a given F is a function that maps all the non-logical symbols in F to sets and members of sets, which a given semantics for the logic composes into a truth value.
Hmm. I'm not sure that's the best explanation, but I don't have much time.
Either way, that should help you understand the difference. To answer your question about the difference between (i) and (ii) without giving too much away, think: what's the relationship between the two? As above, a formula is satisfied under a valuation that makes it true. So you could "rewrite" my definition of validity as: a conclusion is valid iff every valuation that satisfies the premises also satisfies the conclusion.
Now, with regards your requirement to prove these things, I strongly suspect you've got a lot more context about your logic you're not telling us and your teacher or text book has intimated a context in which to answer these, as actually taken in the general sense your question doesn't make complete sense.
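To make the contrast between validity and satisfiability concrete, a propositional formula can be checked for both by enumerating valuations. A minimal sketch over a single contingent formula p (the representation of formulas as lambdas is my own choice for this illustration):

```ruby
# A formula is represented as a lambda from a valuation (here: one
# truth value for the single variable p) to a truth value.
f     = ->(p) { p }        # the contingent formula "p"
not_f = ->(p) { !f.(p) }   # its negation "~p"

vals        = [true, false]
satisfiable = ->(g) { vals.any? { |v| g.(v) } }  # true under some valuation
valid       = ->(g) { vals.all? { |v| g.(v) } }  # true under every valuation

# Statement (i): F satisfiable or ~F satisfiable -- holds for this F.
puts satisfiable.(f) || satisfiable.(not_f)  # true

# Statement (ii): F valid or ~F valid -- fails for this F: a contingent
# formula is true under some valuations and false under others, so
# neither it nor its negation is valid.
puts valid.(f) || valid.(not_f)              # false
```

A single contingent formula is thus a counterexample to (ii), while (i) holds in classical two-valued logic because any formula that is not satisfiable is false everywhere, making its negation true everywhere.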
