I'm building a propositional logic library and am running into some conceptual (and computational) problems to do with arbitrary tautologies and contradictions.
Firstly, I assume the symbols \top and \bot are both well-formed formulas.
If \top represents an arbitrary tautology, and it is known that any formula can be converted to NNF, it seems that \top is both in NNF and not in NNF simultaneously: the symbol itself contains no negations, yet the arbitrary tautology it stands for need not be in NNF.
This leads me to conclude that my assumption was wrong and that they are not conventional well-formed formulas, but rather special forms of syntactic sugar that are undefined with respect to NNF. Thus, if my NNF converter is given a formula such as \alpha\lor\top, it should "treat \top as it would an atom" and just return the formula (no simplifying, since that's not the function's job). This solution would extend to other normal forms too.
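For concreteness, here is a minimal sketch of what I mean (Python; the representation and names are just placeholders, not my real library code):

# Formulas as nested tuples; TOP and BOT are treated exactly like atoms during NNF conversion.
TOP, BOT = ("top",), ("bot",)

def atom(name): return ("atom", name)
def neg(f):     return ("not", f)
def disj(a, b): return ("or", a, b)
def conj(a, b): return ("and", a, b)
def impl(a, b): return ("implies", a, b)

def is_literal(f):
    # TOP and BOT count as "atoms" here, so they may also appear under a single negation
    return f[0] in ("atom", "top", "bot") or (f[0] == "not" and f[1][0] in ("atom", "top", "bot"))

def to_nnf(f):
    # Push negations down to the literals; no simplification of TOP or BOT.
    if is_literal(f):
        return f
    tag = f[0]
    if tag == "implies":
        return to_nnf(disj(neg(f[1]), f[2]))
    if tag in ("and", "or"):
        return (tag, to_nnf(f[1]), to_nnf(f[2]))
    if tag == "not":
        g = f[1]
        if g[0] == "not":
            return to_nnf(g[1])
        if g[0] == "and":
            return disj(to_nnf(neg(g[1])), to_nnf(neg(g[2])))
        if g[0] == "or":
            return conj(to_nnf(neg(g[1])), to_nnf(neg(g[2])))
        if g[0] == "implies":
            return to_nnf(neg(disj(neg(g[1]), g[2])))
    raise ValueError(f)

# to_nnf(disj(atom("a"), TOP)) returns the formula unchanged, which is the behaviour I want.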
Is this thinking correct? Any suggestions as to how I could better deal with them?
Thanks in advance to anyone who responds! I imagine this may be entry-level stuff.
Related
More specifically, given an arbitrary Lean proof/theorem, is it possible to express it solely using first-order logic? If so, is it practical, i.e. will the generated FOL not be enormously large?
I have seen https://www.cl.cam.ac.uk/~lp15/papers/Automation/translations.pdf, but since I am not an expert, I am still not sure whether all of Lean's proof code can be converted.
Other mathematical proof languages are also OK.
The short answer is: yes, it is not impractically large, and this is done in particular when translating proofs to SMT solvers for sledgehammer-like tools. There is a fair amount of blowup, but it is a linear factor, on the order of 2-5x. You probably lose more from not having specific support for all the built-in rules and, in the case of DTT (dependent type theory), from writing down all the defeq (definitional equality) proofs that are normally implicit.
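To give a rough picture of where the blowup comes from (this is the general style of encoding described in the Paulson paper linked above, sketched from memory rather than taken from any particular Lean export): higher-order application is made explicit through a first-order function symbol, and each λ-abstraction is replaced by a fresh constant with its own defining axiom, roughly

f x y   becomes   app(app(f, x), y)
λx.t    becomes   a fresh constant c together with the axiom ∀x. app(c, x) = t

Each such step adds only a constant amount of extra text per node of the original term, which is why the overall growth stays roughly linear.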
I am reading a book on the λ-calculus, "Functional Programming Through Lambda Calculus" (Greg Michaelson). In the book the author introduces a shorthand notation for defining functions. For example:
def identity = λx.x
and goes on to say that we should insist that, when using such shorthand, "all defined names should be replaced by their definitions before the expression is evaluated".
Later on, when introducing recursion, he uses as an example a definition of the addition function such as:
def add x y = if iszero y then x else add (succ x) (pred y)
and goes on to say that, had we not had the restriction mentioned above, we would be able to evaluate this function by slowly expanding it. However, since we have the restriction of replacing all defined names before the expression is evaluated, we cannot do that: we would go on replacing add indefinitely, hence the need to think about recursion in a more careful way.
My question is thus the following: what are the theoretical or practical reasons for placing this restriction upon ourselves (of having to replace all defined names before the expression is evaluated)? Are there any?
I was trying to show how to build a rich language from a very simple one, by adding successive layers of syntax, where each layer could be translated into the previous layer. So it's important to distinguish translation, which must terminate, from evaluation which needn't. I think it's really interesting that recursion can be translated into non-recursion. I'm sorry if my explanation isn't helpful.
The reason is that we want to stay within the rules of the lambda calculus. Allowing names for terms to mean anything other than immediate substitution would mean adding a recursive let expression to the language, and that would require a truly more expressive system (no longer the lambda calculus).
You can think of the names as no more than syntactic sugar for the original lambda term. The Y-combinator is exactly the way to introduce recursion into a system that does not have it built in.
If the book you are currently reading confuses you, you might want to search for some additional resources on the internet explaining the Y-combinator.
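For reference (this is standard material, not specific to the book), the Y combinator can be written in the book's shorthand as

def Y = λf.(λx.f (x x)) (λx.f (x x))

and it satisfies Y g = g (Y g) (up to β-reduction) for every term g, which is exactly the property that lets a "recursive" definition be rewritten without ever mentioning its own name.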
I will try to post my own answer, the way I understand it.
For the untyped lambda calculus there is no practical reason why we need the Y combinator. By practical I mean that if someone wants to build an expression evaluator, it is possible to do it without the combinator, by just slowly expanding the definition.
For theoretical reasons, though, we need to make sure that when we define a function, the definition has some meaning and is not merely defined in terms of itself. For example, there is not much meaning in the following definition:
def something = something
For this reason, we need to see whether it is possible to rewrite the definition in a way that is not self-referential, i.e. whether it is possible to define something without referring to itself. It turns out that in the untyped lambda calculus we can always do that, through the Y-combinator.
Using the Y-combinator we can always construct a solution to the fixed-point equation x = f(x), and hence x = f(x) = f(f(x)) = f(f(f(x))) = ..., for any f; i.e. we can always rewrite a self-referential definition into an equivalent definition that does not refer to itself.
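To make this concrete, here is a small sketch in Python (my own illustration, not from the book; Python is strict, so it uses the Z combinator, the eta-expanded call-by-value variant of Y):

# Z combinator: a call-by-value variant of the Y combinator
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# "add" written so that the recursive call arrives as the parameter rec,
# instead of the definition referring to its own name
add_step = lambda rec: lambda x: lambda y: x if y == 0 else rec(x + 1)(y - 1)

add = Z(add_step)   # no self-reference anywhere in this definition
print(add(3)(4))    # prints 7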
I'm taking my first baby steps in learning functional programming using F#, and I've just come across the Forward Pipe (|>) and Forward Composition (>>) operators. At first I thought they were just sugar rather than having an effect on the final running code (though I know piping helps with type inference).
However I came across this SO article:
What are advantages and disadvantages of “point free” style in functional programming?
It has two interesting and informative answers (which, instead of simplifying things for me, opened a whole can of worms surrounding "point-free" or "pointless" style). My take-home from these (and other reading around) is that point-free style is a debated area. Like lambdas, point-free style can make code easier to understand or much harder, depending on use. It can also help in naming things meaningfully.
But my question concerns a comment on the first answer:
AshleyF muses in the answer:
“It seems to me that composition may reduce GC pressure by making it more obvious to the compiler that there is no need to produce intermediate values as in pipelining; helping make the so-called "deforestation" problem more tractable.”
Gasche replies:
“The part about improved compilation is not true at all. In most languages, point-free style will actually decrease performances. Haskell relies heavily on optimizations precisely because it's the only way to make the cost of these things bearable. At best, those combinators are inlined away and you get an equivalent pointful version”
Can anyone expand on the performance implications (in general, and specifically for F#)? I had just assumed it was a writing-style thing and that the compiler would boil both idioms down to equivalent code.
This answer is going to be F#-specific. I don't know how the internals of other functional languages work, and the fact that they don't compile to CIL could make a big difference.
I can see three questions here:
What are the performance implications of using |>?
What are the performance implications of using >>?
What is the performance difference between declaring a function with its arguments and without them?
The answers (using examples from the question you linked to):
Is there any difference between x |> sqr |> sum and sum (sqr x)?
No, there isn't. The compiled CIL is exactly the same (here represented in C#):
sum.Invoke(sqr.Invoke(x))
(Invoke() is used, because sqr and sum are not CIL methods, they are FSharpFunc, but that's not relevant here.)
Is there any difference between (sqr >> sum) x and sum (sqr x)?
No, both samples compile to the same CIL as above.
Is there any difference between let sumsqr = sqr >> sum and let sumsqr x = (sqr >> sum) x?
Yes, the compiled code is different. If you specify the argument, sumsqr is compiled into a normal CLI method. But if you don't specify it, it's compiled as a property of type FSharpFunc with a backing field, whose Invoke() method contains the code.
The effect of all this is that invoking the point-free version means loading one extra field (the FSharpFunc), which is not needed if you specify the argument. But I think that shouldn't measurably affect performance, except in the most extreme circumstances.
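If it helps to see the shape of that difference outside F# (a loose Python analogy of my own, not what the F# compiler actually emits): the point-free version stores a composed closure in a binding and every call goes through it, while the version with an explicit argument is an ordinary function whose body simply makes the two calls.

def sqr(xs):
    return [v * v for v in xs]

def sum_list(xs):
    return sum(xs)

compose = lambda g, f: lambda x: g(f(x))

sumsqr_pointfree = compose(sum_list, sqr)   # a stored closure; every call goes through it

def sumsqr(xs):                             # an ordinary function with an explicit argument
    return sum_list(sqr(xs))

print(sumsqr_pointfree([1, 2, 3]), sumsqr([1, 2, 3]))   # 14 14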
I am stuck on Kripke semantics, and wonder if there is educational software through which I can test equivalence of statements etc., since I'm starting to think it's easier to learn by example (even if on abstract variables).
I will use
☐A to write necessarily A
♢A for possibly A
Do ☐true, ☐false, ♢true, ♢false evaluate to values? If so, what values or kinds of values, from what set ({true, false} or perhaps {necessary, possibly})? [1]
I think I read that all Kripke models use the duality axiom:
(☐A)->(¬♢¬A)
i.e. if it's necessary to paytax, then it's not allowed to not paytax
(irrespective of whether it's necessary to pay tax...)
i.e. (2) if it's necessary to earnmoney, it's not allowed to not earnmoney
(again irrespective of whether earning money is really necessary; the logic holds, so far)
Since A->B is equivalent to ¬A<-¬B, let's test:
¬☐A<-♢¬A
it's not necessary to upvote if it's allowed to not upvote
this axiom works dually:
♢A->¬☐¬A
If it's allowed to earnmoney, then it's not necessary to not earnmoney
Not all modalities behave the same, and different Kripke models are more suitable for modelling one modality than another: not all Kripke models use the same axioms. (Are classical quantifiers also modalities? If so, do Kripke models allow modelling them?)
I will go through the list of common axioms and try to find examples that make them seem counterintuitive or unnecessary to postulate...
☐(A->B)->(☐A->☐B):
if (it's necessary that (earningmoney implies payingtaxes))
then ((necessity of earningmoney) implies (necessity of payingtaxes))
Note that earning money does not imply paying taxes; the falsehood of the implication A->B does not affect the truth value of the axiom...
Urgh, it's taking too long to phrase my problems in trying to understand it all... feel free to edit.
Modal logic provers and reasoners:
http://www.cs.man.ac.uk/~schmidt/tools/
http://www.cs.man.ac.uk/~sattler/reasoners.html
Tableau engines in Java:
http://www.irisa.fr/prive/fschwarz/lotrecscheme/
https://github.com/gertvv/oops/wiki
http://molle.sourceforge.net/
Modal logic calculators:
http://staff.science.uva.nl/~jaspars/lvi98/Week3/modal.html
http://www.ffst.hr/~logika/implog/doku.php?id=program:possible_worlds
http://www.personeel.unimaas.nl/roos/EpLogic/start.htm
Lectures for practical game implementations of epistemic logic:
http://www.ai.rug.nl/mas/
Very good PhD thesis:
http://www.cs.man.ac.uk/~schmidt/mltp/
http://www.harrenstein.nl/Publications.dir/Harrenstein.pdf.gz
Lectures about modal logic (in action, conflict, games):
http://www.logicinaction.org/
http://www.masfoundations.org/download.html
Modal Logic for Open Minds, http://logicandgames.pbworks.com/f/mlbook-almostfinal.pdf (the final version is not free)
Video lectures about modal logic and logic in general:
http://videolectures.net/ssll09_gore_iml/
http://videolectures.net/esslli2011_benthem_logic/
http://videolectures.net/esslli2011_jaspars_logic/
http://www.youtube.com/view_play_list?p=C88812FFE0F526B0
I'm not sure whether educational software for teaching relational semantics for modal logics exists. However, I can attempt to answer some of the questions you have asked.
First, the modal operators for necessity and possibility operate on propositions, not truth values. Hence, if φ is a proposition then both ☐φ and ♢φ are propositions. Because neither true nor false is a proposition, none of ☐true, ♢true, ☐false, and ♢false are meaningful sequences of symbols.
Second, what you refer to as the "duality axiom" is usually the expression of the interdefinability of the modal operators. It can be introduced as an axiom in an axiomatic development of modal logic or derived as a consequence of the semantics of the modal operators.
Third, the classical quantifiers are not modal operators and don't express modal concepts. In fact, modal logics are generally defined by introducing the modal operators into either propositional or predicate logics. I think your confusion arises because the semantics of modal operators appears similar to the semantics of quantifiers. For instance, the semantics of the necessity operator appears similar to the semantics of the universal quantifier:
⊧ ∀x.φ(x) ≡ φ(α) is true for all α in the domain of quantification
⊧_w ☐φ ≡ φ is true in every possible world accessible from w
A similarity is seen when comparing the possibility operator with the existential quantifier. In fact, the modal operators can be defined as quantifiers over possible worlds. As far as I know, the converse isn't true.
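That last clause is easy to turn into something you can experiment with. Here is a very small model checker in Python (my own quick sketch, not an established educational tool); a model is a set of worlds, an accessibility relation R, and a valuation V saying which atoms hold at which world:

# Evaluate a modal formula at a world of a Kripke model.
def holds(model, w, phi):
    worlds, R, V = model
    op = phi[0]
    if op == "atom":    return phi[1] in V[w]
    if op == "not":     return not holds(model, w, phi[1])
    if op == "and":     return holds(model, w, phi[1]) and holds(model, w, phi[2])
    if op == "or":      return holds(model, w, phi[1]) or holds(model, w, phi[2])
    if op == "implies": return (not holds(model, w, phi[1])) or holds(model, w, phi[2])
    if op == "box":     return all(holds(model, v, phi[1]) for v in R[w])  # true in every accessible world
    if op == "dia":     return any(holds(model, v, phi[1]) for v in R[w])  # true in some accessible world
    raise ValueError(phi)

# Example: two worlds; p holds only at w2; w1 can see both worlds, w2 only itself.
model = ({"w1", "w2"},
         {"w1": {"w1", "w2"}, "w2": {"w2"}},
         {"w1": set(), "w2": {"p"}})

print(holds(model, "w1", ("dia", ("atom", "p"))))   # True:  p holds in some world w1 can see
print(holds(model, "w1", ("box", ("atom", "p"))))   # False: p fails at w1 itself, which w1 can see
# Duality: ☐p <-> ¬♢¬p holds at every world of every model.
print(all(holds(model, w, ("box", ("atom", "p"))) ==
          holds(model, w, ("not", ("dia", ("not", ("atom", "p")))))
          for w in model[0]))                        # True

This makes it cheap to test candidate equivalences: evaluate both formulas at every world of a few small models and look for a counterexample.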
Both Wolfram Alpha and Bing now provide the ability to solve complex algebraic problems (i.e. "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (e.g. "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudo-code, not to mention it's actually functional, so that'd be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort on crunching out "pseudocode" when they could be producing something 100% functional for just slightly more effort. Anyway, thanks so much in advance.
There are two main methods of solving such equations:
Numerical methods. These basically mean that the solver keeps adjusting the value of x until the equation is satisfied. More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, using a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot of different rules. More info on computer algebra. (A toy example of both approaches is sketched below.)
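As that toy illustration (a rough sketch in Python; real solvers are far more sophisticated, and the symbolic part here just delegates to the sympy library), here are both approaches applied to x^2 - 5x + 6 = 0:

# 1. Numerical: keep adjusting x until f(x) = x**2 - 5*x + 6 is (nearly) zero.
#    Simple bisection over an interval where f changes sign.
def bisect(f, lo, hi, tol=1e-12):
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

f = lambda x: x**2 - 5*x + 6
print(bisect(f, 1.5, 2.5))               # ~2.0, the root inside that interval

# 2. Symbolic: manipulate the formula itself by rewrite rules (computer algebra).
import sympy
x = sympy.symbols("x")
print(sympy.solve(x**2 - 5*x + 6, x))    # [2, 3], the exact solutions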
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily in Mathematica). They have a whole heap of curated data and associated formulas that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) API to answer questions via W|A.