In Coq, defining an inductive proposition seems analogous to adding new inference rules/axioms to a logic. What constraints on defining an inductive proposition guarantee that Coq remains consistent?
This is a very good question, and not an easy one to answer. The "Calculus of Inductive Constructions" has been analyzed in literally hundreds of papers.
The most widely accepted justification of consistency is the equivalence of inductive data types with W-types. In this sense, every inductive type you add to the theory is just an instance of a W-type, an object that is well-founded by construction and thus not a danger to the consistency of the theory.
However, the details of Coq's implementation are a bit more complicated, mainly due to the reliance on the "guard condition" for programming convenience. Coq also provides support for impredicative inductives, and these tend to be quite complicated objects. I suggest you read a bit about this and ask more concrete questions. The main reference is "C. Paulin-Mohring. Inductive Definitions in the System Coq".
See also this wiki page
I am taking a logic and functional programming course (with programming in SML) and as part of our first assignment the following question is asked
"... You need to define an (abstract) type called 'a set
Documentation: Describe formally how finite sets will be represented as lists, stating a representational invariant property. ..."
Can anyone explain what "describe formally" means?
Check your textbook and class notes to see how "formal description" is used for other types, and then follow that example.
"Describe formally" in mathematics usually means pinning things down with mathematical expressions, using standard notation and terminology where necessary. In logic, you will generally use "implies" and "therefore" instead of more colloquial terms. Wikipedia has an article.
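For a concrete (if hypothetical) illustration, here is the idea sketched in Python rather than SML: represent a finite set as a strictly sorted list, and state the representational invariant as a checkable predicate. All names here are my own invention, not part of the assignment.

```python
# Hypothetical sketch: a finite set represented as a sorted,
# duplicate-free list. The representational invariant is the
# property every value of the representation must satisfy.

def invariant(xs):
    """Representation invariant: the list is strictly increasing."""
    return all(xs[i] < xs[i + 1] for i in range(len(xs) - 1))

def empty():
    return []

def insert(x, xs):
    """Insert x, preserving the invariant (sorted, no duplicates)."""
    out = []
    i = 0
    while i < len(xs) and xs[i] < x:
        out.append(xs[i])
        i += 1
    if i == len(xs) or xs[i] != x:
        out.append(x)
    out.extend(xs[i:])
    return out

def member(x, xs):
    return x in xs  # could be binary search, thanks to sortedness

s = insert(2, insert(3, insert(2, empty())))
print(s)             # [2, 3]
print(invariant(s))  # True
```

A formal description would then read: a value of type 'a set is a list [x1, ..., xn] with x1 < x2 < ... < xn, and every operation of the abstract type must preserve this property.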
I am stuck on Kripke semantics, and wonder if there is educational software with which I can test equivalence of statements etc., since I'm starting to think it's easier to learn by example (even if on abstract variables).
I will use
☐A to write necessarily A
♢A for possibly A
Do ☐true, ☐false, ♢true, ♢false evaluate to values? If so, what values, or what kinds of values from which set ({true, false} or perhaps {necessary, possible})?
I think I read all Kripke models use the duality axiom:
(☐A)->(¬♢¬A)
i.e. if it's necessary to pay tax then it's not allowed to not pay tax
(irrespective of whether it's necessary to pay tax...)
i.e. (2) if it's necessary to earn money then it's not allowed to not earn money
(again irrespective of whether earning money is really necessary; the logic holds, so far)
since A->B is equivalent to ¬A<-¬B, let's test:
¬☐A<-♢¬A
it's not necessary to upvote if it's allowed to not upvote
this axiom works dually:
♢A->¬☐¬A
if it's allowed to earn money then it's not necessary to not earn money
Not all modalities behave the same, and different Kripke models are more suitable for modeling one modality than another: not all Kripke models use the same axioms. (Are classical quantifiers also modalities? If so, do Kripke models allow modeling them?)
I will go through the list of common axioms and try to find examples that make them seem counterintuitive or unnecessary to postulate...
☐(A->B)->(☐A->☐B):
if (it's necessary that (earning money implies paying taxes))
then ((necessity of earning money) implies (necessity of paying taxes))
Note that earning money does not actually imply paying taxes; the falsehood of the implication A->B does not affect the truth value of the axiom...
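A quick way to test axioms like these is to evaluate them in a small, concrete Kripke model. Here is a sketch in Python (the three worlds, the accessibility relation, and the valuation are all invented for illustration):

```python
# A tiny Kripke model: worlds, an accessibility relation, and a
# valuation saying in which worlds each atom is true.
worlds = {'w1', 'w2', 'w3'}
access = {'w1': {'w2', 'w3'}, 'w2': {'w3'}, 'w3': set()}
val = {'A': {'w2', 'w3'}, 'B': {'w3'}}

def ev(f, w):
    """Evaluate a formula (encoded as a nested tuple) at world w."""
    op = f[0]
    if op == 'atom':
        return w in val[f[1]]
    if op == 'not':
        return not ev(f[1], w)
    if op == 'imp':
        return (not ev(f[1], w)) or ev(f[2], w)
    if op == 'box':  # necessarily: true in all accessible worlds
        return all(ev(f[1], v) for v in access[w])
    if op == 'dia':  # possibly: true in some accessible world
        return any(ev(f[1], v) for v in access[w])

A, B = ('atom', 'A'), ('atom', 'B')
# Duality: box A -> not dia not A
dual = ('imp', ('box', A), ('not', ('dia', ('not', A))))
# Axiom K: box(A -> B) -> (box A -> box B)
K = ('imp', ('box', ('imp', A, B)), ('imp', ('box', A), ('box', B)))
print(all(ev(dual, w) for w in worlds))  # True
print(all(ev(K, w) for w in worlds))     # True
```

Both formulas come out true at every world no matter how the model is varied, which is what makes them axioms of the basic modal logic K; an axiom such as ☐A -> A, by contrast, only holds on models whose accessibility relation has extra properties (reflexivity, in that case).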
Urgh, it's taking too long to phrase my problems in trying to understand it all... feel free to edit.
Modal logic provers and reasoners:
http://www.cs.man.ac.uk/~schmidt/tools/
http://www.cs.man.ac.uk/~sattler/reasoners.html
Engine tableau in Java:
http://www.irisa.fr/prive/fschwarz/lotrecscheme/
https://github.com/gertvv/oops/wiki
http://molle.sourceforge.net/
Modal logic calculators:
http://staff.science.uva.nl/~jaspars/lvi98/Week3/modal.html
http://www.ffst.hr/~logika/implog/doku.php?id=program:possible_worlds
http://www.personeel.unimaas.nl/roos/EpLogic/start.htm
Lectures for practical game implementations of epistemic logic:
http://www.ai.rug.nl/mas/
Very good PhD theses:
http://www.cs.man.ac.uk/~schmidt/mltp/
http://www.harrenstein.nl/Publications.dir/Harrenstein.pdf.gz
Lectures about modal logic (in action, conflict, games):
http://www.logicinaction.org/
http://www.masfoundations.org/download.html
Modal Logic for Open Minds, http://logicandgames.pbworks.com/f/mlbook-almostfinal.pdf (the final version is not free)
Video lectures about modal logic and logic in general:
http://videolectures.net/ssll09_gore_iml/
http://videolectures.net/esslli2011_benthem_logic/
http://videolectures.net/esslli2011_jaspars_logic/
http://www.youtube.com/view_play_list?p=C88812FFE0F526B0
I'm not sure whether educational software for teaching relational semantics for modal logics exists. However, I can attempt to answer some of the questions you have asked.
First, the modal operators for necessity and possibility operate on propositions, not truth values. Hence, if φ is a proposition then both ☐φ and ♢φ are propositions. Because neither true nor false is a proposition, none of ☐true, ♢true, ☐false, and ♢false is a meaningful sequence of symbols.
Second, what you refer to as the "duality axiom" is usually the expression of the interdefinability of the modal operators. It can be introduced as an axiom in an axiomatic development of modal logic or derived as a consequence of the semantics of the modal operators.
Third, the classical quantifiers are not modal operators and don't express modal concepts. In fact, modal logics are generally defined by introducing the modal operators into either propositional or predicate logics. I think your confusion arises because the semantics of modal operators appears similar to the semantics of quantifiers. For instance, the semantics of the necessity operator appears similar to the semantics of the universal quantifier:
⊧ ∀x.φ(x) ≡ φ(α) is true for all α in the domain of quantification
⊧w ☐φ ≡ φ is true in every possible world accessible from w
A similarity is seen when comparing the possibility operator with the existential quantifier. In fact, the modal operators can be defined as quantifiers over possible worlds. As far as I know, the converse isn't true.
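The analogy can be made concrete in a few lines of Python (the model below is invented for illustration): ☐ quantifies universally, and ♢ existentially, over the worlds accessible from w.

```python
# Worlds, accessibility, and the truth of one fixed proposition φ.
worlds = {0, 1, 2}
access = {0: {1, 2}, 1: {0}, 2: set()}
true_at = {0: False, 1: True, 2: True}

def box(w):
    # ⊧w ☐φ  iff  φ is true in EVERY world accessible from w (like ∀)
    return all(true_at[v] for v in access[w])

def dia(w):
    # ⊧w ♢φ  iff  φ is true in SOME world accessible from w (like ∃)
    return any(true_at[v] for v in access[w])

print([box(w) for w in sorted(worlds)])  # [True, False, True]
print([dia(w) for w in sorted(worlds)])  # [True, False, False]
```

Note world 2, which accesses nothing: ☐φ holds there vacuously while ♢φ fails, exactly as a universal quantifier over an empty domain is true and an existential one is false.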
I am studying natural deduction as a part of my Formal Specification & Verification Computer Science course at University/College.
I find it interesting, however I learn much better when I can find a practical use for things.
Could anyone explain to me if and how natural deduction is used other than for formally verifying bits of code?
Thanks!
Natural deduction isn't used that much in practical formal methods: sequent calculus is generally a better basis, because it is closer to the tableau methods used in constructing decision procedures for logics. Tableau methods are pretty central to practical applications of logic in computer science.
Natural deduction is most used in constructive type theory, which gives it some leverage in programming language design. It's considered a nice-to-know, though, rather than a must-know.
The main value of natural deduction is that it is the nicest way to learn formal inference, but this is a didactic application seen mostly in academia.
Natural deduction is very interesting and kind of cool, but it is very rarely used outside of academia. Formal proofs of correctness of programs are tedious using natural deduction, so higher-level tools are often used instead.
Could you please explain the basic connection between the fundamentals of logic programming and the phenomenon of syntactic similarity between type systems and conventional logic?
The Curry-Howard correspondence is not about logic programming, but about functional programming. The fundamental mechanic of Prolog is justified in proof theory by John Robinson's resolution technique, which shows how it is possible to check whether logical formulae expressed as Horn clauses are satisfiable, that is, whether you can find terms to substitute for their logic variables that make them true.
Thus logic programming is about specifying programs as logical formulae, and the execution of the program is some form of proof inference, in Prolog resolution, as I have said. By contrast, the Curry-Howard correspondence shows how proofs in a special formulation of logic, called natural deduction, correspond to programs in the lambda calculus, with the type of the program corresponding to the formula that the proof proves. Computation in the lambda calculus corresponds to an important phenomenon in proof theory called normalisation, which transforms proofs into new, more direct proofs. So logic programming and functional programming correspond to different levels in these logics: logic programs match formulae of a logic, whilst functional programs match proofs of formulae.
There's another difference: the logics used are generally different. Logic programming generally uses simpler logics — as I said, Prolog is founded on Horn clauses, which are a highly restricted class of formulae where implications may not be nested, and there are no disjunctions, although Prolog recovers the full strength of classical logic using the cut rule. By contrast, functional programming languages such as Haskell make heavy use of programs whose types have nested implications, and are decorated by all kinds of forms of polymorphism. They are also based on intuitionistic logic, a class of logics that forbids use of the principle of the excluded middle, which Robinson's computational mechanism is based on.
Some other points:
It is possible to base logic programming on more sophisticated logics than Horn clauses; for example, Lambda-prolog is based on intuitionistic logic, with a different computation mechanism than resolution.
Dale Miller has called the proof-theoretic paradigm behind logic programming the proof search as programming metaphor, to contrast with the proofs as programs metaphor that is another term used for the Curry-Howard correspondence.
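To make the contrast concrete, here is the "proof search as programming" side in miniature: a toy backward chainer over ground Horn clauses, sketched in Python (the predicates are invented, and real Prolog additionally performs unification of logic variables).

```python
# Horn clauses as (head, body) pairs, read "head :- body".
# Facts are clauses with an empty body.
rules = [
    ('mortal', ['human']),
    ('human', ['greek']),
    ('greek', []),
]

def prove(goal):
    """Goal-directed search: try each clause whose head matches
    the goal, and recursively prove every subgoal in its body."""
    return any(head == goal and all(prove(g) for g in body)
               for head, body in rules)

print(prove('mortal'))  # True: greek, hence human, hence mortal
print(prove('winged'))  # False: no clause concludes it
```

Running a query is literally searching for a proof of the goal from the clauses; the "program" is the set of formulae.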
Logic programming is fundamentally about goal directed searching for proofs. The structural relationship between typed languages and logic generally involves functional languages, although sometimes imperative and other languages - but not logic programming languages directly. This relationship relates proofs to programs.
So, logic programming proof search can be used to find proofs that are then interpreted as functional programs. This seems to be the most direct relationship between the two (as you asked for).
Building whole programs this way isn't practical, but it can be useful for filling in tedious details in programs, and there are some important examples of this in practice. A basic example is structural subtyping, which corresponds to filling in a few proof steps via a simple entailment proof. A much more sophisticated example is the type class system of Haskell, which involves a particular kind of goal-directed search; in the extreme this involves a Turing-complete form of logic programming at compile time.
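The proofs-as-programs side can be illustrated even in Python with type hints (a toy sketch, not Haskell's actual machinery): a term of type (A -> B) -> ((B -> C) -> (A -> C)) is a proof that implication composes, and applying it to concrete functions "uses" the proof.

```python
from typing import Callable, TypeVar

A = TypeVar('A')
B = TypeVar('B')
C = TypeVar('C')

# Proof of A -> A: the identity function.
def refl(x: A) -> A:
    return x

# Proof of (A -> B) -> ((B -> C) -> (A -> C)): given proofs of
# A -> B and B -> C, build a proof of A -> C by composing them.
def trans(f: Callable[[A], B]) -> Callable[[Callable[[B], C]], Callable[[A], C]]:
    return lambda g: lambda x: g(f(x))

# "Running" the proof on concrete witnesses:
inc = lambda n: n + 1    # a witness of int -> int
show = lambda n: str(n)  # a witness of int -> str
print(trans(inc)(show)(41))  # '42'
```

The types play the role of formulae and the function bodies the role of proofs; evaluation corresponds to proof normalisation.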
I've been contemplating programming language designs, and from the definition of Declarative Programming on Wikipedia:
This is in contrast from imperative programming, which requires a detailed description of the algorithm to be run.
and further down:
... Any style of programming that is not imperative. ...
It then goes on to express that functional languages, because they are not imperative, are declarative by their very nature.
However, this makes me wonder, are purely functional programming languages able to solve any algorithmic problem, or are the constraints based upon what functions are available in that language?
I'm mostly interested in general thoughts on the subject, although if specific examples can illustrate the point, I certainly welcome them.
According to the Church-Turing thesis,
"the three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent",
where the Turing machine can be read as "procedural" and the lambda calculus as "functional".
Yes, Haskell, Erlang, etc. are Turing complete languages. In principle, you don't need mutable state to solve a problem, since you can always create a new object instead of mutating the old one. Of course, Brainfuck is also Turing complete. In other words, just because an algorithm can be expressed in a functional language doesn't mean it's not horribly awkward.
OK, so Church and Turing proved it is possible, but how do we actually do something?
Rewriting imperative code in pure functional style is an exercise I frequently assign to undergraduate students:
Each mutable variable becomes a function parameter
Loops are rewritten using recursion
Each goto is expressed as a function call with arguments
Sometimes what comes out is a mess, but often the results are surprisingly elegant. The only real trick is not to pass arguments that never change, but instead to let-bind them in the outer environment.
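As a tiny worked instance of the recipe above (in Python for concreteness; in class it would be SML): an imperative sum with two mutable variables, and its pure rewrite.

```python
# Imperative version: two mutable variables updated in a loop.
def total_imperative(xs):
    acc = 0
    i = 0
    while i < len(xs):
        acc = acc + xs[i]
        i = i + 1
    return acc

# Functional rewrite: each mutable variable (i, acc) becomes a
# parameter, and the loop becomes a recursive call.
def total_functional(xs):
    def go(i, acc):
        if i >= len(xs):
            return acc
        return go(i + 1, acc + xs[i])
    # xs never changes, so it is bound in the outer scope
    # rather than passed on every call.
    return go(0, 0)

print(total_imperative([1, 2, 3, 4]))  # 10
print(total_functional([1, 2, 3, 4]))  # 10
```

Note the last trick from the list in action: the unchanging argument xs is captured by the inner function instead of being threaded through every recursive call.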
The big difference with functional style programming is that it avoids mutable state. Where imperative programming will typically update variables, functional programming will define new, read-only values.
The main place where this will hit performance is with algorithms that use updatable arrays. An imperative implementation can update an array element in O(1) time, while the best a purely functional style of implementation can achieve is O(log N) (using a sorted tree).
Note that functional languages generally have some way to use updateable arrays with O(1) access time (e.g., Haskell provides this with its state transformer monad). However, this is arguably an imperative programming method... nothing wrong with that; you want to use the best tools for a particular job, after all.
The functional style of O(log N) incremental array update is not all bad, though, as functional-style algorithms seem to lend themselves well to parallelization.
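Here is a sketch of that O(log N) tradeoff in Python (a toy persistent "array" built as a size-annotated binary tree; all names invented): updating returns a new version that copies one root-to-leaf path and shares the rest, leaving the old version intact.

```python
# A persistent fixed-size "array" as a binary tree of values.
# get and set_ each walk one root-to-leaf path: O(log N).

def size(t):
    return 1 if t[0] == 'leaf' else t[1]

def build(xs):
    if len(xs) == 1:
        return ('leaf', xs[0])
    mid = len(xs) // 2
    return ('node', len(xs), build(xs[:mid]), build(xs[mid:]))

def get(t, i):
    if t[0] == 'leaf':
        return t[1]
    _, n, l, r = t
    return get(l, i) if i < size(l) else get(r, i - size(l))

def set_(t, i, x):
    """Return a NEW tree; the original is untouched (persistence)."""
    if t[0] == 'leaf':
        return ('leaf', x)
    _, n, l, r = t
    if i < size(l):
        return ('node', n, set_(l, i, x), r)  # copy left path, share r
    return ('node', n, l, set_(r, i - size(l), x))

old = build([10, 20, 30, 40])
new = set_(old, 2, 99)
print(get(new, 2), get(old, 2))  # 99 30
```

Production implementations of the same idea (for instance the trees behind Clojure's vectors) use wide nodes to keep the constant factors small, but the copy-one-path, share-the-rest structure is the same.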
Too long to be posted as a comment on #SteveB's answer.
Functional programming and imperative programming have equal capability: whatever one can do, the other can do. They are said to be Turing complete. The functions that a Turing machine can compute are exactly the ones that recursive function theory and λ-calculus express.
But the Church-Turing Thesis, as such, is irrelevant. It asserts that any computation can be carried out by a Turing machine. This relates an informal idea - computation - to a formal one - the Turing machine. Nobody has yet found anything we would recognise as computation that a Turing machine can't do. Will someone find such a thing in future? Who can tell.
Using state monads you can program in an imperative style in Haskell.
So the assertion that Haskell is declarative by its very nature needs to be taken with a grain of salt. On the positive side, Haskell is then equivalent to imperative programming languages in a practical sense as well, one that doesn't completely ignore efficiency.
While I completely agree with the answer that invokes the Church-Turing thesis, this actually raises an interesting question. If I have a parallel computation problem (which is not algorithmic in a strict mathematical sense), such as a multiple producer/consumer queue or some network protocol between several machines, can it be adequately modeled by a Turing machine? It can be simulated, but if we simulate it, we lose the purpose of the parallelism in the problem (because then we could find a simpler algorithm on the Turing machine). So if we are not to lose the parallelism inherent in the problem (and thus the reason we are interested in it), can we really remove the notion of state?
I remember reading somewhere that there are problems which are provably harder when solved in a purely functional manner, but I can't seem to find the reference.
As noted above, the primary problem is array updates. While the compiler may use a mutable array under the hood in some conditions, it must be guaranteed that only one reference to the array exists in the entire program.
Not only is this a hard mathematical fact, it is also a problem in practice, if you don't use impure constructs.
On a more subjective note, stating that all Turing-complete languages are equivalent is only true in a narrow mathematical sense. Paul Graham explores the issue in Beating the Averages in the section "The Blub Paradox."
Formal results such as Turing completeness may be provably correct, but they are not necessarily useful. The travelling salesman problem may be NP-complete, and yet salesmen travel all the time. It seems they don't feel the need to follow an "optimal" path, so the theorem is mostly irrelevant to them.
NOTE: I am not trying to bash functional programming, since I really like it. It is just important to remember that it is not a panacea.