What is natural deduction used for outside of academia? - logic

I am studying natural deduction as a part of my Formal Specification & Verification Computer Science course at University/College.
I find it interesting, however I learn much better when I can find a practical use for things.
Could anyone explain to me if and how natural deduction is used other than for formally verifying bits of code?
Thanks!

Natural deduction isn't used that much in practical formal methods: sequent calculus is generally a better basis, because it is closer to the tableau methods used in constructing decision procedures for logics. Tableau methods are pretty central to practical applications of logic in computer science.
Natural deduction is most used in constructive type theory, and this gives it some leverage in programming language design. It's considered a nice-to-know, though, rather than a must-know.
The main value of natural deduction is that it is the nicest way to learn formal inference, but this is a didactic application seen mostly in academia.
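For a taste of what that training looks like, here are the two standard natural-deduction rules for implication, written out in LaTeX:

    % Implication introduction: assume A, derive B, then discharge
    % the assumption. Implication elimination is modus ponens.
    \[
      \frac{\begin{array}{c}[A]^{1}\\ \vdots\\ B\end{array}}{A \to B}\;(\to I,\ 1)
      \qquad
      \frac{A \to B \qquad A}{B}\;(\to E)
    \]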

Natural deduction is very interesting and kind of cool, but it is very rarely used outside of academia. Formal proofs of correctness of programs are tedious to write in natural deduction, so higher-level tools are generally used instead.

Related

Is it possible to convert all higher-order logic and dependent types in Lean/Isabelle/Coq into first-order logic?

More specifically, given an arbitrary Lean proof/theorem, is it possible to express it solely using first-order logic? If so, is it practical, i.e. will the generated first-order formulae not be enormously large?
I have seen https://www.cl.cam.ac.uk/~lp15/papers/Automation/translations.pdf, but since I am not an expert, I am still not sure whether all Lean's proof code can be converted.
Other mathematical proof languages are also OK.
The short answer is: yes, it is not impractically large, and this is done in practice when translating problems to SMT solvers for Sledgehammer-like tools. There is a fair amount of blowup, but it is a linear factor on the order of 2-5x. You probably lose more from not having specific support for all the built-in rules and, in the case of dependent type theory (DTT), from having to write down all the definitional-equality (defeq) proofs that are normally implicit.
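To give a flavour of the encoding (an illustrative sketch in the usual style of such translations, not the exact scheme of any particular tool): higher-order application is routed through an explicit first-order application symbol, and lambdas are removed by lifting them to fresh constants with defining axioms.

    % HOL statement:  forall f.  map f [] = []
    % FOL encoding, with application made explicit:
    \[ \forall f.\; \mathrm{app}(\mathrm{app}(\mathrm{map}, f), \mathrm{nil}) = \mathrm{nil} \]
    % A lambda such as (\x. x + 1) is lambda-lifted to a fresh
    % constant c with the defining axiom:
    \[ \forall x.\; \mathrm{app}(c, x) = x + 1 \]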

What's the relationship between Calculus and programming syntaxes?

I'm starting Calculus this semester. I've used programming (or scripting) languages before, mostly PHP and C#. I haven't done much low-level work. The only relationships I've made between the syntaxes are Anonymous functions with Y-Combinators and Arrays with Set-notation (I'm not even sure if these are correct).
I always see similarities between Calculus and programming — it's almost like numerology — so how do calculus and programming languages relate?
Subconsciously, I know there are relationships, but I don't think I know the proper terminology to describe it. Some people have referred me to "computational theory" and "Turing machines", but I haven't really looked into it yet. Can I still consider myself a programmer if I don't fully understand computational theory?
"Calculus" is a word meaning, in the context of mathematics:
Any formal system in which symbolic expressions are manipulated according to fixed rules.
So it does not follow that two concepts are in some way related just because their names both contain the word "calculus".
Lambda calculus is a formalism for modeling computation, provably equivalent to the Turing machine. The purpose of both Turing Machines and the lambda calculus (which were developed independently around the same time), is to provide a formal system in which statements about computation can be rigorously proved. This is the fundamental underpinning of theoretical computer science. It relates to programming languages because of the Church-Turing Thesis, which essentially states that any programming language capable of emulating a Turing Machine is capable of computing anything that can possibly be computed. A language satisfying this property is called Turing-complete. Nearly all modern general-purpose programming languages have this property.
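To make the lambda-calculus side of that concrete, here is a small illustrative Haskell sketch of Church numerals: arithmetic carried out with nothing but functions, just as in the pure lambda calculus.

    {-# LANGUAGE RankNTypes #-}

    -- A Church numeral n means "apply a function n times".
    type Church = forall a. (a -> a) -> a -> a

    zero, one :: Church
    zero = \_ x -> x
    one  = \f x -> f x

    suc :: Church -> Church               -- successor: one more application
    suc n = \f x -> f (n f x)

    add :: Church -> Church -> Church     -- addition: combine the applications
    add m n = \f x -> m f (n f x)

    toInt :: Church -> Int                -- decode by counting applications
    toInt n = n (+ 1) 0                   -- toInt (add one (suc one)) == 3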
Differential/Integral calculus, the kind you learned in high school, has nothing in common with lambda calculus other than the word "calculus". It has nothing to do with programming... unless you're writing a program to compute integrals or derivatives.
First-order logic (a type of predicate calculus) has some relevance in the domain of artificial intelligence and automated theorem-proving, but again this is just using computers to solve math problems, and has no relationship to the underlying theory of computation, or to the design of programming languages.
Numerology is something entirely different, but that's not the point here!
It's been a while since I took calculus, but it is, nevertheless, mathematics, and it has a lot of applications in physics and mechanical engineering.
Calculus and programming are somewhat related, for example through the computational theory you mention, which is also a branch of mathematics; but strictly speaking, calculus itself is not programming at all.
Lastly, you can use programming languages and software to solve calculus equations, but you don't need to. Calculus has been around for a lot longer than computers!

A question about logic and the Curry-Howard correspondence

Could you please explain to me the basic connection between the fundamentals of logic programming and the phenomenon of syntactic similarity between type systems and conventional logic?
The Curry-Howard correspondence is not about logic programming, but about functional programming. The fundamental mechanic of Prolog is justified in proof theory by John Robinson's resolution technique, which shows how it is possible to check whether logical formulae expressed as Horn clauses are satisfiable, that is, whether you can find terms to substitute for their logic variables that make them true.
Thus logic programming is about specifying programs as logical formulae, and the execution of the program is some form of proof inference, in Prolog's case resolution, as I have said. By contrast, the Curry-Howard correspondence shows how proofs in a special formulation of logic, called natural deduction, correspond to programs in the lambda calculus, with the type of the program corresponding to the formula that the proof proves; computation in the lambda calculus corresponds to an important phenomenon in proof theory called normalisation, which transforms proofs into new, more direct proofs. So logic programming and functional programming correspond to different levels in these logics: logic programs match formulae of a logic, whilst functional programs match proofs of formulae.
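A minimal Haskell sketch of that correspondence, reading each type as a proposition and each well-typed term as its natural-deduction proof (abstraction is implication introduction, application is implication elimination):

    -- Proof of A -> (B -> A): two abstractions, i.e. two uses of ->I.
    k :: a -> (b -> a)
    k x _ = x

    -- Modus ponens (->E) is just function application.
    mp :: (a -> b) -> a -> b
    mp f x = f x

    -- Proof of (A -> B) -> ((B -> C) -> (A -> C)): hypothetical syllogism.
    comp :: (a -> b) -> (b -> c) -> (a -> c)
    comp f g = g . f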
There's another difference: the logics used are generally different. Logic programming generally uses simpler logics — as I said, Prolog is founded on Horn clauses, which are a highly restricted class of formulae where implications may not be nested, and there are no disjunctions, although Prolog recovers the full strength of classical logic using the cut rule. By contrast, functional programming languages such as Haskell make heavy use of programs whose types have nested implications, and are decorated by all kinds of forms of polymorphism. They are also based on intuitionistic logic, a class of logics that forbids use of the principle of the excluded middle, which Robinson's computational mechanism is based on.
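For reference, a Horn clause is a formula of the shape

    \[ A_1 \land A_2 \land \cdots \land A_n \to B \]
    % which Prolog writes as:  B :- A1, A2, ..., An.

with atomic A's and B: no nested implications and no disjunctions.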
Some other points:
It is possible to base logic programming on more sophisticated logics than Horn clauses; for example, Lambda-prolog is based on intuitionistic logic, with a different computation mechanism than resolution.
Dale Miller has called the proof-theoretic paradigm behind logic programming the proof search as programming metaphor, to contrast with the proofs as programs metaphor that is another term used for the Curry-Howard correspondence.
Logic programming is fundamentally about goal-directed searching for proofs. The structural relationship between typed languages and logic generally involves functional languages, although sometimes imperative and other languages - but not logic programming languages directly. This relationship relates proofs to programs.
So, logic programming proof search can be used to find proofs that are then interpreted as functional programs. This seems to be the most direct relationship between the two (as you asked for).
Building whole programs this way isn't practical, but it can be useful for filling in tedious details in programs, and there are some important examples of this in practice. A basic example is structural subtyping - which corresponds to filling in a few proof steps via a simple entailment proof. A much more sophisticated example is the type class system of Haskell, which involves a particular kind of goal-directed search - in the extreme this involves a Turing-complete form of logic programming at compile time.
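A small Haskell illustration of that last point (the Pair type is made up for the example): each instance declaration reads as a Horn clause, and instance resolution is a goal-directed, backward-chaining search over those clauses at compile time.

    -- Read the instance below as the Horn clause:
    --   Show (Pair a b)  :-  Show a, Show b.
    data Pair a b = Pair a b

    instance (Show a, Show b) => Show (Pair a b) where
      show (Pair x y) = "(" ++ show x ++ ", " ++ show y ++ ")"

    -- print (Pair (Pair 1 2) True) sets the compile-time goal
    -- Show (Pair (Pair Integer Integer) Bool), which the compiler
    -- proves by chaining through instance heads, like a tiny Prolog query.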

Can any algorithmic puzzle be implemented in a purely functional way?

I've been contemplating programming language designs, and from the definition of Declarative Programming on Wikipedia:
This is in contrast from imperative programming, which requires a detailed description of the algorithm to be run.
and further down:
... Any style of programming that is not imperative. ...
It then goes on to say that functional languages, because they are not imperative, are declarative by their very nature.
However, this makes me wonder, are purely functional programming languages able to solve any algorithmic problem, or are the constraints based upon what functions are available in that language?
I'm mostly interested in general thoughts on the subject, although if specific examples can illustrate the point, I certainly welcome them.
According to the Church-Turing Thesis,
"the three computational processes (recursion, λ-calculus, and Turing machine) were shown to be equivalent"
where Turing machine can be read as "procedural" and lambda calculus as "functional".
Yes, Haskell, Erlang, etc. are Turing complete languages. In principle, you don't need mutable state to solve a problem, since you can always create a new object instead of mutating the old one. Of course, Brainfuck is also Turing complete. In other words, just because an algorithm can be expressed in a functional language doesn't mean it's not horribly awkward.
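For instance, here is the "create a new object instead of mutating" point in Haskell, with a made-up record type:

    data Point = Point { px :: Int, py :: Int } deriving Show

    -- No mutation: build a new Point; the old one is untouched
    -- and still usable.
    moveRight :: Point -> Point
    moveRight p = p { px = px p + 1 }

    -- moveRight (Point 0 0)  ==>  Point {px = 1, py = 0}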
OK, so Church and Turing proved that it is possible, but how do we actually do it?
Rewriting imperative code in pure functional style is an exercise I frequently assign to undergraduate students:
Each mutable variable becomes a function parameter
Loops are rewritten using recursion
Each goto is expressed as a function call with arguments
Sometimes what comes out is a mess, but often the results are surprisingly elegant. The only real trick is not to pass arguments that never change, but instead to let-bind them in the outer environment.
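Here is a hypothetical example of the recipe in Haskell: an imperative loop computing x^n, where the mutable variables become parameters of a recursive helper, the loop becomes a tail call, and the values that never change (x and n) are simply captured from the enclosing scope rather than passed along.

    -- Imperative original (pseudocode):
    --   acc = 1; i = 0
    --   while i < n: acc = acc * x; i = i + 1
    --   return acc
    power :: Integer -> Int -> Integer
    power x n = go 1 0
      where
        go acc i                    -- acc and i were the mutable variables
          | i < n     = go (acc * x) (i + 1)   -- loop body, as a tail call
          | otherwise = acc                    -- loop exit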
The big difference with functional style programming is that it avoids mutable state. Where imperative programming will typically update variables, functional programming will define new, read-only values.
The main place where this will hit performance is with algorithms that use updatable arrays. An imperative implementation can update an array element in O(1) time, while the best a purely functional style of implementation can achieve is O(log N) (using a sorted tree).
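A sketch of the purely functional alternative, using a persistent tree-based map (Data.IntMap) as a stand-in for the array:

    import qualified Data.IntMap.Strict as IntMap

    -- An O(log n) "array write" that leaves the old version intact;
    -- the two versions share most of their structure.
    example :: (Maybe Int, Maybe Int)
    example =
      let v0 = IntMap.fromList (zip [0 ..] [10, 20, 30])
          v1 = IntMap.insert 1 99 v0                -- "write" index 1
      in (IntMap.lookup 1 v0, IntMap.lookup 1 v1)   -- (Just 20, Just 99)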
Note that functional languages generally have some way to use updatable arrays with O(1) access time (e.g., Haskell provides this with its state transformer (ST) monad). However, this is arguably an imperative programming method... nothing wrong with that; you want to use the best tools for a particular job, after all.
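For example, a sketch using Haskell's ST monad with mutable unboxed arrays, which gives O(1) in-place updates behind a pure interface:

    import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
    import Data.Array.Unboxed (UArray)

    -- Count occurrences of values in [0, n) with in-place updates;
    -- runSTUArray guarantees the mutation cannot escape.
    histogram :: Int -> [Int] -> UArray Int Int
    histogram n xs = runSTUArray (do
      arr <- newArray (0, n - 1) 0
      mapM_ (\x -> readArray arr x >>= writeArray arr x . (+ 1)) xs
      return arr)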
The functional style of O(log N) incremental array update is not all bad, though, as functional-style algorithms seem to lend themselves well to parallelisation.
Too long to be posted as a comment on #SteveB's answer.
Functional programming and imperative programming have equal capability: whatever one can do, the other can do. They are said to be Turing complete. The functions that a Turing machine can compute are exactly the ones that recursive function theory and λ-calculus express.
But the Church-Turing Thesis, as such, is irrelevant. It asserts that any computation can be carried out by a Turing machine. This relates an informal idea - computation - to a formal one - the Turing machine. Nobody has yet found anything we would recognise as computation that a Turing machine can't do. Will someone find such a thing in future? Who can tell.
Using state monads you can program in an imperative style in Haskell.
So the assertion that Haskell is declarative by its very nature needs to be taken with a grain of salt. On the positive side, this makes it equivalent to imperative programming languages in a practical sense as well, one that doesn't completely ignore efficiency.
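A small sketch of that imperative style, assuming the common mtl-style Control.Monad.State:

    import Control.Monad.State

    -- An imperative-looking counter: get/put read and "write" a piece
    -- of state that is really just threaded through purely.
    tick :: State Int Int
    tick = do
      n <- get
      put (n + 1)
      return n

    -- runState (tick >> tick >> tick) 0  ==>  (2, 3)
    -- (the last tick returns 2, and the final state is 3)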
While I completely agree with the answer that invokes the Church-Turing thesis, this raises an interesting question. If I have a parallel computation problem (which is not algorithmic in a strict mathematical sense), such as a multiple producer/consumer queue or a network protocol between several machines, can it be adequately modeled by a Turing machine? It can be simulated, but if we simulate it, we lose the very reason the problem has parallelism in it (because then we could find a simpler algorithm on the Turing machine). And if we are not willing to lose the parallelism inherent in the problem (and thus the reason we are interested in it), can we really remove the notion of state?
I remember reading somewhere that there are problems which are provably harder when solved in a purely functional manner, but I can't seem to find the reference.
As noted above, the primary problem is array updates. While the compiler may use a mutable array under the hood in some cases, it must be guaranteed that only one reference to the array exists in the entire program.
Not only is this a hard mathematical fact, it is also a problem in practice, if you don't use impure constructs.
On a more subjective note, stating that all Turing complete languages are equivalent is only true in a narrow mathematical sense. Paul Graham explores the issue in Beating the Averages in the section "The Blub Paradox."
Formal results such as Turing-completeness may be provably correct, but they are not necessarily useful. The travelling salesman problem may be NP-complete, and yet salesmen travel all the time. It seems they don't feel the need to follow an "optimal" path, so the theorem is irrelevant.
NOTE: I am not trying to bash functional programming, since I really like it. It is just important to remember that it is not a panacea.

Algebraic logic

Both Wolfram Alpha and Bing are now providing the ability to solve complex, algebraic logic problems (i.e. "solve for x, given this equation"), and not just evaluate simple arithmetic expressions (e.g. "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudo-code, not to mention it's actually functional, so that'd be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort on crunching out "pseudocode" when they could be getting something 100% functional for just slightly more effort. Anyways, thanks so much in advance.
There are two main methods to solve such equations (a sketch of each follows the list):
Numerical methods. These mean, basically, that the solver iteratively adjusts the value of x until the equation is satisfied. More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, by a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot more rules. More info on computer algebra.
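Hedged sketches of both approaches, in Haskell rather than bash for brevity (all names here are invented for the illustration): Newton's method for the numeric route, and a toy rewrite-based solver for linear equations for the symbolic route.

    -- Numeric: Newton's method for f(x) = 0, with the derivative
    -- approximated by a central finite difference.
    newton :: (Double -> Double) -> Double -> Double
    newton f = go (100 :: Int)                  -- iteration budget
      where
        h = 1e-6
        go 0 x = x
        go k x
          | abs fx < 1e-12 = x                  -- close enough: stop
          | otherwise      = go (k - 1) (x - fx / f' x)
          where
            fx   = f x
            f' y = (f (y + h) - f (y - h)) / (2 * h)

    -- newton (\x -> x * x - 2) 1  ==>  1.4142135623730951

    -- Symbolic: reduce an expression tree to the linear form a*x + b
    -- by fixed rewrite rules, then solve exactly.
    data Expr = Const Double | X | Add Expr Expr | Mul Expr Expr

    linear :: Expr -> (Double, Double)          -- (a, b) meaning a*x + b
    linear (Const c) = (0, c)
    linear X         = (1, 0)
    linear (Add l r) = let (a1, b1) = linear l
                           (a2, b2) = linear r
                       in (a1 + a2, b1 + b2)
    linear (Mul l r) = case (linear l, linear r) of
      ((0, c), (a, b)) -> (c * a, c * b)        -- constant * linear
      ((a, b), (0, c)) -> (c * a, c * b)        -- linear * constant
      _                -> error "nonlinear"

    solveForX :: Expr -> Expr -> Double         -- solve lhs = rhs for x
    solveForX lhs rhs =
      let (a1, b1) = linear lhs
          (a2, b2) = linear rhs
      in (b2 - b1) / (a1 - a2)

    -- 2x + 3 = 11:
    -- solveForX (Add (Mul (Const 2) X) (Const 3)) (Const 11)  ==>  4.0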
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formulae that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) W|A API to answer such questions.
