Pseudocode: formal rules

Has anyone set out a proposal for a formal pseudocode standard?
Or is it better for pseudocode to remain a 'rough' standard that relies on the reader's understanding?

It is better to be a rough standard; the intent of pseudocode is to be human-readable, not machine-readable, and the goal of actually writing pseudocode is to convey a higher-level description of an algorithm while being unconcerned (typically) with the minutiae of the implementation. My opinion is that for it to qualify as pseudocode there has to be some ambiguity, and your goal should be a clear conveyance of your algorithmic intentions. Stick to common control structures, declarations and concepts that are paradigmatic to your target audience or language and you'll get the point across. If you start getting too formal, you're getting too close to writing actual code.

Not an answer as such, but IMHO forcing a standard (a pseudocode syntax, if you will) would cause people to be less clear about what they want to say.
Browse around, try to gather some knowledge about the conventions in use, and do your best to be clear.

Although this is by no means a formal proposal, Python is considered by some to be Executable Pseudocode.
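To see why, compare a toy example (my own illustration, not from any proposal): a binary search written in plain Python reads almost exactly like the pseudocode in a textbook.

    # Binary search over a sorted list -- reads much like textbook pseudocode.
    def binary_search(items, target):
        low, high = 0, len(items) - 1
        while low <= high:
            mid = (low + high) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return -1  # not found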

In my opinion it depends on who will be reading it. In algorithms textbooks the pseudocode is very formal and close to mathematics, but it is also explained in a few accompanying paragraphs; that suits scientific settings.
In other environments I would prefer a less formal style, because that is easier for most people to understand. I prefer spoken words over formalism; if you want formalism, you can read the actual code instead.

Is there any algorithm that needs a functional language exclusively to be implemented?

I'm a C# developer and I don't know much about functional languages. My question: is there any algorithm that can only be implemented in a functional language?
Regards.
As long as a language is Turing complete, any algorithm can be implemented in it (by definition of "algorithm"). But as others have said, functional languages can do certain things more elegantly. (Just take a look at Haskell. What a lovely language.) I'd also argue that there is a class of problems that OOP languages do better. (In my opinion, GUIs, although some may disagree.)
No, however a functional language may lead to a more elegant implementation for an algorithm that can exploit the features of such a language. For example, one that requires large recursive depth.
As I understand it, any such algorithm would have to be translated into a set of machine instructions executed on some microprocessor (whether you use a compiled or an interpreted language), and none of the current processors are 'functional'.
In fact, this leads to an even broader assertion: any 'functional algorithm' can be implemented in C or assembler :)
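As a toy illustration of that assertion (my own sketch): a 'functional' recursive computation and its hand-translation into an imperative loop, which is roughly the transformation a compiler for a functional language performs anyway.

    # 'Functional' style: recursion, no mutation.
    def sum_squares_rec(xs):
        if not xs:
            return 0
        return xs[0] * xs[0] + sum_squares_rec(xs[1:])

    # The same algorithm hand-translated into an imperative loop.
    def sum_squares_loop(xs):
        total = 0
        for x in xs:
            total += x * x
        return total

    assert sum_squares_rec([1, 2, 3]) == sum_squares_loop([1, 2, 3]) == 14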

What is more interesting or powerful: Curry, Mercury or Lambda-Prolog?

I would like to ask which formal system would be the most interesting to implement from scratch or reverse engineer.
I've looked through some existing open-source projects for logic/declarative programming systems, and I've decided to build something similar in my free time, or at least to grasp the general idea of the implementation.
Ideally, the system would capture most of the expressive power and conciseness of modern academic work on logic and its relation to computational models.
What would you recommend studying, at least at the conceptual level? For example, Lambda-Prolog is interesting particularly because it allows higher-order relations, but AFAIK it is based on intuitionistic logic and therefore lacks the law of the excluded middle; that is generally a disadvantage for me.
I would also welcome any suggestions about modern logical programming systems which are less popular but more expressive/powerful.
Prolog was the first language that changed my view of programming, but later I found it not as high-level as I would like.
Curry: I've tried only the Münster Curry compiler and found it somewhat inconvenient. Actually, at that point I decided to stop ignoring Haskell.
Mercury has many things I wanted to see in Prolog. I have high hopes for its ability to declare the modes of rules; those declarations should let the compiler perform a lot of optimizations (I guess).
Twelf.
It generalizes Lambda-Prolog significantly; it's a logical framework and a metalogical framework as well as a logic programming language. If you need a language with a heavy focus on logic as well as computation, it's the best I know of.
If I were to extend a logic-based system, I'd choose Prolog Cafe, as it is small, open source, standards compliant, and easily integrated into Java-based systems.
For the final project in a programming languages course I took, we had to embed a Prolog evaluator in Scheme using continuations and macros. The end result was that you could freely mix Scheme and Prolog code, and even pass arbitrary predicates written in Scheme to the Prolog engine.
It was a very instructive exercise. The first 12 lines of code (implementing and and or) literally took about six hours to write and get correct. That part was essentially the search logic, written very concisely using continuations. The rest followed more easily, and once I added the unification algorithm, it all just worked.
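For anyone curious what that last step involves, here is a minimal sketch of first-order unification (my own illustration, in Python rather than Scheme, and omitting the occurs check and other refinements a real engine needs):

    # Variables are strings starting with '?'; compound terms are tuples
    # like ('f', '?x', ('g', 'a')). Returns a substitution dict or None.
    def unify(a, b, subst=None):
        if subst is None:
            subst = {}
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            return subst
        if is_var(a):
            return {**subst, a: b}
        if is_var(b):
            return {**subst, b: a}
        if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
            for x, y in zip(a, b):
                subst = unify(x, y, subst)
                if subst is None:
                    return None
            return subst
        return None

    def is_var(t):
        return isinstance(t, str) and t.startswith('?')

    def walk(t, subst):
        # Follow variable bindings to their current value.
        while is_var(t) and t in subst:
            t = subst[t]
        return t

    print(unify(('f', '?x', 'b'), ('f', 'a', '?y')))  # {'?x': 'a', '?y': 'b'}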

Formally verifying the correctness of an algorithm

First of all, is this only possible on algorithms which have no side effects?
Secondly, where could I learn about this process, any good books, articles, etc?
Coq is a proof assistant that can extract verified OCaml code. It's pretty complicated, though. I never got around to looking at it, but my coworker started and then stopped using it after two months. It was mostly because he wanted to get things done quicker, but if you need to verify an algorithm it might be a good choice.
Here is a course that uses Coq and talks about proving algorithms.
And here is a tutorial about writing academic papers in Coq.
It's generally a lot easier to verify/prove correctness when no side effects are involved, but it's not an absolute requirement.
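To make that concrete (a minimal sketch; the function and its contract are my own example): for a pure function the specification is just a relation between input and output, which you can at least state as executable assertions. Once a function mutates shared state, the specification must also describe every piece of state it may touch.

    import math

    # A pure function: the contract mentions only the argument and the result.
    def integer_sqrt(n):
        # Precondition: n is a non-negative integer.
        assert isinstance(n, int) and n >= 0
        r = math.isqrt(n)
        # Postcondition: r is the integer square root of n.
        assert r * r <= n < (r + 1) * (r + 1)
        return r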
You might want to look at some of the documentation for a formal specification language like Z. A formal specification isn't a proof itself, but is often the basis for one.
I think that verifying the correctness of an algorithm means validating its conformance with a specification. There is a branch of theoretical computer science called Formal Methods which may be what you are looking for if you need to get as close to proof as you can. From Wikipedia:
"Formal methods are a particular kind of mathematically-based techniques for the specification, development and verification of software and hardware systems."
You will be able to find many learning resources and tools from the multitude of links on the linked Wikipedia page and from the Formal Methods wiki.
Usually proofs of correctness are very specific to the algorithm at hand.
However, there are several well-known techniques that are used again and again. For example, with iterative algorithms you can use loop invariants, and with recursive algorithms, structural induction.
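As a concrete sketch (my own example, not from any particular textbook), here is insertion sort with its loop invariant written out as a comment and double-checked with an assertion:

    def insertion_sort(a):
        for i in range(1, len(a)):
            # Loop invariant: a[0..i-1] is sorted and contains the same
            # elements it held when the loop started.
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
            assert all(a[k] <= a[k + 1] for k in range(i))  # invariant re-established
        return a

    assert insertion_sort([3, 1, 2]) == [1, 2, 3]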
Another common trick is reducing the original problem to a problem for which your algorithm's proof of correctness is easier to show, then proving that a solution to the easier problem can be translated into a solution to the original one. Here is a description.
If you have a particular algorithm in mind, you may do better in asking how to construct a proof for that algorithm rather than a general answer.
Buy this book: http://www.amazon.com/Science-Programming-Monographs-Computer/dp/0387964800
The Gries book, The Science of Programming, is great stuff. Patient, thorough, complete.
Logic in Computer Science, by Huth and Ryan, gives a reasonably readable overview of modern systems for verifying systems. Once upon a time people talked about proving programs correct - with programming languages which may or may not have side effects. The impression I get from this book and elsewhere is that real applications are different - for instance proving that a protocol is correct, or that a chip's floating point unit can divide correctly, or that a lock-free routine for manipulating linked lists is correct.
ACM Computing Surveys Vol 41 Issue 4 (October 2009) is a special issue on software verification. It looks like you can get to at least one of the papers without an ACM account by searching for "Formal Methods: Practice and Experience".
The tool Frama-C, for which Elazar suggests a demo video in the comments, gives you a specification language, ACSL, for writing function contracts, plus various analyzers for verifying that a C function satisfies its contract and safety properties such as the absence of run-time errors.
An extended tutorial, ACSL by Example, shows actual C algorithms being specified and verified, and separates the side-effect-free functions from the effectful ones (the side-effect-free ones are considered easier and come first in the tutorial). The document is also interesting in that it was not written by the designers of the tools it describes, so it gives a fresher and more didactic look at these techniques.
If you are familiar with LISP then you should definitely check out ACL2: http://www.cs.utexas.edu/~moore/acl2/acl2-doc.html
Dijkstra's A Discipline of Programming and his EWDs lay the foundation for formal verification as a science in programming. A simpler work is Wirth's Systematic Programming, which begins with a simple approach to using verification. Wirth uses pre-ISO Pascal for the language; Dijkstra uses an Algol-68-like formalism called Guarded Command Language (GCL). Formal verification has matured since Dijkstra and Hoare, but these older texts may still be a good starting point.
PVS, a tool developed at SRI, is a specification and verification system. I worked with it and found it very useful for theorem proving.
Regarding the first question: you will probably have to create a model of the algorithm that 'captures' its side effects in program variables intended to model that state.

Algebraic logic

Both Wolfram Alpha and Bing now provide the ability to solve complex algebraic problems (i.e. "solve for x, given this equation"), not just evaluate simple arithmetic expressions (e.g. "what's 5+5?"). How is this done?
I can read most types of code that might get thrown at me, so it doesn't really make a difference what you use to explain and represent the algorithm. I find that bash makes really good pseudocode, not to mention it's actually functional, so that'd be ideal. Also, I'm fairly familiar with its ins and outs. Sorry to go ranting on a tangent, but it really irritates me to see people spend effort on crunching out "pseudocode" when they could be getting something 100% functional for just slightly more effort. Anyway, thanks so much in advance.
There are two main methods for solving such problems:
Numerical methods. The solver iteratively adjusts the value of x until the equation is satisfied. More info on numerical methods.
Symbolic math. The solver manipulates the equation as a string of symbols, by a number of formal rules. It's not that different from the algebra we learn in school; the solver just knows a lot more rules. More info on computer algebra.
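A rough sketch of the difference in Python (SymPy for the symbolic side; both the library choice and the sample equation are my own illustration):

    import sympy

    # Symbolic: manipulate the equation by formal rules, get exact roots.
    x = sympy.symbols('x')
    print(sympy.solve(x**2 - 5*x + 6, x))   # [2, 3]

    # Numeric: start from a guess and iteratively refine it (Newton's method).
    def newton(f, df, guess, steps=20):
        for _ in range(steps):
            guess = guess - f(guess) / df(guess)
        return guess

    print(newton(lambda v: v*v - 5*v + 6, lambda v: 2*v - 5, guess=0.0))  # ~2.0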
Wolfram|Alpha (W|A) is based on the Mathematica kernel, combined with a natural language parser (which is also built primarily with Mathematica). They have a whole heap of curated data and associated formula that can be used once the question has been interpreted.
There's a blog post describing some of this which came out at the same time as W|A.
Finally, Bing simply uses the (non-free) W|A API to answer these questions.

Choice of programming language in book on algorithms? [closed]

Following up on my previous question on the enduring properties of a book on algorithms (see here), I would now like to ask the community what language you would use to write the examples for such a reference book.
I will probably not use MMIX (!) to write the examples of the book, but at the same time, I think just pseudo-code would be less interesting than examples in a real language.
Still, I'd like the book to be a resource for researchers as well. What would be the community's choice? Why?
Answer: I knew this was a tough question and that there would be several different answers. Notice that answers cover the whole range from Assembly/MMIX(!!) to Python and pseudo-code. The votes and the arguments compel me to choose Uri's sensible answer, with one caveat: my pseudo-code will be as close to C as I possibly can (without going into platform specific issues, of course), and I will possibly discuss better implementations in side notes (As all of us know, mathematically proving the algorithm works is far, far from the problems of implementing it).
The book is on algorithms in a particular domain, not on the mathematics of algorithms in general (much smarter people have done and will do much better than me on general algorithms). As such, one thing I consider would add value to such a book is the repository of the algorithms, which I will definitely put online in a companion website (maybe in a couple of languages, if I find the time).
Thanks for all the answers. I sometimes feel I should put everybody who answers as co-authors. :)
A good book on algorithms should be written in pseudo-code à la CLR.
In my experience, most books that go into language-specific examples end up looking more like undergraduate textbooks than serious reference or learning material. In addition, most languages are fairly clunky when dealing with collections (especially C++ and Java, even with generics). Between all the details, too much is lost. You're also immediately eliminating a lot of your potential audience.
The only advantage of language-specific books is that if you were writing a textbook, the publisher could attach a CD and add $50 to the MSRP.
It's easier for me to understand an algorithm from (readable) pseudocode. If I can't figure out how to implement it in my language with my own collections, I'm in trouble anyway.
You could add to every pseudocode listing a note about implementation details for specific languages (e.g., use a TreeSet in Java for best performance, etc.)
You could also maintain a separate website for the book (good idea anyway) where you'll have actual implementations in different languages. No need to kill trees with long printouts.
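For instance (my own sketch; the algorithm and the notes are invented for illustration), a listing in executable pseudocode, here Python, with that kind of per-language note attached:

    import heapq

    # Keep the k smallest elements seen so far in a bounded max-heap.
    def k_smallest(stream, k):
        heap = []  # max-heap simulated by negating values
        for x in stream:
            heapq.heappush(heap, -x)
            if len(heap) > k:
                heapq.heappop(heap)  # evict the largest of the k+1
        return sorted(-v for v in heap)

    assert k_smallest([5, 1, 4, 2, 3], 3) == [1, 2, 3]

    # Implementation note: in Java, a TreeSet or PriorityQueue plays the
    # role of the heap; in C++, std::priority_queue.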
Use a real programming language, never a pseudo one. Readers are very suspicious of pseudo code; readers like real programming languages. The trap with pseudo languages is that you can define code concepts that the reader cannot implement in their language of choice.
A real programming language has a number of advantages:
1) you can test your code, hopefully proving your code correct!
2) you can export that code into a published format for insertion in your book, ensuring that anybody following your code would be looking at actual executable code.
3) you would not have to defend your pseudo code.
The choice of language is obviously subjective, but I think almost any modern language could be used. I'd recommend one with the least overhead in terms of quick understanding, and preferably one for which the reader can easily get a compiler/interpreter. If you'd like to use C, then perhaps you should check out D, an improved C. Ruby, for example, is of this ilk if you keep your code simple; Java is not (too many support libraries required); in an earlier time Pascal would have been a candidate.
BTW: I don't use Ruby now, as I currently use Smalltalk and REBOL, but I would not use either of those languages in a book. Your book would go straight to the remainder bin!
I would avoid anything that abstracts the core 'mechanics' of any particular algorithm.
There is a tremendous benefit in Knuth's rendering of the algorithms in an assembly level language. It forces the reader to carefully consider exactly what is going on in the silicon when we code algorithms in some higher level language. Especially for systems programmers, this kind of understanding can't be gotten any other way.
Knuth's new MMIX is ideal: consider it an assembly level pseudo-code.
My ideal textbook would have algorithms written in pseudocode and MMIX, so that we can see the algorithm in both its pretty and its gory-complex forms. Pseudocode should be preferred to "real languages" because it sidesteps pointless "you should have used this language, not that language" battles. At this stage, pseudocode needs no defending: the best extant algorithms textbooks use either pseudocode or, in Knuth's case, a kind of assembly pseudocode.
The choice is not going to be able to please everyone.
Robert Sedgewick has written his "Algorithms in..." books in multiple languages. I had the C version for a course and bought the C++ version when I started working with C++ at work.
You can't escape language features (even pseudo language features).
To try to please as many people as possible you could choose two languages, say one functional and one not. It could help illustrate motivations in algorithm choice.
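For instance (my own sketch, staying within one language for brevity), quicksort looks quite different depending on which paradigm drives the presentation:

    # 'Functional' presentation: short and close to the recurrence,
    # but allocates new lists at every step.
    def qsort_functional(xs):
        if len(xs) <= 1:
            return xs
        pivot, rest = xs[0], xs[1:]
        return (qsort_functional([x for x in rest if x < pivot])
                + [pivot]
                + qsort_functional([x for x in rest if x >= pivot]))

    # Imperative presentation: sorts in place and exposes the index
    # bookkeeping (Lomuto partition scheme).
    def qsort_inplace(a, lo=0, hi=None):
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return a
        pivot, i = a[hi], lo
        for j in range(lo, hi):
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]
        qsort_inplace(a, lo, i - 1)
        qsort_inplace(a, i + 1, hi)
        return a

    assert qsort_functional([3, 1, 2]) == qsort_inplace([3, 1, 2]) == [1, 2, 3]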
C style is often used because many languages use a very similar style so most programmers understand it without explanation. Further, examples can be run on any machine with a C compiler - which is nearly every machine.
However, higher level concepts often require the use of more recent technologies and techniques - OO, functional programming, etc.
These are often expressed in the language that has the required features. Java, C#, Erlang, Ada, etc - most good programmers will grasp what is going on with just a little explanation.
But C is very nearly a universal foundation - you really can't go wrong if you adopt a C style for examples.
-Adam
I would not use any specific language. Use a pseudo-language that will be clear to most anyone who has done a little programming. Usually these books use something close to the C style, but that is not a rule. I know that you mention that you do not want to use pseudo code, but that will allow you to reach a broader audience.
I would use something that lets you express exactly the idea behind the algorithm.
Haskell is quite neat, but I think that with algorithms that work with state, it can get in your way, and you would be more occupied with the language than with the algorithm.
I wouldn't use C or its descendants (C++, C#, Java ...) because they will get in your way when your algorithms are more "functional" in nature. Again, you would be more occupied with the language than with the algorithm. I would feel very uncomfortable if I had to work without higher order functions.
So, basically, I would use a multi-paradigm language that you are comfortable with, and with which you feel confident that you can express the algorithms without diving into language specifics.
My personal choice would be something like Common Lisp, but perhaps Python or Scala is workable, too.
Python's a good choice all around. It's very readable, even if you haven't programmed in it before. Plus, it's a lot less verbose than some other common language choices.

Resources