Basic defensive programming [duplicate] - defensive-programming

Possible Duplicate: Favorite (Clever) Defensive Programming Best Practices
I am always advised by some programmers to pay attention to easy debugging. What is defensive programming, and to what extent should it be considered while practicing?
And one more important question: are there any key things to consider while coding, and what are they?

Have a look at:
* Defensive programming
* Case Study – Defensive Programming
* The art of defensive programming
Defensive programming is the idea that the developer makes as few assumptions as absolutely necessary. In addition, the developer preemptively creates code that anticipates not only potential problems but also specification changes.

As a rule of thumb -- if you catch yourself thinking "this will always be true", write ASSERT(condition) in that place. That is probably the core of what defensive programming should be ;).
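A minimal C++ sketch of that rule of thumb (the function and the "always positive" assumption are made up purely for illustration):

```cpp
#include <cassert>

// "bucket_count will always be positive here" -- instead of leaving that
// assumption implicit, state it, so a debug build fails loudly the first
// time it turns out not to be true.
int average_per_bucket(int total, int bucket_count)
{
    assert(bucket_count > 0 && "caller guarantees at least one bucket");
    return total / bucket_count;
}
```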

If defensive programming meant only one thing, it would be this: use assert extensively.
Here is a good article about when and where to use assert.
There are many situations where it is good to use assertions. This section covers some of them:
* Internal Invariants
* Control-Flow Invariants
* Preconditions, Postconditions, and Class Invariants
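To make those three categories a bit more concrete, here is a small C++ sketch (the types and functions are invented purely for illustration):

```cpp
#include <cassert>

enum class Suit { Clubs, Diamonds, Hearts, Spades };

// Control-flow invariant: the code after the switch should be unreachable.
const char* suit_name(Suit s)
{
    switch (s) {
        case Suit::Clubs:    return "clubs";
        case Suit::Diamonds: return "diamonds";
        case Suit::Hearts:   return "hearts";
        case Suit::Spades:   return "spades";
    }
    assert(false && "unhandled suit");  // control-flow invariant
    return "";
}

// Precondition and postcondition on a simple operation.
int withdraw(int balance, int amount)
{
    assert(amount >= 0 && amount <= balance);  // precondition
    int remaining = balance - amount;
    assert(remaining >= 0);                    // postcondition / internal invariant
    return remaining;
}
```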

http://en.wikipedia.org/wiki/Defensive_programming
Defensive programming means that you check whether a file exists and whether you have permission to open it, instead of just trying to open it and catching any resulting exceptions.
(Just an example)
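In C++ terms (assuming C++17 for std::filesystem), that difference might look roughly like this -- a sketch, not a recommendation of one style over the other:

```cpp
#include <filesystem>
#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

// Defensive variant: validate the preconditions up front instead of
// letting the failure surface later as an exception or a bad stream.
bool read_config(const std::string& path, std::string& out)
{
    namespace fs = std::filesystem;
    if (!fs::exists(path) || !fs::is_regular_file(path)) {
        std::cerr << "config file missing: " << path << '\n';
        return false;
    }
    std::ifstream in(path);
    if (!in) {  // also covers permission problems and other open failures
        std::cerr << "config file could not be opened: " << path << '\n';
        return false;
    }
    out.assign(std::istreambuf_iterator<char>(in),
               std::istreambuf_iterator<char>());
    return true;
}
```

Note that the up-front existence check can still race with another process deleting the file; the stream check after the open is the part that actually has to be there.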

Related

Priority of learning programming craft and other suggestions [closed]

As I am in the first year of my career in software development (C++ & C#), I now see my flaws and what I am missing in this field. Because of that I came to some conclusions and made myself a plan to fill those gaps and increase my knowledge of software development. But the question I stumbled upon after making the list of tasks I need to do has no obvious answer for me: what is the priority of those tasks? Here are the tasks, numbered by my priority:
Learning:
1. Functional programming (Scala)
2. Data structures & Algorithms (Cormen book to the rescue + TopCoder/ProjectEuler/etc)
3. Design patterns (GOF or Head First)
Do you agree with these tasks and priorities? Or am I missing something here? Any suggestions are welcome!
I think you have it backwards. Start with design patterns, which will help you reduce the amount of messy code you produce and better understand code written by other people (particularly libraries written with design patterns in mind).
In addition to the Gang of Four book, there are many other design pattern books -- Patterns of Enterprise Application Architecture, for example. It might be worth looking at them after you get a good grounding. But I also highly recommend Domain Driven Design, which I think gives you a way of thinking about how to structure your program, instead of just identifying pieces here and there.
Next you can go with algorithms. I prefer Skiena's The Algorithm Design Manual, whose emphasis is more on getting people to know how to select and use algorithms, and how to build them from well-known "parts", than on proving things about algorithms. It is also available for Kindle, which was useful to me.
Also, get a good data structures book -- people often neglect that. I like the Handbook of Data Structures and Applications, though I'm also looking into Advanced Data Structures.
However, I cannot recommend either TopCoder or Euler for this task. TopCoder is, imho, mostly about writing code fast. Nothing bad about it, but it's hardly likely to make a difference on day-to-day stuff. If you like it, by all means do it. Also, it's excellent preparation for job interviews with the more technically minded companies.
Project Euler, on the other hand, is much more targeted at scientific computing, computer science and functional programming. It will be an excellent training ground when learning functional programming.
There's something that has a bit of design patterns, algorithms and functional programming, which is Elements of Programming. It uses C++ for its examples, which is a plus for you.
As for functional programming, I think it is less urgent than the other two. However, I'd point you to either Clojure or Haskell instead of Scala.
Learning functional programming in Scala is like learning Spanish in a latino neighborhood, while learning functional programming in Clojure is like learning Spanish in Madrid, and learning functional programming in Haskell is like learning Spanish in an isolated monastery in Spain. :-)
Mind you, I prefer Scala as a programming language, but I already knew FP when I came to it.
When you do get to functional programming, get Chris Okasaki's Purely Functional Data Structures, for a good grounding on algorithms and data structures for functional programming.
Beyond that, try to learn a new language every year. Even if not for the language itself, you are more likely to keep up to date with what people are doing nowadays.
Data structures and algorithms will help you no matter what language you use. I'd work on it first. Then design patterns (any OOP language will benefit from them). Functional programming is nice, but not necessarily a top priority.
Depends entirely on what you're doing.
I'd tailor which one you learn first to what would help you the most with your current job.
Write lots of code. Try to do it better every time. Occasionally work with more senior people, who can provide guidance, praise, and gentle correction.
I think that in general the topics that you have picked are very important and may give you the chance to do something more than the usual boring stuff. However, I believe that the order should be something like this:
Data structures & Algorithms
Functional programming
Software Design
Specific technologies you need
My opinion is that algorithms and data structures should come first. It is very hard to study algorithms if you have a lot of other things in your head (good coding practices, lots of programming paradigms, etc.). Also, with time, people tend to become lazier and lose the patience to get into the ideas of this complex subject. On the other hand, missing some fundamental understanding about how things can be represented or how they operate may lead to serious flaws in understanding anything more sophisticated. So, assuming that you have some idea about imperative programming (the usual stuff taught in introductory courses), you should enhance your knowledge with algorithms and data structures.
It is important to have at least a basic understanding of other paradigms. Functional programming is a good example. You may also consider getting familiar with logic programming. Having a basic understanding of algorithms and data structures will help you a lot in understanding how such languages work. I don't know whether Scala is the best language for that purpose, but it will probably do. Alternatively, you can pick something more classic like Lisp or Scheme. Haskell is also an interesting language.
About the design patterns... knowing design patterns will help you in doing object-oriented design, but you should be aware that design patterns are just a set of solutions to popular problems. Knowing design patterns is by no means the same as knowing how to design software. In order to improve your software design skills you should study other materials too. A good place to get an understanding of these concepts is the book Code Complete, or the MIT course 6.170 (its materials are publicly available).
At some point you will need to get into the details of a specific framework (or frameworks) that you will need for what you do. Keep in mind, that such frameworks change, and you should be able to adapt, and learn new technologies. For instance, knowing ASP.NET MVC now, may be worthless 5 years from now (or may not be, who knows?).
Finally, keep in mind that no matter what you read, you need to practice a lot, which means solving problems, writing code, designing software, etc. Most of these concepts cannot be easily explained, or even expressed in words, so you will need to reach most of them by yourself (that is, you will need to reinvent the wheel many times).
Good luck with your career!
I would think functional programming would be low in priority, since the languages you use are OO in nature; spending some time on design patterns and on the specifics of the language itself would be more useful.
I read both GOF and Head First; Head First is probably the easier and more fun of the two, but much thicker. You should probably also look at enterprise design patterns, like Martin Fowler's page http://martinfowler.com/eaaCatalog/
What field do you think you will work in? Games? Web? That will probably decide how important the algorithms part will be for you.
I would say that you first need to understand (even if not memorize) the basic algorithms and data structures (use Knuth and Cormen), then move on to learning architecture (design patterns fit here).
Functional programming is just one style of programming and is not mandatory. There are many great programmers who do not use functional programming, but I maintain that for all kinds you must first know the basics: algorithms and data structures.
I'd say #2 goes first, especially if you are planning to use C++/C# at work, having a good command of data structures and algorithms will give you some edge. I see #1 and #3 as somewhat parallel paths, but I do have a couple of suggestions: start with the Head First book for patterns, the GOF is more like a reference book and also the notation and language may get quite abstruse. As for functional programming, may I suggest Clojure instead of Scala? I'm convinced that a "functional-first" language (like F# or Clojure) will force you to think functional (a good thing) instead of just patching your O-O/imperative skills.

What is more interesting or powerful: Curry, Mercury or Lambda-Prolog?

I would like to ask which formal system would be the most interesting to implement from scratch or reverse-engineer.
I've looked through some existing open-source projects of logical/declarative programming systems. I've decided to build something similar in my free time, or at least to grasp the general idea of the implementation.
It would be great if some of these systems would provide most of the expressive power and conciseness of modern academic investigations in logic and its relation with computational models.
What would you recommend studying, at least at the conceptual level? For example, Lambda-Prolog is interesting particularly because it allows for higher-order relations, but AFAIK it is based on intuitionistic logic and therefore lacks the law of the excluded middle; that's generally a disadvantage for me.
I would also welcome any suggestions about modern logical programming systems which are less popular but more expressive/powerful.
Prolog was the first language that changed my point of view on programming. But later I found it to be not as high-level as I'd like.
Curry - I've tried only Munster CC, and found it somewhat inconvenient. Actually, at this point, I decided to stop ignoring Haskell.
Mercury has many things that I wanted to see in Prolog. I have high expectations for its ability to distinguish the modes of rules. Programs written in Mercury should let the compiler do a lot of optimizations (I guess).
Twelf.
It generalizes lambda-prolog significantly, and it's a logical framework and a metalogical framework as well as a logic programming language. If you need a language with a heavy focus on logic as well as computation, it's the best I know of.
If I were to try to extend a logic based system, I'd choose Prolog Cafe as it is small, open sourced, standards compliant, and can be easily integrated into java based systems.
For the final project in a programming languages course I took, we had to embed a Prolog evaluator in Scheme using continuations and macros. The end result was that you could freely mix Scheme and Prolog code, and even pass arbitrary predicates written in Scheme to the Prolog engine.
It was a very instructive exercise. The first 12 lines of code (implementing and and or) literally took about 6 hours to write and get correct. They were pretty much the search logic, written very concisely using continuations. The rest followed a bit more easily. Then, once I added the unification algorithm, it all just worked.

What are the advantages of using Prolog over other languages? [closed]

Every language that is being used is being used for its advantages, generally.
What are the advantages of Prolog?
What are the general situations/ category of problems where one can use Prolog more efficiently than any other language?
Compared to what, exactly? Prolog is really just the pre-eminent implementation of logic programming, so if your question is really about a comparison of programming paradigms, well, that's very broad indeed and you should look here.
If your question is more specifically about prolog vs the more commonly seen OO languages I would argue that you're really comparing apples to oranges - the "advantage" (such as it is) is just a different way of thinking about the world, and sometimes changing the way you ask a question provides a better tool for solving a problem.
Basically, if your program can be stated easily as declarative formal logic statements, Prolog (or another language in that family) will give the fastest development time. If you use a good Prolog compiler, it will also give the best performance and reliability, because the engine will have had a lot of design and development effort.
Trying to implement this kind of thing in another language tends to be a mess. The cleanest and most general solution probably involves implementing your own unification engine. Even naive implementations aren't exactly trivial; the Warren Abstract Machine has a book or two written about it, and doing better will at the very least involve a fair bit of research and reading some headache-inducing papers.
Of course in the real world, key parts of your program may benefit from Prolog, but a lot of other stuff is better handled using another language. That's why a lot of Prolog compilers can interface with, e.g., C.
One of the best times to use Prolog is when you have a problem suited to solving with backtracking. And that is when you have lots of possible solutions to a problem, and perhaps you want to order them to include/exclude depending on some context. This suggests a lot of ambiguity... as in natural language processing.
It sure would be a lot tidier to write all the potential answers as Prolog clauses. With an imperative language, all I think you can really do is write a giant (really giant) CASE statement, which is not much fun.
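For contrast, here is a rough C++ sketch of the kind of hand-rolled depth-first search you end up writing without Prolog. The family facts and the ancestor rule are the usual textbook example, not anything from the question; Prolog gives you this search, plus unification, for free:

```cpp
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// A tiny "fact base": parent(X, Y) pairs, as Prolog clauses would state them.
const std::vector<std::pair<std::string, std::string>> parent = {
    {"tom", "bob"}, {"bob", "ann"}, {"bob", "pat"}};

// ancestor(X, Y) :- parent(X, Y).
// ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
// The recursion below hand-codes the depth-first search Prolog would do.
bool ancestor(const std::string& x, const std::string& y)
{
    for (const auto& p : parent) {
        if (p.first == x && p.second == y) return true;          // first clause
        if (p.first == x && ancestor(p.second, y)) return true;  // second clause
    }
    return false;  // all choice points exhausted: backtrack / fail
}

int main()
{
    std::cout << std::boolalpha << ancestor("tom", "pat") << '\n';  // true
}
```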
The things that are inherent in Prolog:
pattern matching!
anything that involves a depth-first search (in Java, if you want to do a DFS, you may have to implement it with a visitor pattern or write a really giant CASE)
unification
??
Paul Graham is a Lisp person, but nonetheless he argues that Prolog is really good for 2% of problems. I would myself like to break this 2% down and figure out how he came up with such a number.
His argument for "better" languages is "less code, more power". Prolog is definitely "less code", and if you go for later flavours of it (typed ones), you get more power too. The only thing that bothered me when using Prolog is the fact that I don't have random access in lists (no arrays).
Prolog is a very high level programming language. An analogy could be (Prolog : C) as (C : Assembler)
Why is it not used that much, then? I think it has to do with the machines we use; they are based on Turing machines. C can be compiled into machine code fairly directly, but Prolog is compiled to run on an emulation of the Warren Abstract Machine, and thus it is not that efficient.
Also, Prolog is based on first-order logic, which is not capable of solving every solvable problem in a declarative manner; thus, at some point, you need to rely on imperative-like code.
I'd say Prolog works well for problems where a knowledge base forms an important part of the solution. This is especially true when the knowledge structure is suited to being encoded as logical rules.
For example, writing a natural language interpreter for a particular problem domain would require a lot of knowledge in that domain. Expert systems also fall within this knowledge driven category.
It's also a nice language to explore solutions to logical puzzles ;-)
I have been programming (for fun) in SWI-Prolog for over a year. I think one of the advantages of Prolog is that Prolog has no side effects: it is a language that kind of has no use for (local or class member) variables; it kind of forces the programmer not to use variables. Prolog objects have no state, kind of. I think. I have been writing command-line Prolog (no GUI, except a few XPCE tests): it is like a train on a track.

Choice of programming language in book on algorithms? [closed]

Following up on my previous question on the enduring properties of a book on algorithms (see here), I would now like to ask the community: what language would you use to write the examples for such a reference book?
I will probably not use MMIX (!) to write the examples of the book, but at the same time, I think just pseudo-code would be less interesting than examples in a real language.
Still, I'd also like the book to be a resource for researchers as well. What could be the choice of the community? Why?
Answer: I knew this was a tough question and that there would be several different answers. Notice that the answers cover the whole range from Assembly/MMIX (!!) to Python and pseudo-code. The votes and the arguments compel me to choose Uri's sensible answer, with one caveat: my pseudo-code will be as close to C as I can possibly make it (without going into platform-specific issues, of course), and I will possibly discuss better implementations in side notes (as all of us know, mathematically proving that an algorithm works is far, far from the problems of implementing it).
The book is on algorithms in a particular domain, not on the mathematics of algorithms in general (much smarter people have done and will do much better than me on general algorithms). As such, one thing I consider would add value to such a book is the repository of the algorithms, which I will definitely put online in a companion website (maybe in a couple of languages, if I find the time).
Thanks for all the answers. I sometimes feel I should put everybody who answers as co-authors. :)
A good book on algorithms should be written in pseudo-code a la CLR...
In my experience, most books that go into language-specific examples end up looking more like undergraduate textbooks than like serious reference or learning books. In addition, most languages are fairly clunky when dealing with collections (esp. C++ and Java, even with generics). Between all the details, too much is lost. You're also immediately eliminating a lot of your potential audience.
The only advantage of language-specific books is that if you were writing a textbook, the publisher could attach a CD and add $50 to the MSRP.
It's easier for me to understand an algorithm from (readable) pseudocode. If I can't figure out how to implement it in my language with my own collections, I'm in trouble anyway.
You could add to every pseudocode listing a note about implementation details for specific languages (e.g., use a TreeSet in Java for best performance, etc.)
You could also maintain a separate website for the book (good idea anyway) where you'll have actual implementations in different languages. No need to kill trees with long printouts.
Use a real programming language -- never a pseudo one. Readers are very suspicious of pseudo code; readers like real programming languages. The trap with pseudo languages is that you can define code concepts that the reader cannot implement in their language of choice.
A real programming language has a number of advantages:
1) you can test your code, hopefully proving your code correct!
2) you can export that code into a published format for insertion in your book, ensuring that anybody following your code would be looking at actual executable code.
3) you would not have to defend your pseudo code.
The choice of language is obviously subjective, but I think that almost any modern language could be used. I'd recommend one that has the 'least overheads' in terms of quick understanding, and preferably one that the reader can get a compiler/interpreter for.
If you'd like to use C, then perhaps you should check out D, an improved C.
For example, Ruby is of this ilk if you keep your code 'simple'; Java is not (too many support libraries required); in an earlier time Pascal would have been a candidate.
BTW: I don't use Ruby now, as I currently use Smalltalk & REBOL, but I would not use either of those languages in a book. Your book would go straight to the remainder bin!
I would avoid anything that abstracts the core 'mechanics' of any particular algorithm.
There is a tremendous benefit in Knuth's rendering of the algorithms in an assembly level language. It forces the reader to carefully consider exactly what is going on in the silicon when we code algorithms in some higher level language. Especially for systems programmers, this kind of understanding can't be gotten any other way.
Knuth's new MMIX is ideal: consider it an assembly level pseudo-code.
My ideal textbook would have algorithms written in pseudocode and MMIX, so that we can see the algorithm in both its pretty and its gory-complex forms. Pseudo-code should be preferred to "real languages" because it sidesteps pointless "you should have used this language, not that language" battles. At this stage, pseudocode needs no defending -- the best extant algorithms textbooks use either pseudocode or, in Knuth's case, a kind of assembly pseudocode.
The choice is not going to be able to please everyone.
Robert Sedgewick has written his "Algorithms in..." books in multiple languages. I had the C version for a course and bought the C++ version when I started working with C++ at work.
You can't escape language features (even pseudo language features).
To try to please as many people as possible you could choose two languages, say one functional and one not. It could help illustrate motivations in algorithm choice.
C style is often used because many languages use a very similar style so most programmers understand it without explanation. Further, examples can be run on any machine with a C compiler - which is nearly every machine.
However, higher level concepts often require the use of more recent technologies and techniques - OO, functional programming, etc.
These are often expressed in the language that has the required features. Java, C#, Erlang, Ada, etc - most good programmers will grasp what is going on with just a little explanation.
But C is very nearly a universal foundation - you really can't go wrong if you adopt a C style for examples.
-Adam
I would not use any specific language. Use a pseudo-language that will be clear to most anyone who has done a little programming. Usually these books use something close to the C style, but that is not a rule. I know that you mention that you do not want to use pseudo code, but that will allow you to reach a broader audience.
I would use something that lets you express exactly the idea behind the algorithm.
Haskell is quite neat, but I think that with algorithms that work with state, it can get in your way, and you would be more occupied with the language than with the algorithm.
I wouldn't use C or its descendants (C++, C#, Java ...) because they will get in your way when your algorithms are more "functional" in nature. Again, you would be more occupied with the language than with the algorithm. I would feel very uncomfortable if I had to work without higher order functions.
So, basically, I would use a multi-paradigm language that you are comfortable with, and with which you feel confident that you can express the algorithms without diving into language specifics.
My personal choice would be something like Common Lisp, but perhaps Python or Scala is workable, too.
Python's a good choice all around. It's very readable, even if you haven't programmed in it before. Plus, it's a lot less verbose than some other common language choices.

How to write a linter? [closed]

In my day job, I and others on my team write a lot of hardware models in Verilog-AMS, a language supported primarily by commercial vendors and a few open-source simulator projects.
One thing that would make supporting each other's code easier would be a linter that would check our code for common problems and assist with enforcing a shared code formatting style.
I of course want to be able to add my own rules and, after I prove their utility to myself, promote them to the rest of the team.
I don't mind doing the work that has to be done, but of course also want to leverage the work of other existing projects.
Does having the allowed language syntax in a yacc or bison format give me a leg up?
Or should I just suck each language statement into a Perl string and use pattern matching to find the things I don't like?
(Most syntax and compilation errors are easily caught by the commercial tools, but we have some of our own extensions.)
lex/flex and yacc/bison provide easy-to-use, well-understood lexer- and parser-generators, and I'd really recommend doing something like that as opposed to doing it procedurally in e.g. Perl. Regular expressions are powerful stuff for ripping apart strings with relatively-, but not totally-fixed structure. With any real programming language, the size of your state machine gets to be simply unmanageable with anything short of a Real Lexer/Parser (tm). Imagine dealing with all possible interleavings of keywords, identifiers, operators, extraneous parentheses, extraneous semicolons, and comments that are allowed in something like Verilog AMS, with regular expressions and procedural code alone.
There's no denying that there's a substantial learning curve there, but writing a grammar that you can use for flex and bison, and doing something useful on the syntax tree that comes out of bison, will be a much better use of your time than writing a ton of special-case string-processing code that's more naturally dealt with using a syntax-tree in the first place. Also, what you learn writing it this way will truly broaden your skillset in ways that writing a bunch of hacky Perl code just won't, so if you have the means, I highly recommend it ;-)
Also, if you're lazy, check out the Eclipse plugins that do syntax highlighting and basic refactoring for Verilog and VHDL. They're in an incredibly primitive state, last I checked, but they may have some of the code you're looking for, or at least a baseline piece of code to look at to better inform your approach in rolling your own.
I've written a couple of Verilog parsers, and I would suggest PCCTS/ANTLR if your favorite programming language is C/C++/Java. There is a PCCTS/ANTLR Verilog grammar that you can start with. My favorite parser generator is Zebu, which is based on Common Lisp.
Of course the big job is to specify all the linting rules. It makes sense to make some kind of language to specify the linting rules as well.
Don't underestimate the amount of work that goes into a linter. Parsing is the easy part because you have tools (bison, flex, ANTLR/PCCTS) to automate much of it.
But once you have a parse, then what? You must build a semantic tree for the design. Depending on how complicated your inputs are, you must elaborate the Verilog-AMS design (i.e. resolve parameters, unroll generates, etc., if you use those features). And only then can you try to implement rules.
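As a rough illustration of that last step: once the parser (flex/bison, ANTLR, whatever) hands you a tree, a lint rule is typically just a recursive walk over it. The Node structure and the example rule below are invented for this C++ sketch, not taken from any real Verilog-AMS front end:

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical, heavily simplified syntax-tree node produced by the parser.
struct Node {
    std::string kind;  // e.g. "module", "always", "blocking_assign"
    int line;
    std::vector<Node> children;
};

// Example rule: flag blocking assignments inside an "always" block.
void check_blocking_in_always(const Node& n, bool inside_always,
                              std::vector<std::string>& reports)
{
    bool now_inside = inside_always || n.kind == "always";
    if (now_inside && n.kind == "blocking_assign") {
        reports.push_back("line " + std::to_string(n.line) +
                          ": blocking assignment inside always block");
    }
    for (const auto& child : n.children) {
        check_blocking_in_always(child, now_inside, reports);
    }
}

int main()
{
    // A toy tree: module -> always -> blocking assignment.
    Node tree{"module", 1, {{"always", 3, {{"blocking_assign", 4, {}}}}}};
    std::vector<std::string> reports;
    check_blocking_in_always(tree, false, reports);
    for (const auto& r : reports) std::cout << r << '\n';
}
```

Each new rule is then just another walk (or another check hung off a shared walk), which is also the natural place to plug in a small rule-description language if you go that route.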
I'd seriously consider other possible solutions before writing a linter, unless the number of users and potential time savings thereby justify the development time.
In trying to find my answer, I found this on ANTLR - might be of use
If you use Java at all (and thus IDEA), the IDE's extensions for custom languages might be of use
yacc/bison definitely gives you a leg up, since good linting would require parsing the program. Regex (true regex, at least) might cover trivial cases, but it is easy to write code that the regexes don't match but are still bad style.
ANTLR looks to be an alternative path to the more common (OK, I'd heard about them before) YACC/BISON approach, which it turns out also commonly uses LEX/FLEX as a front end.
A quick read of the FLEX man page kind of makes me think it could be the framework for that regex type of idea.
Ok.. I'll let this stew a little longer, then see how quickly I can build a prototype parser in one or the other.
and a little bit longer

Resources