I have a vague memory of a programming language where 1..10 meant "the range 1 inclusive to 10 exclusive", similar to python's range(1, 10), but I haven't the foggiest which, and this doesn't particularly lend itself to searches. Any help?
If the answer's "python", please forgive me. I know very little python.
Pascal supports that syntax. You can actually use this as a type, and I believe it's also used in specifying array bounds. (I'm not sure how much of this is standard Pascal and how much is Turbo Pascal extensions.)
Haskell does it.
It is Perl, where it's called the "Range Operator":
http://www.cs.cf.ac.uk/Dave/PERL/node38.html
Groovy uses this syntax, too.
http://groovy.codehaus.org/Collections
In F#, these are called Sequence Expressions:
http://msdn.microsoft.com/en-us/library/dd233209.aspx (select the F# examples to see the code)
Ruby!
Since the OP already got the answer they were looking for, and this has kind of become a list of languages that use 1..n syntax, I'll add one more.
Maple
The wiki page shows a good example
myfac := n -> product( i, i=1..n );
Note however that this range runs from 1 to n inclusive.
Ruby has VERY similar syntax...
You can read more here:
http://www.ruby-doc.org/core/classes/Range.html
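In fact Ruby has both forms: two dots include the end, while three dots exclude it, matching Python's range(1, 10):

(1..10).to_a    # => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  (inclusive)
(1...10).to_a   # => [1, 2, 3, 4, 5, 6, 7, 8, 9]      (end excluded)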
If it's a vague and distant memory, then I'd guess that Pascal or Delphi is the most likely candidate for the language you're thinking of.
It's most commonly used in Pascal in a case statement. See here for example syntax: http://en.wikipedia.org/wiki/Switch_statement#Pascal
It could also be any of a number of other languages that use this syntax, but without knowing a bit more about your programming history, my guess would still be Pascal / Delphi.
Has anyone set out a proposal for a formal pseudo code standard?
Or is it better to have a 'rough' standard that relies on inferred understanding?
It is better to be a rough standard; the intent of pseudocode is to be human-readable, not machine-readable, and the goal of actually writing pseudocode is to convey a higher-level description of an algorithm while being unconcerned (typically) with the minutiae of the implementation. My opinion is that for it to qualify as pseudocode there has to be some ambiguity, and your goal should be a clear conveyance of your algorithmic intentions. Stick to common control structures, declarations and concepts that are paradigmatic to your target audience or language and you'll get the point across. If you start getting too formal, you're getting too close to writing actual code.
NOT AN ANSWER.
IMHO forcing a standard (pseudo code syntax, if you will) will cause people to be less clear on what they want to say.
Browse around, try to gather some knowledge about used conventions, and do your best to be clear.
Although this is by no means a formal proposal, Python is considered by some to be Executable Pseudocode.
In my opinion it depends on who will be reading your pseudocode. In algorithms textbooks the pseudocode is very formal and close to mathematics, but it is also explained in a few paragraphs of prose, so that style suits scientific contexts.
If I'm working in other environments I would prefer a less formal style, because that is easier for most people to understand. I prefer spoken words over formalism. If you want formalism, you could read the code instead.
I love Ruby, for past couple of years it is my language of choice.
But ever since I started learning it, I was repelled by the fact that so often there are several ways to do the same (or an equivalent) thing. I'll give a couple of examples:
methods often have aliases, so you always have to choose the most adequate, popular, or commonly accepted alternative (see the snippet after this list)
and and or, besides && and ||; just look at how much confusion the precedence difference between them causes
the for keyword, used almost exclusively by inexperienced developers coming to Ruby from other languages
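For instance, a quick irb session illustrates the aliases point (my own example):

a = [1, 2, 3]
a.size                    # => 3
a.length                  # => 3  (alias of size)
a.count                   # => 3  (yet another way)
a.map     { |x| x * 2 }   # => [2, 4, 6]
a.collect { |x| x * 2 }   # => [2, 4, 6]  (collect is an alias of map)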
What was the rationale behind such design decisions? Did they (Matz?) believe the language would be easier to adopt, and therefore more popular, that way?
Ruby is inspired by Perl, and one important Perl philosophy is "There's more than one way to do it", i.e. redundancies are fine since they give the programmer more freedom (and increase the odds that the functionality they want is available under the name they'd give it, not only under one name). It's your call whether that's actually a good thing.
When Matz wrote Ruby, he tried to follow the 'Principle of Least Surprise'. Often this meant that there'd be more than one way to do the same thing, for example assigning to arrays using square brackets, or an insert method. I enjoy it, because rather than trying to remember which exact name to use in which situation (I always used to pause for a moment over size vs. length in Java), I just write what seems logical, and usually it works. When reading code, a different name is normally not a problem, as the names are usually self-explanatory. So I don't worry about which is most adequate or popular; I choose the most logical at the time.
Matz was also inspired by Perl, which has 'There's more than one way to do it' as its slogan.
I don't believe Matz was worried about what would be most popular, he just wanted to write the language he wanted to use.
I'm not going to try to explain and vs && though...
Beware that and and &&, though similar, have different precedence:

a = b && c   # equivalent to a = (b && c); a gets the result of the condition
a = b and c  # equivalent to (a = b) and c; a gets b, and the whole expression evaluates the condition
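A concrete irb check of the difference (with b truthy):

b, c = 1, 2
a = b && c    # a == 2: parsed as a = (b && c)
a = b and c   # a == 1: parsed as (a = b) and c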
There's more than one way to do it, but there may be subtle differences between them.
(update, just noticed you mentioned the precedence difference in your question... sorry. nothing to see here. move along.)
I heard people say they can understand their Python code a year later but not their XYZ code. Why? I don't know what is good about one language's syntax or bad about another's. I like C#, but I have a feeling VB.NET code is easier to read. I am doing language design, so: what do you find makes code/syntax/language readable or not readable?
Experience.
IMO, one of the big things is significant whitespace. Block indentation goes a long way, and languages like Python and F# that enforce significant whitespace help with readability.
Code in languages like Java and C# tends to be structured such that readability becomes a function of how it was written in the first place, not of the language itself.
Code is readable when it's written in a style that explicitly states what you want to do.
This depends on the language only insofar as:
it allows you to express what you want (functional programming!)
it doesn't encourage cryptic statements
The rest depends on the style you use to write code (even Perl can be understandable!), but certain languages make it easier to write hacky statements.
Clear:
expr = if not (ConditionA and ConditionB) then ... else ...
Unclear:
expr = (!(conditionA && conditionB)) ? ... : ...
Clear:
foreach line in lines:
if (line =~ /regex/):
// code
Unclear:
... if /regex/ foreach (@lines);
Clear:
x = length [ x | x <- [1..10], even x ]
Unclear:
int x = 0;
for (int i = 1; i <= 10; ++i)
if ((i & 1) == 0) ++x;
Generally what makes Python considered readable is that it forces a standardized indentation. This means that you'll never be forced to wonder whether you're in an if block or a function, it is clear as day. Even poorly written code therefore becomes obvious.
One language which I generally consider difficult to read is PHP, for the same reason (or rather, its opposite). Since programmers are allowed to indent at will and store variables anywhere, it can get convoluted very quickly. Further, since PHP historically did not have case-sensitive function names (before PHP 4.4.7, I believe), there really isn't consistency in the core language either... (Don't get me wrong, I like the language, but a bad coder can REALLY make a mess.)
JavaScript also has a lot of problems with undisciplined developers. You'll find yourself wondering where variables have been defined and what scope you're in. Code will not be in one consolidated place, but rather spread across multiple files, and often lurking where unexpected.
ActionScript 3 is a bit better. Generally, there has been a move to have everyone use a similar syntax, and Adobe has gone so far as to define its standards and make them accessible and common. It does not take much to see how an ECMAScript implementation supported by a for-profit company is superior to the generalized one.
Readability is a function that takes a lot of inputs. I don't think it's really possible to compile a full list of things that can affect a language's readability. The most general way to describe it is "minimizing cognitive load." A few major factors:
Subtleties of meaning. If two code snippets look very similar at a glance but do different things, it hurts readability because the reader has to stop and deduce what's actually happening.
Meaningless code — aka boilerplate. This doesn't necessarily mean code that does nothing, but code that doesn't tell me anything about what we're actually doing. Every bit of code that doesn't express the actual intent of a function or object reduces readability by that much.
Cramming meaning — aka golf. This is the opposite of the boilerplate problem. It's possible to compress code so far that the reader is forced to stop and examine it pretty much character by character. The exact line where this occurs is somewhat subjective (which is part of why some people love Perl and some people hate it), but it's definitely a real phenomenon.
The programmer makes code readable or unreadable, not the language. Thinking otherwise is just fooling yourself. This is because the only people who are qualified to judge readability are those who know the language. To the non-programmer, all languages are equally unreadable.
I heard people say they can understand their python code a year later but not their XYZ code. Why?
Firstly, I don't think that people say that based solely on syntax. There are a lot of other factors to take into consideration, to name just a few:
The fact that some languages tend to promote only one right way to do something (like Python), and others promote many different ways (Ruby for example, from what I hear [disclaimer: I am not a Ruby programmer])
The libraries the language has. The better designed ones tend to be incredibly easy to understand without needing documentation, and this also tends to help remember. A language with good libraries will therefore make things easier.
Having said that, my personal take on Python is captured by the fact that many people call it "executable pseudo-code". It supports a wide variety of constructs that tend to appear in pseudo-code and that, by extension, match the standard way of thinking about problems.
Python's un-C-like syntax, one of the features that makes it so disliked by so many people, also makes Python look more like pseudocode.
Well, that's my take on Python's readability.
To be honest, when it comes to what makes a language readable, it really seems to boil down to a combination of simplicity and personal preference. (Of course, it is always possible to write unreadable code in any language if you try hard enough.) Since personal preference can't really be controlled, it comes down to ease of expression: the more complicated it is to use a language's simple features, the more difficult that language is likely to be from a readability standpoint.
A word required where one character would suffice: a dig at Pascal and VB.
Compare:
Block ()
Begin
// Content
End
vs.
Block
{
// Content
}
It requires extra brain processing to read a word and mentally associate it with a concept, while a single symbol is recognized immediately by its image.
It is the same as the difference between natural languages: ordinary alphabetic text versus symbol languages built on logographs (like the East Asian scripts). Processing the first group is slower, because the text must first be parsed into a set of concepts, while logographs represent concepts directly. Compare it with something you already know: will deserializing XML ever be faster than a custom lookup in a binary format?
IMHO, the more a computer language resembles a spoken language, the more readable it is. For extreme examples, take languages like J or Whitespace or Brainfuck... completely unreadable to the untrained eye.
But a language that resembles English can be more easily understood. Not that this makes it the best language, as COBOL can attest.
I think it has more to do with the person writing the code rather than the actual language itself. You can write very readable code in any language, and unreadable code in any language. Even a complex Regular expression can be formatted and commented so as to make it easy to read.
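Ruby, for example, has an extended regular-expression mode (the /x flag) for exactly this purpose; a small sketch (the pattern and the US_PHONE name are my own invention):

# /x makes the regex ignore whitespace and allow inline comments
US_PHONE = /
  \A
  (\d{3})     # area code
  [-.\s]?
  (\d{3})     # prefix
  [-.\s]?
  (\d{4})     # line number
  \z
/x
"555-867-5309" =~ US_PHONE   # => 0 (matched at offset 0)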
A coworker of mine used to have a saying: "You can write crap code in any language." I liked it and wanted to share it today. What makes code readable? Here are my thoughts:
The ability to read the syntax of the language.
Well formatted code.
Meaningfully named variables and functions
Comments to explain complex processing. Beware: too many comments can make the code hard to read.
Short functions are easier to read than long ones.
None of these have anything to do with the language, it's all about the coder, and the quality of their work.
I would say that code is readable through its simplicity.
You should be able to grasp at first sight what it does and what its purpose is. Why write a thousand lines of code when a few will do what is required?
This is the spirit of a functional language like F#, for instance.
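The same spirit shows up in Ruby; a trivial illustration:

# one expression instead of a counter and a loop
(1..10).count(&:even?)   # => 5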
For me it's mainly a question of whether the language lets you build more readable abstractions that keep you from getting lost in details.
This is where OOP comes in very handy with its hiding of details. If I can hide the details of a task behind an interface that behaves like a common concept (e.g. iterators in C++), I usually don't have to read the implementation details.
I think language design (for normal languages, not Brainfuck :) ) doesn't matter that much. To make code readable, follow standards and code conventions, and don't forget about refactoring.
It's all about clean code.
Keep it small, simple, well named, and formatted.
class ShoppingCart {
def add(item) {
println "you added some $item"
}
def remove(item) {
println "you just took out the $item"
}
}
def myCart = new ShoppingCart()
myCart.with {
add "juice"
add "milk"
add "cookies"
add "eggs"
remove "cookies"
}
The literacy level of the reader.
Two distinct aspects, really. First is syntax and whitespace. Python enforces a whitespace standard, dropping unnecessary {, } and ; characters. This makes it easy on the eyes. Second, and most importantly, clarity of expression- i.e. how easy is it to map code back to the way you think. There are several features (and non-features) in programming languages that contribute to the latter point:
Disallowing jumps. The goto statement in C is a typical example: code that doesn't keep jumping out of structured blocks is easier to read.
Minimizing side-effects. Global variables are evil, remember?
Using more tailored functions. How can your head track a for loop with 5 iteration variables? The Common Lisp loop is much easier to read (although VERY difficult to write, but that's a different story)
Lexical closures. You can figure out a variable's value by just looking at it, as opposed to running the code in your head, and then figuring out which statement is shadowing which.
A couple of examples:
(loop
  for tweet = (master-response-parser (twitter-show-status tweet-id))
  for tweet-id = tweet-id then (gethash in-reply-to tweet)
  while tweet-id
  collecting tweet)
and
listOfFacs = [x | x <- [1 ..], x == sumOfFacDigits x]
  where sumOfFacDigits x = sum [factorial (read [d]) | d <- show x]
        factorial n = product [1 .. n]
Concerning the syntax, I think it is imperative that it be fairly descriptive. For instance, many languages have a foreach statement, and each one handles it a bit differently.
// PHP
foreach ($array as $variable) ...
// Ruby
array.each{ |variable| ... }
// Python
for variable in array: ...
// Java
for (String variable : array)
Honestly, I feel that PHP and Python have the clearest means of understanding, but, it all boils down to how smart and clear the programmer wants to be. For instance, a bad programmer could write the following:
// PHP
foreach ($user as $_user) ...
My guess is that you would have almost no idea what the heck the code is doing unless you tracked back and attempted to figure out what $user was and why you were iterating over it. Being clear and concise is all about making small chunks of code make sense without having to trace back through the program to figure out what variables/function names are.
Also, I would have to completely agree with whitespace. Tabs, newlines and spacing in-between operators really make a huge difference!
Edit 1: I might also interject that some languages have the syntax and tools readily available to make things more clear. Take Ruby for example:
if [1,2,3].include? variable then ... end
versus, say, Java:
if (variable == 1 || variable == 2 || variable == 3) { ... }
One of these (IMHO) is certainly more clear and readable than the other.
I have decided to write a small interpreter as my next project, in Ruby. What knowledge/skills will I need to have to be successful?
I haven't decided on the language to interpret yet, but I am looking for something that is not a toy language, but would be relatively easy to write an interpreter for.
Thanks in advance.
You will have to learn at least:
lexical analysis (grouping characters into tokens)
parsing (grouping tokens together into structure)
abstract syntax trees (representing program structure in a data structure)
data representation (assuming your language will have variables)
an evaluation loop that "runs" your program
An excellent introduction to some of these topics can be found in the introductory text Structure and Interpretation of Computer Programs. The language used in that book is Scheme, which is a robust, well-specified language that is ideally suited for your first interpreter implementation. Highly recommended.
I haven't decided on the language to interpret yet, but I am looking for something that is not a toy language, but would be relatively easy to write an interpreter for. Thanks in advance.
Try some dialect of Lisp like Scheme or Clojure. (Now there's an idea: Clojure-in-Ruby, which integrates with Ruby as well as Clojure does with Java.)
With Lisp, there is no need to bother with idiosyncrasies of syntax, as Lisp's syntax is much closer to the abstract syntax tree.
This SICP chapter shows how to write a Lisp interpreter in Lisp (a metacircular evaluator). In my opinion this is the best place to start. Then you can move on to Lisp in Small Pieces to learn how to write advanced interpreters and compilers for Lisp. The advantage of implementing a language like Lisp (in Lisp itself!) is that you get the lexical analyzer, parser, AST, data/program representation and REPL for free. You can concentrate on the task of getting your great language working!
There is the Treetop project, which can be helpful for you: http://treetop.rubyforge.org/
You can check out the Ruby Draft Specification: http://ruby-std.netlab.jp/
I had a similar idea a couple of days ago. LISP is by far the easiest to implement because the syntax is so simple, and the data structures that the language manipulates are the same structures that the code is written in. Hence you need only a minimal implementation, and can define the rest in terms of itself.
However, if you are trying to learn about parsing, you may want to do a more complex language with Abstract Syntax Trees, etc.
If you want to check out my (literally two days old) Java implementation of lisp, check out mylisp.googlecode.com. I'm still working on it but it is incredible how short a time it took to get the existing stuff working.
It's not sooo hard. Here's a LISP interpreter in Ruby, and the source is so small you are supposed to copy/paste it. But are you gonna learn LISP now? hehe.
If you're just doing this for fun, make up your own, simple language and just try it. My recommendation would be something like a really simple classic BASIC (no visual basic or object oriented stuff). With line numbers, GOTO, INPUT and PRINT and that's it. You get to do the basics, and you get a better understanding of how things work.
The knowledge you'll need?
Tokenizing (turning that huge chunk of characters into something more efficiently readable, effectively splitting it up into 'words')
Parsing (going over the tokens and building a data structure from it)
Interpreting (looping over the data structure and executing each command)
And for that last one you'll also need a way to keep around variables. Usually you'd just implement a "stack", one huge block of data where you can mark off an area at the end.
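To make those three stages concrete, here is a minimal Ruby sketch of a four-function calculator interpreter. All the names (Calc, eval_node, and so on) are my own invention for illustration, not from any of the linked resources:

class Calc
  def initialize(source)
    # 1. Tokenize: numbers and operators become an array of strings.
    @tokens = source.scan(%r{\d+|[-+*/()]})
  end

  # 2. Parse (recursive descent): expr := term (('+'|'-') term)*
  def expr
    node = term
    node = [@tokens.shift, node, term] while %w[+ -].include?(@tokens.first)
    node
  end

  # term := factor (('*'|'/') factor)*
  def term
    node = factor
    node = [@tokens.shift, node, factor] while %w[* /].include?(@tokens.first)
    node
  end

  # factor := number | '(' expr ')'
  def factor
    if @tokens.first == '('
      @tokens.shift            # consume '('
      node = expr
      @tokens.shift            # consume ')'
      node
    else
      @tokens.shift.to_i       # a number literal
    end
  end

  # 3. Interpret: walk the nested-array AST and execute each node.
  def eval_node(node)
    return node if node.is_a?(Integer)
    op, left, right = node
    eval_node(left).send(op, eval_node(right))
  end

  def run
    eval_node(expr)
  end
end

puts Calc.new("1 + 2 * (3 + 4)").run   # => 15

A real language would also need statements, variable bindings (the "stack" mentioned above), and error handling, but the shape stays the same.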
It's not implemented in Lisp, but I found Write Yourself A Scheme in 48 Hours to be a very useful document while I was starting out with Haskell (though I didn't get anywhere near finishing it after 48 hours; YMMV). It also gives you a lot of insight into interpreters in general.
I can recommend this book. It discusses patterns for writing parsers and interpreters and more:
http://www.amazon.co.uk/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=language+implementation+patterns&x=0&y=0
Today someone asked me what was wrong with their source code. It was obvious. "Use double equals in place of that single equal in that if statement. Um, I think..." As I remember some languages actually take a single equals for comparison. Since I sometimes forget or mix up the syntax details among the several languages I use, I stepped over to my laptop to try a quickie experiment.
It costs a bit of time and is a break in the flow to try "quick" experiments (though maybe the practice is good for memory.) What tips do you have for keeping straight in your mind the syntax (and other) details of multiple languages?
(And nowadays, this applies just as well to the many wiki-like markups!)
To me, the hardest part isn't the syntax; usually you get into the mode when looking at the code you're working on. The really hard part is remembering the language's library, so you don't go reinventing the wheel over and over again. Now if only people would organize their help files so it was easy to search for particular things in the library.
IDEs that can draw red and yellow squiggles can help, until you develop that mental muscle memory.
One of the annoying things with XCode (for Cocoa/ObjectiveC) is that you don't get said squiggles until you compile. (As opposed to Eclipse/Java where you get live squiggles).
In my case it's just experience. I think once you code in a language for long enough your brain seems to be able to do language-context-switching with it.
Indeed, on SO I advised people to avoid if (a = b) in Java, and someone reminded me that it is legal only if a and b are boolean! Of course, the advice is good for C, C++, JavaScript and a number of other C-like languages.
Likewise, I realized only recently that var v in JavaScript has function-level scope only, not brace-level scope.
Somehow, that's the pitfall of having similar syntaxes, but different behaviors.
As an anecdote, some people on the Lua mailing list complain that this language isn't C-like, with the terse and familiar curly braces, += and ++, and the bitwise operators. They say it hurts adoption of the language, because people are more familiar with C-like syntax.
That's nonsense: Basic was (and still is) widely used with its verbose syntax, and so is Pascal (Delphi). And lots of people find the Lua syntax readable and easy to learn, good for people not familiar with programming (game AI specialists, for example).
Moreover, and to the point, Lua is designed to be integrated into C/C++ programs and extended with C[++] functions. And people say the quite different syntaxes help with the mindset shift.