I'm interested in why the shorthand forms of the assignment operators work in only one way, that is, with the variable's current value as the left operand.
For example (in JavaScript):
x += y;  // x = x + y
x -= y;  // x = x - y
x *= y;  // x = x * y
x /= y;  // x = x / y
x %= y;  // x = x % y
Frequently I find situations where I need to prepend to the variable instead:
Ex.
x=y+x
Suppose x and y are strings and you are concatenating.
I would like to have a syntax that allows something like:
x=+y
just as i++ and ++i offer two variants when incrementing a number.
Is there some language that supports this?
Surely x=y+x computes the same value as y+=x, though the result lands in x rather than y.
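A quick Ruby sketch of the difference (JavaScript behaves the same way):

x = "world"
y = "hello "
x = y + x   # x is now "hello world"; y is unchanged
# whereas y += x would have grown y and left x alone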
I'm puzzled as to why you would learn a new language just to save one character!
However, I would suggest jQuery's .prepend() method:
http://api.jquery.com/prepend/
There are languages that allow you to define new operators and/or overload existing ones (see operator overloading).
But operators and their use should be unambiguous. In your example, x=+y could be interpreted as x=y+x (as you intended) but also as x=(+y), with + as a unary operator, like the negation in -1. This ambiguity can make a language hard to use, especially when programmers want to make their code short and concise. That's also why some languages omit syntactic sugar like pre/post increment/decrement operators (e.g. Python).
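This isn't hypothetical. Ruby, for example, already parses x =+ y as an assignment of a unary plus (JavaScript reads it the same way):

x = 10
y = 3
x =+ y   # parsed as x = (+y), not x = y + x
puts x   # prints 3, not 13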
I'm wondering why calls to operator methods don't require a dot. Or rather, why can't normal methods be called without one?
Example
class Foo
  def +(object)
    puts "this will work"
  end

  def plus(object)
    puts "this won't"
  end
end

f = Foo.new
f + "anything"    # "this will work"
f plus "anything" # NoMethodError: undefined method `plus' for main:Object
The answer to this question, as to pretty much every language design question, is: "Just because". Language design is a series of mostly subjective trade-offs. And for most of those subjective trade-offs, the only correct answer to the question of why something is the way it is, is simply "because Matz said so".
There are certainly other choices:
Lisp doesn't have operators at all. +, -, ::, >, = and so on are simply normal legal function names (variable names, actually), just like foo or bar?
(plus 1 2)
(+ 1 2)
Smalltalk almost doesn't have operators. The only special casing Smalltalk has is that methods which consist only of operator characters do not have to end with a colon. In particular, since there are no operators, all method calls have the same precedence and are evaluated strictly left-to-right: 2 + 3 * 4 is 20, not 14.
1 plus: 2
1 + 2
Scala almost doesn't have operators. Just like Lisp and Smalltalk, *, -, #::: and so on are simply legal method names. (Actually, they are also legal class, trait, type and field names.) Any method can be called either with or without a dot. If you use the form without the dot and the method takes only a single argument, then you can leave off the brackets as well. Scala does have precedence, though, although it is not user-definable; it is simply determined by the first character of the name. As an added twist, operator method names that end with a colon are inverted or right-associative, i.e. a :: b is equivalent to b.::(a) and not a.::(b).
1.plus(2)
1 plus(2)
1 plus 2
1.+(2)
1 +(2)
1 + 2
In Haskell, any function whose name consists of operator symbols is considered an operator. Any function can be treated as an operator by enclosing it in backticks and any operator can be treated as a function by enclosing it in brackets. In addition, the programmer can freely define associativity, fixity and precedence for user-defined operators.
plus 1 2
1 `plus` 2
(+) 1 2
1 + 2
There is no particular reason why Ruby couldn't support user-defined operators in a style similar to Scala. There is a reason why Ruby can't support arbitrary methods in operator position, simply because
foo plus bar
is already legal, and thus this would be a backwards-incompatible change.
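A small sketch of the clash, with hypothetical top-level methods foo and plus:

def plus(arg)
  "plus(#{arg})"
end

def foo(arg)
  "foo(#{arg})"
end

puts foo plus "bar"   # already parses today as foo(plus("bar"))

Reinterpreting foo plus bar as foo.plus(bar) would silently change the meaning of existing programs like this one.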
Another thing to consider is that Ruby wasn't actually fully designed in advance. It was designed through its implementation. Which means that in a lot of places, the implementation is leaking through. For example, there is absolutely no logical reason why
puts(!true)
is legal but
puts(not true)
isn't. The only reason why this is so, is because Matz used an LALR(1) parser to parse a non-LALR(1) language. If he had designed the language first, he would have never picked an LALR(1) parser in the first place, and the expression would be legal.
The Refinement feature currently being discussed on ruby-core is another example. The way it is currently specified will make it impossible to optimize method calls and inline methods, even if the program in question doesn't actually use Refinements at all. With just a simple tweak, it can be just as expressive and powerful, and ensure that the pessimization cost is only incurred for scopes that actually use Refinements. Apparently, the sole reason why it was specified this way is that a) it was easier to prototype this way, and b) YARV doesn't have an optimizer, so nobody even bothered to think about the implications (well, nobody except Charles Oliver Nutter).
So, for basically any question you have about Ruby's design, the answer will almost always be either "because Matz said so" or "because in 1993 it was easier to implement that way".
The implementation doesn't have the additional complexity that would be needed to allow generic definition of new operators.
Instead, Ruby has a Yacc parser that uses a statically defined grammar. You get the built-in operators and that's it. Operator symbols occur in a fixed set of productions in the grammar. As you have noted, the operators can be overloaded, which is more than most languages offer.
Certainly it's not because Matz was lazy.
Ruby actually has a fiendishly complex grammar that is roughly at the limit of what can be accomplished in Yacc. To get more complex would require using a less portable compiler generator or it would have required writing the parser by hand in C, and doing that would have limited future implementation portability in its own way as well as not providing the world with the Yacc input. That would be a problem because Ruby's Yacc source code is the only Ruby grammar documentation and is therefore "the standard".
Because Ruby has "syntax sugar" that allows for a variety of convenient syntax for preset situations. For example:
class Foo
  def bar=(o); end
end

# This is actually calling the bar= method with a parameter, not assigning a value
Foo.new.bar = 42
Here's a list of the operator expressions that may be implemented as methods in Ruby.
Because Ruby's syntax was designed to look roughly like popular OO languages, and those use the dot operator to call methods. The language it borrowed its object model from, Smalltalk, didn't use dots for messages, and in fact had a fairly "weird" syntax that many people found off-putting. Ruby has been called "Smalltalk with an Algol syntax," where Algol is the language that gave us the conventions you're talking about here. (Of course, there are actually more differences than just the Algol syntax.)
Missing braces was something of an "advantage" in Ruby 1.8, but with Ruby 1.9 you can't even write method_0 method_1 some_param; it will be rejected. So the language is moving toward the strict form rather than free-form syntax.
Racket is the first dialect of Scheme I am learning, and I'm not that far in. However, given Scheme's minimal syntax, I believe it's safe to assume that a question mark in a variable name is not treated by the interpreter any differently from any other viable character.
With that run-on sentence out of the way: why does Scheme use the symbol "?" to denote a function that returns true or false (called a predicate)? For example, in Racket there is a built-in function called number?. number? returns true when applied to any number (1, 5, -5, 2.7, etc.) and false otherwise. I believe that number? is short for something along the lines of is_the_following_argument_a_number?. Assuming that is true, the expression (number? 5) translates into (is_the_following_argument_a_number? 5).
In English (the language this variable was written in), the predicate of "is the following argument a number?" can be found by first translating the question into its statement form by moving the verb, "the following argument is a number", and then extracting the predicate: "is a number". Now, I'm not as good at natural languages as I am at programming languages, but I believe that is correct. Also, sorry if this is turning into an English question more than a Scheme question.
What I am having trouble understanding is this: if the Lisp community calls number? a predicate, why isn't the variable name a predicate in English? (I mean that the variable name isn't an English predicate, not that the kind of function it is in Scheme isn't a predicate.) The English predicate of what number? translates into is "is a number", not the entire question "is the following argument a number?". So why does the Lisp community choose to name predicates in Scheme as questions in English? I believe it is because the community mistakes the truth values of statements (true or false) for the answers to yes/no questions (yes or no, obviously). Am I wrong to think this?
A predicate in computer science doesn't have anything to do with a predicate in language grammar. Both derive from a connection to truth, but otherwise they are unrelated concepts. A predicate in Scheme is a procedure that checks whether something is true or not, and in reality it can have any name. However, since we can encode information in a name, it should state to the point what the procedure is about, which can be any word or even a sentence delimited by hyphens, ending with a question mark to indicate that it is indeed a predicate procedure. Both the name in the definition and at the call site will stand out to the reader, so that they know what it does without looking at the documentation or the implementation.
In the very first Scheme report, and in the second, predicates looked like Common Lisp's and followed the same naming convention Common Lisp has today. Old procedures that were in LISP 1.5 kept the same names without the common p-ending, while newly introduced ones had it, like procp (called procedure? today). The reason for this is that Scheme ran under MacLisp and borrowed all the dull stuff from it, while lexical closures were the magic of Scheme. It actually looked a lot like Common Lisp.
In the RRRS (R2RS) they made all predicates end with ?, and it worked for eq? and friends, but the arithmetic predicates that used symbols, like <?, =?, <=?, etc., were not a success and were removed in R3RS.
In a conditional we call the parts the predicate, the consequent, and the alternative:
(if (< a 0) ; predicate
    (- a)   ; consequent
    a)      ; alternative
Here a predicate is just an expression that evaluates to true or false. Actually, all Scheme values are allowed and only #f is false. A predicate procedure is a procedure that always returns either #t or #f; as you wrote, number? checks whether the argument is a number, and string=? checks whether two arguments are strings that look the same. The pattern is very good: you can imagine what a procedure does just by looking at the name being used, while keeping procedure names short. In speech we often do the same, like saying "coffee?" and getting either a positive or negative response. It works most of the time, and sometimes people need to spell out that they are offering a hot beverage whose name is coffee. In coding, that means looking at the documentation or definition of a procedure.
There are other naming conventions in Scheme.
foo->bar is a procedure that takes an argument of type foo and returns it converted to a bar. number->string takes a number and makes a string representation of it: (number->string 5) ; ==> "5"
foo! may change the object you pass it in order to do the job slightly faster than if it were named foo. set! and set-car! are examples.
*variable* comes from CL, but in Scheme you can be sure it is a global variable.
CONSTANT, +CONSTANT+, +constant+ are common naming for variables that are considered to be constants.
form* does something similar to what form does, but not quite. Special form let* does something similar to let but it binds one variable at a time.
The code works whether you follow these conventions or not, but you make it easier to read by following them: a comparison procedure named foo=? is just as easy to understand as are-these-two-foo-things-equal, and foo? is just as easy as argument-is-a-foo.
Note that other programming languages do this too. In Java one writes isFoo and equals, so it's not spelled out there either.
It's just a programming convention. Predicates, meaning procedures that return true or false, are defined with a name that ends in a question mark. Similarly, procedures that have side effects (e.g., that mutate state) are defined with a name that ends in an exclamation mark.
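For comparison, Ruby inherited both conventions, so the same idea reads:

a = [3, 1, 2]
a.sort    # => [1, 2, 3]; returns a new array, a is unchanged
a.sort!   # mutates a in place; the ! flags the side effect
5.zero?   # => false; the ? flags a predicate
nil.nil?  # => true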
The case/when statements remind me of try/except statements in Python, which are fairly expensive operations. Is this similar with Ruby's case/when statements? What advantages do they have, other than perhaps being more concise, over if/elsif statements in Ruby? When would I use one over the other?
The case expression is not at all like a try/catch block. The Ruby equivalents to try and catch are begin and rescue.
In general, the case expression is used when you want to test one value for several conditions. For example:
case x
when String
  "You passed a string but X is supposed to be a number. What were you thinking?"
when 0
  "X is zero"
when 1..5
  "X is between 1 and 5"
else
  "X isn't a number we're interested in"
end
The case expression is analogous to the switch statement that exists in many other languages (e.g. C, Java, JavaScript), though Python doesn't include any such thing. The main differences are that case is an expression rather than a statement (so it yields a value) and that it uses the === operator for matching, which allows us to express interesting things like "Is this value a String? Is it 0? Is it in the range 1..5?"
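A quick sketch of what === means for each of the values above:

String === "hi"   # => true;  Class#=== asks "hi".is_a?(String)
0 === 0           # => true;  for plain objects === falls back to ==
(1..5) === 3      # => true;  Range#=== tests inclusion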
Ruby's begin/rescue/end is more similar to Python's try/except (assuming Python's construct behaves like JavaScript's or Java's try/catch). In both of the above, the code runs, catches errors, and continues.
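A minimal sketch of the Ruby side (risky_operation is a hypothetical method that may raise):

begin
  risky_operation               # may raise an exception
rescue StandardError => e
  puts "caught: #{e.message}"   # handle the error and carry on
end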
case/when is like C's switch and, ignoring the === operator that bjhaid mentions, operates very much like if/elsif/end. Which you use is up to you, but there are some advantages to using case when the chain of conditionals gets long. No one likes if/elsif/elsif/elsif/elsif/elsif/end :-)
Ruby has some other magical things involving that === operator that can make case nice, but I'll leave that to the documentation which explains it better than I can.
In JavaScript,
f = function(x) {
  return x + 1;
}
(5)
seems at a glance as though it should assign f the successor function, but actually assigns the value 6, because the lambda expression followed by parentheses is interpreted by the parser as a postfix expression, specifically a function call. Fortunately this is easy to fix:
f = function(x) {
  return x + 1;
};
(5)
behaves as expected.
If Python allowed a block in a lambda expression, there would be a similar problem:
f = lambda(x):
    return x + 1
(5)
but this time we can't solve it the same way because there are no semicolons. In practice Python avoids the problem by not allowing multiline lambda expressions, but I'm working on a language with indentation-based syntax where I do want multiline lambda and other expressions, so I'm trying to figure out how to avoid having a block parse as the start of a postfix expression. Thus far I'm thinking maybe each level of the recursive descent parser should have a parameter along the lines of 'we have already eaten a block in this statement so don't do postfix'.
Are there any existing languages that encounter this problem, and how do they solve it if so?
Python has semicolons. This is perfectly valid (though ugly and not recommended) Python code: f = lambda x: x + 1; (5).
There are many other problems with multi-line lambdas in otherwise standard Python syntax, though. It is completely incompatible with how Python handles indentation (whitespace in general, actually) inside expressions: it doesn't, and that's the complete opposite of what you want. You should read the numerous python-ideas threads about multi-line lambdas. It's somewhere between very hard and impossible.
If you want arbitrarily complex compound statements inside lambdas you can't use the existing rules for multi-line expressions even if you made all statements expressions. You'd have to change the indentation handling (see the language reference for how it works right now) so that expressions can also contain blocks. This is hard to do without breaking perfectly fine Python code, and will certainly result in a language many Python programmers will consider worse in several regards: Harder to understand, more complex to implement, permits some stupid errors, etc.
Most languages don't solve this exact problem at all. Most candidates (Scala, Ruby, Lisps, and variants of these three) have explicit end-of-block tokens. I know of two languages that have the same problem, one of which (Haskell) has been mentioned by another answer. Coffeescript also uses indentation without end-of-block tokens. It parses the transliteration of your example correctly. However, I could not find any specification of how or why it does this (and I won't dig through the parser source code). Both differ significantly from Python in syntax as well as design philosophy, so their solution is of little (if any) use for Python.
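Ruby's explicit delimiters, for instance, remove the ambiguity entirely:

f = lambda do |x|
  x + 1
end
(5)          # a separate statement (just the value 5), never a call on f
f.call(5)    # => 6; calling the lambda has to be explicit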
In Haskell, there is an implicit semicolon whenever you start a line with the same indentation as a previous one, assuming the parser is in a layout-sensitive mode.
More specifically, after a token is encountered that signals the start of a (layout-sensitive) block, the indentation level of the first token of the first block item is remembered. Each line that is indented more continues the current block item; each line that is indented the same starts a new block item, and the first line that is indented less implies the closure of the block.
How your last example would be treated depends on whether the f = is a block item in some block or not. If it is, then there will be an implicit semicolon between the lambda expression and the (5), since the latter is indented the same as the former. If it is not, then the (5) will be treated as continuing whatever block item the f = is a part of, making it an argument to the lambda function.
The details are a bit messier than this; look at the Haskell 2010 report.
Related Questions: Benefits of using short-circuit evaluation, Why would a language NOT use Short-circuit evaluation?, Can someone explain this line of code please? (Logic & Assignment operators)
There are questions about the benefits of a language using short-circuit code, but I'm wondering what are the benefits for a programmer? Is it just that it can make code a little more concise? Or are there performance reasons?
I'm not asking about situations where two entities need to be evaluated anyway, for example:
if($user->auth() AND $model->valid()){
    $model->save();
}
To me the reasoning there is clear - since both need to be true, you can skip the more costly model validation if the user can't save the data.
This also has a (to me) obvious purpose:
if(is_string($userid) AND strlen($userid) > 10){
    //do something
}
Because it wouldn't be wise to call strlen() with a non-string value.
What I'm wondering about is the use of short-circuit code when it doesn't affect any other statements. For example, from the Zend Application default index page:
defined('APPLICATION_PATH')
|| define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
This could have been:
if(!defined('APPLICATION_PATH')){
    define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
}
Or even as a single statement:
if(!defined('APPLICATION_PATH'))
    define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
So why use the short-circuit code? Just for the 'coolness' factor of using logic operators in place of control structures? To consolidate nested if statements? Because it's faster?
For programmers, the benefit of a less verbose syntax over another more verbose syntax can be:
less to type, therefore higher coding efficiency
less to read, therefore better maintainability.
Now, I'm only talking about cases where the less verbose syntax is not tricky or clever in any way, just the same recognized way of doing things, but in fewer characters.
It's often only when you see a specific construct in another language that you realize you wish the language you use had it, without necessarily having missed it before. Some examples off the top of my head:
anonymous inner classes in Java instead of passing a pointer to a function (way more lines of code).
in Ruby, the ||= operator, which assigns a value to a variable only when that variable evaluates to false or nil (see the sketch after this list). Sure, you can achieve the same thing with 3 lines of code, but why?
and many more...
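A minimal sketch of ||= in action:

config = nil
config ||= { verbose: true }    # config was nil, so the assignment happens
config ||= { verbose: false }   # config is truthy now, so this is a no-op
config                          # => { verbose: true }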
Use it to confuse people!
I don't know PHP and I've never seen short-circuiting used outside an if or while condition in the C family of languages, but in Perl it's very idiomatic to say:
open my $filehandle, '<', 'filename' or die "Couldn't open file: $!";
One advantage of having it all in one statement is the variable declaration. Otherwise you'd have to say:
my $filehandle;
unless (open $filehandle, '<', 'filename') {
    die "Couldn't open file: $!";
}
Hard to claim the second one is cleaner in that case. And it'd be wordier still in a language that doesn't have unless.
I think your example is for the coolness factor. There's no reason to write code like that.
EDIT: I have no problem with doing it for idiomatic reasons. If everyone else who uses a language uses short-circuit evaluation to make statement-like entities that everyone understands, then you should too. However, my experience is that code of that sort is rarely written in C-family languages; proper form is just to use the "if" statement as normal, which separates the conditional (which presumably has no side effects) from the function call that the conditional controls (which presumably has many side effects).
Short circuit operators can be useful in two important circumstances which haven't yet been mentioned:
Case 1. Suppose you had a pointer which may or may not be NULL and you wanted to check that it wasn't NULL, and that the thing it pointed to wasn't 0. However, you must not dereference the pointer if it's NULL. Without short-circuit operators, you would have to do this:
if (a != NULL) {
    if (*a != 0) {
        ⋮
    }
}
However, short-circuit operators allow you to write this more compactly:
if (a != NULL && *a != 0) {
    ⋮
}
in the certain knowledge that *a will not be evaluated if a is NULL.
Case 2. If you want to set a variable to a non-false value returned from one of a series of functions, you can simply do:
my $file = $user_filename ||
           find_file_in_user_path() ||
           find_file_in_system_path() ||
           $default_filename;
This sets the value of $file to $user_filename if it's present, or the result of find_file_in_user_path(), if it's true, or … so on. This is seen perhaps more often in Perl than C, but I have seen it in C.
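The same idiom translates almost verbatim to Ruby; the helper names below simply mirror the hypothetical Perl ones:

file = user_filename ||
       find_file_in_user_path ||
       find_file_in_system_path ||
       default_filename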
There are other uses, including the rather contrived examples which you cite above. But they are a useful tool, and one which I have missed when programming in less complex languages.
Related to what Dan said, I think it all depends on the conventions of each programming language. I can't see any inherent difference, so do whatever is idiomatic. One thing that comes to mind that could make a difference is when you have to do a series of checks; in that case the short-circuiting style would be much clearer than the alternative if style.
What if you had an expensive-to-call (performance-wise) function on the right-hand side that returned a boolean, and you only wanted it called if another condition was true (or false)? In this case short-circuiting saves you many CPU cycles. It also makes the code more concise because of fewer nested if statements. So: all of the reasons you listed at the end of your question.
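A small Ruby sketch, where cheap_check? and expensive_check? are hypothetical predicates:

if cheap_check?(record) && expensive_check?(record)
  process(record)   # expensive_check? only runs when cheap_check? returned true
end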
The truth is actually performance. At run time, short-circuiting skips the remaining clauses of a logical expression once their outcome can no longer affect the answer, which speeds up evaluation. Compilers apply the same principle statically, eliminating dead code to save on file size and execution speed. For example, take
a AND b AND c
with the terms evaluated left to right.
If a AND b evaluates to FALSE, then the remaining AND c can only be FALSE AND TRUE or FALSE AND FALSE. Both evaluate to FALSE no matter what the value of c is, so c is never evaluated; and when the compiler can prove this ahead of time, it does not even include AND c in the compiled output, short-circuiting the code.
To answer the question: there are cases where the compiler cannot determine whether the logical expression has a constant outcome, and then it cannot short-circuit the code at compile time; the check happens at run time instead.
Think of it this way: if you have a statement like
if( A AND B )
chances are that when A returns FALSE, you'll only want B evaluated in rare special cases. For this reason, NOT using short-circuit evaluation is what would be confusing.
Short-circuit evaluation also makes your code more readable by preventing another level of bracketed indentation, and brackets have a tendency to add up.