Why are Scala vals not lazy by default - performance

I have noticed that I almost exclusively use lazy val assignments as they often avoid unnecessary computations, and I can't see that many situations where one would not want to do so (dependency on mutable variables being a notable exception of course).
It would seem to me that this is one of the great advantages of functional programming and one should encourage its use whenever possible and, if I understood correctly, Haskell does exactly this by default.
So why are Scala values not lazy by default? Is it solely to avoid issues relating to mutable variables?

The big difference here between Scala and Haskell is that Scala allows side-effects whereas Haskell does not.
Laziness results in all sorts of problems in a language which allows side-effects at arbitrary points in the program, as the order in which the side-effects take place becomes non-deterministic.
Nearly transparent Java interoperability plays a large role in the design of Scala, and Java libraries are typically full of side-effects.

Scala is a strict language. Laziness is not only about vals, it's about an evaluation strategy. Should arguments to a function be evaluated before calling the function (what if they are not used?)? In Scala (like most other languages) they are. This strategy carries over to other contexts, including vals and vars.
It would be awkward to break this rule for vals, but laziness can be useful and provided as an opt-in.
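For example, a minimal sketch of the difference (the names here are just for illustration): a plain val is evaluated as soon as it is defined, while an opted-in lazy val is evaluated only on first access and then cached.

object LazinessDemo {
  val strict = { println("evaluating strict"); 1 }           // runs when the object is initialised
  lazy val deferred = { println("evaluating deferred"); 2 }  // runs only on first access, result is cached

  def main(args: Array[String]): Unit = {
    println("before any access")
    println(strict + deferred)  // "evaluating deferred" is printed here, not earlier
  }
}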

As you noted, dependency on mutable variables is incompatible with lazy evaluation.
Note that Scala is a JVM language and Scala programs often use Java libraries, which are not functional at all. Laziness by default would cause a lot of problems with Java libraries.

Related

Why Clojure/Lisp Programs are faster than other dynamic languages?

Based on language shootouts in the past few years, Clojure and other Lisps consistently perform better than most other dynamic languages. Why is that so?
Is it because of its homoiconicity?
Edit:
I did not know that Clojure is compiled into bytecode just like Java and Scala.
This Stack Overflow thread shed light on why Clojure reaps the advantage of being both compiled and interpreted.
It is almost impossible to answer this question - it depends very much on how well the benchmark code is written, what exactly is being tested, whether you are allowed to use libraries that exploit native code, whether you are writing "idiomatic" code or optimising for performance etc.
So as always, you should treat all microbenchmarks with caution.
Having said that, the following reasons might give Clojure an advantage in certain circumstances:
Clojure is always compiled - first down to bytecode, then down to native code by the JVM JIT compiler. This can give it a speed advantage in many cases, particularly over languages that rely on some form of interpretation. In theory at least, you should be able to match pure Java speed in any circumstance where Clojure enables you to produce the same bytecode (which is reasonably often, though not always....)
Clojure can exploit JVM primitives and static typing - despite being a dynamic language, Clojure will compile statically typed or primitive code if you give it enough hints. This can easily give a 10x boost in performance, though at the expense of making your code a bit longer/uglier.....
Clojure has heavily optimised certain data structures and operations - in particular the immutable persistent data structures and certain functional programming constructs like "reduce".
Macros enable powerful compile-time optimisations - if you use macros cleverly, you can do some quite sophisticated optimisations at compile time, effectively using code-generation to produce the code that will be most efficient at runtime. This is an advantage many Lisps share (especially Common Lisp, which was one of the big inspirations for Clojure). As nulvinge points out, homoiconicity isn't strictly necessary to achieve this (C++ also has macros!), but being a homoiconic language makes macros a lot easier.
Modern JVMs are brilliantly engineered - Clojure takes advantage of the thousands of man-years of engineering that have gone into the JVM, the Java runtime libraries, the garbage collection algorithms etc. Non-JVM languages don't get these benefits.

Why doesn't Haskell have symbols (a la ruby) / atoms (a la erlang)?

The two languages where I have used symbols are Ruby and Erlang and I've always found them to be extremely useful.
Haskell does have algebraic datatypes, but I still think symbols would be mighty convenient. An immediate use that springs to mind is that since symbols are isomorphic to integers you can use them where you would use an integral or a string "primary key".
The syntactic sugar for atoms can be minor - :something or <something> is an atom. All atoms are instances of a Type called Atom which derives Show and Eq. You can then use it for more descriptive error codes, for example
type ErrorCode = Atom
type Message = String
data Error = Error ErrorCode Message
loginError = Error :redirect "Please login first"
In this case :redirect is more efficient than using a string ("redirect") and easier to understand than an integer (404).
The benefit may seem minor, but I say it is worth adding atoms as a language feature (or at least a GHC extension).
So why have symbols not been added to the language? Or am I thinking about this the wrong way?
I agree with camccann's answer that it's probably missing mainly because it would have to be baked quite deeply into the implementation and it is of too little use for this level of complication. In Erlang (and Prolog and Lisp) symbols (or atoms) usually serve as special markers and fill mostly the same role as a constructor. In Lisp, the dynamic environment includes the compiler, so it's partly also a (useful) compiler concept leaking into the runtime.
The problem is the following: symbol interning is impure (it modifies the symbol table). It is still referentially transparent, because we never modify an existing object, but if implemented naïvely it can lead to space leaks in the runtime. In fact, as currently implemented in Erlang you can actually crash the VM by interning too many symbols/atoms (the current limit is 2^20, I think), because they can never get garbage collected. It's also difficult to implement in a concurrent setting without a huge lock around the symbol table.
Both problems can be (and have been) solved, however. For example, see Erlang EEP 20. I use this technique in the simple-atom package. It uses unsafePerformIO under the hood, but only in (hopefully) rare cases. It could still use some help from the GC to perform an optimisation similar to indirection shortening. It also uses quite a few IORefs internally which isn't too great for performance and memory usage.
In summary, it can be done but implementing it properly is non-trivial. Compiler writers always weigh the power of a feature against its implementation and maintenance efforts, and it seems like first-class symbols lose out on this one.
I think the simplest answer is that, of the things Lisp-style symbols (which is where both Ruby and Erlang got the idea, I believe) are used for, in Haskell most are either:
Already done in some other fashion--e.g. a data type with a bunch of nullary constructors, which also behave as "convenient names for integers".
Awkward to fit in--things that exist at the level of language syntax instead of being regular data usually have more type information associated with them, but symbols would have to either be distinct types from each other (nearly useless without some sort of lightweight ad-hoc sum type) or all the same type (in which case they're barely different from just using strings).
Also, keep in mind that Haskell itself is actually a very, very small language. Very little is "baked in", and of the things that are most are just syntactic sugar for other primitives. This is a bit less true if you include a bunch of GHC extensions, but GHC with -XAndTheKitchenSinkToo is not the same language as Haskell proper.
Also, Haskell is very amenable to pseudo-syntax and metaprogramming, so there's a lot you can do even without having it built in. Particularly if you get into TH and scary type metaprogramming and whatever else.
So what it mostly comes down to is that most of the practical utility of symbols is already available from other features, and the stuff that isn't available would be more difficult to add than it's worth.
Atoms aren't provided by the language, but can be implemented reasonably as a library:
http://hackage.haskell.org/package/simple-atom
There are a few other libs on hackage, but this one looks the most recent and well-maintained.
Haskell uses type constructors* instead of symbols so that the set of symbols a function can take is closed, and can be reasoned about by the type system. You could add symbols to the language, but it would put you in the same place that using strings would - you'd have to check all possible symbols against the few with known meanings at runtime, add error handling all over the place, etc. It'd be a big workaround for all the compile-time checking.
The main difference between strings and symbols is interning - symbols are atomic and can be compared in constant time. Both are types with an essentially infinite number of distinct values, though, which goes against the grain of Haskell's style of specifying arguments and results with finite types.
* I'm more familiar with OCaml than Haskell, so "type constructor" may not be the right term. Things like None or Just 3.
An immediate use that springs to mind is that since symbols are isomorphic to integers you can use them where you would use an integral or a string "primary key".
Use Enum instead.
data FileType = GZipped | BZipped | Plain
  deriving Enum

descr ft = ["compressed with gzip",
            "compressed with bzip2",
            "uncompressed"] !! fromEnum ft

What are the features of dynamic languages (like Ruby or Clojure) which you are missing in Scala?

What do you lose in practice when you choose a statically-typed language such as Scala (or F#, Haskell, C#) instead of dynamically-typed ones like Ruby, Python, Clojure, Groovy (which have macros or runtime metaprogramming capabilities)? Please consider the best statically-typed languages and the best (in your opinion) dynamically-typed languages, not the worst ones.
Answers Summary:
Key advantages of dynamic languages like Ruby over statically-typed language like Scala IMHO are:
Quick edit-run cycle (does JavaRebel reduce the gap?)
Currently the Scala/Lift community is much smaller than that of Ruby/Rails or Python/Django
Possible to modify type definitions (though the motivation or need for that is not very clear)
In principle, you give up being able to ignore what type you're using when it is not clear (in the static context) what the right thing to do is, and that's about it.
Since complex type-checking can be rather time-consuming, you also probably are forced to give up fast on-line metaprogramming.
In practice, with Scala, you give up very little else--and nothing that I particularly care about. You can't inject new methods, but you can compile and run new code. You do have to specify types in function arguments (and the return type with recursive functions), which is slightly annoying if you never make type errors yourself. Since it compiles each command, the Scala REPL isn't as snappy as e.g. the Python shell. And since it uses Java reflection mechanisms, you don't have quite the ease of online inspection that you do with e.g. Python (not without building your own inspection library, anyway).
The choice of which static or dynamic language is more significant than the static/dynamic choice itself. Some dynamic languages have good performance and good tools. Some static languages can be concise, expressive, and incremental. Some languages have few of these qualities, but do have large libraries of proven code.
Dynamic languages tend to have much more flexible type systems. For example, Python lets you inject a new method into an existing class, or even into a single object.
Many (not all) static languages lack the facility to construct complex literals. For instance, languages like C# and Java cannot easily mimic the following JavaScript: { 'request':{'type':'GET', 'path':mypath}, 'oncomplete':function(response) { alert(response.result) } } (a rough Scala approximation is sketched at the end of this answer).
Dynamic languages have very fluid semantics. Python allows import statements, function definitions and class definitions to appear inside functions and if statements.
eval is a staple of most dynamic languages and few static languages.
Higher order programming is easier (in my subjective opinion) in dynamic languages than static languages, due to the awkwardness of having to fully specify the types of function parameters.
This is particularly so with recursive HOP constructs where the type system can really get in the way.
Dynamic language users don't have to deal with covariance and contravariance.
Generic programming comes practically free in dynamic languages.
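As a rough sketch of the complex-literal point above (Request, Call and the other names here are invented, not a real API), Scala can approximate the JavaScript example with case classes and function literals, e.g. in the REPL:

case class Response(result: String)
case class Request(method: String, path: String)
case class Call(request: Request, onComplete: Response => Unit)

val mypath = "/users/42"
val call = Call(
  Request(method = "GET", path = mypath),
  onComplete = response => println(response.result)
)

It gets reasonably close, but it is still noticeably more ceremony than the literal-heavy dynamic-language version.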
I'm not sure if you lose anything but simplicity. Static type systems are an additional burden to learn.
I suppose you usually also lose eval, but I never use it, even in dynamic languages.
I find the issue is much more about everything else when it comes to choosing which language to use for a given task. Tooling, culture, libraries are all much more interesting than typing when it comes to solving a problem with a language.
Programming language research, on the other hand, is completely different. :)
Some criticism of Scala has been expressed by Steve Yegge here and here, and by Guido van Rossum, who mainly attacked Scala's type system complexity. They clearly aren't "Scala programmers" though. On the other hand, here's some praise from James Strachan.
My 2 cents...
IMO (strong) statically-typed languages might reduce the amount of necessary testing code, because some of that work will be done by the compiler. On the other hand, if the compiling step is relatively long, it makes it more difficult to do "incremental-style" programming, which in the real life might result in error-prone code that was only tested to pass the compiler.
On the other hand, dynamically-typed languages feel like there is less of a threshold to changing things, which might reduce the turnaround time for bug fixes and improvements, and as a result might provide a smoother curve during application development: handling a constant flow of small changes is easier/less risky than handling changes which come in big chunks.
For example, for a project where the design is very unclear and supposed to change often, it might be easier to use a dynamic language than a static one, if it helps reduce interdependencies between different parts. (I don't insist on that one though :) )
I think Scala sits somewhere in between (e.g. you don't have to explicitly specify types of the variables, which might ease up code maintenance in comparison with e.g. C++, but if you end up with the wrong assumption about types, the compiler will remind about it, unlike in PHP where you can write whatever and if you don't have good tests covering the functionality, you are doomed to find it out when everything is live and bleeding). Might be terribly wrong of course :)
In my opinion, the difference between static and dynamic typing comes down to the style of coding. Although there are structural types in Scala, most of the time the programmer is thinking in terms of the type of the object, including cool gadgets like traits. On the other hand, I think Python/Javascript/Ruby programmers think in terms of the prototype of the object (its list of methods and properties), which is slightly different from types.
For example, suppose there's a family of classes called Vehicle whose subclasses include Plane, Train, and Automobile; and another family of classes called Animal whose subclasses include Cat, Dog, and Horse. A Scala programmer would probably create a trait called Transportation or something which has
def ride: SomeResult
def ride(rider: Someone): SomeResult
as a member, so she can handle both Train and Horse as a means of transportation. A Python programmer would just pass the train object without additional code. At the run time the language figures out that the object supports ride.
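A sketch of the two Scala options described above (Transportation, Train, Horse and travel are invented names for illustration): the nominal trait on the one hand, and a structural type that only demands a ride method on the other, the latter being the closest Scala gets to Python/Ruby duck typing.

trait Transportation { def ride(rider: String): String }

class Train extends Transportation { def ride(rider: String) = rider + " rides the train" }
class Horse extends Transportation { def ride(rider: String) = rider + " rides the horse" }

object TravelDemo {
  // Nominal typing: the argument must extend the Transportation trait.
  def travel(t: Transportation): String = t.ride("Alice")

  // Structural typing: anything with a matching ride method is accepted,
  // checked at compile time but dispatched via reflection at runtime.
  def travelDuck(t: { def ride(rider: String): String }): String = t.ride("Bob")

  def main(args: Array[String]): Unit = {
    println(travel(new Train))      // Alice rides the train
    println(travelDuck(new Horse))  // Bob rides the horse
  }
}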
The fact that method invocations are resolved at runtime allows languages like Python and Ruby to have libraries that redefine the meaning of properties or methods. A good example of that is O/R mapping or XML data binding, in which an undefined property name is interpreted as a field name in a table/XML type. I think this is what people mean by "flexibility."
In my very limited experience of using dynamic languages, I think it's faster coding in them as long as you don't make mistakes. And probably as you or your coworkers get good at coding in a dynamic language, they will make fewer mistakes or start writing more unit tests (good luck). In my limited experience, it took me very long to find simple errors in dynamic languages that Scala can catch in a second. Also, having all types at compile time makes refactoring easier.

Questions about Scala from a Rubyist

I have recently been looking around to learn a new language during my spare time and Scala seems to be very attractive.
I have a few questions regarding it:
Will not knowing Java impose a challenge in learning it? Will it be a big disadvantage later on? (i.e. how often do people rely on Java-specific libraries?)
How big of a difference is it compared to Ruby? (Apart from being statically typed.) Does it introduce a lot of new terms, or will I be familiar with most of the language's mechanisms?
What resources would you recommend? I have my eye on the Programming Scala and Beginning Scala books.
Although subjective, is Scala fun to program in? :P
Thanks
There are many concepts that are shared between Ruby and Scala. It's been a while since I've coded Ruby, so this isn't exhaustive.
Ruby <==> Scala (Approximately!)
Mixins <==> Traits
Monkey Patching <==> Pimp My Library (Implicit Conversions to a wrapper with extra methods)
Proc/Closure <==> Function/Function Literal
Duck Typing <==> Structural Types
Last Argument as a Proc <==> Curried Parameter List (see Traversable#flatMap)
Enumerable <==> Traversable
collect <==> map
inject <==> foldLeft/foldRight
Symbol.toProc <==> Placeholder syntactic sugar: people.map(_.name)
Dynamic Typing conciseness <==> Type Inference
Nil <==> null, although Option is preferable. (Not Nil, which is an empty list!)
Everything is an expression <==> ditto
symbols/hashes as arguments <==> Named and Default Parameters
Singleton <==> object Foo {}
Everything is an object <==> Everything is a type or an object (including functions)
No Primitives <==> Unified type system, Any is supertype for primitives and objects.
Everything is a message <==> Operators are just method calls
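To make a couple of the rows above concrete, here is a small REPL-style sketch (Person, RichPerson and shout are invented names) of the "Pimp My Library" pattern and the placeholder syntax:

case class Person(name: String)

class RichPerson(p: Person) {
  def shout: String = p.name.toUpperCase + "!"   // the "monkey-patched" method
}

// The implicit conversion is what wires Person to the wrapper:
implicit def toRichPerson(p: Person): RichPerson = new RichPerson(p)

val people = List(Person("alice"), Person("bob"))
people.map(_.name)   // placeholder syntax, roughly Ruby's people.map(&:name)
people.map(_.shout)  // List("ALICE!", "BOB!"), via the implicit conversion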
Ruby's Features you might miss
method_missing
define_method etc
Scala Features you should learn
Pattern Matching
Immutable Classes, in particular Case Classes
Implicit Views and Implicit Parameters
Types, Types, and more Types: Generics, Variance, Abstract Type Members
Unification of Objects and Functions, special meaning of apply and update methods.
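As a taste of the first two items on that list (Shape, Circle and Rect are invented names), case classes and pattern matching together look like this:

sealed trait Shape
case class Circle(radius: Double) extends Shape
case class Rect(width: Double, height: Double) extends Shape

def area(s: Shape): Double = s match {
  case Circle(r)  => math.Pi * r * r
  case Rect(w, h) => w * h   // the compiler warns if a Shape case were left out
}

area(Circle(1.0))   // ~3.14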
Here is my take on it:
Never mind not knowing Java.
Scala relies a lot on Java libraries. That doesn't matter at all. You might have trouble reading some examples, sure, but not enough to be a hindrance. With little time, you won't even notice the difference between reading a Java API doc and a Scala API doc (well, except for the completely different style of the newest scaladoc).
Familiarity with the JVM environment, however, is often assumed. If I can offer one piece of advice here, it is to avoid Maven at first and use SBT as a build tool. It will be unnecessary for small programs, but it will make many of the kinks in the Java world easier to deal with. As soon as you want an external library, get to know SBT. With it, you won't have to deal with any XML: you write your build rules in Scala itself.
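For example, a minimal build.sbt can be as short as this (the library name and version numbers below are only illustrative, pick whatever your project actually needs):

name := "my-first-scala-app"

scalaVersion := "2.9.2"

// one line per external dependency; %% selects the artifact built for your Scala version
libraryDependencies += "org.scalatest" %% "scalatest" % "1.8" % "test"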
You may find it hard to get the type concepts and terms. Scala is not only statically typed, but it has one of the most powerful type systems of any non-academic language out there. I'm betting this will be the source of most difficulty for you. Other concepts have different terminology, but you'll quickly draw parallels with Ruby.
This is not that big of a hurdle, though -- you can overcome it if you want to. The major downside is that you'll probably feel any other statically typed language you learn afterwards to be clumsy and limited.
You didn't mention which Programming Scala you had your eyes on. There are two, plus one Programming in Scala. The latter was written, among others, by the language creator, and is widely considered to be an excellent book, though, perhaps, a bit slow. One of the Programming Scala books was written by a Twitter guy -- Alex Payne -- and by ex-Object Mentor's Dean Wampler. It's a very good book too. Beginning Scala was written by Lift's creator, David Pollack, and people have said good things about it too. I haven't heard anyone complain about any of the Scala books, in fact.
One of these books would certainly be helpful. Also, support on Stack Overflow for Scala questions is pretty good -- I do my best to ensure so! :-) There's the scala-users mailing list, where one can get answers too (as long as people aren't very busy), and there's the #scala IRC channel on Freenode, where you'll get good support as well. Sometimes people are just not around, but, if they are, they'll help you.
Finally, there are blogs. The best one for beginners is probably Daily Scala. You can find many, many others at Planet Scala. Among them, my own Algorithmically Challenged, which isn't getting much love of late, but I'll get back to it. :-)
Scala has restored fun in programming for me. Of course, I was doing Java, which is booooring, imho. One reason I spend so much time answering Stack Overflow questions, is that I enjoy working out solutions for the questions asked.
I'm going to introduce a note of caution about how much Java knowledge is required because I disagree that it isn't an issue at all. There are things about Java that are directly relevant to scala and which you should understand.
The Java Memory Model and what mechanisms the platform provides for concurrency. I'm talking about synchronization, threads etc
The difference between primitive types (double, float etc) and reference types (i.e. subclasses of Object). Scala provides some nice mechanisms to hide this from the developer, but it is very important, if writing code which must be performant, to know how these work (a small sketch at the end of this answer shows where the split surfaces)
This goes both ways: the Java runtime provides features that (I suspect, although I may be wrong) are not available in Ruby and will be of enormous benefit to you:
Management Extensions (MBeans)
JConsole (for runtime monitoring of memory, CPU, debugging concurrency problems)
JVisualVM (for runtime instrumentation of code to debug memory and performance problems)
These points #1 and #2 are not insurmountable obstacles and I think that the other similarities mentioned here will work strongly in your favour. Oh, and Scala is most certainly a lot of fun!
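To illustrate point #2, a small sketch of where the primitive/reference split shows through (sumBoxed and sumUnboxed are just illustrative names):

val boxed: List[Int] = List(1, 2, 3)       // the Ints are stored as java.lang.Integer objects
val unboxed: Array[Int] = Array(1, 2, 3)   // compiles down to a raw JVM int[], no boxing

def sumBoxed(xs: List[Int]): Int = xs.sum  // unboxes every element as it goes

def sumUnboxed(xs: Array[Int]): Int = {    // stays on primitive ints the whole time
  var s = 0
  var i = 0
  while (i < xs.length) { s += xs(i); i += 1 }
  s
}

In hot numeric code the second version can be dramatically faster, which is exactly why knowing what the JVM does underneath is worth the effort.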
I do not have a Ruby background but nevertheless, I might be able to help you out.
I don't think not knowing Java is a disadvantage, but it might help. In my opinion, Java libraries are used relatively often, but even a trained Java coder doesn't know them all, so no disadvantage here. You will learn some parts of the Java library by learning Scala, because even the Scala libraries use them.
--
I started out by reading Programming Scala and then turned to reading the source of the Scala library. The latter helped a lot in understanding the language. And as always: Code, Code, Code. Reading without coding won't get you anywhere, but I'm sure you know that already. :-)
Other helpful resources are blogs; see https://stackoverflow.com/questions/1445003/what-scala-blogs-do-you-regularly-follow for a compilation of good Scala blogs.
It is! As you stated, this is very subjective. But for me, coming from a Java background, It is a lot of fun.
This is very late, but I agree to some extent with what oxbow_lakes said. I have recently transitioned from Python to Scala, and knowing how Java works -- especially Java limitations regarding generic types -- has helped me understand certain aspects of Scala.
Most noticeably:
Java has a horribly broken misfeature known as "type erasure". This brokenness is unfortunately present in the JVM as well. This particularly affects programming with generic types -- an issue that simply doesn't come up at all in dynamically-typed languages like Ruby and Python but is very big in statically typed languages. Scala does about as good a job as it can working around this, but the magnitude of the breakage means that some of it inevitably bleeds through into Scala. In addition, some of the fixes in Scala for this issue (e.g. manifests) are recent and hackish, and really require an understanding of what's going on underneath. Note that this problem will probably not affect your understanding of Scala at first, but you'll run up against it when you start writing real programs that use generic types, as there are things you'll try to do that just won't work, and you won't know why unless/until you understand the limitations forced by type erasure. (A small sketch after this list shows the basic problem.)
Sooner or later you'll also run up against issues related to another Java misfeature, which is the division of types into objects (classes) vs. primitive types (ints, floats, booleans) -- and in particular, the fact that primitive types aren't part of the object system. Scala actually does an awesome job hiding this from you, but it can be helpful to know about what Java is doing in certain corner cases that otherwise may be tricky -- particularly involving generic types, largely because of the type-erasure brokenness described in #1. (Type erasure also results in a major performance hit when using arrays, hash tables, and similar generic types over primitives; this is one area where knowing Java will help a lot.)
Misfeature #3 -- arrays are also handled specially and non-orthogonally in Java. Scala's hiding of this is not quite as seamless as for primitives, but much better than for type erasure. The hiding mechanism sometimes gets exposed (e.g. the WrappedArray type), which may occasionally lead to issues -- but the biggest problem in practice, not surprisingly, is again with generic types.
Scala class parameters and the way that Scala handles class constructors. In this case, Java isn't broken. Arguably, Scala isn't either, but the way it handles class constructors is rather unusual, and in practice I've had a hard time understanding it. I've only really been able to make sense of Scala's behavior by figuring out how the relevant Scala code gets translated into Java (or more correctly, into compiled Java), and then reasoning over what Java would do. Since I assume that Ruby works much like Java in this respect, I don't think you'll run into too many problems, although you might have to do the same mental conversion.
I/O. This is actually a library issue rather than a language issue. In most cases, Scala provides its own libraries, but Scala doesn't really have an I/O library, so you pretty much have no choice but to use Java's I/O library directly. For a Python or Ruby programmer, this transition is a bit painful, since Java's I/O library is big and bulky, and not terribly easy to use for doing simple tasks, e.g. iterating over all the lines in a file.
Note that besides I/O, you also need to use Java libraries directly for other cases where you interact with the OS or related tasks, e.g. working with times and dates or getting environment variables, but usually this isn't too hard to figure out. The other main Java libraries you might need to use are
Subprocess invocation, also somewhat big and bulky
Networking -- but this is always somewhat painful
Reflection, i.e. dynamically examining the methods and/or fields on a class, or dynamically invoking a method by name when the name isn't known at compile time. This is somewhat esoteric stuff that most people don't need to deal with. Apparently Scala 2.10 will have its own reflection library, but currently you have to use the Java reflection API's, which means you need to know a fair amount about how Scala gets converted to Java. (Thankfully, there's a -print option to the Scala compiler to show exactly how this conversion happens.)
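As a concrete illustration of the erasure point (#1 above), here is a small sketch (ErasureDemo and describe are invented names):

object ErasureDemo {
  def describe(x: Any): String = x match {
    case _: List[Int] => "a list of Ints"   // the compiler warns: Int is unchecked, it is erased
    case _            => "something else"
  }

  def main(args: Array[String]): Unit = {
    // At runtime only "List" survives, so a List[String] matches the first case too:
    println(describe(List("a", "b")))   // prints "a list of Ints"
  }
}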
Re. point 1. Not being familiar with Java the language is not necessarily a problem. 3rd party libraries integrate largely seamlessly into Scala. However some awareness of the differences in collections may be good (e.g. a Scala list is not a traditional Java list, and APIs may expect the latter).
The Java-related skills that carry over are related to Java the platform. i.e. you're still working with a JVM that performs class-loading, garbage collection, JIT compilation etc. So experience in this is useful. But not at all necessary.
Note that Scala 2.8 is imminent, and there are some incompatible changes wrt. 2.7. So any book etc. you buy should be aware of such differences.
This is another late answer, having recently come to Scala myself, but I can answer 1, 3, and 4:
1) I ported a large, multifaceted F# project to Scala without any use of either Java or .NET libraries. So for many projects, one can stick totally to native Scala. Java ecosystem knowledge would be a plus, but it can be acquired gradually during and after learning Scala.
3) Programming in Scala is not only great for learning Scala, it's one of the few truly readable computer books on any language. And it's handy for later reference.
4) I've used close to a dozen different programming languages, from assembly languages to Prolog, but Scala and F# are the two most fun programming languages I've ever used -- by a wide margin. (Scala and F# are very similar, an example of "convergent evolution" in two different ecosystems -- JVM and .NET.)
-Neil

Is Ruby a functional language?

Wikipedia says Ruby is a functional language, but I'm not convinced. Why or why not?
Whether a language is or is not a functional language is unimportant. Functional Programming is a thesis, best explained by Philip Wadler (The Essence of Functional Programming) and John Hughes (Why Functional Programming Matters).
A meaningful question is, 'How amenable is Ruby to achieving the thesis of functional programming?' The answer is 'very poorly'.
I gave a talk on this just recently. Here are the slides.
Ruby does support higher-level functions (see Array#map, inject, & select), but it is still an imperative, Object-Oriented language.
One of the key characteristics of a functional language it that it avoids mutable state. Functional languages do not have the concept of a variable as you would have in Ruby, C, Java, or any other imperative language.
Another key characteristic of a functional language is that it focuses on defining a program in terms of "what", rather than "how". When programming in an OO language, we write classes & methods to hide the implementation (the "how") from the "what" (the class/method name), but in the end these methods are still written using a sequence of statements. In a functional language, you do not specify a sequence of execution, even at the lowest level.
I most definitely think you can use functional style in Ruby.
One of the most critical aspects to be able to program in a functional style is if the language supports higher order functions... which Ruby does.
That said, it's easy to program in Ruby in a non-functional style as well. Another key aspect of functional style is to not have state, and have real mathematical functions that always return the same value for a given set of inputs. This can be done in Ruby, but it is not enforced in the language like something more strictly functional like Haskell.
So, yeah, it supports functional style, but it also will let you program in a non-functional style as well.
I submit that supporting, or having the ability to program in a language in a functional style does not a functional language make.
I can even write Java code in a functional style if I want to hurt my colleagues, and myself a few months on.
Having a functional language is not only about what you can do, such as higher-order functions, first-class functions and currying. It is also about what you cannot do, like side-effects in pure functions.
This is important because it is a big part of the reason why functional programs, or functional code in general, are easier to reason about. And when code is easier to reason about, bugs become shallower and float to the conceptual surface where they can be fixed, which in turn gives less buggy code.
Ruby is object-oriented at its core, so even though it has reasonably good support for a functional style, it is not itself a functional language.
That's my non-scientific opinion anyway.
Edit:
In retrospect, and with consideration for the fine comments I have received to this answer thus far, I think the object-oriented versus functional comparison is one of apples and oranges.
The real differentiator is that of being imperative in execution, or not. Functional languages have the expression as their primary linguistic construct and the order of execution is often undefined or defined as being lazy. Strict execution is possible but only used when needed. In an imperative language, strict execution is the default and while lazy execution is possible, it is often kludgy to do and can have unpredictable results in many edge cases.
Now, that's my non-scientific opinion.
Ruby will have to meet the following requirements in order to be "TRULY" functional.
Immutable values: once a “variable” is set, it cannot be changed. In Ruby, this means you effectively have to treat variables like constants. This is not fully supported in the language; you have to freeze each variable manually.
No side-effects: when passed a given value, a function must always return the same result. This goes hand in hand with having immutable values; a function can never take a value and change it, as this would be causing a side-effect that is tangential to returning a result.
Higher-order functions: these are functions that allow functions as arguments, or use functions as the return value. This is, arguably, one of the most critical features of any functional language.
Currying: enabled by higher-order functions, currying is transforming a function that takes multiple arguments into a function that takes one argument. This goes hand in hand with partial function application, which is transforming a multi-argument function into a function that takes fewer arguments than it did originally.
Recursion: looping by calling a function from within itself. When you don’t have access to mutable data, recursion is used to build up and chain data construction. This is because looping is not a functional concept, as it requires variables to be passed around to store the state of the loop at a given time.
Lazy-evaluation, or delayed-evaluation: delaying the processing of values until the moment they are actually needed. If, as an example, you have some code that generated a list of Fibonacci numbers with lazy-evaluation enabled, this would not actually be processed and calculated until one of the values in the result was required by another function, such as puts.
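Purely to make the items above concrete, here is what most of them look like in Scala, the language the other threads on this page discuss (a REPL-style sketch, not Ruby):

val xs = List(1, 2, 3)                       // immutable value: xs cannot be reassigned

def twice(f: Int => Int): Int => Int = x => f(f(x))   // higher-order function

def add(a: Int)(b: Int): Int = a + b         // curried: add(1) is itself a function
val increment = add(1) _                     // partial application

def length[A](l: List[A]): Int = l match {   // recursion instead of a mutating loop
  case Nil    => 0
  case _ :: t => 1 + length(t)
}

lazy val fibs: Stream[Int] = 0 #:: 1 #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }
fibs.take(10).toList                         // lazy evaluation: only ten numbers are ever computed

Ruby can express versions of most of this too; the difference is how far the language and its libraries push you towards it by default.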
Proposal (Just a thought)
It would be great to have some kind of mode directive to declare files as following the functional paradigm, for example
mode 'functional'
Ruby is a multi-paradigm language that supports a functional style of programming.
Ruby is an object-oriented language, that can support other paradigms (functional, imperative, etc). However, since everything in Ruby is an object, it's primarily an OO language.
example:
"hello".reverse() = "olleh", every string is a string object instance and so on and so forth.
Read up here or here
It depends on your definition of a “functional language”. Personally, I think the term is itself quite problematic when used as an absolute. There are more aspects to being a “functional language” than mere language features, and most depend on where you're looking from. For instance, the culture surrounding the language is quite important in this regard. Does it encourage a functional style? What about the available libraries? Do they encourage you to use them in a functional way?
Most people would call Scheme a functional language, for example. But what about Common Lisp? Apart from the multiple-/single-namespace issue and guaranteed tail-call elimination (which some CL implementations support as well, depending on the compiler settings), there isn't much that makes Scheme as a language more suited to functional programming than Common Lisp, and still, most Lispers wouldn't call CL a functional language. Why? Because the culture surrounding it heavily depends on CL's imperative features (like the LOOP macro, for example, which most Schemers would probably frown upon).
On the other hand, a C programmer may well consider CL a functional language. Most code written in any Lisp dialect is certainly much more functional in style than your usual block of C code, after all. Likewise, Scheme is very much an imperative language as compared to Haskell. Therefore, I don't think there can ever be a definite yes/no answer. Whether to call a language functional or not heavily depends on your viewpoint.
Ruby isn't really much of a multi-paradigm language either, I think. Multi-paradigm tends to be used by people wanting to label their favorite language as something which is useful in many different areas.
I'd describe Ruby as an object-oriented scripting language. Yes, functions are first-class objects (sort of), but that doesn't really make it a functional language. IMO, I might add.
Recursion is common in functional programming. Almost any language supports recursion, but recursive algorithms are often inefficient if there is no tail call optimization (TCO).
Functional programming languages are capable of optimizing tail recursion and can execute such code in constant space. Some Ruby implementations do optimize tail recursion, others don't, but in general Ruby implementations are not required to do TCO. See Does Ruby perform Tail Call Optimization?
So, if you write some Ruby in a functional style and rely on the TCO of a particular implementation, your code may be very inefficient in another Ruby interpreter. I think this is why Ruby is not a functional language (neither is Python).
Strictly speaking, it doesn't make sense to describe a language as "functional"; most languages are capable of functional programming. Even C++ is.
Functional style is more or less a subset of imperative language features, supported with syntactic sugar and some compiler optimizations like immutability and tail-recursion flattening.
The latter arguably is a minor implementation-specific technicality and has nothing to do with the actual language. The x64 C# 4.0 compiler does tail-recursion optimization, whereas the x86 one doesn't for whatever stupid reason.
Syntactic sugar can usually be worked around to some extent or another, especially if the language has a programmable precompiler (i.e. C's #define).
It might be slightly more meaningful to ask, "does language __ support imperative programming?", and the answer, for instance with Lisp, is "no".
Please have a look at the beginning of the book "A-Great-Ruby-eBook". It discusses the very specific topic you are asking about. You can do different types of programming in Ruby. If you want to program functionally, you can do it. If you want to program imperatively, you can do it. How functional Ruby is in the end is a question of definition. Please see the reply by the user camflan.

Resources