I have noticed that I almost exclusively use lazy val assignments, as they often avoid unnecessary computations, and I can't see many situations where one would not want to do so (dependency on mutable variables being a notable exception, of course).
It would seem to me that this is one of the great advantages of functional programming, and that one should encourage its use whenever possible; if I understood correctly, Haskell does exactly this by default.
So why are Scala values not lazy by default? Is it solely to avoid issues relating to mutable variables?
The big difference here between Scala and Haskell is that Scala allows side-effects whereas Haskell does not.
Laziness results in all sorts of problems in a language which allows side-effects at arbitrary points in the program, as the order in which the side-effects take place becomes non-deterministic.
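A minimal Scala sketch of why the ordering becomes surprising once side effects meet laziness (println standing in for an arbitrary side effect; the values are illustrative):
// With strict vals the messages would print top to bottom; with lazy vals they
// print in whatever order the values happen to be forced.
lazy val a = { println("computing a"); 1 }
lazy val b = { println("computing b"); 2 }
println(b + a)   // prints "computing b", then "computing a", then 3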
Nearly transparent Java interoperability plays a large role in the design of Scala, and Java libraries are typically full of side-effects.
Scala is a strict language. Laziness is not only about vals; it's about an evaluation strategy. Should arguments to a function be evaluated before the function is called, even if they end up not being used? In Scala (as in most other languages) they are. This strategy carries over to other contexts, including vals and vars.
It would be awkward to break this rule for vals, but laziness can be useful and is provided as an opt-in.
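A small sketch of what that opt-in looks like in Scala (illustrative names only):
// Strict by default: the argument is evaluated before the call, even if unused.
def strict(x: Int): Int = 42
strict({ println("evaluated anyway"); 1 })   // prints, then returns 42
// Opt-in laziness: a by-name parameter is evaluated only if (and where) it is used.
def byName(x: => Int): Int = 42
byName({ println("never evaluated"); 1 })    // prints nothing
// A lazy val is evaluated at most once, on first access.
lazy val answer = { println("computing answer"); 42 }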
As you noted, dependency on mutable variables is incompatible with lazy evaluation.
Note that Scala is a JVM language and Scala programs often use Java libraries, which are not functional at all. Laziness by default would cause a lot of problems with Java libraries.
First I am a Haskell newbie.
I've read this:
Immutable functional objects in highly mutable domain
And my question is nearly the same: how does one efficiently write algorithms where the state is supposed to change? Let's take, for example, Dijkstra's algorithm: new paths are found and distances have to be updated. In traditional languages this is simple, while in Haskell, for example, I can only think of creating entirely new distance maps, which would be too slow and memory-consuming. Is there something like design patterns for cases where one has to implement an algorithm around a mutable data structure, and speed and memory usage are the main concerns?
There are of course many ways functional languages address this issue.
Different data structures - many data structures can be implemented in a purely functional manner, with the same algorithmic complexity as imperative versions. Probably the most well-known work in this area is Chris Okasaki's Purely Functional Data Structures, but there are many other resources as well. For Dijkstra's algorithm, Martin Erwig's work on functional graphs is appropriate. See this question as well.
Different algorithms - some algorithms have assumptions of mutability built in; Quicksort is an example of this. In that case an alternative algorithm can be used that's more amenable to immutability.
Mutable state - every functional language can model mutable state with a State monad. Most provide other forms of mutability as well, such as Haskell's ST monad and IORefs.
The ST monad lets you use mutable state internally while presenting a pure external interface.
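Scala has no ST monad, but the same idea of hiding mutation behind a pure interface is common there too; a minimal sketch:
// Pure from the caller's point of view: the same input always yields the same
// output, and the local mutable array never escapes the function.
def sumOfSquares(xs: List[Int]): Int = {
  val buf = new Array[Int](xs.length)
  var i = 0
  for (x <- xs) { buf(i) = x * x; i += 1 }
  buf.sum
}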
Creating new immutable objects isn't nearly as expensive as you might think, since a large amount of structural sharing can occur: because the compiler knows the objects can't change, they can be safely shared. That said, using highly imperative algorithms with lots of mutable state in Haskell is a bit of a code smell.
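A small Scala illustration of that sharing (the values are purely illustrative):
// Prepending to an immutable list does not copy it: `longer` reuses all of
// `base` as its tail, so the "new" list costs one extra cell, not a full copy.
val base   = List(2, 3, 4)
val longer = 1 :: base
// base is still List(2, 3, 4); longer is List(1, 2, 3, 4) and shares base's cells.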
In ML derivatives (such as OCaml, SML, F#), there are "references", which can be used as mutable variables.
In Haskell, this isn't cleanly handled. State is simply not covered by the usual "purely functional" style. Pure FP languages deal with "eternal truths", and are thus not very suitable for working with "ephemeral truths" (although it can be done, definitely).
However, yes, sometimes we need mutable state. A language such as ATS incorporates linear types for handling destructive updates and safe resource manipulation.
I'm a C# developer and I don't have much knowledge of functional languages.
My question is: is there any algorithm that needs a functional language exclusively in order to be implemented?
Regards.
As long as a language is Turing complete, any algorithm can be implemented in it (by definition of "algorithm"). But as others have said, functional languages can do certain things more elegantly. (Just take a look at Haskell. What a lovely language.) I'd also argue that there is a class of problems that OOP languages do better. (In my opinion, GUIs, although some may disagree.)
No. However, a functional language may lead to a more elegant implementation of an algorithm that can exploit the features of such a language, for example one that requires a large recursion depth.
As I understand it, any such algorithm would have to be translated into a set of machine instructions executed on some microprocessor (whether you use a compiled or an interpreted language). And none of the current processors are 'functional'.
In fact, this leads to even broader assertion: any 'functional algorithm' can be implemented in C or assembler :)
What do you lose in practice when you choose a statically-typed language such as Scala (or F#, Haskell, C#) instead of dynamically-typed ones like Ruby, Python, Clojure, Groovy (which have macros or runtime metaprogramming capabilities)? Please consider best statically-typed languages and best (in your opinion) dynamically-typed languages, not the worst ones.
Answers Summary:
Key advantages of dynamic languages like Ruby over statically-typed languages like Scala are, IMHO:
Quick edit-run cycle (does JavaRebel reduce the gap?)
Currently the Scala/Lift community is much smaller than that of Ruby/Rails or Python/Django
It is possible to modify type definitions (though the motivation or need for that is not very clear)
In principle, you give up being able to ignore what type you're using when it is not clear (in the static context) what the right thing to do is, and that's about it.
Since complex type-checking can be rather time-consuming, you also probably are forced to give up fast on-line metaprogramming.
In practice, with Scala, you give up very little else--and nothing that I particularly care about. You can't inject new methods, but you can compile and run new code. You do have to specify types in function arguments (and the return type with recursive functions), which is slightly annoying if you never make type errors yourself. Since it compiles each command, the Scala REPL isn't as snappy as e.g. the Python shell. And since it uses Java reflection mechanisms, you don't have quite the ease of online inspection that you do with e.g. Python (not without building your own inspection library, anyway).
The choice of which static or dynamic language is more significant than the static/dynamic choice itself. Some dynamic languages have good performance and good tools. Some static languages can be concise, expressive, and incremental. Some languages have few of these qualities, but do have large libraries of proven code.
Dynamic languages tend to have much more flexible type systems. For example, Python lets you inject a new method into an existing class, or even into a single object.
Many (not all) static languages lack the facility to construct complex literals. For instance, languages like C# and Java cannot easily mimic the following JavaScript: { 'request':{'type':'GET', 'path':mypath}, 'oncomplete':function(response) { alert(response.result) } }.
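(For what it's worth, Scala narrows this particular gap with maps and function literals, though it is still noisier than the JavaScript original. A rough sketch, where mypath and the Response type are invented purely for illustration:)
// A rough Scala approximation of the JavaScript object literal above.
case class Response(result: String)
val mypath = "/users"                 // hypothetical value, just for the sketch
val request = Map(
  "request"    -> Map("type" -> "GET", "path" -> mypath),
  "oncomplete" -> ((response: Response) => println(response.result))
)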
Dynamic languages have very fluid semantics. Python allows import statements, function definitions and class definitions to appear inside functions and if statements.
eval is a staple of most dynamic languages and few static languages.
Higher-order programming is easier (in my subjective opinion) in dynamic languages than in static languages, because of the awkwardness of having to fully specify the types of function parameters.
This is particularly so with recursive HOP constructs, where the type system can really get in the way.
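For example, even a tiny higher-order function in Scala needs the function parameter's type spelled out, where a dynamic language would accept any callable; a minimal sketch with invented names:
// The parameter f must be given a full function type, Int => Int here;
// the Python equivalent, def twice(f, x), carries no annotations at all.
def twice(f: Int => Int, x: Int): Int = f(f(x))
twice(_ + 3, 10)   // 16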
Dynamic language users don't have to deal with covariance and contravariance.
Generic programming comes practically free in dynamic languages.
I'm not sure if you lose anything but simplicity. Static type systems are an additional burden to learn.
I suppose you usually also lose eval, but I never use it, even in dynamic languages.
I find the issue is much more about everything else when it comes to choosing which language to use for a given task. Tooling, culture, libraries are all much more interesting than typing when it comes to solving a problem with a language.
Programming language research, on the other hand, is completely different. :)
Some criticism of Scala has been expressed by Steve Yegge here and here, and by Guido van Rossum, who mainly attacked Scala's type system complexity. They clearly aren't "Scala programmers" though. On the other hand, here's some praise from James Strachan.
My 2 cents...
IMO (strong) statically-typed languages might reduce the amount of necessary testing code, because some of that work will be done by the compiler. On the other hand, if the compilation step is relatively long, it makes it more difficult to do "incremental-style" programming, which in real life might result in error-prone code that was only tested to pass the compiler.
On the other hand, dynamically-typed languages feel like there is a lower threshold for changing things, which might reduce the response time for bug fixes and improvements and, as a result, provide a smoother curve during application development: handling a constant flow of small changes is easier and less risky than handling changes that come in big chunks.
For example, for a project where the design is very unclear and is expected to change often, it might be easier to use a dynamic language than a static one, if it helps reduce interdependencies between different parts. (I don't insist on that one, though. :))
I think Scala sits somewhere in between (e.g. you don't have to explicitly specify the types of variables, which might ease code maintenance in comparison with, say, C++; but if you end up with a wrong assumption about types, the compiler will remind you about it, unlike in PHP, where you can write whatever you want and, if you don't have good tests covering the functionality, you are doomed to find out when everything is live and bleeding). Might be terribly wrong of course :)
In my opinion, the difference between static and dynamic typing comes down to the style of coding. Although there are structural types in Scala, most of the time the programmer is thinking in terms of the type of the object, including cool gadgets like traits. On the other hand, I think Python/JavaScript/Ruby programmers think in terms of the prototype of the object (its list of methods and properties), which is slightly different from types.
For example, suppose there's a family of classes called Vehicle whose subclasses include Plane, Train, and Automobile; and another family of classes called Animal whose subclasses include Cat, Dog, and Horse. A Scala programmer would probably create a trait called Transportation or something which has
def ride: SomeResult
def ride(rider: Someone): SomeResult
as members, so she can handle both a Train and a Horse as a means of transportation. A Python programmer would just pass the train object without any additional code; at run time the language figures out that the object supports ride.
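A minimal sketch of that Scala side, reusing the names from the example above and simplifying SomeResult and Someone to String:
// The trait names the capability; anything that is a means of transportation mixes it in.
trait Transportation {
  def ride(rider: String): String
}
class Vehicle
class Animal
class Train extends Vehicle with Transportation {
  def ride(rider: String) = s"$rider boards the train"
}
class Horse extends Animal with Transportation {
  def ride(rider: String) = s"$rider saddles up the horse"
}
// Code that needs "something rideable" asks for the trait, not a concrete class.
def commute(t: Transportation): String = t.ride("Alice")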
The fact that method invocations are resolved at runtime allows languages like Python and Ruby to have libraries that redefine the meaning of properties or methods. A good example of that is O/R mapping or XML data binding, in which an undefined property name is interpreted as a field name in a table or XML type. I think this is what people mean by "flexibility."
In my very limited experience of using dynamic languages, I think coding in them is faster as long as you don't make mistakes. And probably, as you or your coworkers get good at coding in a dynamic language, you make fewer mistakes or start writing more unit tests (good luck). In my limited experience, it took me a very long time to find simple errors in dynamic languages that Scala can catch in a second. Also, having all types at compile time makes refactoring easier.
Wikipedia says Ruby is a functional language, but I'm not convinced. Why or why not?
Whether a language is or is not a functional language is unimportant. Functional Programming is a thesis, best explained by Philip Wadler (The Essence of Functional Programming) and John Hughes (Why Functional Programming Matters).
A meaningful question is, 'How amenable is Ruby to achieving the thesis of functional programming?' The answer is 'very poorly'.
I gave a talk on this just recently. Here are the slides.
Ruby does support higher-order functions (see Array#map, inject, and select), but it is still an imperative, object-oriented language.
One of the key characteristics of a functional language is that it avoids mutable state. Functional languages do not have the concept of a variable as you would have in Ruby, C, Java, or any other imperative language.
Another key characteristic of a functional language is that it focuses on defining a program in terms of "what", rather than "how". When programming in an OO language, we write classes & methods to hide the implementation (the "how") from the "what" (the class/method name), but in the end these methods are still written using a sequence of statements. In a functional language, you do not specify a sequence of execution, even at the lowest level.
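A tiny illustration of that "what" versus "how" contrast, sketched in Scala rather than Ruby purely because it keeps the two versions short:
// "How": spell out the sequence of steps and the mutable loop state.
var total = 0
for (x <- List(1, 2, 3, 4)) total += x * x
// "What": state that the result is the sum of the squares.
val total2 = List(1, 2, 3, 4).map(x => x * x).sum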
I most definitely think you can use functional style in Ruby.
One of the most critical aspects of being able to program in a functional style is whether the language supports higher-order functions... which Ruby does.
That said, it's easy to program in Ruby in a non-functional style as well. Another key aspect of functional style is avoiding state and having real mathematical functions that always return the same value for a given set of inputs. This can be done in Ruby, but it is not enforced by the language, as it is in something more strictly functional like Haskell.
So, yeah, it supports functional style, but it also will let you program in a non-functional style as well.
I submit that supporting, or having the ability to program in a language in a functional style does not a functional language make.
I can even write Java code in a functional style if I want to hurt my colleagues, and myself a few weeks or months on.
Having a functional language is not only about what you can do, such as higher-order functions, first-class functions and currying. It is also about what you cannot do, like side-effects in pure functions.
This is important because it is a big part of the reason why functional programs, or functional code in general, are easier to reason about. And when code is easier to reason about, bugs become shallower and float up to the conceptual surface where they can be fixed, which in turn gives you less buggy code.
Ruby is object-oriented at its core, so even though it has reasonably good support for a functional style, it is not itself a functional language.
That's my non-scientific opinion anyway.
Edit:
In retrospect, and with consideration for the fine comments I have received on this answer thus far, I think the object-oriented versus functional comparison is one of apples and oranges.
The real differentiator is whether execution is imperative or not. Functional languages have the expression as their primary linguistic construct, and the order of execution is often undefined or defined as being lazy. Strict execution is possible but only used when needed. In an imperative language, strict execution is the default; while lazy execution is possible, it is often kludgy to do and can have unpredictable results in many edge cases.
Now, that's my non-scientific opinion.
Ruby would have to meet the following requirements in order to be "truly" functional.
Immutable values: once a “variable” is set, it cannot be changed. In Ruby, this means you effectively have to treat variables like constants. This is not fully supported in the language; you have to freeze each variable manually.
No side-effects: when passed a given value, a function must always return the same result. This goes hand in hand with having immutable values; a function can never take a value and change it, as this would be causing a side-effect that is tangential to returning a result.
Higher-order functions: these are functions that allow functions as arguments, or use functions as the return value. This is, arguably, one of the most critical features of any functional language.
Currying: enabled by higher-order functions, currying is transforming a function that takes multiple arguments into a function that takes one argument. This goes hand in hand with partial function application, which is transforming a multi-argument function into a function that takes fewer arguments than it did originally.
Recursion: looping by calling a function from within itself. When you don’t have access to mutable data, recursion is used to build up and chain data construction. This is because looping is not a functional concept, as it requires variables to be passed around to store the state of the loop at a given time.
Lazy evaluation, or delayed evaluation: delaying the processing of values until the moment they are actually needed. If, as an example, you have some code that generates a list of Fibonacci numbers with lazy evaluation enabled, it would not actually be processed and calculated until one of the values in the result was required by another function, such as puts.
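For illustration, here is roughly what that lazy Fibonacci example looks like in a language with lazy sequences built in; a minimal Scala sketch (using LazyList), not Ruby:
// Nothing is computed when fibs is defined; each number is calculated only
// when something (here, the println) actually demands it.
lazy val fibs: LazyList[BigInt] =
  BigInt(0) #:: BigInt(1) #:: fibs.zip(fibs.tail).map { case (a, b) => a + b }
println(fibs.take(10).toList)   // forces the first ten values: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34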
Proposal (Just a thought)
It would be great to have some kind of mode directive to declare that a file follows the functional paradigm, for example:
mode 'functional'
Ruby is a multi-paradigm language that supports a functional style of programming.
Ruby is an object-oriented language that can support other paradigms (functional, imperative, etc.). However, since everything in Ruby is an object, it's primarily an OO language.
example:
"hello".reverse() = "olleh", every string is a string object instance and so on and so forth.
Read up here or here
It depends on your definition of a “functional language”. Personally, I think the term is itself quite problematic when used as an absolute. There are more aspects to being a “functional language” than mere language features, and most depend on where you're looking from. For instance, the culture surrounding the language is quite important in this regard. Does it encourage a functional style? What about the available libraries? Do they encourage you to use them in a functional way?
Most people would call Scheme a functional language, for example. But what about Common Lisp? Apart from the multiple-/single-namespace issue and guaranteed tail-call elimination (which some CL implementations support as well, depending on the compiler settings), there isn't much that makes Scheme as a language more suited to functional programming than Common Lisp, and still, most Lispers wouldn't call CL a functional language. Why? Because the culture surrounding it heavily depends on CL's imperative features (like the LOOP macro, for example, which most Schemers would probably frown upon).
On the other hand, a C programmer may well consider CL a functional language. Most code written in any Lisp dialect is certainly much more functional in style than your usual block of C code, after all. Likewise, Scheme is very much an imperative language as compared to Haskell. Therefore, I don't think there can ever be a definite yes/no answer. Whether to call a language functional or not heavily depends on your viewpoint.
Ruby isn't really much of a multi-paradigm language either, I think. Multi-paradigm tends to be used by people wanting to label their favorite language as something which is useful in many different areas.
I'd describe Ruby as an object-oriented scripting language. Yes, functions are first-class objects (sort of), but that doesn't really make it a functional language. IMO, I might add.
Recursion is common in functional programming. Almost any language supports recursion, but recursive algorithms are often inefficient if there is no tail call optimization (TCO).
Functional programming languages are capable of optimizing tail recursion and can execute such code in constant space. Some Ruby implementations optimize tail recursion, others don't, and in general Ruby implementations are not required to do TCO. See Does Ruby perform Tail Call Optimization?
So, if you write some Ruby in a functional style and rely on the TCO of some particular implementation, your code may be very inefficient in another Ruby interpreter. I think this is why Ruby is not a functional language (neither is Python).
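For contrast, here is a minimal sketch of how a statically compiled language can guarantee this: Scala rewrites self tail calls into a loop, and the @tailrec annotation turns a missed optimization into a compile error (factorial is just an illustrative example):
import scala.annotation.tailrec

// Runs in constant stack space: the recursive call is in tail position, so the
// compiler turns it into a loop. If the call were not in tail position
// (e.g. n * factorial(n - 1)), the annotation would make compilation fail.
@tailrec
def factorial(n: BigInt, acc: BigInt = 1): BigInt =
  if (n <= 1) acc else factorial(n - 1, acc * n)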
Strictly speaking, it doesn't make sense to describe a language as "functional"; most languages are capable of functional programming. Even C++ is.
Functional style is more or less a subset of imperative language features, supported with syntactic sugar and some compiler optimizations like immutability and tail-recursion flattening.
The latter arguably is a minor implementation-specific technicality and has nothing to do with the actual language. The x64 C# 4.0 compiler does tail-recursion optimization, whereas the x86 one doesn't for whatever stupid reason.
Syntactic sugar can usually be worked around to some extent or another, especially if the language has a programmable preprocessor (e.g. C's #define).
It might be slightly more meaningful to ask, "does language __ support imperative programming?", and the answer, for instance with Lisp, is "no".
Please have a look at the beginning of the book "A-Great-Ruby-eBook"; it discusses the very topic you are asking about. You can do different types of programming in Ruby. If you want to program functionally, you can. If you want to program imperatively, you can. How functional Ruby is in the end is a question of definition. Please see the reply by the user camflan.