Cons of first class continuations - ruby

What are some of the criticisms leveled against exposing continuations as first class objects?
I feel that it is good to have first-class continuations. They allow complete control over the execution flow of instructions. Advanced programmers can develop intuitive solutions to certain kinds of problems. For instance, continuations are used to manage state on web servers. A language implementation can provide useful abstractions on top of continuations, for example green threads.
Despite all these, are there strong arguments against first class continuations?

The reality is that many of the useful situations where you could use continuations are already covered by specialized language constructs: throw/catch, return, C#/Python yield. Thus, language implementers don't really have all that much incentive to provide them in a generalized form usable for roll-your-own solutions.
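To make that concrete, here is a minimal Ruby sketch (using Kernel#callcc from the standard continuation library) that rebuilds the kind of early-exit construct throw/catch gives you as a special case:

    require 'continuation'

    # Use a continuation as an "escape hatch": invoking it jumps straight
    # back to the callcc site, abandoning the rest of the iteration.
    def find_first(list)
      callcc do |escape|
        list.each { |x| escape.call(x) if yield(x) }
        nil
      end
    end

    puts find_first([3, 7, 12, 9]) { |n| n > 10 }   # => 12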
In some languages, generalized continuations are quite hard to implement efficiently. Stack-based languages (i.e. most languages) basically have to copy the whole stack every time you create a continuation.
Those languages can implement certain continuation-like features, the ones that don't break the basic stack-based model, a lot more efficiently than the general case, but implementing generalized continuations is quite a bit harder and usually judged not worth it.
Functional languages are more likely to implement continuations for a couple of reasons:
They are frequently implemented in continuation-passing style (CPS), which means the "call stack" is probably a linked list allocated on the heap. This makes it trivial to pass a pointer to the stack as a continuation, since you don't need to overwrite the stack context when you pop the current frame and push a new one. (I've never implemented CPS but that's my understanding of it; a sketch follows below.)
They favor immutable data bindings, which make your old continuation a lot more useful because you will not have altered the contents of variables that the stack pointed to when you created it.
For these reasons, continuations are likely to remain mostly just in the domain of functional languages.
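To illustrate the CPS point from above, here is a rough sketch in Ruby: every call receives an explicit continuation (a lambda standing for "the rest of the computation") instead of returning. In a CPS-compiled language these would all be tail calls into heap-allocated closures; Ruby is only used here to show the shape.

    # Factorial in continuation-passing style: k is the continuation.
    def fact_cps(n, k)
      if n.zero?
        k.call(1)                                 # "return" by calling k
      else
        fact_cps(n - 1, ->(r) { k.call(n * r) })  # grow a chain of closures
      end
    end

    fact_cps(5, ->(r) { puts r })   # => 120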

First up, there is more than just call/cc when it comes to continuations. I suggest starting with Marc Feeley's paper: A Better API for First-Class Continuations.
Next up I suggest reading about the control operators shift and reset, which are a different way of representing continuations.

A significant objection is implementation cost. If the runtime uses a stack, then first-class continuations require a stack copy at some point. The copy cost can be controlled (see Representing Control in the Presence of First-Class Continuations for a good strategy), but it also means that mutable variables cannot be allocated on the stack. This isn't an issue for functional or mostly-functional (e.g., Scheme) languages, but this adds significant overhead for OO languages.

Most programmers don't understand them. If you have code that uses them, it's harder to find replacement programmers who will be able to work with it.
Continuations are hard to implement on some platforms. For example, JRuby doesn't support continuations.

First-class continuations undermine the ability to reason about code, especially in languages that allow continuations to be imperatively assigned to variables, because the insides of closures can be brought alive again in hairy ways.
Cf. Kent Pitman's complaint about continuations, about the tricky way that unwind-protect interacts with call/cc
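A small Ruby example of the kind of surprise this means in practice: a continuation stored in a variable can resurrect an already "finished" stretch of code, with all of its live bindings intact.

    require 'continuation'

    count = 0
    cont = callcc { |k| k }        # capture this point in the program
    count += 1
    puts count                     # this line "runs again" on each re-entry
    cont.call(cont) if count < 3   # prints 1, 2, 3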

Call/cc is the 'goto' of advanced functional programming (a la the example here).

In Ruby 1.8 the implementation was extremely slow. It is better in 1.9, and of course most Schemes have had continuations built in and performing well from the outset.

Related

Scheme and Lisp best practices: recursion yes for Scheme, no for Lisp?

As I understand -- and I'm here to be corrected if wrong -- good Scheme practice is to do anything requiring looping or repetition with recursion, and, furthermore, overflow won't be a problem because tail recursion is built in. Lisp, however, doesn't have protection from overflow, hence all the iterative loop macros (loop, while, etc.). And so in real-world Lisp you usually don't use recursion, while Scheme wants it all but exclusively.
If my assumptions are true, is there a way to be "pure" with Lisp and not risk overflow? Or is this too much swimming against the stream to use recursion in Lisp? I recall from The Little Schemer how they give you a thorough workout with recursion. And there was an earlier edition called The Little Lisper. Did it give you the same recursion workout in Lisp? And then Land of Lisp left me confused about whether loops or recursion was "best practice."
What I'm trying to do is decide whether to use Racket inside of Emacs Org-mode or just use built-in Elisp for beginner students. I want students to stay as purely functional as possible, e.g., I don't want to explain the very new and difficult topic of recursion, then say "Oh, but we won't be using it..."
As I understand -- and I'm here to be corrected if wrong -- good Scheme practice is to do anything requiring looping or repetition with recursion, and, furthermore, overflow won't be a problem because tail recursion is built in.
This is correct as far as I know.
Lisp, however, doesn't have protection from overflow [...]
Not exactly. Most self-respecting Common Lisp implementations provide tail-call merging (with some restrictions, see https://0branch.com/notes/tco-cl.html). The difference is that there is no requirement in the language specification to have it. That grants compiler writers more freedom when implementing various Common Lisp features. Emacs Lisp does not have TCO, except in the form of libraries like recur or tco (self-recursion).
... hence all the iterative loop macros (loop, while, etc.). And so in real-world Lisp you usually don't use recursion, while Scheme wants it all but exclusively.
The difference is mostly cultural. Take a REPL (Read-Eval-Print Loop): I think it makes more sense to consider the interaction as a loop rather than as an infinitely tail-recursive function. Somehow, there seems to be some reluctance in purely functional languages to even consider loops as a primitive control structure, whereas iterative processes are considered more elegant. Why not have both?
If my assumptions are true, is there a way to be "pure" with Lisp and not risk overflow? Or is this too much swimming against the stream to use recursion in Lisp?
You can certainly use recursion in Lisp, provided you do not abuse it, which should not be the case with small programs. Consider for example that map in OCaml is not tail-recursive, but people still use it regularly. If you use SBCL, there is a section in the manual that explains how to enforce tail-call elimination.
What I'm trying to do is decide whether to use Racket inside of Emacs Org-mode or just use built-in Elisp for beginner students. I want students to stay as purely functional as possible, e.g., I don't want to explain the very new and difficult topic of recursion, then say "Oh, but we won't be using it..."
If you want to teach functional programming, use a more functional language. In other words, between Racket and Emacs Lisp, I'd say Racket is more appropriate for students. There are more materials for teaching functional programming with Racket, and there is also Typed Racket.
There are a lot of differences between the typical Lisp dialects and the various Scheme dialects:
Scheme
requires tail call optimization
favors tail recursion over imperative loop constructs
dislikes imperative control flow
provides functional abstractions
tries to map control structures to functions (see for example CALL-WITH-CURRENT-CONTINUATION)
implements some looping macros as extensions
Lisp
Parts of this applies for Lisp dialects like Emacs Lisp, Common Lisp or ISLisp.
has low-level control flow, for example GOTO-like constructs: don't use them in user-level code; sometimes they may be useful in macros.
does not standardize or even provide tail call optimization: for maximum portability don't require TCO in your code
provides simple macros: DO, DOTIMES, DOLIST: use them where directly applicable
provides complex macros: LOOP, or ITERATE as a library
provides functional abstractions like MAP, MAPCAR, REDUCE
Implementations typically have hard limits on stack sizes, which makes non-TCO recursive functions problematic. Typically one can set a large stack size upfront or extend the stack size at runtime. (A short illustration follows below.)
Also, tail call optimization has some incompatibilities with dynamic-scope constructs such as special variables.
For TCO support in Common Lisp see: Tail Call Optimisation in Common Lisp Implementations
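To make the stack-size point above concrete, the same constraint is easy to demonstrate outside Lisp; in Ruby, for instance, a deep non-tail-recursive call hits the stack limit while the equivalent loop runs in constant stack space:

    def sum_rec(n)
      n.zero? ? 0 : n + sum_rec(n - 1)   # not a tail call: the addition
    end                                  # happens after the recursive call

    begin
      sum_rec(1_000_000)
    rescue SystemStackError
      puts "stack overflow"
    end

    total = 0
    1_000_000.downto(1) { |i| total += i }   # the loop needs no extra stack
    puts total                               # => 500000500000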
Style
Some prefer recursive functions, but I would not use them as a general rule. Personally (that's my opinion) I prefer explicit loop constructs over general recursive calls. Others prefer the idea of a more mathematical approach via recursive functions.
If there is a higher-order function like MAP, use that.
If the loop code gets more complex, using a powerful LOOP statement can be useful
In Racket the looping construct of choice is for.
Racket (like Scheme) also has TCO, so in principle you can write all looping constructs using function calls - it just isn't very convenient.
There is no Tail Call Optimisation (TCO) in Emacs Lisp, so even though I personally like recursion very much, I'd generally recommend against its use in Elisp (except of course in those cases where it's the only natural choice).
Elisp as a programming environment is fairly good, but I must agree with "coredump" and would recommend Racket instead, which has great support for beginners.

Will there be functions if there were no stacks?

I know that a stack data structure is used to store, among many other things, the local variables of a function that is being run.
I also understand how a stack can be used to elegantly manage recursion.
Suppose there was a machine that did not provide a stack area in memory. I don't think there would be programming languages for that machine that support recursion. I am also wondering whether programming languages for the machine would support functions without recursion.
Please, someone shed some light on this for me.
A bit of theoretical framework is needed to understand that recursion is indeed not tied to functions at all, rather it is tied to expressiveness.
I won't dig into that, leaving Google to fill any gaps.
Yes, we can have functions without a stack.
We don't even need the call/ret machinery for functions; we can just have the compiler inline every function call.
So there is no need for a stack at all.
This considers only functions in the programming sense, not mathematical sense.
A better name would be routines.
Anyway, that is simply a proof of concept that functions, intended as reusable code, don't need a stack.
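As a tiny illustration (in Ruby, though the point is language-independent), inlining replaces the call with the body, so no frame and no return address are needed:

    def square(x)
      x * x
    end

    a = 4
    y1 = square(a + 1)        # a call: needs somewhere to return to

    # What an inlining compiler would emit instead: the body is pasted
    # into the call site, so there is no call, no return, no stack frame.
    y2 = (a + 1) * (a + 1)

    puts y1 == y2             # => true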
However, not all functions, in the mathematical sense, can be implemented this way.
This is analogous to saying: "We can have dogs on the bed, but not all dogs can be on the bed."
You are on the right track by citing recursion; however, when it comes to recursion, we need to be a lot more formal, as there are various forms of recursion.
For example, inlining every function call may make the compiler loop forever if the function being inlined is not constrained somehow.
Without digging into the theory, in order to always be sure that our compiler won't loop, we can only allow primitive (bounded) recursion.
What you probably mean by "recursion" is general recursion, which cannot be achieved by inlining. We can show that general recursion (GR) needs an unbounded amount of memory, and that is the demarcation between primitive recursion (PR) and GR, not the presence of a stack.
So we can have functions without a stack, even recursive functions (for some forms of recursion).
If your question was more practical, then just consider MIPS.
There are no stack instructions or stack pointer register in the MIPS ISA; everything related to the stack is just convention.
The compiler could use any memory area and treat it like a stack.

Writing portable scheme code. Is anything "standard" beyond R5RS itself?

I'm learning Scheme and until now have been using Guile. I'm really just learning as a way to teach myself a functional programming language, but I'd like to publish an open source project of some sort to reinforce the study— not sure what yet... I'm a web developer, so probably something webby.
It's becoming apparent that publishing Scheme code isn't very easy to do, with all these different implementations and no real standards beyond the core of the language itself (R5RS). For example, I'm almost certainly going to need to do basic IO on disk and over a TCP socket, along with string manipulation, such as scanning/regex, which seems not to be covered by R5RS, unless I'm not seeing it in the document. It seems like Scheme is more of a "concept" than a practical language... is this a fair assessment? Perhaps I should look to something like Haskell if I want to learn a functional programming language that lends itself more to use in open source projects?
In reality, how much pain do the differing Scheme implementations pose when you want to publish an open source project? I don't really fancy having to maintain 5 different functions for basic things like string manipulation under various mainstream implementations (Chicken, Guile, MIT, DrRacket). How many people actually write Scheme for cross-implementation compatibility, as opposed to being tightly coupled with the library functions that only exist in their own Scheme?
I have read http://www.ccs.neu.edu/home/dorai/scmxlate/scheme-boston/talk.html, which doesn't fill me with confidence ;)
EDIT | Let's re-define "standard" as "common".
I believe that in Scheme, portability is a fool's errand, since Scheme implementations are more different than they are similar, and there is no single implementation that other implementations try to emulate (unlike Python and Ruby, for example).
Thus, portability in Scheme is analogous to using software rendering for writing games "because it's in the common subset between OpenGL and DirectX". In other words, it's a lowest common denominator—it can be done, but you lose access to many features that the implementation offers.
For this reason, while SRFIs generally have a portable reference implementation (where practical), some of them are accompanied by notes that a quality Scheme implementation should tailor the library to use implementation-specific features in order to function optimally.
A prime example is case-lambda (SRFI 16); it can be implemented portably, and the reference implementation demonstrates it, but it's definitely less optimal compared to a built-in case-lambda, since you're having to implement function dispatch in "user" code.
Another example is stream-constant from SRFI 41. The reference implementation uses an O(n) simulation of circular lists for portability, but any decent implementation should adapt that function to use real circular lists so that it's O(1).†
The list goes on. Many useful things in Scheme are not portable—SRFIs help make more features portable, but there's no way that SRFIs can cover everything. If you want to get useful work done efficiently, chances are pretty good you will have to use non-portable features. The best you can do, I think, is to write a façade to encapsulate those features that aren't already covered by SRFIs.
† There is actually now a way to implement stream-constant in an O(1) fashion without using circular lists at all. Portable and fast for the win!
Difficult question.
Most people decide to be pragmatic. If portability between implementations is important, they write the bulk of the program in standard Scheme and isolate non-standard parts in (smallish) libraries. There have been various approaches of how exactly to do this. One recent effort is SnowFort.
http://snow.iro.umontreal.ca/
An older effort is SLIB.
http://people.csail.mit.edu/jaffer/SLIB
If you look - or ask for - libraries for regular expressions and lexer/parsers you'll quickly find some.
Since the philosophy of R5RS is to include only those language features that all implementors agree on, the standard is small - but also very stable.
However for "real world" programming R5RS might not be the best fit.
Therefore R6RS (and R7RS?) include more "real world" libraries.
That said if you only need portability because it seems to be the Right Thing, then reconsider carefully if you really want to put the effort in.
I would simply write my program on the implementation I know the best. Then if necessary port it afterwards. This often turns out to be easier than expected.
I write a blog that uses Scheme as its implementation language. Because I don't want to alienate users of any particular implementation of Scheme, I write in a restricted dialect of Scheme that is based on R5RS plus syntax-case macros plus my Standard Prelude. I don't find that overly restrictive for the kind of algorithmic programs that I write, but your needs may be different. If you look at the various exercises on the blog, you will see that I wrote my own regular-expression matcher, that I've done a fair amount of string manipulation, and that I've snatched files from the internet by shelling out to wget (I use Chez Scheme -- users have to provide their own non-portable shell mechanism if they use anything else); I've even done some limited graphics work by writing ANSI terminal sequences.
I'll disagree just a little bit with Jens. Instead of porting afterwards, I find it easier to build in portability from the beginning. I didn't use to think that way, but my experience over the last three years shows that it works.
It's worth pointing out that modern Scheme implementations are themselves fairly portable; you can often port whole programs to new environments simply by bringing the appropriate Scheme along. That doesn't help library programmers much, though, and that's where R7RS-small, the latest Scheme definition, comes in. It's not widely implemented yet, but it provides a larger common core than R5RS.

Is there a functional algorithm which is faster than an imperative one?

I'm searching for an algorithm (or an argument for such an algorithm) in functional style which is faster than an imperative one.
I like functional code because it's expressive and mostly easier to read than its imperative counterparts. But I also know that this expressiveness can cost runtime overhead. Thanks to techniques like tail recursion it isn't always slower - but it often is.
While programming I don't think about runtime costs of functional code because nowadays PCs are very fast and development time is more expensive than runtime. Furthermore for me readability is more important than performance. Nevertheless my programs are fast enough so I rarely need to solve a problem in an imperative way.
There are some algorithms which in practice should be implemented in an imperative style (like sorting algorithms), since otherwise in most cases they are too slow or require lots of memory.
In contrast, thanks to techniques like pattern matching, a whole program such as a parser written in a functional language may be much faster than one written in an imperative language, because of the compiler's ability to optimize the code.
But are there any algorithms which are faster in a functional style, or is there a way to set up an argument for such an algorithm?
A simple reasoning. I don't vouch for terminology, but it seems to make sense.
A functional program, to be executed, will need to be transformed into some set of machine instructions.
All machines (I've heard of) are imperative.
Thus, for every functional program, there's an imperative program (roughly speaking, in assembler language), equivalent to it.
So, you'll probably have to be satisfied with 'expressiveness', until we get 'functional computers'.
The short answer:
Anything that can be easily made parallel because it's free of side-effects will be quicker on a multi-core processor.
QuickSort, for example, scales up quite nicely when used with immutable collections: http://en.wikipedia.org/wiki/Quicksort#Parallelization
All else being equal, if you have two algorithms that can reasonably be described as equivalent, except that one uses pure functions on immutable data, while the second relies on in-place mutations, then the first algorithm will scale up to multiple cores with ease.
It may even be the case that your programming language can perform this optimization for you, as with the scalaCL plugin that will compile code to run on your GPU. (I'm wondering now if SIMD instructions make this a "functional" processor)
So given parallel hardware, the first algorithm will perform better, and the more cores you have, the bigger the difference will be.
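A small Ruby sketch of why purity makes the decomposition safe (safe in the correctness sense; on MRI the global VM lock means real speedups need processes or Ractors, but the decomposition itself needs no coordination):

    data = (1..100).to_a

    # The block is pure and `data` is never mutated, so the work can be
    # split across workers with no locking whatsoever.
    threads = data.each_slice(25).map do |chunk|
      Thread.new { chunk.map { |n| n * n } }
    end
    squares = threads.flat_map(&:value)   # join and collect the results

    puts squares.last   # => 10000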
FWIW there are Purely functional data structures, which benefit from functional programming.
There's also a nice book on Purely Functional Data Structures by Chris Okasaki, which presents data structures from the point of view of functional languages.
Another interesting article is Announcing Intel Concurrent Collections for Haskell 0.1, about parallel programming, where they note:
Well, it happens that the CnC notion of a step is a pure function. A step does nothing but read its inputs and produce tags and items as output. This design was chosen to bring CnC to that elusive but wonderful place called deterministic parallelism. The decision had nothing to do with language preferences. (And indeed, the primary CnC implementations are for C++ and Java.)
Yet what a great match Haskell and CnC would make! Haskell is the only major language where we can (1) enforce that steps be pure, and (2) directly recognize (and leverage!) the fact that both steps and graph executions are pure.
Add to that the fact that Haskell is wonderfully extensible and thus the CnC "library" can feel almost like a domain-specific language.
It doesn't say anything about performance - they promise to discuss some of the implementation details and performance in future posts - but Haskell with its "pureness" fits nicely into parallel programming.
One could argue that all programs boil down to machine code.
So, if I disassemble the machine code (of an imperative program) and tweak the assembler, I could perhaps end up with a faster program. Or I could come up with an "assembler algorithm" that exploits some specific CPU feature, and is therefore really faster than the imperative-language version.
Does this situation lead to the conclusion that we should use assembler everywhere? No, we decided to use imperative languages because they are less cumbersome. We write pieces in assembler because we really need to.
Ideally we should also use FP algorithms because they are less cumbersome to code, and use imperative code when we really need to.
Well, I guess you meant to ask if there is an implementation of an algorithm in a functional programming language that is faster than another implementation of the same algorithm, but in an imperative language. By "faster" I mean that it performs better in terms of execution time or memory footprint on some inputs, according to some measurement that we deem trustworthy.
I do not exclude this possibility. :)
To elaborate on Yasir Arsanukaev's answer, purely functional data structures can be faster than mutable data structures in some situations because they share pieces of their structure. Thus in places where you might have to copy a whole array or list in an imperative language, you can get away with a fraction of the copying, because you can change (and copy) only a small part of the data structure. Lists in functional languages are like this -- multiple lists can share the same tail, since nothing can be modified. (This can be done in imperative languages, but usually isn't, because within the imperative paradigm people aren't usually used to talking about immutable data.)
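Here is a sketch of that sharing in Ruby, with a hypothetical Cons cell (not a standard library class) standing in for a functional language's immutable list type:

    Cons = Struct.new(:head, :tail)   # stand-in for an immutable pair

    tail = Cons.new(2, Cons.new(3, nil))
    a = Cons.new(1, tail)   # the list (1 2 3)
    b = Cons.new(0, tail)   # the list (0 2 3)

    # Both lists share the identical tail; nothing was copied.
    puts a.tail.equal?(b.tail)   # => true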
Also, lazy evaluation in functional languages (particularly Haskell, which is lazy by default) can be very advantageous because it eliminates code execution when the code's results won't actually be used. (One can be very careful not to run this code in the first place in imperative languages, however.)

Is Ruby a functional language?

Wikipedia says Ruby is a functional language, but I'm not convinced. Why or why not?
Whether a language is or is not a functional language is unimportant. Functional Programming is a thesis, best explained by Philip Wadler (The Essence of Functional Programming) and John Hughes (Why Functional Programming Matters).
A meaningful question is, 'How amenable is Ruby to achieving the thesis of functional programming?' The answer is 'very poorly'.
I gave a talk on this just recently. Here are the slides.
Ruby does support higher-order functions (see Array#map, inject, & select), but it is still an imperative, object-oriented language.
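For example, each of these takes a block (a function) as an argument:

    nums = [1, 2, 3, 4, 5]
    nums.map { |n| n * n }        # => [1, 4, 9, 16, 25]
    nums.select { |n| n.odd? }    # => [1, 3, 5]
    nums.inject { |a, b| a + b }  # => 15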
One of the key characteristics of a functional language is that it avoids mutable state. Functional languages do not have the concept of a variable as you would have in Ruby, C, Java, or any other imperative language.
Another key characteristic of a functional language is that it focuses on defining a program in terms of "what", rather than "how". When programming in an OO language, we write classes & methods to hide the implementation (the "how") from the "what" (the class/method name), but in the end these methods are still written using a sequence of statements. In a functional language, you do not specify a sequence of execution, even at the lowest level.
I most definitely think you can use functional style in Ruby.
One of the most critical aspects of being able to program in a functional style is whether the language supports higher-order functions... which Ruby does.
That said, it's easy to program in Ruby in a non-functional style as well. Another key aspect of functional style is to not have state, and to have real mathematical functions that always return the same value for a given set of inputs. This can be done in Ruby, but it is not enforced in the language, as it is in something more strictly functional like Haskell.
So, yeah, it supports functional style, but it also will let you program in a non-functional style as well.
I submit that supporting, or having the ability to program in a language in a functional style does not a functional language make.
I can even write Java code in a functional style if I want to hurt my colleagues, and myself a few weeks or months on.
Having a functional language is not only about what you can do, such as higher-order functions, first-class functions and currying. It is also about what you cannot do, like side-effects in pure functions.
This is important because it is a big part of the reason why functional programs, or functional code in general, are easier to reason about. And when code is easier to reason about, bugs become shallower and float to the conceptual surface, where they can be fixed, which in turn gives less buggy code.
Ruby is object-oriented at its core, so even though it has reasonably good support for a functional style, it is not itself a functional language.
That's my non-scientific opinion anyway.
Edit:
In retrospect and with consideration for the fine comments I have received on this answer thus far, I think the object-oriented versus functional comparison is one of apples and oranges.
The real differentiator is whether a language is imperative in execution or not. Functional languages have the expression as their primary linguistic construct, and the order of execution is often undefined or defined as lazy. Strict execution is possible but only used when needed. In an imperative language, strict execution is the default; lazy execution is possible, but it is often kludgy to achieve and can have unpredictable results in many edge cases.
Now, that's my non-scientific opinion.
Ruby would have to meet the following requirements in order to be "TRULY" functional.
Immutable values: once a "variable" is set, it cannot be changed. In Ruby, this means you effectively have to treat variables like constants. This is not fully supported in the language; you would have to freeze each variable manually.
No side-effects: when passed a given value, a function must always return the same result. This goes hand in hand with having immutable values; a function can never take a value and change it, as this would be causing a side-effect that is tangential to returning a result.
Higher-order functions: these are functions that allow functions as arguments, or use functions as the return value. This is, arguably, one of the most critical features of any functional language.
Currying: enabled by higher-order functions, currying is transforming a function that takes multiple arguments into a function that takes one argument. This goes hand in hand with partial function application, which is transforming a multi-argument function into a function that takes fewer arguments than it did originally. (Ruby's Proc#curry is sketched after this list.)
Recursion: looping by calling a function from within itself. When you don’t have access to mutable data, recursion is used to build up and chain data construction. This is because looping is not a functional concept, as it requires variables to be passed around to store the state of the loop at a given time.
Lazy evaluation, or delayed evaluation: delaying the processing of values until the moment they are actually needed. If, as an example, you have some code that generates a list of Fibonacci numbers with lazy evaluation enabled, the values would not actually be processed and calculated until one of them was required by another function, such as puts. (This is also sketched below.)
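Two of these points can be sketched in today's Ruby, using Proc#curry and Enumerator's lazy facilities:

    # Currying: a three-argument lambda becomes a chain of one-argument calls.
    add = ->(a, b, c) { a + b + c }
    add.curry[1][2][3]           # => 6
    add_ten = add.curry[4][6]    # partial application
    add_ten[5]                   # => 15

    # Lazy evaluation: an "infinite" Fibonacci stream, computed on demand.
    fibs = Enumerator.new do |y|
      a, b = 0, 1
      loop { y << a; a, b = b, a + b }
    end
    puts fibs.lazy.select(&:even?).first(4).inspect   # => [0, 2, 8, 34]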
Proposal (Just a thought)
It would be great to have some kind of mode directive to declare a file as following the functional paradigm, for example:
mode 'functional'
Ruby is a multi-paradigm language that supports a functional style of programming.
Ruby is an object-oriented language, that can support other paradigms (functional, imperative, etc). However, since everything in Ruby is an object, it's primarily an OO language.
example:
"hello".reverse() = "olleh", every string is a string object instance and so on and so forth.
Read up here or here
It depends on your definition of a "functional language". Personally, I think the term is itself quite problematic when used as an absolute. There are more aspects to being a "functional language" than mere language features, and most depend on where you're looking from. For instance, the culture surrounding the language is quite important in this regard. Does it encourage a functional style? What about the available libraries? Do they encourage you to use them in a functional way?
Most people would call Scheme a functional language, for example. But what about Common Lisp? Apart from the multiple-/single-namespace issue and guaranteed tail-call elimination (which some CL implementations support as well, depending on the compiler settings), there isn't much that makes Scheme as a language more suited to functional programming than Common Lisp, and still, most Lispers wouldn't call CL a functional language. Why? Because the culture surrounding it heavily depends on CL's imperative features (like the LOOP macro, for example, which most Schemers would probably frown upon).
On the other hand, a C programmer may well consider CL a functional language. Most code written in any Lisp dialect is certainly much more functional in style than your usual block of C code, after all. Likewise, Scheme is very much an imperative language as compared to Haskell. Therefore, I don't think there can ever be a definite yes/no answer. Whether to call a language functional or not heavily depends on your viewpoint.
Ruby isn't really much of a multi-paradigm language either, I think. Multi-paradigm tends to be used by people wanting to label their favorite language as something which is useful in many different areas.
I'd describe Ruby as an object-oriented scripting language. Yes, functions are first-class objects (sort of), but that doesn't really make it a functional language. IMO, I might add.
Recursion is common in functional programming. Almost every language supports recursion, but recursive algorithms are often inefficient if there is no tail call optimization (TCO).
Functional programming languages are capable of optimizing tail recursion and can execute such code in constant space. Some Ruby implementations do optimize tail recursion, others don't, and in general Ruby implementations are not required to do TCO. See Does Ruby perform Tail Call Optimization?
So, if you write some Ruby in a functional style and rely on the TCO of some particular implementation, your code may be very inefficient in another Ruby interpreter. I think this is why Ruby is not a functional language (neither is Python).
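As an aside, MRI can be asked to compile a snippet with tail-call optimization turned on (a sketch; this is per-compilation, off by default, and MRI-specific):

    src = <<~RUBY
      def countdown(n)
        n.zero? ? :done : countdown(n - 1)   # the call is in tail position
      end
    RUBY

    iseq = RubyVM::InstructionSequence.compile(
      src, nil, nil, 1,
      { tailcall_optimization: true, trace_instruction: false }
    )
    iseq.eval

    puts countdown(1_000_000)   # => done (no SystemStackError)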
Strictly speaking, it doesn't make sense to describe a language as "functional"; most languages are capable of functional programming. Even C++ is.
Functional style is more or less a subset of imperative language features, supported with syntactic sugar and some compiler optimizations like immutability and tail-recursion flattening.
The latter arguably is a minor implementation-specific technicality and has nothing to do with the actual language. The x64 C# 4.0 compiler does tail-recursion optimization, whereas the x86 one doesn't for whatever stupid reason.
Syntactic sugar can usually be worked around to some extent or another, especially if the language has a programmable precompiler (i.e. C's #define).
It might be slightly more meaningful to ask, "does language __ support imperative programming?", and the answer, for instance with Lisp, is "no".
Please have a look at the beginning of the book "A-Great-Ruby-eBook". It discusses the very specific topic you are asking about. You can do different types of programming in Ruby. If you want to program functionally, you can do it. If you want to program imperatively, you can do that too. How functional Ruby is in the end is a question of definition. Please see the reply by the user camflan.
