The following two Haskell programs for computing the n'th term of the Fibonacci sequence have greatly different performance characteristics:
fib1 n =
  case n of
    0 -> 1
    1 -> 1
    x -> fib1 (x - 1) + fib1 (x - 2)
fib2 n = fibArr !! n
  where fibArr = 1 : 1 : [a + b | (a, b) <- zip fibArr (tail fibArr)]
They are very close to mathematically identical, but fib2 uses the list notation to memoize its intermediate results, while fib1 has explicit recursion. Despite the potential for the intermediate results to be cached in fib1, the execution time gets to be a problem even for fib1 25, suggesting that the recursive steps are always evaluated. Does referential transparency contribute anything to Haskell's performance? How can I know ahead of time if it will or won't?
This is just an example of the sort of thing I'm worried about. I'd like to hear any thoughts about overcoming the difficulty inherent in reasoning about the performance of a lazily-executed, functional programming language.
Summary: I'm accepting 3lectrologos's answer, because the point that you don't reason so much about the language's performance as about your compiler's optimizations seems to be extremely important in Haskell - more so than in any other language I'm familiar with. I'm inclined to say that the importance of the compiler is the factor that differentiates reasoning about performance in lazy, functional languages from reasoning about the performance of any other type.
Addendum: Anyone happening on this question may want to look at the slides from Johan Tibell's talk about high performance Haskell.
In your particular Fibonacci example, it's not very hard to see why the second one should run faster.
It's mainly an algorithmic issue:
fib1 implements the purely recursive algorithm and (as far as I know) Haskell has no mechanism for "implicit memoization".
fib2 uses explicit memoization (using the fibArr list to store previously computed values).
In general, it's much harder to make performance assumptions for a lazy language like Haskell, than for an eager one. Nevertheless, if you understand the underlying mechanisms (especially for laziness) and gather some experience, you will be able to make some "predictions" about performance.
Referential transparency increases (potentially) performance in (at least) two ways:
First, you (as a programmer) can be sure that two calls to the same function with the same arguments will always return the same result, so you can exploit this in various cases to benefit performance.
Second (and more importantly), the Haskell compiler can rely on the same fact, and this enables many optimizations that aren't possible in impure languages (if you've ever written a compiler or have any experience with compiler optimizations, you're probably aware of how important this is).
If you want to read more about the reasoning behind the design choices (laziness, pureness) of Haskell, I'd suggest reading this.
Reasoning about performance is generally hard in Haskell and lazy languages in general, although not impossible. Some techniques are covered in Chris Okasaki's Purely Functional Data Structures (also available online in a previous version).
Another way to ensure performance is to fix the evaluation order, either using annotations or continuation passing style. That way you get to control when things are evaluated.
In your example you might calculate the numbers "bottom up" and pass the previous two numbers along to each iteration:
fib n = fib_iter (1, 1, n)
  where
    fib_iter (a, b, 0) = a
    fib_iter (a, b, 1) = a
    fib_iter (a, b, n) = fib_iter (a + b, a, n - 1)
This results in a linear time algorithm.
Whenever you have a dynamic programming algorithm where each result relies on the N previous results, you can use this technique. Otherwise you might have to use an array or something completely different.
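The "array" route can be sketched in Haskell with a lazy immutable array (a sketch under my own naming, not from the original answer; memoFibA is an invented name). Each element is a thunk that may refer back into the array itself, so arbitrary earlier results are memoized:

```haskell
import Data.Array (listArray, (!))

-- A lazy array memoizes results that may depend on any earlier entry.
-- Elements are thunks that look other entries up in the same array.
memoFibA :: Int -> Integer
memoFibA n = table ! n
  where
    table = listArray (0, n) (map go [0 .. n])
    go 0 = 1
    go 1 = 1
    go k = table ! (k - 1) + table ! (k - 2)
```

Each entry is computed at most once per call to memoFibA, so the whole thing runs in linear time.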
Your implementation of fib2 uses memoization, but each time you call fib2 it rebuilds the "whole" result. Turn on ghci time and size profiling:
Prelude> :set +s
If it were doing memoisation "between" calls, the subsequent calls would be faster and use no memory. Call fib2 20000 twice and see for yourself.
By comparison a more idiomatic version where you define the exact mathematical identity:
-- the infinite list of all Fibonacci numbers
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
memoFib n = fibs !! n
actually does use memoisation, explicitly as you can see. If you run memoFib 20000 twice, you'll see the time and space taken the first time; the second call is instantaneous and takes no memory. No magic and no implicit memoization, as a comment might have hinted at.
Now about your original question: optimizing and reasoning about performance in Haskell...
I wouldn't call myself an expert in Haskell; I have only been using it for 3 years, 2 of them at my workplace, but I did have to optimize it and learn to reason somewhat about its performance.
As mentioned in other posts, laziness is your friend and can help you gain performance; however, YOU have to be in control of what is lazily evaluated and what is strictly evaluated.
Check this comparison of foldl vs foldr
foldl actually stores "how" to compute the value, i.e. it is lazy. In some cases you save time and space by being lazy, as with the "infinite" fibs: the infinite fibs doesn't generate all of them but knows how. When you know you will need a value, you might as well just get it "strictly" speaking... That's where strictness annotations are useful, to give you back control.
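A small sketch of that difference (the names lazySum and strictSum are mine; foldl' is the strict variant exported by Data.List):

```haskell
import Data.List (foldl')

-- foldl delays the additions: the accumulator becomes the thunk
-- (((0 + 1) + 2) + 3) + ..., only collapsed when the result is demanded.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at every step, so it stays a plain Int.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0
```

Both return the same value; on a very large input, lazySum's thunk chain can blow the stack, while strictSum runs in constant space.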
I recall reading many times that in lisp you have to "minimize" consing.
Understanding what is strictly evaluated and how to force it is important, but so is understanding how much "trashing" of memory you do. Remember that Haskell is immutable, which means that updating a "variable" actually creates a copy with the modification. Prepending with (:) is vastly more efficient than appending with (++), because (:) does not copy memory, unlike (++). Whenever a big atomic block is updated (even for a single char), the whole block needs to be copied to represent the "updated" version. The way you structure data and update it can have a big impact on performance. The ghc profiler is your friend and will help you spot these. Sure, the garbage collector is fast, but not having it do anything is faster!
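To make the (:) versus (++) point concrete, here is a hedged sketch (slowBuild and fastBuild are invented names) that builds the same list both ways:

```haskell
-- Appending with (++) copies the accumulated list on every step,
-- so building front-to-back this way is O(n^2):
slowBuild :: Int -> [Int]
slowBuild n = go [] [1 .. n]
  where
    go acc (x:xs) = go (acc ++ [x]) xs  -- copies acc each time
    go acc []     = acc

-- Prepending with (:) shares the existing list; one final reverse
-- keeps the whole build O(n):
fastBuild :: Int -> [Int]
fastBuild n = reverse (go [] [1 .. n])
  where
    go acc (x:xs) = go (x : acc) xs     -- no copying
    go acc []     = acc
```

Both produce [1..n]; only the amount of copying (and therefore garbage) differs.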
Cheers
Aside from the memoization issue, fib1 also uses non-tailcall recursion. Tailcall recursion can be re-factored automatically into a simple goto and perform very well, but the recursion in fib1 cannot be optimized in this way, because you need the stack frame from each instance of fib1 in order to calculate the result. If you rewrote fib1 to pass a running total as an argument, thus allowing a tail call instead of needing to keep the stack frame for the final addition, the performance would improve immensely. But not as much as the memoized example, of course :)
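Such a rewrite might look like the following sketch (fibTail is my own name, not from the original post): the two running totals are passed as arguments, so every recursive call is a tail call.

```haskell
-- Tail-recursive Fibonacci: carry the last two values along,
-- so no stack frame has to wait for a final addition.
fibTail :: Int -> Integer
fibTail n = go n 1 1
  where
    go 0 a _ = a
    go k a b = go (k - 1) b (a + b)  -- tail call: constant stack space
```

This matches fib1's indexing (fibTail 0 == 1, fibTail 1 == 1, ...) but runs in linear time.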
Since allocation is a major cost in any functional language, an important part of understanding performance is to understand when objects are allocated, how long they live, when they die, and when they are reclaimed. To get this information you need a heap profiler. It's an essential tool, and luckily GHC ships with a good one.
For more information, read Colin Runciman's papers.
Related
A reddit thread brought up an apparently interesting question:
Tail recursive functions can trivially be converted into iterative functions. Others can be transformed by using an explicit stack. Can every recursion be transformed into iteration?
The (counter?)example in the post is the pair:
(define (num-ways x y)
  (cond ((= x 0) 1)
        ((= y 0) 1)
        (else (num-ways2 x y))))

(define (num-ways2 x y)
  (+ (num-ways (- x 1) y)
     (num-ways x (- y 1))))
Can you always turn a recursive function into an iterative one? Yes, absolutely, and the Church-Turing thesis is the standard argument for it, if memory serves. In lay terms, it states that what is computable by recursive functions is computable by an iterative model (such as the Turing machine) and vice versa. The thesis does not tell you precisely how to do the conversion, but it does say that it's definitely possible.
In many cases, converting a recursive function is easy. Knuth offers several techniques in "The Art of Computer Programming". And often, a thing computed recursively can be computed by a completely different approach in less time and space. The classic example of this is Fibonacci numbers or sequences thereof. You've surely met this problem in your degree plan.
On the flip side of this coin, we can certainly imagine a programming system so advanced as to treat a recursive definition of a formula as an invitation to memoize prior results, thus offering the speed benefit without the hassle of telling the computer exactly which steps to follow in the computation of a formula with a recursive definition. Dijkstra almost certainly did imagine such a system. He spent a long time trying to separate the implementation from the semantics of a programming language. Then again, his non-deterministic and multiprocessing programming languages are in a league above the practicing professional programmer.
In the final analysis, many functions are just plain easier to understand, read, and write in recursive form. Unless there's a compelling reason, you probably shouldn't (manually) convert these functions to an explicitly iterative algorithm. Your computer will handle that job correctly.
I can see one compelling reason. Suppose you've a prototype system in a super-high level language like [donning asbestos underwear] Scheme, Lisp, Haskell, OCaml, Perl, or Pascal. Suppose conditions are such that you need an implementation in C or Java. (Perhaps it's politics.) Then you could certainly have some functions written recursively but which, translated literally, would explode your runtime system. For example, infinite tail recursion is possible in Scheme, but the same idiom causes a problem for existing C environments. Another example is the use of lexically nested functions and static scope, which Pascal supports but C doesn't.
In these circumstances, you might try to overcome political resistance to the original language. You might find yourself reimplementing Lisp badly, as in Greenspun's (tongue-in-cheek) tenth law. Or you might just find a completely different approach to solution. But in any event, there is surely a way.
Is it always possible to write a non-recursive form for every recursive function?
Yes. A simple formal proof is to show that both µ recursion and a non-recursive calculus such as GOTO are both Turing complete. Since all Turing complete calculi are strictly equivalent in their expressive power, all recursive functions can be implemented by the non-recursive Turing-complete calculus.
Unfortunately, I’m unable to find a good, formal definition of GOTO online so here’s one:
A GOTO program is a sequence of commands P executed on a register machine such that P is one of the following:
HALT, which halts execution
r = r + 1 where r is any register
r = r - 1 where r is any register
GOTO x where x is a label
IF r ≠ 0 GOTO x where r is any register and x is a label
A label, followed by any of the above commands.
However, the conversions between recursive and non-recursive functions isn’t always trivial (except by mindless manual re-implementation of the call stack).
For further information see this answer.
Recursion is implemented as stacks or similar constructs in the actual interpreters or compilers. So you certainly can convert a recursive function to an iterative counterpart because that's how it's always done (if automatically). You'll just be duplicating the compiler's work in an ad-hoc and probably in a very ugly and inefficient manner.
Basically yes; in essence what you end up having to do is replace method calls (which implicitly push state onto the stack) with explicit stack pushes to remember where the 'previous call' had gotten up to, and then execute the 'called method' instead.
I'd imagine that the combination of a loop, a stack and a state-machine could be used for all scenarios by basically simulating the method calls. Whether or not this is going to be 'better' (either faster, or more efficient in some sense) is not really possible to say in general.
Recursive function execution flow can be represented as a tree.
The same logic can be done by a loop, which uses a data-structure to traverse that tree.
Depth-first traversal can be done using a stack, breadth-first traversal can be done using a queue.
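A minimal sketch of that idea (invented names; a plain list stands in for the stack and the queue):

```haskell
-- A rose tree: a label and a list of children.
data T = T Int [T]

-- Depth-first: children go to the FRONT of the pending list (a stack).
dfs :: T -> [Int]
dfs t = go [t]
  where
    go []                = []
    go (T x cs : rest)   = x : go (cs ++ rest)

-- Breadth-first: children go to the BACK of the pending list (a queue).
bfs :: T -> [Int]
bfs t = go [t]
  where
    go []                = []
    go (T x cs : rest)   = x : go (rest ++ cs)
```

The only difference between the two traversals is which end of the pending list receives the children.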
So, the answer is: yes. Why: https://stackoverflow.com/a/531721/2128327.
Can any recursion be done in a single loop? Yes, because
a Turing machine does everything it does by executing a single loop:
fetch an instruction,
evaluate it,
goto 1.
Yes, using explicitly a stack (but recursion is far more pleasant to read, IMHO).
Yes, it's always possible to write a non-recursive version. The trivial solution is to use a stack data structure and simulate the recursive execution.
In principle it is always possible to remove recursion and replace it with iteration in a language that has infinite state both for data structures and for the call stack. This is a basic consequence of the Church-Turing thesis.
Given an actual programming language, the answer is not as obvious. The problem is that it is quite possible to have a language where the amount of memory that can be allocated in the program is limited but where the amount of call stack that can be used is unbounded (32-bit C where the address of stack variables is not accessible). In this case, recursion is more powerful simply because it has more memory it can use; there is not enough explicitly allocatable memory to emulate the call stack. For a detailed discussion on this, see this discussion.
All computable functions can be computed by Turing Machines and hence the recursive systems and Turing machines (iterative systems) are equivalent.
Sometimes replacing recursion is much easier than that. Recursion used to be the fashionable thing taught in CS in the 1990's, and so a lot of average developers from that time figured if you solved something with recursion, it was a better solution. So they would use recursion instead of looping backwards to reverse order, or silly things like that. So sometimes removing recursion is a simple "duh, that was obvious" type of exercise.
This is less of a problem now, as the fashion has shifted towards other technologies.
Recursion is nothing but calling the same function on the stack; once a call finishes, it is popped from the stack. So one can always use an explicit stack to manage this calling of the same operation using iteration.
So, yes, all recursive code can be converted to iteration.
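For example, here is a recursive tree sum next to an explicit-stack counterpart (a sketch with made-up names): the stack of pending subtrees replaces the call stack.

```haskell
data Tree = Leaf Int | Node Tree Tree

-- Direct recursion: the call stack remembers the pending right subtrees.
sumTree :: Tree -> Int
sumTree (Leaf n)   = n
sumTree (Node l r) = sumTree l + sumTree r

-- Same computation as a single tail-recursive loop over an explicit
-- stack of subtrees still to be visited.
sumTreeIter :: Tree -> Int
sumTreeIter t = go [t] 0
  where
    go []                acc = acc
    go (Leaf n   : st)   acc = go st (acc + n)
    go (Node l r : st)   acc = go (l : r : st) acc
```

Both walk the same nodes; the second merely makes the bookkeeping that the runtime would do for you explicit.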
Removing recursion is a complex problem and is feasible under well defined circumstances.
The below cases are among the easy:
tail recursion
direct linear recursion
Apart from the explicit stack, another pattern for converting recursion into iteration is the use of a trampoline.
Here, the functions either return the final result or a closure of the function call that they would otherwise have performed. Then, the initiating (trampolining) function keeps invoking the closures returned until the final result is reached.
This approach works for mutually recursive functions, but I'm afraid it only works for tail-calls.
http://en.wikipedia.org/wiki/Trampoline_(computers)
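In Haskell the pattern might be sketched like this (hypothetical names; the suspended call is represented as an explicit closure purely for illustration):

```haskell
-- A step either finishes with a value or hands back the next call
-- as a closure for the trampoline to invoke.
data Trampoline a = Done a | More (() -> Trampoline a)

-- The trampoline: a plain loop that keeps bouncing until Done.
runTrampoline :: Trampoline a -> a
runTrampoline (Done a) = a
runTrampoline (More k) = runTrampoline (k ())

-- Mutually recursive even/odd written in trampolined style:
isEven, isOdd :: Int -> Trampoline Bool
isEven 0 = Done True
isEven n = More (\_ -> isOdd (n - 1))
isOdd 0 = Done False
isOdd n = More (\_ -> isEven (n - 1))
```

Note that each function returns immediately; the mutual "calls" all happen one at a time inside runTrampoline's loop, which is why the pattern only covers tail calls.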
I'd say yes - a function call is nothing but a goto and a stack operation (roughly speaking). All you need to do is imitate the stack that's built while invoking functions and do something similar as a goto (you may imitate gotos with languages that don't explicitly have this keyword too).
Have a look at the following entries on wikipedia, you can use them as a starting point to find a complete answer to your question.
Recursion in computer science
Recurrence relation
Follows a paragraph that may give you some hint on where to start:
Solving a recurrence relation means obtaining a closed-form solution: a non-recursive function of n.
Also have a look at the last paragraph of this entry.
It is possible to convert any recursive algorithm to a non-recursive one, but often the logic is much more complex and doing so requires the use of a stack. In fact, recursion itself uses a stack: the function stack.
More Details: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Functions
tazzego, recursion means that a function will call itself whether you like it or not. When people are talking about whether or not things can be done without recursion, they mean this, and you cannot say "no, that is not true, because I do not agree with the definition of recursion" as a valid statement.
With that in mind, just about everything else you say is nonsense. The only other thing that you say that is not nonsense is the idea that you cannot imagine programming without a call stack. That is something that had been done for decades until using a call stack became popular. Old versions of FORTRAN lacked a call stack and they worked just fine.
By the way, there exist Turing-complete languages that only implement recursion (e.g. SML) as a means of looping. There also exist Turing-complete languages that only implement iteration as a means of looping (e.g. FORTRAN IV). The Church-Turing thesis shows that anything possible in a recursion-only language can be done in a non-recursive language and vice versa, by the fact that both have the property of Turing-completeness.
Here is an iterative algorithm:
def howmany(x, y)
  a = {}
  for n in (0..x+y)
    for m in (0..n)
      a[[m, n-m]] = if m == 0 or n-m == 0 then 1 else a[[m-1, n-m]] + a[[m, n-m-1]] end
    end
  end
  return a[[x, y]]
end
Is it meaningless to be asked to document "algorithms" of your software (say, in a design specification) if it is implemented in a functional paradigm? Whenever I think of algorithms in technical documents I imagine a while loop with a bunch of sequential steps.
Looking at the informal dictionary meaning of an algorithm:
In mathematics and computer science, an algorithm is a step-by-step procedure for calculations.
The phrase "step-by-step" appears to be against the paradigm of functional programming (as I understand it), because functional programs, in contrast to imperative programs, have no awareness of time in their hypothetical machine. Is this argument correct? Or does lazy evaluation enforce an implicit time component that makes it "step by step"?
EDIT - so many good answers, it's unfair for me to choose a best response :( Thanks for all the perspectives, they all make great observations.
Yes, algorithms still exist in functional languages, although they don't always look the same as imperative ones.
Instead of using an implicit notion of "time" based on state to model steps, functional languages do it with composed data transformations. As a really nice example, you could think of heap sort in two parts: a transformation from a list into a heap and then from a heap into a list.
You can model step-by-step logic quite naturally with recursion, or, better yet, using existing higher-order functions that capture the various computations you can do. Composing these existing pieces is probably what I'd really call the "functional style": you might express your algorithm as an unfold followed by a map followed by a fold.
Laziness makes this even more interesting by blurring the lines between "data structure" and "algorithm". A lazy data structure, like a list, never has to completely exist in memory. This means that you can compose functions that build up large intermediate data structures without actually needing to use all that space or sacrificing asymptotic performance. As a trivial example, consider this definition of factorial (yes, it's a cliche, but I can't come up with anything better :/):
factorial n = product [1..n]
This has two composed parts: first, we generate a list from 1 to n and then we fold it by multiplying (product). But, thanks to laziness, the list never has to exist in memory completely! We evaluate as much of the generating function as we need at each step of product, and the garbage collector reclaims old cells as we're done with them. So even though this looks like it'll need O(n) memory, it actually gets away with O(1). (Well, assuming numbers all take O(1) memory.)
In this case, the "structure" of the algorithm, the sequence of steps, is provided by the list structure. The list here is closer to a for-loop than an actual list!
So in functional programming, we can create an algorithm as a sequence of steps in a few different ways: by direct recursion, by composing transformations (maybe based on common higher-order functions) or by creating and consuming intermediate data structures lazily.
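The "unfold followed by a map followed by a fold" shape mentioned above might look like this sketch (sumSquares is an invented example; unfoldr comes from Data.List):

```haskell
import Data.List (unfoldr)

-- Sum of squares of 1..n, composed as unfold -> map -> fold.
sumSquares :: Int -> Int
sumSquares n =
    foldr (+) 0          -- fold: combine the results
  . map (^ 2)            -- map: transform each element
  $ unfoldr step 1       -- unfold: generate the sequence 1..n
  where
    step i = if i > n then Nothing else Just (i, i + 1)
```

Thanks to laziness, the generated list and the mapped list are consumed as they are produced, so no full intermediate list ever needs to exist in memory.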
I think you might be misunderstanding the functional programming paradigm.
Whether you use a functional language (Lisp, ML, Haskell) or an imperative/procedural one (C/Java/Python), you are specifying the operations and their order (sometimes the order might not be specified, but this is a side issue).
The functional paradigm sets certain limits on what you can do (e.g., no side effects), which makes it easier to reason about the code (and, incidentally, easier to write a "Sufficiently Smart Compiler").
Consider, e.g., a functional implementation of factorial:
(defun ! (n)
(if (zerop n)
1
(* n (! (1- n)))))
One can easily see the order of execution: 1 * 2 * 3 * ... * n, and the fact that there are n-1 multiplications and subtractions for argument n.
The most important part of the Computer Science is to remember that the language is just the means of talking to computers. CS is about computers no more than Astronomy is about telescopes, and algorithms are to be executed on an abstract (Turing) machine, emulated by the actual box in front of us.
No, I think if you solved a problem functionally and you solved it imperatively, what you have come up with are two separate algorithms. Each is an algorithm. One is a functional algorithm and one is an imperative algorithm. There's many books about algorithms in functional programming languages.
It seems like you are getting caught up in technicalities/semantics here. If you are asked to document an algorithm to solve a problem, whoever asked wants to know how you are solving the problem. Even if it's functional, there will be a series of steps to reach the solution (even with all of the lazy evaluation). If you can write the code to reach the solution, then you can write it in pseudocode, which means you can write it in terms of an algorithm as far as I'm concerned.
And, since it seems like you are getting very hung up on definitions here, I'll posit a question your way that proves my point. Programming languages, whether functional or imperative, ultimately run on a machine. Right? Your computer has to be given a step-by-step procedure of low-level instructions to run. If this holds true, then every high-level computer program can be described in terms of its low-level instructions. Therefore, every program, whether functional or imperative, can be described by an algorithm. And if you can't seem to find a way to describe the high-level algorithm, then output the bytecode/assembly and explain your algorithm in terms of those instructions.
Consider this functional Scheme example:
(define (make-list num)
(let loop ((x num) (acc '()))
(if (zero? x)
acc
(loop (- x 1) (cons x acc)))))
(make-list 5) ; dumb compilers might do this
(display (make-list 10)) ; force making a list (because we display it)
With your logic make-list wouldn't be considered an algorithm since it doesn't do its calculation in steps, but is that really true?
Scheme is eager and performs its computation in order. Even in lazy languages everything becomes calculation in stages until you have a value. The difference is that lazy languages do calculations in the order of dependencies and not in the order of your instructions.
The underlying machine of a functional language is a register machine, so it's hard to avoid your beautiful functional program actually becoming assembly instructions that mutate registers and memory. Thus a functional language (or a lazy language) is just an abstraction to ease writing code with fewer bugs.
Here's a claim made in one of the answers at: https://softwareengineering.stackexchange.com/a/136146/18748
Try your hand at a Functional language or two. Try implementing
factorial in Erlang using recursion and watch your jaw hit the floor
when 20000! returns in 5 seconds (no stack overflow in sight)
Why would it be faster/more efficient than using recursion/iteration in Java/C/C++/Python (any)? What is the underlying mathematical/theoretical concept that makes this happen? Unfortunately, I wasn't ever exposed to functional programming in my undergrad (started with 'C') so I may just lack knowing the 'why'.
A recursive solution would probably be slower and clunkier in Java or C++, but we don't do things that way in Java and C++, so. :)
As for why, functional languages invariably make very heavy usage of recursion (as by default they either have to jump through hoops, or simply aren't allowed, to modify variables, which on its own makes most forms of iteration ineffective or outright impossible). So they effectively have to optimize the hell out of it in order to be competitive.
Almost all of them implement an optimization called "tail call elimination", which uses the current call's stack space for the next call and turns a "call" into a "jump". That little change basically turns recursion into iteration, but don't remind the FPers of that. When it comes to iteration in a procedural language and recursion in a functional one, the two would prove about equal performancewise. (If either one's still faster, though, it'd be iteration.)
The rest is libraries and number types and such. Decent math code ==> decent math performance. There's nothing keeping procedural or OO languages from optimizing similarly, other than that most people don't care that much. Functional programmers, on the other hand, love to navel gaze about how easily they can calculate Fibonacci sequences and 20000! and other numbers that most of us will never use anyway.
Essentially, functional languages make recursion cheap.
Particularly via the tail call optimization.
Tail-call optimization
Lazy evaluation: "When using delayed evaluation, an expression is not evaluated as soon as it gets bound to a variable, but when the evaluator is forced to produce the expression's value."
Both concepts are illustrated in this question about various ways of implementing factorial in Clojure. You can also find a detailed discussion of lazy sequences and TCO in Stuart Halloway and Aaron Bedra's book, Programming Clojure.
The following function, adapted from Programming Clojure, creates a lazy sequence with the first 1M members of the Fibonacci sequence and realizes the one-hundred-thousandth member:
user=> (time (nth (take 1000000 (map first (iterate (fn [[a b]] [b (+ a b)]) [0N 1N]))) 100000))
"Elapsed time: 252.045 msecs"
25974069347221724166155034....
(20900 total digits)
(512MB heap, Intel Core i7, 2.5GHz)
It's not faster (than what?), and it has nothing to do with tail call optimization. (It's not enough to throw a buzzword in here; one should also explain why tail call optimization should be faster than looping. It simply isn't!)
Mind you that I am not a functional programming hater, to the contrary! But spreading myths does not serve the case for functional programming.
BTW, has anyone here actually measured how long it takes to compute (and print, which should consume at least 50% of the CPU cycles needed) 20000!?
I did:
main _ = println (product [2n..20000n])
This is a JVM language compiled to Java, and it uses Java big integers (known to be slow). It also suffers of JVM startup costs. And it is not the fastest way to do it (explicit recursion could save list construction, for example).
The result is:
181920632023034513482764175686645876607160990147875264
...
... many many lines with digits
...
000000000000000000000000000000000000000000000000000000000000000
real 0m3.330s
user 0m4.472s
sys 0m0.212s
(On an Intel® Core™ i3 CPU M 350 @ 2.27GHz × 4)
We can safely assume that the same in C with GMP wouldn't even use 50% of that time.
Hence, functional is faster is a myth, as well as functional is slower. It's not even a myth, it's simply nonsense as long as one does not say compared to what it is faster/slower.
I assume it doesn't.
My reasoning is that since Haskell is purely functional (setting the I/O monad aside), the implementation could make every "call by name" with the same name reuse the same evaluated value.
I don't know anything about the implementation details but I'm really interested.
Detailed explanations will be much appreciated :)
BTW, I tried google, it was quite hard to get anything useful.
First of all, Haskell is a specification, not an implementation; the report does not actually require use of call-by-name evaluation, or lazy evaluation for that matter. Haskell implementations are only required to be non-strict, which does rule out call-by-value and similar strategies.
So, strictly (ha, ha) speaking, evaluation strategies can't slow down Haskell. I'm not sure what can slow down Haskell, though clearly something has or else it wouldn't have taken 12 years to get the next version of the Report out after Haskell 98. My guess is that it involves committees somehow.
Anyway, "lazy evaluation" refers to a "call by need" strategy, which is the most common implementation choice for Haskell. This differs from call-by-name in that if a subexpression is used in multiple places, it will be evaluated at most once.
The details of what qualifies as a subexpression that will be shared are a bit subtle and probably somewhat implementation-dependent, but to use an example from GHC Haskell: consider the function cycle, which repeats an input list infinitely. A naive implementation might be:
cycle xs = xs ++ cycle xs
This ends up being inefficient because there is no single cycle xs expression that can be shared, so the resulting list has to be constructed continually as it's traversed, allocating more memory and doing more computation each time.
In contrast, the actual implementation looks like this:
cycle xs = xs' where xs' = xs ++ xs'
Here the name xs' is defined recursively as itself appended to the end of the input list. This time xs' is shared, and evaluated only once; the resulting infinite list is actually a finite, circular linked list in memory, and once the entire loop has been evaluated no further work is needed.
In general, GHC will not memoize functions for you: given f and x, each use of f x will be re-evaluated unless you give the result a name and use that. The resulting value will be the same in either case, but the performance can differ significantly. This is mostly a matter of avoiding pessimizations--it would be easy for GHC to memoize things for you, but in many cases this would cost large amounts of memory to gain tiny or nonexistent amounts of speed.
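A tiny sketch of that point (the names are invented, and f is just a stand-in for an expensive function):

```haskell
-- Stand-in for an expensive computation.
f :: Int -> Int
f x = sum [1 .. x]

-- Without a name, each occurrence of (f x) is evaluated separately.
unshared :: Int -> Int
unshared x = f x + f x

-- Naming the result shares it: f x is evaluated at most once.
shared :: Int -> Int
shared x = let y = f x in y + y
```

Both return the same value, as referential transparency guarantees; only the amount of work performed differs.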
The flip side is that shared values are retained; if you have a data structure that's very expensive to compute, naming the result of constructing it and passing that to functions using it ensures that no work is duplicated--even if it's used simultaneously by different threads.
You can also pessimize things yourself this way--if a data structure is cheap to compute and uses lots of memory, you should probably avoid sharing references to the full structure, as that will keep the whole thing alive in memory as long as anything could possibly use it later.
Yes, it does, somewhat. The problem is that Haskell can't, in general, calculate the value too early (e.g. if it would lead to an exception), so it sometimes needs to keep a thunk (code for calculating the value) instead of the value itself, which uses more memory and slows things down. The compiler tries to detect cases where this can be avoided, but it's impossible to detect all of them.
I'm new to programming and learning Haskell by reading and working through Project Euler problems. Of course, the most important thing one can do to improve performance on these problems is to use a better algorithm. However, it is clear to me that there are other simple, easy-to-implement ways to improve performance. A cursory search brought up this question, and this question, which give the following tips:
Use the ghc flags -O2 and -fllvm.
Use the type Int instead of Integer, because it is unboxed (or Int64 instead of Integer when values might not fit in Int). This requires giving the functions explicit type signatures rather than letting the compiler decide on the fly.
Use rem, not mod, for division testing.
Use Schwartzian transformations when appropriate.
Use an accumulator in recursive functions (a tail-recursion optimization, I believe).
Memoization (?)
(One answer also mentions worker/wrapper transformation, but that seems fairly advanced.)
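A minimal sketch of the first few tips (the function names are hypothetical): give numeric functions explicit Int signatures, use rem for divisibility tests, and thread an accumulator through recursion.

```haskell
-- With an explicit Int signature, GHC can use unboxed machine integers
-- instead of defaulting to arbitrary-precision Integer.
divides :: Int -> Int -> Bool
divides d n = n `rem` d == 0   -- rem skips mod's sign adjustment

-- Tail-recursive factorial with an accumulator, as mentioned above.
factorial :: Int -> Int
factorial = go 1
  where
    go acc 0 = acc
    go acc n = go (acc * n) (n - 1)
```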
Question: What other simple optimizations can one make in Haskell to improve performance on Project Euler-style problems? Are there any other Haskell-specific (or functional programming specific?) ideas or features that could be used to help speed up solutions to Project Euler problems? Conversely, what should one watch out for? What are some common yet inefficient things to be avoided?
Here are some good slides by Johan Tibell that I frequently refer to:
Haskell Performance Patterns
One easy suggestion is to use hlint, a program that checks your source code and makes syntactic suggestions for improvement. These might not increase speed, because most likely the compiler or lazy evaluation already achieves the same effect, but they may help the compiler in some cases. Furthermore, hlint will make you a better Haskell programmer, since you will learn better ways to do things, and it may become easier to understand your program and analyze it.
Examples taken from http://community.haskell.org/~ndm/darcs/hlint/hlint.htm, such as:
darcs-2.1.2\src\CommandLine.lhs:94:1: Error: Use concatMap
Found:
concat $ map escapeC s
Why not:
concatMap escapeC s
and
darcs-2.1.2\src\Darcs\Patch\Test.lhs:306:1: Error: Use a more efficient monadic variant
Found:
mapM (delete_line (fn2fp f) line) old
Why not:
mapM_ (delete_line (fn2fp f) line) old
I think the largest gains in Project Euler problems come from understanding the problem and removing unnecessary computation. Even if you don't understand everything, some small fixes can make your program run twice as fast. Let's say you are looking for primes up to 1,000,000. Of course you can do filter isPrime [1..1000000], but if you think a bit, you realize that no even number above 2 is prime; there you have removed about half the work. Instead, do 2 : filter isPrime [3,5..999999].
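A sketch of that idea, with a deliberately naive trial-division isPrime (assumed here; it is not the point of the example):

```haskell
-- Naive trial division up to the square root; good enough to
-- illustrate halving the search space.
isPrime :: Int -> Bool
isPrime n = n > 1 && all (\d -> n `rem` d /= 0) [2 .. isqrt n]
  where
    isqrt = floor . sqrt . (fromIntegral :: Int -> Double)

-- Skip every even candidate: 2 is the only even prime.
primesTo1M :: [Int]
primesTo1M = 2 : filter isPrime [3, 5 .. 999999]
```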
There is a fairly large section of the Haskell wiki about performance.
One fairly common problem is too little (or too much) strictness (this is covered by the sections listed under General techniques on the performance page above). Too much laziness causes a large number of thunks to accumulate; too much strictness can cause too much to be evaluated.
These considerations are especially important when writing tail-recursive functions (i.e. those with an accumulator). On that note, depending on how the function is used, a tail-recursive function is sometimes less efficient in Haskell than the equivalent non-tail-recursive function, even with the optimal strictness annotations.
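One way to see that caveat (a sketch with invented names): when the result is a lazily consumed list, guarded (non-tail) recursion streams in constant space, while an accumulator version must consume the whole input before producing anything.

```haskell
-- Guarded recursion: each cons cell is produced on demand, so
-- `take 3 (mapGuarded f bigList)` does only three steps of work.
mapGuarded :: (a -> b) -> [a] -> [b]
mapGuarded _ []       = []
mapGuarded f (x : xs) = f x : mapGuarded f xs

-- Tail-recursive version: nothing is returned until the entire input
-- has been traversed, and the accumulator must then be reversed.
mapAccum :: (a -> b) -> [a] -> [b]
mapAccum f = go []
  where
    go acc []       = reverse acc
    go acc (x : xs) = go (f x : acc) xs
```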
Also, as demonstrated by this recent question, sharing can make a huge difference to performance (in many cases, this can be considered a form of memoisation).
Project Euler is mostly about finding clever algorithmic solutions to the problems. Once you have the right algorithm, micro-optimization is rarely an issue, since even a straightforward or interpreted (e.g. Python or Ruby) implementation should run well within the speed constraints. The main technique you need is understanding lazy evaluation so you can avoid thunk buildups.