Haskell program runs very slow - performance

I wrote my first program calculating prime numbers. However, it runs really slowly and I can't figure out why. I wrote similar code in Java, and for n = 10000 the Java program finishes almost instantly, while the Haskell program takes about 2 minutes.
import Data.List
main = do
  print "HowManyPrimes? - OnlyInteger"
  inputNumber <- getLine
  let x = (read inputNumber :: Int)
  print (firstNPrimes x)
-- prime - algorithm
primeNumber:: Int -> Bool
primeNumber 2 = True
primeNumber x = primNumberRec x (div x 2)
primNumberRec:: Int -> Int -> Bool
primNumberRec x y
  | y == 0 = False
  | y == 1 = True
  | mod x y == 0 = False
  | otherwise = primNumberRec x (y-1)
-- prime numbers till n
primesTillN:: Int -> [Int]
primesTillN n = 2:[ x | x <- [3,5..n], primeNumber x ]
--firstNPrimes
firstNPrimes:: Int -> [Int]
firstNPrimes 0 = []
firstNPrimes n = 2: take (n-1) [x|x <- [3,5..], primeNumber x]
Thanks in advance.
Similar java code:
import java.util.Scanner;
public class PrimeNumbers{
    static Scanner scan = new Scanner(System.in);

    public boolean primeAlgorithm(int x){
        if (x < 2)
            return false;
        return primeAlgorithm(x, (int)Math.sqrt(x));
    }

    public boolean primeAlgorithm(int x, int divider){
        if (divider == 1)
            return true;
        if (x%divider == 0)
            return false;
        return primeAlgorithm(x, divider-1);
    }

    public static void main(String[] args){
        PrimeNumbers p = new PrimeNumbers();
        int howManyPrimes = scan.nextInt();
        int number = 3;
        while(howManyPrimes!=0){
            if(p.primeAlgorithm(number)){
                System.out.print(number+" ");
                howManyPrimes--;
            }
            number+=2;
        }
    }
}

When doing timing measurements, always compile; ghci is designed for a fast change-rebuild-run loop, not for speedy execution of the produced code. However, even after following this advice there is a huge timing difference between your two snippets.
The key difference between your Java and your Haskell is that the Java version only tests divisors up to sqrt(x), while the Haskell version counts down from x/2. Your originals, on my machine:
% javac Test.java && echo 10000 | /usr/bin/time java Test >/dev/null
0.21user 0.02system 0:00.13elapsed 186%CPU (0avgtext+0avgdata 38584maxresident)k
0inputs+0outputs (0major+5823minor)pagefaults 0swaps
% ghc -O2 test && echo 10000 | /usr/bin/time ./test >/dev/null
8.85user 0.00system 0:08.87elapsed 99%CPU (0avgtext+0avgdata 4668maxresident)k
0inputs+0outputs (0major+430minor)pagefaults 0swaps
So 0.2 s for Java and 8.9 s for Haskell. After switching to the square root with the following change:
- primeNumber x = primNumberRec x (div x 2)
+ primeNumber x = primNumberRec x (ceiling (sqrt (fromIntegral x)))
I get the following timing for the Haskell:
% ghc -O2 test && echo 10000 | /usr/bin/time ./test >/dev/null
0.07user 0.00system 0:00.07elapsed 98%CPU (0avgtext+0avgdata 4560maxresident)k
0inputs+0outputs (0major+427minor)pagefaults 0swaps
Now it's 3x faster than the Java code. (And of course there are significantly better algorithms that will make it faster still.)
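For instance, a minimal sketch of one such improvement: trial division that walks upward from 3 and stops at the square root, so most composites are rejected after only a few divisions instead of recursing all the way down from the square root:

isPrime :: Int -> Bool
isPrime n
  | n < 2     = False
  | n == 2    = True
  | even n    = False
  | otherwise = go 3
  where
    go d
      | d * d > n      = True   -- no divisor found up to sqrt n
      | n `mod` d == 0 = False
      | otherwise      = go (d + 2)

firstNPrimes' :: Int -> [Int]
firstNPrimes' k = take k (filter isPrime [2 ..])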

Compile it!
Haskell code in GHCi is far from optimised; try compiling it into a binary with ghc -o prime prime.hs, or even better with the -O2 optimisation flag. I once had a script that took 5 minutes in GHCi but mere seconds once compiled.

Related

How can I speedup this Julia code?

The code implements a Pollard rho function for finding a factor of a positive integer n. I've examined some of the code in the Julia "Primes" package, which runs rapidly, in an attempt to speed up the pollard_rho() function, all to no avail. The code should handle n = 1524157897241274137 in approximately 100 ms to 30 s (Erlang, Haskell, Mercury, SWI Prolog), but it takes about 3 to 4 minutes on JuliaBox, IJulia, and the Julia REPL. How can I make this go fast?
pollard_rho(1524157897241274137) = 1234567891
__precompile__()
module Pollard
export pollard_rho

function pollard_rho{T<:Integer}(n::T)
    f(x::T, r::T, n) = rem(((x ^ T(2)) + r), n)
    r::T = 7; x::T = 2; y::T = 11; y1::T = 11; z::T = 1
    while z == 1
        x = f(x, r, n)
        y1 = f(y, r, n)
        y = f(y1, r, n)
        z = gcd(n, abs(x - y))
    end
    z >= n ? "error" : z
end

end # module
There are quite a few problems with type instability here.
Don't return either the string "error" or a result; instead explicitly call error().
As Chris mentioned, x and r ought to be annotated to be of type T, else they will be unstable.
There also seems to be a potential problem with overflow. A solution is to widen in the squaring step before truncating back to type T.
function pollard_rho{T<:Integer}(n::T)
    f(x::T, r::T, n) = rem(Base.widemul(x, x) + r, n) % T
    r::T = 7; x::T = 2; y::T = 11; y1::T = 11; z::T = 1
    while z == 1
        x = f(x, r, n)
        y1 = f(y, r, n)
        y = f(y1, r, n)
        z = gcd(n, abs(x - y))
    end
    z >= n ? error() : z
end
After making these changes the function will run as fast as you could expect.
julia> @btime pollard_rho(1524157897241274137)
4.128 ms (0 allocations: 0 bytes)
1234567891
To find these problems with type instability, use the @code_warntype macro.
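Since the question lists Haskell among the languages that hit the target time, a minimal sketch of the same rho iteration in Haskell (using Integer, so the squaring step cannot overflow) might look like this:

pollardRho :: Integer -> Integer
pollardRho n = go 2 11
  where
    f x = (x * x + 7) `rem` n          -- same polynomial x^2 + 7 (mod n)
    go x y
      | z == 1    = go x' y'           -- keep iterating until a non-trivial gcd appears
      | z >= n    = error "pollardRho: no factor found"
      | otherwise = z
      where
        x' = f x                       -- tortoise: one step
        y' = f (f y)                   -- hare: two steps
        z  = gcd n (abs (x' - y'))

This mirrors the loop structure of the Julia code above, only with arbitrary-precision arithmetic throughout.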

Haskell performance tuning

I'm quite new to Haskell, and to learn it better I started solving problems here and there, and I ended up with this (Project Euler 34).
145 is a curious number, as 1! + 4! + 5! = 1 + 24 + 120 = 145.
Find the sum of all numbers which are equal to the sum of the factorial of their digits.
Note: as 1! = 1 and 2! = 2 are not sums they are not included.
I wrote a C and a Haskell brute-force solution.
Could someone explain why the Haskell version is ~15x slower (~0.450 s vs ~6.5 s) than the C implementation, and how I might tune and speed up the Haskell solution?
unsigned int solve(){
    unsigned int result = 0;
    unsigned int i=10;
    while(i<2540161){
        unsigned int sumOfFacts = 0;
        unsigned int number = i;
        while (number > 0) {
            unsigned int d = number % 10;
            number /= 10;
            sumOfFacts += factorial(d);
        }
        if (sumOfFacts == i)
            result += i;
        i++;
    }
    return result;
}
Here is the Haskell solution:
--BRUTE FORCE SOLUTION
solve :: Int
solve = sum (filter (\x -> sfc x 0 == x) [10..2540160])

--sum factorial of digits
sfc :: Int -> Int -> Int
sfc 0 acc = acc
sfc n acc = sfc n' (acc + fc r)
  where
    n' = div n 10
    r  = mod n 10 --n-(10*n')
    fc 0 = 1
    fc 1 = 1
    fc 2 = 2
    fc 3 = 6
    fc 4 = 24
    fc 5 = 120
    fc 6 = 720
    fc 7 = 5040
    fc 8 = 40320
    fc 9 = 362880
First, compile with optimizations. With ghc-7.10.1 -O2 -fllvm, the Haskell version runs in 0.54 secs for me. This is already pretty good.
If we want to do even better, we should first replace div with quot and mod with rem. div and mod do some extra work, because they handle the rounding of negative numbers differently. Since we only have positive numbers here, we should switch to the faster functions.
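As a quick illustration of the difference (a minimal sketch): div/mod round toward negative infinity, while quot/rem truncate toward zero; for the non-negative numbers used here the results coincide, which is why the swap is safe.

main :: IO ()
main = do
  print (div (-7) 2, mod (-7) 2)    -- (-4,1): floor-style rounding
  print (quot (-7) 2, rem (-7) 2)   -- (-3,-1): truncation toward zero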
Second, we should replace the pattern matching in fc with an array lookup. GHC uses a branching construct for Int patterns, and uses binary search when the number of cases is large enough. We can do better here with a lookup.
The new code looks like this:
import qualified Data.Vector.Unboxed as V

facs :: V.Vector Int
facs = V.fromList [1, 1, 2, 6, 24, 120, 720, 5040, 40320, 362880]

--BRUTE FORCE SOLUTION
solve :: Int
solve = sum (filter (\x -> sfc x 0 == x) [10..2540160])

--sum factorial of digits
sfc :: Int -> Int -> Int
sfc 0 acc = acc
sfc n acc = sfc n' (acc + V.unsafeIndex facs r)
  where
    (n', r) = quotRem n 10

main = print solve
It runs in 0.095 seconds on my computer.

An efficient algorithm to calculate the integer square root (isqrt) of arbitrarily large integers

Notice
For a solution in Erlang or C / C++, go to Trial 4 below.
Wikipedia Articles
Integer square root
The definition of "integer square root" could be found here
Methods of computing square roots
An algorithm that does "bit magic" could be found here
[ Trial 1 : Using Library Function ]
Code
isqrt(N) when erlang:is_integer(N), N >= 0 ->
    erlang:trunc(math:sqrt(N)).
Problem
This implementation uses the sqrt() function from the C library, so it does not work with arbitrarily large integers (Note that the returned result does not match the input. The correct answer should be 12345678901234567890):
Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:8:8] [async-threads:10] [hipe] [kernel-poll:false]
Eshell V5.10.4 (abort with ^G)
1> erlang:trunc(math:sqrt(12345678901234567890 * 12345678901234567890)).
12345678901234567168
2>
[ Trial 2 : Using Bigint + Only ]
Code
isqrt2(N) when erlang:is_integer(N), N >= 0 ->
    isqrt2(N, 0, 3, 0).

isqrt2(N, I, _, Result) when I >= N ->
    Result;
isqrt2(N, I, Times, Result) ->
    isqrt2(N, I + Times, Times + 2, Result + 1).
Description
This implementation is based on the following observation:
isqrt(0) = 0 # <--- One 0
isqrt(1) = 1 # <-+
isqrt(2) = 1 # |- Three 1's
isqrt(3) = 1 # <-+
isqrt(4) = 2 # <-+
isqrt(5) = 2 # |
isqrt(6) = 2 # |- Five 2's
isqrt(7) = 2 # |
isqrt(8) = 2 # <-+
isqrt(9) = 3 # <-+
isqrt(10) = 3 # |
isqrt(11) = 3 # |
isqrt(12) = 3 # |- Seven 3's
isqrt(13) = 3 # |
isqrt(14) = 3 # |
isqrt(15) = 3 # <-+
isqrt(16) = 4 # <--- Nine 4's
...
Problem
This implementation involves only bigint additions so I expected it to run fast. However, when I fed it with 1111111111111111111111111111111111111111 * 1111111111111111111111111111111111111111, it seems to run forever on my (very fast) machine.
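A short aside on why this approach cannot finish: exactly 2k + 1 consecutive integers share the integer square root k, since

(k + 1)^2 - k^2 = 2k + 1,

so Trial 2 needs on the order of sqrt(N) bigint additions, and for the 79-digit input above that is roughly 10^39 iterations.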
[ Trial 3 : Using Binary Search with Bigint +1, -1 and div 2 Only ]
Code
Variant 1 (My original implementation)
isqrt3(N) when erlang:is_integer(N), N >= 0 ->
    isqrt3(N, 1, N).

isqrt3(_N, Low, High) when High =:= Low + 1 ->
    Low;
isqrt3(N, Low, High) ->
    Mid = (Low + High) div 2,
    MidSqr = Mid * Mid,
    if
        %% This also catches N = 0 or 1
        MidSqr =:= N ->
            Mid;
        MidSqr < N ->
            isqrt3(N, Mid, High);
        MidSqr > N ->
            isqrt3(N, Low, Mid)
    end.
Variant 2 (the above code modified so that the boundaries move to Mid + 1 or Mid - 1 instead, following the answer by Vikram Bhat)
isqrt3a(N) when erlang:is_integer(N), N >= 0 ->
    isqrt3a(N, 1, N).

isqrt3a(N, Low, High) when Low >= High ->
    HighSqr = High * High,
    if
        HighSqr > N ->
            High - 1;
        HighSqr =< N ->
            High
    end;
isqrt3a(N, Low, High) ->
    Mid = (Low + High) div 2,
    MidSqr = Mid * Mid,
    if
        %% This also catches N = 0 or 1
        MidSqr =:= N ->
            Mid;
        MidSqr < N ->
            isqrt3a(N, Mid + 1, High);
        MidSqr > N ->
            isqrt3a(N, Low, Mid - 1)
    end.
Problem
Now it solves the 79-digit number (namely 1111111111111111111111111111111111111111 * 1111111111111111111111111111111111111111) at lightning speed; the result is shown immediately. However, it takes 60 seconds (+- 2 seconds) on my machine to solve one million (1,000,000) 61-digit numbers (namely, from 1000000000000000000000000000000000000000000000000000000000000 to 1000000000000000000000000000000000000000000000000000001000000). I would like to do it even faster.
[ Trial 4 : Using Newton's Method with Bigint + and div Only ]
Code
isqrt4(0) -> 0;
isqrt4(N) when erlang:is_integer(N), N >= 0 ->
    isqrt4(N, N).

isqrt4(N, Xk) ->
    Xk1 = (Xk + N div Xk) div 2,
    if
        Xk1 >= Xk ->
            Xk;
        Xk1 < Xk ->
            isqrt4(N, Xk1)
    end.
Code in C / C++ (for your interest)
Recursive variant
#include <stdint.h>

uint32_t isqrt_impl(
    uint64_t const n,
    uint64_t const xk)
{
    uint64_t const xk1 = (xk + n / xk) / 2;
    return (xk1 >= xk) ? xk : isqrt_impl(n, xk1);
}

uint32_t isqrt(uint64_t const n)
{
    if (n == 0) return 0;
    if (n == 18446744073709551615ULL) return 4294967295U;
    return isqrt_impl(n, n);
}
Iterative variant
#include <stdint.h>

uint32_t isqrt_iterative(uint64_t const n)
{
    uint64_t xk = n;
    if (n == 0) return 0;
    if (n == 18446744073709551615ULL) return 4294967295U;
    do
    {
        uint64_t const xk1 = (xk + n / xk) / 2;
        if (xk1 >= xk)
        {
            return xk;
        }
        else
        {
            xk = xk1;
        }
    } while (1);
}
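For comparison, a minimal Haskell sketch of the same Newton iteration on arbitrary-precision Integer might look like this:

isqrtNewton :: Integer -> Integer
isqrtNewton 0 = 0
isqrtNewton n = go n
  where
    go xk
      | xk1 >= xk = xk                     -- the iterate stopped decreasing: done
      | otherwise = go xk1
      where
        xk1 = (xk + n `div` xk) `div` 2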
Problem
The Erlang code solves one million (1,000,000) 61-digit numbers in 40 seconds (+- 1 second) on my machine, so this is faster than Trial 3. Can it go even faster?
About My Machine
Processor : 3.4 GHz Intel Core i7
Memory : 32 GB 1600 MHz DDR3
OS : Mac OS X Version 10.9.1
Related Questions
Integer square root in python
The answer by user448810 uses "Newton's Method". I'm not sure whether doing the division using "integer division" is okay or not. I'll try this later as an update. [UPDATE (2015-01-11): It is okay to do so]
The answer by math involves using a 3rd party Python package gmpy, which is not very favourable to me, since I'm primarily interested in solving it in Erlang with only builtin facilities.
The answer by DSM seems interesting. I don't really understand what is going on, but it seems that "bit magic" is involved there, and so it's not quite suitable for me too.
Infinite Recursion in Meta Integer Square Root
This question is for C++, and the algorithm by AraK (the questioner) looks like it's from the same idea as Trial 2 above.
How about a binary search like the following? It needs no floating-point divisions, only integer multiplications (slower than Newton's method):
low = 1;
/* More efficient bound
high = pow(10,log10(target)/2+1);
*/
high = target;
while(low<high) {
    mid = (low+high)/2;
    currsq = mid*mid;
    if(currsq==target) {
        return(mid);
    }
    if(currsq<target) {
        if((mid+1)*(mid+1)>target) {
            return(mid);
        }
        low = mid+1;
    }
    else {
        high = mid-1;
    }
}
This takes O(log N) iterations, so it should not run forever even for very large numbers.
Computing log10(target), if needed:
acc = target;
log10 = 0;
while(acc>0) {
    log10 = log10 + 1;
    acc = acc/10;
}
Note : acc/10 is integer division
Edit:
Efficient bound: sqrt(n) has about half as many digits as n, so you can pass high = 10^(log10(N)/2 + 1) and low = 10^(log10(N)/2 - 1) to get a tighter bound, which should give roughly a 2x speedup.
Evaluating the bound:
bound = 1;
acc = N;
count = 0;
while(acc>0) {
    acc = acc/10;
    if(count%2==0) {
        bound = bound*10;
    }
    count++;
}
high = bound*10;
low = bound/10;
isqrt(N,low,high);
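A minimal Haskell sketch combining the digit-count bound with the binary search (an illustration of the approach above, not the poster's code) might look like this:

isqrtBinary :: Integer -> Integer
isqrtBinary 0 = 0
isqrtBinary n = go low high
  where
    digits = length (show n)                  -- crude log10
    low    = 10 ^ max 0 (digits `div` 2 - 1)
    high   = 10 ^ (digits `div` 2 + 1)
    go lo hi
      | lo > hi        = hi                   -- hi ends as the largest value with hi^2 <= n
      | mid * mid <= n = go (mid + 1) hi
      | otherwise      = go lo (mid - 1)
      where
        mid = (lo + hi) `div` 2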

Why is my Scala tail-recursion faster than the while loop?

Here are two solutions to exercise 4.9 in Cay Horstmann's Scala for the Impatient: "Write a function lteqgt(values: Array[Int], v: Int) that returns a triple containing the counts of values less than v, equal to v, and greater than v." One uses tail recursion, the other uses a while loop. I thought that both would compile to similar bytecode but the while loop is slower than the tail recursion by a factor of almost 2. This suggests to me that my while method is badly written.
import scala.annotation.tailrec
import scala.util.Random

object PerformanceTest {
  def main(args: Array[String]): Unit = {
    val bigArray:Array[Int] = fillArray(new Array[Int](100000000))
    println(time(lteqgt(bigArray, 25)))
    println(time(lteqgt2(bigArray, 25)))
  }

  def time[T](block : => T):T = {
    val start = System.nanoTime : Double
    val result = block
    val end = System.nanoTime : Double
    println("Time = " + (end - start) / 1000000.0 + " millis")
    result
  }

  @tailrec def fillArray(a:Array[Int], pos:Int=0):Array[Int] = {
    if (pos == a.length)
      a
    else {
      a(pos) = Random.nextInt(50)
      fillArray(a, pos+1)
    }
  }

  @tailrec def lteqgt(values: Array[Int], v:Int, lt:Int=0, eq:Int=0, gt:Int=0, pos:Int=0):(Int, Int, Int) = {
    if (pos == values.length)
      (lt, eq, gt)
    else
      lteqgt(values, v, lt + (if (values(pos) < v) 1 else 0), eq + (if (values(pos) == v) 1 else 0), gt + (if (values(pos) > v) 1 else 0), pos+1)
  }

  def lteqgt2(values:Array[Int], v:Int):(Int, Int, Int) = {
    var lt = 0
    var eq = 0
    var gt = 0
    var pos = 0
    val limit = values.length
    while (pos < limit) {
      if (values(pos) > v)
        gt += 1
      else if (values(pos) < v)
        lt += 1
      else
        eq += 1
      pos += 1
    }
    (lt, eq, gt)
  }
}
Adjust the size of bigArray according to your heap size. Here is some sample output:
Time = 245.110899 millis
(50004367,2003090,47992543)
Time = 465.836894 millis
(50004367,2003090,47992543)
Why is the while method so much slower than the tailrec? Naively the tailrec version looks to be at a slight disadvantage, as it must always perform 3 "if" checks for every iteration, whereas the while version will often only perform 1 or 2 tests due to the else construct. (NB reversing the order I perform the two methods does not affect the outcome).
Test results (after reducing array size to 20000000)
Under Java 1.6.22 I get 151 and 122 ms for tail-recursion and while-loop respectively.
Under Java 1.7.0 I get 55 and 101 ms
So under Java 6 your while-loop is actually faster; both have improved in performance under Java 7, but the tail-recursive version has overtaken the loop.
Explanation
The performance difference is due to the fact that in your loop, you conditionally add 1 to the totals, while for recursion you always add either 1 or 0. So they are not equivalent. The equivalent while-loop to your recursive method is:
def lteqgt2(values:Array[Int], v:Int):(Int, Int, Int) = {
  var lt = 0
  var eq = 0
  var gt = 0
  var pos = 0
  val limit = values.length
  while (pos < limit) {
    gt += (if (values(pos) > v) 1 else 0)
    lt += (if (values(pos) < v) 1 else 0)
    eq += (if (values(pos) == v) 1 else 0)
    pos += 1
  }
  (lt, eq, gt)
}
and this gives exactly the same execution time as the recursive method (regardless of Java version).
Discussion
I'm not an expert on why the Java 7 VM (HotSpot) can optimize this better than your first version, but I'd guess it's because it's taking the same path through the code each time (rather than branching along the if / else if paths), so the bytecode can be inlined more efficiently.
But remember that this is not the case in Java 6. Why one while-loop outperforms the other is a question of JVM internals. Happily for the Scala programmer, the version produced from idiomatic tail-recursion is the faster one in the latest version of the JVM.
The difference could also be occurring at the processor level. See this question, which explains how code slows down if it contains unpredictable branching.
The two constructs are not identical. In particular, in the first case you don't need any jumps (on x86, you can use cmp and setle and add, instead of having to use cmp and jb and, if you don't jump, add). Not jumping is faster than jumping on pretty much every modern architecture.
So, if you have code that looks like
if (a < b) x += 1
where you may add or you may jump instead, vs.
x += (a < b)
(which only makes sense in C/C++ where 1 = true and 0 = false), the latter tends to be faster as it can be turned into more compact assembly code. In Scala/Java, you can't do this, but you can do
x += if (a < b) 1 else 0
which a smart JVM should recognize is the same as x += (a < b), which has a jump-free machine code translation, which is usually faster than jumping. An even smarter JVM would recognize that
if (a < b) x += 1
is the same yet again (because adding zero doesn't do anything).
C/C++ compilers routinely perform optimizations like this. Being unable to apply any of these optimizations was not a mark in the JIT compiler's favor; apparently it can as of 1.7, but only partially (i.e. it doesn't recognize that adding zero is the same as a conditional adding one, but it does at least convert x += if (a<b) 1 else 0 into fast machine code).
Now, none of this has anything to do with tail recursion or while loops per se. With tail recursion it's more natural to write the if (a < b) 1 else 0 form, but you can do either; and with while loops you can also do either. It just so happened that you picked one form for tail recursion and the other for the while loop, making it look like recursion vs. looping was the change instead of the two different ways to do the conditionals.
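As an aside, the same branch-free accumulation idea can be sketched in Haskell by converting each comparison to 0 or 1 with fromEnum (a minimal sketch, unrelated to the JVM measurements above):

{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

countLtEqGt :: [Int] -> Int -> (Int, Int, Int)
countLtEqGt values v = foldl' step (0, 0, 0) values
  where
    step (!lt, !eq, !gt) x =
      ( lt + fromEnum (x < v)     -- adds 1 or 0, never branches on the count
      , eq + fromEnum (x == v)
      , gt + fromEnum (x > v) )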

Strange space behavior of Haskell program

I thought that the Cont monad is just equivalent to the CPS transformation, so if I have a monadic sum and run it in the Identity monad, it will fail with a stack overflow, while if I run it in the Cont monad, it will be fine thanks to tail recursion.
So I've written a simple program to verify my idea. But to my surprise, the results don't match my expectations, no doubt due to my limited knowledge.
All programs are compiled using ghc --make Test.hs -o test && ./test
sum0 n = if n==0 then 0 else n + sum0 (n-1)
sum1 n = if n==0 then return 0 else sum1 (n-1) >>= \ v -> seq v (return (n+v))
sum2 n k = if n == 0 then k 0 else sum2 n (\v -> k (n + v))
sum3 n k = if n == 0 then k 0 else sum3 n (\ !v -> k (n + v))
sum4 n k = if n == 0 then k 0 else sum4 n (\ v -> seq v ( k (n + v)))
sum5 n = if n==0 then return 0 else sum5 (n-1) >>= \ v -> (return (n+v))
main = print (sum0 3000000)
Stack overflow. This is reasonable.
main = print (flip runCont id (sum1 3000000))
Uses 180M of memory, which is reasonable, but I am not clear why seq is needed here, since its continuation is not applied until n goes to 0.
main = print (flip runCont id (sum5 3000000))
Stack overflow. Why?
main = print (flip runCont (const 0) (sum1 3000000))
Uses 130M memory. This is reasonable.
main = print (flip runCont (const 0) (sum5 3000000))
Uses 118M memory. This is reasonable.
main = print (sum2 3000000 (const 0))
Uses a lot of memory (more than 1G). I thought sum2 was equivalent to sum5 (when sum5 runs in the Cont monad). Why?
main = print (sum3 3000000 (const 0))
Uses a lot of memory. I thought sum3 was equivalent to sum1 (in the Cont monad). Why?
main = print (runIdentity (sum1 3000000))
Stack overflow, exactly what I want.
main = print (sum3 3000000 id)
Uses a lot of memory. Equivalent to sum1, why?
main = print (sum4 3000000 id)
Uses a lot of memory. Equivalent to sum1, why?
main = print (sum [1 .. 3000000])
Stack overflow. The source of sum = foldl (+) 0, so this is reasonable.
main = print (foldl' (+) 0 [1 .. 3000000])
Uses 1.5M.
First of all, it looks to me like sum2, sum3, and sum4 never actually decrement n. So they're using lots of memory because they're going into an infinite loop that does allocation.
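Concretely, the fix being described is to recurse on (n - 1); a minimal sketch for sum2 (the same change applies to sum3 and sum4):

sum2 :: Int -> (Int -> Int) -> Int
sum2 n k = if n == 0 then k 0 else sum2 (n - 1) (\v -> k (n + v))
-- was: sum2 n (\v -> k (n + v)), which never reaches the base case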
After correcting that, I've run each of your tests again with the following results, where "allocation" refers to approximate peak memory use:
main = print (sum0 3000000) : Stack overflow, after allocating very little memory
main = print (flip runCont id (sum1 3000000)) : Success, allocating similar amounts to what you saw
main = print (flip runCont id (sum5 3000000)) : Stack overflow, after allocating similar amounts of memory as sum1.
main = print (flip runCont (const 0) (sum1 3000000)) : Success, similar allocation as the above
main = print (flip runCont (const 0) (sum5 3000000)) : Same
main = print (sum2 3000000 (const 0)) : Success, about 70% as much allocation as sum1
main = print (sum3 3000000 (const 0)) : Success, about 50% as much allocation as sum1
main = print (runIdentity (sum1 3000000)) : Stack overflow, with little allocation
main = print (sum3 3000000 id) : Success, about 50% as much allocation as sum1
main = print (sum4 3000000 id) : Success, about 50% as much allocation as sum1
main = print (sum [1 .. 3000000]) : Stack overflow, with about 80% as much allocation as sum1
main = print (foldl' (+) 0 [1 .. 3000000]) : Success, with almost no allocation
So that's mostly what you expected, with the exception of why seq makes such a difference between sum1 and sum5.
