I am trying to solve some issues with programming in Haskell.
I have these three types and the Hanoi algorithm:
type Position = Int
type Move = (Position,Position)
type Towers = ([Int],[Int],[Int])
hanoi 1 i j = [(i,j)]
hanoi n i j = hanoi n' i otherT ++ [(i,j)] ++ hanoi n' otherT j
  where
    n' = n - 1
    otherT = 1 + 2 + 3 - i - j -- the other tower
Now I have written a function which makes one single move.
move :: ([Move],Towers) -> ([Move],Towers)
move ([], (xs,ys,zs)) = (error "Error", (xs,ys,zs))
move ((a,b):rest, (xs,ys,zs))
  | a > 3     = (rest, (error "Error", ys, zs))
  | b > 3     = (rest, (error "Error", ys, zs))
  | otherwise = hilfsfunktion ((a,b):rest, (xs,ys,zs))

hilfsfunktion ((1,2):rest, (x:xs, y:ys, zs))
  | x < y = (rest, (xs, x:y:ys, zs))
  | x > y = (rest, (xs, error "too big", error "too big"))
hilfsfunktion ((1,2):rest, (x:xs, [], zs)) = (rest, (xs, [x], zs))
The function is much longer, but you can see that I solved this with pattern matching.
Now I am trying to write a function that calls this move function as long as the move list is not empty. So first I used pattern matching for the case where the list is empty.
all_moves :: ([Move], Towers) -> Towers
all_moves ([], (xs,ys,zs)) = (xs,ys,zs)
all_moves ((a,b):rest, (xs,ys,zs)) = ????
Now I need some help; in Java I would use a loop for this. I think I have to call the function move recursively, but I don't know how to do this.
How can I complete this function?
I'm not sure exactly how your hanoi solution works, but I think this answers your question:
all_moves :: ([Move], Towers) -> Towers
all_moves ([], (xs, ys, zs)) = (xs, ys, zs)
all_moves movetowers = all_moves (move movetowers)
Hopefully you can see why this works - we keep doing a move and then a recursive call until there are no moves left.
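If you want to convince yourself it terminates, a quick check might look like this (a hypothetical GHCi session; the exact result depends on the rest of your move implementation, and I am assuming the towers start as ([1,2,3],[],[]) with hanoi 3 1 2 as the move list):

> all_moves (hanoi 3 1 2, ([1,2,3],[],[]))
([],[1,2,3],[])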
I would like to do it this way, even though my method is not recursive.
allmove :: Towers -> Towers
allmove ([0],[0],[0]) = ([0],[0],[0])
allmove (x:xs, y, z)
  | length (x:xs) > length z && length z > length y = (xs, x:y, z)
  | length (x:xs) >= length z && length (x:xs) > length y = (xs, x:y, z)
  ...
I have been trying to solve the following problem in Haskell:
Generate a list of tuples (n, s) where 0 ≤ n ≤ 100 and n mod 2 = 0, and where s = sum(1..n). The output should be the list [(0,0),(2,3),(4,10),...,(100,5050)].
I tried to solve the problem with the following code:
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX x =
    take x [(n, s) | n <- [1..x], s <- sumUntilN x]
  where
    sumUntilN :: Int -> [Int]
    sumUntilN n
      | n == 0 = []
      | n == 1 = [1]
      | otherwise = sumUntilN (n-1) ++ [sum [1..n]]
However, this code does not give the expected result (as @Guru Stron pointed out; thank you!).
I would also appreciate it if somebody could help me make this code more concise. I am also new to the concept of lazy evaluation, so I am unable to determine the runtime complexity. Help will be appreciated.
However, I feel this code could still be improved upon, especially:
take x in the function seems really inelegant. Is there a way to have list comprehensions pair up elements at the same index?
sumUntilN feels really verbose. Is there an idiomatic way to do the same in Haskell?
Finally, I am extremely new to Haskell and have trouble evaluating the time and space complexity of the function. Can somebody help me there?
sumOfNumsUptoN n = n * (n + 1) `div` 2
genListTupleSumUntilX :: Int -> [(Int, Int)]
genListTupleSumUntilX n = zip [0, 2 .. n] $ map sumOfNumsUptoN [0, 2 .. n]
This is of linear complexity in the size of the list.
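For example (a quick GHCi check):

> genListTupleSumUntilX 8
[(0,0),(2,3),(4,10),(6,21),(8,36)]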
I would say that you overcomplicate things. To produce the correct output you can use a simple list comprehension:
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX x = [(n, sum [1..n]) | n <- [0,2..x]]
Note that this solution recalculates the same sums repeatedly: for element n + 2 the sum is (n + 1) + (n + 2) + the sum for element n, so the previous computation can be reused. This leads to O(n^2) complexity, though for such relatively small n it is not a big issue. You can handle this using the scanl function (though maybe there is a more idiomatic approach to memoization):
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX 0 = []
genListTupleSumUntilX x = scanl (\ (prev, prevSum) curr -> (curr, prevSum + prev + 1 + curr)) (0,0) [2,4..x]
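To see how scanl threads the running sum, here is a small evaluation sketch:

-- genListTupleSumUntilX 6
--   = scanl f (0,0) [2,4,6]          -- f is the lambda above
--   = [(0,0), f (0,0) 2, f (2,3) 4, f (4,10) 6]
--   = [(0,0), (2,3), (4,10), (6,21)]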
How can I efficiently represent the list [0..] \\ [t+0*p, t+1*p ..]?
I have defined:
Prelude> let factors p t = [t+0*p, t+1*p ..]
I want to efficiently represent an infinite list that is the difference of [0..] and factors p t, but using \\ from Data.List requires too much memory for even medium-sized lists:
Prelude Data.List> [0..10000] \\ (factors 5 0)
<interactive>: out of memory
I know that I can represent the values between t+0*p and t+1*p with:
Prelude> let innerList p1 p2 t = [t+p1+1, t+p1+2 .. t+p2-1]
Prelude> innerList 0 5 0
[1,2,3,4]
However, repeatedly calculating and concatenating innerList for increasing intervals seems clumsy.
Can I efficiently represent [0..] \\ (factors p t) without calculating rem or mod for each element?
For the infinite list [0..] \\ [t,t+p..],
yourlist t p = [0..t-1] ++ [i | m <- [0,p..], i <- [t+m+1..t+m+p-1]]
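For example, with t = 0 and p = 5 this skips exactly the multiples of 5:

Prelude> take 8 (yourlist 0 5)
[1,2,3,4,6,7,8,9]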
Of course this approach doesn't scale at all if you want to remove some other factors as well, like
[0..] \\ [t,t+p..] \\ [s,s+q..] \\ ...
in which case you'll have to remove them in sequence with minus, mentioned in Daniel Fischer's answer. There is no magic bullet here.
But there's also a union, with which the above becomes
[0..] \\ ( [t,t+p..] `union` [s,s+q..] `union` ... )
The advantage is that we can arrange the unions in a tree and get an algorithmic improvement.
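As a sketch of that idea (my arrangement, not the answerer's code; I'm assuming data-ordlist's minus and unionAll, where unionAll already performs a tree-shaped union and expects the progressions ordered by their first element):

import Data.List.Ordered (minus, unionAll)

-- Each (t, p) pair is taken to describe the progression [t, t+p ..].
without :: [(Integer, Integer)] -> [Integer]
without aps = [0..] `minus` unionAll [ [t, t + p ..] | (t, p) <- aps ]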
You can't use (\\) for that, because
(\\) :: (Eq a) => [a] -> [a] -> [a]
(\\) = foldl (flip delete)
the list of elements you want to remove is infinite, and a left fold never terminates when the list it folds over is infinite.
If you would rather use something already written than write it yourself, you can use minus from the data-ordlist package.
The performance should be adequate.
Otherwise,
minus :: Ord a => [a] -> [a] -> [a]
minus xxs@(x:xs) yys@(y:ys)
  | x < y     = x : minus xs yys
  | x == y    = minus xs ys
  | otherwise = minus xxs ys
minus xs _ = xs
You can use a list comprehension with a predicate, using rem:
>>> let t = 0
>>> let p = 5
>>> take 40 $ [ x | x <- [1..], x `rem` p /= t ]
[1,2,3,4,6,7,8,9,11,12,13,14,16,17,18,19,21,22,23,24,26,27,28,29,31,32,33,34,36,37,38,39,41,42,43,44,46,47,48,49]
If you want efficiency, why does your solution have to use list comprehension syntax?
Why not something like this?
gen' n i p | i == p = gen' (n + p) 1 p
gen' n i p = (n+i) : gen' n (i+1) p
gen = gen' 0 1
and then do
gen 5
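which, tracing the recursion, should yield the positive integers with every fifth number skipped:

Prelude> take 8 (gen 5)
[1,2,3,4,6,7,8,9]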
Because you have ascending lists, you can simply lazily merge them:
nums = [1..]
nogos = factors p t

result = merge nums (dropWhile (< head nums) nogos)
  where
    merge (a:as) (b:bs)
      | a < b     = a : merge as (b:bs)
      | a == b    = merge as bs
      | otherwise = error "should not happen"
Writing this in a general way, so that we have a function that builds the difference of two infinite lists (provided only that they are in ascending order), is left as an exercise. In the end, the following should be possible:
[1..] `infiniteDifference` primes `infiniteDifference` squares
For this, make it a left associative operator.
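A minimal sketch of that exercise (my take, under the stated assumption that both lists are ascending; it is essentially the minus merge from the other answer, plus a fixity declaration):

infixl 5 `infiniteDifference`

infiniteDifference :: Ord a => [a] -> [a] -> [a]
infiniteDifference aas@(a:as) bbs@(b:bs)
  | a < b     = a : infiniteDifference as bbs
  | a == b    = infiniteDifference as bs
  | otherwise = infiniteDifference aas bs
infiniteDifference as _ = as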
I have been struggling with something that looks like a simple algorithm, but can't find a clean way to express it in a functional style so far. Here is an outline of the problem: suppose I have 2 arrays X and Y,
X = [| 1; 2; 2; 3; 3 |]
Y = [| 5; 4; 4; 3; 2; 2 |]
What I want is to retrieve the elements that match, and the unmatched elements, like:
matched = [| 2; 2; 3 |]
unmatched = [| 1; 3 |], [| 4; 4; 5 |]
In pseudo-code, this is how I would think of approaching the problem:
let rec match matches x y =
    let m = find first match from x in y
    if no match, (matches, x, y)
    else
        let x' = remove m from x
        let y' = remove m from y
        let matches' = add m to matches
        match matches' x' y'
The problem I run into is the "remove m from x" part - I can't find a clean way to do this (I have working code, but it's ugly as hell). Is there a nice, idiomatic functional way to approach that problem, either the removal part, or a different way to write the algorithm itself?
This could be solved easily using the right data structures, but in case you wanted to do it manually, here's how I would do it in Haskell. I don't know F# well enough to translate this, but I hope it is similar enough. So, here goes, in (semi-)literate Haskell.
overlap xs ys =
I start by sorting the two sequences to get away from the problem of having to know about previous values.
    go (sort xs) (sort ys)
  where
The two base cases for the recursion are easy enough to handle -- if either list is empty, the result includes the other list in the list of elements that are not overlapping.
    go xs [] = ([], (xs, []))
    go [] ys = ([], ([], ys))
I then inspect the first elements in each list. If they match, I can be sure that the lists overlap on that element, so I add that to the included elements, and I let the excluded elements be. I continue the search for the rest of the list by recursing on the tails of the lists.
    go (x:xs) (y:ys)
      | x == y = let (included, excluded) = go xs ys
                 in (x : included, excluded)
Then comes the interesting part! What I essentially want to know is if the first element of one of the lists does not exist in the second list – in that case I should add it to the excluded lists and then continue the search.
      | x < y = let (included, (xex, yex)) = go xs (y:ys)
                in (included, (x : xex, yex))
      | y < x = let (included, (xex, yex)) = go (x:xs) ys
                in (included, (xex, y : yex))
And this is actually it. It seems to work for at least the example you gave.
> let (matched, unmatched) = overlap x y
> matched
[2,2,3]
> unmatched
([1,3],[4,4,5])
It seems that you're describing a multiset (bag) and its operations.
If you use the appropriate data structures, operations are very easy to implement:
// Assume that X, Y are initialized bags
let matches = X.IntersectWith(Y)
let x = X.Difference(Y)
let y = Y.Difference(X)
There's no built-in Bag collection in the .NET Framework. You could use the Power Collections library, which includes a Bag class from which the above function signatures are taken.
UPDATE:
You can represent a bag by a weakly ascending list. Here is an improved version of @kqr's answer in F# syntax:
let overlap xs ys =
    let rec loop (matches, ins, outs) xs ys =
        match xs, ys with
        // found a match
        | x::xs', y::ys' when x = y -> loop (x::matches, ins, outs) xs' ys'
        // `x` is smaller than every element in `ys`, put `x` into `ins`
        | x::xs', y::ys' when x < y -> loop (matches, x::ins, outs) xs' ys
        // `y` is smaller than every element in `xs`, put `y` into `outs`
        | x::xs', y::ys' -> loop (matches, ins, y::outs) xs ys'
        // copy remaining elements in `xs` to `ins`
        | x::xs', [] -> loop (matches, x::ins, outs) xs' ys
        // copy remaining elements in `ys` to `outs`
        | [], y::ys' -> loop (matches, ins, y::outs) xs ys'
        | [], [] -> (List.rev matches, List.rev ins, List.rev outs)
    loop ([], [], []) (List.sort xs) (List.sort ys)
After two calls to List.sort, which are probably O(n log n), finding matches is linear in the sum of the lengths of the two lists.
If you need a quick-and-dirty bag module, I would suggest a module signature like this:
type Bag<'T> = Bag of 'T list

module Bag =
    val count      : 'T -> Bag<'T> -> int
    val insert     : 'T -> Bag<'T> -> Bag<'T>
    val intersect  : Bag<'T> -> Bag<'T> -> Bag<'T>
    val union      : Bag<'T> -> Bag<'T> -> Bag<'T>
    val difference : Bag<'T> -> Bag<'T> -> Bag<'T>
I have written a sudoku solver in Haskell. It goes through a list and when it finds '0' (an empty cell) it will get the numbers that could fit and try them:
import Data.List (group, (\\), sort)
import Data.Maybe (fromMaybe)
row :: Int -> [Int] -> [Int]
row y grid = foldl (\acc x -> (grid !! x) : acc) [] [y' .. y'+8]
  where y' = y*9

column :: Int -> [Int] -> [Int]
column x grid = foldl (\acc n -> (grid !! n) : acc) [] [x, x+9 .. 80]

box :: Int -> Int -> [Int] -> [Int]
box x y grid = foldl (\acc n -> (grid !! n) : acc) [] [x + y*9*3 + y' | y' <- [0,9,18], x <- [x'..x'+2]]
  where x' = x*3

isValid :: [Int] -> Bool
isValid grid = and [isValidRow, isValidCol, isValidBox]
  where isValidRow = isValidDiv row
        isValidCol = isValidDiv column
        isValidBox = and $ foldl (\acc (x,y) -> isValidList (box x y grid) : acc) [] [(x,y) | x <- [0..2], y <- [0..2]]
        isValidDiv f = and $ foldl (\acc x -> isValidList (f x grid) : acc) [] [0..8]
        isValidList = all (\x -> length x <= 1) . tail . group . sort -- tail removes entries that are '0'
isComplete :: [Int] -> Bool
isComplete grid = length (filter (== 0) grid) == 0
solve :: Maybe [Int] -> Maybe [Int]
solve grid' = foldl f Nothing [0..80]
  where grid = fromMaybe [] grid'
        f acc x
          | isValid grid = if isComplete grid then grid' else f' acc x
          | otherwise = acc
        f' acc x
          | (grid !! x) == 0 = case guess x grid of
              Nothing -> acc
              Just x -> Just x
          | otherwise = acc

guess :: Int -> [Int] -> Maybe [Int]
guess x grid
  | length valid /= 0 = foldl f Nothing valid
  | otherwise = Nothing
  where valid = [1..9] \\ (row rowN grid ++ column colN grid ++ box (fst boxN) (snd boxN) grid) -- remove numbers already used in row/column/box
        rowN = x `div` 9      -- e.g. 0 `div` 9 = 0,  75 `div` 9 = 8
        colN = x - (rowN * 9) -- e.g. 0 - 0 = 0,  75 - 72 = 3
        boxN = (colN `div` 3, rowN `div` 3)
        before x = take x grid
        after x = drop (x+1) grid
        f acc y = case solve $ Just $ before x ++ [y] ++ after x of
            Nothing -> acc
            Just x -> Just x
For some puzzles this works, for example this one:
sudoku :: [Int]
sudoku = [5,3,0,6,7,8,0,1,2,
6,7,0,0,0,0,3,4,8,
0,0,8,0,0,0,5,0,7,
8,0,0,0,0,1,0,0,3,
4,2,6,0,0,3,7,9,0,
7,0,0,9,0,0,0,5,0,
9,0,0,5,0,7,0,0,0,
2,8,7,4,1,9,6,0,5,
3,0,0,2,8,0,1,0,0]
It took under a second; however, this one:
sudoku :: [Int]
sudoku = [5,3,0,0,7,0,0,1,2,
6,7,0,0,0,0,3,4,8,
0,0,0,0,0,0,5,0,7,
8,0,0,0,0,1,0,0,3,
4,2,6,0,0,3,7,9,0,
7,0,0,9,0,0,0,5,0,
9,0,0,5,0,7,0,0,0,
2,8,7,4,1,9,6,0,5,
3,0,0,2,8,0,1,0,0]
I have not seen it finish. I don't think this is a problem with the method itself, as it does return correct results.
Profiling showed that most of the time was spent in the "isValid" function. Is there something obviously inefficient/slow about that function?
The implementation is of course improvable, but that's not the problem. The problem is that for the second grid, the simple guess-and-check algorithm needs a lot of backtracking. Even if you speed up each of your functions 1000-fold, there will be grids where it still needs several times the age of the universe to find the (first, if the grid is not unique) solution.
You need a better algorithm to avoid that. A fairly efficient method of avoiding such cases is to guess the square with the least number of possibilities first. That doesn't avoid all bad cases, but it reduces them a lot.
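A hedged sketch of that heuristic, in terms of the row, column and box helpers from the question (the names candidates and bestCell are mine, not the poster's):

import Data.List (minimumBy, (\\))
import Data.Ord (comparing)

-- Candidate digits for cell i, mirroring `valid` in `guess` above.
candidates :: Int -> [Int] -> [Int]
candidates i grid = [1..9] \\ (row r grid ++ column c grid ++ box (c `div` 3) (r `div` 3) grid)
  where r = i `div` 9
        c = i `mod` 9

-- The empty cell with the fewest candidates, if any cell is empty.
bestCell :: [Int] -> Maybe (Int, [Int])
bestCell grid = case empties of
    [] -> Nothing
    _  -> Just (minimumBy (comparing (length . snd)) empties)
  where empties = [ (i, candidates i grid) | i <- [0..80], grid !! i == 0 ]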
One thing that you should also do is replace the length thing == 0 check with null thing. With the relatively short lists occurring here, the effect is limited, but in general it can be dramatic (and in general you should also not use length list <= 1, use null $ drop 1 list instead).
isValidList = all (\x -> length x <= 1) . tail . group . sort -- tail removes entries that are '0'
If the original list does not contain any zeros, tail will remove something else, perhaps a list of two ones. I'd replace tail . group . sort with group . sort . filter (/= 0).
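With that change (and the now-unnecessary tail dropped), the line would read:

isValidList = all ((<= 1) . length) . group . sort . filter (/= 0)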
I don't understand why isValidBox and isValidDiv use foldl as map appears to be adequate. Have I missed something / are they doing something terribly clever?
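For reference, here is a sketch of the simpler formulation that comment suggests (behavior unchanged; grid is assumed in scope as in the question):

isValidDiv f = all (\x -> isValidList (f x grid)) [0..8]
isValidBox = and [ isValidList (box x y grid) | x <- [0..2], y <- [0..2] ]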
I'm a beginner to functional languages, and I'm trying to get the whole thing down in Haskell. Here's a quick-and-dirty function that finds all the factors of a number:
factors :: (Integral a) => a -> [a]
factors x = filter (\z -> x `mod` z == 0) [2..x `div` 2]
Works fine, but I found it to be unbearably slow for large numbers. So I made myself a better one:
factorcalc :: (Integral a) => a -> a -> [a] -> [a]
factorcalc x y z
  | y `elem` z = sort z
  | x `mod` y == 0 = factorcalc x (y+1) (z ++ [y] ++ [x `div` y])
  | otherwise = factorcalc x (y+1) z
But here's my problem: Even though the code works, and can cut literally hours off the execution time of my programs, it's hideous!
It reeks of ugly imperative thinking: It constantly updates a counter and a data structure in a loop until it finishes. Since you can't change state in purely functional programming, I cheated by holding the data in the parameters, which the function simply passes to itself over and over again.
I may be wrong, but there simply must be a better way of doing the same thing...
Note that the original question asked for all the factors, not for only the prime factors. There being many fewer prime factors, they can probably be found more quickly. Perhaps that's what the OQ wanted. Perhaps not. But let's solve the original problem and put the "fun" back in "functional"!
Some observations:
The two functions don't produce the same output: if x is a perfect square, the second function includes the square root twice.
The first function checks a number of potential factors proportional to the size of x; the second function checks only a number proportional to the square root of x, then stops (with the bug noted above).
The first function (factors) allocates a list of all integers from 2 to n div 2, where the second function never allocates a list but instead visits fewer integers one at a time in a parameter. I ran the optimizer with -O and looked at the output with -ddump-simpl, and GHC just isn't smart enough to optimize away those allocations.
factorcalc is tail-recursive, which means it compiles into a tight machine-code loop; filter is not and does not.
Some experiments show that the square root is the killer:
Here's a sample function that produces the factors of x from z down to 2:
factors_from x 1 = []
factors_from x z
  | x `mod` z == 0 = z : factors_from x (z-1)
  | otherwise = factors_from x (z-1)

factors'' x = factors_from x (x `div` 2)
It's a bit faster because it doesn't allocate, but it's still not tail-recursive.
Here's a tail-recursive version that is more faithful to the original:
factors_from' x 1 l = l
factors_from' x z l
  | x `mod` z == 0 = factors_from' x (z-1) (z:l)
  | otherwise = factors_from' x (z-1) l

factors''' x = factors_from' x (x `div` 2) []
This is still slower than factorcalc because it enumerates all the integers from 2 to x div 2, whereas factorcalc stops at the square root.
Armed with this knowledge, we can now create a more functional version of factorcalc which replicates both its speed and its bug:
factors'''' x = sort $ uncurry (++) $ unzip $ takeWhile (uncurry (<=)) $
[ (z, x `div` z) | z <- [2..x], x `mod` z == 0 ]
I didn't time it exactly, but given 100 million as an input, both it and factorcalc terminate instantaneously, where the others all take a number of seconds.
How and why the function works is left as an exercise for the reader :-)
ADDENDUM: OK, to mitigate the eyeball bleeding, here's a slightly saner version (and without the bug):
saneFactors x = sort $ concat $ takeWhile small $
                [ pair z | z <- [2..], x `mod` z == 0 ]
  where pair z = if z * z == x then [z] else [z, x `div` z]
        small [z, z'] = z < z'
        small [z] = True
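A quick check of the saner version:

> saneFactors 50
[2,5,10,25]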
Okay, take a deep breath. It'll be all right.
First of all, why is your first attempt slow? How is it spending its time?
Can you think of a recursive definition for the prime factorization that doesn't have that property?
(Hint.)
Firstly, although factorcalc is "ugly", you could add a wrapper function factors' x = factorcalc x 2 [], add a comment, and move on.
If you want to make a 'beautiful' factors fast, you need to find out why it is slow. Looking at your two functions, factors walks the list about n/2 elements long, but factorcalc stops after around sqrt n iterations.
Here is another factors that also stops after about sqrt n iterations, but uses a fold instead of explicit iteration. It also breaks the problem into three parts: finding the factors (factor), stopping at the square root of x (small), and computing pairs of factors (factorize):
factors' :: (Integral a) => a -> [a]
factors' x = sort (foldl factorize [] (takeWhile small (filter factor [2..])))
  where
    factor z = x `mod` z == 0
    small z = z <= (x `div` z)
    factorize acc z = z : (if z == y then acc else y : acc)
      where y = x `div` z
This is marginally faster than factorcalc on my machine. You can fuse factor and factorize, and then it is about twice as fast as factorcalc.
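For illustration, one possible fusion (a sketch of what that might look like; the name factorsFused is mine, and I haven't timed this variant):

import Data.List (sort)

factorsFused :: (Integral a) => a -> [a]
factorsFused x = sort (foldl collect [] (takeWhile small [2..]))
  where
    small z = z <= x `div` z
    collect acc z
      | x `mod` z == 0 = let y = x `div` z
                         in z : (if z == y then acc else y : acc)
      | otherwise      = acc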
The Profiling and Optimization chapter of Real World Haskell is a good guide to the GHC suite's performance tools for tackling tougher performance problems.
By the way, I have a minor style nitpick with factorcalc: it is much more efficient to prepend single elements to the front of a list (O(1)) than to append to the end of a list of length n (O(n)). The lists of factors are typically small, so it is not such a big deal, but factorcalc should probably be something like:
factorcalc :: (Integral a) => a -> a -> [a] -> [a]
factorcalc x y z
  | y `elem` z = sort z
  | x `mod` y == 0 = factorcalc x (y+1) (y : (x `div` y) : z)
  | otherwise = factorcalc x (y+1) z
Since you can't change state in purely functional programming, I cheated by holding the data in the parameters, which the function simply passes to itself over and over again.
Actually, this is not cheating; this is a—no, make that the—standard technique! That sort of parameter is usually known as an "accumulator," and it's generally hidden within a helper function that does the actual recursion after being set up by the function you're calling.
A common case is when you're doing list operations that depend on the previous data in the list. The two problems you need to solve are: where do you get the data about previous iterations, and how do you deal with the fact that your "working area of interest" for any particular iteration is actually at the tail of the result list you're building? For both of these, the accumulator comes to the rescue. For example, to generate a list where each element is the sum of all of the elements of the input list up to that point:
sums :: Num a => [a] -> [a]
sums inp = helper inp []
  where
    helper [] acc = reverse acc
    helper (x:xs) [] = helper xs [x]
    helper (x:xs) acc@(h:_) = helper xs (x+h : acc)
Note that we flip the direction of the accumulator, so we can operate on the head of that, which is much more efficient (as Dominic mentions), and then we just reverse the final output.
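A quick check:

> sums [1,2,3,4]
[1,3,6,10]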
By the way, I found reading The Little Schemer to be a useful introduction that offers good practice in thinking recursively.
This seemed like an interesting problem, and I hadn't coded any real Haskell in a while, so I gave it a crack. I've run both it and Norman's factors'''' against the same values, and it feels like mine's faster, though they're both so close that it's hard to tell.
factors :: Int -> [Int]
factors n = firstFactors ++ reverse [ n `div` i | i <- firstFactors ]
  where
    firstFactors = filter (\i -> n `mod` i == 0) (takeWhile (\i -> i * i <= n) [2..n])
Factors can be paired up into those that are greater than sqrt n and those that are less than or equal to it (for simplicity's sake, the exact square root, if n is a perfect square, falls into the latter category). So if we just take the ones that are less than or equal to, we can calculate the others later by doing n `div` i. They'll be in reverse order, so we can either reverse firstFactors first or reverse the result later; it doesn't really matter.
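A quick check against the earlier examples:

> factors 50
[2,5,10,25]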
This is my "functional" approach to the problem. ("Functional" in quotes, because I'd approach this problem the same way even in non-functional languages, but maybe that's because I've been tainted by Haskell.)
{-# LANGUAGE PatternGuards #-}

factors :: (Integral a) => a -> [a]
factors = multiplyFactors . primeFactors primes 0 [] . abs where

    multiplyFactors [] = [1]
    multiplyFactors ((p, n) : factors) =
        [ pn * x
        | pn <- take (succ n) $ iterate (* p) 1
        , x <- multiplyFactors factors ]

    primeFactors _ _ _ 0 = error "Can't factor 0"
    primeFactors (p:primes) n list x
        | (x', 0) <- x `divMod` p
        = primeFactors (p:primes) (succ n) list x'
    primeFactors _ 0 list 1 = list
    primeFactors (_:primes) 0 list x = primeFactors primes 0 list x
    primeFactors (p:primes) n list x
        = primeFactors primes 0 ((p, n) : list) x

    primes = sieve [2..]
    sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
primes is the naive Sieve of Eratosthenes. There are better sieves, but this is the shortest method.
sieve [2..]
=> 2 : sieve [x | x <- [3..], x `mod` 2 /= 0]
=> 2 : 3 : sieve [x | x <- [4..], x `mod` 2 /= 0, x `mod` 3 /= 0]
=> 2 : 3 : sieve [x | x <- [5..], x `mod` 2 /= 0, x `mod` 3 /= 0]
=> 2 : 3 : 5 : ...
primeFactors is the simple repeated trial-division algorithm: it walks through the list of primes, and tries dividing the given number by each, recording the factors as it goes.
primeFactors (2:_) 0 [] 50
=> primeFactors (2:_) 1 [] 25
=> primeFactors (3:_) 0 [(2, 1)] 25
=> primeFactors (5:_) 0 [(2, 1)] 25
=> primeFactors (5:_) 1 [(2, 1)] 5
=> primeFactors (5:_) 2 [(2, 1)] 1
=> primeFactors _ 0 [(5, 2), (2, 1)] 1
=> [(5, 2), (2, 1)]
multiplyFactors takes a list of primes and powers, and explodes it back out to a full list of factors.
multiplyFactors [(5, 2), (2, 1)]
=> [ pn * x
   | pn <- take (succ 2) $ iterate (* 5) 1
   , x <- multiplyFactors [(2, 1)] ]
=> [ pn * x | pn <- [1, 5, 25], x <- [1, 2] ]
=> [1, 2, 5, 10, 25, 50]
factors just strings these two functions together, along with an abs to prevent infinite recursion in case the input is negative.
I don't know much about Haskell, but somehow I think this link is appropriate:
http://www.willamette.edu/~fruehr/haskell/evolution.html
Edit: I'm not entirely sure why people are so aggressive about the downvoting on this. The original poster's real problem was that the code was ugly; while it's funny, the point of the linked article is, to some extent, that advanced Haskell code is, in fact, ugly; the more you learn, the uglier your code gets, to some extent. The point of this answer was to point out to the OP that apparently, the ugliness of the code that he was lamenting is not uncommon.