Inconsistent behaviour with Haskell - algorithm

I was reading about perceptrons and trying to implement one in Haskell. The algorithm seems to be working as far as I can test. I'm going to rewrite the code entirely at some point, but before doing so I thought I'd ask a few questions that have arisen while coding this.
Training works when train returns a complete neuron: let neuron = train [1,1] set works fine. But if I change the train function to return an incomplete neuron without the inputs, or try to pattern match and construct only an incomplete neuron, the code falls into a never-ending loop.
tl;dr: when returning a complete neuron everything works, but when returning a partially applied (curried) neuron, the code falls into a loop.
module Main where

import System.Random

type Inputs = [Float]
type Weights = [Float]
type Threshold = Float
type Output = Float
type Trainingset = [(Inputs, Output)]

data Neuron = Neuron Threshold Weights Inputs deriving Show

output :: Neuron -> Output
output (Neuron threshold weights inputs) =
    if total >= threshold then 1 else 0
    where total = sum $ zipWith (*) weights inputs

rate :: Float -> Float -> Float
rate t o = 0.1 * (t - o)

newweight :: Float -> Float -> Weights -> Inputs -> Weights
newweight t o weight input = zipWith nw weight input
    where nw w x = w + (rate t o) * x

learn :: Neuron -> Float -> Neuron
learn on@(Neuron tr w i) t =
    let o = output on
    in Neuron tr (newweight t o w i) i

converged :: (Inputs -> Neuron) -> Trainingset -> Bool
converged n set = not $ any (\(i,o) -> output (n i) /= o) set

train :: Weights -> Trainingset -> Neuron
train w s = train' s (Neuron 1 w)

train' :: Trainingset -> (Inputs -> Neuron) -> Neuron
train' s n | not $ converged n set
               = let (Neuron t w i) = train'' s n
                 in train' s (Neuron t w)
           | otherwise = n $ fst $ head s

train'' :: Trainingset -> (Inputs -> Neuron) -> Neuron
train'' ((a,b):[]) n = learn (n a) b
train'' ((a,b):xs) n = let (Neuron t w i) = learn (n a) b
                       in train'' xs (Neuron t w)

set :: Trainingset
set = [ ([1,0], 0)
      , ([1,1], 1)
      , ([0,1], 0)
      , ([0,0], 0) ]

randomWeights :: Int -> IO [Float]
randomWeights n = do
    g <- newStdGen
    return $ take n $ randomRs (-1, 1) g

main = do
    w <- randomWeights 2
    let (Neuron t w i) = train w set
    print $ output $ Neuron t w [1,1]
    return ()
Edit: as per the comments, here is a little more detail.
Running with the code above, I get:
perceptron: <<loop>>
But by editing the main function to:
main = do
    w <- randomWeights 2
    let neuron = train w set
    print $ neuron
    return ()
(note the let neuron and print lines), everything works, and the output is:
Neuron 1.0 [0.71345896,0.33792675] [1.0,0.0]

Perhaps I am missing something, but I boiled your test case down to this program:
module Main where

data Foo a = Foo a

main = do
    x <- getLine
    let (Foo x) = Foo x
    putStrLn x
This further simplifies to:
main = do
    x <- getLine
    let x = x
    putStrLn x
The problem is that binding (Foo x) to something that depends on x
is a cyclic dependency. To evaluate x, we need to know the value of
x. OK, so we just need to calculate x. To calculate x, we need to
know the value of x. That's fine, we'll just calculate x. And so on.
This isn't C, remember: it's binding, not assignment, and the binding
is evaluated lazily.
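Note that a recursive binding is not wrong in itself; it only loops when the value has no productive structure. A minimal illustration (the names here are my own):
ones :: [Int]
ones = 1 : ones   -- cyclic but productive: take 5 ones evaluates to [1,1,1,1,1]

loopy :: Int
loopy = loopy     -- cyclic and unproductive: forcing loopy gives <<loop>>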
Use better variable names, and it all works:
module Main where
data Foo a = Foo a
main = do
line ← getLine
let (Foo x) = Foo line
putStrLn x
(The variable in question, in your case, is w.)

This is a common mistake in Haskell. You cannot say things like:
let x = 0
let x = x + 1
and have it mean what it would in a language with assignment, or even non-recursive binding. The first line is irrelevant: it is shadowed by the second line, which defines x as x + 1. That is a recursive definition, x = ((((...)+1)+1)+1)+1, which loops upon evaluation.
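A minimal sketch of the non-recursive fix, using a fresh name (x' is my own choice of name):
main :: IO ()
main = do
    let x = 0 :: Int
    -- let x = x + 1   -- recursive: defines the new x in terms of itself; forcing it loops
    let x' = x + 1     -- fresh name: refers to the previous x and evaluates to 1
    print x'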

Related

Gradient Descent algorithm not converging in Haskell

I am trying to implement the gradient descent algorithm in Andrew Ng's ML course. After reading in the data, I try to implement the following, updating my list of theta values 1000 times, with the expectation of some convergence.
The algorithm in question is gradientDescent. I know that a typical cause of this problem is that alpha can be too large, but when I change alpha by a factor of n, my results change by a factor of n. The same happens when I change iterations by a factor of n. I want to say this could be due to Haskell's laziness, but I'm completely unsure. Any help would be appreciated.
module LR1V where

import qualified Data.Matrix as M
import System.IO
import Data.List.Split
import qualified Data.Vector as V

main :: IO ()
main = do
    contents <- getContents
    let lns = lines contents :: [String]
        entries = map (splitOn ",") lns :: [[String]]
        mbPoints = mapM readPoints entries :: Maybe [[Double]]
    case mbPoints of
        Just points -> runData points
        _ -> putStrLn "Error: it is possible the file is incorrectly formatted"

readPoints :: [String] -> Maybe [Double]
readPoints dat@(x:y:_) = return $ map read dat
readPoints _ = Nothing

runData :: [[Double]] -> IO ()
runData pts = do
    let (mxs,ys) = runPoints pts
        c = M.ncols mxs
        m = M.nrows mxs
        thetas = M.zero 1 (M.ncols mxs)
        alpha = 0.01
        iterations = 1000
        results = gradientDescent mxs ys thetas alpha m c iterations
    print results

runPoints :: [[Double]] -> (M.Matrix Double, [Double])
runPoints pts = (xs, ys) where
    xs = M.fromLists $ addX0 $ map init pts
    ys = map last pts

-- X0 will always be 1
addX0 :: [[Double]] -> [[Double]]
addX0 = map (1.0 :)

-- theta is 1xn and x is nx1, where n is the amount of features,
-- so it is safe to assume a scalar results from the multiplication
hypothesis :: M.Matrix Double -> M.Matrix Double -> Double
hypothesis thetas x =
    M.getElem 1 1 (M.multStd thetas x)

gradientDescent :: M.Matrix Double
                -> [Double]
                -> M.Matrix Double
                -> Double
                -> Int
                -> Int
                -> Int
                -> [Double]
gradientDescent mxs ys thetas alpha m n it =
    let x i = M.colVector $ M.getRow i mxs
        y i = ys !! (i-1)
        h i = hypothesis thetas (x i)
        thL = zip [1..] $ M.toList thetas :: [(Int, Double)]
        z i j = ((h i) - (y i)) * (M.getElem i j $ mxs)
        sumSquares j = sum [z i j | i <- [1..m]]
        thetaJ t j = t - ((alpha * (1 / (fromIntegral m))) * (sumSquares j))
        result = map snd $ foldl (\ts _ -> [(j, thetaJ t j) | (j,t) <- ts]) thL [1..it]
    in result
and the data...
6.1101,17.592
5.5277,9.1302
8.5186,13.662
7.0032,11.854
5.8598,6.8233
8.3829,11.886
7.4764,4.3483
8.5781,12
6.4862,6.5987
5.0546,3.8166
5.7107,3.2522
14.164,15.505
5.734,3.1551
8.4084,7.2258
5.6407,0.71618
5.3794,3.5129
6.3654,5.3048
5.1301,0.56077
6.4296,3.6518
7.0708,5.3893
6.1891,3.1386
20.27,21.767
5.4901,4.263
6.3261,5.1875
5.5649,3.0825
18.945,22.638
12.828,13.501
10.957,7.0467
13.176,14.692
22.203,24.147
5.2524,-1.22
6.5894,5.9966
9.2482,12.134
5.8918,1.8495
8.2111,6.5426
7.9334,4.5623
8.0959,4.1164
5.6063,3.3928
12.836,10.117
6.3534,5.4974
5.4069,0.55657
6.8825,3.9115
11.708,5.3854
5.7737,2.4406
7.8247,6.7318
7.0931,1.0463
5.0702,5.1337
5.8014,1.844
11.7,8.0043
5.5416,1.0179
7.5402,6.7504
5.3077,1.8396
7.4239,4.2885
7.6031,4.9981
6.3328,1.4233
6.3589,-1.4211
6.2742,2.4756
5.6397,4.6042
9.3102,3.9624
9.4536,5.4141
8.8254,5.1694
5.1793,-0.74279
21.279,17.929
14.908,12.054
18.959,17.054
7.2182,4.8852
8.2951,5.7442
10.236,7.7754
5.4994,1.0173
20.341,20.992
10.136,6.6799
7.3345,4.0259
6.0062,1.2784
7.2259,3.3411
5.0269,-2.6807
6.5479,0.29678
7.5386,3.8845
5.0365,5.7014
10.274,6.7526
5.1077,2.0576
5.7292,0.47953
5.1884,0.20421
6.3557,0.67861
9.7687,7.5435
6.5159,5.3436
8.5172,4.2415
9.1802,6.7981
6.002,0.92695
5.5204,0.152
5.0594,2.8214
5.7077,1.8451
7.6366,4.2959
5.8707,7.2029
5.3054,1.9869
8.2934,0.14454
13.394,9.0551
5.4369,0.61705
When alpha is 0.01, my thetas evaluate to [58.39135051546406,653.2884974555699]. When alpha is 0.001, my values become [5.839135051546473,65.32884974555617]. When iterations is changed to 10,000, my values return to what they were before.
It turned out that with each update of the theta values, the hypothesis h(x) was using the initial theta vector each time, rather than the updated vector. After fixing that, I get a decent approximation of my theta values. However, increasing the number of iterations by a large factor still changes my results in an odd way.
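For reference, here is a sketch of one way to restructure the loop so each iteration uses the parameters produced by the previous one (an illustrative, list-based rewrite rather than the actual fix; step and gradientDescent' are my own names):
import Data.List (foldl')

-- One simultaneous-update step of batch gradient descent for linear
-- regression. rows are feature vectors already prefixed with the bias
-- term 1.0, ys the targets, ts the current parameter vector.
step :: Double -> [[Double]] -> [Double] -> [Double] -> [Double]
step alpha rows ys ts =
    [ t - alpha / m * grad j | (j, t) <- zip [0 ..] ts ]
  where
    m      = fromIntegral (length rows)
    h row  = sum (zipWith (*) ts row)  -- the hypothesis uses the current ts
    grad j = sum [ (h row - y) * (row !! j) | (row, y) <- zip rows ys ]

-- A strict left fold threads the updated parameter vector through the
-- iterations, so every step sees the thetas produced by the previous step.
gradientDescent' :: Double -> Int -> [[Double]] -> [Double] -> [Double] -> [Double]
gradientDescent' alpha iters rows ys ts0 =
    foldl' (\ts _ -> step alpha rows ys ts) ts0 [1 .. iters]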

Solving CHRL4 on CodeChef (CHEF and Way) in Haskell

I am trying to solve this question in Haskell, but the CodeChef judge keeps saying my answer is wrong. The question is as follows:
After visiting a childhood friend, Chef wants to get back home. The friend lives on the first street, and Chef himself lives on the N-th (and last) street. Their city is a bit special: you can move from the X-th street to the Y-th street if and only if 1 <= Y - X <= K, where K is an integer value that is given to you. Chef wants to get home in such a way that the product of the special numbers of all visited streets (including the first and the N-th) is minimal. Please help him find such a product.
Input
The first line of input consists of two integers, N and K: the number of streets and the value of K respectively. The second line consists of N numbers, A1, A2, ..., AN, where Ai equals the special number of the i-th street.
The output should be given modulo 1000000007.
Input
4 2
1 2 3 4
Output
8
The solution I used is as follows:
import qualified Data.ByteString.Char8 as B
import Data.Maybe (fromJust)

findMinIndex x index minIndex n
    | index == n = minIndex
    | (x !! index) < (x !! minIndex) = findMinIndex x (index+1) index n
    | otherwise = findMinIndex x (index+1) minIndex n

minCost [] _ = 1
minCost (x:xs) k = let indexList = take k xs
                       minIndex = findMinIndex indexList 0 0 (length indexList)
                   in x * minCost (drop minIndex xs) k

main :: IO ()
main = do
    t <- B.getContents
    let inputs = B.lines t
    let firstLine = inputs !! 0
    let secondLine = inputs !! 1
    let [n,k] = map (fst . fromJust . B.readInt) $ B.split ' ' firstLine
    let specialNums = reverse $ map (fst . fromJust . B.readInteger) $ B.split ' ' secondLine
    putStrLn $ show ((minCost specialNums k) `mod` 1000000007)
It worked for the given test case and a few other test cases I tried out, but it is not accepted by CodeChef. I followed the editorial for the problem when writing it: starting from the last number in the list of special numbers, the program searches its immediate k predecessors, finds the minimum in that range, multiplies it with the current value, and so on until the beginning of the list.
Your algorithm doesn't always give the smallest product. Consider this input:
5 2
3 2 3 2 3
Here the greedy choice of always stepping to the cheapest street within reach yields 3 * 2 * 2 * 3 = 36, while the path 1 -> 3 -> 5 costs only 3 * 3 * 3 = 27. The editorial explains the problem thoroughly; you really should read it again.
This problem is basically a shortest-path problem: streets are vertices, possible movements from street to street are the edges of the graph, and the weight of an edge is determined by the special value of its tail alone. Although the total movement cost is defined as the product rather than the sum of all the costs, the question can be normalized by taking logarithms of all the special values, since
a * b = exp(log(a) + log(b))
Since log is a monotonically increasing function, the minimal product is just the minimal sum of logarithms.
In the editorial the editor picked Dijkstra's algorithm; after the log transformation it is a standard shortest-path problem and can be solved with any shortest-path algorithm you like.
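A tiny numeric illustration of that normalization (the values here are my own, chosen arbitrarily):
-- Minimizing a product is the same as minimizing the sum of logs,
-- since exp and log are monotone and a * b = exp (log a + log b).
costs :: [Double]
costs = [3, 2, 3]

productCost, viaLogs :: Double
productCost = product costs                  -- 18.0
viaLogs     = exp (sum (map log costs))      -- ~18.0, up to rounding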
There are many implementations of Dijkstra's algorithm in Haskell; I found two on Hackage and one here. The parsing and graph-initializing code is straightforward.
import Control.Monad (foldM)
import Control.Monad.ST
import Data.Array
import Data.Array.MArray
import Data.Array.ST
import Data.Function (on)
import Data.IntMap.Strict as M
import Data.List (groupBy)
import Data.Set as S

-- Code from http://rosettacode.org/wiki/Dijkstra's_algorithm#Haskell
dijkstra :: (Ix v, Num w, Ord w, Bounded w) => v -> v -> Array v [(v,w)] -> (Array v w, Array v v)
dijkstra src invalid_index adj_list = runST $ do
    min_distance <- newSTArray b maxBound
    writeArray min_distance src 0
    previous <- newSTArray b invalid_index
    let aux vertex_queue =
            case S.minView vertex_queue of
                Nothing -> return ()
                Just ((dist, u), vertex_queue') ->
                    let edges = adj_list Data.Array.! u
                        f vertex_queue (v, weight) = do
                            let dist_thru_u = dist + weight
                            old_dist <- readArray min_distance v
                            if dist_thru_u >= old_dist then
                                return vertex_queue
                            else do
                                let vertex_queue' = S.delete (old_dist, v) vertex_queue
                                writeArray min_distance v dist_thru_u
                                writeArray previous v u
                                return $ S.insert (dist_thru_u, v) vertex_queue'
                    in
                        foldM f vertex_queue' edges >>= aux
    aux (S.singleton (0, src))
    m <- freeze min_distance
    p <- freeze previous
    return (m, p)
  where
    b = bounds adj_list
    newSTArray :: Ix i => (i,i) -> e -> ST s (STArray s i e)
    newSTArray = newArray

shortest_path_to :: (Ix v) => v -> v -> Array v v -> [v]
shortest_path_to target invalid_index previous =
    aux target []
  where
    aux vertex acc | vertex == invalid_index = acc
                   | otherwise = aux (previous Data.Array.! vertex) (vertex : acc)

-- Code I wrote
instance Bounded Double where
    minBound = -1e100
    maxBound = 1e100

constructInput :: Int -> Int -> M.IntMap Integer -> Array Int [(Int, Double)]
constructInput n k specMap =
    let specMap' = fmap (log . fromIntegral) specMap
        edges = [ (src, [ (dest, specMap' M.! dest) | dest <- [src+1 .. src+k], dest <= n ])
                | src <- [1..n] ]
    in array (1, n) edges

main :: IO ()
main = do
    rawInput <- getContents
    let [l, l'] = lines rawInput
        [n, k] = fmap read . words $ l
        specs = fmap read . words $ l'
        specMap = M.fromList $ [1..n] `zip` specs
        adj_list = constructInput n k specMap
        (_, previous) = dijkstra 1 0 adj_list
        path = shortest_path_to n 0 previous
        weight = (product $ fmap (specMap M.!) path) `mod` 1000000007
    print weight
PS: my program scores 30 with a lot of TLE ("Time Limit Exceeded") verdicts on CodeChef; for full marks you may have to try it yourself and find a better solution.

Randomized algorithm not behaving as expected

I am implementing an approximate counting algorithm where we:
- maintain a counter X using log(log n) bits
- initialize X to 0
- when an item arrives, increase X by 1 with probability (1/2)^X
- when the stream is over, output 2^X - 1, so that E[2^X] = n + 1
My implementation is as follows:
import System.Random

type Prob = Double
type Tosses = Int

-- for the sake of simplicity we assume 0 <= p <= 1
tos :: Prob -> StdGen -> (Bool, StdGen)
tos p s = (q <= 100*p, s')
    where (q,s') = randomR (1,100) s

toses :: Prob -> Tosses -> StdGen -> [(Bool, StdGen)]
toses _ 0 _ = []
toses p n s = let t@(b,s') = tos p s in t : toses p (pred n) s'

toses' :: Prob -> Tosses -> StdGen -> [Bool]
toses' p n = fmap fst . toses p n

morris :: StdGen -> [a] -> Int
morris s xs = go s xs 0 where
    go _ [] n = n
    go s (_:xs) n = go s' xs n' where
        (h,s') = tos (0.5^n) s
        n' = if h then succ n else n

main :: IO Int
main = do
    s <- newStdGen
    return $ morris s [1..10000]
The problem is that my X is always incorrect for any |stream| > 2, and it seems that for every StdGen and |stream| > 1000, X = 7.
I tested the same algorithm in Matlab and it works there, so I assume it's either:
- an issue with my random number generator, or
- raising 1/2 to a large n in a Double.
Can anyone suggest a path forward?
The problem is actually very simple: with randomR (1,100) you preclude values within the first percent, so you have a complete cutoff at high powers of 1/2 (which all lie in that small interval). This is actually a general rule: ranges should start at zero, not at one†, unless there's a specific reason otherwise.
But why even use a range of 100 in the first place? I'd just make it
tos :: Prob -> StdGen -> (Bool, StdGen)
tos p s = (q <= p, s')
    where (q,s') = randomR (0,1) s
†I know, Matlab gets this wrong all over the place. Just one of the many horrible things about that language.
Unrelated to your problem: as chi remarked, this kind of code looks a lot nicer if you use a suitable random monad instead of manually passing around StdGens.
import Data.Random
import Data.Random.Source.Std

type Prob = Double

tos :: Prob -> RVar Bool
tos p = do
    q <- uniform 0 1
    return $ q <= p

morris :: [a] -> RVar Int
morris xs = go xs 0 where
    go [] n = return n
    go (_:xs) n = do
        h <- tos (0.5^n)
        go xs $ if h then succ n else n

morrisTest :: Int -> IO Int
morrisTest n = runRVar (morris [1..n]) StdRandom
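A quick empirical check one might run on top of morrisTest (a sketch; the trial count of 100 is arbitrary): average the estimator 2^X - 1 over many runs, which should land near n since E[2^X] = n + 1.
-- Hypothetical harness around morrisTest from above.
checkMorris :: Int -> IO ()
checkMorris n = do
    xs <- mapM (const (morrisTest n)) [1 .. 100 :: Int]
    let estimates = map (\x -> 2 ^ x - 1) xs :: [Integer]
        mean = fromIntegral (sum estimates) / 100 :: Double
    putStrLn $ "n = " ++ show n ++ ", mean estimate = " ++ show mean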

Most efficient algorithm to find integer points within an ellipse

I'm trying to find all the integer lattice points within various 3D ellipses.
I would like my program to take an integer N, and count all the lattice points within the ellipses of the form ax^2 + by^2 + cz^2 = n, where a,b,c are fixed integers and n is between 1 and N. This program should then return N tuples of the form (n, numlatticePointsWithinEllipse n).
I'm currently doing it by counting the points on the ellipses ax^2 + by^2 + cz^2 = m, for m between 0 and n inclusive, and then summing over m. I'm also only looking at x, y and z all positive initially, and then adding in the negatives by permuting their signs later.
Ideally, I'd like to reach numbers of N = 1,000,000+ on the scale of hours.
Taking a specific example of x^2 + y^2 + 3z^2 = N, here's the Haskell code I'm currently using:
import System.Environment

isqrt :: Int -> Int
isqrt 0 = 0
isqrt 1 = 1
isqrt n = head $ dropWhile (\x -> x*x > n) $ iterate (\x -> (x + n `div` x) `div` 2) (n `div` 2)

latticePointsWithoutNegatives :: Int -> [[Int]]
latticePointsWithoutNegatives 0 = [[0,0,0]]
latticePointsWithoutNegatives n =
    [ [x,y,z] | x <- [0 .. isqrt n]
              , y <- [0 .. isqrt (n - x^2)]
              , z <- [max 0 (isqrt ((n - x^2 - y^2) `div` 3))]
              , x^2 + y^2 + 3*z^2 == n ]

latticePoints :: Int -> [[Int]]
latticePoints n =
    [ zipWith (*) [x1,x2,x3] y | [x1,x2,x3] <- latticePointsWithoutNegatives n
                               , y <- [ [a,b,c] | a <- (if x1 == 0 then [0] else [-1,1])
                                                , b <- (if x2 == 0 then [0] else [-1,1])
                                                , c <- (if x3 == 0 then [0] else [-1,1]) ] ]

latticePointsUpTo :: Int -> Int
latticePointsUpTo n = sum [length (latticePoints x) | x <- [0..n]]

listResults :: Int -> [(Int, Int)]
listResults n = [(x, latticePointsUpTo x) | x <- [1..n]]

main = do
    args <- getArgs
    let cleanArgs = read (head args)
    print (listResults cleanArgs)
I've compiled this with
ghc -O2 latticePointsTest
but using the PowerShell "Measure-Command" command, I get the following results:
Measure-Command{./latticePointsTest 10}
TotalMilliseconds : 12.0901
Measure-Command{./latticePointsTest 100}
TotalMilliseconds : 12.0901
Measure-Command{./latticePointsTest 1000}
TotalMilliseconds : 31120.4503
and going any more orders of magnitude up takes us onto the scale of days, rather than hours or minutes.
Is there anything fundamentally wrong with the algorithm I'm using? Is there any core reason why my code isn't scaling well? Any guidance will be greatly appreciated. I may also want to process the data between "latticePoints" and "latticePointsUpTo", so I can't just rely entirely on clever number theoretic counting techniques - I need the underlying tuples preserved.
Some things I would try:
isqrt is not efficient for the range of values you are working with. Simply use the floating-point sqrt function:
isqrt n = floor $ sqrt (fromIntegral n :: Double)
Alternatively, instead of computing integer square roots, use logic like this in your list comprehensions:
x <- takeWhile (\x -> x*x <= n) [0..],
y <- takeWhile (\y -> y*y <= n - x*x) [0..]
Also, I would use expressions like x*x instead of x^2.
Finally, why not compute the number of solutions with something like this:
sols a b c n =
    length [ () | x <- takeWhile (\x -> a*x*x <= n) [0..]
                , y <- takeWhile (\y -> a*x*x + b*y*y <= n) [0..]
                , z <- takeWhile (\z -> a*x*x + b*y*y + c*z*z <= n) [0..]
           ]
This does not exactly compute the same answer that you want because it doesn't account for positive and negative solutions, but you could easily modify it to compute your answer. The idea is to use one list comprehension instead of iterating over various values of n and summing.
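For instance, a sketch of one such modification (solsSigned is my name for it), weighting each nonnegative solution by its number of sign choices:
-- Each nonzero coordinate can independently be negated, so a nonnegative
-- solution with k nonzero coordinates stands for 2^k lattice points.
solsSigned :: Int -> Int -> Int -> Int -> Int
solsSigned a b c n =
    sum [ signs x * signs y * signs z
        | x <- takeWhile (\x -> a*x*x <= n) [0..]
        , y <- takeWhile (\y -> a*x*x + b*y*y <= n) [0..]
        , z <- takeWhile (\z -> a*x*x + b*y*y + c*z*z <= n) [0..]
        ]
    where signs v = if v == 0 then 1 else 2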
Finally, I think using floor and sqrt to compute the integral square root is completely safe in this case. This code verifies that floor (sqrt (fromIntegral (x*x))) == x for all x <= 3037000499:
testAll :: Int -> IO ()
testAll n =
    print $ head [ (x,a) | x <- [n, n-1 .. 1]
                         , let a = floor $ sqrt (fromIntegral (x*x) :: Double)
                         , a /= x ]

main = testAll 3037000499
Note I am running this on a 64-bit GHC - otherwise just use Int64 instead of Int since Doubles are 64-bit in either case. Takes only a minute or so to verify.
This shows that taking the floor of sqrt y will never result in the wrong answer if y <= 3037000499^2.

More efficient algorithm performs worse in Haskell

A friend of mine showed me a home exercise from a C++ course he attends. Since I already know C++ but have just started learning Haskell, I tried to solve the exercise the "Haskell way".
These are the exercise instructions (I translated them from our native language, so please comment if anything is unclear):
Write a program which reads non-zero coefficients (A,B,C,D) from the user and places them in the following equation:
A*x + B*y + C*z = D
The program should also read from the user N, which represents a range. The program should find all possible integral solutions for the equation in the range -N/2 to N/2.
For example:
Input: A = 2,B = -3,C = -1, D = 5, N = 4
Output: (-1,-2,-1), (0,-2, 1), (0,-1,-2), (1,-1, 0), (2,-1,2), (2,0, -1)
The most straightforward algorithm is to try all possibilities by brute force. I implemented it in Haskell in the following way:
triSolve :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
triSolve a b c d n =
    let equation x y z = (a * x + b * y + c * z) == d
        minN = div (-n) 2
        maxN = div n 2
    in [ (x,y,z) | x <- [minN..maxN], y <- [minN..maxN], z <- [minN..maxN], equation x y z ]
So far so good, but the exercise instructions note that a more efficient algorithm can be implemented, so I thought about how to make it better. Since the equation is linear, and based on the assumption that Z is always the first to be incremented, once a solution has been found there's no point in incrementing Z further. Instead, I should increment Y, reset Z to the minimum value of the range, and keep going. This way I can save redundant executions.
Since there are no loops in Haskell (to my understanding, at least), I realized that such an algorithm should be implemented using recursion. I implemented it in the following way:
solutions :: (Integer -> Integer -> Integer -> Bool) -> Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
solutions f maxN minN x y z
    | solved = (x,y,z) : nextCall x (y + 1) minN
    | x >= maxN && y >= maxN && z >= maxN = []
    | z >= maxN && y >= maxN = nextCall (x + 1) minN minN
    | z >= maxN = nextCall x (y + 1) minN
    | otherwise = nextCall x y (z + 1)
    where solved = f x y z
          nextCall = solutions f maxN minN

triSolve' :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
triSolve' a b c d n =
    let equation x y z = (a * x + b * y + c * z) == d
        minN = div (-n) 2
        maxN = div n 2
    in solutions equation maxN minN minN minN minN
Both yield the same results. However, trying to measure the execution time yielded the following results:
*Main> length $ triSolve' 2 (-3) (-1) 5 100
3398
(2.81 secs, 971648320 bytes)
*Main> length $ triSolve 2 (-3) (-1) 5 100
3398
(1.73 secs, 621862528 bytes)
Meaning the dumb algorithm actually performs better than the more sophisticated one. Assuming my algorithm is correct (which I hope won't turn out to be wrong :) ), I suspect the second algorithm suffers from overhead created by the recursion, which the first algorithm avoids since it's implemented using a list comprehension.
Is there a way to implement in Haskell a better algorithm than the dumb one?
(Also, I'd be glad to receive general feedback about my coding style.)
Of course there is. We have:
a*x + b*y + c*z = d
and as soon as we assume values for x and y, we have that
a*x + b*y = n
where n is a number we know.
Hence
c*z = d - n
z = (d - n) / c
And we keep only integral zs.
It's worth noting that list comprehensions are given special treatment by GHC and are generally very fast. This could explain why your triSolve (which uses a list comprehension) is faster than triSolve' (which doesn't).
For example, the solution
solve :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
-- "Buffalo buffalo buffalo buffalo Buffalo buffalo buffalo..."
solve a b c d n =
    [ (x,y,z) | x <- vals, y <- vals
              , let p = a*x + b*y
              , let z = (d - p) `div` c
              , z >= minN, z <= maxN, c * z == d - p ]
    where
        minN = negate (n `div` 2)
        maxN = n `div` 2
        vals = [minN..maxN]
runs fast on my machine:
> length $ solve 2 (-3) (-1) 5 100
3398
(0.03 secs, 4111220 bytes)
whereas the equivalent code written using do notation:
import Control.Monad (guard)

solveM :: Integer -> Integer -> Integer -> Integer -> Integer -> [(Integer,Integer,Integer)]
solveM a b c d n = do
    x <- vals
    y <- vals
    let p = a * x + b * y
        z = (d - p) `div` c
    guard $ z >= minN
    guard $ z <= maxN
    guard $ z * c == d - p
    return (x,y,z)
  where
    minN = negate (n `div` 2)
    maxN = n `div` 2
    vals = [minN..maxN]
takes twice as long to run and uses twice as much memory:
> length $ solveM 2 (-3) (-1) 5 100
3398
(0.06 secs, 6639244 bytes)
Usual caveats about benchmarking within GHCi apply; if you really want to see the difference, you need to compile the code with -O2 and use a decent benchmarking library (like Criterion).
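For instance, a minimal Criterion harness might look like this (a sketch: it assumes the solve and solveM definitions above are in scope and the criterion package is available):
import Criterion.Main

-- Benchmarks both versions on the same input used in the GHCi timings above.
main :: IO ()
main = defaultMain
    [ bench "solve"  $ nf (length . solve  2 (-3) (-1) 5) 100
    , bench "solveM" $ nf (length . solveM 2 (-3) (-1) 5) 100
    ]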
