Why does a strict length function perform noticeably faster?

I toyed around with definitions to better understand the evaluation model, and wrote two for the length of a list.
The naive definition:
len :: [a] -> Int
len [] = 0
len (_:xs) = 1 + len xs
The strict (and tail-recursive) definition (the bang pattern requires the BangPatterns extension):
slen :: [a] -> Int -> Int
slen [] n = n
slen (_:xs) !n = slen xs (n+1)
len [1..10000000] takes about 5-6 seconds to run.
slen [1..10000000] 0 takes about 3-4 seconds to run.
I'm curious why. Before I checked the performance, I was positive that they would perform about the same, because len should have at most one more thunk to evaluate. For demonstration purposes:
len [a,b,c,d]
= 1 + len [b,c,d]
= 1 + 1 + len [c,d]
= 1 + 1 + 1 + len [d]
= 1 + 1 + 1 + 1 + len []
= 1 + 1 + 1 + 1 + 0
= 4
And
slen [a,b,c,d] 0
= slen [b,c,d] 1
= slen [c,d] 2
= slen [d] 3
= slen [] 4
= 4
What makes slen noticeably faster?
P.S. I also wrote a tail-recursive lazy function (just like slen but lazy) as an attempt to close in on the reason -- maybe it's because it was tail-recursive -- but it performed about the same as the naive definition.
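For reference, such a lazy tail-recursive variant would look something like this (a sketch; the name llen is illustrative, not from the original post):
llen :: [a] -> Int -> Int
llen [] n = n
llen (_:xs) n = llen xs (n + 1) -- without the bang, (n + 1) is suspended as a thunk on every step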

The final step of len is not O(1). It is O(n) to add together n numbers. len also uses O(n) memory while slen uses O(1) memory.
The reason it uses O(n) memory is that each thunk uses up some memory. So when you have something like this:
1 + 1 + 1 + 1 + len []
there are five unevaluated thunks (including len [])
In GHCi, we can examine this thunk behavior a little more easily with the :sprint command. The :sprint command prints the given value without forcing the evaluation of any thunks (you can learn more from :help). I'll use conses ((:)) since we can more easily evaluate each thunk one at a time, but the principle is the same.
λ> let ys = map id $ 1 : 2 : 3 : [] :: [Int] -- map id prevents GHCi from being too eager here
λ> :sprint ys
ys = _
λ> take 1 ys
[1]
λ> :sprint ys
ys = 1 : _
λ> take 2 ys
[1,2]
λ> :sprint ys
ys = 1 : 2 : _
λ> take 3 ys
[1,2,3]
λ> :sprint ys
ys = 1 : 2 : 3 : _
λ> take 4 ys
[1,2,3]
λ> :sprint ys
ys = [1,2,3]
Unevaluated thunks are represented by _ and you can see that in the original ys there are 4 thunks nested inside of each other, one for each part of the list (including []).
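A complementary experiment with the same technique: forcing only the spine of a list (e.g. with length) leaves the elements themselves unevaluated, and :sprint shows exactly that (a sketch session in the same style):
λ> let zs = map (+1) [1,2,3] :: [Int]
λ> length zs
3
λ> :sprint zs
zs = [_,_,_]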
There isn't a good way that I know of to see this in Int because its evaluation is more all-or-nothing, but it still builds up a nested thunk in the same way. If you could watch that thunk step by step, its evaluation would look something like this:
len [a,b,c,d]
= 1 + len [b,c,d]
= 1 + 1 + len [c,d]
= 1 + 1 + 1 + len [d]
= 1 + 1 + 1 + 1 + len []
= 1 + 1 + 1 + 1 + 0
= 1 + 1 + 1 + 1 -- Here it stops building the thunks and starts evaluating them
= 1 + 1 + 2
= 1 + 3
= 4
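If you want slen's strict behavior without threading an accumulator by hand, the usual tool is foldl' from Data.List, which forces the accumulator at every step; a minimal sketch (lenStrict is an illustrative name, not from the posts above):
import Data.List (foldl')

-- foldl' evaluates the accumulator at each step, so no chain of
-- (+1) thunks can build up; this behaves like slen.
lenStrict :: [a] -> Int
lenStrict = foldl' (\n _ -> n + 1) 0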

David Young's answer gives the correct explanation of the difference in evaluation order. You should think about Haskell evaluation in the way he outlines.
Let me show you how you can see the difference in the Core. I think it's actually more visible with optimizations on, because the evaluation ends up as an explicit case statement. If you've never played with Core before, see the canonical SO question on the topic: Reading GHC Core.
Generate the core output with ghc -O2 -ddump-simpl -dsuppress-all -ddump-to-file SO27392665.hs. You'll see that GHC splits both len and slen into a recursive "worker" function, $wlen or $wslen, and a nonrecursive "wrapper" function. Because the vast majority of the time is spent in the recursive "workers," focus on them:
Rec {
$wlen
$wlen =
\ @ a_arZ w_sOR ->
case w_sOR of _ {
[] -> 0;
: ds_dNU xs_as0 ->
case $wlen xs_as0 of ww_sOU { __DEFAULT -> +# 1 ww_sOU }
}
end Rec }
len
len =
\ @ a_arZ w_sOR ->
case $wlen w_sOR of ww_sOU { __DEFAULT -> I# ww_sOU }
Rec {
$wslen
$wslen =
\ @ a_arR w_sOW ww_sP0 ->
case w_sOW of _ {
[] -> ww_sP0;
: ds_dNS xs_asW -> $wslen xs_asW (+# ww_sP0 1)
}
end Rec }
slen
slen =
\ @ a_arR w_sOW w1_sOX ->
case w1_sOX of _ { I# ww1_sP0 ->
case $wslen w_sOW ww1_sP0 of ww2_sP4 { __DEFAULT -> I# ww2_sP4 }
}
You can see that $wslen has only one case, while $wlen has two. If you go look at David's answer, you can trace what happens in $wlen: it does its case analysis on the outermost list constructor ([]/:), then makes the recursive call $wlen xs_as0 (i.e. len xs), whose result it also cases on, i.e. forces, before adding 1.
In $wslen, on the other hand, there's only the one case statement. In the recursive branch, there's simply an unboxed addition, (+# ww_sP0 1), which doesn't create a thunk.
(Note: a previous version of this answer had stated that with -O GHC could specialize $wslen but not $wlen to use unboxed Int#s. That's not the case.)
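For intuition, the worker/wrapper split GHC performs on slen corresponds roughly to this source-level transformation (a hand-written sketch using GHC primitives; the names wslen and slen' are illustrative):
{-# LANGUAGE MagicHash #-}
import GHC.Exts (Int (I#), Int#, (+#))

-- Recursive worker: the accumulator is an unboxed Int#, and +# is
-- primitive machine addition, so no thunk is ever allocated for it.
wslen :: [a] -> Int# -> Int#
wslen [] n = n
wslen (_:xs) n = wslen xs (n +# 1)

-- Non-recursive wrapper: unbox the argument, run the worker, rebox.
slen' :: [a] -> Int -> Int
slen' xs (I# n) = I# (wslen xs n)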

Related

Difference between two kinds of recursive function

In OCaml, there are two ways I have seen to write a map function, for example:
let rec map f xs =
match xs with
| [] -> []
| x::rest -> f x :: map f rest
and
let map f xs =
let rec go xs =
match xs with
| [] -> []
| x::rest -> f x :: go rest
in go xs
The second one looks more optimized, because it resembles loop-invariant elimination, but in functional programming it may involve allocating a new closure. Can anyone explain the difference between the two styles of recursive function, in particular in terms of performance? Thanks for your help!
I couldn't find similar questions on SO, and I expect there is a term like "recursive invariant elimination" to describe the kind of transformation from the first program to the second one.
I've always wondered the exact same thing: does the compiler optimize invariant arguments in recursive functions?
Since your question motivated me to benchmark it, let me share here my results.
Protocol
I have not tried it with map, since it would require big lists, which would result in a Stack_overflow. I could try it with rev_map, but I don't see the point of allocating huge lists when it's easier to test equivalent behavior on integers (plus I'm afraid that allocations would ultimately trigger a round of GC, which would mess with my time measurements).
The following code reproduces your use-case with a dummy recursive function with an invariant argument, as in map:
let rec g f x = if x = 0 then 0 else g f (f x)
let g2 f x =
let rec aux x = if x = 0 then 0 else aux (f x) in
aux x
let time title f x =
let t = Sys.time () in
let fx = f x in
Printf.printf "%s: %fs\n%!" title (Sys.time () -. t) ;
fx
let main =
let nb = int_of_string Sys.argv.(1) in
ignore (time "normal" (g pred) nb) ;
ignore (time "invariant elimination" (g2 pred) nb)
You can compile it (ocamlopt a.ml for example) and run it by doing
./a.out 10000000000. You can obviously change the integer parameter to tune the number of recursive calls.
Results
On my computer, for an input number of 10000000000, it outputs:
normal: 11.813643s
invariant elimination: 11.646377s
On bigger values:
20000000000
normal: 23.353022s
invariant elimination: 22.977813s
30000000000
normal: 35.586871s
invariant elimination: 35.421313s
I didn't bother going higher.
This seems to indicate that both versions are equivalent: maybe the compiler does optimize invariant arguments in recursive functions and the difference is just not measurable, or maybe it doesn't.
Bytecode comparison
I have also tried to see whether the generated bytecode is the same or not (ocamlc -dinstr a.ml), and it does differ slightly, as you can see in the following snippets.
normal
compiling a file with only this in it:
let g f x =
let rec aux f x = if x = 0 then 0 else aux f (f x) in
aux f x
gives
branch L2
restart
L3: grab 1
acc 1
push
const 0
eqint
branchifnot L4
const 0
return 2
L4: acc 1
push
acc 1
apply 1
push
acc 1
push
offsetclosure 0
appterm 2, 4
restart
L1: grab 1
closurerec 3, 0
acc 2
push
acc 2
push
acc 2
appterm 2, 5
L2: closure L1, 0
push
acc 0
makeblock 1, 0
pop 1
setglobal E!
invariant elimination
compiling a file with only this in it:
let g2 f x =
let rec aux x = if x = 0 then 0 else aux (f x) in
aux x
gives:
branch L2
L3: acc 0
push
const 0
eqint
branchifnot L4
const 0
return 1
L4: acc 0
push
envacc 1
apply 1
push
offsetclosure 0
appterm 1, 2
restart
L1: grab 1
acc 0
closurerec 3, 1
acc 2
push
acc 1
appterm 1, 4
L2: closure L1, 0
push
acc 0
makeblock 1, 0
pop 1
setglobal E2!
But I'm not expert enough to draw any conclusions, as I don't speak bytecode.
That's also around the point where I decided that the answer is not that important for now, and that it's easier anyway to ask @gasche next time I see him.
The use of go suggests a Haskell background. Both OCaml and Haskell are functional programming languages, but there are substantial differences and what one knows about Haskell should not be used to make assumptions about OCaml.
I see no particular reason to write map the second way. If you're using OCaml 4.14.0 or later, you might want to use tail_mod_cons to make map tail-recursive without an explicit accumulator as in Stef's comment.
let[@tail_mod_cons] rec map f =
function
| [] -> []
| x::xs -> f x :: map f xs
And of course, the real solution is:
let map = List.map
Like others, I have never seen the second form, and it's hard for me to imagine what kind of optimization it could provide. What I do know is that (as @Stef and @Chris pointed out) this function can be written in a tail-recursive way. So, just for the sake of completeness:
let map f xs =
let rec go xs ys =
match xs with
| [] -> ys
| x::rest -> go rest ((f x)::ys)
in List.rev (go xs [])
This version is more optimized than the two forms from your post, as each recursive call reuses the same stack frame, eliminating unnecessary allocations and saving both space and execution time.

List of tuples by taking the same index for an element in haskell

I have been trying to solve the following problem in haskell:
Generate a list of tuples (n, s), where 0 ≤ n ≤ 100 and n mod 2 = 0, and where s = sum(1..n). The output should be the list [(0,0),(2,3),(4,10),...,(100,5050)].
I tried to solve the problem with following code:
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX x =
take x [(n, s) | n <- [1..x], s <- sumUntilN x]
where
sumUntilN :: Int -> [Int]
sumUntilN n
| n == 0 = []
| n == 1 = [1]
| otherwise = sumUntilN (n-1) ++ [sum[1..n]]
However, this code does not give the expected result (as @Guru Stron pointed out - thank you!).
I would also appreciate it if somebody could help me make this code more concise. I am also new to the concept of lazy evaluation, so I am unable to determine the runtime complexity. Help will be appreciated.
However, I feel like this code could still be improved upon, especially:
The take x in the function seems really inelegant. Is there a way to have list comprehensions map over the same index only?
sumUntilN feels really verbose. Is there an idiomatic way to do the same in Haskell?
Finally, I am extremely new to Haskell and have trouble evaluating the time and space complexity of the function. Can somebody help me there?
sumOfNumsUptoN n = n * (n + 1) `div` 2
genListTupleSumUntilX :: Int -> [(Int, Int)]
genListTupleSumUntilX n = zip [0, 2 .. n] $ map sumOfNumsUptoN [0, 2 .. n]
This is linear in the size of the list, since each tuple is produced in O(1) using the closed-form sum n * (n + 1) `div` 2 (e.g. sumOfNumsUptoN 100 = 5050, matching the last tuple).
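A quick sanity check in GHCi (the values follow from the closed form):
λ> genListTupleSumUntilX 6
[(0,0),(2,3),(4,10),(6,21)]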
I would say that you are overcomplicating things. To produce the correct output you can use a simple list comprehension:
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX x = [(n, sum [1..n]) | n <- [0,2..x]]
Note that this solution will recalculate the same sums repeatedly (i.e. for the element n+2 the sum is actually (n+2) + (n+1) + the sum for the element n, so you can potentially reuse the computation), which leads to O(n^2) complexity, but for such relatively small n it is not a big issue. You can handle this using the scanl function (though maybe there is a more idiomatic approach to memoization):
genListTupleSumUntilX :: Int -> [(Int,Int)]
genListTupleSumUntilX 0 = []
genListTupleSumUntilX x = scanl (\ (prev, prevSum) curr -> (curr, prevSum + prev + 1 + curr)) (0,0) [2,4..x]
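The scanl version agrees; for example (GHCi):
λ> genListTupleSumUntilX 8
[(0,0),(2,3),(4,10),(6,21),(8,36)]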

Haskell Optimizations for List Processing stymied by Lazy Evaluation

I'm trying to improve the efficiency of the following code. I want to count all occurrences of a symbol before a given point (as part of pattern matching using a Burrows-Wheeler transform). There's some overlap in how I'm counting symbols. However, when I tried to implement what looks like it should be more efficient code, it turned out to be less efficient, and I'm assuming that lazy evaluation and my poor understanding of it are to blame.
My first attempt at a counting function went like this:
count :: Ord a => [a] -> a -> Int -> Int
count list sym pos = length . filter (== sym) . take pos $ list
Then in the body of the matching function itself:
matching str refCol pattern = match 0 (n - 1) (reverse pattern)
where n = length str
refFstOcc sym = length $ takeWhile (/= sym) refCol
match top bottom [] = bottom - top + 1
match top bottom (sym : syms) =
let topCt = count str sym top
bottomCt = count str sym (bottom + 1)
middleCt = bottomCt - topCt
refCt = refFstOcc sym
in if middleCt > 0
then match (refCt + topCt) (refCt + bottomCt - 1) syms
else 0
(Stripped down for brevity - I'm memoizing first occurrences of symbols in refCol through a Map, and a couple other details as well).
Edit: Sample use would be:
matching "AT$TCTAGT" "$AACGTTTT" "TCG"
which should be 1 (assuming I didn't mistype anything).
Now, I'm recounting everything in the middle between the top pointer and the bottom twice, which adds up when I count a million character DNA string with only 4 possible choices for characters (and profiling tells me that this is the big bottleneck, too, taking 48% of my time for bottomCt and around 38% of my time for topCt). For reference, when calculating this for a million character string and trying to match 50 patterns (each of which is between 1 and 1000 characters), the program takes about 8.5 to 9.5 seconds to run.
However, if I try to implement the following function:
countBetween :: Ord a => [a] -> a -> Int -> Int -> (Int, Int)
countBetween list sym top bottom =
let (topList, bottomList) = splitAt top list
midList = take (bottom - top) bottomList
getSyms = length . filter (== sym)
in (getSyms topList, getSyms midList)
(with changes made to the matching function to compensate), the program takes between 18 and 22 seconds to run.
I've also tried passing in a Map which can keep track of previous calls, but that also takes about 20 seconds to run and runs up the memory usage.
Similarly, I've shortened length . filter (== sym) to a fold, but again - 20 seconds for foldr, and 14-15 for foldl.
So what would be a proper Haskell way to optimize this code through rewriting it? (Specifically, I'm looking for something that doesn't involve precomputation - I may not be reusing strings very much - and which explains something of why this is happening).
Edit: More clearly, what I am looking for is the following:
a) Why does this behaviour happen in Haskell? How does lazy evaluation play a role, what optimizations is the compiler making to rewrite the count and countBetween functions, and what other factors may be involved?
b) What is a simple code rewrite which would address this issue so that I don't traverse the lists multiple times? I'm looking specifically for something which addresses that issue, rather than a solution which sidesteps it. If the final answer is, count is the most efficient possible way to write the code, why is that?
I'm not sure lazy evaluation has much to do with the performance of the code. I think the main problem is the use of String - which is a linked list - instead of a more performant string type.
Note that this call in your countBetween function:
let (topList, bottomList) = splitAt top list
will re-create the linked list corresponding to topList, meaning
a lot more allocations.
A Criterion benchmark to compare splitAt versus using take n/drop n
may be found here: http://lpaste.net/174526. The splitAt version is
about 3 times slower and, of course, has a lot more allocations.
Even if you don't want to "pre-compute" the counts you can improve
matters a great deal by simply switching to either ByteString or Text.
Define (this assumes import Data.ByteString.Char8 (ByteString) and import qualified Data.ByteString.Char8 as BS):
countSyms :: Char -> ByteString -> Int -> Int -> Int
countSyms sym str lo hi =
length [ i | i <- [lo..hi], BS.index str i == sym ]
and then:
countBetween :: ByteString -> Char -> Int -> Int -> (Int,Int)
countBetween str sym top bottom = (a,b)
where a = countSyms sym str 0 (top-1)
b = countSyms sym str top (bottom-1)
Also, don't use reverse on large lists - it will reallocate the
entire list. Just index into a ByteString / Text in reverse.
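Indexing in reverse is a one-liner; a sketch (revIndex is an illustrative name; assumes the BS import above and 0 <= i < BS.length str):
-- element i counted from the end, with no reallocation
revIndex :: BS.ByteString -> Int -> Char
revIndex str i = BS.index str (BS.length str - 1 - i)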
Memoizing counts may or may not help. It all depends on how it's done.
It seems that the main point of the match routine is
to transform an interval (bottom,top) into another interval
based on the current symbol sym. The formulas are
basically:
ref_fst = index of sym in ref_col
-- defined in an outer scope
match :: Char -> (Int,Int) -> (Int,Int)
match sym (bottom, top) | bottom > top = (bottom, top) -- the empty interval
match sym (bottom, top) =
let
top_count = count of sym in str from index 0 to top
bot_count = count of sym in str from index 0 to bottom
mid_count = top_count - bot_count
in if mid_count > 0
then (ref_fst + bot_count, ref_fst + top_count)
else (1,0) -- the empty interval
And then matching is just a fold over pattern using match
with the initial interval (0, n-1).
Both top_count and bot_count can be computed efficiently
using a precomputed lookup table, and below is code which
does that.
If you run test1 you'll see a trace of how the interval
is transformed via each symbol in the pattern.
Note: There may be off-by-1 errors, and I've hard coded
ref_fst to be 0 - I'm not sure how this fits into the
larger algorithm, but the basic idea should be sound.
Note that once the counts vector has been created
there is no need to index into the original string anymore.
Therefore, even though I use a ByteString here for
the (larger) DNA sequence, it's not crucial, and the
mkCounts routine should work just as well if passed a String
instead.
Code also available at http://lpaste.net/174288
{-# LANGUAGE OverloadedStrings #-}
import Data.Vector.Unboxed ((!))
import qualified Data.Vector.Unboxed as UV
import qualified Data.Vector.Unboxed.Mutable as UVM
import qualified Data.ByteString.Char8 as BS
import Debug.Trace
import Text.Printf
import Data.List
mkCounts :: BS.ByteString -> UV.Vector (Int,Int,Int,Int)
mkCounts syms = UV.create $ do
let n = BS.length syms
v <- UVM.new (n+1)
let loop x i | i >= n = return x
loop x i = let s = BS.index syms i
(a,t,c,g) = x
x' = case s of
'A' -> (a+1,t,c,g)
'T' -> (a,t+1,c,g)
'C' -> (a,t,c+1,g)
'G' -> (a,t,c,g+1)
_ -> x
in do UVM.write v i x
loop x' (i+1)
x <- loop (0,0,0,0) 0
UVM.write v n x
return v
data DNA = A | C | T | G
deriving (Show)
getter :: DNA -> (Int,Int,Int,Int) -> Int
getter A (a,_,_,_) = a
getter T (_,t,_,_) = t
getter C (_,_,c,_) = c
getter G (_,_,_,g) = g
-- narrow a window
narrow :: Int -> UV.Vector (Int,Int,Int,Int) -> DNA -> (Int,Int) -> (Int,Int)
narrow refcol counts sym (lo,hi) | trace msg False = undefined
where msg = printf "-- lo: %d hi: %d refcol: %d sym: %s top_cnt: %d bot_count: %d" lo hi refcol (show sym) top_count bot_count
top_count = getter sym (counts ! (hi+1))
bot_count = getter sym (counts ! lo)
narrow refcol counts sym (lo,hi) =
let top_count = getter sym (counts ! (hi+1))
bot_count = getter sym (counts ! (lo+0))
mid_count = top_count - bot_count
in if mid_count > 0
then ( refcol + bot_count, refcol + top_count-1 )
else (lo+1,lo) -- signal an empty window
findFirst :: DNA -> UV.Vector (Int,Int,Int,Int) -> Int
findFirst sym v =
let n = UV.length v
loop i | i >= n = n
loop i = if getter sym (v ! i) > 0
then i
else loop (i+1)
in loop 0
toDNA :: String -> [DNA]
toDNA str = map charToDNA str
charToDNA :: Char -> DNA
charToDNA = go
where go 'A' = A
go 'C' = C
go 'T' = T
go 'G' = G
dnaToChar A = 'A'
dnaToChar C = 'C'
dnaToChar T = 'T'
dnaToChar G = 'G'
first :: DNA -> BS.ByteString -> Int
first sym str = maybe len id (BS.elemIndex (dnaToChar sym) str)
where len = BS.length str
test2 = do
-- matching "AT$TCTAGT" "$AACGTTTT" "TCG"
let str = "AT$TCTAGT"
refcol = "$AACGTTTT"
syms = toDNA "TCG"
-- hard coded for now
-- may be computed and memoized
refcol_G = 4
refcol_C = 3
refcol_T = 5
counts = mkCounts str
w0 = (0, BS.length str -1)
w1 = narrow refcol_G counts G w0
w2 = narrow refcol_C counts C w1
w3 = narrow refcol_T counts T w2
firsts = (first A refcol, first T refcol, first C refcol, first G refcol)
putStrLn $ "firsts: " ++ show firsts
putStrLn $ "w0: " ++ show w0
putStrLn $ "w1: " ++ show w1
putStrLn $ "w2: " ++ show w2
putStrLn $ "w3: " ++ show w3
let (lo,hi) = w3
len = if lo <= hi then hi - lo + 1 else 0
putStrLn $ "length: " ++ show len
matching :: BS.ByteString -> BS.ByteString -> String -> Int
matching str refcol pattern =
let counts = mkCounts str
n = BS.length str
syms = toDNA (reverse pattern)
firsts = (first A refcol, first T refcol, first C refcol, first G refcol)
go (lo,hi) sym = narrow refcol counts sym (lo,hi)
where refcol = getter sym firsts
(lo, hi) = foldl' go (0,n-1) syms
len = if lo <= hi then hi - lo + 1 else 0
in len
test3 = matching "AT$TCTAGT" "$AACGTTTT" "TCG"

Make a century in OCaml

Here is quite a typical make a century problem.
We have a natural number list [1;2;3;4;5;6;7;8;9].
We have a list of possible operators [Some '+'; Some '*';None].
Now we create a list of operators from the above possibilities, insert each operator between consecutive numbers in the number list, and compute the value.
(Note a None b = a * 10 + b)
For example, if the operator list is [Some '+'; Some '*'; None; Some '+'; Some '+'; Some '+'; Some '+'; Some '+'], then the value is 1 + 2 * 34 + 5 + 6 + 7 + 8 + 9 = 104.
Please find all possible operator lists, so that the value = 100.
The only way I can think of is brute-force.
I generate all possible operator lists.
Compute all possible values.
Then filter so I get all operator lists which produce 100.
exception Cannot_compute
exception Cannot_glue
let rec candidates n ops =
if n = 0 then [[]]
else
List.fold_left (fun acc op -> List.rev_append acc (List.map (fun x -> op::x) (candidates (n-1) ops))) [] ops
let glue l opl =
let rec aggr acc_l acc_opl = function
| hd::[], [] -> (List.rev (hd::acc_l), List.rev acc_opl)
| hd1::hd2::tl, None::optl -> aggr acc_l acc_opl (((hd1*10+hd2)::tl), optl)
| hd::tl, (Some c)::optl -> aggr (hd::acc_l) ((Some c)::acc_opl) (tl, optl)
| _ -> raise Cannot_glue
in
aggr [] [] (l, opl)
let compute l opl =
let new_l, new_opl = glue l opl in
let rec comp = function
| hd::[], [] -> hd
| hd::tl, (Some '+')::optl -> hd + (comp (tl, optl))
| hd1::hd2::tl, (Some '-')::optl -> hd1 + (comp ((-hd2)::tl, optl))
| hd1::hd2::tl, (Some '*')::optl -> comp (((hd1*hd2)::tl), optl)
| hd1::hd2::tl, (Some '/')::optl -> comp (((hd1/hd2)::tl), optl)
| _, _ -> raise Cannot_compute
in
comp (new_l, new_opl)
let make_century l ops =
List.filter (fun x -> fst x = 100) (
List.fold_left (fun acc x -> ((compute l x), x)::acc) [] (candidates ((List.length l)-1) ops))
let rec print_solution l opl =
match l, opl with
| hd::[], [] -> Printf.printf "%d\n" hd
| hd::tl, (Some op)::optl -> Printf.printf "%d %c " hd op; print_solution tl optl
| hd1::hd2::tl, None::optl -> print_solution ((hd1*10+hd2)::tl) optl
| _, _ -> ()
I believe my code is ugly. So I have the following questions:
compute l opl evaluates the number list against the operator list. Basically it is a typical math expression evaluation. Is there any nicer implementation?
I have read Chapter 6 in Pearls of Functional Algorithm Design. It uses some techniques to improve the performance. I found it really obscure and hard to understand. Can anyone who has read it help?
Edit
I refined my code. Basically, I will scan the operator list first to glue all numbers where their operator is None.
Then in compute, if I meet a '-' I will simply negate the 2nd number.
A classic dynamic programming solution (which finds the 104 solution
of the example above instantly) that does not risk any problem with operator
associativity or precedence. It only returns a boolean saying whether
it's possible to come up with the number; modifying it to return the
sequences of operations that produce a solution is an easy but interesting
exercise; I was not motivated to go that far.
let operators = [ (+); ( * ); ]
module ISet = Set.Make(struct type t = int let compare = compare end)
let iter2 res1 res2 f =
res1 |> ISet.iter @@ fun n1 ->
res2 |> ISet.iter @@ fun n2 ->
f n1 n2
let can_make input target =
let has_zero = Array.fold_left (fun acc n -> acc || (n=0)) false input in
let results = Array.make_matrix (Array.length input) (Array.length input) ISet.empty in
for imax = 0 to Array.length input - 1 do
for imin = imax downto 0 do
let add n =
(* OPTIMIZATION: if the operators are known to be monotonous, we need not store
numbers above the target;
(Handling multiplication by 0 requires to be a bit more
careful, and I'm not in the mood to think hard about this
(I think one need to store the existence of a solution,
even if it is above the target), so I'll just disable the
optimization in that case)
*)
if n <= target || has_zero then
results.(imin).(imax) <- ISet.add n results.(imin).(imax) in
let concat_numbers =
(* concatenates all number from i to j:
i=0, j=2 -> (input.(0)*10 + input.(1))*10 + input.(2)
*)
let rec concat acc k =
let acc = acc + input.(k) in
if k = imax then acc
else concat (10 * acc) (k + 1)
in concat 0 imin
in add concat_numbers;
for k = imin to imax - 1 do
let res1 = results.(imin).(k) in
let res2 = results.(k+1).(imax) in
operators |> List.iter (fun op ->
iter2 res1 res2 (fun n1 n2 -> add (op n1 n2))
);
done;
done;
done;
let result = results.(0).(Array.length input - 1) in
ISet.mem target result
Here is my solution, which evaluates according to the usual rules of precedence. It finds 303 solutions for find [1;2;3;4;5;6;7;8;9] 100 in under 1/10 second on my MacBook Pro.
Here are two interesting ones:
# 123 - 45 - 67 + 89;;
- : int = 100
# 1 * 2 * 3 - 4 * 5 + 6 * 7 + 8 * 9;;
- : int = 100
This is a brute force solution. The only slightly clever thing is that I treat concatenation of digits as simply another (high precedence) operation.
The eval function is the standard stack-based infix expression evaluation that you will find described in many places. Here is an SO article about it: How to evaluate an infix expression in just one scan using stacks? The essence is to postpone evaluating by pushing operators and operands onto stacks. When you find that the next operator has lower precedence, you can go back and evaluate what you pushed.
type op = Plus | Minus | Times | Divide | Concat
let prec = function
| Plus | Minus -> 0
| Times | Divide -> 1
| Concat -> 2
let succ = function
| Plus -> Minus
| Minus -> Times
| Times -> Divide
| Divide -> Concat
| Concat -> Plus
let apply op stack =
match op, stack with
| _, [] | _, [_] -> [] (* Invalid input *)
| Plus, a :: b :: tl -> (b + a) :: tl
| Minus, a :: b :: tl -> (b - a) :: tl
| Times, a :: b :: tl -> (b * a) :: tl
| Divide, a :: b :: tl -> (b / a) :: tl
| Concat, a :: b :: tl -> (b * 10 + a) :: tl
let rec eval opstack numstack ops nums =
match opstack, numstack, ops, nums with
| [], sn :: _, [], _ -> sn
| sop :: soptl, _, [], _ ->
eval soptl (apply sop numstack) ops nums
| [], _, op :: optl, n :: ntl ->
eval [op] (n :: numstack) optl ntl
| sop :: soptl, _, op :: _, _ when prec sop >= prec op ->
eval soptl (apply sop numstack) ops nums
| _, _, op :: optl, n :: ntl ->
eval (op :: opstack) (n :: numstack) optl ntl
| _ -> 0 (* Invalid input *)
let rec incr = function
| [] -> []
| Concat :: rest -> Plus :: incr rest
| x :: rest -> succ x :: rest
let find nums tot =
match nums with
| [] -> []
| numhd :: numtl ->
let rec try1 ops accum =
let accum' =
if eval [] [numhd] ops numtl = tot then
ops :: accum
else
accum
in
if List.for_all ((=) Concat) ops then
accum'
else try1 (incr ops) accum'
in
try1 (List.map (fun _ -> Plus) numtl) []
I came up with a slightly obscure implementation (for a variant of this problem) that is a bit better than brute force. It works in place, rather than generating intermediate data structures, keeping track of the combined values of the operators that have already been evaluated.
The trick is to keep track of a pending operator and value so that you can evaluate the "none" operator easily. That is, if the algorithm has just progressed through 1 + 23, the pending operator would be +, and the pending value would be 23, allowing you to easily generate either 1 + 23 + 4 or 1 + 234 as necessary.
type op = Add | Sub | Nothing
let print_ops ops =
let len = Array.length ops in
print_char '1';
for i = 1 to len - 1 do
Printf.printf "%s%d" (match ops.(i) with
| Add -> " + "
| Sub -> " - "
| Nothing -> "") (i + 1)
done;
print_newline ()
let solve k target =
let ops = Array.make k Nothing in
let rec recur i sum pending_op pending_value =
let sum' = match pending_op with
| Add -> sum + pending_value
| Sub -> if sum = 0 then pending_value else sum - pending_value
| Nothing -> pending_value in
if i = k then
if sum' = target then print_ops ops else ()
else
let digit = i + 1 in
ops.(i) <- Add;
recur (i + 1) sum' Add digit;
ops.(i) <- Sub;
recur (i + 1) sum' Sub digit;
ops.(i) <- Nothing;
recur (i + 1) sum pending_op (pending_value * 10 + digit) in
recur 0 0 Nothing 0
Note that this will generate duplicates - I didn't bother to fix that. Also, if you are doing this exercise to gain strength in functional programming, it might be beneficial to reject the imperative approach taken here and search for a similar solution that doesn't make use of assignments.

Understanding the runtime of a recursive SML function involving list appending (using @)

I'm new to algorithm analysis and SML and got hung up on the average-case runtime of the following SML function. I would appreciate some feedback on my thinking.
fun app([]) = []
| app(h::t) = [h] @ app(t)
So after every recursion we will end up with a bunch of single-element lists (and one empty list).
[1]@[2]@[3]@...@[n]@[]
Where n is the number of elements in the original list, and 1, 2, 3, ..., n just illustrate which elements of the original list we are talking about. L @ R takes time linear in the length of list L. Assuming A is the constant amount of time @ takes per element, I imagine this as if:
[1,2]@[3]@[4]@...@[n]@[] took 1A
[1,2,3]@[4]@...@[n]@[] took 2A
[1,2,3,4]@...@[n]@[] took 3A
...
[1,2,3,4,...,n]@[] took (n-1)A
[1,2,3,4,...,n] took nA
I'm therefore thinking that a recurrence for the time would look something like this:
T(0) = C (if n = 0)
T(n) = T(n-1) + An + B (if n > 0)
Where C is just the final matching of the base case app([]) and B is the constant for h::t. Close the recurrence and we will get this (proof omitted):
T(n) = (n²+n)A/2 + Bn + C = (A/2)n² + (A/2)n + Bn + C = Θ(n²)
This is my own conclusion which differs from the answer that was presented to me, namely:
T(0) = B (if n = 0)
T(n) = T(n-1) + A (if n > 0)
Closed form
T(n) = An + B = Θ(n)
Which is quite different (Θ(n) vs Θ(n²)!). But isn't this assuming that L @ R takes constant time rather than linear? For example, it would be true for addition
fun add([]) = 0
| add(h::t) = h + add(t) (* n + ... + 2 + 1 + 0 *)
or even concatenation
fun con([]) = []
| con(h::t) = h::con(t) (* n :: ... :: 2 :: 1 :: [] *)
Am I misunderstanding the way that L @ R works, or is my analysis (at least sort of) correct?
Yes. Running app [1,2,3] by hand one function call at a time gives:
app [1,2,3]
[1]@(app [2,3])
[1]@([2]@(app [3]))
[1]@([2]@([3]@(app [])))
[1]@([2]@([3]@([])))
[1]@([2]@[3])
[1]@([2,3])
[1,2,3]
This is a consequence of the recursive call being the right operand of each @: the left operand is always the one-element list [h], and since @ is linear in its left argument, each append costs only O(1).
Compare this to a naïve version of rev:
fun rev [] = []
| rev (x::xs) = rev xs # [x]
This one has the running time you expected: once the recursion has fully expanded into the expression ((([])@[3])@[2])@[1] (taking linear time), it requires n + (n - 1) + (n - 2) + ... + 1, or n(n+1)/2, i.e. O(n^2) steps to complete the computation. A more efficient rev could look like this:
local
fun rev' [] ys = ys
| rev' (x::xs) ys = rev' xs (x::ys)
in
fun rev xs = rev' xs []
end
