Haskell: performance of where clauses

I was analyzing the effect of where clauses on the performance of Haskell programs.
In Haskell: The Craft of Functional Programming (Thompson), chapter 20.4, I found the following example:
exam1 :: Int -> [Int]
exam1 n = [1 .. n] ++ [1 .. n]

exam2 :: Int -> [Int]
exam2 n = list ++ list
  where list = [1 .. n]
and, I quote,
The time taken to calculate [exam1] will be O(n), and the space used will be O(1), but we will have to calculate the expression [1 .. n] twice.
...
The effect [of exam2] is to compute the list [1 .. n] once, so that we save its value after calculating it in order to be able to use it again.
...
If we save something by referring to it in a where clause, we have to pay the penalty of the space that it occupies.
So I go wild and think that the -O2 flag must handle this and choose the best behavior for me. I benchmark the running time of the two examples using Criterion:
import Criterion.Main

exam1 :: Int -> [Int]
exam1 n = [1 .. n] ++ [1 .. n]

exam2 :: Int -> [Int]
exam2 n = list ++ list
  where list = [1 .. n]

m :: Int
m = 1000000

main :: IO ()
main = defaultMain [ bench "exam1" $ nf exam1 m
                   , bench "exam2" $ nf exam2 m
                   ]
I compile with -O2, and find:
benchmarking exam1
time 15.11 ms (15.03 ms .. 15.16 ms)
1.000 R² (1.000 R² .. 1.000 R²)
mean 15.11 ms (15.08 ms .. 15.14 ms)
std dev 83.20 μs (53.18 μs .. 122.6 μs)
benchmarking exam2
time 76.27 ms (72.84 ms .. 82.75 ms)
0.987 R² (0.963 R² .. 0.997 R²)
mean 74.79 ms (70.20 ms .. 77.70 ms)
std dev 6.204 ms (3.871 ms .. 9.233 ms)
variance introduced by outliers: 26% (moderately inflated)
What a difference! Why would that be? I thought that exam2 should be faster but less memory efficient (according to the quote above). But no, it is actually much slower (and probably uses more memory too, though that would need to be tested).
Maybe it is slower because [1 .. 1e6] has to be stored in memory, and this takes a lot of time. What do you think?
PS: I found a possibly related question, but not really.

You can inspect GHC Core using -ddump-simpl and observe the optimized code GHC produced. Core is not as readable as Haskell, but usually one can still get the idea of what is going on.
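For example (assuming the source file is named PerfList.hs, to match the module name that shows up in the Core below):

ghc -O2 -ddump-simpl PerfList.hs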
For exam2 we get plain boring code:
exam2
  = \ (n_aX5 :: Int) ->
      case n_aX5 of { GHC.Types.I# y_a1lJ ->
      let {
        list_s1nF [Dmd=<S,U>] :: [Int]
        [LclId]
        list_s1nF = GHC.Enum.eftInt 1# y_a1lJ } in
      ++ @ Int list_s1nF list_s1nF
      }
Roughly, this defines list_s1nF as [1..n] (eftInt = "enum from to") and calls ++. No inlining happened here. GHC was afraid to inline list_s1nF since it is used twice, and inlining a definition in such a case can be harmful. Indeed, if let x = expensive in x+x is inlined, expensive might get recomputed twice, which is bad. Here GHC trusts the programmer: if they used a let / where, they presumably want that value to be computed only once. Failing to inline list_s1nF prevents further optimization.
So this code allocates list = [1..n] and then copies it, producing 1:2:...:n:list, where the tail pointer of the copy is made to point at the original list.
Copying an arbitrary list requires following a pointer chain while allocating cells for the new list, which is intuitively more expensive than producing [1..n] directly, which only needs to allocate the cells for the new list and keep a counter around.
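For reference, (++) is essentially the following standard definition (base adds fusion rules on top): it rebuilds every cell of its first argument and shares the second argument as the final tail, which is exactly the copy described above.

-- Standard definition of list append: the first list is copied
-- cell by cell; the second list is shared, not copied.
(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs ++ ys)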
exam1, by contrast, is optimized much further: after some minor unboxing
exam1
  = \ (w_s1os :: Int) ->
      case w_s1os of { GHC.Types.I# ww1_s1ov ->
      PerfList.$wexam1 ww1_s1ov
      }
we get to the actual worker function.
PerfList.$wexam1
  = \ (ww_s1ov :: GHC.Prim.Int#) ->
      let {
        n_a1lT :: [Int]
        [LclId]
        n_a1lT = GHC.Enum.eftInt 1# ww_s1ov } in
      case GHC.Prim.># 1# ww_s1ov of {
        __DEFAULT ->
          letrec {
            go_a1lX [Occ=LoopBreaker] :: GHC.Prim.Int# -> [Int]
            [LclId, Arity=1, Str=<L,U>, Unf=OtherCon []]
            go_a1lX
              = \ (x_a1lY :: GHC.Prim.Int#) ->
                  GHC.Types.:
                    @ Int
                    (GHC.Types.I# x_a1lY)
                    (case GHC.Prim.==# x_a1lY ww_s1ov of {
                       __DEFAULT -> go_a1lX (GHC.Prim.+# x_a1lY 1#);
                       1# -> n_a1lT
                     }); } in
          go_a1lX 1#;
        1# -> n_a1lT
      }
Here, the first "enum from to" [1..n] was inlined, and that also triggered the inlining of ++. The resulting recursive function go_a1lX relies only on (:) and basic arithmetic. When the recursion is over, n_a1lT is returned, which is the second "enum from to" [1..n]. That one is not inlined, since inlining it would trigger no further optimization.
Here, no list is generated and then copied, so we get better performance.
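Translated back into Haskell, the worker corresponds roughly to the following sketch (my reading of the Core above, not compiler output):

-- Rough Haskell rendering of $wexam1: count up with (:) and,
-- on the last element, link directly to the second [1 .. n].
exam1' :: Int -> [Int]
exam1' n
  | 1 > n     = second
  | otherwise = go 1
  where
    second = [1 .. n]   -- the second enumeration, deliberately not inlined
    go x = x : if x == n then second else go (x + 1)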
Note that this also produces optimized code:
exam3 :: Int -> [Int]
exam3 n = list1 ++ list2
  where list1 = [1 .. n]
        list2 = [1 .. n]
as does this one, since GHC won't automatically cache the results of function calls, so the two calls to list () can both be inlined:
exam4 :: Int -> [Int]
exam4 n = list () ++ list ()
  where list () = [1 .. n]
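To check this, one could extend the Criterion main from the question to cover all four variants (a hypothetical benchmark, reusing m from above):

-- Hypothetical: benchmark all four variants side by side.
main :: IO ()
main = defaultMain [ bench "exam1" $ nf exam1 m
                   , bench "exam2" $ nf exam2 m
                   , bench "exam3" $ nf exam3 m
                   , bench "exam4" $ nf exam4 m
                   ]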

Related

Haskell explicit recursion vs `iterate`

While writing a function using iterate in Haskell, I found that an equivalent version with explicit recursion seemed noticeably faster - even though I believed that explicit recursion ought to be frowned upon in Haskell.
Similarly, I expected GHC to be able to inline/optimise list combinators appropriately so that the resulting machine code is at least similarly performing to the explicit recursion.
Here's a (different) example, which also displays the slowdown I observed.
steps m n and its variant steps' compute the number of Collatz steps n takes to reach 1, giving up after m attempts.
steps uses explicit recursion while steps' uses list functions.
import Data.List (elemIndex)
import Control.Exception (evaluate)
import Control.DeepSeq (rnf)

collatz :: Int -> Int
collatz n
  | even n    = n `quot` 2
  | otherwise = 3 * n + 1

steps :: Int -> Int -> Maybe Int
steps m = go 0
  where go k n
          | n == 1    = Just k
          | k == m    = Nothing
          | otherwise = go (k+1) (collatz n)

steps' :: Int -> Int -> Maybe Int
steps' m = elemIndex 1 . take m . iterate collatz

main :: IO ()
main = evaluate $ rnf $ map (steps 800) $ [1..10^7]
I tested these by evaluating them for all values up to 10^7, each giving up after 800 steps. On my machine (compiled with ghc -O2), explicit recursion took just under 4 seconds (3.899s), but list combinators took about 5 times longer (19.922s).
Why is explicit recursion so much better in this case, and is there a way of writing this without explicit recursion while preserving performance?
Updated: I submitted Trac 15426 for this bug.
The problem disappears if you copy the definitions of elemIndex and findIndex into your module:
import Control.Exception (evaluate)
import Control.DeepSeq (rnf)
import Data.Maybe (listToMaybe)
import Data.List (findIndices)

elemIndex :: Eq a => a -> [a] -> Maybe Int
elemIndex x = findIndex (x==)

findIndex :: (a -> Bool) -> [a] -> Maybe Int
findIndex p = listToMaybe . findIndices p

collatz :: Int -> Int
collatz n
  | even n    = n `quot` 2
  | otherwise = 3 * n + 1

steps' :: Int -> Int -> Maybe Int
steps' m = elemIndex 1 . take m . iterate collatz

main :: IO ()
main = evaluate $ rnf $ map (steps' 800) $ [1..10^7]
The problem seems to be that these must be inlinable for GHC to get the fusion right. Unfortunately, neither of them is marked inlinable in Data.OldList.
The change to allow findIndex to participate in fusion is relatively recent (see Trac 14387), in which listToMaybe was reimplemented as a foldr, so it probably hasn't seen a lot of testing yet.
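For reference, a fusion-friendly findIndex has the shape of a foldr with an index accumulator; a sketch along the lines of that change (not the exact library code) looks like this:

{-# LANGUAGE BangPatterns #-}

-- Sketch of a findIndex that can take part in foldr/build fusion:
-- the index is threaded as an accumulator through a foldr.
findIndexF :: (a -> Bool) -> [a] -> Maybe Int
findIndexF p xs = foldr step (\_ -> Nothing) xs 0
  where
    step x rest !i
      | p x       = Just i
      | otherwise = rest (i + 1)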

Haskell - how to avoid messing pure with IO

I am implementing an algorithm in Haskell. The algorithm requires generating some data.
I have a function for the algorithm which takes the generation function as a parameter. For example, say the algorithm just multiplies the input data by n:
algo :: a -> ??? -> [a]
algo n dgf = map (\x -> x * n) $ dgf
dgf is used to generate the data. How do I write the type signature correctly, given that dgf can be any function with any number of parameters?
Another variant is accepting not the generation function but already generated data:
algo :: Num a => a -> [a] -> [a]
algo n d = map (\x -> n*x) d
So now let's imagine I'm generating data with StdGen, which uses IO. How can I make the function more generic, so that it could accept both IO values and plain values like [1,2,3]? This also applies to the variant with a function, as the function can also produce IO.
All in all, which solution is better - having a generation function or a pre-generated data?
Thanks in advance.
One option is to take a stream rather than a list. If generating the values involves performing IO, and there may be many many values, this is often the best approach. There are several packages that offer streams of some sort, but I'll use the streaming package in this example.
import qualified Streaming.Prelude as S
import Streaming
algo :: (Monad m, Num a) => a -> Stream (Of a) m r -> Stream (Of a) m r
algo a = S.map (a +)
You can read Stream (Of a) m r as "a way to use operations in m to produce successive values of type a and finally a result of type r". This algo function doesn't commit to any particular way of generating the data; they can be created purely:
algo a (S.each [these, are, my, elements])
or within IO,
algo a $ S.takeWhile (> 3) (S.readLn :: Stream (Of Int) IO ())
or using a randomness monad, or whatever you like.
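To actually run such a stream, you consume it in its base monad; for instance, a small usage sketch in IO (demo is a hypothetical name):

-- Consume the stream in IO, printing each produced element:
demo :: IO ()
demo = S.print $ algo 10 (S.each [1, 2, 3 :: Int])
-- prints 11, 12 and 13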
For contrast, I'm going to take the opposite approach as dfeuer's answer.
Just use lists.
Consider your first example:
algo :: a -> ??? -> [a]
algo n dgf = map (\x -> x * n) $ dgf
You ask "How to write function header correctly, as dgf can be any function with any number of parameters?"
Well, one way is to use uncurrying.
Normally, Haskell functions are curried. If we have a function like
add :: Int -> Int -> Int
add x y = x + y
And we want a function that adds two to its input we can just use add 2.
>>> map (add 2) [1..10]
[3,4,5,6,7,8,9,10,11,12]
Because add is not actually a function that takes two arguments,
it's a function of one argument that returns a function of one argument.
We could have added parentheses to the argument of add above to make this more clear:
add :: Int -> (Int -> Int)
In Haskell, all functions are functions of one argument.
However, we can also go the other way - uncurry a function
that returns a function to get a function that takes a pair:
>>> :t uncurry
uncurry :: (a -> b -> c) -> (a, b) -> c
>>> :t uncurry add
uncurry add :: (Int, Int) -> Int
This can also be useful, say if we want to find the sum of each pair in a list:
>>> map (uncurry add) [ (1,2), (3,4), (5,6), (7,8), (9,10) ]
[3,7,11,15,19]
In general, we can uncurry any function of type a0 -> a1 -> ... -> aN -> b
into a function (a0, a1, ..., aN) -> b, though there might not be
a cute library function to do it for us.
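For example, a three-argument version is easy to write by hand (uncurry3 here is a hypothetical helper; base only ships the two-argument uncurry):

-- Hand-rolled three-argument uncurry (hypothetical helper):
uncurry3 :: (a -> b -> c -> d) -> (a, b, c) -> d
uncurry3 f (a, b, c) = f a b c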
With that in mind, we could implement algo by passing it an uncurried
function and a tuple of values:
algo :: Num a => a -> (t -> [a]) -> t -> [a]
algo n f t = map (\x -> x * n) $ f t
And then use anonymous functions to uncurry our argument functions:
>>> algo 2 (\(lo,hi) -> enumFromTo lo hi) (5, 10)
[10,12,14,16,18,20]
>>> algo 3 (\(a,b,c,d) -> zipWith (+) [a..b] [c..d]) (1, 5, 10, 14)
[33,39,45,51,57]
Now we could do it this way, but we don't need to. As implemented above,
algo is only using f and t once. So why not pass it the list directly?
algo' :: Num a => a -> [a] -> [a]
algo' n ns = map (\x -> x * n) ns
It calculates the same results:
>>> algo' 2 $ (\(lo,hi) -> enumFromTo lo hi) (5, 10)
[10,12,14,16,18,20]
>>> algo' 2 $ enumFromTo 5 10
[10,12,14,16,18,20]
>>> algo' 3 $ (\(a,b,c,d) -> zipWith (+) [a..b] [c..d]) (1, 5, 10, 14)
[33,39,45,51,57]
>>> algo' 3 $ zipWith (+) [1..5] [10..14]
[33,39,45,51,57]
Furthermore, since Haskell is non-strict, the argument to algo' isn't evaluated
until it's actually used, so we don't have to worry about "wasting" time computing
arguments that won't actually be used:
algo'' :: Num a => a -> [a] -> [a]
algo'' n ns = [n,n,n,n]
algo'' doesn't use the list passed to it, so it's never forced, so whatever
computation is used to calculate it never runs:
>>> let isPrime n = n > 2 && null [ i | i <- [2..n-1], n `rem` i == 0 ]
>>> :set +s
>>> isPrime 10000019
True
(6.18 secs, 2,000,067,648 bytes)
>>> algo'' 5 (filter isPrime [1..999999999999999])
[5,5,5,5]
(0.01 secs, 68,936 bytes)
Now to the second part of your question - what if your data is being generated within some monad?
Rather than convince algo to operate on monadic values, you could take the stream
based approach as dfeuer explains. Or you could just use a list.
Just because you're in a monad doesn't mean that your values suddenly become strict.
For example, want an infinite list of random numbers? No problem.
import Data.List (unfoldr)
import System.Random (Random, random, newStdGen)

newRandoms :: Random a => IO [a]
newRandoms = unfoldr (\g -> Just (random g)) <$> newStdGen
Now I can just pass those to some algorithm:
>>> rints <- newRandoms :: IO [Int]
(0.00 secs, 60,624 bytes)
>>> algo'' 5 rints
[5,5,5,5]
(0.00 secs, 68,920 bytes)
For a small program which is just reading input from a file or two, there's no problem
with just using readFile and lazy I/O to get a list to operate on.
For example
>>> let grep pat lines = [ line | line <- lines, pat `isInfixOf` line ]
>>> :set +s
>>> dict <- lines <$> readFile "/usr/share/dict/words"
(0.01 secs, 81,504 bytes)
>>> grep "poop" dict
["apoop","epoophoron","nincompoop","nincompoopery","nincompoophood","nincompoopish","poop","pooped","poophyte","poophytic","whisterpoop"]
(0.72 secs, 423,650,152 bytes)

Optimize a list function that creates too much garbage (not stack overflow)

I have a Haskell function that's causing more than 50% of all the allocations of my program, and 60% of my run time is taken by the GC. I run with a small stack (-K10K), so there is no stack overflow, but can I make this function faster, with less allocation?
The goal here is to calculate the product of a matrix and a vector. I cannot use hmatrix, for example, because this is part of a bigger function using the ad automatic differentiation package, so I need to use lists of Num. At runtime I suppose the use of the Numeric.AD module means my types must be Scalar Double.
listMProd :: (Num a) => [a] -> [a] -> [a]
listMProd mdt vdt = go mdt vdt 0
  where
    go [] _ s           = [s]
    go ls [] s          = s : go ls vdt 0
    go (y:ys) (x:xs) ix = go ys xs (y*x+ix)
Basically we loop through the matrix, multiplying and adding into an accumulator until we reach the end of the vector, storing the result, then restarting with the vector again. I have a QuickCheck test verifying that I get the same result as the matrix/vector product in hmatrix.
I have tried with foldl, foldr, etc. Nothing I've tried makes the function faster (and some things, like foldr, cause a space leak).
Running with profiling tells me, on top of the fact that this function is where most of the time and allocation is spent, that there are loads of Cells being created, Cells being a data type from the ad package.
A simple test to run:
import Numeric.AD

main = do
  let m = replicate 400 0.2 :: [Double]
      v = replicate 4 0.1 :: [Double]
      mycost v m = sum $ listMProd m v
      mygrads = gradientDescent (mycost (map auto v)) (map auto m)
  print $ mygrads !! 1000
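For reference, the GC percentage quoted below comes from GHC's standard RTS statistics; assuming the test above is saved as Main.hs, it can be reproduced with:

ghc -O2 Main.hs
./Main +RTS -s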
This on my machine tells me GC is busy 47% of the time.
Any ideas?
A very simple optimization is to make the go function strict in its accumulator parameter, because the accumulator is small, can be unboxed if a is a primitive type, and always needs to be fully evaluated:
{-# LANGUAGE BangPatterns #-}

listMProd :: (Num a) => [a] -> [a] -> [a]
listMProd mdt vdt = go mdt vdt 0
  where
    go [] _ !s           = [s]
    go ls [] !s          = s : go ls vdt 0
    go (y:ys) (x:xs) !ix = go ys xs (y*x+ix)
On my machine, it gives 3-4x speedup (compiled with -O2).
On the other hand, the intermediate lists shouldn't be made strict, so that they can still be fused.
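The same strictness can be written without the extension by forcing the accumulator with seq; a sketch of the identical idea:

-- Equivalent strictness via seq instead of BangPatterns:
listMProd' :: (Num a) => [a] -> [a] -> [a]
listMProd' mdt vdt = go mdt vdt 0
  where
    go [] _ s          = s `seq` [s]
    go ls [] s         = s `seq` (s : go ls vdt 0)
    go (y:ys) (x:xs) s = let s' = y*x + s in s' `seq` go ys xs s'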

Efficient summation in OCaml

Please note I am almost a complete newbie in OCaml. In order to learn a bit, and test its performance, I tried to implement a module that approximates Pi using the Leibniz series.
My first attempt led to a stack overflow (the actual error, not this site). Knowing from Haskell that this may come from too many "thunks", or promises to compute something, while recursing over the addends, I looked for some way of keeping just the last result while summing with the next. I found the following tail-recursive implementations of sum and map in the notes of an OCaml course, here and here, and expected the compiler to produce an efficient result.
However, the resulting executable, compiled with ocamlopt, is much slower than a C++ version compiled with clang++. Is this code as efficient as possible? Is there some optimization flag I am missing?
My complete code is:
let (--) i j =
let rec aux n acc =
if n < i then acc else aux (n-1) (n :: acc)
in aux j [];;
let sum_list_tr l =
let rec helper a l = match l with
| [] -> a
| h :: t -> helper (a +. h) t
in helper 0. l
let rec tailmap f l a = match l with
| [] -> a
| h :: t -> tailmap f t (f h :: a);;
let rev l =
let rec helper l a = match l with
| [] -> a
| h :: t -> helper t (h :: a)
in helper l [];;
let efficient_map f l = rev (tailmap f l []);;
let summand n =
let m = float_of_int n
in (-1.) ** m /. (2. *. m +. 1.);;
let pi_approx n =
4. *. sum_list_tr (efficient_map summand (0 -- n));;
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (pi_approx n);;
Just for reference, here are the measured times on my machine:
❯❯❯ time ocaml/main 10000000
3.14159275359
ocaml/main 10000000 3,33s user 0,30s system 99% cpu 3,625 total
❯❯❯ time cpp/main 10000000
3.14159
cpp/main 10000000 0,17s user 0,00s system 99% cpu 0,174 total
For completeness, let me state that the first helper function, an equivalent to Python's range, comes from this SO thread, and that this is run using OCaml version 4.01.0, installed via MacPorts on a Darwin 13.1.0.
As I noted in a comment, OCaml's floats are boxed, which puts OCaml at a disadvantage compared to Clang.
However, I may be noticing another rough edge that is typical when trying OCaml after Haskell:
looking at what your program does, you create a list of stuff, then map a function over that list, and finally fold it into a result.
In Haskell, you could more or less expect such a program to be automatically "deforested" at compile time, so that the generated code is an efficient implementation of the task at hand.
In OCaml, the fact that functions can have side effects, and in particular functions passed to higher-order functions such as map and fold, means that it is much harder for the compiler to deforest automatically. The programmer has to do it by hand.
In other words: stop building huge short-lived data structures such as 0 -- n and (efficient_map summand (0 -- n)). When your program decides to tackle a new summand, make it do all it wants to do with that summand in a single pass. You can see this as an exercise in applying the principles in Wadler's article (again, by hand, because for various reasons the compiler will not do it for you despite your program being pure).
Here are some results:
$ ocamlopt v2.ml
$ time ./a.out 1000000
3.14159165359
real 0m0.020s
user 0m0.013s
sys 0m0.003s
$ ocamlopt v1.ml
$ time ./a.out 1000000
3.14159365359
real 0m0.238s
user 0m0.204s
sys 0m0.029s
v1.ml is your version. v2.ml is what you might consider an idiomatic OCaml version:
let rec q_pi_approx p n acc =
if n = p
then acc
else q_pi_approx (succ p) n (acc +. (summand p))
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx 0 n 0.));;
(reusing summand from your code)
It might be more accurate to sum from the last terms to the first, instead of from the first to the last. This is orthogonal to your question, but you may consider it as an exercise in modifying a function that has been forcefully made tail-recursive. Besides, the (-1.) ** m expression in summand is mapped by the compiler to a call to the pow() function on the host, and that's a bag of hurt you may want to avoid.
I've also tried several variants; here are my conclusions:
- using arrays
- using recursion
- using an imperative loop
The recursive function is about 30% more efficient than the array implementation. The imperative loop is approximately as efficient as recursion (maybe even a little slower).
Here are my implementations:
Array:
open Core.Std
let pi_approx n =
let f m = (-1.) ** m /. (2. *. m +. 1.) in
let qpi = Array.init n ~f:Float.of_int |>
Array.map ~f |>
Array.reduce_exn ~f:(+.) in
qpi *. 4.0
Recursion:
let pi_approx n =
let rec loop n acc m =
if m = n
then acc *. 4.0
else
let acc = acc +. (-1.) ** m /. (2. *. m +. 1.) in
loop n acc (m +. 1.0) in
let n = float_of_int n in
loop n 0.0 0.0
This can be further optimized by moving the local function loop outside, so that the compiler can inline it.
Imperative loop:
let pi_approx n =
let sum = ref 0. in
for m = 0 to n -1 do
let m = float_of_int m in
sum := !sum +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. !sum
But in the code above, creating a ref for the sum incurs boxing/unboxing on each step; we can optimize this further using the float_ref trick:
type float_ref = { mutable value : float}
let pi_approx n =
let sum = {value = 0.} in
for m = 0 to n - 1 do
let m = float_of_int m in
sum.value <- sum.value +. (-1.) ** m /. (2. *. m +. 1.)
done;
4.0 *. sum.value
Scoreboard (relative speed, higher is better):
for-loop (with float_ref) : 1.0
non-local recursion : 0.89
local recursion : 0.86
Pascal's version : 0.77
for-loop (with float ref) : 0.62
array : 0.47
original : 0.08
Update
I've updated the answer, as I've found a way to get a 40% speedup (or 33% in comparison with @Pascal's answer).
I would like to add that although floats are boxed in OCaml, float arrays are unboxed. Here is a program that builds a float array corresponding to the Leibniz sequence and uses it to approximate π:
open Array
let q_pi_approx n =
let summand n =
let m = float_of_int n
in (-1.) ** m /. (2. *. m +. 1.) in
let a = Array.init n summand in
Array.fold_left (+.) 0. a
let n = int_of_string Sys.argv.(1);;
Printf.printf "%F\n" (4. *. (q_pi_approx n));;
Obviously, it is still slower than code that doesn't build any data structure at all. Execution times (the version with the array is the last one):
time ./v1 10000000
3.14159275359
real 0m2.479s
user 0m2.380s
sys 0m0.104s
time ./v2 10000000
3.14159255359
real 0m0.402s
user 0m0.400s
sys 0m0.000s
time ./a 10000000
3.14159255359
real 0m0.453s
user 0m0.432s
sys 0m0.020s

Batching actions for caching and performance while avoiding the dirty work

Say I have two pure but unsafe functions that do the same thing, but one of them works on batches and is asymptotically faster:
f :: Int -> Result -- takes O(1) time
f = unsafePerformIO ...
g :: [Int] -> [Result] -- takes O(log n) time
g = unsafePerformIO ...
A naive implementation:
getUntil :: Int -> [Result]
getUntil 0 = [f 0]
getUntil n = f n : getUntil (n-1)
switch is the n value where g gets cheaper than f.
getUntil will in practice be called with ever increasing n, but it might not start at 0. So since the Haskell runtime can memoize getUntil, performance will be optimal if getUntil is called with an interval lower than switch. But once the interval gets larger, this implementation is slow.
In an imperative program, I guess I would make a TreeMap (which could quickly be checked for gaps) for caching all calls. On cache misses, it would be filled with the results of g if the gap was greater than switch in length, and with f otherwise.
How can this be optimized in Haskell?
I think I am just looking for:
- an ordered map filled on demand, using a fill function that fills all values up to the requested index, with one function if the missing range is small and another if it is large
- a get operation on the map which returns a list of all lower values up to the requested index; this would result in a function similar to getUntil above
A sketch of this idea is given below.
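A minimal sketch of that idea, assuming the f, g, switch and Result from the question, and using an IORef for the cache rather than pure memoization (hypothetical code, ignoring concurrency):

import qualified Data.Map.Strict as M
import Data.IORef

-- Hypothetical cache: fill all missing indices up to n, choosing
-- the batch function g when the gap is large, f otherwise.
getUntilCached :: IORef (M.Map Int Result) -> Int -> IO [Result]
getUntilCached ref n = do
  cache <- readIORef ref
  let missing = [ k | k <- [0 .. n], k `M.notMember` cache ]
      fills
        | length missing > switch = zip missing (g missing)
        | otherwise               = [ (k, f k) | k <- missing ]
      cache' = M.union cache (M.fromList fills)
  writeIORef ref cache'
  return [ cache' M.! k | k <- [0 .. n] ]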
I'll elaborate on my proposal for using a map, after some tests I just ran.
import System.IO
import System.IO.Unsafe
import Control.Concurrent
import Control.Monad

switch :: Int
switch = 1000

f :: Int -> Int
f x = unsafePerformIO $ do
  threadDelay $ 500 * x
  putStrLn $ "Calculated from scratch: f(" ++ show x ++ ")"
  return $ 500*x

g :: Int -> Int
g x = unsafePerformIO $ do
  threadDelay $ x*x `div` 2
  putStrLn $ "Calculated from scratch: g(" ++ show x ++ ")"
  return $ x*x `div` 2

cachedFG :: [Int]
cachedFG = map g [0 .. switch] ++ map f [switch+1 ..]

main :: IO ()
main = forever $ getLine >>= print . (cachedFG !!) . read
… where f, g and switch have the same meaning indicated in the question.
The above program can be compiled as is using GHC. When executed, positive integers can be entered, followed by a newline, and the application will print some value based on the number entered, plus an extra indication of which values are being calculated from scratch.
A short session with this program is:
User: 10000
Program: Calculated from scratch: f(10000)
Program: 5000000
User: 10001
Program: Calculated from scratch: f(10001)
Program: 5000500
User: 10000
Program: 5000000
^C
The program has to be killed/terminated manually.
Notice that the last value entered doesn't show a "calculated from scratch" message. This indicates that the program has the value cached/memoized somewhere. You can try executing this program yourself, but take into account that threadDelay's lag is proportional to the value entered.
The getUntil function then could be implemented using:
getUntil :: Int -> [Int]
getUntil n = take n cachedFG
or:
getUntil :: Int -> [Int]
getUntil = flip take cachedFG
If you don't know the value for switch, you can try evaluating f and g in parallel and use the fastest result, but that's another show.
