Infinite/Lazy Reservoir Sampling in Haskell

I tried to implement a simple reservoir sampling in Haskell, following http://jeremykun.com/2013/07/05/reservoir-sampling/ (note that the algorithm shown there is possibly semantically incorrect).
According to this: Iterative or Lazy Reservoir Sampling, lazy reservoir sampling is impossible unless you know the population size ahead of time.
Even so, I don't understand why (operationally speaking) the sampleReservoir below doesn't work on infinite lists. Where exactly is laziness broken?
import System.Random (randomRIO)

-- equivalent to Python's enumerate
enumerate :: (Num i, Enum i) => i -> [e] -> [(i, e)]
enumerate start = zip [start..]

sampleReservoir :: [e] -> IO [e]
sampleReservoir stream =
  foldr
    (\(i, e) reservoir -> do
      -- randomRIO gets confused about 0.0 and 1.0
      r <- randomRIO (0.0, 1.0) :: IO Double
      if r < (1.0 / fromIntegral i)
        then fmap (e:) reservoir
        else reservoir)
    (return [])
    (enumerate 1 stream)
The challenge and test is fmap (take 1) $ sampleReservoir [1..].
Furthermore, if reservoir sampling can't be lazy, what can take in a lazy list and produce a sampled lazy list?
I get the idea that there must be a way of making the above function lazy in the output as well, because I could change this:
if r < (1.0 / fromIntegral i) then
  fmap (e:) reservoir
else
To:
if r < (1.0 / fromIntegral i) then
  do
    print e
    fmap (e:) reservoir
This shows results as the function iterates over the list. Using a coroutine abstraction, perhaps instead of print e there could be a yield e, with the rest of the computation held as a continuation.

The problem is that the IO monad maintains a strict sequence between actions. Writing fmap (e:) reservoir will first execute all of the effects associated with reservoir, which will be infinite if the input list is infinite.
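To see this concretely, here is a stripped-down sketch (hypothetical, but the same shape as the fold above); the bind on the recursive result must complete before the head can ever be produced:
-- Forcing even the first element of the result demands the entire
-- (here infinite) chain of effects, so `fmap (take 1)` never returns.
countForever :: IO [Int]
countForever = go 1
  where
    go i = do
      rest <- go (i + 1)  -- all later effects are sequenced first
      return (i : rest)   -- so this cons is never reached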
I was able to fix this with liberal use of unsafeInterleaveIO, which allows you to break the semantics of IO:
import System.IO.Unsafe (unsafeInterleaveIO)

sampleReservoir2 :: [e] -> IO [e]
sampleReservoir2 stream =
  foldr
    (\(i, e) reservoir -> do
      -- randomRIO gets confused about 0.0 and 1.0
      r <- unsafeInterleaveIO (randomRIO (0.0, 1.0) :: IO Double)
      if r < (1.0 / fromIntegral i)
        then unsafeInterleaveIO $ do
          rr <- reservoir
          return (e:rr)
        else reservoir)
    (return [])
    (enumerate 1 stream)
Obviously, this will allow the interleaving of IO actions, but since all you're doing is generating random numbers it shouldn't matter. However, this solution isn't very satisfactory; the correct solution is to refactor your code somewhat: generate an infinite list of random numbers, then consume that infinite list lazily with foldr:
import Control.Monad.Random (MonadRandom, getRandomRs)

sampleReservoir3 :: MonadRandom m => [a] -> m [a]
sampleReservoir3 stream = do
  ws <- getRandomRs (0, 1 :: Double)
  return $
    foldr
      (\(w, (i, e)) reservoir ->
        (if w < (1 / fromIntegral i) then (e:) else id) reservoir)
      []
      (zip ws $ enumerate 1 stream)
This can also (equivalently) be written as
import System.Random (newStdGen, randomRs)

sampleReservoir4 :: [a] -> IO [a]
sampleReservoir4 stream = do
  seed <- newStdGen
  let ws = randomRs (0, 1 :: Double) seed
  return $
    foldr
      (\(w, (i, e)) reservoir ->
        (if w < (1 / fromIntegral i) then (e:) else id) reservoir)
      []
      (zip ws $ enumerate 1 stream)
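With the fold now pure and the weights consumed lazily, taking a prefix of the sample from an infinite input terminates (with probability 1 - this is a sanity check, not a proof):
main :: IO ()
main = do
  s <- sampleReservoir4 ([1 ..] :: [Int])
  print (take 3 s)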
As an aside, I'm not sure as to the correctness of the algorithm, since it seems to always return the first element of the input list first. Not very random.
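The suspicion is justified: element i is kept with probability 1/i, so the first element (probability 1/1) is effectively always kept. For reference, the classical fixed-size reservoir (Vitter's Algorithm R) keeps k elements and replaces a uniformly chosen slot with probability k/i; a minimal sketch, which necessarily consumes the whole (finite) input:
import System.Random (randomRIO)

-- Keep the first k elements, then replace a uniformly chosen slot
-- with element i with probability k/i.
reservoirR :: Int -> [a] -> IO [a]
reservoirR k xs = do
  let (seed, rest) = splitAt k xs
  go (zip [k + 1 ..] rest) seed
  where
    go [] res = return res
    go ((i, x) : ixs) res = do
      j <- randomRIO (1, i)  -- uniform index in [1, i]
      let res'
            | j <= k    = take (j - 1) res ++ x : drop j res
            | otherwise = res
      go ixs res'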

Related

Why is `filterM + mapM_` so much slower than `mapM_ + when`, with large lists?

I don't know very much about how Haskell optimization works internally, but I've been using filters quite a lot, hoping that they are optimized into something equivalent to a simple if in C++. For example,
mapM_ print $ filter (\n -> n `mod` 2 == 0) [0..10]
will compile into the equivalent of
for (int i = 0; i <= 10; i++)
    if (i % 2 == 0)
        printf("%d\n", i);
With long lists (10,000,000 elements) this seems to be true for a basic filter, but there is a huge difference if I use the monadic filterM. I wrote a piece of code for speed testing, and it's obvious that the filterM version takes much longer (about 250×) than a more imperative approach using when.
import Data.Array.IO
import Control.Monad
import System.CPUTime

main :: IO ()
main = do
  start <- getCPUTime
  arr <- newArray (0, 100) 0 :: IO (IOUArray Int Int)
  let
    okSimple i =
      i < 100
    ok i = do
      return $ i < 100
    -- -- of course we don't need IO for a simple i < 100,
    -- -- but my goal is to ask for the contents of the array, e.g.
    -- ok i = do
    --   current <- readArray arr (i `mod` 101)
    --   return $ i `mod` 37 > current `mod` 37
    write :: Int -> IO ()
    write i =
      writeArray arr (i `mod` 101) i
    writeIfOkSimple :: Int -> IO ()
    writeIfOkSimple i =
      when (okSimple i) $ write i
    writeIfOk :: Int -> IO ()
    writeIfOk i =
      ok i >>= (\isOk -> when isOk $ write i)
  --------------------------------------------------------------------
  ---- these four methods have approximately the same execution time --
  ---- (but the last one is executed on a 250 times shorter list)    --
  --------------------------------------------------------------------
  -- mapM_ write $ filter okSimple [0..10000000*250]  -- t = 20.694
  -- mapM_ writeIfOkSimple [0..10000000*250]          -- t = 20.698
  -- mapM_ writeIfOk [0..10000000*250]                -- t = 20.669
  filterM ok [0..10000000] >>= mapM_ write            -- t = 17.200
  -- evaluate array
  elems <- getElems arr
  print $ sum elems
  end <- getCPUTime
  print $ fromIntegral (end - start) / (10 ^ 12)
My question is: shouldn't both approaches (using writeIfOk / using filterM ok followed by write) compile to the same code (iterate over the list, check the condition, write the data)? If not, can I do something (rewrite the code, add compilation flags, use an inline pragma, or something else) to make them computationally equivalent, or should I always use when when performance is critical?
Boiling this question down to its essence, you're asking about the difference between
f (filter g xs)
and
f =<< filterM (pure . g) xs
This basically comes down to laziness. filter g xs produces its result incrementally as it's demanded, only walking xs far enough to find the next element of the result. filterM is defined something like this:
filterM _p [] = pure []
filterM p (x : xs) =
  liftA2
    (\q r -> if q then x : r else r)
    (p x)
    (filterM p xs)
Since IO is a "strict" applicative, this will not produce anything at all until it's walked the whole list, accumulating the p x results in memory.
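If you need an effectful test but don't need the intermediate list, one fix is to fuse the test with the consumer, which is essentially what the writeIfOk variant in the question does. A minimal sketch, with a hypothetical helper name:
import Control.Monad (when)

-- Run the monadic predicate and the action one element at a time,
-- so no result list is ever accumulated.
filterMapM_ :: Monad m => (a -> m Bool) -> (a -> m ()) -> [a] -> m ()
filterMapM_ p act = mapM_ step
  where
    step x = do
      keep <- p x
      when keep (act x)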

Is runInBoundThread the best tool for parallelism?

Say, I want to fold monoids in parallel. My computer has 8 cores. I have this function to split a list into equal-sized smaller lists (with bounded modulo-bias):
import Data.List

parallelize :: Int -> [a] -> [[a]]
parallelize 0 _  = []
parallelize n [] = replicate n []
parallelize n xs =
  let (us, vs) = splitAt (quot (length xs) n) xs
  in  us : parallelize (n - 1) vs
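For example, a quick check of the chunking (GHCi):
>>> parallelize 3 [1 .. 10 :: Int]
[[1,2,3],[4,5,6],[7,8,9,10]]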
The first version of parallel fold I made was:
import Control.Concurrent
import Control.Concurrent.QSemN
import Data.Foldable
import Data.IORef

foldP :: Monoid m => [m] -> IO m
foldP xs = do
  result <- newIORef mempty
  sem <- newQSemN 0
  n <- getNumCapabilities
  let yss = parallelize n xs
  -- note: modifyIORef is not atomic, so these concurrent updates can race
  for_ yss (\ys -> forkIO (modifyIORef result (fold ys <>) >> signalQSemN sem 1))
  waitQSemN sem n
  readIORef result
But usage of IORefs and semaphores seemed ugly to me. So I made another version:
import Data.Traversable

foldP :: Monoid m => [m] -> IO m
foldP xs = do
  n <- getNumCapabilities
  let yss = parallelize n xs
  rs <- for yss (\ys -> runInUnboundThread (return (fold ys)))
  return (fold rs)
The test code I used is:
import Data.Monoid
import System.CPUTime
main :: IO ()
main = do
start <- getCPUTime
Product result <- foldP (fmap Product [1 .. 100])
end <- getCPUTime
putStrLn ("Time took: " ++ show (end - start) ++ "ps.")
putStrLn ("Result: " ++ show result)
The second version of foldP outperformed the first version. When I used runInBoundThread instead of runInUnboundThread, it became even faster.
What causes these performance differences?
TL;DR: use the fold function from the massiv package and you will likely get the most efficient solution in Haskell.
I would like to start by saying that the first thing people forget when trying to implement concurrent patterns like this is exception handling. In the solution from the question the exception handling is non-existent, and therefore it is totally wrong. I'd recommend using existing implementations of common concurrency patterns: async is the go-to library for concurrency, but for this use case it will not be the most efficient solution.
This particular example can easily be solved with the scheduler package; in fact, it is exactly the kind of thing it was designed for. Here is how you can use it to fold monoids:
import Control.Scheduler
import Control.Monad.IO.Unlift
import Data.Foldable (fold)

foldP :: (MonadUnliftIO m, Monoid n) => Comp -> [n] -> m n
foldP comp xs = do
  rs <-
    withScheduler comp $ \scheduler ->
      mapM_ (scheduleWork scheduler . pure . fold) (parallelize (numWorkers scheduler) xs)
  pure $ fold rs
See the Comp type for an explanation of the parallelization strategies. In practice I've found that Par usually works best, because it uses pinned threads created with forkOn.
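For instance, a hypothetical usage of this foldP in IO, folding products with the Par strategy (demo is an illustrative name):
import Data.Monoid (Product (..))

demo :: IO ()
demo = do
  Product r <- foldP Par (map Product [1 .. 100 :: Integer])
  print r  -- the product 1 * 2 * ... * 100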
Note that the parallelize function is implemented inefficiently and somewhat dangerously as well; it is better to write it this way:
parallelize :: Int -> [a] -> [[a]]
parallelize n' xs' = go 0 id xs'
  where
    n = max 1 n'
    -- at least two elements per chunk make sense to get any benefit
    -- from a parallel fold
    k = max 2 $ quot (length xs') n
    go i acc xs
      | null xs = acc []
      | i < n =
          case splitAt k xs of
            (ls, rs) -> go (i + 1) (acc . (ls :)) rs
      | otherwise = acc . (xs :) $ []
One more bit of advice: a list is far from an ideal data structure for parallelization, or for efficiency in general. To split the list into chunks before parallelizing the computation, you already have to traverse the whole structure with parallelize, which could be avoided if you were using an array. What I am getting at is: use an array instead, as suggested at the beginning of this answer.

Haskell - how to avoid messing pure with IO

I am implementing an algorithm in Haskell. The algorithm requires generating some data.
I have a function for the algorithm which takes a generation function as a parameter. For example, the algorithm just multiplies the input data by n:
algo :: a -> ??? -> [a]
algo n dgf = map (\x -> x * n) $ dgf
dgf is used to generate data. How do I write the function header correctly, given that dgf can be any function with any number of parameters?
Another variant is to accept not the generation function but already generated data:
algo :: a -> [b] -> [a]
algo n d = map (\x -> n * x) d
So now let's imagine I'm generating data with stdGen, which uses IO. How can I make the function more generic, so that it could accept both IO values and plain values like [1,2,3]? This also relates to the variant with a function, as it can also produce IO.
All in all, which solution is better: passing a generation function or pre-generated data?
Thanks in advance.
One option is to take a stream rather than a list. If generating the values involves performing IO, and there may be many many values, this is often the best approach. There are several packages that offer streams of some sort, but I'll use the streaming package in this example.
import qualified Streaming.Prelude as S
import Streaming
algo :: (Monad m, Num a) => a -> Stream (Of a) m r -> Stream (Of a) m r
algo a = S.map (a +)
You can read Stream (Of a) m r as "a way to use operations in m to produce successive values of type a and finally a result of type r". This algo function doesn't commit to any particular way of generating the data; they can be created purely:
algo a (S.each [these, are, my, elements])
or within IO,
algo a $ S.takeWhile (> 3) (S.readLn :: Stream (Of Int) IO ())
or using a randomness monad, or whatever you like.
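For instance, a small end-to-end check (demo is an illustrative name), using S.each to build the stream purely and S.print to consume it in IO:
demo :: IO ()
demo = S.print (algo 10 (S.each [1, 2, 3 :: Int]))
-- prints 11, 12 and 13 on separate lines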
For contrast, I'm going to take the opposite approach to dfeuer's answer.
Just use lists.
Consider your first example:
algo :: a -> ??? -> [a]
algo n dgf = map (\x -> x * n) $ dgf
You ask "How to write function header correctly, as dgf can be any function with any number of parameters?"
Well, one way is to use uncurrying.
Normally, Haskell functions are curried. If we have a function like
add :: Int -> Int -> Int
add x y = x + y
And if we want a function that adds two to its input, we can just use add 2.
>>> map (add 2) [1..10]
[3,4,5,6,7,8,9,10,11,12]
Because add is not actually a function that takes two arguments,
it's a function of one argument that returns a function of one argument.
We could have added parentheses to the argument of add above to make this more clear:
add :: Int -> (Int -> Int)
In Haskell, all functions are functions of one argument.
However, we can also go the other way - uncurry a function
that returns a function to get a function that takes a pair:
>>> :t uncurry
uncurry :: (a -> b -> c) -> (a, b) -> c
>>> :t uncurry add
uncurry add :: (Int, Int) -> Int
This can also be useful, say if we want to find the sum of each pair in a list:
>>> map (uncurry add) [ (1,2), (3,4), (5,6), (7,8), (9,10) ]
[3,7,11,15,19]
In general, we can uncurry any function of type a0 -> a1 -> ... -> aN -> b into a function (a0, a1, ..., aN) -> b, though there might not be a cute library function to do it for us.
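For three arguments, for example, we can write one by hand:
uncurry3 :: (a -> b -> c -> d) -> (a, b, c) -> d
uncurry3 f (a, b, c) = f a b c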
With that in mind, we could implement algo by passing it an uncurried
function and a tuple of values:
algo :: Num a => a -> (t -> [a]) -> t -> [a]
algo n f t = map (\x -> x * n) $ f t
And then use anonymous functions to uncurry our argument functions:
>>> algo 2 (\(lo,hi) -> enumFromTo lo hi) (5, 10)
[10,12,14,16,18,20]
>>> algo 3 (\(a,b,c,d) -> zipWith (+) [a..b] [c..d]) (1, 5, 10, 14)
[33,39,45,51,57]
Now we could do it this way, but we don't need to. As implemented above,
algo is only using f and t once. So why not pass it the list directly?
algo' :: Num a => a -> [a] -> [a]
algo' n ns = map (\x -> x * n) ns
It calculates the same results:
>>> algo' 2 $ (\(lo,hi) -> enumFromTo lo hi) (5, 10)
[10,12,14,16,18,20]
>>> algo' 2 $ enumFromTo 5 10
[10,12,14,16,18,20]
>>> algo' 3 $ (\(a,b,c,d) -> zipWith (+) [a..b] [c..d]) (1, 5, 10, 14)
[33,39,45,51,57]
>>> algo' 3 $ zipWith (+) [1..5] [10..14]
[33,39,45,51,57]
Furthermore, since Haskell is non-strict, the argument to algo' isn't evaluated
until it's actually used, so we don't have to worry about "wasting" time computing
arguments that won't actually be used:
algo'' :: Num a => a -> [a] -> [a]
algo'' n ns = [n,n,n,n]
algo'' doesn't use the list passed to it, so it's never forced, so whatever
computation is used to calculate it never runs:
>>> let isPrime n = n > 2 && null [ i | i <- [2..n-1], n `rem` i == 0 ]
>>> :set +s
>>> isPrime 10000019
True
(6.18 secs, 2,000,067,648 bytes)
>>> algo'' 5 (filter isPrime [1..999999999999999])
[5,5,5,5]
(0.01 secs, 68,936 bytes)
Now to the second part of your question - what if your data is being generated within some monad?
Rather than convincing algo to operate on monadic values, you could take the stream-based approach as dfeuer explains. Or you could just use a list.
Just because you're in a monad, doesn't mean that your values suddenly become strict.
For example, want an infinite list of random numbers? No problem.
-- needs: import Data.List (unfoldr); import System.Random
newRandoms :: Random a => IO [a]
newRandoms = unfoldr (\g -> Just (random g)) <$> newStdGen
Now I can just pass those to some algorithm:
>>> rints <- newRandoms :: IO [Int]
(0.00 secs, 60,624 bytes)
>>> algo'' 5 rints
[5,5,5,5]
(0.00 secs, 68,920 bytes)
For a small program which is just reading input from a file or two, there's no problem
with just using readFile and lazy I/O to get a list to operate on.
For example
>>> let grep pat lines = [ line | line <- lines, pat `isInfixOf` line ]
>>> :set +s
>>> dict <- lines <$> readFile "/usr/share/dict/words"
(0.01 secs, 81,504 bytes)
>>> grep "poop" dict
["apoop","epoophoron","nincompoop","nincompoopery","nincompoophood","nincompoopish","poop","pooped","poophyte","poophytic","whisterpoop"]
(0.72 secs, 423,650,152 bytes)

Speeding up a stream like data type

I've made a type which is supposed to emulate a "stream". This is basically a list without memory.
{-# LANGUAGE ExistentialQuantification #-}

data Stream a = forall s. Stream (s -> Maybe (a, s)) s
Basically a stream has two parts: a state s, and a function that takes the state and returns an element of type a along with the new state.
I want to be able to perform operations on streams, so I've imported Data.Foldable and defined streams on it as such:
import Data.Foldable

instance Foldable Stream where
  foldr k z (Stream sf s) = go (sf s)
    where
      go Nothing        = z
      go (Just (e, ns)) = e `k` go (sf ns)
To test the speed of my stream, I've defined the following function:
mysum = foldl' (+) 0
And now we can compare the speed of ordinary lists and my stream type:
x1 = [1..n]
x2 = Stream (\s -> if (s == n + 1) then Nothing else Just (s, s + 1)) 1
--main = print $ mysum x1
--main = print $ mysum x2
My streams are about half the speed of lists (full code here).
Furthermore, here's a best case situation, without a list or a stream:
bestcase :: Int
bestcase = go 1 0
  where
    go i c = if i == n then c + i else go (i + 1) (c + i)
This is a lot faster than both the list and stream versions.
So I've got two questions:
How do I get my stream version to be at least as fast as a list?
How do I get my stream version to be close to the speed of bestcase?
As it stands, the foldl' you are getting from Foldable is defined in terms of the foldr you gave it. The default implementation is the brilliant and surprisingly good
foldl' :: (b -> a -> b) -> b -> t a -> b
foldl' f z0 xs = foldr f' id xs z0
  where f' x k z = k $! f z x
But foldl' is the specialty of your type; fortunately the Foldable class includes foldl' as a method, so you can just add this to your instance.
-- the strict loop needs {-# LANGUAGE BangPatterns #-}
foldl' op acc0 (Stream sf s0) = loop s0 acc0
  where
    loop !s !acc = case sf s of
      Nothing      -> acc
      Just (a, s') -> loop s' (op acc a)
For me this seems to give about the same time as bestcase.
Note that this is a standard case where we need a strictness annotation on the accumulator. You might look at the vector package's treatment of a similar type https://hackage.haskell.org/package/vector-0.10.12.2/docs/src/Data-Vector-Fusion-Stream.html for some ideas, or at the hidden 'fusion' modules of the text library https://github.com/bos/text/blob/master/Data/Text/Internal/Fusion .
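In that fusion style, other list operations get Stream counterparts that compose without materializing anything in between; a minimal sketch of map (hypothetical name mapS):
-- wrap the step function; no intermediate structure is built
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream step s) = Stream step' s
  where
    step' s0 = case step s0 of
      Nothing      -> Nothing
      Just (a, s1) -> Just (f a, s1)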

Precise flow control in Haskell

The Idea
Hello! I'm trying to implement an image processing library in Haskell, based on a dataflow ideology. I've got a problem connected to how I want to handle the flow of control.
The main idea is to introduce a time. The time is a Float, which could be accessed anywhere in the code (you can think of it as something like the State monad, but a little funnier). The funny thing about it is that we can use a timeShift operation on results, affecting the time the corresponding operations would see.
An example would be best to explain this situation. Let's use the following dataflow diagram:
-- timeShift(*2) --
-- / \
-- readImage -- addImages -> out
-- \ /
-- blur ----------
and its pseudocode (which does not typecheck; it's not important whether we use do or let notation here, the idea should be clear):
test = do
  f <- frame
  a <- readImage $ "test" + show f + ".jpg"
  aBlur <- blur a
  a' <- a.timeShift(*2)
  out <- addImage aBlur a'

main = print =<< runStateT test 5
The 5 is the time we want to run the test function with. The timeShift function affects all the operations to its left (in the dataflow diagram) - in this case the function readImage would be run twice, once for each branch: the lower one would use frame 5 and the upper one frame 5*2 = 10.
The problem
I'm providing here a very simple implementation that works great, but has some caveats I want to solve. The problem is that I want to keep the order of all IO operations. Look at the bottom for an example, which will clarify what I mean.
Sample implementation
Below is a sample implementation of the algorithm, along with code which constructs the following dataflow graph:
-- A --- blur --- timeShift(*2) --
-- \
-- addImages -> out
-- /
-- B --- blur --------------------
the code:
import Control.Monad.State

-- for simplicity, let's assume an Image is just a String
type Image = String

imagesStr = ["a0","b1","c2","d3","e4","f5","g6","h7","i8","j9","k10","l11","m12","n13","o14","p15","q16","r17","s18","t19","u20","v21","w22","x23","y24","z25"]
images = "abcdefghjiklmnoprstuwxyz"

--------------------------------
-- Ordinary Image processing functions

blurImg' :: Image -> Image
blurImg' img = "(blur " ++ img ++ ")"

addImage' :: Image -> Image -> Image
addImage' img1 img2 = "(add " ++ img1 ++ " " ++ img2 ++ ")"

--------------------------------
-- Functions processing Images in States

readImage1 :: StateT Int IO Image
readImage1 = do
  t <- get
  liftIO . putStrLn $ "[1] reading image with time: " ++ show t
  return $ imagesStr !! t

readImage2 :: StateT Int IO Image
readImage2 = do
  t <- get
  liftIO . putStrLn $ "[2] reading image with time: " ++ show t
  return $ imagesStr !! t

blurImg :: StateT Int IO Image -> StateT Int IO Image
blurImg img = do
  i <- img
  liftIO $ putStrLn "blurring"
  return $ blurImg' i

addImage :: StateT Int IO Image -> StateT Int IO Image -> StateT Int IO Image
addImage img1 img2 = do
  i1 <- img1
  i2 <- img2
  liftIO $ putStrLn "adding images"
  return $ addImage' i1 i2

timeShift :: StateT Int IO Image -> (Int -> Int) -> StateT Int IO Image
timeShift img f = do
  t <- get
  put (f t)
  i <- img
  put t
  return i

test = out
  where
    i1 = readImage1
    j1 = readImage2
    i2 = blurImg i1
    j2 = blurImg j1
    i3 = timeShift i2 (*2)
    out = addImage i3 j2

main = do
  print =<< runStateT test 5
  print "end"
The output is:
[1] reading image with time: 10
blurring
[2] reading image with time: 5
blurring
adding images
("(add (blur k10) (blur f5))",5)
"end"
and should be:
[1] reading image with time: 10
[2] reading image with time: 5
blurring
blurring
adding images
("(add (blur k10) (blur f5))",5)
"end"
Please note that the correct output is ("(add (blur k10) (blur f5))",5) - which means, that we added image k10 to f5 - from respectively 10th and 5th frame.
Further requirements
I'm looking for a solution which would allow users to write simple code (like the test function - it could of course be in a Monad), but I do not want them to handle the time-shifting logic by hand.
Conclusions
The only difference is the order in which the IO actions are executed. I would love to preserve the order of the IO actions just as they are written in the test function. I was trying to implement the idea using continuations, Arrows, and some funny states, but without success.
Dataflow and functional reactive programming libraries in Haskell are usually written in terms of Applicative or Arrow. These are abstractions for computations that are less general than Monads - the Applicative and Arrow typeclasses do not expose a way for the structure of computations to depend on the results of other computations. As a result, libraries exposing only these typeclasses can reason about the structure of computations in the library independently of performing those computations. We will solve your problem in terms of the Applicative typeclass
class Functor f => Applicative f where
  -- | Lift a value.
  pure :: a -> f a
  -- | Sequential application.
  (<*>) :: f (a -> b) -> f a -> f b
Applicative allows a library user to make new computations with pure, operate on existing computations with fmap (from Functor) and compose computations together with <*>, using the result of one computation as an input for another. It does not allow a library user to make a computation that makes another computation and then use the result of that computation directly; there's no way a user can write join :: f (f a) -> f a. This restriction will keep our library from running into the problem I described in my other answer.
Transformers, free, and the ApT transformer
Your example problem is quite involved, so we are going to pull out a bunch of high-level Haskell tricks and make a few new ones of our own. The first two tricks we are going to pull out are transformers and free data types. Transformers are types that take types with a kind like that of Functors, Applicatives, or Monads and produce new types with the same kind.
Transformers typically look like the following Double example. Double can take any Functor or Applicative or Monad and make a version of it that always holds two values instead of one
newtype Double f a = Double {runDouble :: f (a, a)}
Free data types are transformers that do two things. First, given some simple property of the underlying type, they gain new, exciting properties for the transformed type: the Free Monad provides a Monad given any Functor, and the free Applicative, Ap, makes an Applicative out of any Functor. The other thing "free" types do is "free" the implementation of the interpreter as much as possible. Here are the types for the free Applicative, Ap, the free Monad, Free, and the free monad transformer, FreeT. The free monad transformer provides a monad transformer for "free" given a Functor
-- Free Applicative
data Ap f a where
  Pure :: a -> Ap f a
  Ap   :: f a -> Ap f (a -> b) -> Ap f b

-- Base functor of the free monad transformer
data FreeF f a b
  = Pure a
  | Free (f b)

-- Free monad transformer
newtype FreeT f m a = FreeT {runFreeT :: m (FreeF f a (FreeT f m a))}

-- The free monad is the free monad transformer applied to the Identity monad
type Free f = FreeT f Identity
Here's a sketch of our goal - we want to provide an Applicative interface for combining computations which, at the bottom, allows Monadic computations. We want to "free" the interpreter as much as possible so that it can hopefully reorder computations. To do this, we will combine both the free Applicative and the free monad transformer.
We want an Applicative interface, and the easiest one to make is the one we can get for "free", which aligns nicely with our goal of "freeing the interpreter" as much as possible. This suggests our type is going to look like
Ap f a
for some Functor f and any a. We'd like the underlying computation to be over some Monad, and Monads are Functors, but we'd like to "free" the interpreter as much as possible. We'll grab the free monad transformer as the underlying functor for Ap, giving us
Ap (FreeT f m) a
for some Functor f, some Monad m, and any a. We know the Monad m is probably going to be IO, but we'll leave our code as generic as possible. We just need to provide the Functor for FreeT. All Applicatives are Functors, so Ap itself could be used for f; we'd write something like
type ApT m a = Ap (FreeT (ApT m) m) a
This gives the compiler fits, so instead we'll move the Ap inside and define
newtype ApT m a = ApT {unApT :: FreeT (Ap (ApT m)) m a}
We'll derive some instances for this and discuss its real motivation after an interlude.
Interlude
To run all of this code, you'll need the following. The Map and Control.Concurrent are only needed for sharing computations, more on that much later.
{-# LANGUAGE GADTs #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE GeneralizedNewtypeDeriving #-}
module Main where
import Control.Monad.Trans.Class
import Control.Monad.IO.Class
import Control.Monad.Trans.Reader
import Control.Applicative
import Control.Applicative.Free hiding (Pure)
import qualified Control.Applicative.Free as Ap (Ap(Pure))
import Control.Monad.Trans.Free
import qualified Data.Map as Map
import Control.Concurrent
Stuffing it
I misled you in the previous section, and pretended to discover ApT by reasoning about the problem. I actually discovered ApT by trying anything and everything to stuff Monadic computations into an Applicative and be able to control their order when they came out. For a long time, I was trying to work out how to implement mapApM (below) in order to write flipImage (my replacement for your blur). Here's the ApT monad transformer in all its glory. It's intended to be used as the Functor for an Ap, and, by using Ap as its own Functor for FreeT, can magically stuff values into an Applicative in a way that shouldn't seem possible.
newtype ApT m a = ApT {unApT :: FreeT (Ap (ApT m)) m a}
  deriving (Functor, Applicative, Monad, MonadIO)
It could derive even more instances from FreeT; these are just the ones we need. It can't derive MonadTrans, but we can do that ourselves:

instance MonadTrans ApT where
  lift = ApT . lift

runApT :: ApT m a -> m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a))
runApT = runFreeT . unApT
The real beauty of ApT is we can write some seemingly impossible code like
stuffM :: (Functor m, Monad m) => m (ApT m a) -> ApT m a
stuffMAp :: (Functor m, Monad m) => m (ApT m a) -> Ap (ApT m) a
The m on the outside disappears, even into Ap, which is merely Applicative.
This works because of the following cycle of functions, each of which can stuff the output from the function above it into the input of the function below it. The first function starts with an ApT m a, and the last one ends with one. (These definitions aren't part of the program)
liftAp' :: ApT m a ->
           Ap (ApT m) a
liftAp' = liftAp

fmapReturn :: (Monad m) =>
              Ap (ApT m) a ->
              Ap (ApT m) (FreeT (Ap (ApT m)) m a)
fmapReturn = fmap return

free' :: Ap (ApT m) (FreeT (Ap (ApT m)) m a) ->
         FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
free' = Free

pure' :: a ->
         FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
pure' = Pure

return' :: (Monad m) =>
           FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a) ->
           m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a))
return' = return

freeT :: m (FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)) ->
         FreeT (Ap (ApT m)) m a
freeT = FreeT

apT :: FreeT (Ap (ApT m)) m a ->
       ApT m a
apT = ApT
This lets us write
-- Get rid of an Ap by stuffing it into an ApT.
stuffAp :: (Monad m) => Ap (ApT m) a -> ApT m a
stuffAp = ApT . FreeT . return . Free . fmap return

-- Stuff ApT into Free
stuffApTFree :: (Monad m) => ApT m a -> FreeF (Ap (ApT m)) a (FreeT (Ap (ApT m)) m a)
stuffApTFree = Free . fmap return . liftAp

-- Get rid of an m by stuffing it into an ApT
stuffM :: (Functor m, Monad m) => m (ApT m a) -> ApT m a
stuffM = ApT . FreeT . fmap stuffApTFree

-- Get rid of an m by stuffing it into an Ap
stuffMAp :: (Functor m, Monad m) => m (ApT m a) -> Ap (ApT m) a
stuffMAp = liftAp . stuffM
And some utility functions for working on a transformer stack
mapFreeT :: (Functor f, Functor m, Monad m) => (m a -> m b) -> FreeT f m a -> FreeT f m b
mapFreeT f fa = do
  a <- fa
  FreeT . fmap Pure . f . return $ a

mapApT :: (Functor m, Monad m) => (m a -> m b) -> ApT m a -> ApT m b
mapApT f = ApT . mapFreeT f . unApT

mapApM :: (Functor m, Monad m) => (m a -> m b) -> Ap (ApT m) a -> Ap (ApT m) b
mapApM f = liftAp . mapApT f . stuffAp
We'd like to start writing our example image processors, but first we need to take another diversion to address a hard requirement.
A hard requirement - input sharing
Your first example shows
-- timeShift(*2) --
-- / \
-- readImage -- addImages -> out
-- \ /
-- blur ----------
implying that the result of readImage should be shared between blur and timeShift(*2). I take this to mean that the results of readImage should only be computed once for each time.
Applicative isn't powerful enough to capture this. We'll make a new typeclass to represent computations whose output can be divided into multiple identical streams.
-- The class of things where input can be shared and divided among multiple parts
class Applicative f => Divisible f where
  (<\>) :: (f a -> f b) -> f a -> f b
We'll make a transformer that adds this capability to existing Applicatives
-- A transformer that adds input sharing
data LetT f a where
  NoLet :: f a -> LetT f a
  Let   :: LetT f b -> (LetT f b -> LetT f a) -> LetT f a
And provide some utility functions and instances for it
liftLetT :: f a -> LetT f a
liftLetT = NoLet

mapLetT :: (f a -> f b) -> LetT f a -> LetT f b
mapLetT f = go
  where
    go (NoLet a) = NoLet (f a)
    go (Let b g) = Let b (go . g)

instance (Applicative f) => Functor (LetT f) where
  fmap f = mapLetT (fmap f)

-- I haven't checked that these obey the Applicative laws.
instance (Applicative f) => Applicative (LetT f) where
  pure = NoLet . pure
  NoLet f <*> a = mapLetT (f <*>) a
  Let c h <*> a = Let c ((<*> a) . h)

instance (Applicative f) => Divisible (LetT f) where
  (<\>) = flip Let
Image processors
With all of our transformers in place, we can start writing our image processors. At the bottom of our stack we have our ApT from an earlier section
Ap (ApT IO)
The computations need to be able to read the time from the environment, so we'll add a ReaderT for that
ReaderT Int (Ap (ApT IO))
Finally, we'd like to be able to share computations, so we'll add our LetT transformer on top, giving the entire type IP for our image processors
type Image = String
type IP = LetT (ReaderT Int (Ap (ApT IO)))
We'll read images from IO. getLine makes fun interactive examples.
readImage :: Int -> IP Image
readImage n = liftLetT $ ReaderT $ \t -> liftAp . liftIO $ do
  putStrLn $ "[" ++ show n ++ "] reading image for time: " ++ show t
  --getLine
  return $ "|image [" ++ show n ++ "] for time: " ++ show t ++ "|"
We can shift the time of inputs
timeShift :: (Int -> Int) -> IP a -> IP a
timeShift f = mapLetT shift
  where
    shift (ReaderT g) = ReaderT (g . f)
Add multiple images together
addImages :: Applicative f => [f Image] -> f Image
addImages = foldl (liftA2 (++)) (pure [])
And flip images pretending to use some library that's stuck in IO. I couldn't figure out how to blur a string...
inIO :: (IO a -> IO b) -> IP a -> IP b
inIO = mapLetT . mapReaderT . mapApM

flipImage :: IP [a] -> IP [a]
flipImage = inIO flip'
  where
    flip' ma = do
      a <- ma
      putStrLn "flipping"
      return . reverse $ a
Interpreting LetT
Our LetT for sharing results is at the top of our transformer stack. We'll need to interpret it to get at the computations underneath. To interpret LetT we will need a way to share results in IO, which memoize provides, and an interpreter that removes the LetT transformer from the top of the stack.
To share computations we need to store them somewhere. This memoizes an IO computation in IO, making sure it happens only once, even across multiple threads.
memoize :: (Ord k) => (k -> IO a) -> IO (k -> IO a)
memoize definition = do
  cache <- newMVar Map.empty
  let populateCache k map = do
        case Map.lookup k map of
          Just a  -> return (map, a)
          Nothing -> do
            a <- definition k
            return (Map.insert k a map, a)
  let fromCache k = do
        map <- readMVar cache
        case Map.lookup k map of
          Just a  -> return a
          Nothing -> modifyMVar cache (populateCache k)
  return fromCache
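A quick hypothetical check of the behaviour (demoMemo is an illustrative name): the definition runs once per distinct key, no matter how many times the cached function is called.
demoMemo :: IO ()
demoMemo = do
  f <- memoize (\k -> do
         putStrLn ("computing " ++ show k)
         return (k * 2 :: Int))
  mapM_ (\k -> f k >>= print) [1, 1, 2]
  -- "computing 1" and "computing 2" are each printed once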
In order to interpret a Let, we need an evaluator for the underlying ApT IO to incorporate into the definitions of the Let bindings. Since the result of a computation depends on the environment read from the ReaderT, we will incorporate dealing with the ReaderT into this step. A more sophisticated approach would use transformer classes, but transformer classes for Applicative are a topic for a different question.
compileIP :: (forall x. ApT IO x -> IO x) -> IP a -> IO (Int -> ApT IO a)
compileIP eval (NoLet (ReaderT f)) = return (stuffAp . f)
compileIP eval (Let b lf) = do
  cb <- compileIP eval b
  mb <- memoize (eval . cb)
  compileIP eval . lf . NoLet $ ReaderT (liftAp . lift . mb)
Interpreting ApT
Our interpreter uses the following State to avoid needing to peek inside ApT, FreeT, and FreeF all the time.
data State m a where
  InPure :: a -> State m a
  InAp   :: State m b -> State m (b -> State m a) -> State m a
  InM    :: m a -> State m a

instance Functor m => Functor (State m) where
  fmap f (InPure a)  = InPure (f a)
  fmap f (InAp b sa) = InAp b (fmap (fmap (fmap f)) sa)
  fmap f (InM m)     = InM (fmap f m)
Interpreting Ap is harder than it looks. The goal is to take data that's in Ap.Pure and put it in InPure, and data that's in Ap and put it in InAp. interpretAp actually needs to call itself with a larger type each time it goes into a deeper Ap; the function keeps picking up another argument. The first argument t provides a way to simplify these otherwise exploding types.
interpretAp :: (Functor m) => (a -> State m b) -> Ap m a -> State m b
interpretAp t (Ap.Pure a) = t a
interpretAp t (Ap mb ap)  = InAp sb sf
  where
    sb = InM mb
    sf = interpretAp (InPure . (t .)) ap
interpretApT gets data out of ApT, FreeT, and FreeF and into State m
interpretApT :: (Functor m, Monad m) => ApT m a -> m (State (ApT m) a)
interpretApT = fmap inAp . runApT
  where
    inAp (Pure a)  = InPure a
    inAp (Free ap) = interpretAp (InM . ApT) ap
With these simple interpreting pieces we can make strategies for interpreting results. Each strategy is a function from the interpreter's State to a new State, with possible side effects happening along the way. The order in which a strategy chooses to execute side effects determines the order of the side effects. We'll make two example strategies.
The first strategy performs only one step on everything that's ready to be computed, and combines results when they are ready. This is probably the strategy that you want.
stepFB :: (Functor m, Monad m) => State (ApT m) a -> m (State (ApT m) a)
stepFB (InM ma)   = interpretApT ma
stepFB (InPure a) = return (InPure a)
stepFB (InAp b f) = do
  sf <- stepFB f
  sb <- stepFB b
  case (sf, sb) of
    (InPure f', InPure b') -> return (f' b')
    _                      -> return (InAp sb sf)
This other strategy performs all the calculations as soon as it knows about them. It performs them all in a single pass.
allFB :: (Functor m, Monad m) => State (ApT m) a -> m (State (ApT m) a)
allFB (InM ma)   = interpretApT ma
allFB (InPure a) = return (InPure a)
allFB (InAp b f) = do
  sf <- allFB f
  sb <- allFB b
  case (sf, sb) of
    (InPure f', InPure b') -> return (f' b')
    _                      -> allFB (InAp sb sf)
Many, many other strategies are possible.
We can evaluate a strategy by running it until it produces a single result.
untilPure :: (Monad m) => (State f a -> m (State f a)) -> State f a -> m a
untilPure s = go
  where
    go state =
      case state of
        InPure a -> return a
        _        -> s state >>= go
Executing the interpreter
To execute the interpreter, we need some example data. Here are a few interesting examples.
example1 = (\i -> addImages [timeShift (*2) i, flipImage i]) <\> readImage 1
example1' = (\i -> addImages [timeShift (*2) i, flipImage i, flipImage . timeShift (*2) $ i]) <\> readImage 1
example1'' = (\i -> readImage 2) <\> readImage 1
example2 = addImages [timeShift (*2) . flipImage $ readImage 1, flipImage $ readImage 2]
The LetT interpreter needs to know what evaluator to use for bound values, so we'll define our evaluator only once. A single interpretApT kicks off the evaluation by finding the initial State of the interpreter.
evaluator :: ApT IO x -> IO x
evaluator = (>>= untilPure stepFB) . interpretApT
We'll compile example2, which is essentially your example, and run it for time 5.
main = do
  f <- compileIP evaluator example2
  a <- evaluator . f $ 5
  print a
Which produces almost the desired result, with all reads happening before any flips.
[2] reading image for time: 5
[1] reading image for time: 10
flipping
flipping
"|01 :emit rof ]1[ egami||5 :emit rof ]2[ egami|"
A Monad cannot reorder the component steps that make up img1 and img2 in
addImage :: (Monad m) => m [i] -> m [i] -> m [i]
addImage img1 img2 = do
  i1 <- img1
  i2 <- img2
  return $ i1 ++ i2
if there exists any m [i] whose result depends on a side effect. Any MonadIO m has an m [i] whose result depends on a side effect; therefore you cannot reorder the component steps of img1 and img2.
The above desugars to
addImage :: (Monad m) => m [i] -> m [i] -> m [i]
addImage img1 img2 =
  img1 >>=
    (\i1 ->
      img2 >>=
        (\i2 ->
          return (i1 ++ i2)
        )
    )
Let's focus on the first >>= (remembering that (>>=) :: forall a b. m a -> (a -> m b) -> m b). Specialized for our type, this is (>>=) :: m [i] -> ([i] -> m [i]) -> m [i]. If we are going to implement it, we'd have to write something like
(img1 :: m [i]) >>= (f :: [i] -> m [i]) = ...
In order to do anything with f, we need to pass it an [i]. The only correct [i] we have is stuck inside img1 :: m [i]. We need the result of img1 to do anything with f. There are now two possibilities. We either can or can not determine the result of img1 without executing its side effects. We will examine both cases, starting with when we can not.
can not
When we can not determine the result of img1 without executing its side effects, we have only one choice - we must execute img1 and all of its side effects. We now have an [i], but all of img1's side effects have already been executed. There's no way to execute any of the side effects of img2 before some of the side effects of img1, because the side effects of img1 have already happened.
can
If we can determine the result of img1 without executing its side effects, we're in luck. We find the result of img1 and pass that to f, getting a new m [i] holding the result we want. We can now examine the side effects of both img1 and the new m [i] and reorder them (although there's a huge caveat here about the associative law for >>=).
the problem at hand
As this applies to our case, for any MonadIO, there exists the following, whose result can not be determined without executing its side effects, placing us firmly in the can not case where we can not re-order side effects.
counterExample :: (MonadIO m) => m String
counterExample = liftIO getLine
There are also many other counter examples, such as anything like readImage1 or readImage2 that must actually read the image from IO.
