Switching to an event wrapped in monadic context - frp

My specific problem is like this:
Given an Event t [a] and an Event t () (let's say it's a tick event), I want to produce an Event t a, that is, an event that gives me consecutive items from the input list on every occurrence of the tick event.
Reflex has the following helper:
zipListWithEvent :: (Reflex t, MonadHold t m, MonadFix m) => (a -> b -> c) -> [a] -> Event t b -> m (Event t c)
which does exactly what I want, except that it takes a plain list rather than an event as input. Given that I have an Event t [a], I thought I could produce an event containing an event and just switch, but the problem is that zipListWithEvent operates in a monadic context, so what I can get is:
Event t (m (Event t a))
which is something the switch primitive does not accept.
Now, maybe I'm approaching this in the wrong way, so here's my general problem. Given an event producing a list of coordinates and a tick event, I want to produce an event that I can "use" to move an object along the coordinates. Each time tick fires, the position is updated, and each time I update the coordinates list, it begins to produce positions from that new list.

I'm not entirely sure if I understand the semantics of your desired functions correctly, but in the reactive-banana library, I would solve the problem like this:
trickle :: MonadMoment m => Event [a] -> Event () -> m (Event a)
trickle eadd etick = do
    bitems <- accumB [] $ unions   -- 1
        [ flip (++) <$> eadd       -- 2
        , drop 1    <$  etick      -- 3
        ]
    return $ head <$> filterE (not . null) (bitems <@ etick)  -- 4
The code works as follows:
The Behavior bitems records the current list of items.
Items are added when eadd happens, ...
... and one item is removed when etick happens.
The result is an event that happens whenever etick happens, and that contains the first element of the (previously) current list whenever that list is nonempty.
This solution does not seem to require any fancy or intricate reasoning.
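For context, the sketch below shows one way trickle might be wired into an event network. It is only a hedged illustration, not part of the original answer; the handler names addCoords and addTick are made up, and the coordinate type is taken to be Int just so it can be printed:
import Reactive.Banana
import Reactive.Banana.Frameworks
-- hypothetical wiring: one coordinate is printed per tick
network :: AddHandler [Int] -> AddHandler () -> MomentIO ()
network addCoords addTick = do
    eadd  <- fromAddHandler addCoords
    etick <- fromAddHandler addTick
    epos  <- trickle eadd etick
    reactimate $ print <$> epos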

Naming the parts:
coords :: Event t [Coord]
ticks :: Event t ()
If we want to remember the most recent Coord until the next firing of ticks, then we necessarily have to be in some Reflex monad m. This is the monad that allows the transient Event to be persisted.
The core thing you'd like to remember is a stack of Coord. Let's try this:
data Stack a = CS {
    cs_lastPop :: Maybe a
  , cs_stack   :: [a]
  } deriving (Show)

stack0 = CS Nothing []

pop :: Stack a -> Stack a
pop (CS _ [])     = CS Nothing []
pop (CS _ (x:xs)) = CS (Just x) xs

reset :: [a] -> Stack a -> Stack a
reset cs (CS l _) = CS l cs
Nothing reactive there yet; these are just two pure functions that tweak the Stack Coord in the way you describe in your question.
The reflex code to drive this would build a Dynamic t (Stack Coord), by specifying its initial state and all the things that modify it:
coordStack <- foldDyn ($) stack0 (leftmost [
      reset <$> coords
    , pop   <$  ticks
    ])
The leftmost here combines the two events into a single Event of Stack Coord -> Stack Coord functions, which foldDyn ($) applies in turn to stack0 (as long as coords and ticks never occur in the same frame).
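If you then want an event you can "use" to move the object, one rough option (a sketch, not part of the original answer; updated and fmapMaybe are standard Reflex functions) is to project the last popped coordinate out of the Dynamic:
-- fires whenever coordStack changes and a coordinate has been popped at least once
let position = fmapMaybe cs_lastPop (updated coordStack)
Depending on the exact semantics you need, you may prefer to sample the stack only on ticks instead.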
Driving all this in main:
main :: IO ()
main = mainWidget $ do
    t0 <- liftIO getCurrentTime
    -- Some made-up 'coords' data, pretending (Coord ~ Char)
    coordTimes <- tickLossy 2.5 t0
    coords     <- zipListWithEvent (\c _ -> c) ["greg", "TOAST"] coordTimes
    ticks      <- tickLossy 1 t0
    coordStack <- foldDyn ($) stack0 (leftmost [
          reset <$> coords
        , pop   <$  ticks
        ])
    display coordStack

Related

How to write function for N-ary tree traversal in Haskell

I need to traverse an N-ary tree and add to each node the number of the visit in a preorder traversal. I have the n-ary tree defined like this:
data NT a = N a [NT a] deriving Show
Example:
If I have following tree:
let ntree = N "eric" [N "lea" [N "kristy" [],N "pedro" [] ,N "rafael" []],N "anna" [],N "bety" []]
I want to transform it to
let ntree = N (1,"eric") [N (2,"lea") [N (3,"kristy") [],N (4,"pedro") [] ,N (5,"rafael") []],N (6,"anna") [],N (7,"bety") []]
"Preordedness" isnt that important.
I want to see how to write a function that passes values between levels, like how to pass number down to successor list and how to pass updated number to parent and go with that number to other branches.
So far I has been able to write functions like this:
traverse :: NT String -> String
traverse (N val [])   = " " ++ val ++ " "
traverse (N val list) = val ++ " " ++ (concat $ map traverse list)
which outputs
"eric lea kristy pedro rafael anna bety "
EDIT: Question is:
How can I write a function
numberNodes :: NT a -> NT (a,Int)
that numbers nodes according to the preorder traversal of the tree?
The hard part for me to understand is passing auxiliary data around; could you please elaborate on that?
In this concrete case it is one Int that means "time", i.e. the order in which I traverse the tree.
First Attempt: Hard Work
For the case of n-ary trees, there are three things going on: numbering elements, numbering trees, and numbering lists of trees. It would help to treat them separately. Types first:
aNumber   :: a     -- thing to number
          -> Int   -- number to start from
          -> ( (a, Int)  -- numbered thing
             , Int       -- next available number afterwards
             )

ntNumber  :: NT a  -- thing to number
          -> Int   -- number to start from
          -> ( NT (a, Int)  -- numbered thing
             , Int          -- next available number afterwards
             )

ntsNumber :: [NT a]  -- thing to number
          -> Int     -- number to start from
          -> ( [NT (a, Int)]  -- numbered thing
             , Int            -- next available number afterwards
             )
Notice that all three types share the same pattern. When you see that there is a pattern that you are following, apparently by coincidence, you know you have an opportunity to learn something. But let's press on for now and learn later.
Numbering an element is easy: copy the starting number into the output and return its successor as the next available.
aNumber a i = ((a, i), i + 1)
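For instance (a quick GHCi check, not in the original answer):
> aNumber "eric" 0
(("eric",0),1)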
For the other two, the pattern (there's that word again) is
split the input into its top-level components
number each component in turn, threading the numbers through
It's easy to do the first with pattern matching (inspecting the data visually) and the second with where clauses (grabbing the two parts of the output).
For trees, a top level split gives us two components: an element and a list. In the where clause, we call the appropriate numbering functions as directed by those types. In each case, the "thing" output tells us what to put in place of the "thing" input. Meanwhile, we thread the numbers through, so the starting number for the whole is the starting number for the first component, the "next" number for the first component starts the second, and the "next" number from the second is the "next" number for the whole.
ntNumber (N a ants) i0 = (N ai aints, i2) where
    (ai,    i1) = aNumber   a    i0
    (aints, i2) = ntsNumber ants i1
For lists, we have two possibilities. An empty list has no components, so we return it directly without using any more numbers. A "cons" has two components; we do exactly as we did before, using the appropriate numbering functions as directed by the type.
ntsNumber []           i  = ([], i)
ntsNumber (ant : ants) i0 = (aint : aints, i2) where
    (aint,  i1) = ntNumber  ant  i0
    (aints, i2) = ntsNumber ants i1
Let's give it a go.
> let ntree = N "eric" [N "lea" [N "kristy" [],N "pedro" [] ,N "rafael" []],N "anna" [],N "bety" []]
> ntNumber ntree 0
(N ("eric",0) [N ("lea",1) [N ("kristy",2) [],N ("pedro",3) [],N ("rafael",4) []],N ("anna",5) [],N ("bety",6) []],7)
So we're there. But are we happy? Well, I'm not. I have the annoying sensation that I wrote pretty much the same type three times and pretty much the same program twice. And if I wanted to do more element-numbering for differently organised data (e.g., your binary trees), I'd have to write the same thing again and again. Repetitive patterns in Haskell code are always missed opportunities: it's important to develop a sense of self-criticism and ask whether there's a neater way.
Second Attempt: Numbering and Threading
Two of the repetitive patterns we saw, above, are
1. the similarity of the types,
2. the similarity of the way the numbers get threaded.
If you match up the types to see what's in common, you'll notice they're all
input -> Int -> (output, Int)
for different inputs and outputs. Let's give the largest common component a name.
type Numbering output = Int -> (output, Int)
Now our three types are
aNumber :: a -> Numbering (a, Int)
ntNumber :: NT a -> Numbering (NT (a, Int))
ntsNumber :: [NT a] -> Numbering [NT (a, Int)]
You often see such types in Haskell:
input -> DoingStuffToGet output
Now, to deal with the threading, we can build some helpful tools to work with and combine Numbering operations. To see which tools we need, look at how we combine the outputs after we've numbered the components. The "thing" parts of the outputs are always built by applying some functions which don't get numbered (data constructors, usually) to some "thing" outputs from numberings.
To deal with the functions, we can build a gadget that looks a lot like our [] case, where no actual numbering was needed.
steady :: thing -> Numbering thing
steady x i = (x, i)
Don't be put off by the way the type makes it look as if steady has only one argument: remember that Numbering thing abbreviates a function type, so there really is another -> in there. We get
steady [] :: Numbering [a]
steady [] i = ([], i)
just like in the first line of ntsNumber.
But what about the other constructors, N and (:)? Ask ghci.
> :t steady N
steady N :: Numbering (a -> [NT a] -> NT a)
> :t steady (:)
steady (:) :: Numbering (a -> [a] -> [a])
We get numbering operations with functions as outputs, and we want to generate the arguments to those functions by more numbering operations, producing one big overall numbering operation with the numbers threaded through. One step of that process is to feed a numbering-generated function one numbering-generated input. I'll define that as an infix operator.
($$) :: Numbering (a -> b) -> Numbering a -> Numbering b
infixl 2 $$
Compare with the type of the explicit application operator, $
> :t ($)
($) :: (a -> b) -> a -> b
This $$ operator is "application for numberings". If we can get it right, our code becomes
ntNumber :: NT a -> Numbering (NT (a, Int))
ntNumber (N a ants) i = (steady N $$ aNumber a $$ ntsNumber ants) i
ntsNumber :: [NT a] -> Numbering [NT (a, Int)]
ntsNumber [] i = steady [] i
ntsNumber (ant : ants) i = (steady (:) $$ ntNumber ant $$ ntsNumber ants) i
with aNumber as it was (for the moment). This code just does the data reconstruction, plugging together the constructors and the numbering processes for the components. We had better give the definition of $$ and make sure it gets the threading right.
($$) :: Numbering (a -> b) -> Numbering a -> Numbering b
(fn $$ an) i0 = (f a, i2) where
    (f, i1) = fn i0
    (a, i2) = an i1
Here, our old threading pattern gets done once. Each of fn and an is a function expecting a starting number, and the whole of fn $$ an is a function, which gets the starting number i0. We thread the numbers through, collecting first the function, then the argument. We then do the actual application and hand back the final "next" number.
Now, notice that in every line of code, the i input is fed in as the argument to a numbering process. We can simplify this code by just talking about the processes, not the numbers.
ntNumber :: NT a -> Numbering (NT (a, Int))
ntNumber (N a ants) = steady N $$ aNumber a $$ ntsNumber ants
ntsNumber :: [NT a] -> Numbering [NT (a, Int)]
ntsNumber [] = steady []
ntsNumber (ant : ants) = steady (:) $$ ntNumber ant $$ ntsNumber ants
One way to read this code is to filter out all the Numbering, steady and $$ uses.
ntNumber :: NT a -> ......... (NT (a, Int))
ntNumber (N a ants) = ...... N .. (aNumber a) .. (ntsNumber ants)
ntsNumber :: [NT a] -> ......... [NT (a, Int)]
ntsNumber [] = ...... []
ntsNumber (ant : ants) = ...... (:) .. (ntNumber ant) .. (ntsNumber ants)
and you see it just looks like a preorder traversal, reconstructing the original data structure after processing the elements. We're doing the right thing with the values, provided steady and $$ are correctly combining the processes.
We could try to do the same for aNumber
aNumber :: a -> Numbering (a, Int)
aNumber a = steady (,) $$ steady a $$ ????
but the ???? is where we actually need the number. We could build a numbering process that fits in that hole: a numbering process that issues the next number.
next :: Numbering Int
next i = (i, i + 1)
That's the essence of numbering: the "thing" output is the number to be used now (which is the starting number), and the "next" number output is the one after. We may write
aNumber a = steady (,) $$ steady a $$ next
which simplifies to
aNumber a = steady ((,) a) $$ next
In our filtered view, that's
aNumber a = ...... ((,) a) .. next
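As a quick sanity check in GHCi (not part of the original answer, assuming the definitions above and infixl 2 $$):
> (steady (,) $$ next $$ next) 0
((0,1),2)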
What we've done is to bottle the idea of a "numbering process" and we've built the right tools to do ordinary functional programming with those processes. The threading pattern turns into the definitions of steady and $$.
Numbering is not the only thing that works this way. Try this...
> :info Applicative
class Functor f => Applicative (f :: * -> *) where
pure :: a -> f a
(<*>) :: f (a -> b) -> f a -> f b
...and you also get some more stuff. I just want to draw attention to the types of pure and <*>. They're a lot like steady and $$, but they are not just for Numbering. Applicative is the type class for every kind of process which works that way. I'm not saying "learn Applicative now!", just suggesting a direction of travel.
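To make the connection concrete, here is a sketch (not in the original answer) of wrapping Numbering in a newtype so it really can be given those instances; pure plays the role of steady and <*> the role of $$:
-- the same Int -> (a, Int) shape, wrapped so we can write class instances
newtype Numbering' a = Numbering' { runNumbering :: Int -> (a, Int) }

instance Functor Numbering' where
    fmap f (Numbering' g) = Numbering' $ \i -> let (a, i') = g i in (f a, i')

instance Applicative Numbering' where
    pure x = Numbering' $ \i -> (x, i)                     -- compare 'steady'
    Numbering' fn <*> Numbering' an = Numbering' $ \i0 ->  -- compare '($$)'
        let (f, i1) = fn i0
            (a, i2) = an i1
        in  (f a, i2)
With that wrapper, ntNumber could be written against pure and <*> instead of steady and $$.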
Third Attempt: Type-Directed Numbering
So far, our solution is directed towards one particular data structure, NT a, with [NT a] showing up as an auxiliary notion because it's used in NT a. We can make the whole thing a bit more plug-and-play if we focus on one layer of the type at a time. We defined numbering a list of trees in terms of numbering trees. In general, we know how to number a list of stuff if we know how to number each item of stuff.
If we know how to number an a to get b, we should be able to number a list of a to get a list of b. We can abstract over "how to process each item".
listNumber :: (a -> Numbering b) -> [a] -> Numbering [b]
listNumber na [] = steady []
listNumber na (a : as) = steady (:) $$ na a $$ listNumber na as
and now our old list-of-trees-numbering function becomes
ntsNumber :: [NT a] -> Numbering [NT (a, Int)]
ntsNumber = listNumber ntNumber
which is hardly worth naming. We can just write
ntNumber :: NT a -> Numbering (NT (a, Int))
ntNumber (N a ants) = steady N $$ aNumber a $$ listNumber ntNumber ants
We can play the same game for the trees themselves. If you know how to number stuff, you know how to number a tree of stuff.
ntNumber' :: (a -> Numbering b) -> NT a -> Numbering (NT b)
ntNumber' na (N a ants) = steady N $$ na a $$ listNumber (ntNumber' na) ants
Now we can do things like this
myTree :: NT [String]
myTree = N ["a", "b", "c"] [N ["d", "e"] [], N ["f"] []]
> ntNumber' (listNumber aNumber) myTree 0
(N [("a",0),("b",1),("c",2)] [N [("d",3),("e",4)] [],N [("f",5)] []],6)
Here, the node data is now itself a list of things, but we've been able to number those things individually. Our equipment is more adaptable because each component aligns with one layer of the type.
Now, try this:
> :t traverse
traverse :: (Applicative f, Traversable t) => (a -> f b) -> t a -> f (t b)
It's an awful lot like the thing we just did, where f is Numbering and t is sometimes lists and sometimes trees.
The Traversable class captures what it means to be a type-former that lets you thread some sort of process through the stored elements. Again, the pattern you're using is very common and has been anticipated. Learning to use traverse saves a lot of work.
Eventually...
...you'll learn that a thing to do the job of Numbering already exists in the library: it's called State Int and it belongs to the Monad class, which means it must also be in the Applicative class. To get hold of it,
import Control.Monad.State
and the operation which kicks off a stateful process with its initial state, like our feeding-in of 0, is this thing:
> :t evalState
evalState :: State s a -> s -> a
Our next operation becomes
next' :: State Int Int
next' = get <* modify (1+)
where get is the process that accesses the state, modify makes a process that changes the state, and <* means "but also do".
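A quick check of next' (not part of the original answer; traverse is used here only to run the action once per element of "abc"):
> runState next' 0
(0,1)
> evalState (traverse (const next') "abc") 0
[0,1,2]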
If you start your file with the language extension pragma
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}
you can declare your datatype like this
data NT a = N a [NT a] deriving (Show, Functor, Foldable, Traversable)
and Haskell will write traverse for you.
Your program then becomes one line...
evalState (traverse (\ a -> pure ((,) a) <*> get <* modify (1+)) ntree) 0
-- ^ how to process one element ^^^^^^^^^^^^^^^
-- ^ how to process an entire tree of elements ^^^^^^^^^
-- ^ processing your particular tree ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-- ^ kicking off the process with a starting number of 0 ^^^^^^^^^^^^^^^^
...but the journey to that one line involves a lot of "bottling the pattern" steps, which takes some (hopefully rewarding) learning.
I'll update this answer as soon as I get some progress.
Right now I have reduced the problem from an n-ary tree to a binary tree.
data T a = Leaf a | N (T a) a (T a) deriving Show
numberNodes :: T a -> T (a,Int)
numberNodes tree = snd $ numberNodes2 tree 0

numberNodes2 :: T a -> Int -> (Int, T (a,Int))
numberNodes2 (Leaf a) time = (time, Leaf (a,time))
numberNodes2 (N left nodeVal right) time = (rightTime, N leftTree (nodeVal,time) rightTree)
    where (leftTime,  leftTree)  = numberNodes2 left  (time+1)
          (rightTime, rightTree) = numberNodes2 right (leftTime+1)
The function numberNodes turns this tree:
let bt = N (N (Leaf "anna" ) "leo" (Leaf "laura")) "eric" (N (Leaf "john") "joe" (Leaf "eddie"))
into the following tree:
N (N (Leaf ("anna",2)) ("leo",1) (Leaf ("laura",3))) ("eric",0) (N (Leaf ("john",5)) ("joe",4) (Leaf ("eddie",6)))
And now I just need to rewrite it for the n-ary tree... (which I don't know how to do; any hints?)
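For reference, a direct adaptation of the same manual threading to the n-ary type might look like the sketch below (not part of the original post; it uses a "next available number" convention rather than "last used time", and a helper for the list of children):
numberNT :: NT a -> NT (a, Int)
numberNT t = fst (go t 0)
  where
    -- number one node, then thread the counter through its children
    go (N a children) i = (N (a, i) children', i')
      where (children', i') = goList children (i + 1)
    -- number the subtrees left to right, threading the counter
    goList []       i = ([], i)
    goList (c : cs) i = (c' : cs', i'')
      where
        (c',  i')  = go c i
        (cs', i'') = goList cs i'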
This answer by @pigworker is excellent, and I learned lots from it.
However, I believe we can use mapAccumL from Data.Traversable to achieve a very similar behaviour:
{-# LANGUAGE DeriveTraversable #-}
import Data.Traversable
import Data.Tuple
-- original data type from the question
data NT a = N a [NT a]
deriving (Show, Functor, Foldable, Traversable)
-- additional type from #pigworker's answer
type Numbering output = Int -> (output, Int)
-- compare this to signature of ntNumber
-- swap added to match the signature
ntNumberSimple :: NT a -> Numbering (NT (a, Int))
ntNumberSimple t n = swap $ mapAccumL func n t
  where
    func i x = (i+1, (x, i))
I believe that mapAccumL is using the very same State monad under the hood, but at the very least it's completely hidden from the caller.
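As a quick check (not part of the original answer, using the ntree value from the question), the result should agree with the output shown earlier:
> ntNumberSimple ntree 0
(N ("eric",0) [N ("lea",1) [N ("kristy",2) [],N ("pedro",3) [],N ("rafael",4) []],N ("anna",5) [],N ("bety",6) []],7)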

Haskell any way to improve this code

Hey, I have implemented this code segment as a move-ordering system for an alpha-beta pruning function. It does speed up my code a little, but when I profiled it I saw it was very clunky.
move_ord [] (primary_ord, secondary_ord) = primary_ord ++ secondary_ord
move_ord (y:ys) (primary_ord, secondary_ord) = case no_of_pebbles state y of
    0  -> move_ord ys (primary_ord, secondary_ord)
    13 -> move_ord ys (y : primary_ord, secondary_ord)
    x | 7 - y == x -> move_ord ys (y : primary_ord, secondary_ord)
      | otherwise  -> move_ord ys (primary_ord, y : secondary_ord)
It is meant to place moves with specific pebble values (13, or 7 - y == x) at the front of the list, while also filtering out illegal moves of 0 pebbles.
Pebbles are stored as Int. y is an Int.
Thank you in advance.
Does the order in which the elements of primary_ord appear matter?
No it does not. I am ordering branches to check first for alpha-beta cutoffs. The cases I outlined have a higher probability of triggering a pruning on the next branch evaluated. Though since I have no other information they can be in any order as long as they appear in front of the other cases.
In that case, you should deliver the good ones as soon as you find them, and only defer delivering the bad ones.
If move_ord is - except in the recursive calls - only called with ([],[]) as the second argument, I'd recommend
move_ord = go []
  where
    go acc (y:ys) = case no_of_pebbles state y of
        0  -> go acc ys
        13 -> y : go acc ys
        x | x == 7 - y -> y : go acc ys
          | otherwise  -> go (y:acc) ys
    go acc _ = acc
Thus a) you can run in smaller space (unless the consumer accumulates the entire result) and b) the consumer need not wait for the entire list to be traversed before it can start working.
Of course, if there are only very few or even no "good" ys, it may not make a difference, and if the consumer needs the entire list before it can do anything, it won't either. But usually, that should improve matters somewhat. Otherwise, there is not much that can be done in this function; no_of_pebbles would be what uses the most resources here.
If move_ord can be called with non-empty primary_ord or secondary_ord, use a wrapper
move_ord xs (primary, secondary) = primary ++ go secondary xs
  where
    go acc ... -- as above
I'm assuming that move_ord starts out being called as move_ord ys ([], []). We then have a streaming filter pattern on Either.
import Data.Either
sorter :: (a -> Bool) -> [a] -> [Either a a]
sorter p = map go where go x = if p x then Left x else Right x
then,
uncurry (++)
    . partitionEithers
    . sorter (\x -> no_of_pebbles x == 13 || 7 - x == no_of_pebbles x)
    . filter (\x -> no_of_pebbles x /= 0)
Which is still a little ugly because we keep computing no_of_pebbles in various places. This might be alright for documentation purposes, but we could also precompute no_of_pebbles.
uncurry (++)
    . partitionEithers
    . sorter (\(x, num) -> num == 13 || 7 - x == num)
    . filter ((/= 0) . snd)
    . map (\x -> (x, no_of_pebbles x))
Specialization.
Because you use literal constants, the compiler will infer the default Integer type, not Int.
So you need to specialize the type signature of your function, like so:
move_ord :: [Int] -> ([Int], [Int]) -> ([Int], [Int])
Memoization.
Your input list can contain duplicate elements, so two strategies are possible.
Memoize your calls to no_of_pebbles, which will save you extra computation, or sort your input list and remove the duplicates before processing it.
Return a tuple.
You accumulate the response as a tuple, so maybe you should return it as is.
Merging its two elements inside this function seems out of scope;
that should be managed later in your code. It is also good to know that keeping lists in a pair like this is a common pattern (compare difference lists, as in the dlist package).

Event handling in Netwire compared to conventional FRP frameworks

Most Haskell FRP frameworks like AFRP, Yampa and reactive-banana distinguish between continuous time-varying functions and discrete ones. Usually they call them behaviors and events.
One exception is Netwire, which uses an inhibition monoid to model events. What are pros and cons of such an approach?
In particular, I'm interested in application of FRP to robot controlling. For example, this paper http://haskell.cs.yale.edu/?post_type=publication&p=182 show a way to encode a task and HSM abstractions in FRP using events. Can this be directly translated to Netwire?
The advantage of events as potentially inhibited signals is that it allows you to encode even complicated reactive formulas very concisely. Imagine a switch that displays "yes" when pressed and "no" otherwise:
"yes" . switchPressed <|> "no"
The idea is that switchPressed acts like the identity wire if its corresponding event occurs and inhibits otherwise. That's where <|> comes in. If the first wire inhibits, it tries the second. Here is a hypothetical robot arm controlled by two buttons (left and right):
robotArm = integral_ 0 . direction

direction =
    ((-1) . leftPressed <|> 0) +
    (1    . rightPressed <|> 0)
While the robot arm is hypothetical, this code is not. It's really the way you would write this in Netwire.
After some trials I've implemented the behavior I needed. Basically, you write a custom inhibitor type that captures the notion of events you need. In my case it was
data Inhibitor = Done | Timeout | Interrupt deriving Show
Done means normal finishing, and the remaining constructors signal some kind of error.
After that, you write any custom combinators you need. In my case I needed a way to stop computations and signal an error further:
timeout deadline w | deadline <= 0 = inhibit Timeout
                   | otherwise = mkGen $ \dt a -> do
                       res <- stepWire w dt a
                       case res of
                           (Right o, w') -> return (Right o, timeout (deadline - dt) w')
                           (Left e, _)   -> return (Left e, inhibit e)
This is a variant of switchBy which allows you to change the wire once. Note that it passes on the inhibition signal of the new wire:
switchOn new w0 =
    mkGen $ \dt x' ->
        let select w' = do
                (mx, w) <- stepWire w' dt x'
                case mx of
                    Left ex -> stepWire (new ex) dt x'
                    Right x -> return (Right x, switchOn new w)
        in select w0
And this is a variant of (-->) which captures the idea of interrupting the task chain.
infixr 1 ~>
w1 ~> w2 = switchOn (\e -> case e of
        Done -> w2
        _    -> inhibit e
    ) w1

Haskell: partially drop lazy evaluated results

I have a very large decision tree. It is used as follows:
-- once per application start
t :: Tree
t = buildDecisionTree
-- done several times
makeDecision :: Something -> Decision
makeDecision something = search t something
This decision tree is way too large to fit in memory. But, thanks to lazy evaluation, it is only partially evaluated.
The problem is, that there are scenarios where all possible decisions are tried causing the whole tree to be evaluated. This is not going to terminate, but should not cause a memory overflow either. Further, if this process is aborted, the memory usage does not decrease, as a huge subtree is still evaluated already.
A solution would be to reevaluate the tree every time makeDecision is called, but this would lose the benefits of caching decisions and significantly slow down makeDecision.
I would like to go a middle course. In particular it is very common in my application to do successive decisions with common path prefix in the tree. So I would like to cache the last used path but drop the others, causing them to reevaluate the next time they are used. How can I do this in Haskell?
It is not possible in pure Haskell, see the question Can a thunk be duplicated to improve memory performance? (as pointed out by @shang). You can, however, do this with IO.
We start with the module header and list only the type and the functions that should make this module (which will use unsafePerformIO) safe. It is also possible to do this without unsafePerformIO, but that would mean that the user has to keep more of their code in IO.
{-# LANGUAGE ExistentialQuantification #-}
module ReEval (ReEval, newReEval, readReEval, resetReEval) where
import Data.IORef
import System.IO.Unsafe
We start by defining a data type that stores a value in a way that prevents all sharing, by keeping the function and the argument away from each other, and only applying the function when we want the value. Note that the value returned by unsharedValue can be shared, but not with the return value of other invocations (assuming the function is doing something non-trivial):
data Unshared a = forall b. Unshared (b -> a) b
unsharedValue :: Unshared a -> a
unsharedValue (Unshared f x) = f x
Now we define our data type of resettable computations. We need to store the computation and the current value. The latter is stored in an IORef, as we want to be able to reset it.
data ReEval a = ReEval {
    calculation  :: Unshared a,
    currentValue :: IORef a
  }
To wrap a value in a ReEval box, we need to have a function and an argument. Why not just a -> ReEval a? Because then there would be no way to prevent the parameter from being shared.
newReEval :: (b -> a) -> b -> ReEval a
newReEval f x = unsafePerformIO $ do
    let c = Unshared f x
    ref <- newIORef (unsharedValue c)
    return $ ReEval c ref
Reading is simple: just get the value from the IORef. This use of unsafePerformIO is safe because we will always get the value of unsharedValue c, although a different “copy” of it.
readReEval :: ReEval a -> a
readReEval r = unsafePerformIO $ readIORef (currentValue r)
And finally the resetting. I left it in the IO monad, not because it would be any less safe than the other functions to wrap in unsafePerformIO, but because this is the easiest way to give the user control over when the resetting actually happens. You don’t want to risk having all your calls to resetReEval lazily delayed until your memory has run out, or even optimized away because there is no return value to use.
resetReEval :: ReEval a -> IO ()
resetReEval r = writeIORef (currentValue r) (unsharedValue (calculation r))
This is the end of the module. Here is example code:
import Debug.Trace
import ReEval
main = do
    let func a = trace ("func " ++ show a) negate a
    let l = [ newReEval func n | n <- [1..5] ]
    print (map readReEval l)
    print (map readReEval l)
    mapM_ resetReEval l
    print (map readReEval l)
And here you can see that it does what is expected:
$ runhaskell test.hs
func 1
func 2
func 3
func 4
func 5
[-1,-2,-3,-4,-5]
[-1,-2,-3,-4,-5]
func 1
func 2
func 3
func 4
func 5
[-1,-2,-3,-4,-5]

Haskell: FIFO queue algorithm complexity

This is my attempt at a FIFO queue:
type Queue a = [a] -> [a]
empty :: Queue a
empty = id
remove :: Int -> Queue a -> ([a], Queue a)
remove n queue = (take n (queue []), \x -> drop n (queue x))
add :: [a] -> Queue a -> Queue a
add elems queue = (\x -> queue (elems ++ x))
empty creates an empty queue, remove takes the first n elements of the queue and returns the rest of the queue as the second element of the tuple, and add adds the list elems to the queue.
Will this add/remove 1 element in O(1) time and n elements in O(n) time?
What you have implemented effectively amounts to difference lists. (See: dlist.)
Difference lists allow for cheap appends, but unfortunately your removal will take linear time. It becomes more clear if we rewrite your code slightly:
type Queue a = [a] -> [a]
empty :: Queue a
empty = id
toList :: Queue a -> [a]
toList q = q []
fromList :: [a] -> Queue a
fromList = (++)
remove :: Int -> Queue a -> ([a], Queue a)
remove n q = (xs, fromList ys)
  where
    (xs, ys) = splitAt n (toList q)
add :: [a] -> Queue a -> Queue a
add xs q = (++ xs) . q
Note that I have made the conversion to and from lists a bit more explicit than it was in your code. You clearly see that the core of your removal code gets bracketed between toList and fromList.
Well, sidestepping your question somewhat, the classic purely functional implementation of a FIFO queue is as a pair of lists, one for the "front" and one for the "back." You enqueue elements by adding them as the head of the back list, and dequeue by taking the head of the front list; if the front list is empty, you "rotate" the queue by reversing the back list and swapping that with the empty front list. In code:
import Control.Monad
import Data.List
import Data.Maybe
data FIFO a = FIFO [a] [a]
    deriving Show
empty :: FIFO a
empty = FIFO [] []
isEmpty :: FIFO a -> Bool
isEmpty (FIFO [] []) = True
isEmpty _ = False
enqueue :: a -> FIFO a -> FIFO a
enqueue x (FIFO front back) = FIFO front (x:back)
-- | Remove the head off the queue. My type's different from yours
-- because I use Maybe to handle the case where somebody tries to
-- dequeue off an empty FIFO.
dequeue :: FIFO a -> Maybe (a, FIFO a)
dequeue queue = case queue of
    FIFO [] []   -> Nothing
    FIFO (x:f) b -> Just (x, FIFO f b)
    _            -> dequeue (rotate queue)
  where rotate (FIFO [] back) = FIFO (reverse back) []
-- | Elements exit the queue in the order they appear in the list.
fromList :: [a] -> FIFO a
fromList xs = FIFO xs []
-- | Elements appear in the result list in the order they exit the queue.
toList :: FIFO a -> [a]
toList = unfoldr dequeue
That's the classic implementation. Now your operations can be written in terms of that:
-- | Enqueue multiple elements. Elements exit the queue in the order
-- they appear in xs.
add :: [a] -> FIFO a -> FIFO a
add xs q = foldl' (flip enqueue) q xs
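For example (a quick check, not part of the original answer, using the definitions above):
> toList (add [4,5] (fromList [1,2,3]))
[1,2,3,4,5]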
To write remove in terms of dequeue, you need to handle all of those intermediate FIFOs from the (a, FIFO a) result of dequeue. One way to do that is to use the State monad:
import Control.Monad.State
-- | Remove n elements from the queue. My result type is different
-- from yours, again, because I handle the empty FIFO case. If you
-- try to remove too many elements, you get a bunch of Nothings at
-- the end of your list.
remove :: Int -> FIFO a -> ([Maybe a], FIFO a)
remove n q = runState (removeM n) q
-- | State monad action to dequeue n elements from the state queue.
removeM :: Int -> State (FIFO a) [Maybe a]
removeM n = replicateM n dequeueM
-- | State monad action to dequeue an element from the state queue.
dequeueM :: State (FIFO a) (Maybe a)
dequeueM = do
    q <- get
    case dequeue q of
        Just (x, q') -> put q' >> return (Just x)
        Nothing      -> return Nothing
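And a quick check of remove (not part of the original answer):
> remove 2 (fromList "abc")
([Just 'a',Just 'b'],FIFO "c" "")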
I was looking for a FIFO queue that's faster than taking a list and reversing it. Stefan's add isn't performant (O(n)), so here's what's worked for me after benchmarking:
add :: a -> Queue a -> Queue a
add x f = f . (x:)
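As a quick check against the question's empty and remove (not part of the original answer; remove's second component is a function and has no Show instance, so only the removed elements are shown):
> fst (remove 2 (add 3 (add 2 (add 1 empty))))
[1,2]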
