I’m trying to get a very quick and dirty animated display of some data produced using Haskell. The simplest thing to try seems to be ASCII art — in other words, something along the lines of:
type Frame = [[Char]] -- each frame is given as an array of characters
type Animation = [Frame]
display :: Animation -> IO ()
display = ??
How can I best do this?
The part I can’t figure out at all is how to ensure a minimal pause between frames; the rest is straightforward using putStrLn together with clearScreen from the ansi-terminal package, found via this answer.
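For concreteness, here is roughly what I have for the drawing part (a minimal sketch using clearScreen from ansi-terminal and the types above); the missing piece is the pause between frames:
import System.Console.ANSI (clearScreen)
-- Draw one frame: clear the terminal, then print each row.
displayFrame :: Frame -> IO ()
displayFrame frame = clearScreen >> mapM_ putStrLn frame
-- This runs flat out with no pause between frames, which is the part I can't solve.
display :: Animation -> IO ()
display = mapM_ displayFrame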
Well, here's a rough sketch of what I'd do:
import Graphics.UI.SDL.Time (getTicks)
import Control.Concurrent (threadDelay)
type Frame = [[Char]]
type Animation = [Frame]
displayFrame :: Frame -> IO ()
displayFrame = mapM_ putStrLn
timeAction :: IO () -> IO Integer
timeAction act = do t <- getTicks
                    act
                    t' <- getTicks
                    return (fromIntegral $ t' - t)
addDelay :: Integer -> IO () -> IO ()
addDelay hz act = do dt <- timeAction act
                     let delay = calcDelay dt hz
                     threadDelay $ fromInteger delay
calcDelay dt hz = max (frame_usec - dt_usec) 0
  where frame_usec = 1000000 `div` hz
        dt_usec    = dt * 1000
runFrames :: Integer -> Animation -> IO ()
runFrames hz frs = mapM_ (addDelay hz . displayFrame) frs
Obviously I'm using SDL here purely for getTicks, because it's what I've used before. Feel free to replace it with any other function to get the current time.
The first argument to runFrames is--as the name suggests--the frame rate in hertz, i.e., frames per second. The runFrames function first converts each frame into an action that draws it, then gives each to the addDelay function, which checks the time before and after running the action, then sleeps until the frame time has passed.
My own code would look a bit different than this, because I'd generally have a more complicated loop that would do other stuff, e.g., polling SDL for events, doing background processing, passing data to the next iteration, &c. But the basic idea is the same.
Obviously the nice thing about this approach is that, while still being fairly simple, you get a consistent frame rate when possible, with a clear means of specifying the target speed.
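As a concrete (hypothetical) usage example of the definitions above, assuming a working getTicks as noted, a tiny two-frame spinner at 4 frames per second would be:
-- Hypothetical usage of runFrames: an endless two-frame animation at 4 Hz.
spinner :: Animation
spinner = cycle [["|"], ["-"]]

main :: IO ()
main = runFrames 4 spinner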
This builds upon C. A. McCann's answer, which works nicely but is not time-stable in the long run, in particular when the frame rate is not an integer fraction of the tick rate.
import GHC.Word (Word32)
-- import CAMcCann'sAnswer (Frame, Animation, displayFrame, getTicks, threadDelay)
atTick :: IO () -> Word32 -> IO ()
act `atTick` t = do
    t' <- getTicks
    -- Word32 subtraction wraps around, so only sleep if the target tick is still ahead.
    let delay = if t > t' then 1000 * (t - t') else 0
    threadDelay $ fromIntegral delay
    act
runFrames :: Integer -> Animation -> IO ()
runFrames fRate frs = do
    t0 <- getTicks
    mapM_ (\(t,f) -> displayFrame f `atTick` t) $ timecode fRate32 t0 frs
  where timecode ν t0 = zip [ t0 + (1000 * i) `div` ν | i <- [0..] ]
        fRate32 = fromIntegral fRate :: Word32
I have an IO action called action in which I am doing a fairly heavy computation. I am using the IO monad in order to have easy access to random numbers further down in the computation.
I also have the functions below, which replicate the action and take the mean of the results. Because the action takes a fair amount of time to complete, I was wondering what the effect on performance is of doing the sampling in this manner. Would it be better to do the sampling later in the evaluation, so that the parts of the program that are the same for each sample are not repeated, or does the Haskell compiler already optimise this?
samp :: (Fractional b) => Int -> IO b -> IO b
samp n action = do
    samples <- replicateM n action
    return $ mean samples
mean :: (Fractional a) => [a] -> a
mean as = s / (genericLength as)
  where s = sum as
My specific problem is like this:
Given an Event t [a] and an Event t () (let's say it's a tick event), I want to produce an Event t a, that is, an event that gives me consecutive items from the input list, one for every occurrence of the tick event.
Reflex has the following helper:
zipListWithEvent :: (Reflex t, MonadHold t m, MonadFix m) => (a -> b -> c) -> [a] -> Event t b -> m (Event t c)
which does exactly what I want, except that it takes a plain list rather than an event as input. Given that I have an Event t [a], I thought I could produce an event containing an event and just switch, but the problem is that zipListWithEvent operates in a monadic context, so all I can get is:
Event t (m (Event t a))
which is something the switch primitive does not accept.
Now, maybe I'm approaching this in the wrong way, so here's my general problem. Given an event that produces a list of coordinates and a tick event, I want to produce an event that I can "use" to move an object along the coordinates. So each time the tick fires, the position is updated. And each time I update the coordinates list, it begins to produce positions from that new list.
I'm not entirely sure if I understand the semantics of your desired functions correctly, but in the reactive-banana library, I would solve the problem like this:
trickle :: MonadMoment m => Event [a] -> Event () -> m (Event a)
trickle eadd etick = do
    bitems <- accumB [] $ unions                              -- 1
        [ flip (++) <$> eadd                                  -- 2
        , drop 1    <$  etick                                 -- 3
        ]
    return $ head <$> filterE (not . null) (bitems <@ etick)  -- 4
The code works as follows:
1. The Behavior bitems records the current list of items.
2. Items are added when eadd happens, ...
3. ... and one item is removed when etick happens.
4. The result is an event that happens whenever etick happens, and that contains the first element of the (previously) current list whenever that list is nonempty.
This solution does not seem to require any fancy or intricate reasoning.
Naming the parts:
coords :: Event t [Coord]
ticks :: Event t ()
If we want to remember the most recent Coord until the next firing of ticks, then we necessarily have to be in some monad m with the usual Reflex constraints. This is the monad that allows the transient Event to be persisted.
The core thing you'd like to remember is a stack of Coord. Let's try this:
data Stack a = CS {
      cs_lastPop :: Maybe a
    , cs_stack   :: [a]
    } deriving (Show)
stack0 = CS Nothing []
pop :: Stack a -> Stack a
pop (CS _ [] ) = CS Nothing []
pop (CS _ (x:xs)) = CS (Just x) xs
reset :: [a] -> Stack a -> Stack a
reset cs (CS l _) = CS l cs
Nothing reactive there yet; just two functions that tweak the Stack Coord in the way you mention in your question.
The reflex code to drive this would build a Dynamic t (Stack Coord), by specifying its initial state and all the things that modify it:
coordStack <- foldDyn ($) stack0 (leftmost [
      reset <$> coords
    , pop   <$  ticks
    ])
The leftmost here merges the two events of Stack Coord -> Stack Coord functions; foldDyn ($) applies each one in turn to the accumulating state, starting from stack0 (as long as coords and ticks never occur in the same frame).
Driving all this in main:
main :: IO ()
main = mainWidget $ do
    t0 <- liftIO getCurrentTime
    -- Some made-up 'coords' data, pretending (Coord ~ Char)
    coordTimes <- tickLossy 2.5 t0
    coords     <- zipListWithEvent (\c _ -> c) ["greg","TOAST"] coordTimes
    ticks      <- tickLossy 1 t0
    coordStack <- foldDyn ($) stack0 (leftmost [
          reset <$> coords
        , pop   <$  ticks
        ])
    display coordStack
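If you want the current position itself rather than the whole stack, a small sketch (position is my name, not part of the code above) is to fmap over the Dynamic inside the same do block:
    -- The most recently popped coordinate; Nothing until the first tick pops something.
    let position = fmap cs_lastPop coordStack
    display position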
Say I have two pure but unsafe functions, that do the same, but one of them is working on batches, and is asymptotically faster:
f :: Int -> Result -- takes O(1) time
f = unsafePerformIO ...
g :: [Int] -> [Result] -- takes O(log n) time
g = unsafePerformIO ...
A naive implementation:
getUntil :: Int -> [Result]
getUntil 0 = [f 0]
getUntil n = f n : getUntil (n - 1)
switch is the n value where g gets cheaper than f.
getUntil will in practice be called with ever-increasing n, but it might not start at 0. So, since the Haskell runtime can memoize getUntil, performance will be optimal as long as successive calls are less than switch apart. But once the gaps get larger, this implementation is slow.
In an imperative program, I guess I would make a TreeMap (which could quickly be checked for gaps) to cache all calls. On a cache miss, the gap would be filled with the results of g if it was longer than switch, and with f otherwise.
How can this be optimized in Haskell?
I think I am just looking for:
an ordered map filled on-demand using a fill function that would fill all values up to the requested index using one function if the missing range is small, another if it is large
a get operation on the map which returns a list of all lower values up to the requested index. This would result in a function similar to getUntil above (a rough sketch follows below).
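To make that concrete, here is a rough sketch of the kind of cache I am imagining, using Data.IntMap (f, g and switch are stubbed out so the sketch stands alone); I suspect there is a more idiomatic, lazier way to do this:
import qualified Data.IntMap.Strict as IM

type Result = Int

-- Stand-ins so the sketch compiles on its own; the real f, g and switch are the ones above.
f :: Int -> Result
f = id

g :: [Int] -> [Result]
g = map id

switch :: Int
switch = 1000

type Cache = IM.IntMap Result

-- Fill every missing index up to n, using the batch function g when the gap is large and f otherwise.
fillTo :: Int -> Cache -> Cache
fillTo n cache
    | null missing            = cache
    | length missing > switch = IM.union cache (IM.fromList (zip missing (g missing)))
    | otherwise               = IM.union cache (IM.fromList [ (i, f i) | i <- missing ])
  where missing = [ i | i <- [0 .. n], not (IM.member i cache) ]

-- All cached values up to n (filling the cache first), plus the updated cache.
getUpTo :: Int -> Cache -> ([Result], Cache)
getUpTo n cache = (IM.elems below, cache')
  where cache'     = fillTo n cache
        (below, _) = IM.split (n + 1) cache'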
I'll elaborate on my proposal of using map, after some tests I just ran.
import System.IO
import System.IO.Unsafe
import Control.Concurrent
import Control.Monad
switch :: Int
switch = 1000
f :: Int -> Int
f x = unsafePerformIO $ do
    threadDelay $ 500 * x
    putStrLn $ "Calculated from scratch: f(" ++ show x ++ ")"
    return $ 500 * x

g :: Int -> Int
g x = unsafePerformIO $ do
    threadDelay $ x * x `div` 2
    putStrLn $ "Calculated from scratch: g(" ++ show x ++ ")"
    return $ x * x `div` 2
cachedFG :: [Int]
cachedFG = map g [0 .. switch] ++ map f [switch+1 ..]
main :: IO ()
main = forever $ getLine >>= print . (cachedFG !!) . read
… where f, g and switch have the same meaning indicated in the question.
The above program can be compiled as is using GHC. When executed, positive integers can be entered, followed by a newline, and the application will print a value based on the number entered, plus an extra indication of which values are being calculated from scratch.
A short session with this program is:
User: 10000
Program: Calculated from scratch: f(10000)
Program: 5000000
User: 10001
Program: Calculated from scratch: f(10001)
Program: 5000500
User: 10000
Program: 5000000
^C
The program has to be killed/terminated manually.
Notice that the last value entered doesn't show a "calculated from scratch" message. This indicates that the program has the value cached/memoized somewhere. You can try executing this program yourself, but take into account that threadDelay's lag is proportional to the value entered.
The getUntil function then could be implemented using:
getUntil :: Int -> [Int]
getUntil n = take n cachedFG
or:
getUntil :: Int -> [Int]
getUntil = flip take cachedFG
If you don't know the value for switch, you can try evaluating f and g in parallel and use the fastest result, but that's another show.
I have a little Haskell program that uses the Gtk2Hs bindings. One can draw points (small squares) on the program's window by clicking on a DrawingArea:
[...]
image <- builderGetObject gui castToDrawingArea "drawingarea"
p <- widgetGetDrawWindow image
gc <- gcNewWithValues p (newGCValues { foreground = Color 0 0 0,
                                       function = Copy })
on image buttonPressEvent (point p gc)
set image [ widgetCanFocus := True ]
[...]
point :: DrawWindow -> GC -> EventM EButton Bool
point p gc = tryEvent $ do
    (x', y') <- eventCoordinates
    liftIO $ do
        let x = round x'
        let y = round y'
        let relx = x `div` 4
        let rely = y `div` 4
        gcval <- gcGetValues gc
        gcSetValues gc (newGCValues { function = Invert })
        drawRectangle p gc True (relx * 4) (rely * 4) 4 4
        gcSetValues gc gcval
Through trial and error, and after reading the docs on Hackage, I managed to add a button press event to the drawing area, since the widget doesn't provide a signal for this event by default. However, I don't understand the definition and usage of EventM, so I'm afraid I'll have to struggle with the EventM monad if I must add a new event to a widget again. I must say I'm still not proficient enough in Haskell. I somewhat understand how simple monads work, but this one, "type EventM t a = ReaderT (Ptr t) IO a" (defined in Graphics.UI.Gtk.Gdk.EventM), seems a mystery to me.
My question is: Could someone please explain the internals of the EventM monad? For example in the case of "buttonPressEvent :: WidgetClass self => Signal self (EventM EButton Bool)".
I am stuck on a similar problem; it seems that EventM is a ReaderT which reads the EButton and returns a Bool.
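Concretely, here is a sketch (not the library's actual source) of what that definition means: an EventM handler is an IO action that can additionally read a pointer to the raw GDK event struct, which is why liftIO works inside it and why accessors like eventCoordinates are just ReaderT computations over that pointer:
import Control.Monad.Trans.Reader (ReaderT, ask, runReaderT)
import Foreign.Ptr (Ptr)

-- The quoted definition, restated with a primed name to avoid clashing with the real one.
type EventM' t a = ReaderT (Ptr t) IO a

-- Running a handler just supplies the event pointer (a hypothetical helper;
-- gtk2hs does something like this internally when the signal fires).
runEventM' :: EventM' t a -> Ptr t -> IO a
runEventM' = runReaderT

-- An accessor in this style asks for the pointer and would then peek at the
-- relevant struct fields; this sketch merely returns the pointer itself.
eventPtr :: EventM' t (Ptr t)
eventPtr = ask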
Suppose someone makes a program to play chess, or solve sudoku. In this kind of program it makes sense to have a tree structure representing game states.
This tree would be very large, "practically infinite". Which isn't by itself a problem as Haskell supports infinite data structures.
A familiar example of an infinite data structure:
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)
Nodes are only allocated when first used, so the list takes finite memory. One may also iterate over an infinite list as long as no reference to its head is kept, allowing the garbage collector to collect the parts that are no longer needed.
Back to the tree example: suppose one iterates over the tree. The nodes that have been visited cannot be freed if the root of the tree is still needed (for example, in an iterative-deepening search the tree would be traversed several times, so the root needs to be kept).
One possible solution for this problem that I thought of is using an "unmemo-monad".
I'll try to demonstrate what this monad is supposed to do using monadic lists:
import Control.Monad.ListT (ListT) -- cabal install List
import Data.Copointed -- cabal install pointed
import Data.List.Class
import Prelude hiding (enumFromTo)
nums :: ListT Unmemo Int -- What is Unmemo?
nums = enumFromTo 0 1000000
main = print $ div (copoint (foldlL (+) 0 nums)) (copoint (lengthL nums))
Using nums :: [Int], the program would take a lot of memory, as a reference to nums is needed by lengthL nums while it is being iterated over by foldlL (+) 0 nums.
The purpose of Unmemo is to make the runtime not keep the nodes iterated over.
I attempted using ((->) ()) as Unmemo, but it yields the same results as nums :: [Int] does - the program uses a lot of memory, as evident by running it with +RTS -s.
Is there any way to implement Unmemo that does what I want?
Same trick as with a stream -- don't capture the remainder directly, but instead capture a value and a function which yields a remainder. You can add memoization on top of this as necessary.
data UTree a = Leaf a | Branch a (a -> [UTree a])
I'm not in the mood to figure it out precisely at the moment, but this structure arises, I'm sure, naturally as the cofree comonad over a fairly straightforward functor.
Edit
Found it: http://hackage.haskell.org/packages/archive/comonad-transformers/1.6.3/doc/html/Control-Comonad-Trans-Stream.html
Or this is perhaps simpler to understand: http://hackage.haskell.org/packages/archive/streams/0.7.2/doc/html/Data-Stream-Branching.html
In either case, the trick is that your f can be chosen to be something like data N s a = N (s -> (s,[a])) for an appropriate s (s being the type of your state parameter of the stream -- the seed of your unfold, if you will). That might not be exactly correct, but something close should do...
But of course for real work, you can scrap all this and just write the datatype directly as above.
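To make the idea concrete, here is a hedged sketch (the tree and all the names are mine, not from anything above) of building and consuming such a tree; because the children are produced by a function rather than stored, a traversal regenerates them instead of forcing the whole tree to be retained:
-- The UTree from above, repeated so this sketch stands alone.  Children are
-- recomputed from the node's value on each traversal, so walking the tree
-- twice does not pin the whole structure in memory.
data UTree a = Leaf a | Branch a (a -> [UTree a])

-- A hypothetical infinite tree: each node carries n and branches to three children.
numTree :: UTree Integer
numTree = grow 1
  where grow n = Branch n (\m -> [ grow (3 * m + k) | k <- [0, 1, 2] ])

-- Sum all node values down to a given depth, regenerating children as we go.
sumToDepth :: Int -> UTree Integer -> Integer
sumToDepth _ (Leaf x)      = x
sumToDepth 0 (Branch x _)  = x
sumToDepth d (Branch x ks) = x + sum (map (sumToDepth (d - 1)) (ks x))

main :: IO ()
main = print (sumToDepth 10 numTree)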
Edit 2
The below code illustrates how this can prevent sharing. Note that even in the version without sharing, there are humps in the profile indicating that the sum and length calls aren't running in constant space. I'd imagine that we'd need an explicit strict accumulation to knock those down.
{-# LANGUAGE DeriveFunctor #-}
import Data.Stream.Branching(Stream(..))
import qualified Data.Stream.Branching as S
import Control.Arrow
import Control.Applicative
import Data.List
data UM s a = UM (s -> Maybe a) deriving Functor
type UStream s a = Stream (UM s) a
runUM s (UM f) = f s
liftUM x = UM $ const (Just x)
nullUM = UM $ const Nothing
buildUStream :: Int -> Int -> Stream (UM ()) Int
buildUStream start end = S.unfold (\x -> (x, go x)) start
  where go x
          | x < end   = liftUM (x + 1)
          | otherwise = nullUM
sumUS :: Stream (UM ()) Int -> Int
sumUS x = S.head $ S.scanr (\x us -> maybe 0 id (runUM () us) + x) x
lengthUS :: Stream (UM ()) Int -> Int
lengthUS x = S.head $ S.scanr (\x us -> maybe 0 id (runUM () us) + 1) x
sumUS' :: Stream (UM ()) Int -> Int
sumUS' x = last $ usToList $ liftUM $ S.scanl (+) 0 x
lengthUS' :: Stream (UM ()) Int -> Int
lengthUS' x = last $ usToList $ liftUM $ S.scanl (\acc _ -> acc + 1) 0 x
usToList x = unfoldr (\um -> (S.head &&& S.tail) <$> runUM () um) x
maxNum = 1000000
nums = buildUStream 0 maxNum
numsL :: [Int]
numsL = [0..maxNum]
-- All these need to be run with increased stack to avoid an overflow.
-- This generates an hp file with two humps (i.e. the list is not shared)
main = print $ div (fromIntegral $ sumUS' nums) (fromIntegral $ lengthUS' nums)
-- This generates an hp file as above, and uses somewhat less memory, at the cost of
-- an increased number of GCs. -H helps a lot with that.
-- main = print $ div (fromIntegral $ sumUS nums) (fromIntegral $ lengthUS nums)
-- This generates an hp file with one hump (i.e. the list is shared)
-- main = print $ div (fromIntegral $ sum $ numsL) (fromIntegral $ length $ numsL)