Suppose:
The events are A perceived, B perceived, or Ping perceived.
A possible sequence of events could be A, A, A, B, Ping.
The states are InA, InB, PingMissing.
The rules are:
No Ping among the events -> PingMissing.
A -> InA
B -> InB
(Only Ping events -> InA)
I would like to end up with one recommended action/state.
I see three possibilities for the transition function f(s,e) -> s:
1. Create a pseudo event like "PingMissing perceived", so everything stays in one function.
2. Two separate transition functions, combining the results.
3. One transition function over a tuple of the two states, combining the results.
Any thoughts? Best practices?
Implementation of 2. in F# (language doesn't really matter):
type Event =
    | A
    | B
    | Ping

type State1 =
    | InA
    | InB

type State2 =
    | PingReceived
    | PingMissing

type StateCombined =
    | InA'
    | InB'
    | PingMissing'

let f1 s e : State1 =
    match s, e with
    | _, A -> InA
    | _, B -> InB
    | _, _ -> s

let f2 s e : State2 =
    match s, e with
    | _, Ping -> PingReceived
    | _, _ -> s

let fCombined events =
    let finalState1 = events |> Seq.fold f1 InA
    let finalState2 = events |> Seq.fold f2 PingMissing
    match finalState1, finalState2 with
    | _, PingMissing -> PingMissing'
    | InA, _ -> InA'
    | InB, _ -> InB'

fCombined [A;A;A;B]
// PingMissing'
fCombined [A;A;A;B;Ping]
// InB'
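For reference, here is a sketch of option 3: a single fold over a tuple of the two substates, reusing f1 and f2 from above and distilling the tuple at the end.

// Sketch of option 3: one transition function over a tuple state (State1 * State2),
// so the event sequence is folded only once.
let fTuple (s1, s2) e =
    f1 s1 e, f2 s2 e

let fCombined' events =
    let finalState1, finalState2 = events |> Seq.fold fTuple (InA, PingMissing)
    match finalState1, finalState2 with
    | _, PingMissing -> PingMissing'
    | InA, _ -> InA'
    | InB, _ -> InB'

fCombined' [A;A;A;B;Ping]
// InB'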
I would tend to model the unified state as a tuple of the two substates (broadly in this case: "has a ping been received" and "if a ping has been received, was the last perception an A or a B"). A convenience function can then distill that into a recommendation.
This has the advantage of not traversing the sequence of events more than once, so it is a bit more compatible with a view of the events as a stream: at the very least it avoids refetching the events from an event store or keeping the entire sequence of events in memory.
For example, in Scala (and explicitly modeling the situation where neither an A nor a B has been perceived yet):

sealed trait Event
case object A extends Event
case object B extends Event
case object Ping extends Event

sealed trait PingState
case object PingReceived extends PingState // Don't strictly need...
case object PingMissing extends PingState

sealed trait LastPerceivedState
case object InA extends LastPerceivedState
case object InB extends LastPerceivedState

// ... could just as well be (Option[PingMissing.type], Option[LastPerceivedState])...
type State = (PingState, Option[LastPerceivedState])

// ... in which case, this is (Some(PingMissing), None)
val InitialState: State = PingMissing -> None

def distilledState(state: State): Either[PingMissing.type, Option[LastPerceivedState]] =
  state match {
    case (PingMissing, _) => Left(PingMissing)
    case (_, lpsOpt)      => Right(lpsOpt)
  }
The transition function could then be written directly (taking advantage of the fact that the events can be partitioned into events which affect PingState or LastPerceivedState but never both):
val transitionF = { (state: State, evt: Event) =>
  val (ps, lpsOpt) = state

  evt match {
    case A    => ps -> Some(InA)
    case B    => ps -> Some(InB)
    case Ping => PingReceived -> lpsOpt
  }
}
If some events affect both, decomposing into subhandlers might simplify the code (at the expense of some possibly redundant invocations):
val pingStateTransition = { (ps: PingState, evt: Event) =>
  if (ps == PingReceived) PingReceived
  else if (evt == Ping) PingReceived
  else ps
}

val lastPerceivedStateTransition = { (lpsOpt: Option[LastPerceivedState], evt: Event) =>
  evt match {
    case A => Some(InA)
    case B => Some(InB)
    case _ => lpsOpt
  }
}

val transitionF = { (state: State, evt: Event) =>
  pingStateTransition(state._1, evt) -> lastPerceivedStateTransition(state._2, evt)
}
Related
I'm new to Fabulous and the MVU model, and I'm trying to implement an application that works with BLE. I'm also a bit new to F# (I mostly worked with Erlang and C# in the past), so I'm a bit lost with external event processing. CrossBluetoothLE.Current.Adapter has a DeviceDiscovered event handler (IEvent). What's the most correct way of linking this event handler to the Fabulous update function?
E.g. after I call CrossBluetoothLE.Current.Adapter.StartScanningForDevicesAsync(), I want this event handler to supply newly discovered devices to the update function.
And if I do something like this (it is not working):
type MyApp () as app =
    inherit Application ()

    let deviceDiscovered dispatch =
        CrossBluetoothLE.Current.Adapter.DeviceDiscovered.Subscribe (fun x -> dispatch (App.Msg.Discovered x.Device))

    let runner =
        App.program
        |> Program.withConsoleTrace
        |> Program.withSubscription (fun _ -> Cmd.ofSub deviceDiscovered)
        |> XamarinFormsProgram.run app
Even if it worked, it would only be OK for device discovery, because CrossBluetoothLE.Current.Adapter is static. However, after a device is discovered I will need to work with it (e.g. receive notifications or replies from it), and it will not be possible to include a dynamic device handler in Program.withSubscription.
I'm not sure whether Fabulous is applicable here.
OK, I was able to find a solution and it works now, but the overall architecture looks a bit weird. The generic approach is to create an external mailbox that dispatches messages to the MVU loop.
Describe all messages of the MVU loop in an external module, e.g.:
type Msg =
    | Scan
    | Discovered of IDevice
    | Connect of IDevice
    | ClockMsg of System.DateTime
    | TextMsg of string
Create a type that encapsulates the mailbox:
type DispatchFunc = Msgs.Msg -> unit

type State =
    | Initialized of DispatchFunc
    | NotInitialized

type Mail =
    | Dispatch of DispatchFunc
    | Msg of Msgs.Msg
    | None

let rand = System.Random()
let id = rand.NextDouble()

let postbox = MailboxProcessor.Start(fun inbox ->
    let rec messageLoop (state: State) = async {
        let! mail = inbox.Receive()
        let new_state =
            match mail with
            | None -> state
            | Msg msg ->
                match state with
                | NotInitialized -> NotInitialized
                | Initialized df ->
                    df msg
                    state
            | Dispatch df -> Initialized df
        return! messageLoop new_state
    }
    messageLoop NotInitialized)

let post o =
    postbox.Post o
Here, the mailbox starts in the NotInitialized state and waits for the application to start. Once everything is up, the mailbox receives the dispatch function, which is then used to dispatch external messages to the MVU main loop.
Pass the dispatch handler to the mailbox:
type MyApp () as app =
    inherit Application ()

    // generate initial events + start threads + pass the dispatch reference to the mailbox
    let initThreads dispatch =
        // init & start external (e.g. bluetooth receiver) threads here,
        // or start them asynchronously from the MVU loop
        Postbox.post (Postbox.Dispatch dispatch)
        ()

    let runner =
        App.program
        |> Program.withConsoleTrace
        |> Program.withSubscription (fun _ -> Cmd.ofSub initThreads)
        |> XamarinFormsProgram.run app
So now, if you want to send an event to the MVU loop from an external thread, just start that thread inside initThreads (or, e.g., from within the MVU loop) and use something like: Postbox.post (Postbox.Msg (Msgs.TextMsg "It works!")).
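For instance, an external clock thread could feed the ClockMsg case defined above (startClock is just an illustrative helper name of mine, not part of Fabulous):

// Illustrative sketch: an external producer that posts a ClockMsg once a second.
// It only relies on Postbox.post and the Msgs.ClockMsg case defined above.
let startClock () =
    async {
        while true do
            Postbox.post (Postbox.Msg (Msgs.ClockMsg System.DateTime.Now))
            do! Async.Sleep 1000
    }
    |> Async.Start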
E.g. for my purposes (BLE discovery) it will look like this:
let update msg model =
    match msg with
    | Msgs.Scan ->
        CrossBluetoothLE.Current.Adapter.StopScanningForDevicesAsync() |> Async.AwaitTask |> ignore
        CrossBluetoothLE.Current.Adapter.DeviceDiscovered.Subscribe (fun a ->
            Postbox.post (Postbox.Msg (Msgs.Discovered a.Device))) |> ignore
        CrossBluetoothLE.Current.Adapter.StartScanningForDevicesAsync() |> Async.AwaitTask |> ignore
        model, Cmd.none
    | Msgs.ClockMsg msg ->
        { model with label = msg.ToString() }, Cmd.none
    | Msgs.TextMsg msg ->
        { model with label = msg }, Cmd.none
    | Msgs.Discovered d ->
        { model with gattDevices = d :: model.gattDevices; label = "Discovered " + d.ToString() }, Cmd.none
    | Msgs.Connect d ->
        { model with connectedDevice = d }, Cmd.none
This is for sure a very ugly solution, but I wasn't able to come up with anything more beautiful :(.
Can I write something like this:
let echo (ws: WebSocket) =
    fun ctx -> socket {
        let loop = ref true
        while !loop do
            let! message = Async.Choose (ws.read()) (inbox.Receive())
            match message with
            | Choice1Of2 wsMessage ->
                match wsMessage with
                | Ping, _, _ -> do! ws.send Pong [||] true
                | _ -> ()
            | Choice2Of2 pushMessage -> do! ws.send Text pushMessage true
    }
Or do I need two separate socket loops for concurrent read-write?
I think you could solve this using Async.Choose (there are a bunch of implementations around, though I'm not sure which is the most canonical one).
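For reference, one possible shape of such a helper is sketched below (my own rough version rather than a library function; it ignores cancellation and error handling, and you would still have to check that its result can be bound inside the socket { } workflow):

// Rough sketch of an Async.Choose-style helper: run both computations and
// return whichever finishes first as Choice1Of2 / Choice2Of2.
let choose (first: Async<'T1>) (second: Async<'T2>) : Async<Choice<'T1, 'T2>> =
    Async.FromContinuations (fun (cont, _, _) ->
        let completed = ref false
        let gate = obj ()
        let once result =
            lock gate (fun () ->
                if not !completed then
                    completed := true
                    cont result)
        Async.Start (async {
            let! r = first
            once (Choice1Of2 r) })
        Async.Start (async {
            let! r = second
            once (Choice2Of2 r) }))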
That said, you can certainly create two loops - the reading one inside socket { .. } so that you can receive data from the web socket; the writing one can be an ordinary async { ... } block.
Something like this should do the trick:
let echo (ws: WebSocket) =
    // Loop that waits for the agent and writes to web socket
    let notifyLoop = async {
        while true do
            let! msg = inbox.Receive()
            do! ws.send Text msg }

    // Start this using cancellation token, so that you can stop it later
    let cts = new CancellationTokenSource()
    Async.Start(notifyLoop, cts.Token)

    // The loop that reads data from the web socket
    fun ctx -> socket {
        let loop = ref true
        while !loop do
            let! message = ws.read()
            match message with
            | Ping, _, _ -> do! ws.send Pong [||] true
            | _ -> () }
There isn't a proper implementation of Async.Choose (for this case at least), so we need two async loops for concurrent read-write; see this for more detail.
I am trying to solve the following problem. I have some agents running in real time, with a large heartbeat of a couple of milliseconds, and for this reason the order of operations they process is mostly deterministic (as the message processing is not the bottleneck).
Now, I am running a large number of simulations of the system where I no longer have a heartbeat (otherwise it would take a couple of centuries), but I need to make sure the order of operations is preserved. For this, I adopted the following solution: the simulator makes sure each agent has processed its message queue by posting a dummy synchronization message and blocking while waiting for the answer. This does work for my application, but the time it takes is not intuitive - a single-threaded implementation would be an order of magnitude faster (I guess x100-ish, although I have not tested it).
I have isolated a small test that shows the issue, even trying another library, Akka.NET:
type Greet =
    | Greet of string
    | Hello of AsyncReplyChannel<bool>
    | Hello2

[<EntryPoint>]
let main argv =
    let system = System.create "MySystem" <| Configuration.load()

    let greeter = spawn system "greeter" <| fun mailbox ->
        let rec loop() = actor {
            let! msg = mailbox.Receive()
            let sender = mailbox.Sender()
            match msg with
            | Greet who -> () // printf "Hello, %s!\n" who
            | Hello2 -> sender.Tell(true)
            | _ -> ()
            return! loop()
        }
        loop()

    let greeterF =
        MailboxProcessor.Start
            (fun inbox ->
                async {
                    while true do
                        let! msg = inbox.Receive()
                        match msg with
                        | Greet who -> () // printf "Hello, %s!\n" who
                        | Hello reply -> reply.Reply true
                        | _ -> ()
                })

    let n = 1000000
    let t1 = System.Diagnostics.Stopwatch()

    t1.Start()
    for i = 1 to n do
        let rep = greeterF.PostAndReply(fun reply -> (Hello reply)) |> ignore
        ()
    printfn "elapsed Mailbox:%A" t1.ElapsedMilliseconds

    t1.Restart()
    for i = 1 to n do
        let res = greeter.Ask (Hello2)
        let rr = res.Result
        ()
    printfn "elapsed Akka:%A" t1.ElapsedMilliseconds

    System.Console.ReadLine () |> ignore
    0
Basically, both take about 10 seconds for a mere 1 million synchronizations, with no computation whatsoever involved, and this is... unfortunate.
I am wondering if anybody has come across the same problem and if there is any way to switch off the overhead by forcing everything to run in single-threaded mode... something better than deactivating all the CPUs but one in the BIOS, or writing a clone of the whole system without the agents.
Any help is well appreciated.
The reason for the Akka.NET version being slow here is how you communicate with the actor:
main process      Task      FutureActorRef      !!ThreadPool!!      greeter
    Ask ------------------------->
                                  Tell ----------------------------->
                                                        MailboxRun ----->
                                                 (greeter mailbox is empty)
                                  <---------------------------------- Tell
              <-- Complete task
<---------- .Result
1. For each iteration, a TPL task is created.
2. Then a single message is sent to the greeter.
3. The main process then blocks while waiting for the response to come back.
4. The greeter replies back, which in turn completes the task inside the FutureActorRef.
5. Rinse and repeat.
This design causes Akka.NET to start and stop the greeter's "mailbox run" for each message, as the mailbox queue becomes empty on every iteration.
This results in thread pool scheduling for every single message that is passed.
It's a bit like getting into your car, putting the pedal to the metal, then abruptly stopping and stepping out of the car, and repeating the procedure again.
That is just not a very effective way to travel fast.
@Aaronontheweb's suggestion will only have an effect if you work out the above kinks in your code.
The mailbox needs to be able to constantly pick messages off the internal queue and work with them in batches to achieve full throughput.
Instead, separate the producer from the consumer.
Create an actor that listens for the responses from your greeter.
And once that actor has processed your 1,000,000 messages, let it send a WorkCompleted message back.
[Edit]
I gave it a shot myself; I don't know F#, so it might not be completely idiomatic :)
open Akka
open Akka.Actor
open Akka.FSharp

type Greet =
    | Greet of string
    | Hello of AsyncReplyChannel<bool>
    | Hello2

type Consume =
    | Response
    | SetSender

[<EntryPoint>]
let main argv =
    let system = System.create "MySystem" <| Configuration.load()

    let greeter = spawn system "greeter" <| fun mailbox ->
        let rec loop() = actor {
            let! msg = mailbox.Receive()
            let sender = mailbox.Sender()
            match msg with
            | Greet who -> () // printf "Hello, %s!\n" who
            | Hello2 -> sender.Tell(Response)
            | _ -> ()
            return! loop()
        }
        loop()

    let consumer = spawn system "consumer" <| fun mailbox ->
        let rec loop(count, sender : IActorRef) = actor {
            if count = 1000000 then sender.Tell(true)
            let! msg = mailbox.Receive()
            match msg with
            | Response -> return! loop(count + 1, sender)
            | SetSender -> return! loop(count, mailbox.Sender())
        }
        loop(0, null)

    let n = 1000000
    let t1 = System.Diagnostics.Stopwatch()
    t1.Start()

    for i = 1 to n do
        greeter.Tell(Hello2, consumer)

    let workdone = consumer.Ask SetSender
    workdone.Wait()

    printfn "elapsed Akka:%A" t1.ElapsedMilliseconds
    System.Console.ReadLine () |> ignore
    0
I updated your code to use a separate consumer for the actor responses, which replies back once all responses have been processed.
By doing so, your processing time is now down to 650 ms on my machine.
If you want better throughput, you need to involve more actors to parallelize more.
I'm not sure if this helps in your specific scenario.
Here is a slightly modified MailboxProcessor version:
module MBPAsync =
    open System.Diagnostics // for Stopwatch

    type Greet =
        | Greet of string
        | Hello of AsyncReplyChannel<bool>

    let run n =
        let timer = Stopwatch.StartNew ()
        use greeter =
            MailboxProcessor.Start <| fun inbox -> async {
                while true do
                    let! msg = inbox.Receive()
                    match msg with
                    | Greet who -> () // printf "Hello, %s!\n" who
                    | Hello reply -> reply.Reply true
            }
        Async.RunSynchronously <| async {
            for i = 1 to n do
                do! Async.Ignore (greeter.PostAndAsyncReply Hello)
        }
        let elapsed = timer.Elapsed
        printfn "%A" elapsed
The difference here is that this version uses PostAndAsyncReply and keeps the computation in an async workflow. On my quick test this seemed to be much faster than using PostAndReply, but YMMV.
The timings I get from the above MBP version look roughly like this:
> MBPAsync.run 1000000 ;;
00:00:02.6883486
val it : unit = ()
A comment earlier mentioned my Hopac library. Here is an optimized version using Hopac:
module Hop =
    // Requires the Hopac library to be opened (for ch, ivar, server, run, Job and
    // the infix operators) plus System.Diagnostics for Stopwatch.
    type Greet =
        | Greet of string
        | Hello of IVar<bool>

    let run n =
        let timer = Stopwatch.StartNew ()
        let greeterCh = ch ()
        do greeterCh >>= function
               | Greet who -> Job.unit ()
               | Hello reply -> reply <-= true
           |> Job.forever
           |> server
        Job.forUpToIgnore 1 n <| fun _ ->
            let reply = ivar ()
            greeterCh <-- Hello reply >>.
            reply
        |> run
        let elapsed = timer.Elapsed
        printfn "%A" elapsed
The timings I get from the above Hopac version look roughly like this:
> Hop.run 1000000 ;;
00:00:00.1088768
val it : unit = ()
I'm not an F# developer, but I'm a core dev on Akka.NET. A couple of ideas for your scenario:
If you're only using a single actor for this work, you can try using a PinnedDispatcher - that way the actor runs on its own dedicated thread all the time. That will save you on unnecessary context switching overhead.
You can also set the throughput of the mailbox to be much higher for this PinnedDispatcher than the normal settings, i.e. set a throughput value of 10000 (or something) instead of the normal 25. Assuming that the contents of your mailbox grow in large bursts, this should save you on mailbox synchronization overhead.
Here's what your dispatcher configuration might look like:
my-pinned-dispatcher {
    type = PinnedDispatcher
    throughput = 1000 # your mileage may vary
}
And then configure an actor to use it
C# Fluent Interface
var myActor = myActorSystem.ActorOf(Props.Create<FooActor>()
    .WithDispatcher("my-pinned-dispatcher"));
Config
akka.actor.deployment {
    /greeter {
        dispatcher = my-pinned-dispatcher
    }
}
These are both options you can configure via HOCON in App.config or Web.config, or you can use the fluent interface on the Props class to do this. Also worth noting: there's a bug with pinned dispatchers at the moment, but that should be fixed in our next maintenance release (v1.0.1), which should be out next week.
Your mileage may vary, but this is what I would try - basically it's just designed to help reduce contention and overhead around a single actor.
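Since the question uses Akka.FSharp's spawn, the wiring could look roughly like this (a sketch, assuming the my-pinned-dispatcher block above sits at the root of your HOCON config and that your Akka.FSharp version exposes spawnOpt with SpawnOption.Dispatcher):

// Sketch: spawn the greeter on the pinned dispatcher instead of the plain spawn call.
let greeter =
    spawnOpt system "greeter"
        (fun mailbox ->
            let rec loop () = actor {
                let! msg = mailbox.Receive()
                match msg with
                | Hello2 -> mailbox.Sender().Tell(true)
                | _ -> ()
                return! loop ()
            }
            loop ())
        [ SpawnOption.Dispatcher "my-pinned-dispatcher" ]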
I need to publish messages of different types to the event stream, and those messages should have different priorities. For example, if 10 messages of type A have been posted, and one message of type B is posted after all of them, and the priority of B is higher than the priority of A, then message B should be picked up by the next actor even if there are 10 messages of type A in the queue.
I have read about prioritized messages here and created my simple implementation of that mailbox:
class PrioritizedMailbox(settings: Settings, cfg: Config) extends UnboundedPriorityMailbox(
  PriorityGenerator {
    case ServerPermanentlyDead => println("Priority:0"); 0
    case ServerDead            => println("Priority:1"); 1
    case _                     => println("Default priority"); 10
  }
)
Then I configured it in application.conf:
akka {
  actor {
    prio-dispatcher {
      type = "Dispatcher"
      mailbox-type = "mailbox.PrioritizedMailbox"
    }
  }
}
and wired it into my actor:
private val myActor = actors.actorOf(
  Props[MyEventHandler[T]].
    withRouter(RoundRobinRouter(HIVE)).
    withDispatcher("akka.actor.prio-dispatcher").
    withCreator(
      new Creator[Actor] {
        def create() = new MyEventHandler(storage)
      }), name = "eventHandler")
I'm using ActorSystem.eventStream.publish to send messages, and my actor is subscribed to it (I can see in the logs that messages are processed, but in FIFO order).
However, it looks like this is not enough, because in the logs/console I've never seen messages like "Default priority". Am I missing something here? Does the described approach work with event streams, or only when sending messages directly to an actor? And how do I get prioritized messages with eventStream?
Your problem is that your actors are insanely fast, so messages get processed before they have time to queue up, and thus there cannot be any prioritization done by the mailbox. The example below proves the point:
trait Foo
case object X extends Foo
case object Y extends Foo
case object Z extends Foo

class PrioritizedMailbox(settings: ActorSystem.Settings, cfg: Config)
  extends UnboundedPriorityMailbox(
    PriorityGenerator {
      case X ⇒ 0
      case Y ⇒ 1
      case Z ⇒ 2
      case _ ⇒ 10
    })

val s = ActorSystem("prio", com.typesafe.config.ConfigFactory.parseString(
  """ prio-dispatcher {
        type = "Dispatcher"
        mailbox-type = "%s"
      }""".format(classOf[PrioritizedMailbox].getName)))

val latch = new java.util.concurrent.CountDownLatch(1)

val a = s.actorOf(Props(new akka.actor.Actor {
  latch.await // Just wait here so that the messages are queued up inside the mailbox

  def receive = {
    case any ⇒ /*println("Processing: " + any);*/ sender ! any
  }
}).withDispatcher("prio-dispatcher"))

implicit val sender = testActor

a ! "pig"
a ! Y
a ! Z
a ! Y
a ! X
a ! Z
a ! X
a ! "dog"

latch.countDown()

Seq(X, X, Y, Y, Z, Z, "pig", "dog") foreach { x => expectMsg(x) }

s.shutdown()
This test passes with flying colors.
How can I set an action to occur on a key release in xmonad?
I don't like menu bars and panels.
Instead of a panel like xmobar, I want a full-screen page of info (time, currently selected window and workspace, etc.) to appear when I hold down a key combo and then vanish when I let the keys go.
I could code the info page application myself.
I can set the info page to spawn on a key press.
I can not set anything to happen on a key release.
How can I set an action to occur on a key release?
I am considering extending xmonad myself to do this.
I hope I don't have to though because it'd be really annoying.
XMonad passes all received events, including KeyPress events, to the handleEventHook, so this code would be able to react to KeyRelease events:
module KeyUp where

import Data.Monoid
import qualified Data.Map as M

import XMonad
import Control.Monad

keyUpEventHook :: Event -> X All
keyUpEventHook e = handle e >> return (All True)

keyUpKeys (XConf{ config = XConfig {XMonad.modMask = modMask} }) = M.fromList $
    [ ((modMask, xK_v), io (print "Hi")) ]

handle :: Event -> X ()
handle (KeyEvent {ev_event_type = t, ev_state = m, ev_keycode = code})
    | t == keyRelease = withDisplay $ \dpy -> do
        s <- io $ keycodeToKeysym dpy code 0
        mClean <- cleanMask m
        ks <- asks keyUpKeys
        userCodeDef () $ whenJust (M.lookup (mClean, s) ks) id
handle _ = return ()
You would use it like this in your xmonad.hs file:
handleEventHook = handleEventHook defaultConfig `mappend`
                  keyUpEventHook `mappend`
                  fullscreenEventHook
Unfortunately, this does not work yet: it will only react to KeyRelease events that have a corresponding entry in the regular keys configuration. This is due to grabKeys in XMonad.Main grabbing only the keys mentioned in keys. You can work around this by defining a dummy action for every combination that you want to handle in KeyUp:
myKeys conf@(XConfig {XMonad.modMask = modMask}) = M.fromList $
    ...
    , ((modMask, xK_v), return ())
myStartupHook :: X ()
myStartupHook = do
    XConf { display = dpy, theRoot = rootw } <- ask
    myKeyCode <- io $ keysymToKeycode dpy xK_Super_R
    io $ grabKey dpy myKeyCode anyModifier rootw True grabModeAsync grabModeAsync
    spawn "~/ScriptsVcs/hideTint2.sh"

myHook :: Event -> X All
myHook e = do
    case e of
        ke@(KeyEvent _ _ _ _ _ _ _ _ _ _ _ _ _ _ _) -> do
            if ev_keycode ke == 134
                then if ev_state ke == 0
                    then do
                        -- key has been pressed
                        spawn "~/ScriptsVcs/showTint2.sh"
                    else do
                        spawn "~/ScriptsVcs/hideTint2.sh"
                else pure ()
        _ -> pure ()
    pure $ All True
The above is an example. Do take note that a 'key release' could occur with a modifier key (ev_state).