F# Fabulous Xamarin: external event subscription

I'm new to Fabulous and the MVU model, and I'm trying to implement an application that works with BLE. I'm also a bit new to F# (I mostly worked with Erlang and C# in the past), so I'm a bit lost with external event processing. CrossBluetoothLE.Current.Adapter has a DeviceDiscovered event handler (IEvent). What is the most correct way of linking this event handler to the Fabulous update function?
For example, after I call CrossBluetoothLE.Current.Adapter.StartScanningForDevicesAsync(), I want this event handler to supply newly discovered devices to the update function.
If I do something like this (it does not work):
type MyApp () as app =
    inherit Application ()

    let deviceDiscovered dispatch =
        CrossBluetoothLE.Current.Adapter.DeviceDiscovered.Subscribe (fun x ->
            dispatch (App.Msg.Discovered x.Device))

    let runner =
        App.program
        |> Program.withConsoleTrace
        |> Program.withSubscription (fun _ -> Cmd.ofSub deviceDiscovered)
        |> XamarinFormsProgram.run app
If it works, it will be fine for device discovery because CrossBluetoothLE.Current.Adapter is static. However, after a device is discovered I will need to work with it (e.g. receive notifications or replies from it), and it will not be possible to include a dynamic per-device handler in Program.withSubscription.
I'm not sure whether Fabulous is applicable here.

OK, I was able to find a solution and it works now, but the overall architecture looks a bit weird. The generic approach is to create an external mailbox that dispatches messages to the MVU loop.
Describe all messages of the MVU loop in an external module, e.g.:
type Msg =
    | Scan
    | Discovered of IDevice
    | Connect of IDevice
    | ClockMsg of System.DateTime
    | TextMsg of string
Create a type that encapsulates the mailbox:
type DispatchFunc = Msgs.Msg -> unit

type State =
    | Initialized of DispatchFunc
    | NotInitialized

type Mail =
    | Dispatch of DispatchFunc
    | Msg of Msgs.Msg
    | None

let rand = System.Random()
let id = rand.NextDouble()

let postbox = MailboxProcessor.Start(fun inbox ->
    let rec messageLoop (state: State) = async {
        let! mail = inbox.Receive()
        let new_state =
            match mail with
            | None ->
                state
            | Msg msg ->
                match state with
                | NotInitialized -> NotInitialized
                | Initialized df ->
                    df msg
                    state
            | Dispatch df ->
                Initialized df
        return! messageLoop new_state
    }
    messageLoop NotInitialized)

let post o =
    postbox.Post o
Here the mailbox starts in the NotInitialized state and waits for the application to start. Once everything is up, the mailbox receives the dispatch function, which is then used to forward external messages to the MVU main loop.
Pass the dispatch handler to the mailbox:
type MyApp () as app =
    inherit Application ()

    // generate initial events + start threads + pass dispatch reference to the mailbox
    let initThreads dispatch =
        // init & start external (e.g. bluetooth receiver) threads here,
        // or start them asynchronously from the MVU loop
        Postbox.post (Postbox.Dispatch dispatch)
        ()

    let runner =
        App.program
        |> Program.withConsoleTrace
        |> Program.withSubscription (fun _ -> Cmd.ofSub initThreads)
        |> XamarinFormsProgram.run app
So now, if you want to send an event to the MVU loop from an external thread, just start the thread inside initThreads (or, e.g., from within the MVU loop itself) and use something like: Postbox.post (Postbox.Msg (Msgs.TextMsg "It works!")).
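For instance, a background clock thread feeding the ClockMsg case above could be started from initThreads along these lines (just a sketch):
// Sketch: a background loop that pushes ClockMsg into the MVU loop via the Postbox.
let startClock () =
    async {
        while true do
            do! Async.Sleep 1000
            Postbox.post (Postbox.Msg (Msgs.ClockMsg System.DateTime.Now))
    } |> Async.Start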
For my purposes (BLE discovery) the update function looks like this:
let update msg model =
    match msg with
    | Msgs.Scan ->
        CrossBluetoothLE.Current.Adapter.StopScanningForDevicesAsync() |> Async.AwaitTask |> ignore
        CrossBluetoothLE.Current.Adapter.DeviceDiscovered.Subscribe (fun a ->
            Postbox.post (Postbox.Msg (Msgs.Discovered a.Device))) |> ignore
        CrossBluetoothLE.Current.Adapter.StartScanningForDevicesAsync() |> Async.AwaitTask |> ignore
        model, Cmd.none
    | Msgs.ClockMsg msg ->
        { model with label = msg.ToString() }, Cmd.none
    | Msgs.TextMsg msg ->
        { model with label = msg }, Cmd.none
    | Msgs.Discovered d ->
        { model with gattDevices = d :: model.gattDevices; label = "Discovered " + d.ToString() }, Cmd.none
    | Msgs.Connect d ->
        { model with connectedDevice = d }, Cmd.none
This is for sure a very ugly solution, but I wasn't able to come up with anything more elegant :(.

Related

Event Sourcing: How to combine divergent states?

Suppose:
The events are A perceived, B perceived or Ping perceived.
A possible sequence of events could be A,A,A,B,Ping.
The states are InA, InB, PingMissing.
The rules are
No Ping in all events -> PingMissing.
A -> InA
B -> InB
(Only Ping events -> InA)
I would like to end up with one recommended action/state.
I see three possibilities for the transition function f(s, e) -> s:
Create a pseudo event like PingMissing perceived, so that everything stays in one function.
Two separate transition functions and combining the result.
One transition function with two states as a tuple and combining the result.
Any thoughts? Best practices?
Implementation of option 2 in F# (the language doesn't really matter):
type Event =
    | A
    | B
    | Ping

type State1 =
    | InA
    | InB

type State2 =
    | PingReceived
    | PingMissing

type StateCombined =
    | InA'
    | InB'
    | PingMissing'

let f1 s e : State1 =
    match s, e with
    | _, A -> InA
    | _, B -> InB
    | _, _ -> s

let f2 s e : State2 =
    match s, e with
    | _, Ping -> PingReceived
    | _, _ -> s

let fCombined events =
    let finalState1 = events |> Seq.fold f1 InA
    let finalState2 = events |> Seq.fold f2 PingMissing
    match finalState1, finalState2 with
    | _, PingMissing -> PingMissing'
    | InA, _ -> InA'
    | InB, _ -> InB'

fCombined [A;A;A;B]
// PingMissing'
fCombined [A;A;A;B;Ping]
// InB'
I would tend to model the unified state as a tuple of the two substates (broadly in this case: "has a ping been received" and "if a ping has been received, was the last perception an A or a B"). A convenience function can then distill that into a recommendation.
This has the advantage of not walking the sequence of events twice, so it is a bit more compatible with viewing the events as a stream: at the very least it means not having to refetch the events from an event store or keep the entire sequence of events in memory.
For example, in Scala (and explicitly modeling the situation where neither A nor B has been perceived yet):
sealed trait Event
case object A extends Event
case object B extends Event
case object Ping extends Event

sealed trait PingState
case object PingReceived extends PingState // Don't strictly need...
case object PingMissing extends PingState

sealed trait LastPerceivedState
case object InA extends LastPerceivedState
case object InB extends LastPerceivedState

// ... could just as well be (Option[PingMissing], Option[LastPerceivedState])...
type State = (PingState, Option[LastPerceivedState])

// ... in which case, this would be (Some(PingMissing), None)
val InitialState = PingMissing -> None

def distilledState(state: State): Either[PingMissing.type, Option[LastPerceivedState]] =
  state match {
    case (PingMissing, _) => Left(PingMissing)
    case (_, lpsOpt) => Right(lpsOpt)
  }
The transition function could then be written directly (taking advantage of the fact that the events can be partitioned into events which affect PingState or LastPerceivedState but never both):
val transitionF = { (state: State, evt: Event) =>
  val (ps, lpsOpt) = state
  evt match {
    case A => ps -> Some(InA)
    case B => ps -> Some(InB)
    case Ping => PingReceived -> lpsOpt
  }
}
In the event that there are events which affect both, then decomposing into subhandlers might simplify the code (at the expense of some possibly redundant invocations):
val pingStateTransition = { (ps: PingState, evt: Event) =>
  if (ps == PingReceived) PingReceived
  else if (evt == Ping) PingReceived
  else ps
}

val lastPerceivedStateTransition = { (lpsOpt: Option[LastPerceivedState], evt: Event) =>
  evt match {
    case A => Some(InA)
    case B => Some(InB)
    case _ => lpsOpt
  }
}

val transitionF = { (state: State, evt: Event) =>
  pingStateTransition(state._1, evt) -> lastPerceivedStateTransition(state._2, evt)
}
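Translating the tuple-state approach back into F# gives roughly the following (an untested sketch with my own names; the Event type mirrors the one from the question):
// Sketch of the tuple-state approach in F#.
type Event =
    | A
    | B
    | Ping

type PingState =
    | PingReceived
    | PingMissing

type LastPerceived =
    | InA
    | InB

// combined state: "was a ping seen?" plus "last A/B perceived, if any"
type State = PingState * LastPerceived option

let initialState : State = PingMissing, None

let transition ((ps, lps): State) event : State =
    match event with
    | A    -> ps, Some InA
    | B    -> ps, Some InB
    | Ping -> PingReceived, lps

// distill the combined state into one recommendation
let distill (state: State) =
    match state with
    | PingMissing, _ -> Error PingMissing
    | _, lps         -> Ok lps

// a single fold over the event stream
[A; A; A; B; Ping] |> List.fold transition initialState |> distill
// Ok (Some InB)
[A; A; A; B] |> List.fold transition initialState |> distill
// Error PingMissing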

streaming data from events to a Suave socket

I am experimenting with Suave to send a stream of data updates; I want to replace a system we have that does polling with a socket implementation.
Here's some code:
let updateStreamSocket (webSocket : WebSocket) (context : HttpContext) =
    socket {
        printfn "connection"
        candleUpdateEvent.Publish.Add(fun d ->
            webSocket.send Binary (d |> ByteSegment) true
            |> Async.RunSynchronously
            |> ignore)

        let mutable loop = true
        while loop do
            let! msg = webSocket.read()
            match msg with
            | (Close, _, _) ->
                let emptyResponse = [||] |> ByteSegment
                do! webSocket.send Close emptyResponse true
                loop <- false
            | _ -> ()

        printfn "disconnection"
    }
Since I'm testing, I only care about the Close message for now, but eventually I'll have to process the Text messages to handle subscriptions.
The model is that data gets processed and each batch triggers an event (through a mailbox processor, on separate threads). In the socket code I need to handle both the socket messages I receive and these events, so that I can send the data out.
How could I join this into a single loop and wait for either kind of event?
Right now the event handler in the socket {} block is added and removed with each connection/disconnection, but it is possible that Close gets handled and then an event arrives and tries to send data anyway. While this works in testing, it is not right.
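One direction I'm considering (just an untested sketch, reusing the send call from the code above) is to funnel every outbound write through a single MailboxProcessor, so that the event handler and the Close branch never touch the socket at the same time:
// Sketch: all writes go through one agent, so a data event that arrives after
// Close is dropped instead of racing the socket.
type Outbound =
    | Data of byte []
    | Shutdown

let outbox (webSocket : WebSocket) =
    MailboxProcessor.Start(fun inbox ->
        let rec loop closed = async {
            let! msg = inbox.Receive()
            match msg with
            | Data d when not closed ->
                // same send call as in the event handler above
                webSocket.send Binary (d |> ByteSegment) true
                |> Async.RunSynchronously |> ignore
                return! loop closed
            | Data _ -> return! loop closed      // already closed: drop the update
            | Shutdown -> return! loop true }    // stop writing, keep draining
        loop false)
The event subscription would then just post Data d to this agent, and the Close branch would post Shutdown before replying, but I'm not sure this is the idiomatic way to do it in Suave.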

F# | How to manage WebSocketClient ReceiveAsync in a multithreaded scenario?

Looking for a WebSocketClient example, I only found simple examples covering a single request/response scenario.
Something like this:
type WSClientSimple (url) =
    let ws = new ClientWebSocket()
    let lockConnection = Object()

    let connect() =
        lock lockConnection (fun () ->
            if not (ws.State = WebSocketState.Open) then
                ws.ConnectAsync(Uri(url), CancellationToken.None)
                |> Async.AwaitTask |> Async.RunSynchronously // await
            else ())

    let receive () =
        lock lockConnection (fun () ->
            let rec readStream finalText endOfMessage =
                let buffer = ArraySegment(Array.zeroCreate<byte> 1024)
                let result = ws.ReceiveAsync(buffer, CancellationToken.None) |> Async.AwaitTask |> Async.RunSynchronously
                let text = finalText + Encoding.UTF8.GetString (buffer.Array |> Array.take result.Count)
                if result.EndOfMessage then text
                else readStream text true
            readStream "" false)

    let sendRequest jsonMessage =
        let bytes = Encoding.UTF8.GetBytes(jsonMessage: string)
        let bytesMessage = ArraySegment(bytes, 0, bytes.Length)
        if not (ws.State = WebSocketState.Open) then
            connect()
        // send request...
        ws.SendAsync(bytesMessage, WebSocketMessageType.Text, true, CancellationToken.None)
        |> Async.AwaitTask |> Async.RunSynchronously
        // ... read response
        receive()

    member this.SendRequest request = sendRequest request
Obviously it works with:
[<Test>]
member this.``Receive sequentially`` () =
    let client = WSClientSimple("url")
    for i in 1..100 do
        client.SendRequest "aaa" |> ignore
and also (thanks to the horrible lock) with multiple threads using the same client:
[<Test>]
member this.``Receive parallel on same client`` () =
    let client = WSClientSimple("url")
    for _ in 1..100 do
        async {
            client.SendRequest "aaa" |> ignore
        } |> Async.Start
Now, if I really want to get the best out of the WebSocket's duplex communication, I would continuously read from the socket, send requests without any blocking, and distribute the received messages to the right call.
So, here is an ongoing receive function that collects all the inbound messages.
type WSClientTest2 (url: string) =
    let onMessageReceived = new Event<string>()
    let responseMessage = new Event<ResponseMessage>()
    let receivedMessages = System.Collections.Concurrent.ConcurrentQueue<ResponseMessage>()
    let responseCallbacks = Map.empty<int, (string -> unit)>

    let manageMessage (message: string) =
        match message.Split(':') with
        | [| id; message |] ->
            responseMessage.Trigger { Id = int(id); Message = message }
            receivedMessages.Enqueue { Id = int(id); Message = message }
        | _ -> ()

    let startReceiving() =
        let mutable counter = 1
        async {
            // simulate receiving from a WebSocket
            while true do
                System.Threading.Tasks.Task.Delay 100 |> Async.AwaitTask |> Async.RunSynchronously
                onMessageReceived.Trigger (sprintf "message %d" counter)
                manageMessage (sprintf "%d:message" counter)
                counter <- counter + 1
        } |> Async.Start

    do
        startReceiving()
How can I send a request and wait for the correlated response message?
This is my attempt:
let mutable requestId = 0

let sendRequest message : string =
    let requestId = requestId + 1
    let received = new Event<string>()
    let receivedCall = fun (msg: string) ->
        received.Trigger msg
    responseCallbacks.Add(requestId, receivedCall) |> ignore

    let cancel = fun () -> failwith "Timeout"
    async {
        System.Threading.Thread.Sleep 500 // wait x seconds
        cancel()
    } |> Async.Start

    // simulate send/receive message after some time
    let generateRequest () =
        System.Threading.Thread.Sleep 100 // wait x time for the response
        responseMessage.Trigger { Id = requestId; Message = message }
    generateRequest()

    Async.AwaitEvent(received.Publish, cancel)
    |> Async.RunSynchronously
Async.AwaitWaitHandle seems like the right thing to use, but I don't know how to create a WaitHandle.
I'm using Async.AwaitEvent, but it does not seem to work.
The cancel() is always called, but it does not raise any exception!
What would be a proper way to wait for an event while executing a function, and then check and return its content?
I also tried to use a Map<id, response> populated with every inbound message, but then I still don't know how to "wait" for the proper message, and it probably also requires a check for orphan response messages (added complexity).
More generally, if the resulting code turns out this crappy, I would prefer to use a simple API for the request/response scenario and use the WebSocket only for realtime updates.
I'm looking for a clean solution; otherwise I don't think it is worth it for the sake of performance, at least not for my needs.
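For reference, the cleanest shape I can think of so far (again just a sketch, not wired into the code above) is to register a TaskCompletionSource per request id and let the receive loop complete it:
// Sketch: the receive loop completes the TaskCompletionSource registered for a
// message id, and the sender awaits its own task with a timeout.
open System.Collections.Concurrent
open System.Threading.Tasks

let pending = ConcurrentDictionary<int, TaskCompletionSource<string>>()

// called from the receive loop for every inbound "id:message"
let complete (id: int) (message: string) =
    match pending.TryRemove id with
    | true, tcs -> tcs.TrySetResult message |> ignore
    | _ -> ()                               // orphan response: nobody is waiting

// called right after sending the request with this id
let awaitResponse (id: int) (timeoutMs: int) = async {
    let tcs = TaskCompletionSource<string>()
    pending.[id] <- tcs
    let! winner =
        Task.WhenAny(tcs.Task :> Task, Task.Delay timeoutMs) |> Async.AwaitTask
    if obj.ReferenceEquals(winner, tcs.Task) then
        return Some tcs.Task.Result
    else
        pending.TryRemove id |> ignore      // timed out: unregister
        return None }
Orphan responses would simply find no registered id and be dropped, which would also cover the orphan-message check mentioned above.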

How to implement server-push over websocket in suave?

Can I write something like this?
let echo (ws: WebSocket) =
    fun ctx -> socket {
        let loop = ref true
        while !loop do
            let! message = Async.Choose (ws.read()) (inbox.Receive())
            match message with
            | Choice1Of2 wsMessage ->
                match wsMessage with
                | Ping, _, _ -> do! ws.send Pong [||] true
                | _ -> ()
            | Choice2Of2 pushMessage -> do! ws.send Text pushMessage true
    }
Or do I need two separate socket loops for concurrent reads and writes?
I think you could solve this using Async.Choose (there are a bunch of implementations around, though I'm not sure which one is the most canonical).
That said, you can certainly create two loops: the reading one inside socket { .. } so that you can receive data from the web socket; the writing one can be an ordinary async { ... } block.
Something like this should do the trick:
let echo (ws: WebSocket) =
    // Loop that waits for the agent and writes to the web socket
    let notifyLoop = async {
        while true do
            let! msg = inbox.Receive()
            do! ws.send Text msg }

    // Start this using a cancellation token, so that you can stop it later
    let cts = new CancellationTokenSource()
    Async.Start(notifyLoop, cts.Token)

    // The loop that reads data from the web socket
    fun ctx -> socket {
        let loop = ref true
        while !loop do
            let! message = ws.read()
            match message with
            | Ping, _, _ -> do! ws.send Pong [||] true
            | _ -> () }
There isn't a proper implementation of Async.Choose (for this case at least), so we need two async loops for concurrent reads and writes; see this for more detail.

F# Akka.NET agents performance optimization when synchronizing

I am trying to solve the following problem. I have some agents running in real time, with a large heartbeat of a couple of milliseconds, so the order of operations they process is mostly deterministic (since message processing is not the bottleneck).
Now I am running a large number of simulations of the system where I no longer have a heartbeat (otherwise it would take a couple of centuries), but I need to make sure the order of operations is preserved. For this I adopted the following solution: the simulator makes sure each agent has processed its message queue by posting a dummy synchronization message and blocking while waiting for the answer. This works for my application, but the time it takes is not intuitive: a single-threaded implementation would be an order of magnitude faster (I guess around 100x, although I have not tested it).
I have isolated a small test that shows the issue, even when trying another library, Akka.NET:
type Greet =
    | Greet of string
    | Hello of AsyncReplyChannel<bool>
    | Hello2

[<EntryPoint>]
let main argv =
    let system = System.create "MySystem" <| Configuration.load()

    let greeter = spawn system "greeter" <| fun mailbox ->
        let rec loop() = actor {
            let! msg = mailbox.Receive()
            let sender = mailbox.Sender()
            match msg with
            | Greet who -> () // printf "Hello, %s!\n" who
            | Hello2 -> sender.Tell(true)
            | _ -> ()
            return! loop()
        }
        loop()

    let greeterF =
        MailboxProcessor.Start
            (fun inbox ->
                async {
                    while true do
                        let! msg = inbox.Receive()
                        match msg with
                        | Greet who -> () // printf "Hello, %s!\n" who
                        | Hello reply -> reply.Reply true
                        | _ -> ()
                })

    let n = 1000000

    let t1 = System.Diagnostics.Stopwatch()
    t1.Start()
    for i = 1 to n do
        let rep = greeterF.PostAndReply(fun reply -> (Hello reply)) |> ignore
        ()
    printfn "elapsed Mailbox:%A" t1.ElapsedMilliseconds

    t1.Restart()
    for i = 1 to n do
        let res = greeter.Ask (Hello2)
        let rr = res.Result
        ()
    printfn "elapsed Akka:%A" t1.ElapsedMilliseconds

    System.Console.ReadLine () |> ignore
    0
Basically, both take about 10 seconds for a mere one million synchronizations, with no computation whatsoever involved, which is... unfortunate.
I am wondering if anybody has come across the same problem, and whether there is any way to switch off the overhead and force everything to run in a single-threaded mode, short of deactivating all but one CPU in the BIOS or writing a clone of the whole system without the agents.
Any help is well appreciated.
The reason the Akka.NET version is slow here is how you communicate with the actor:
main process        Task      FutureActorRef      !!ThreadPool!!      greeter
    Ask --------------------------->
                                    Tell --------------------------------->
                                                        MailboxRun ------->
                                                                           (greeter mailbox is empty)
    <---------------------------------------------------------------------- Tell
    <-- Complete task
    <---------- .Result
For each iteration, a TPL task is created.
Then a single message is sent to the greeter.
The main process then blocks while waiting for the response to come back.
The greeter replies back, which in turn completes the task inside the FutureActorRef.
Rinse and repeat.
This design causes Akka.NET to start and stop the greeter's "mailbox run" for each message, since the mailbox queue becomes empty on every iteration.
This results in thread-pool scheduling for every single message that is passed.
It's a bit like getting into your car, putting the pedal to the metal, then abruptly stopping and stepping out of the car, and repeating the procedure over and over.
That is just not a very effective way to travel fast.
@Aaronontheweb's suggestion will only have an effect once you work out the above kinks in your code.
The mailbox needs to be able to constantly pick messages off the internal queue and work with messages in batches to achieve full throughput.
Instead, separate the producer from the consumer.
Create an actor that listens for the responses from your greeter.
And once that actor has processed your 1000000 messages, let it send a WorkCompleted message back to the producer.
[Edit]
I gave it a shot myself, I don't know F# so it might not be completely idiomatic :)
open Akka
open Akka.Actor
open Akka.FSharp

type Greet =
    | Greet of string
    | Hello of AsyncReplyChannel<bool>
    | Hello2

type Consume =
    | Response
    | SetSender

[<EntryPoint>]
let main argv =
    let system = System.create "MySystem" <| Configuration.load()

    let greeter = spawn system "greeter" <| fun mailbox ->
        let rec loop() = actor {
            let! msg = mailbox.Receive()
            let sender = mailbox.Sender()
            match msg with
            | Greet who -> () // printf "Hello, %s!\n" who
            | Hello2 -> sender.Tell(Response)
            | _ -> ()
            return! loop()
        }
        loop()

    let consumer = spawn system "consumer" <| fun mailbox ->
        let rec loop(count, sender: IActorRef) = actor {
            if count = 1000000 then sender.Tell(true)
            let! msg = mailbox.Receive()
            match msg with
            | Response -> return! loop(count + 1, sender)
            | SetSender -> return! loop(count, mailbox.Sender())
        }
        loop(0, null)

    let n = 1000000
    let t1 = System.Diagnostics.Stopwatch()
    t1.Start()

    for i = 1 to n do
        greeter.Tell(Hello2, consumer)

    let workdone = consumer.Ask SetSender
    workdone.Wait()

    printfn "elapsed Akka:%A" t1.ElapsedMilliseconds
    System.Console.ReadLine () |> ignore
    0
I updated your code to use a separate consumer for the actor responses, which then replies back once all responses have been processed.
By doing so, your processing time is now down to 650 ms on my machine.
If you want better throughput, you need to involve more actors to parallelize more.
I'm not sure if this helps in your specific scenario.
Here is a slightly modified MailboxProcessor version:
module MBPAsync =
    open System.Diagnostics

    type Greet =
        | Greet of string
        | Hello of AsyncReplyChannel<bool>

    let run n =
        let timer = Stopwatch.StartNew ()
        use greeter =
            MailboxProcessor.Start <| fun inbox -> async {
                while true do
                    let! msg = inbox.Receive()
                    match msg with
                    | Greet who -> () // printf "Hello, %s!\n" who
                    | Hello reply -> reply.Reply true
            }
        Async.RunSynchronously <| async {
            for i = 1 to n do
                do! Async.Ignore (greeter.PostAndAsyncReply Hello)
        }
        let elapsed = timer.Elapsed
        printfn "%A" elapsed
The difference here is that this version uses PostAndAsyncReply and keeps the computation in an async workflow. In my quick test this seemed to be much faster than using PostAndReply, but YMMV.
The timings I get from the above MBP version look roughly like this:
> MBPAsync.run 1000000 ;;
00:00:02.6883486
val it : unit = ()
A comment earlier mentioned my Hopac library. Here is an optimized version using Hopac:
module Hop =
    type Greet =
        | Greet of string
        | Hello of IVar<bool>

    let run n =
        let timer = Stopwatch.StartNew ()
        let greeterCh = ch ()
        do greeterCh >>= (function
                          | Greet who -> Job.unit ()
                          | Hello reply -> reply <-= true)
           |> Job.forever
           |> server
        (Job.forUpToIgnore 1 n <| fun _ ->
            let reply = ivar ()
            greeterCh <-- Hello reply >>.
            reply)
        |> run
        let elapsed = timer.Elapsed
        printfn "%A" elapsed
The timings I get from the above Hopac version look roughly like this:
> Hop.run 1000000 ;;
00:00:00.1088768
val it : unit = ()
I'm not an F# developer, but I'm a core dev on Akka.NET. A couple of ideas for your scenario:
If you're only using a single actor for this work, you can try using a PinnedDispatcher - that way the actor runs on its own dedicated thread all the time. That will save you on unnecessary context switching overhead.
You can also set the throughput of the mailbox to be much higher for this PinnedDispatcher than the normal settings. i.e. set a throughput value of 10000 (or something) instead of the normal 25. Assuming that the contents of your mailbox grow in large bursts this should save you on mailbox synchronization overhead.
Here's what your dispatcher configuration might look like:
my-pinned-dispatcher {
    type = PinnedDispatcher
    throughput = 1000 # your mileage may vary
}
And then configure an actor to use it
C# Fluent Interface
var myActor = myActorSystem.ActorOf(Props.Create<FooActor>()
    .WithDispatcher("my-pinned-dispatcher"));
Config
akka.actor.deployment {
    /greeter {
        dispatcher = my-pinned-dispatcher
    }
}
These are both options you can configure via HOCON in App.config or Web.config, or via the fluent interface on the Props class. Also worth noting: there's a bug with pinned dispatchers at the moment, but it should be fixed in our next maintenance release (v1.0.1), which should be out next week.
Your mileage may vary, but this is what I would try - basically it's just designed to help reduce contention and overhead around a single actor.
