Wait for any one of multiple events simultaneously in F# - events

In F# I know how to wait asynchronously for one event using Async.AwaitEvent:
let test = async {
    let! move = Async.AwaitEvent(form.MouseMove)
    ...handle move... }
Suppose I want to wait for either the MouseMove or the KeyDown event. I'd like to have something like this:
let! moveOrKeyDown = Async.AwaitEvent(form.MouseMove, form.KeyDown)
This function doesn't exist but is there another way to do this?

let ignoreEvent e = Event.map ignore e
let merged = Event.merge (ignoreEvent f.KeyDown) (ignoreEvent f.MouseMove)
Async.AwaitEvent merged
EDIT: another version that preserves original types
let merged = Event.merge (f.KeyDown |> Event.map Choice1Of2) (f.MouseMove |> Event.map Choice2Of2)
Async.AwaitEvent merged
EDIT 2: following the comments from Tomas Petricek:
let e1 = f.KeyDown |> Observable.map Choice1Of2
let e2 = f.MouseMove |> Observable.map Choice2Of2
let! evt = Observable.merge e1 e2 |> Async.AwaitObservable
The AwaitObservable primitive can be taken from here ('Reactive demos in Silverlight' by Tomas Petricek).
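For reference, a minimal sketch of what such a primitive can look like is below; this is an illustration under simplifying assumptions (in particular that the observable does not fire synchronously inside Subscribe), not the implementation from the download:
open System

type Async with
    /// Waits for the first value produced by the observable, then disposes
    /// the subscription so the attached handler does not leak.
    static member AwaitObservable (source: IObservable<'T>) : Async<'T> =
        Async.FromContinuations(fun (cont, econt, _) ->
            // The subscription is kept in a ref cell so the observer can
            // dispose it from inside its own callbacks.
            let subscription : IDisposable option ref = ref None
            let dispose () = !subscription |> Option.iter (fun d -> d.Dispose())
            let observer =
                { new IObserver<'T> with
                    member __.OnNext value = dispose (); cont value
                    member __.OnError error = dispose (); econt error
                    member __.OnCompleted () = () }
            subscription := Some (source.Subscribe observer))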

I used an implementation of the method that you use in your sample in a talk about reactive programming that I gave in London (there is a download link at the bottom of the page). If you're interested in this topic, you may find the talk useful as well :-).
The version I'm using takes IObservable instead of IEvent (so the name of the method is AwaitObservable). There are some serious memory leaks when using Event.merge (and other combinators from the Event module) together with AwaitEvent, so you should use Observable.merge etc. and AwaitObservable instead.
The problem is described in more detail here (see Section 3 for a clear example). Briefly: when you use Event.merge, it attaches a handler to the source event (e.g. MouseDown), but the handler is never removed after AwaitEvent finishes waiting. If you keep waiting in a loop written as an asynchronous workflow, you keep adding new handlers (which do nothing when they run).
A simple correct solution (based on what desco posted) would look like this:
let rec loop () = async {
    let e1 = f.KeyDown |> Observable.map Choice1Of2
    let e2 = f.MouseMove |> Observable.map Choice2Of2
    let! evt = Observable.merge e1 e2 |> Async.AwaitObservable
    // ...
    return! loop () } // Continue looping
BTW: You may also want to look at this article (based on chapter 16 from my book).

In the interest of understanding what's going on, I looked up the source code of Event.map, Event.merge, and Choice.
type Choice<'T1,'T2> =
    | Choice1Of2 of 'T1
    | Choice2Of2 of 'T2

[<CompiledName("Map")>]
let map f (w: IEvent<'Delegate,'T>) =
    let ev = new Event<_>()
    w.Add(fun x -> ev.Trigger(f x))
    ev.Publish

[<CompiledName("Merge")>]
let merge (w1: IEvent<'Del1,'T>) (w2: IEvent<'Del2,'T>) =
    let ev = new Event<_>()
    w1.Add(fun x -> ev.Trigger(x))
    w2.Add(fun x -> ev.Trigger(x))
    ev.Publish
This means our solution is creating 3 new events.
async {
    let merged =
        Event.merge
            (f.KeyDown |> Event.map Choice1Of2)
            (f.MouseMove |> Event.map Choice2Of2)
    let! move = Async.AwaitEvent merged
}
We could reduce this to one event by making a tightly coupled version of this library code.
type EventChoice<'T1, 'T2> =
    | EventChoice1Of2 of 'T1
    | EventChoice2Of2 of 'T2
    with
    static member CreateChoice (w1: IEvent<_,'T1>) (w2: IEvent<_,'T2>) =
        let ev = new Event<_>()
        w1.Add(fun x -> ev.Trigger(EventChoice1Of2 x))
        w2.Add(fun x -> ev.Trigger(EventChoice2Of2 x))
        ev.Publish
And here is our new code.
async {
    let merged = EventChoice.CreateChoice form.MouseMove form.KeyDown
    let! move = Async.AwaitEvent merged
}

You can use a combination of Event.map and Event.merge:
let eventOccurs e = e |> Event.map ignore
let mouseOrKey = Event.merge (eventOccurs frm.MouseMove) (eventOccurs frm.KeyDown)
Then you can use Async.AwaitEvent with this new event. If MouseMove and KeyDown had the same type, you could skip the Event.map step and just directly merge them.
EDIT
But as Tomas points out, you should use the Observable combinators in preference to the Event ones.
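For completeness, a small sketch of the Observable-based version (it relies on the AwaitObservable primitive referenced above, which is not part of the standard library):
open System.Windows.Forms

let frm = new Form()

// Merge the two events as IObservable values; handlers attached by the
// Observable combinators are detached when the subscription is disposed.
let mouseOrKey =
    Observable.merge
        (frm.MouseMove |> Observable.map ignore)
        (frm.KeyDown |> Observable.map ignore)

let waitForEither = async {
    // AwaitObservable is the primitive from the linked download, not a built-in.
    do! Async.AwaitObservable mouseOrKey
    printfn "mouse moved or key pressed" }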

Related

appending results to array after parallel processing

Some questions about elegance:
A. How do I add a list that was formed from parallel processing directly to a concurrent results collection in an elegant way?
let results = System.Collections.Concurrent.ConcurrentBag<string>()
let tasks = System.Collections.Generic.List<string>()
tasks.add("a")
tasks.add("b")
let answers =
    tasks
    |> Seq.map asyncRequest
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.toList
Array.append results answers
Attempt: Is there a way to append via the pipe operator?
let answers =
    tasks
    |> Seq.map asyncRequest
    |> Async.Parallel
    |> Async.RunSynchronously
    |> Array.append results
B. Is there a way to add items via the List constructor?
let tasks = System.Collections.Generic.List<string>()
tasks.add("a")
tasks.add("b")
C. Is there a way to construct a queue from an array using the Queue constructor?
let items: string[] = [| "a"; "b"; "c" |]
let jobs = System.Collections.Generic.Queue<string>()
items |> Array.map jobs.Enqueue |> ignore
A. You can't use Array.append on results, because results is a ConcurrentBag, while Array.append expects its arguments to be arrays. To add items to a ConcurrentBag, use its Add method. Add items one by one:
tasks
|> Seq.map asyncRequest
|> Async.Parallel
|> Async.RunSynchronously
|> Array.iter results.Add
Adding items one by one is a little inefficient. If your ConcurrentBag is really created right in the same function, as your example shows, you may consider using its constructor that takes an IEnumerable<T>:
let answers =
    tasks
    |> Seq.map asyncRequest
    |> Async.Parallel
    |> Async.RunSynchronously
let results = System.Collections.Concurrent.ConcurrentBag<string>(answers)
B. Yes, there is a way to add items to a System.Collections.Generic.List<T>. This class provides a handy Add method for this purpose:
tasks.Add "a"
tasks.Add "b"
Enclosing the argument in parentheses (as in your attempt) is not necessary, but allowed:
tasks.Add("a")
tasks.Add("b")
C. Yes, there is a way to construct a queue from an array. The Queue class has a constructor that takes an IEnumerable<T>, and arrays implement IEnumerable<T>, so you can call that constructor on an array:
let jobs = System.Collections.Generic.Queue<string>( items )
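A quick check in F# Interactive (reusing the values from the question) confirms the queue preserves the array order:
let items : string[] = [| "a"; "b"; "c" |]
let jobs = System.Collections.Generic.Queue<string>(items)
printfn "%s" (jobs.Dequeue())   // prints "a"
printfn "%d" jobs.Count         // prints 2: "b" and "c" are still queued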
Please note that you hardly needed my help to get any of the above information. Everything is freely available on MSDN (see links above) or from autocompletion/intellisense in your favorite code editor.

F# efficiency implications of passing large data structures between functions

How does F# pass data from a caller function to a called function? Does it make a copy of the data before handing it over or does it just pass a pointer? I would think the latter but want to make sure.
On a related note, are there any performance implications of the following two F# code styles?
let someFunction e =
    1 // pretend this is a complicated function

let someOtherFunction e =
    2 // pretend this is a complicated function

let foo f largeList =
    List.map (fun elem -> f elem) largeList

let bar largeList =
    largeList
    |> foo someFunction
    |> foo someOtherFunction

let bar2 largeList =
    let foo2 f f2 =
        largeList
        |> List.map (fun elem -> f elem)
        |> List.map (fun elem -> f2 elem)
    foo2 someFunction someOtherFunction
Would you expect bar to have a different performance to bar2? If not, are there any situations I should be aware of that would make a difference?
The short answer:
No. The entire list is not copied, just the reference to it is.
The long answer:
In F# (just like in C#) both value and reference types can be passed either by value or by reference.
Both value types and reference types are, by default, passed by value.
In the case of value types (structs) this means that you'll be passing around a copy of the entire data structure.
In the case of reference types (classes, discriminated unions, records, etc.) this means that the reference is passed by value. This does not mean that the entire data structure is copied; it just means that the reference (a pointer-sized value) which points to the data structure is copied.
If you're working with mutable data structures, e.g. ResizeArray<'T> (.NET List<'T>) which are classes, passing references by value could have implications. Perhaps the function you've passed it to adds elements to the list, for example? Such an update would apply to the data structure referenced from both locations. Since your question uses the immutable F# List though, you don't have to worry about this!
You can also pass value/reference types by reference, for more detail about that see: https://msdn.microsoft.com/en-us/library/dd233213.aspx#Anchor_4
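A quick way to convince yourself that the list itself is not copied on a call (an illustrative check, not part of the original answer):
// The list the callee receives is the very object the caller built; only the
// reference is copied, not the list cells.
let receivedSameList (original: int list) (received: int list) =
    obj.ReferenceEquals(original, received)

let large = List.init 1000000 id
printfn "%b" (receivedSameList large large)               // true: no copy on the call
printfn "%b" (receivedSameList large (List.map id large)) // false: map builds a new list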
An F# list is implemented as a singly linked list, which means that accessing the head and prepending an element are O(1) operations. These data structures are also very memory efficient, because when you prepend an element to a list you only need to store the new value and a reference to the rest of the list.
So that you can see how it works, such a data structure can be implemented like this:
type ExampleList<'T> =
    | Empty
    | Cons of 'T * ExampleList<'T>
Additional Information:
List.map is eagerly evaluated, meaning that every time you call it, a new list will be created. If you use Seq.map instead (the F# List implements the IEnumerable<'T> interface), which is lazily evaluated, you can evaluate both map operations in only one enumeration of the list.
largeList
|> Seq.map (fun elem -> f elem)
|> Seq.map (fun elem -> f2 elem)
|> List.ofSeq
This is likely to be a lot more efficient for large lists because it involves allocating only one new list of results, rather than two.
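Another option, offered as a suggestion beyond the original answer (it works here because the second stub accepts whatever the first one returns), is to compose the two functions and traverse the list only once:
let bar3 largeList =
    // One traversal and one result list; someFunction runs first and its
    // output is fed straight into someOtherFunction.
    largeList |> List.map (someFunction >> someOtherFunction)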

F#: Attempt to memoize member function resets cache on each call?

I'm trying to memoize a member function of a class, but every time the member is called (by another member) it makes a whole new cache and 'memoized' function.
member x.internal_dec_rates =
    let cache = new Dictionary< Basis*(DateTime option), float*float>()
    fun (basis:Basis) (tl:DateTime option) ->
        match cache.TryGetValue((basis,tl)) with
        | true, (sgl_mux, sgl_lps) -> (sgl_mux, sgl_lps)
        | _ ->
            let (sgl_mux, sgl_lps) =
                (* Bunch of stuff *)
            cache.Add((basis,tl),(sgl_mux,sgl_lps))
            sgl_mux,sgl_lps
I'm using Listing 10.5 in "Real World Functional Programming" as a model. I've tried using a memoization higher-order function and that doesn't help. The above listing has the memoization built in directly.
The problem is, when I call it e.g.
member x.px (basis:Basis) (tl: DateTime option) =
    let (q,l) = (x.internal_dec_rates basis tl)
    let (q2,l2) = (x.internal_dec_rates basis tl)
    (exp -q)*(1.-l)
execution goes to the 'let cache=...' line, defeating the whole point. I put in the (q2,l2) line in order to make sure it wasn't a scope problem, but it doesn't seem to be.
In fact I did a test using Petricek's code as a member function and that seems to have the same issue:
// Not a member function
let memo1 f =
    let cache = new Dictionary<_,_>()
    (fun x ->
        match cache.TryGetValue(x) with
        | true, v -> v
        | _ ->
            let v = f x
            cache.Add(x,v)
            v)

member x.factorial = memo1(fun y ->
    if (y<=0) then 1 else y*x.factorial(y-1))
Even the internal recursion of x.factorial seems to set up a new 'cache' for each level.
What am I doing wrong, and how can I make this work?
In response to your comment on Jack's answer, this doesn't have to become tedious. Given a memoize function:
let memoize f =
    let cache = Dictionary()
    fun x ->
        match cache.TryGetValue(x) with
        | true, v -> v
        | _ ->
            let v = f x
            cache.Add(x, v)
            v
Define each of your functions as let-bound values and return them from your methods:
type T() as x =
    let internalDecRates = memoize <| fun (basis: Basis, tl: DateTime option) ->
        (* compute result *)
        Unchecked.defaultof<float * float>
    let px = memoize <| fun (basis, tl) ->
        let (q,l) = x.InternalDecRates(basis, tl)
        let (q2,l2) = x.InternalDecRates(basis, tl)
        (exp -q)*(1.-l)
    member x.InternalDecRates = internalDecRates
    member x.Px = px
The only "boilerplate" is the let binding and call to memoize.
EDIT: As kvb noted, in F# 3.0 auto-properties allow a more concise solution:
type T() as x =
    member val InternalDecRates = memoize <| fun (basis: Basis, tl: DateTime option) ->
        (* compute result *)
        Unchecked.defaultof<float * float>
    member val Px = memoize <| fun (basis, tl) ->
        let (q,l) = x.InternalDecRates(basis, tl)
        let (q2,l2) = x.InternalDecRates(basis, tl)
        (exp -q)*(1.-l)
I see a lot of long answers here; the short answer is that
member x.P = code()
defines a property P which has a getter that runs code() every time P is accessed. You need to move the cache creation into the class's constructor, so that it will only run once.
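For example, here is a sketch reusing the memo1 helper from the question (the compiler emits a benign initialization-soundness warning for the self-reference): the memoized closure is created once, in a let binding that runs in the constructor, so every recursive call shares one cache:
type Calculator() as this =
    // Built once per instance: memo1 allocates its cache here, not on each access.
    let factorial =
        memo1 (fun y ->
            if y <= 0 then 1 else y * this.Factorial (y - 1))
    // The property getter just returns the existing closure.
    member x.Factorial = factorial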
As others have already said, this cannot be done just by defining a single member in F# 2.0. You need a separate field (a let-bound value), either for the cache or for a memoized local function.
As mentioned by kvb, in F# 3.0, you can do this using member val which is a property that is initialized when the object is created (and has an automatically generated backing field where the result is stored). Here is a complete sample that demonstrates this (it will work in Visual Studio 2012):
open System.Collections.Generic

type Test() =
    /// Property that is initialized when the object is created
    /// and stores a function value 'int -> int'
    member val Foo =
        // Initialize cache and return a function value
        let cache = Dictionary<int, int>()
        fun arg ->
            match cache.TryGetValue(arg) with
            | true, res -> res
            | false, _ ->
                let res = arg * arg
                printfn "calculating %d" arg
                cache.Add(arg, res)
                res
        // Part of the property declaration that instructs
        // the compiler to generate getter for the property
        with get
The with get part of the declaration can be omitted, but I include it here to make the sample clearer (you can also use with get, set to get a mutable property). Now you can call t.Foo as a function and it caches the values as required:
let t = Test()
t.Foo(10)
t.Foo(10)
The only problem with this approach is that t.Foo is actually compiled as a property that returns a function (instead of being compiled as a method). This is not a big problem when you use the class from F#, but it would be a problem if you were calling it from C# (because C# would see the member as a property of type FSharpFunc<int, int>, which is hard to use).
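If C# interop matters, one possible workaround (a sketch, not part of the answer above) is to keep the memoized closure private and expose an ordinary method that forwards to it:
open System.Collections.Generic

type Test2() =
    // Private closure created once per instance; the cache lives inside it.
    let foo =
        let cache = Dictionary<int, int>()
        fun arg ->
            match cache.TryGetValue(arg) with
            | true, res -> res
            | false, _ ->
                let res = arg * arg
                cache.Add(arg, res)
                res
    // Compiled as a normal instance method, so C# sees 'int Foo(int)' rather
    // than a property of type FSharpFunc<int, int>.
    member x.Foo (arg: int) = foo arg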
John is correct -- you need to move the cache dictionary into a private, let-bound member of the type.
Type members are compiled a bit differently than let-bound values in modules, which is the reason for the difference in behavior. If you copy/paste the body of your x.internal_dec_rates method and assign it to a let-bound value in a module, it should work correctly then, because the F# compiler will compile it as a closure which gets created once and then assigned to a static readonly field of the module.
A couple of other tips, for good measure:
Type member methods can use optional parameters -- so you can slightly simplify the method signature if you like.
You can create the cache key just once and reuse it (this also helps avoid mistakes).
You can simplify the (sgl_mux, sgl_lps) pattern-matching code by just assigning the tuple a name (e.g., value), since you're just returning the whole tuple anyway.
Here's my take on your code:
type FooBar () =
    let cache = new Dictionary< Basis*(DateTime option), float*float>()

    member x.internal_dec_rates (basis : Basis, ?tl : DateTime) =
        let key = basis, tl
        match cache.TryGetValue key with
        | true, value -> value
        | _ ->
            // sgl_mux, sgl_lps
            let value =
                (* Bunch of stuff *)
            cache.Add (key, value)
            value
You need to move the dictionary outside the function, like this:
let cache = new Dictionary< Basis*(DateTime option), float*float>()

member x.internal_dec_rates =
    fun (basis:Basis) (tl:DateTime option) ->
        match cache.TryGetValue((basis,tl)) with
        | true, (sgl_mux, sgl_lps) -> (sgl_mux, sgl_lps)
        | _ ->
            let (sgl_mux, sgl_lps) =
                (* Bunch of stuff *)
            cache.Add((basis,tl),(sgl_mux,sgl_lps))
            sgl_mux,sgl_lps
This way the cache persists across function calls. Your memo1 has the same problem: in the original version you create a new cache every time you call the function, whereas this way there is just a single cache that persists across calls.
In addition to the other answers, note that in F# 3.0 you can use automatically implemented properties, which will behave as you want:
member val internal_dec_rates = ...
Here, the right hand side is evaluated only once, but everything is self-contained.

OCaml event/channel tutorial?

I'm in OCaml.
I'm looking to simulate communicating nodes to look at how quickly messages propagate under different communication schemes etc.
The nodes can 1. send and 2. receive a fixed message. I guess the obvious thing to do is have each node as a separate thread.
Apparently you can get threads to pass messages to each other using the Event module and channels, but I can't find any examples of this. Can someone point me in the right direction or just give me a simple relevant example?
Thanks a lot.
Yes, you can use the Event module of OCaml. You can find an example of its use in the online O'Reilly book.
If you are going to attempt a simulation, then you will need a lot more control over your nodes than simply using threads will allow, or at least allow without major pains.
My subjective approach to the topic would be to create a simple, single-threaded virtual machine in order to keep full control over the simulation. The easiest way to do so in OCaml is to use a monad-like structure (as is done in Lwt, for instance):
(* A thread is a piece of code that can be executed to perform some
   side-effects and fork zero, one or more threads before returning.
   Some threads may block when waiting for an event to happen. *)
type thread = < run : thread list ; block : bool >

(* References can be used as communication channels out of the box (simply
   read and write values to them). To implement a blocking communication
   pattern, use these two primitives: *)
let write r x next = object (self)
  method block = !r <> None
  method run =
    if self # block then [self]
    else (r := Some x ; [next ()])
end

let read r next = object (self)
  method block = !r = None
  method run =
    match !r with
    | None -> [self]
    | Some x -> r := None ; [next x]
end
You can create better primitives that suit your needs, such as adding a "time required for transmitting" property in your channels.
The next step is defining a simulation engine.
(* The simulation engine can be implemented as a simple queue. It starts
   with a pre-defined set of threads and returns when no threads are left,
   or when all threads are blocking. *)
let simulate threads =
  let q = Queue.create () in
  let () = List.iter (fun t -> Queue.push t q) threads in
  let rec loop blocking =
    if Queue.is_empty q then `AllThreadsTerminated else
    if Queue.length q = blocking then `AllThreadsBlocked else
    let thread = Queue.pop q in
    if thread # block then (
      Queue.push thread q ;
      loop (blocking + 1)
    ) else (
      List.iter (fun t -> Queue.push t q) (thread # run) ;
      loop 0
    )
  in
  loop 0
Again, you may adjust the engine to keep track of which node is executing which thread, to keep per-node priorities in order to simulate one node being massively slower or faster than others, or to randomly pick a thread for execution on every step, and so on.
The last step is executing a simulation. Here, I'm going to have two threads sending random numbers back and forth.
let rec thread name input output =
  write output (Random.int 1024) (fun () ->
    read input (fun value ->
      Printf.printf "%s : %d" name value ;
      print_newline () ;
      thread name input output
    ))

let a = ref None and b = ref None

let _ = simulate [ thread "A -> B" a b ; thread "B -> A" b a ]
It sounds like you're thinking of John Reppy's Concurrent ML. There seems to be something similar for OCaml here.
The answer @Thomas has given is also valuable, but if you want to use this style of concurrent programming, I would recommend reading John Reppy's PhD thesis, which is extremely readable and gives a very clear treatment of the motivation behind CML along with some substantial examples of its use. If you aren't interested in the semantics, the document is still readable if you skip that part.

Scrambled-looking records in F# Interactive

When trying to print
pop
I get all this weird-looking formatting in F# Interactive, which basically renders the printing useless. Is there some other way to format this correctly?
The code is the following:
#light
open System
let rng = new Random()
type Individual = { x:int; y:int }
type ScoredIndividual = { individual:Individual; score:int }
let genGene() = rng.Next(-10, 10)
let genRandInd() = { x=genGene(); y=genGene() }
let genRandPop popSize = [ for _ in 1 .. popSize -> genRandInd() ]
let getScoredPop f pop = List.map (fun i -> { individual=i; score=(f i)}) pop
let fitnessFun ind = ind.x * ind.x + ind.y * ind.y
let pop = 30 |> genRandPop |> getScoredPop fitnessFun
You can override ToString or use StructuredFormatDisplayAttribute to customize the string representation. This article contains some useful information about customizing output in fsi.
You might want to use fsi.AddPrinter for your ScoredIndividual type to control what's written to the console.
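For example, a one-line printer (a sketch assuming the record definitions from the question):
// Print each ScoredIndividual on a single line instead of the default
// multi-line record layout.
fsi.AddPrinter (fun (si: ScoredIndividual) ->
    sprintf "(%d, %d) -> %d" si.individual.x si.individual.y si.score)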
That's pretty rough, and I couldn't find any "easy" way to fix it. However, FsEye can make it nicer (it does remove the newlines, though the padding spaces remain).
