Elm.js "lift" and Bacon.map: Are they functionally the same? - frp

I'm trying to understand Elm. I have a bit of experience with Bacon.js, and it seems to me that lift is, basically, Bacon.js's internal map() function renamed.
Is there more to it than that?

Sure, that's the same thing. The lift2..8 functions likewise correspond to what you'd do with Bacon.combineWith.
Signals in Elm (just like Properties in Bacon) are Functors and Applicative Functors, where the former allows you to lift a unary function to the realm of Signals (Elm: lift, Bacon: map, Rx: select), while the latter allows you to lift n-ary functions (Elm: lift2..8, Bacon: combineWith, Rx: combineLatest).
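For a concrete side-by-side, here's a minimal sketch assuming the Bacon.js map/combineWith API, with the old Signal-based Elm equivalents in comments:
// Bacon.js sketch; Elm (pre-0.17, Signal-based) equivalents in the comments.
const s1 = Bacon.constant(2);
const s2 = Bacon.constant(3);
const doubled = s1.map(x => x * 2);                     // Elm: lift (\x -> x * 2) s1
const sum = Bacon.combineWith((a, b) => a + b, s1, s2); // Elm: lift2 (+) s1 s2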

Related

Understanding Js.Promise.resolve(. ) dot syntax in ReasonML

I'm trying to understand the docs:
https://reasonml.github.io/docs/en/promise
In the usage section there is:
let myPromise = Js.Promise.make((~resolve, ~reject) => resolve(. 2));
Why is there a dot before the 2? What does it mean and what does it do?
(. ) as used here, in function application, means the function should be called with an uncurried calling convention.
When used in a function type, like the type of resolve here, (. 'a) => unit, it means the function is uncurried.
Ok, so what the hell does that mean? Wweeellll, that's a bit of a story. Here goes:
What is uncurrying?
Uncurrying is the opposite of currying, so let's explain that first and then compare.
Currying is the process of transforming a function that takes multiple arguments into a series of functions that each take exactly one argument and return either the final return value or a function taking the next argument. In Reason/OCaml this is done automatically for us, which is why OCaml function types have arrows between their arguments (e.g. 'a -> 'b -> 'ret). You can write function types this way in Reason too ('a => 'b => 'ret), but by default the syntax hides it (('a, 'b) => 'ret), which is well-intentioned but can also make it harder to understand why functions behave unexpectedly in some circumstances.
In uncurried languages supporting first-class functions you can also curry functions manually. Let's look at an example in ES6. This is a normal "uncurried" ES6 function:
let add = (a, b) => a + b;
and this is its curried form:
let add = a => b => a + b;
and with parentheses to emphasize the separate functions:
let add = a => (b => a + b);
The first function takes the argument a, then returns a function (that closes over a) which takes the argument b and then computes the final return value.
This is cool because we can easily partially apply the a argument without using bind, but it's a bit inconvenient to apply all arguments to it at once, since we have to call each function individually:
let result = add(2)(3);
So not only does Reason/OCaml automatically curry functions upon creation, but it also provides a calling convention that lets us conveniently apply multiple arguments too.
And this all works great! ...as long as every function is curried. But then we want to hang out and talk with JavaScript, where most functions are not (but see Ramda for one notable exception). To be able to call uncurried JavaScript functions we need an uncurried calling convention, and to be able to create functions that can be called as expected from JavaScript we need an uncurried function type.
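To make the distinction concrete, here's a plain ES6 sketch (hypothetical names) of the two calling conventions and what happens when they're mixed:
// Curried: one argument per call; uncurried: all arguments at once.
const curriedAdd = a => b => a + b;
const uncurriedAdd = (a, b) => a + b;
curriedAdd(2)(3);   // 5
uncurriedAdd(2, 3); // 5
// Mixing the conventions silently misbehaves:
curriedAdd(2, 3);   // returns b => 2 + b; the 3 is ignored
uncurriedAdd(2);    // NaN, because b is undefined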
Why does resolve need to be uncurried?
An even better question might be "why aren't all external functions uncurried?" The answer is that they actually are, but the type and calling convention can often both be inferred at compile-time. And if not, it can often be "reflected" at runtime by inspecting the function value, at a small (but quickly compounding) cost to performance. The exceptions are where it gets a bit muddy, since the documentation doesn't explain exactly when explicit uncurrying is required for correct functioning and when it isn't required but can be beneficial for performance. In addition, there are actually two annotations for uncurried functions: one which can infer the calling convention, and one which requires it to be explicit, as is the case here. But this is what I've gathered.
Let's look at the full signature for Js.Promise.make, which is interesting because it includes three kinds of uncurried functions:
[@bs.new]
external make :
([@bs.uncurry] (
(~resolve: (. 'a) => unit,
~reject: (. exn) => unit) => unit)) => t('a) = "Promise";
Or in OCaml syntax, which I find significantly more readable in this case:
external make : (resolve:('a -> unit [@bs]) ->
reject:(exn -> unit [@bs]) -> unit [@bs.uncurry]) -> 'a t = "Promise" [@@bs.new]
The first kind of function is make itself, which is an external and can be inferred to be uncurried because all externals are of course implemented in JavaScript.
The second kind of function is the callback we'll create and pass to make. This must be uncurried because it's called from JavaScript with an uncurried calling convention. But since functions we create are curried by default, [#bs.uncurry] is used here to specify both that it expects an uncurried function, and that it should be uncurried automatically.
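For reference, here's what happens on the JavaScript side: the native Promise constructor invokes the executor with both callbacks at once, in JavaScript's uncurried convention:
// Plain JavaScript: the constructor calls executor(resolve, reject),
// passing all arguments at once, i.e. uncurried.
new Promise((resolve, reject) => {
  resolve(2);
});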
The third kind of function is resolve and reject, which are callback functions passed back from JavaScript and thus uncurried. But these are also 1-ary functions, where you'd think the curried and uncurried form should be exactly the same. And with ordinary monomorphic functions you'd be right, but unfortunately resolve is polymorphic, which creates some problems.
If the return type had been polymorphic, the function might actually not be 1-ary in curried form, since the return value could itself be a function taking another argument, which could return another function and so on. That's one downside of currying. But fortunately it isn't, so we know it's 1-ary.
I think the problem is even more subtle than that. It might arise because we need to be able to represent 0-ary uncurried functions using curried function types, which are all 1-ary. How do we do that? Well, if you were to implement an equivalent function in Reason/OCaml, you'd use unit as the argument type, so let's just do that. But now, if you have a polymorphic function, it might be 0-ary if it's monomorphized as unit and 1-ary otherwise. And I suppose calling a 0-ary function with one argument has been deemed unsound in some way.
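Here's a hypothetical plain-JS sketch of that ambiguity (illustration only, not actual compiler output; all names made up):
// In a curried world every function takes exactly one argument,
// so a 0-ary function has to be modeled as taking unit:
const printInt = n => console.log(n);     // 'a = int:  genuinely 1-ary
const beep = unit => console.log("beep"); // 'a = unit: models a 0-ary function
// But a native JS 0-ary callback takes no arguments at all:
const nativeBeep = () => console.log("beep");
// Given only the polymorphic type 'a => unit, the compiler can't tell whether
// the JS side expects f(x) or f(); the uncurried (. 'a) => unit pins the
// arity to exactly one argument.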
But why, then, does reject need to be uncurried, when it's not polymorphic?
Well... my best guess is that it's just for consistency.
For more info, see the manual (but note that it confuses currying with partial application).

fat arrow in Idris

I hope this question is appropriate for this site; it's just about the choice of concrete syntax in Idris compared to Haskell, since the two are very similar. I guess it's not that important, but I'm very curious about it. Idris uses => in some cases where Haskell uses ->. So far I've seen that Idris only uses -> in function types, and => for other things like lambdas and case _ of. Did this choice come from realizing that it's useful in practice to have a clear syntactic distinction between these use cases? Or is it just an arbitrary cosmetic choice, and am I overthinking it?
Well, in Haskell, type signatures and values are in different namespaces, so something defined in one is at no risk of clashing with something in the other. In Idris, types and values occupy the same namespace, which is why you don't see e.g. data Foo = Foo as you would in Haskell, but rather data Foo = MkFoo: the type is called Foo and the constructor is called MkFoo, since the name Foo is already bound to a value (the type Foo itself). For example, data Pair = MkPair (see http://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#tuples).
So it's probably for the best that Idris didn't conflate the arrow used to construct the type of functions with the arrow used for lambdas - those are rather different things. You can still combine them, e.g. the (Int -> Int) (\x => x).
I think it is because they interpret the -> symbol differently.
From Wikipedia:
A => B means if A is true then B is also true; if A is false then nothing is said about B
which seems right for case expressions, and
-> may mean the same as =>, or it may have the meaning for functions given below
which is
f: X -> Y means the function f maps the set X into the set Y
So my guess is that Idris just uses -> for the narrow second meaning, i.e. for mapping one type to another in type signatures, whereas Haskell uses the broader interpretation, where it means the same as =>.

Call by name vs normal order

I know this topic has been discussed several times, but there is something still unclear to me.
I've read this question applicative-order/call-by-value and normal-order/call-by-name differences and there is something I would like to clarify once and for all:
Call-by-name
As normal order, but no reductions are performed inside abstractions. For example λx.(λx.x)x is in normal form according to this strategy, although it contains the redex (λx.x)x.
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)? And so on the other side, if I apply the normal order, what would be the result?
In call by name, the expression λx.(λx.x)x is said to be in normal form; is this because "(λx.x)x" is considered to be the body (since the scope of λ extends as far as possible to the right)?
Yes, you are right.
And so on the other side, if I apply the normal order, what would be the result?
You do reduction inside the body: (λx.x)x -> x, so the whole thing reduces to the identity function:
λx.(λx.x)x -> λx.x
To clarify it a bit further, let me do this one more time, renaming the variables to conform with the Barendregt variable convention: λx.(λx.x)x =α λx.(λy.y)x:
λx.(λy.y)x -> λx.[y := x](y) = λx.x
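To see where the two strategies actually diverge, consider a term whose outermost reduction yields an abstraction that still contains a redex:
(λz.z)(λx.(λy.y)x) -> λx.(λy.y)x (both strategies reduce the outer redex)
λx.(λy.y)x -> λx.x (normal order only; call by name stops here, since it never reduces inside abstractions)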

Understanding immutable composite types with fields of mutable types in Julia

Initial note: I'm working in Julia, but this question probably applies to many languages.
Setup: I have a composite type as follows:
type MyType
x::Vector{String}
end
I write some methods to act on MyType. For example, I write a method that allows me to insert a new element in x, e.g. function insert!(d::MyType, itemToInsert::String).
Question: Should MyType be mutable or immutable?
My understanding: I've read the Julia docs on this, as well as more general (and highly upvoted) questions on Stackoverflow (e.g. here or here), but I still don't really have a good handle on what it means to be mutable/immutable from a practical perspective (especially for the case of an immutable composite type, containing a mutable array of immutable types!)
Nonetheless, here is my attempt: If MyType is immutable, then the field x must always point to the same object. That object itself (a vector of Strings) is mutable, so it is perfectly okay for me to insert new elements into it. What I am not allowed to do is alter MyType so that the field x points to an entirely different object. For example, given an instance d = MyType(["a"]), methods that do the following are okay:
d.x[1] = "NewValue"
push!(d.x, "NewElementToAdd")
But methods that do the following are not okay:
d.x = ["a", "different", "string", "array"]
Is this right? Also, is the idea that the objects an immutable type's fields are locked to are those created within the constructor?
Final Point: I apologise if this appears to duplicate other questions on SO. As stated, I have looked through them and wasn't able to get the understanding that I was after.
So here is something mind-bending to consider (at least to me):
julia> immutable Foo
data::Vector{Float64}
end
julia> x = Foo([1.0, 2.0, 4.0])
Foo([1.0,2.0,4.0])
julia> append!(x.data, x.data); pointer(x.data)
Ptr{Float64} @0x00007ffbc3332018
julia> append!(x.data, x.data); pointer(x.data)
Ptr{Float64} @0x00007ffbc296ac28
julia> append!(x.data, x.data); pointer(x.data)
Ptr{Float64} @0x00007ffbc34809d8
So the data address is actually changing as the vector grows and needs to be reallocated! But - you can't change data yourself, as you point out.
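Since you note the question applies to many languages, here's a rough JavaScript analogy (an illustration only, not Julia semantics) of a fixed field binding over a mutable object:
// Rough JavaScript analogy, not Julia semantics.
const x = { data: [1.0, 2.0, 4.0] };
Object.freeze(x);   // fields can no longer be rebound (like an immutable type)
x.data.push(8.0);   // OK: mutating the array the field points to
x.data[0] = 99.0;   // OK: the array itself is still mutable
x.data = [1.0];     // no rebinding: silently ignored (TypeError in strict mode)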
I'm not sure there really is a 100% right answer. I primarily use immutable for simple types like the Complex example in the docs in some performance-critical situations, and I do it for "defensive programming" reasons, e.g. the code has no need to write to the fields of this type, so I make it an error to do so. They are a good choice IMO whenever the type is a sort of extension of a number, e.g. Complex or RGBColor, and I use them in place of tuples, as a kind of named tuple (tuples don't seem to perform well with Julia right now anyway, whereas immutable types perform excellently).

Variable Naming Conventions - var1, var2 or firstVar, secondVar?

In cases where you have two variables of the same type that play a similar role, for example a merge function of two arrays:
IntArray merge(IntArray array1, IntArray array2);
What do you think is the best (most readable, least error-prone) way to name the variables? var1, var2 or firstVar, secondVar? Or maybe a different way?
If you express your opinion, I would be glad to hear the rationale behind it, especially regarding which is less error-prone.
Thanks in advance.
For C/C++, I think the following would be preferred:
IntArray merge(IntArray a, IntArray b);
Because it's simple, short and avoids using integers in the variable names.
Or the 'operator form':
IntArray merge(IntArray lhs, IntArray rhs);
(for left-hand side and right-hand side)
(alike:
IntArray operator+(IntArray lhs, IntArray rhs);
)
The 'numeric' variants and camelCase are more like ECMAScript than C. But it's mostly a 'standard' created by people who simply used C/C++ and named things that way (see the plus template in the C++ standard library for an example).
But you can't get much rationale behind it, because it's simply a matter of taste. As long as the variables are equally important (i.e. merge(a, b) == merge(b, a)), it's just a question of how they will be referred to from the implementation code.
For clarity, I use the purpose of the variable as its name instead of a number. It instantly tells me that the variable count is finite. For example:
function predicate(int subject, int object)
function merge_array(array firstArray, array secondArray)
Using (array1, array2) sounds wrong to me because it suggests there could be an array3, array4, and so on, while we know the parameter count is fixed.
The arguments of a function should always say what they are and what they should be used for. If there's no difference between the two arrays (such as one being the source and the other the destination), maybe you should think about passing a list of arrays (named arraysToMerge or something like that).
A variable's name has to explain the variable's purpose, always. Because code is (or at least should be) our real documentation.
In your case I would use something along the lines of var and anotherVar.
Such as:
IntArray merge(IntArray array, IntArray anotherArray);
Variables and parameters should be self-explanatory. I think the situation is described more clearly this way.
