Understanding Js.Promise.resolve(. ) dot syntax in ReasonML - promise

I'm trying to understand the docs:
https://reasonml.github.io/docs/en/promise
In the usage section there is:
let myPromise = Js.Promise.make((~resolve, ~reject) => resolve(. 2));
Why is there a dot before the 2? What does it mean and what does it do?

(. ) as used here, in function application, means the function should be called with an uncurried calling convention.
When used in a function type, like the type of resolve here, (. 'a) => unit, it means the function is uncurried.
Ok, so what the hell does that mean? Wweeellll, that's a bit of a story. Here goes:
What is uncurrying?
Uncurrying is the opposite of currying, so let's explain that first and then compare.
Currying is the process of transforming a function that takes multiple arguments into a series of functions that each take exactly one argument and return either the final return value or a function taking the next argument. In Reason/OCaml this is done automatically for us, and it is why OCaml function types have arrows between the arguments (e.g. 'a -> 'b -> 'ret). You can write function types this way in Reason too ('a => 'b => 'ret), but by default the syntax hides it (('a, 'b) => 'ret), which is well-meant but might also make it more difficult to understand why functions behave unexpectedly in some circumstances.
In uncurried languages supporting first-class functions you can also curry functions manually. Let's look at an example in ES6. This is a normal "uncurried" ES6 function:
let add = (a, b) => a + b;
and this is its curried form:
let add = a => b => a + b;
and with parentheses to emphasize the separate functions:
let add = a => (b => a + b);
The first function takes the argument a, then returns a function (that closes over a) which takes the argument b and then computes the final return value.
This is cool because we can easily partially apply the a argument without using bind, but it's a bit inconvenient to apply all arguments to it at once, since we have to call each function individually:
let result = add(2)(3);
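Partial application, on the other hand, is as easy as a single call (continuing from the curried add above):
let addTwo = add(2);    // b => 2 + b, with a fixed to 2
let five = addTwo(3);   // 5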
So not only does Reason/OCaml automatically curry functions upon creation, but it also provides a calling convention that lets us conveniently apply multiple arguments too.
And this all works great! ...as long as every function is curried. But then we want to hang out and talk with JavaScript, where most functions are not (but see Ramda for one notable exception). To be able to call uncurried JavaScript functions we need an uncurried calling convention, and to be able to create functions that can be called as expected from JavaScript we need an uncurried function type.
Why does resolve need to be uncurried?
An even better question might be "why aren't all external functions uncurried?" The answer is that they actually are, but the type and calling convention can often both be inferred at compile time. And if not, it can often be "reflected" at runtime by inspecting the function value, at a small (but quickly compounding) cost to performance. The exceptions are where it gets a bit muddy, since the documentation doesn't explain exactly when explicit uncurrying is required for correct functioning, and when it isn't required but can be beneficial for performance reasons. In addition, there are actually two annotations for uncurried functions: one which can infer the calling convention, and one which requires it to be explicit, as is the case here. This, at least, is what I've gathered.
Let's look at the full signature for Js.Promise.make, which is interesting because it includes three kinds of uncurried functions:
[@bs.new]
external make :
  ([@bs.uncurry] (
    (~resolve: (. 'a) => unit,
     ~reject: (. exn) => unit) => unit)) => t('a) = "Promise";
Or in OCaml syntax, which I find significantly more readable in this case:
external make : (resolve:('a -> unit [@bs]) ->
                 reject:(exn -> unit [@bs]) -> unit [@bs.uncurry]) -> 'a t = "Promise" [@@bs.new]
The first kind of function is make itself, which is an external and can be inferred to be uncurried because all externals are of course implemented in JavaScript.
The second kind of function is the callback we'll create and pass to make. This must be uncurried because it's called from JavaScript with an uncurried calling convention. But since functions we create are curried by default, [#bs.uncurry] is used here to specify both that it expects an uncurried function, and that it should be uncurried automatically.
The third kind of function is resolve and reject, which are callback functions passed back from JavaScript and thus uncurried. But these are also 1-ary functions, where you'd think the curried and uncurried form should be exactly the same. And with ordinary monomorphic functions you'd be right, but unfortunately resolve is polymorphic, which creates some problems.
If the return type had been polymorphic, the function might actually not be 1-ary in curried form, since the return value could itself be a function taking another argument, which could return another function and so on. That's one downside of currying. But fortunately it isn't, so we know it's 1-ary.
I think the problem is even more subtle than that. It might arise because we need to be able to represent 0-ary uncurried functions using curried function types, which are all 1-ary. How do we do that? Well, if you were to implement an equivalent function in Reason/OCaml, you'd use unit as the argument type, so let's just do that. But now, if you have a polymorphic function, it might be 0-ary if it's monomorphized as unit and 1-ary otherwise. And I suppose calling a 0-ary function with one argument has been deemed unsound in some way.
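To make the arity mismatch concrete, here's a hedged sketch in plain JavaScript (the names are made up purely for illustration):
// Suppose resolve were curried. Curried functions take exactly one argument,
// so a "0-ary" function has to accept the unit value as a placeholder argument.
const resolveUnit = unitValue => { /* settle the promise */ };
// A JavaScript caller would naturally pass no arguments at all:
resolveUnit();  // 0-ary call; unitValue ends up undefined
// ...while curried code always passes exactly one argument:
resolveUnit(0); // 1-ary call carrying the unit placeholder
// The two sides disagree on arity, so the compiler can't safely pick one
// representation for a polymorphic resolve.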
But why, then, does reject need to be uncurried, when it's not polymorphic?
Well... my best guess is that it's just for consistency.
For more info, see the manual (but note that it confuses currying with partial application).

Related

Is there a performance penalty to using a pipeline operator ponyfill instead of .pipe in RxJS

I use a pipeline operator ponyfill, which is just a utility function applyPipe such that applyPipe(x, a, b) is equivalent to b(a(x)) or x |> a |> b (in this example there are 2 functions, but actually it can be any number of functions). In fp-ts this function is called pipe.
In my case the function is implemented as
export const applyPipe = (
  source,
  ...project
) => {
  for (const el of project) {
    source = el(source);
  }
  return source;
};
(you could also implement it with .reduce).
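For reference, a .reduce version might look like this (same behavior, just folded into one expression):
export const applyPipe = (source, ...project) =>
  project.reduce((acc, fn) => fn(acc), source);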
This function can be used to compose observable operators, so applyPipe(timer(500), delay(500)) is equivalent to timer(500).pipe(delay(500)). The question is, is there a performance penalty to using such function in place of the .pipe method?
Theoretically, I see no major issue or performance downgrade in doing so (other than adding an extra step by using your function and copying the observable/object reference to process it in the function). You will just copy the observable reference (not the emissions of the observable), so it shouldn't be a big deal performance-wise.
Also, ponyfills/polyfills are generally expected to be either equal to or worse than the actual implementation in terms of performance. Just keep in mind that the spread operator will copy only the properties of the object (not any nested property).
I would leave a comment in that ponyfill function to make it easier to understand for other developers who work with your codebase.

fat arrow in Idris

I hope this question is appropriate for this site; it's just about the choice of concrete syntax in Idris compared to Haskell, since the two are very similar. I guess it's not that important, but I'm very curious about it. Idris uses => for some cases where Haskell uses ->. So far I've seen that Idris only uses -> in function types, and => for other things like lambdas and case _ of. Did this choice come from realizing that it's useful in practice to have a clear syntactic distinction between these use cases? Or is it just an arbitrary cosmetic choice and I'm overthinking it?
Well, in Haskell, type signatures and values are in different namespaces, so something defined in one is at no risk of clashing with something in the other. In Idris, types and values occupy the same namespace, which is why you don't see e.g. data Foo = Foo as you would in Haskell, but rather data Foo = MkFoo: the type is called Foo, and the constructor is called MkFoo, as there is already a value (the type Foo) bound to the name Foo. Compare e.g. data Pair = MkPair (http://docs.idris-lang.org/en/latest/tutorial/typesfuns.html#tuples).
So it's probably for the best that it didn't try to use the same arrow both for constructing the type of functions and for lambdas; those are rather different things. You can still combine the two, e.g. the (Int -> Int) (\x => x), using Idris's type-ascription function the.
I think it is because they interpret the -> symbol differently.
From Wikipedia:
A => B means if A is true then B is also true; if A is false then nothing is said about B
which seems right for case expressions, and
-> may mean the same as =>, or it may have the meaning for functions given below
which is
f: X -> Y means the function f maps the set X into the set Y
So my guess is that Idris just uses -> for the narrow second meaning, i.e. for mapping one type to another in type signatures, whereas Haskell uses the broader interpretation, where it means the same as =>.

Difference between multiple values and plain tuples in Racket?

What is the difference between values and list or cons in Racket or Scheme? When is it better to use one over the other? For example, what would be the disadvantage if quotient/remainder returned (cons _ _) rather than (values _ _)?
Back in 2002 George Caswell asked that question in comp.lang.scheme.
The ensuing thread is long, but has many insights. The discussion
reveals that opinions are divided.
https://groups.google.com/d/msg/comp.lang.scheme/ruhDvI9utVc/786ztruIUNYJ
My answer back then:
> What are the motivations behind Scheme's multiple return values feature?
> Is it meant to reflect the difference in intent, or is there a
> runtime-practical reason?
I imagine the reason being this.
Let's say that f is called by g, and g needs several values from f.
Without multiple value return, f packs the values in a list (or vector),
which is passed to g. g then immediately unpacks the list.
With multiple values, the values are just pushed on the stack. Thus no
packing and unpacking is done.
Whether this should be called an optimization hack or not, is up to you.
--
Jens Axel Søgaard
We don't need no side-effecting
We don't need no flow control
No global variables for execution
Hey! did you leave the args alone?

We don't need no allocation
We don't need no special-nodes
No dark bit-flipping for debugging
Hey! did you leave those bits alone?

(Chorus) -- "Another Glitch in the Call", a la Pink Floyd
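To make the packing/unpacking overhead concrete, here is the same idea sketched in JavaScript (an analogy only, since JavaScript has no multiple values; quotientRemainder is a made-up name):
// Without multiple values: the callee allocates a container,
// and the caller immediately takes it apart again.
const quotientRemainder = (n, d) => [Math.trunc(n / d), n % d]; // allocates an array
const [q, r] = quotientRemainder(7, 2); // q = 3, r = 1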
They are semantically the same in Scheme and Racket: in both, you need to know what the return looks like in order to use it.
values is connected to call-with-values, and special forms like let-values are just syntactic sugar over this procedure call. The user needs to know the shape of the result in order to make use of it with call-with-values. A return is often done on a stack, and a call is also done on a stack; the only reason to favor values in Scheme would be that there is no overhead between the producer's return and the consumer's call.
With cons (or list) the user needs to know what the data structure of the return looks like. As with values, you can use apply instead of call-with-values to do the same thing, and as a replacement for let-values (and more) it's easy to make a destructuring-bind macro.
In Common Lisp it's quite different. You can always use values when you have more information to give, and the user can still call the procedure normally if she only wants the first value. Thus in CL you wouldn't need to supply quotient as a separate variant, since quotient/remainder would work just as well. Only when you use special forms or procedures that consume multiple values does the fact that the procedure returns more values come into play, in the same way as in Scheme. This makes values a better choice in CL than in Scheme, since you get away with writing one procedure instead of several.
In CL you can access a hash like this:
(gethash 'key *hash* 't)
; ==> T; NIL
If you don't use the second value returned, you don't know whether T was the default value or the actual value found. Here the second value indicates whether the key was found in the hash. Often you don't use that value: if you know the hash contains only numbers, a default value of T would already be an indication that the key was not found. In Racket:
(hash-ref hash 'key #t)
; ==> #t
In Racket, failure-result can be a thunk, so you get by, but I bet it would return multiple values instead if values worked as in CL. I assume there is more housekeeping with the CL version, and Scheme, being a minimalistic language, perhaps didn't want to give the implementors the extra work.
Edit: Missed Alexis' comment on the same topic before posting this
One oft-overlooked practical advantage of using multiple return values over lists is that Racket's compose "just works" with functions that return multiple values:
(define (hello-goodbye name)
  (values (format "Hello ~a! " name)
          (format "Goodbye ~a." name)))
(define short-conversation (compose string-append hello-goodbye))
> (short-conversation "John")
"Hello John! Goodbye John."
The function produced by compose will pass the two values returned by hello-goodbye as two arguments to string-append. If you're writing code in a functional style with lots of compositions, this is very handy, and it's much more natural than explicitly passing values around yourself with call-with-values and the like.
It's also related to your programming style. If you use values, then it usually means you want to explicitly return n values. Using cons, list or vector usually means you want to return one value which contains something.
There are always pros and cons. For values: it may use less memory on some implementations. The caller needs to use let-values or other multiple-values-specific syntax. (I wish I could use plain let like in CL.)
For cons or other types: you can use let or lambda to receive the returned value, but you need to explicitly deconstruct it with car or other procedures to get the value you want.
Which to use and when? Again, it depends on your programming style and on the case at hand, but if the returned values can't be represented as one object (e.g. quotient and remainder), then it might be better to use values to make the procedure's meaning clearer. If the returned value is one object (e.g. name and age for a person), then it might be better to use cons or another constructor (e.g. a record).

Purity of Memoized Functions in D

Are there any clever ways of preserving purity when memoizing functions in D?
I want this when caching SHA1-calculations of large datasets kept in RAM.
Short answer: Pick memoization or purity. Don't try and have both.
Long answer: I don't see how it would be possible to preserve purity with memoization unless you used casts to lie to the compiler and claim that a function is pure when it isn't, because in order to memoize, you have to store the arguments and the result, which breaks purity, since the number one guarantee of pure functions is that they don't access mutable global or static variables (which is the only way that you'd be able to memoize anything).
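To see why the two are at odds, here is a minimal memoizer sketched in JavaScript rather than D (just to make the shape of the problem visible; memoize is a made-up helper):
const memoize = fn => {
  const cache = new Map(); // mutable state that outlives every call
  return arg => {
    if (!cache.has(arg)) cache.set(arg, fn(arg)); // writes to shared state
    return cache.get(arg);
  };
};
Whatever the language, that cache is exactly the kind of mutable state that D's pure attribute forbids a function to touch.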
So, if you did something like
alias pure nothrow Foo function() FuncType;
auto result = (cast(FuncType)&theFunc)();
then you can treat theFunc as if it were pure when it isn't, but then it's up to you to ensure that the function acts pure from the outside - including dealing with the fact that the compiler thinks that it can change the mutability of the return type of a strongly pure function which returns a mutable type. For instance, this code will compile just fine
char[] makeString(size_t len) pure
{
    return new char[](len);
}

void main()
{
    char[] a = makeString(5);
    const(char)[] b = makeString(5);
    const(char[]) c = makeString(5);
    immutable(char)[] d = makeString(5);
    immutable(char[]) e = makeString(5);
}
even though the return type is always mutable. And that's because the compiler knows that makeString is strongly pure and returns a value which could not have been passed to it - so it's guaranteed to be a new value every time - and therefore changing the mutability of the return type to const or immutable doesn't violate the type system.
If you were to do something inside of makeString that involved casting a function to pure when it violated the guarantee that makeString always returned a new value, then you'd have broken the type system, and you'd be risking having very buggy code depending on what you did with the values returned from makeString.
The only way that I'm aware of getting purity when you don't have it is to cast a function pointer so that it's pure, but if you do that, then you must fully understand what guarantees a pure function makes and what the compiler thinks that it can do with it so that you fully mimic that behavior. That's easier if you're returning immutable data or a value type, because then you don't have the issue of the compiler changing the mutability of the return type, but it's still very tricky business.
So, if you're thinking about casting something to pure, think again. Yes, it's possible to do some stuff that way that you couldn't otherwise, but it's very risky. Personally, I'd advise that you decide whether purity matters more to you or memoization matters more to you and that you drop the other. Anything else is highly risky.
What D allows to express within the type system is an impure function that memoizes a pure one.
Conceptually the memoizer is also pure, but the type system is not sufficiently expressive to allow that. You'd need to cheat somewhere.

scala coalesces multiple function call parameters into a Tuple -- can this be disabled?

This is a troublesome violation of type safety in my project, so I'm looking for a way to disable it. It seems that if a function takes an AnyRef (or a java.lang.Object), you can call the function with any combination of parameters, and Scala will coalesce the parameters into a Tuple object and invoke the function.
In my case the function isn't expecting a Tuple, and fails at runtime. I would expect this situation to be caught at compile time.
object WhyTuple {
  def main(args: Array[String]): Unit = {
    fooIt("foo", "bar")
  }

  def fooIt(o: AnyRef) {
    println(o.toString)
  }
}
Output:
(foo,bar)
No implicits or Predef at play here at all -- just good old fashioned compiler magic. You can find it in the type checker. I can't locate it in the spec right now.
If you're motivated enough, you could add a -X option to the compiler to prevent this.
Alternatively, you could avoid writing arity-1 methods that accept a supertype of TupleN.
What about something like this:
object Qx2 {
  @deprecated def callingWithATupleProducesAWarning(a: Product) = 2
  def callingWithATupleProducesAWarning(a: Any) = 3
}
Tuples have the Product trait, so any call to callingWithATupleProducesAWarning that passes a tuple will produce a deprecation warning.
Edit: According to people better informed than me, the following answer is actually wrong: see this answer. Thanks Aaron Novstrup for pointing this out.
This is actually a quirk of the parser, not of the type system or the compiler. Scala allows zero- or one-arg functions to be invoked without parentheses, but not functions with more than one argument. So as Fred Haslam says, what you've written isn't an invocation with two arguments, it's an invocation with one tuple-valued argument. However, if the method did take two arguments, the invocation would be a two-arg invocation. It seems like the meaning of the code affects how it parses (which is a bit suckful).
As for what you can actually do about this, that's tricky. If the method really did require two arguments, this problem would go away (i.e. if someone then mistakenly tried to call it with one argument or with three, they'd get a compile error as you expect). Don't suppose there's some extra parameter you've been putting off adding to that method? :)
The compiler is capable of interpreting method calls without round brackets. So it takes the round brackets in the fooIt call to mean a Tuple. Your call is the same as:
fooIt( ("foo","bar") )
That being said, you can make the method rule out such calls, and still retrieve the value, by using a wrapper like Some(AnyRef) or Tuple1(AnyRef).
I think the definition of (x, y) in Predef is responsible. The "-Yno-predefs" compiler flag might be of some use, assuming you're willing to do the work of manually importing any implicits you otherwise need. By that I mean that you'll have to add import scala.Predef._ all over the place.
Could you also add a two-param overload, which would prevent the compiler applying the syntactic sugar? By making the parameter types suitably obscure you're unlikely to get false positives. E.g.:
object WhyTuple {
  ...
  class DummyType

  def fooIt(a: DummyType, b: DummyType) {
    throw new UnsupportedOperationException("Dummy function - should not be called")
  }
}
