What is the meaning of lam# in ATS?

Sometimes, I see code like the following:
var foo = lam#(x: int): int => ...
What is the meaning of lam#? What is the difference between lam and lam#?

Memory-wise, lam creates a boxed closure (if it creates a closure at all) and allocates it on the heap; lam# creates a flat closure and allocates it in the current stack frame.
See Chapter 26 (Linear Closure-Functions) and Chapter 27 (Stack-Allocated Closure-Functions) of the official tutorial for reference.

Related

What is the difference between immutable and mutable variables?

I have read almost every definition of immutable/mutable variables on the internet, but as a beginner I just do not grasp it fully, so I was wondering if someone could explain it in layman's terms.
As I understand it, an immutable variable (or object) in any programming language is one whose value you cannot change after it has been assigned. For example, in the Haskell programming language I can write:
let x = 5
Since Haskell has immutable variables, x can never have any value other than 5. So if, after that line of code, I write:
x = 2
I have in fact not changed the value of x but made a new variable with the same name, which will now be the one referenced when I use x; so after both lines of code I can only reach an x with the value 2.
But what is a mutable variable then, and what programming languages have one? This is where it gets foggy for me, because when people say mutable variable, they are obviously referring to a variable or object whose value you can indeed change after it has been assigned an initial value.
Does this mean that with a mutable variable you actually manipulate the place in the computer's memory where that variable is stored, whereas with an immutable variable you cannot manipulate that place in memory, or what?
I don't know how to explain my question any further. As I said, I understand that mutable = can change the value of the variable after the initial assignment and immutable = cannot; I get the definition. But I don't understand what it actually means in terms of what is going on behind the scenes. I guess I am looking for easy examples of actual mutable variables.
This has nothing to do with immutability
let x = 5
x = 2
This is reassignment, which is definitely not allowed in Haskell.
First, let's look at a regular let binding:
Prelude> let x = 5 in x
5
it :: Num a => a
You can bind x using let, and rebind a new x in a nested let – this effectively shadows the outer x
Prelude> let x = 5 in let x = 2 in x
2
it :: Num a => a
Remember a let is basically a lambda
Prelude> (\x -> x) 5
5
it :: Num a => a
And of course a lambda can return a lambda, which illustrates shadowing again:
Prelude> (\x -> (\x -> x)) 5 2
2
it :: Num a => a
I believe a shorthand answer to your question would be: a mutable variable is one that holds a value you may want to adjust later.
How you declare one depends on the language you're using.
In Kotlin, val and var are used to declare variables: val is a constant, while var is adjustable.
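Scala draws the same distinction with the same two keywords; here is a minimal sketch (the variable names are just for illustration):
val constant = 5     // immutable binding: cannot be reassigned
// constant = 6      // does not compile: "reassignment to val"
var adjustable = 5   // mutable binding: can be reassigned
adjustable = 6       // fine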
Mutable and immutable, respectively, concern not the variables but the values. Note: one also says the type is (im)mutable.
For instance, if a value is of an immutable type StudentCard, with fields ID and Name, then after creation the fields can no longer be changed. On a name change the student card must be reissued: a new StudentCard must be created.
A more elementary immutable type is String in Java. Assigning the same value to two variables is no problem, as neither variable can change the shared value. Sharing a mutable value, by contrast, can be dangerous.
A common mutable type is the array, in Java and other languages. Sharing an array, say by storing an array parameter in a field, risks that the array's contents are changed somewhere else, inadvertently changing your field.
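A minimal Scala sketch of that risk (the Holder class and its field name are hypothetical):
class Holder(val data: Array[Int])  // stores the array reference, not a copy

val shared = Array(1, 2, 3)
val holder = new Holder(shared)
shared(0) = 99                      // mutation through the original reference...
println(holder.data(0))             // ...shows through the field: prints 99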

Is it safe to write to a std::string's buffer directly?

If I have the following code:
std::string hello = "hello world";
char* internalBuffer = &hello[0];
Is it then safe to write to internalBuffer up to hello.length()? Or is this UB/implementation-defined? Obviously I can write tests and see that it works, but that doesn't answer my question.
Yes, it's safe. No, it's not explicitly allowed by the standard.
According to my copy of the standard draft from about half a year ago, it does assure that data() points at a contiguous array, and that this array is the same one you access through operator[]:
21.4.7.1 basic_string accessors [string.accessors]
const charT* c_str() const noexcept;
const charT* data() const noexcept;
Returns: A pointer p such that p + i == &operator[](i) for each i in [0,size()].
From this one can conclude that operator[] returns a reference to some place within that contiguous array. The standard also allows the reference returned from (non-const) operator[] to be modified.
Having a non-const reference to one element of that array, I dare say we can modify the entire array.
The relevant section in the standard is §21.4.5:
const_reference operator[](size_type pos) const noexcept;
reference operator[](size_type pos) noexcept;
[...]
Returns: *(begin() + pos) if pos < size(), otherwise a reference to an
object of type T with value charT(); the referenced value shall not be modified.
If I understand this correctly, it means that as long as the index given to operator[] is smaller than the string's size, one is allowed to modify the value. If, however, the index is equal to size() and we thus obtain the \0 terminating the string, we must not write to this value.
Cppreference uses a slightly different wording here:
If pos == size(), a reference to the character with value CharT() (the null character) is returned.
For the first (non-const) version, the behavior is undefined if this character is modified.
I read this such that 'this character' refers only to the default-constructed CharT, and not to the reference returned in the other case. But I admit that the wording is a bit confusing here.
In practice it is safe; in theory, no.
The C++ standard doesn't force string to be implemented as a contiguous character array the way it does for vector. I'm not aware of any implementation of string where it is not safe, but theoretically there is no guarantee.
http://herbsutter.com/2008/04/07/cringe-not-vectors-are-guaranteed-to-be-contiguous/

Does an equivalent function in OCaml exist that works the same way as "set!" in Scheme?

I'm trying to make a function that defines a vector that varies based on the function's input, and set! works great for this in Scheme. Is there a functional equivalent for this in OCaml?
I agree with sepp2k that you should expand your question, and give more detailed examples.
Maybe what you need are references.
As a rough approximation, you can see them as variables to which you can assign:
let a = ref 5;;
!a;; (* This evaluates to 5 *)
a := 42;;
!a;; (* This evaluates to 42 *)
Here is a more detailed explanation from http://caml.inria.fr/pub/docs/u3-ocaml/ocaml-core.html:
The language we have described so far is purely functional. That is, several evaluations of the same expression will always produce the same answer. This prevents, for instance, the implementation of a counter whose interface is a single function next : unit -> int that increments the counter and returns its new value. Repeated invocation of this function should return a sequence of consecutive integers — a different answer each time.
Indeed, the counter needs to memorize its state in some particular location, with read/write accesses, but before all, some information must be shared between two calls to next. The solution is to use mutable storage and interact with the store by so-called side effects.
In OCaml, the counter could be defined as follows:
let new_count =
  let r = ref 0 in
  let next () = r := !r + 1; !r in
  next;;
Another, maybe more concrete, example of mutable storage is a bank account. In OCaml, record fields can be declared mutable, so that new values can be assigned to them later. Hence, a bank account could be a two-field record, its number, and its balance, where the balance is mutable.
type account = { number : int; mutable balance : float }
let retrieve account requested =
  let s = min account.balance requested in
  account.balance <- account.balance -. s; s;;
In fact, in OCaml, references are not primitive: they are special cases of mutable records. For instance, one could define:
type 'a ref = { mutable content : 'a }
let ref x = { content = x }
let deref r = r.content
let assign r x = r.content <- x; x
set! in Scheme assigns to a variable. You cannot assign to a variable in OCaml, at all. (So "variables" are not really "variable".) So there is no equivalent.
But OCaml is not a pure functional language. It has mutable data structures. The following things can be assigned to:
Array elements
String elements
Mutable fields of records
Mutable fields of objects
In these situations, the <- syntax is used for assignment.
The ref type mentioned by @jrouquie is a simple, built-in mutable record type that acts as a mutable container for one value. OCaml also provides the ! and := operators for working with refs.
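The record encoding above is easy to mirror in other languages; for comparison, a minimal Scala analogue of OCaml's ref (the Ref class here is hypothetical, not a library type):
class Ref[A](var contents: A)  // a one-field mutable record, like OCaml's ref

val r = new Ref(5)             // let r = ref 5
println(r.contents)            // !r  -- prints 5
r.contents = 42                // r := 42
println(r.contents)            // !r  -- prints 42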

Is Odersky serious with "bills !*&^%~ code!"?

In his book Programming in Scala (Chapter 5, Section 5.9, p. 93), Odersky mentions the expression "bills !*&^%~ code!".
In the footnote on the same page:
"By now you should be able to figure out that given this code, the Scala compiler would invoke (bills.!*&^%~(code)).!()."
That's a bit too cryptic for me; could someone explain what's going on here?
What Odersky means to say is that it would be possible to have valid code looking like that. For instance, the code below:
class BadCode(whose: String, source: String) {
  def ! = println(whose + ", what the hell do you mean by '" + source + "'???")
}
class Programmer(who: String) {
  def !*&^%~(source: String) = new BadCode(who, source)
}
val bills = new Programmer("Bill")
val code = "def !*&^%~(source: String) = new BadCode(who, source)"
bills !*&^%~ code!
Just copy and paste it into the REPL.
The period is optional for calling a method that takes a single parameter, or has an empty parameter list.
When this feature is utilized, the next chunk after the space following the method name is assumed to be the single parameter.
Therefore,
(bills.!*&^%~(code)).!().
is identical to
bills !*&^%~ code!
The second exclamation mark calls a method on the returned value from the first method call.
I'm not sure if the book provides method signatures, but I assume it's just a comment on Scala's syntactic sugar, meaning that if you type:
bill add monkey
where there is an object bill with a method add that takes a parameter, then it is automatically interpreted as:
bill.add(monkey)
My Scala being a little rusty, I'm not entirely sure how it splits code! into (code).!(), except for a vague tickling of the grey cells that the ! operator is used to fire off an actor, which in compiler terms might be interpreted as an implicit .!() method on the object.
The combination of the '.()' being optional in method calls (as Wysawyg explained above) and the ability to use (almost) whatever characters you like for naming methods makes it possible to write methods in Scala that look like operator overloading. You can even invent your own operators.
For example, I have a program that deals with 3D computer graphics. I have my own class Vector for representing a 3D vector:
class Vector(val x: Double, val y: Double, val z: Double) {
  def +(v: Vector) = new Vector(x + v.x, y + v.y, z + v.z)
  // ...etc.
}
I've also defined a method ** (not shown above) to compute the cross product of two vectors. It's very convenient that you can create your own operators like that in Scala; not many other programming languages have this flexibility.
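The answer leaves ** out, but here is a sketch of how such a cross-product operator might look inside the same Vector class:
def **(v: Vector) = new Vector(
  y * v.z - z * v.y,
  z * v.x - x * v.z,
  x * v.y - y * v.x
)
// usage: val n = a ** b  -- a vector perpendicular to both a and b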

Scala: Mutable vs. Immutable Object Performance - OutOfMemoryError

I wanted to compare the performance characteristics of immutable.Map and mutable.Map in Scala for a similar operation (namely, merging many maps into a single one; see this question). I have what appear to be similar implementations for both mutable and immutable maps (see below).
As a test, I generated a List containing 1,000,000 single-item Map[Int, Int] and passed this list into the functions I was testing. With sufficient memory, the results were unsurprising: ~1200ms for mutable.Map, ~1800ms for immutable.Map, and ~750ms for an imperative implementation using mutable.Map -- not sure what accounts for the huge difference there, but feel free to comment on that, too.
What did surprise me a bit, perhaps because I'm being a bit thick, is that with the default run configuration in IntelliJ 8.1, both mutable implementations hit an OutOfMemoryError, while the immutable collection did not. The immutable test did run to completion, but very slowly -- it took about 28 seconds. When I increased the max JVM memory (to about 200MB; not sure where the threshold is), I got the results above.
Anyway, here's what I really want to know:
Why do the mutable implementations run out of memory, but the immutable implementation does not? I suspect that the immutable version allows the garbage collector to run and free up memory before the mutable implementations do -- and all of those garbage collections explain the slowness of the immutable low-memory run -- but I'd like a more detailed explanation than that.
Implementations below. (Note: I don't claim that these are the best implementations possible. Feel free to suggest improvements.)
def mergeMaps[A,B](func: (B,B) => B)(listOfMaps: List[Map[A,B]]): Map[A,B] =
  (Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableMaps[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] =
  (mutable.Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv)) { (acc, kv) =>
    acc + (if (acc.contains(kv._1)) kv._1 -> func(acc(kv._1), kv._2) else kv)
  }

def mergeMutableImperative[A,B](func: (B,B) => B)(listOfMaps: List[mutable.Map[A,B]]): mutable.Map[A,B] = {
  val toReturn = mutable.Map[A,B]()
  for (m <- listOfMaps; kv <- m) {
    if (toReturn contains kv._1) {
      toReturn(kv._1) = func(toReturn(kv._1), kv._2)
    } else {
      toReturn(kv._1) = kv._2
    }
  }
  toReturn
}
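For concreteness, here is a small invocation of the first function, merging duplicate keys by summing their values (sample data made up for illustration):
val maps = List(Map(1 -> 2), Map(1 -> 3), Map(2 -> 5))
val merged = mergeMaps[Int, Int](_ + _)(maps)
// merged == Map(1 -> 5, 2 -> 5)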
Well, it really depends on the actual type of Map you are using. Probably HashMap. Now, mutable structures like that gain performance by pre-allocating memory they expect to use. You are joining one million maps, so the final map is bound to be somewhat big. Let's see how these keys/values get added:
protected def addEntry(e: Entry) {
  val h = index(elemHashCode(e.key))
  e.next = table(h).asInstanceOf[Entry]
  table(h) = e
  tableSize = tableSize + 1
  if (tableSize > threshold)
    resize(2 * table.length)
}
See the 2 * in the resize line? The mutable HashMap grows by doubling each time it runs out of space, while the immutable one is pretty conservative in memory usage (though existing keys will usually occupy twice the space when updated).
Now, as for the other performance problems: you are creating a list of keys and values in the first two versions. That means that, before you join any maps, you already have each Tuple2 (the key/value pairs) in memory twice! Plus the overhead of List, which is small, but we are talking about more than one million elements times that overhead.
You may want to use a projection, which avoids that. Unfortunately, projection is based on Stream, which isn't very reliable for our purposes on Scala 2.7.x. Still, try this instead:
for (m <- listOfMaps.projection; kv <- m) yield kv
A Stream doesn't compute a value until it is needed. The garbage collector ought to collect the unused elements as well, as long as you don't keep a reference to the Stream's head, which seems to be the case in your algorithm.
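A small sketch of what "doesn't compute a value until it is needed" buys you, using Stream directly (Scala 2.x; on 2.13+ LazyList plays this role):
val naturals = Stream.from(1)    // conceptually infinite; nothing is computed up front
val doubled = naturals map (_ * 2)
println(doubled.take(3).toList)  // forces only the first three elements: List(2, 4, 6)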
EDIT
Complementing the above: a for/yield comprehension takes one or more collections and returns a new collection. As often as it makes sense, the returned collection is of the same type as the original collection. So, for example, in the following code the for-comprehension creates a new list, which is then stored in l2. It is not val l2 = that creates the new list, but the for-comprehension.
val l = List(1,2,3)
val l2 = for (e <- l) yield e*2
Now, let's look at the code being used in the first two algorithms (minus the mutable keyword):
(Map[A,B]() /: (for (m <- listOfMaps; kv <- m) yield kv))
The foldLeft operator, here written with its /: synonym, will be invoked on the object returned by the for-comprehension. Remember that a : at the end of an operator inverts the order of the object and the parameters.
Now, let's consider what object this is, on which foldLeft is being called. The first generator in this for-comprehension is m <- listOfMaps. We know that listOfMaps is a collection of type List[X], where X isn't really relevant here. The result of a for-comprehension on a List is always another List. The other generators aren't relevant.
So, you take this List, get all the key/values inside each Map which is a component of this List, and make a new List with all of that. That's why you are duplicating everything you have.
(in fact, it's even worse than that, because each generator creates a new collection; the collections created by the second generator are just the size of each element of listOfMaps though, and are immediately discarded after use)
The next question -- actually, the first one, but it was easier to invert the answer -- is how the use of projection helps.
When you call projection on a List, it returns a new object, of type Stream (on Scala 2.7.x). At first you may think this will only make things worse, because you'll now have three copies of the List instead of a single one. But a Stream is not pre-computed; it is lazily computed.
What that means is that the resulting object, the Stream, isn't a copy of the List, but, rather, a function that can be used to compute the Stream when required. Once computed, the result will be kept so that it doesn't need to be computed again.
Also, map, flatMap and filter on a Stream all return a new Stream, which means you can chain them all together without making a single copy of the List that created them. Since for-comprehensions with yield use these very functions, the use of Stream inside them prevents unnecessary copies of the data.
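Concretely, the compiler rewrites a for-comprehension with yield into calls to those methods; the two values below are equivalent (a sketch that elides the pattern-matching details of the actual translation):
val pairs1 = for (m <- listOfMaps.projection; kv <- m) yield kv
val pairs2 = listOfMaps.projection flatMap (m => m map (kv => kv))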
Now, suppose you wrote something like this:
val kvs = for (m <- listOfMaps.projection; kv <- m) yield kv
(Map[A,B]() /: kvs) { ... }
In this case you aren't gaining anything. After assigning the Stream to kvs, the data hasn't been copied yet. Once the second line is executed, though, kvs will have computed each of its elements, and, therefore, will hold a complete copy of the data.
Now consider the original form:
(Map[A,B]() /: (for (m <- listOfMaps.projection; kv <- m) yield kv))
In this case, the Stream is used at the same time it is computed. Let's briefly look at how foldLeft for a Stream is defined:
override final def foldLeft[B](z: B)(f: (B, A) => B): B = {
  if (isEmpty) z
  else tail.foldLeft(f(z, head))(f)
}
If the Stream is empty, it just returns the accumulator. Otherwise, it computes a new accumulator (f(z, head)) and then passes it and the function to the tail of the Stream.
Once f(z, head) has executed, though, there will be no remaining reference to the head. In other words, nothing anywhere in the program will be pointing to the head of the Stream, which means the garbage collector can collect it, thus freeing memory.
The end result is that each element produced by the for-comprehension will exist only briefly, while you use it to compute the accumulator. And this is how you avoid keeping a copy of your whole data.
Finally, there is the question of why the third algorithm does not benefit from it. Well, the third algorithm does not use yield, so no copy of any data whatsoever is being made. In this case, using projection only adds an indirection layer.
