Memory Management in JavaScript

Have a look at the code below. Let's assume each statement takes 0 milliseconds to complete. printAfter2 is a simple function that prints the JSON-serialized form of the object passed to it 2 seconds after it is called.
printAfter2 = (obj) => {
  setTimeout(() => {
    console.log(JSON.stringify(obj));
  }, 2000);
}
In the code below we create a function that:
- defines a block-scoped variable obj at time 0 ms
- calls printAfter2 with obj as its argument at time 0 ms; since obj is an Object, a reference to it is passed to the function
- makes a console.log call, after which the block ends at time 0 ms, so the block-scoped variable obj should also be destroyed
At time 2000 ms, the callback scheduled by printAfter2 reads the obj parameter that was passed in. That parameter is a reference obtained from a variable that should have been destroyed by now, yet the original obj is printed intact at 2000 ms even though it was supposedly destroyed at 0 ms. Why is this so?
(The wrapping function doesn't actually need to be async; ignore that.)
(async () => {
  let obj = {name: 'Ali'}
  printAfter2(obj);
  console.log("obj var will be destroyed after this block");
})()

When the variable/parameter obj goes out of scope, that doesn't mean that anything gets destroyed immediately. It only means that one reference to some object disappears, which makes this object eligible for garbage collection if and only if that was the last reference to it. The garbage collector will eventually (next time it runs) free memory belonging to objects that are no longer reachable, i.e. have no references to them. Let's look at a simpler case, without any closures:
let o1;
function f1(obj) {
  console.log(obj); // (3)
} // (4)
o1 = new Object(); // (1)
f1(o1); // (2)
let o2 = o1; // (5)
o1 = null; // (6)
// (7)
o2 = new Array();
// (8)
Line (1) obviously allocates an Object, and uses the variable o1 to refer to it. Note that there is a distinction between the object and the variable; in particular they have different lifetimes.
Line (2) passes the Object to the function; while the function executes (e.g. in line (3)), there are two variables referring to the same object: o1 in the outer scope, and obj in f1's scope.
When f1 terminates in line (4), the variable obj goes out of scope, but the Object is still reachable via o1.
Line (5) creates a new variable, again referring to the same object. This is conceptually very similar to passing it to some function.
When o1 stops referring to the Object in line (6), that doesn't make the Object eligible for garbage collection in line (7), because o2 is still referring to it ("keeping it alive"). Only once o2 is also reassigned, or goes out of scope, does the Object become unreachable: if the garbage collector runs any time after execution has reached line (8), the Object's memory will be freed.
(Side note: the garbage collector doesn't actually "collect garbage" or "destroy objects", because it doesn't touch that memory at all. It only registers the fact that the memory where the Object was stored is now free to be used for a new allocation.)
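To make the "eventually" part concrete, here's a small sketch (not something the examples above rely on) using FinalizationRegistry, which modern engines provide for observing, after the fact, that an object has been collected; whether and when the callback fires is entirely up to the engine:
const registry = new FinalizationRegistry((label) => {
  console.log(`${label} was reclaimed by the garbage collector`);
});

let data = {name: 'Ali'};
registry.register(data, 'the object');
data = null; // the object is now merely eligible for collection
// The cleanup callback may run much later, or never; the engine decides when (or if) to collect.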
In the case of your example, you're creating a closure () => console.log(JSON.stringify(obj)) that contains a reference to the object. While this closure sits around waiting for its time to execute, this reference will keep the object alive. It can only be freed after the closure has run its course, and has become unreachable itself.
To illustrate in a different way:
function MakeClosure() {
  let obj = {message: "Hello world"};
  return function() { console.log(JSON.stringify(obj)); };
}
let callback = MakeClosure();
// While the local variable `obj` is inaccessible now, `callback` internally
// has a reference to the object created as `{message: ...}`.
setTimeout(callback, 2000);
// Same situation as above at this point.
callback = null;
// Now the variable `callback` can't be used any more to refer to the closure,
// but the `setTimeout` call added the closure to some internal list, so it's
// not unreachable yet.
// Only once the callback has run and is dropped from the engine-internal list
// of waiting setTimeout-scheduled callbacks, can the `{message: ...}` object get
// cleaned up -- again, this doesn't happen immediately, only whenever the garbage
// collector decides to run.
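Coming back to the original example: if all you need at 2000 ms is the serialized text, one way (just a sketch, not the only option) to avoid keeping the whole object alive is to stringify it eagerly, so the closure captures only the resulting string:
const printAfter2 = (obj) => {
  const text = JSON.stringify(obj); // serialize now, at 0 ms
  setTimeout(() => {
    console.log(text); // the closure captures only the string, not the object
  }, 2000);
};

(() => {
  let obj = {name: 'Ali'};
  printAfter2(obj);
  // Once this block ends, nothing references the object anymore, so it is
  // eligible for garbage collection well before the timer fires.
})();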

Related

Is * operator of std::shared_ptr thread safe?

I have a std::shared_ptr which changes asynchronously from a callback.
In main thread, I want to read the "latest" value and do complex calculations on it, and I do not care if the pointer's value changes while those calculations are running.
For this, I am simply making a copy of the contained value on the main thread:
// async thread
void callback(P new_data) {
    smart_pointer_ = new_data;
}
// main thread loop!
Value copy_of_pointer_value = *smart_pointer_; // smart_pointer_ could be changing in callback right now
// do calcs with copy_of_pointer_value
Is this safe or should I be explicitly making a copy of the smart pointer before trying to read its value, like this:
// main thread loop!
auto smart_copy = smart_pointer_;
// I know I could work with *smart_copy directly, but I need to copy anyway for other reasons
Value copy_of_pointer_value = *smart_copy;
// do calcs with copy_of_pointer_value

Reassigning a new object to a static object in C++

I have the following piece of code that I have a question about.
void f()
{
    static V v(10, 0); // the first argument is the size, the second is the init value for each element
    // ...
    v = V(5, 0);
}
Does the previously allocated V(10,0) get destroyed automatically when I call V(5,0) and assign it to v in the second line ? Or do I have to destroy it ?
Since v is static, is the object V(5,0) retained across function calls ?
Does the previously allocated V(10,0) get destroyed automatically when I call V(5,0) and assign it to v in the second line ? Or do I have to destroy it ?
No. The object lives for the life of the application. Its state is changed by the assignment operation.
The object gets destroyed automatically when the application is terminated. You don't have to destroy it. If you try to destroy it, your program will have undefined behavior.
PS You can use better names than v and V to make the code and the discussion more meaningful.

Golang garbage collector and maps

I'm processing some user session data inside a goroutine and creating a map inside it to keep track of user id -> session data. The goroutine loops through a slice, and if a SessionEnd event is found, the map key is deleted within the same iteration. The deletion doesn't always seem to take effect, though: in later iterations I can sometimes still retrieve some of the data, and the 'key exists' bool comes back true. It's as if some variables haven't been zeroed yet.
Each map has only one goroutine writing/reading from it. From my understanding there shouldn't be a race condition, but it definitely seems that there is with the map and delete().
The code works fine if the garbage collector is run on every iteration. Am I using a map for the wrong purpose?
Pseudocode (a function that is run inside a single goroutine, lines is passed as a variable):
active := make(ActiveSessions) // map[int]UserSession
for _, l := range lines { // lines is a slice of a parsed log
    u = l.EventData.(parser.User)
    s, exists = active[u.SessionID]
    switch l.Event {
    // Contains cases which can check whether exists is true or false;
    // errors if it contains an event that can't happen,
    // for example UserDisconnect before UserConnect,
    // or UserConnect while a session is already active
    case "UserConnect":
        if exists {
            // error, can't occur
            // The same session id can occur in the log after a prior session has completed,
            // which is exactly when the problems occur
        }
    case "UserDisconnect":
        sessionFinished = true
    }
    // ...
    if sessionFinished {
        // <add session to finished sessions>
        delete(active, u.SessionID)
        // Code works only if runtime.GC() is executed here, could just be a coincidence
    }
}

Why doesn't TypeScript IntelliSense show Object extensions?

Consider this code to extend the Object type:
interface Object
{
  doSomething(): void;
}
Object.prototype.doSomething = function ()
{
  // do something
}
With this in place, the following both compile:
(this as Object).doSomething();
this.doSomething();
BUT: when I'm typing the first line, IntelliSense knows about the doSomething method and shows it in the auto-completion list. When I'm typing the second line, it does not.
I'm puzzled about this, because doesn't every variable derive from Object, and therefore why doesn't Visual Studio show the extra method in the method list?
Update:
Even though IntelliSense doesn't offer the method, it does seem to recognize it when I've typed it manually.
What could explain that?!
...because doesn't every variable derive from Object
No, for two reasons:
1. JavaScript (and TypeScript) has both objects and primitives. this can hold any value (in strict mode), and consequently can be a primitive:
"use strict";
foo();
foo.call(42);
function foo() {
  console.log(typeof this);
}
Here's that same code in the TypeScript playground. In both cases (here and there), the above outputs:
undefined
number
...neither of which is derived from Object.
2. Not all objects inherit from Object.prototype:
var obj = Object.create(null);
console.log(typeof obj.toString); // undefined
console.log("toString" in obj); // false
If an object's prototype chain is rooted in an object that doesn't have a prototype at all (like obj above), it won't have the features of Object.prototype.
From your comment below:
I thought even primitives like number inherit from Object. If number doesn't, how does number.ToString() work?
Primitives are primitives, which don't inherit from Object. But you're right that most of them seem to, because number, string, boolean, and symbol have object counterparts (Number, String, Boolean, and Symbol) which do derive from Object. But not all primitives do: undefined and null throw a TypeError if you try to treat them like objects. (Yes, null is a primitive even though typeof null is "object".)
For the four of them that have object counterparts, when you use a primitive like an object like this:
var a = 42;
console.log(a.toString());
...an appropriate type of object is created and initialized from the primitive via the abstract ToObject operation in the spec, and the resulting object's method is called; then unless that method returns that object reference (I don't think any built-in method does, but you can add one that does), the temporary object is immediately eligible for garbage collection. (Naturally, JavaScript engines optimize this process in common cases like toString and valueOf.)
You can tell the object is temporary by doing something like this:
var a = 42;
console.log(a); // 42
console.log(typeof a); // "number"
a.foo = "bar"; // temp object created and released
console.log(a.foo); // undefined, the object wasn't assigned back to `a`
var b = new Number(42);
console.log(b); // (See below)
console.log(typeof b); // "object"
b.foo = "bar"; // since `b` refers to an object, the property...
console.log(b.foo); // ... is retained: "bar"
(Re "see below": In the Stack Snippets console, you see {} there; in Chrome's real console, what you see depends on whether you have the console open: If you don't, opening it later will show you 42; if you do, you'll see ▶ Number {[[PrimitiveValue]]: 42} which you can expand with the ▶.)
Does number implement its own toString method, having nothing to do with Object?
Yes, but that doesn't really matter re your point about primitives and their odd relationship with Object.
So to round up:
this may contain a primitive, and while some primitives can be treated like objects, not all can.
this may contain an object reference for an object that doesn't derive from Object (which is to say, doesn't have Object.prototype in its prototype chain).
JavaScript is a hard language for IntelliSense. :-)

Variable capture by closures in Swift and inout parameters

I noticed that when a variable is captured by a closure in Swift, the closure can actually modify the value. This seems crazy to me and an excellent way of getting horrendous bugs, especially when the same var is captured by several closures.
var capture = "Hello captured"
func g() {
    // this shouldn't be possible!
    capture = capture + "!"
}
g()
capture
On the other hand, there's the inout parameters, which allow a function or closure to modify its parameters.
What's the need for inout when even captured variables can already be modified with impunity?!
Just trying to understand the design decisions behind this...
Variables from an outer scope that are captured aren't parameters to the routine, hence their mutability is inherited from context. By default, actual parameters to a routine are constant (let) and hence can't be modified locally (and their value isn't returned).
Also note that your example isn't really capturing capture since it's a global variable.
var global = "Global"
func function(nonmutable: Int, var mutable: Int, inout returnable: Int) -> Void {
    // global can be modified here because it's a global (not captured!)
    global = "Global 2"
    // nonmutable can't be modified
    // nonmutable = 3
    // mutable can be modified, but its caller won't see the change
    mutable = 4
    // returnable can be modified, and its caller sees the change
    returnable = 5
}
var nonmutable = 1
var mutable = 2
var output = 3
function(nonmutable, mutable, &output)
println("nonmutable = \(nonmutable)")
println("mutable = \(mutable)")
println("output = \(output)")
Also, as you can see, the inout parameter is passed differently so that it's obvious that on return, the value may be different.
David's answer is totally correct, but I thought I'd give an example how capture actually works as well:
func captureMe() -> (String) -> () {
    // v~~~ This will get 'captured' by the closure that is returned:
    var capturedString = "captured"
    return {
        // The closure that is returned will print the old value,
        // assign a new value to 'capturedString', and then
        // print the new value as well:
        println("Old value: \(capturedString)")
        capturedString = $0
        println("New value: \(capturedString)")
    }
}
let test1 = captureMe() // Output: Old value: captured
println(test1("altered")) // New value: altered
// But each new time that 'captureMe()' is called, a new instance
// of 'capturedString' is created with the same initial value:
let test2 = captureMe() // Output: Old value: captured
println(test2("altered again...")) // New value: altered again...
// Old value will always start out as "captured" for every
// new function that captureMe() returns.
The upshot of that is that you don't have to worry about the closure altering the captured value - yes, it can alter it, but only for that particular instance of the returned closure. All other instances of the returned closure will get their own, independent copy of the captured value that they, and only they, can alter.
Here are a couple of use cases for closures capturing variables outside their local context, that may help see why this feature is useful:
Suppose you want to filter duplicates out of an array. There’s a filter function that takes a filtering predicate and returns a new array of only entries matching that predicate. But how to pass the state of which entries have already been seen and are thus duplicates? You’d need the predicate to keep state between calls – and you can do this by having the predicate capture a variable that holds that state:
func removeDupes<T: Hashable>(source: [T]) -> [T] {
    // "seen" is a dictionary used to track duplicates
    var seen: [T:Bool] = [:]
    return source.filter { // brace marks the start of a closure expression
        // the closure captures the dictionary and updates it
        seen.updateValue(true, forKey: $0) == nil
    }
}
// prints [1,2,3,4]
removeDupes([1,2,3,1,1,2,4])
It’s true that you could replicate this functionality with a filter function that also took an inout argument – but it would be hard to write something so generic yet flexible as the possibilities with closures. (you could do this kind of filter with reduce instead of filter, since reduce passes state from call to call – but the filter version is probably clearer)
There is a GeneratorOf struct in the standard library that makes it very easy to whip up sequence generators of various kinds. You initialize it with a closure, and that closure can capture variables to use for the state of the generator.
Suppose you want a generator that serves up a random ascending sequence of m numbers from a range 0 to n. Here’s how to do that with GeneratorOf:
import Darwin
func randomGeneratorOf(#n: Int, #from: Int) -> GeneratorOf<Int> {
    // state variables to capture in the closure
    var select = UInt32(n)
    var remaining = UInt32(from)
    var i = 0
    return GeneratorOf {
        while i < from {
            if arc4random_uniform(remaining) < select {
                --select
                --remaining
                return i++
            }
            else {
                --remaining
                ++i
            }
        }
        // returning nil marks the end of the sequence
        return nil
    }
}
var g = randomGeneratorOf(n: 5, from: 20)
// prints 5 random numbers in 0..<20
println(",".join(map(g,toString)))
Again, it’s possible to do this kind of thing without closures – in languages without them, you’d probably have a generator protocol/interface and create an object that held state and had a method that served up values. But closure expressions allow a flexible way to do this with minimal boiler plate.
A closure being able to modify the captured variable in the outer scope is pretty common across languages. This is the default behavior in C#, JavaScript, Perl, PHP, Ruby, Common Lisp, Scheme, Smalltalk, and many others. This is also the behavior in Objective-C if the outer variable is __block, in Python 3 if the outer variable is nonlocal, and in C++ if the outer variable is captured by reference with &.
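For comparison with the main question's language, here is a minimal JavaScript sketch of that same default behavior (makeCounter is just an illustrative name): the returned closure mutates the very variable it captured, not a copy.
function makeCounter() {
  let count = 0;            // captured by the closure below
  return function () {
    count += 1;             // modifies the captured variable itself
    return count;
  };
}

const counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2, the captured count persisted and was mutated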
