Close() invocations and the garbage collector in Go

In Go, interfaces that use resources like network connections usually have a Close() method that disposes of these resources.
Now I wonder what would happen if the associated structs implementing the interface get garbage-collected without Close having been invoked.
Will the OS keep the network connection / file descriptor / whatever open? Will the garbage collector do something or will something even prevent it from touching that struct?
For example:
conn, _ := net.DialTCP(network, laddr, raddr)
// do stuff, then
conn = nil
// forgot to invoke `Close()`!!
// what happens now?

Such closeable resources may have finalizers set with runtime.SetFinalizer, for example the netFD of TCPListener:
runtime.SetFinalizer(fd, (*netFD).Close)
This is no reason to omit calling Close() to free resources, though, not least because there is no guarantee that the finalizer will ever run, or when:
The finalizer is scheduled to run at some arbitrary time after the program can no longer reach the object to which obj points. There is no guarantee that finalizers will run before a program exits, so typically they are useful only for releasing non-memory resources associated with an object during a long-running program.
Call Close() to make sure the resources are freed.
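For example, a hedged rewrite of the snippet from the question, checking the error and deferring the cleanup:
conn, err := net.DialTCP(network, laddr, raddr)
if err != nil {
    return err // handle the error instead of discarding it
}
defer conn.Close() // runs when the surrounding function returns
// do stuff with conn...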
Related:
Which objects are finalized in Go by default and what are some of the pitfalls of it?

The OS will keep the file descriptors of unclosed connections open; they are freed only when the program exits.

Is it necessary to free a mutex created by xSemaphoreCreateMutex()?

FreeRTOS and ESP-IDF provide xSemaphoreCreateMutex() which allocates and initializes a mutex. Their docs say:
If a mutex is created using xSemaphoreCreateMutex() then the required
memory is automatically dynamically allocated inside the
xSemaphoreCreateMutex() function. (see
http://www.freertos.org/a00111.html).
However, I can't find any info on whether it is necessary to free the memory allocated for the mutex. This would be important when using C++ with a mutex member variable, like:
class MyClass
{
public:
    MyClass()
    {
        mAccessMutex = xSemaphoreCreateMutex();
    }

    ~MyClass()
    {
        // What to do here??
    }

private:
    SemaphoreHandle_t mAccessMutex;
};
REFERENCE
https://docs.espressif.com/projects/esp-idf/en/latest/esp32/api-reference/system/freertos.html?highlight=xsemaphore#semaphore-api
According to the FreeRTOS API reference, the proper way to destroy/delete a mutex is vSemaphoreDelete():
Deletes a semaphore, including mutex type semaphores and recursive
semaphores.
Do not delete a semaphore that has tasks blocked on it.
If you're using heap_1, deleting is not possible. Also, make sure that you fully understand the perils of dynamic memory allocation in embedded systems before using it. If MyClass is going to be created and destroyed on a regular/periodic basis, this may cause problems.
So yes, it's necessary to call vSemaphoreDelete(mAccessMutex) in ~MyClass(). But it's probably best to make sure that MyClass instances never get destroyed. In my projects, I generally use one-time dynamic allocation during initialization and forget to implement a proper destructor (which is a bad habit that I need to fix).
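A minimal sketch of such a destructor (assuming, per the warning quoted above, that no task can still be blocked on the mutex when it runs):
MyClass::~MyClass()
{
    if (mAccessMutex != NULL) {
        vSemaphoreDelete(mAccessMutex);
        mAccessMutex = NULL;
    }
}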

Why is the finalizer never called?

var p = &sync.Pool{
    New: func() interface{} {
        return &serveconn{}
    },
}

func newServeConn() *serveconn {
    sc := p.Get().(*serveconn)
    runtime.SetFinalizer(sc, (*serveconn).finalize)
    fmt.Println(sc, "SetFinalizer")
    return sc
}

func (sc *serveconn) finalize() {
    fmt.Println(sc, "finalize")
    *sc = serveconn{}
    runtime.SetFinalizer(sc, nil)
    p.Put(sc)
}
The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
UPDATE
This may be related: https://github.com/golang/go/issues/2368
The above code tries to reuse objects via SetFinalizer, but after debugging I found that the finalizer is never called. Why?
The finalizer is only called on an object when the GC marks it as unused and then tries to sweep (free) it at the end of the GC cycle.
As a corollary, if a GC cycle is never performed during the runtime of your program, the finalizers you set may never be called.
In case you hold a wrong assumption about Go's GC, it may be worth noting that Go does not employ reference counting on values. Instead, its GC runs concurrently with the program, in sessions that happen periodically and are triggered by parameters such as the pressure on the heap produced by allocations.
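A minimal standalone demo of this behaviour: the finalizer below fires only because runtime.GC() forces a collection, and the sleep gives the finalizer goroutine time to run.
package main

import (
    "fmt"
    "runtime"
    "time"
)

type resource struct{ id int }

func main() {
    r := &resource{id: 1}
    runtime.SetFinalizer(r, func(r *resource) {
        fmt.Println("finalizer ran for resource", r.id)
    })
    r = nil                 // drop the only reference
    runtime.GC()            // force a GC cycle; without one, the finalizer may never run
    time.Sleep(time.Second) // finalizers run on a separate goroutine
}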
A couple of assorted notes regarding finalizers:
- When the program terminates, no GC is forcibly run. A corollary of this is that a finalizer is not guaranteed to run at all.
- If the GC finds a finalizer on an object about to be freed, it calls the finalizer but does not free the object. The object itself will be freed only at the next GC cycle, wasting the memory in the meantime.
All in all, you appear to be trying to implement destructors.
Please don't: make your objects implement the sort-of standard method called Close, and state in the contract of your type that the programmer is required to call it when they're done with the object.
When a programmer wants to call such a method no matter what, they use defer.
Note that this approach works perfectly for all types in the Go stdlib that wrap resources provided by the OS, such as file and socket descriptors, so there is no need to pretend your types are somehow different.
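Applied to the code in the question, a sketch of that contract might drop the finalizer entirely and rely on an explicit Close (names as in the question):
func newServeConn() *serveconn {
    return p.Get().(*serveconn)
}

// Close resets the object and returns it to the pool. The type's contract
// requires callers to invoke it, typically via defer, when they are done.
func (sc *serveconn) Close() error {
    *sc = serveconn{}
    p.Put(sc)
    return nil
}

// Usage:
//     sc := newServeConn()
//     defer sc.Close()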
Another useful thing to keep in mind is that Go was explicitly engineered to be a no-nonsense, no-frills, no-magic, in-your-face language, and you're trying to add magic to it.
Please don't; those who like deciphering layers of magic program in Scala and other such languages.

Global variables in IOKit drivers

I'm using some global variables in an IOKit-based driver, i.e. outside the main class instance. However, this causes some unexpected panics upon driver startup, due to using uninitialized global variables, or an attempt to double-free a global variable upon teardown.
What is the life cycle of global variables relative to the IOKit driver life cycle?
For example, if I've got a global variable of type lck_grp_t *my_lock_grp:
Can I assume my global variable is already allocated and ready to be set when my IOKit driver reaches the ::start method?
(my_lock_grp = lck_grp_alloc_init("my-locks", my_lock_grp_attr);)
Can I assume my global variable is still valid when I attempt to release it in my IOKit ::free method? (lck_grp_free(my_lock_grp))
And the general question: what is the life cycle of global variables in an IOKit-based driver compared to that of the driver instance itself?
The lifetime will definitely be the same as the lifetime of the kext. The IOKit init/start/stop/free functions on your classes happen between the kext start and stop functions (you may not have explicit kext start and stop functions); global constructors run before the kext start function, and likewise global destructors run after the kext stop function. The memory allocation/deallocation for global/static variables is done by the dynamic kernel linker at the same time as the kext's code itself is loaded and unloaded.
There are 3 things I can think of:
1. The IOService start() and free() functions are not matched: free() is called even if start() was never called. For example, if you have a probe() function, and it is called and returns nullptr, then start() is never called, but free() definitely will be, and it tries to free a lock group that was never allocated. Similarly, if an init() function returned false, start() will never run, but free() will. The counterpart of free() is the init() family of member functions, so only unconditionally destroy (no nullptr check) in free() what is unconditionally created in all possible init… functions.
2. start() can be called any number of times on different instances, so if you always run my_lock_grp = lck_grp_alloc_init() in start() and 2 instances are created, my_lock_grp only remembers the last one. If both instances of your class are then freed, you end up trying to free one lock group twice and the other not at all. This is obviously bad news. For initialising/destroying truly global state, I recommend using kext start and stop functions or global constructors/destructors (see the sketch after this list).
3. Otherwise, I suspect you might be running into a situation where some other part of the running kernel still holds a dangling reference past the point where your kext has been unloaded, for example if you created a new kernel thread and this thread is still running, or if you haven't deregistered all the callbacks you registered, or if a callback has been deregistered but is not guaranteed to have completed all invocations. (kauth listeners are notorious for this latter situation.)
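A sketch of truly global initialisation in kext start/stop routines (the routine names my_kext_start/my_kext_stop are placeholders for whatever your kext declares; the lock group attributes are left as NULL for brevity):
#include <mach/mach_types.h>
#include <mach/kmod.h>
#include <kern/locks.h>

static lck_grp_t *my_lock_grp = NULL;

extern "C" kern_return_t my_kext_start(kmod_info_t *ki, void *d)
{
    // Runs once when the kext is loaded, before any IOService instance starts.
    my_lock_grp = lck_grp_alloc_init("my-locks", NULL);
    return (my_lock_grp != NULL) ? KERN_SUCCESS : KERN_FAILURE;
}

extern "C" kern_return_t my_kext_stop(kmod_info_t *ki, void *d)
{
    // Runs once when the kext is unloaded, after all instances are freed.
    lck_grp_free(my_lock_grp);
    my_lock_grp = NULL;
    return KERN_SUCCESS;
}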
If none of those sound like they might be the problem, I suggest posting the affected code and the panic log, maybe we can make more sense of the problem if we have some hard data.

Can I explicitly invoke property destructors so that I can see which one causes problems?

I guess this is a really nasty issue: it seems like one of the property destructors of my class creates a deadlock. Property destructors are called automatically after the class destructor, and I'd like to call them manually and make a log entry after each one succeeds.
The problem only occurs on devices, where the debugger can't be used, so I am using a log instead.
Client::~Client() {
    // Stops io service and disconnects sockets
    exit();
    LOG("io_service stopped" << endl);
    // Destroy IO service
    io_.~io_service();
    LOG("io_service destroyed" << endl);
}
But the code above actually causes an exception, because ~io_service() then gets called twice.
So is there a way to do this properly? If not, what's an alternative to debugging destructors?
You can't alter the compiler's behaviour like that: the compiler augments the destructor to destroy member objects.
What you can do is declare io_ as a pointer and allocate it dynamically with new, then call delete io_ and monitor what happens there.
The other solution is simply to put a breakpoint on the io_service destructor and follow what happens upon destruction; this is probably the best idea.
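A sketch of the pointer approach, here using std::unique_ptr so destruction still happens exactly once, at a line you can log around (LOG and exit() stand in for the question's own helpers):
#include <boost/asio.hpp>
#include <iostream>
#include <memory>

using std::endl;
#define LOG(x) (std::cout << x) // stand-in for the question's LOG macro

class Client {
public:
    Client() : io_(new boost::asio::io_service) {}

    ~Client() {
        exit(); // stops the io service and disconnects sockets
        LOG("io_service stopped" << endl);
        io_.reset(); // destroys the io_service exactly once, right here
        LOG("io_service destroyed" << endl);
    }

private:
    void exit() { /* the question's shutdown logic goes here */ }
    std::unique_ptr<boost::asio::io_service> io_;
};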

Windows: TCP/IP: force close connection: avoid memleaks in kernel/user-level

A question for Windows network programming experts.
When I use pseudo-code like this:
reconnect:
    s = socket(...);
    // more code...
read_reply:
    recv(...);
    // merge received data
    if (high_level_protocol_error) {
        // whoops, there was a deviation from the protocol, like an overflow
        // need to reset the connection and discard data right now!
        closesocket(s);
        goto reconnect;
    }
Does the kernel disassociate and free all data "physically" received from the NIC (it must already be sitting in kernel memory, waiting for user level to read it with recv()) when I call closesocket()? Logically it should, since the data is no longer associated with any internal object, right?
I ask because I don't want to waste an unknown amount of time on a clean shutdown like "call recv() until it returns an error". That makes no sense: what if it never returns an error, say, because the server keeps sending data forever and never closes the connection, misbehaving as that would be?
I'm wondering about this since I don't want my application to cause memory leaks anywhere. Is this way of forcibly resetting a connection that is still expected to deliver an unknown amount of data correct?
Optional addition to the question: if this method is considered correct for Windows, can it also be considered correct (with closesocket() changed to close()) for UNIX-compliant OSes?
Kernel drivers in Windows (or any OS really), including tcpip.sys, are supposed to avoid memory leaks in all circumstances, regardless of what you do in user mode. I would think that the developers have charted the possible states, including error states, to make sure that resources aren't leaked. As for user mode, I'm not exactly sure but I wouldn't think that resources are leaked in your process either.
Sockets are just file objects in Windows. When you close the last handle to a file, the IO manager sends an IRP_MJ_CLEANUP request to the driver that owns the file, telling it to clean up the resources associated with it. The receive buffers associated with the socket are freed along with the file object.
It does say in the closesocket documentation that pending operations are canceled but that async operations may complete after the function returns. It sounds like closing the socket while in use is a supported scenario and wouldn't lead to a memory leak.
There will be no leak, and you are under no obligation to read the stream to EOS before closing. If the sender is still sending after you close, it will eventually get a 'connection reset'.
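If you want the reset to be immediate rather than eventual, one common variant (a sketch, not something the answers above require) is an abortive close via SO_LINGER with a zero timeout, which makes closesocket() discard buffered data and send an RST:
linger lo;
lo.l_onoff = 1;   // enable linger
lo.l_linger = 0;  // zero timeout: abortive close, RST instead of FIN
setsockopt(s, SOL_SOCKET, SO_LINGER, (const char *)&lo, sizeof(lo));
closesocket(s);   // pending data in both directions is discarded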
