kernel: synchronizing deletion of shared field of task_struct - linux-kernel

I would like to add a pointer (to an object) to task_struct that is shared between all threads in a thread group. After the object has been deleted by one thread, how can I ensure that another thread will not attempt to dereference the now-invalid pointer?
Could I add an atomic reference-count field to task_struct and then update it in sync across all threads of a process (holding a global spinlock while traversing the task_structs)?
Or should I implement a kernel thread that manages the objects and their reference counts? It seems this problem must already have been solved for other shared entities such as virtual memory and file handles.

You could do this by defining your own data structure:
struct my_task_data {
    void *real_data;
};
The task_struct must be enhanced:
struct task_struct {
    ...
    struct my_task_data *mtd;
};
In the clone() call you need to handle the mtd member of the task_struct.
real_data points to whatever you want. Doing it this way means each task_struct has one pointer to a shared object (mtd) which is always valid and can be dereferenced at any time. This shared object contains a pointer to your actual data item. When you want to access the item, do:
data = current->mtd->real_data;
If data is NULL, another thread has deleted it; otherwise it can be used.
Locking issues are not shown in this example: of course you need to protect access to real_data with some locking mechanism, like a mutex or semaphore kept in the my_task_data structure, and hold it while manipulating my_task_data.
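A minimal sketch of that scheme with the locking filled in (the spinlock inside my_task_data and the helper names used here are illustrative assumptions, not the only way to do it):
#include <linux/slab.h>
#include <linux/spinlock.h>

struct my_task_data {
    spinlock_t lock;    /* protects real_data */
    void *real_data;    /* the shared object, NULL once deleted */
};

/* Any thread that wants to use the shared object. */
static void use_shared_object(struct my_task_data *mtd)
{
    spin_lock(&mtd->lock);
    if (mtd->real_data) {
        /* ... work with mtd->real_data while holding the lock ... */
    }
    spin_unlock(&mtd->lock);
}

/* The thread that tears the object down. */
static void delete_shared_object(struct my_task_data *mtd)
{
    void *old;

    spin_lock(&mtd->lock);
    old = mtd->real_data;
    mtd->real_data = NULL;
    spin_unlock(&mtd->lock);

    kfree(old);    /* or free it however it was actually allocated */
}
To decide when my_task_data itself can be freed, you can additionally give it a reference counter (for example a struct kref) that is taken in your clone() handling and dropped at thread exit, so the structure disappears only after the last thread of the group is gone.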

Related

Golang: are global variables protected from garbage collection?

I'm fairly new to Golang. I'm working on an application that builds an in-memory object-oriented data model (basically an ORM) to support the application functionality. I realize this isn't really idiomatic Go but it makes sense in this situation.
All my core objects are allocated on the heap then stored in global (though not necessarily exported) map structures that allow the code to look them up based on database IDs. Objects that reference instances of other objects have pointer fields in their structure definitions.
I was under the impression that any data that can be reached from a global variable is protected from being garbage collected. However, I am seeing intermittent cases of pointer references apparently becoming nil over time. If I restart the application, and rebuild the object model, then try the same operation, the problem disappears.
Is GC freeing my memory out from under me? Or should I look elsewhere to understand this problem? And if the answer to my first question is yes... how can I stop this from happening?
The garbage collector does not free memory as long as it is reachable. Global or package level variables are accessible during the whole lifetime of your app, so they can't be freed by the GC.
If you see the opposite, that is definitely a bug or mistake on your part (unless the Go runtime itself has a bug). For example, you may have a data race initializing or accessing your global variables, or you (or some library you use) may use package unsafe or the uintptr type incorrectly. Quoting from the documentation of unsafe.Pointer:
A uintptr is an integer, not a reference. Converting a Pointer to a uintptr creates an integer value with no pointer semantics. Even if a uintptr holds the address of some object, the garbage collector will not update that uintptr's value if the object moves, nor will that uintptr keep the object from being reclaimed.

cdev_alloc() vs cdev_init()

In Linux kernel modules, two different approaches can be followed when creating a struct cdev, as suggested in this site and in this answer:
First approach, cdev_alloc()
struct cdev *my_dev;
...
static int __init example_module_init(void) {
    ...
    my_dev = cdev_alloc();
    if (my_dev != NULL) {
        my_dev->ops = my_fops; /* The file_operations structure */
        my_dev->owner = THIS_MODULE;
    }
    else
        ...
}
Second approach, cdev_init()
static struct cdev my_cdev;
...
static int __init example_module_init(void) {
    ...
    cdev_init(&my_cdev, my_fops);
    my_cdev.owner = THIS_MODULE;
    ...
}
(assuming that my_fops is a pointer to an initialized struct file_operations).
Is the first approach deprecated, or still in use?
Can cdev_init() be used also in the first approach, with cdev_alloc()? If no, why?
The second question is also in a comment in the linked answer.
Can cdev_init() be used also in the first approach, with cdev_alloc()?
No, cdev_init shouldn't be used for a character device allocated with cdev_alloc.
To some extent, cdev_alloc is equivalent to kmalloc plus cdev_init, so calling cdev_init on a character device created with cdev_alloc makes no sense.
Moreover, a character device allocated with cdev_alloc contains a hint that the device should be deallocated when it is no longer used. Calling cdev_init on that device clears the hint, so you get a memory leak.
The choice between cdev_init and cdev_alloc depends on the lifetime you want the character device to have.
Usually, one wants the lifetime of the character device to be the same as the lifetime of the module. In that case (see the sketch after this list):
Define a static or global variable of type struct cdev.
Create the character device in the module's init function using cdev_init.
Destroy the character device in the module's exit function using cdev_del.
Make sure that file operations for the character device have .owner field set to THIS_MODULE.
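A minimal sketch of this first pattern (the device name "example", the example_fops operations and the use of alloc_chrdev_region are illustrative assumptions; error handling is abbreviated):
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/cdev.h>

static dev_t example_devt;
static struct cdev example_cdev;

static const struct file_operations example_fops = {
    .owner = THIS_MODULE,
    /* .open, .read, .write, ... */
};

static int __init example_init(void)
{
    int ret = alloc_chrdev_region(&example_devt, 0, 1, "example");
    if (ret)
        return ret;

    cdev_init(&example_cdev, &example_fops);
    example_cdev.owner = THIS_MODULE;

    ret = cdev_add(&example_cdev, example_devt, 1);
    if (ret)
        unregister_chrdev_region(example_devt, 1);
    return ret;
}

static void __exit example_exit(void)
{
    cdev_del(&example_cdev);
    unregister_chrdev_region(example_devt, 1);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");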
In more complex cases, one wants to create a character device at a specific point after module initialization. For example, a module could provide a driver for some hardware, and a character device should be bound to that hardware. In that case the character device cannot be created in the module's init function (because the hardware has not been detected yet) and, more importantly, it cannot be destroyed in the module's exit function. In that case (a sketch follows below):
Define a field of pointer type struct cdev * inside the structure describing the hardware.
Create the character device with cdev_alloc in the function which creates (probes) the hardware.
Destroy the character device with cdev_del in the function which destroys (disconnects) the hardware.
In the first case, cdev_del is called at a time when the character device is not in use by any user. That guarantee is provided by THIS_MODULE in the file operations: a module cannot be unloaded while a file corresponding to the character device is open.
In the second case there is no such guarantee (because cdev_del is NOT called in the module's exit function), so at the time cdev_del returns, the character device may still be in use by a user. This is where cdev_alloc really matters: deallocation of the character device is deferred until the user closes all file descriptors associated with it. Such behavior cannot be obtained without cdev_alloc.
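And a minimal sketch of the second pattern (struct my_hw, its devt field and the probe/remove names are illustrative assumptions; allocating the device number and error handling are omitted):
#include <linux/fs.h>
#include <linux/cdev.h>
#include <linux/errno.h>

struct my_hw {
    dev_t devt;            /* assumed to be allocated elsewhere */
    struct cdev *cdev;     /* created when the hardware appears */
};

extern const struct file_operations my_hw_fops;   /* has .owner = THIS_MODULE */

static int my_hw_probe(struct my_hw *hw)
{
    hw->cdev = cdev_alloc();
    if (!hw->cdev)
        return -ENOMEM;

    hw->cdev->ops = &my_hw_fops;
    hw->cdev->owner = THIS_MODULE;

    return cdev_add(hw->cdev, hw->devt, 1);
}

static void my_hw_remove(struct my_hw *hw)
{
    /* The struct cdev itself is only freed after the last user closes
     * the corresponding file; that is what cdev_alloc buys us here. */
    cdev_del(hw->cdev);
}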
They do different things. The usual preference applies: avoid dynamic allocation when it is not needed and use a variable with static storage duration when possible.
cdev_alloc() dynamically allocates my_dev, so the structure is kfree'd after cdev_del(), once the last reference to it is gone.
cdev_init() will not free the structure, because the cdev layer did not allocate it.
Most importantly, the lifetime of the structure is different. In the cdev_init() case, struct cdev my_cdev lives as long as the variable that holds it (here, a static variable), while cdev_alloc() returns a dynamically allocated structure that remains valid until it is freed.

In node* newNode = new node(), which address exactly is returned by new here?

node* newNode = new node();
Here node is a typical linked-list node class, and newNode is the pointer used to dynamically create a new node containing int data and node* next members. Which address exactly does the new keyword return and store in newNode here?
For instance, in int* p = arr;, it is specifically the address of arr (i.e. of arr[0]) that is stored.
There is a lot happening behind the scenes when you use the new keyword. "Which address gets returned by the new keyword" is not the primary thing to focus on; rather, you need to know how a program (more specifically, a process) deals with memory via the operating system.
What is a process?
A process is more than the program code, which is sometimes referred to as the text section; it also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers.
A process generally includes:
the process stack, which contains temporary data such as function parameters, return addresses and local variables;
the data section, which contains global variables;
a heap area (the important one for your question), which is memory that is dynamically allocated during the process's runtime.
How is memory allocated dynamically?
The new operator allocates memory, constructs the object in that memory, and then returns a pointer containing the address of the memory that has been allocated.
new int; // dynamically allocate an integer.
Most often, we’ll assign the return value to our own pointer variable so we can access the allocated memory later.
int *ptr{ new int };
We can then perform indirection through the pointer to access the memory:
*ptr = 7;
Also, some points worth mentioning:
When you request memory with the new keyword, the allocator first searches the heap area (mentioned above) for free memory that can fulfill the request, asking the operating system for more when needed; if memory is found, it is allocated to your object.
When the process terminates, its memory is returned to the OS so that it can be used by other processes.
Reference: Operating System Concepts by Silberschatz, Galvin and Gagne.

What happens when the raw pointer from shared_ptr get() is deleted?

I wrote some code like this:
shared_ptr<int> r = make_shared<int>();
int *ar = r.get();
delete ar; // report double free or corruption
// still some code
When execution reached delete ar;, the program crashed and reported "double free or corruption". I'm confused: why a double free? r is still in scope and has not been destroyed yet. Does the delete operator do something magic? Does it know that the raw pointer is currently managed by a smart pointer, and is the counter in r then decremented to zero automatically?
I know this operation is not recommended, but I want to know why it fails.
You are deleting a pointer that didn't come from new, so you have undefined behavior (anything can happen).
From cppreference on delete:
For the first (non-array) form, expression must be a pointer to an object type or a class type contextually implicitly convertible to such pointer, and its value must be either null or pointer to a non-array object created by a new-expression, or a pointer to a base subobject of a non-array object created by a new-expression. If expression is anything else, including if it is a pointer obtained by the array form of new-expression, the behavior is undefined.
If the allocation is done by new, we can be sure that the pointer we have is something we can use delete on. But in the case of shared_ptr.get(), we cannot be sure if we can use delete because it might not be the actual pointer returned by new.
shared_ptr<int> r = make_shared<int>();
There is no guarantee that this will call new int (which isn't strictly observable by the user anyway) or, more generally, new T (which is observable with a user-defined, class-specific operator new); in practice, it won't (although there is no guarantee that it won't).
The discussion that follows isn't just about shared_ptr, but about "smart pointers" with ownership semantics. For any owning smart pointer smart_owning:
The primary motivation for make_owning instead of smart_owning<T>(new T) is to avoid ever having a memory allocation without an owner; that was essential in C++ back when the order of evaluation of expressions didn't guarantee that a sub-expression in an argument list was evaluated immediately before the call of its enclosing function; historically in C++:
f (smart_owning<T>(new T), smart_owning<U>(new U));
could be evaluated as:
T *temp1 = new T;
U *temp2 = new U;
auto &&temp3 = smart_owning<T>(temp1);
auto &&temp4 = smart_owning<U>(temp2);
This way temp1 and temp2 are not managed by any owning object for a non-trivial amount of time:
obviously new U can throw an exception
constructing an owning smart pointer usually requires the allocation of (small) resources and can throw
So either temp1 or temp2 could be leaked (but not both) if an exception was thrown, which was the exact problem we were trying to avoid in the first place. This means that composite expressions involving the construction of owning smart pointers were a bad idea; this, however, is fine:
auto &&temp_t = smart_owning<T>(new T);
auto &&temp_u = smart_owning<U>(new U);
f (temp_t, temp_u);
Usually, expressions with as many sub-expressions and function calls as f(smart_owning<T>(new T), smart_owning<U>(new U)) are considered reasonable (it is a pretty simple expression in terms of the number of sub-expressions), so disallowing such expressions is quite annoying and very difficult to justify.
[This is one reason, and in my opinion the most compelling one, why the non-determinism of the order of evaluation was removed by the C++ standardisation committee, so that such code is now safe. (This was an issue not just for allocated memory, but for any managed resource, like file descriptors, database handles...)]
Because code frequently needed to do things such as smart_owning<T>(allocate_T()) in sub-expressions, and because telling programmers to decompose moderately complex expressions involving allocation into many simple lines wasn't appealing (more lines of code doesn't mean easier to read), the library writers provided a simple fix: a function that performs the creation of an object with dynamic lifetime and the creation of its owning object together. That solved the order-of-evaluation problem (though it was complicated at first because it needed perfect forwarding of the constructor arguments).
Giving two tasks to a function (allocating an instance of T and constructing an instance of smart_owning) gives the freedom to do an interesting optimization: you can avoid one dynamic allocation by putting the managed object and its owner next to each other.
But once again, that was not the primary purpose of functions like make_shared.
Because exclusive-ownership smart pointers by definition don't need a reference count, and by definition don't need to share the deleter data between instances either, they can keep that data in the "smart pointer" itself (*); so no additional allocation is needed for the construction of a unique_ptr. Yet a make_unique function template was added anyway, to avoid the dangling-pointer issue, not to optimize away an allocation that isn't done in the first place.
(*) which, by the way, means unique-owner "smart pointers" do not have pointer semantics, as pointer semantics imply that you can make copies of the "pointer", and you can't have two copies of a unique owner pointing to the same instance; "smart pointers" were never pointers anyway, the term is misleading.
Summary:
make_shared<T> does an optional optimization where there is no separate dynamic memory allocation for T: there is no operator new(sizeof (T)). There is obviously still the creation of an instance with dynamic lifetime with another operator new: placement new.
If you replace the explicit memory deallocation with an explicit destruction and add a pause immediately after that point:
class C {
public:
~C();
};
shared_ptr<C> r = make_shared<C>();
C *ar = r.get();
ar->~C();
pause(); // stops the program forever
The program will probably run fine; it is still illogical, indefensible, incorrect to explicitly destroy an object managed by a smart pointer. It isn't "your" resource. If pause() could exit with an exception, the owning smart pointer would try to destroy the managed object which doesn't even exist anymore.
It of course depends on how the library implements make_shared; however, the most probable implementation is the following:
std::make_shared allocates one block for two things:
shared pointer control block
contained object
std::make_shared() invokes the memory allocator once and then calls placement new twice to initialize (call the constructors of) those two things:
|<-------- block requested from allocator -------->|
| shared_ptr control block |        X object       |
#1                         #2                      #3
That means the memory allocator has provided one big block whose address is #1.
The shared pointer then uses it for the control block (starting at #1) and the actual contained object (starting at #2).
When you invoke delete on the actual object kept by the shared_ptr (obtained via .get()), you call delete on #2.
Because #2 is not an address known to the allocator, you get a corruption error.
See here; I quote:
std::shared_ptr is a smart pointer that retains shared ownership of an object through a pointer. Several shared_ptr objects may own the same object. The object is destroyed and its memory deallocated when either of the following happens:
the last remaining shared_ptr owning the object is destroyed;
the last remaining shared_ptr owning the object is assigned another pointer via operator= or reset().
The object is destroyed using delete-expression or a custom deleter that is supplied to shared_ptr during construction.
So the pointer is deleted by the shared_ptr; you're not supposed to delete the stored pointer yourself.
UPDATE:
I didn't realize that there were more statements after the delete and that the shared_ptr was not yet out of scope; I'm sorry.
I kept reading; the standard doesn't say much about the behavior of get(), but here is a note, which I quote:
A shared_ptr may share ownership of an object while storing a pointer to another object. get() returns the stored pointer, not the managed pointer.
So it appears to be allowed that the pointer returned by get() is not the same pointer that was allocated by the shared_ptr (presumably using new), and deleting that pointer is therefore undefined behavior. I will look into the details a little more.
UPDATE 2:
The standard says at § 20.7.2.2.6 (about make_shared):
6 Remarks: Implementations are encouraged, but not required, to perform no more than one memory allocation. [ Note: This provides efficiency equivalent to an intrusive smart pointer. — end note ]
7 [ Note: These functions will typically allocate more memory than sizeof(T) to allow for internal bookkeeping structures such as the reference counts. — end note ]
So a specific implementation of make_shared could allocate a single chunk of memory (or more) and use part of that memory for the stored object (but maybe not all of the allocated memory). get() must return a pointer to the stored object, but there is no requirement by the standard, as said previously, that the pointer returned by get() has to be one allocated by new. So deleting that pointer is undefined behavior; you got a raised signal, but anything could happen.

Associate text with a mutex

I have a program that checks that only one copy of itself is running (C++ pseudocode):
int main()
{
    HANDLE h_mutex = CreateMutex(NULL, TRUE, "MY_APP_NAME");
    if ( !h_mutex )
    {
        ErrorMessage("System object already exists");
        return EXIT_FAILURE;
    }
    else if ( GetLastError() == ERROR_ALREADY_EXISTS )
    {
        ErrorMessage("App is already running");
        return EXIT_FAILURE;
    }
    // rest of code
    ReleaseMutex(h_mutex);
    CloseHandle(h_mutex);
    return 0;
}
I would like to improve the error message "App is already running", and instead have it say "App is already running - started by USER at DATETIME, pid PID, OTHERINFO".
Is it possible for the first instance of my application to "register" a text string when creating the mutex (or just after that), so that when another instance of my application detects that the mutex already exists, it can retrieve that text string and display that information?
You could use CreateFileMapping and MapViewOfFile to share a structure between the existing process and the newly started process. You would also need to create a named event, in addition to the mutex you already create, to ensure that any information you store in the mapping is initialized before you try to read it in the new process.
The basic process would be:
Create the mutex as you do now.
If the mutex did not previously exist, use CreateFileMapping to create a named mapping backed by the page file (you'll pass INVALID_HANDLE_VALUE as the file handle). Use MapViewOfFile to map the section into the process address space. Initialize the contents of the shared memory with whatever information you want to share; remember that the address of the shared block will (likely) be different between processes, so don't store any pointers in the data. If you must, use offsets from the mapped base address to make references (only within the shared section). Use CreateEvent to create a named manual-reset event, and use SetEvent to set it.
If the mutex existed previously, use CreateEvent to create the named event mentioned in the previous paragraph. Use WaitForSingleObject (or any other wait function) to wait for the named event to become signaled; this wait ensures that the original process has had a chance to initialize the contents of the shared section. Then use CreateFileMapping and MapViewOfFile to map the shared section into the process address space and read whatever information you chose to store in the shared area.
Eventually, CloseHandle everything and exit.
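A minimal sketch of that flow in plain Win32 C (the structure app_info_t and the object names "MY_APP_INFO" and "MY_APP_READY" are illustrative assumptions, and error handling is kept short):
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    DWORD pid;
    char  user[64];
    char  started[32];    /* e.g. "2024-01-01 12:00:00" */
} app_info_t;

int main(void)
{
    HANDLE h_mutex = CreateMutexA(NULL, FALSE, "MY_APP_NAME");
    BOOL first = (GetLastError() != ERROR_ALREADY_EXISTS);

    /* Page-file-backed named mapping shared by both instances. */
    HANDLE h_map = CreateFileMappingA(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                      0, sizeof(app_info_t), "MY_APP_INFO");
    /* Manual-reset event, signaled once the first instance has filled the mapping. */
    HANDLE h_ready = CreateEventA(NULL, TRUE, FALSE, "MY_APP_READY");
    app_info_t *info = (app_info_t *)MapViewOfFile(h_map, FILE_MAP_ALL_ACCESS,
                                                   0, 0, sizeof(app_info_t));
    if (!h_mutex || !h_map || !h_ready || !info)
        return EXIT_FAILURE;

    if (first) {
        /* First instance: publish our details, then signal readiness. */
        info->pid = GetCurrentProcessId();
        /* ... fill in user name, start time, other info ... */
        SetEvent(h_ready);
        /* ... rest of the application ... */
    } else {
        /* Second instance: wait until the data is initialized, then read it. */
        WaitForSingleObject(h_ready, INFINITE);
        printf("App is already running - pid %lu\n", (unsigned long)info->pid);
    }

    UnmapViewOfFile(info);
    CloseHandle(h_ready);
    CloseHandle(h_map);
    CloseHandle(h_mutex);
    return first ? 0 : EXIT_FAILURE;
}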
As a side note, you do not need to take ownership of the mutex when creating it. The mutex in this case is really just a named object whose prior existence you can detect when you try to create it. You could use a semaphore, an event, or even the shared section from CreateFileMapping itself.
There are many ways to do this. You can store the text in a file and read it back when another instance of your application starts. Or you can store it in the registry. Or you can send a message to the existing application's window. There are still other ways to do this kind of thing; you should decide which one fits your application best.
