I want a named shared memory region that survives process termination. The memory region and its data should remain accessible even if, temporarily, no program holds an open handle to it. Only a reboot should "release" the shared resource. How can I do this?
My current solution is a keep-alive process that holds an open handle.
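A minimal sketch of that keep-alive approach, assuming a pagefile-backed named mapping (the name, the 64 KiB size, and the Global\ namespace below are placeholders; creating objects under Global\ may require additional privileges):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    // Pagefile-backed section: no file on disk; the contents live in the
    // kernel section object for as long as at least one handle or view exists.
    HANDLE hMapping = CreateFileMappingW(
        INVALID_HANDLE_VALUE,       // use the paging file, not a disk file
        nullptr,                    // default security
        PAGE_READWRITE,
        0, 64 * 1024,               // size high/low: 64 KiB (placeholder)
        L"Global\\MySharedRegion"); // placeholder name that clients open
    if (hMapping == nullptr)
    {
        std::fprintf(stderr, "CreateFileMapping failed: %lu\n", GetLastError());
        return 1;
    }

    // Hold the handle open indefinitely so the section survives even when
    // no client currently has it open.
    Sleep(INFINITE);
    CloseHandle(hMapping); // never reached
    return 0;
}
```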
I have an MFC C++ application that usually runs constantly in the system tray.
It allocates a very extensive tree of objects in memory, which takes several seconds to free when the application needs to shut down.
All my objects are allocated using new and typically freed using delete.
If I just skip deleting all the objects in order to quit faster, what are the effects, if any?
Does Windows realize the process is dead and reclaim the memory automatically?
I know that not freeing allocated memory is almost sacrilegious, but thought I would ask to see what everyone else thinks.
The application only shuts down when either the user's system shuts down or the user chooses to shut the program down themselves.
When a process terminates, the system reclaims all of its resources. This includes releasing open handles to kernel objects and freeing allocated memory. Not freeing memory yourself before process termination has no adverse effect on the operating system.
You will find substantial information about the steps performed during process termination under Terminating a Process. With respect to your question, this is the relevant section:
Terminating a process has the following results:
...
Any resources allocated by the process are freed.
You probably should not skip the cleanup step in your debug builds, though; otherwise you will not get memory-leak diagnostics for real memory leaks.
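As a sketch of how that split can look with the CRT debug heap (MFC debug builds give you similar leak reporting automatically; `Node` here is just a stand-in for your object tree):

```cpp
#include <crtdbg.h>

struct Node { Node* child = nullptr; /* ... */ };  // stand-in for the tree

int main()
{
#ifdef _DEBUG
    // Debug build: ask the CRT to dump anything still allocated at exit,
    // so skipped deletes would show up as (false) leaks.
    _CrtSetDbgFlag(_CRTDBG_ALLOC_MEM_DF | _CRTDBG_LEAK_CHECK_DF);
#endif

    Node* root = new Node;  // the expensive object tree

#ifdef _DEBUG
    delete root;  // full cleanup, so the leak report shows only real leaks
#endif
    // Release build: return without deleting; Windows reclaims the memory.
    return 0;
}
```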
If, in testing on a computer without a debugger (say, a client's computer), I encounter a bug that may have corrupted the state of the program but not actually crashed it, I know I can take a memory dump using the Windows Task Manager (right-click the process name and choose "Create dump file").
I can use such dumps with WinDbg to peek around in memory, etc., but what would be most useful to me is to be able to restore the dump into memory so that I can continue interacting with the program. Is this possible? If so, how? Is there a tool that can restore it, or do I need to write my own?
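For reference, the kind of dump Task Manager creates can also be captured programmatically with DbgHelp's MiniDumpWriteDump; a rough sketch for the current process (the output path is whatever you pass in; link with Dbghelp.lib):

```cpp
#include <windows.h>
#include <dbghelp.h>
#pragma comment(lib, "dbghelp.lib")

// Write a full-memory dump of the current process to 'path'.
bool WriteFullDump(const wchar_t* path)
{
    HANDLE hFile = CreateFileW(path, GENERIC_WRITE, 0, nullptr,
                               CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (hFile == INVALID_HANDLE_VALUE)
        return false;

    // MiniDumpWithFullMemory captures all user-mode memory, comparable to
    // the dump Task Manager creates.
    BOOL ok = MiniDumpWriteDump(GetCurrentProcess(), GetCurrentProcessId(),
                                hFile, MiniDumpWithFullMemory,
                                nullptr, nullptr, nullptr);
    CloseHandle(hFile);
    return ok != FALSE;
}
```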
Typical user-mode dumps or minidumps do not contain enough information to do so. While they contain all user-mode memory, they do not contain kernel memory, so open handles to kernel resources such as files or network sockets will not be included in the dump (and even if they were, the disk contents have most likely changed since, so just writing to the disk could corrupt your system even more).
The only way I see to restore a memory dump is to restore the full memory plus all other state, such as the hard disk, which can be done with most virtual machine software (restoring will, however, disconnect all your network connections; fortunately, most programs handle lost network connections better than lost file handles).
I discovered that I could do this with Hyper-V snapshots. If I run my program in a virtual machine, I can optionally dump the memory, create a snapshot, transfer the dump if necessary, come back some time later, restore the snapshot and continue the program.
I'm trying to use a memory-mapped file under Windows (using the CreateFile/CreateFileMapping/MapViewOfFile functions), and I'm currently specifying FILE_SHARE_READ and FILE_SHARE_WRITE when calling CreateFile. However, this still locks the file, preventing other processes from using it.
What I want is to memory-map a snapshot of the file as it exists at the moment I call CreateFileMapping or MapViewOfFile, so that I don't see any changes (writes or deletes) made to the file by other processes afterwards. Sort of like copy-on-write, but with the other processes doing the writing. Can I do this using memory-mapped files on Windows?
That's just not how memory-mapped files work. Windows puts a hard lock on the file so that nobody can change its content and make it inconsistent with the pages mapped into RAM. Those pages in RAM are shared among all processes that created a view of the file. There is no 'tear off' option.
You could simply map the file and make a copy of the bytes in the view. Some synchronization with the other processes would typically be required.
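A rough sketch of that copy approach (error handling trimmed; the snapshot is only consistent if you coordinate with the writers somehow):

```cpp
#include <windows.h>
#include <vector>

// Map the file, copy its bytes into a private buffer, then release
// everything so the file is no longer pinned by our mapping.
bool SnapshotFile(const wchar_t* path, std::vector<char>& out)
{
    HANDLE hFile = CreateFileW(path, GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (hFile == INVALID_HANDLE_VALUE)
        return false;

    bool ok = false;
    LARGE_INTEGER size{};
    HANDLE hMap = CreateFileMappingW(hFile, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (hMap && GetFileSizeEx(hFile, &size))
    {
        const char* view = static_cast<const char*>(
            MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));
        if (view)
        {
            out.assign(view, view + size.QuadPart);  // the private snapshot
            UnmapViewOfFile(view);
            ok = true;
        }
    }
    if (hMap) CloseHandle(hMap);
    CloseHandle(hFile);  // nothing keeps the file open after this returns
    return ok;
}
```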
I was working with shared memory when this question came to mind, so I thought I would ask the experts:
What happens to the shared memory if one of the processes sharing it is killed? What happens if we do a hard kill rather than a normal kill?
Does it depend on the mechanism used for the shared memory?
If it matters, I am working on Windows.
Provided at least one other process still holds an open handle to the file mapping, I would expect the shared memory to remain intact.
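A small sketch that demonstrates this lifetime rule with a named, pagefile-backed mapping (the object name and 4 KiB size are placeholders). Run one instance to create the region, a second with the argument `open`, then kill the creator however you like; the opener still sees the data because its handle keeps the section alive:

```cpp
#include <windows.h>
#include <cstdio>
#include <cstring>
#include <cwchar>

int wmain(int argc, wchar_t** argv)
{
    const wchar_t* name = L"Local\\DemoSharedMem";  // placeholder object name
    bool opener = (argc > 1 && wcscmp(argv[1], L"open") == 0);

    HANDLE h = opener
        ? OpenFileMappingW(FILE_MAP_ALL_ACCESS, FALSE, name)
        : CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                             PAGE_READWRITE, 0, 4096, name);
    if (!h) { std::fprintf(stderr, "error %lu\n", GetLastError()); return 1; }

    char* view = static_cast<char*>(MapViewOfFile(h, FILE_MAP_ALL_ACCESS, 0, 0, 0));
    if (!view) { CloseHandle(h); return 1; }

    if (!opener)
        std::strcpy(view, "hello from the creator");

    // Kill the creator while the opener sits here: the opener's handle and
    // view keep the section alive, so the text remains readable.
    std::printf("contents: %s\n", view);
    std::getchar();  // hold the handle open until Enter is pressed

    UnmapViewOfFile(view);
    CloseHandle(h);  // the last close discards the memory
    return 0;
}
```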
I want to call the Windows API OpenProcess function on another process running on the machine. Will this always cause the executable file of the process I am opening to be write-locked? Or does it depend on the access rights I request?
Yes, it is a fundamental property of Windows. When an executable file (EXE or DLL) gets loaded, Windows creates a memory-mapped view of it. Chunks of code or data from the executable are page-faulted into RAM as needed to keep the program running. It works the other way around too: when Windows needs to make RAM available for another program, it throws away chunks of mapped pages, the ones that haven't been used in a while. Pages that hold code don't take up space in the paging file; they can simply be reloaded from the executable file.
Very efficient, and it is code that was written when 16 megabytes of RAM was a luxury. The memory-mapped section keeps a write lock on the file. That is still useful in this day and age: it prevents malware from messing with the code of a running process.
The process's executable file is locked while the process is running; this has nothing to do with OpenProcess. The file is unlocked when the process terminates.
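You can see the lock directly by trying to open a running program's image for writing; the attempt fails with ERROR_SHARING_VIOLATION whether or not anyone ever called OpenProcess (the path below is a placeholder):

```cpp
#include <windows.h>
#include <cstdio>

int main()
{
    // Placeholder: path of an executable that is currently running.
    const wchar_t* exe = L"C:\\Path\\To\\Running.exe";

    HANDLE h = CreateFileW(exe, GENERIC_WRITE, 0, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (h == INVALID_HANDLE_VALUE && GetLastError() == ERROR_SHARING_VIOLATION)
        std::puts("image is write-locked by its memory-mapped section");
    else if (h != INVALID_HANDLE_VALUE)
        CloseHandle(h);  // the process has exited; the lock is gone
    return 0;
}
```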