Effects of not freeing memory in a Windows application?

I have an MFC C++ application that usually runs constantly in the system tray.
It allocates a very extensive tree of objects in memory, which takes several seconds to free when the application needs to shut down.
All my objects are allocated using new and typically freed using delete.
If I just skip deleting all the objects in order to quit faster, what are the effects, if any?
Does Windows realize the process is dead and reclaim the memory automatically?
I know that not freeing allocated memory is almost sacrilegious, but thought I would ask to see what everyone else thinks.
The application only shuts down when either the user's system shuts down, or when they choose to shut the program down themselves.

When a process terminates, the system reclaims all of its resources. This includes releasing open handles to kernel objects and freeing allocated memory. Not freeing memory before process termination has no adverse effect on the operating system.
You will find substantial information about the steps performed during process termination at Terminating a process. With respect to your question the following is the relevant section:
Terminating a process has the following results:
...
Any resources allocated by the process are freed.
You probably should not skip the cleanup step in your debug builds though. Otherwise you will not get memory leak diagnostics for real memory leaks.

Related

Detect unclean filesystem shutdown

I have a project where we manipulate large amounts of cached data using memory mapped files. We use Windows 10, NTFS and .NET.
When the user starts the application, we detect whether the previous program session was shut down correctly, and if so we reuse the cache.
However, this is a pain for developers when debugging. It's quite common to just stop the program being debugged. At next startup, the cached data needs to be recalculated, which takes time and is annoying.
So, we've been thinking we could introduce a 'transaction log', so that we can recover even if the previous shutdown was unclean.
Now for the actual problem.
There seem to be no guarantees about the order in which memory-mapped files are flushed. If the program is simply stopped, there is no problem, since the entire memory-mapped file will be flushed to disk by the operating system. The problem is a power cut: in that case there are no guarantees what state the file is in. Our "transaction log" doesn't help either, unless we always flush the transaction log to disk before modifying the cache. That would defeat the purpose of our architecture, since it would introduce unacceptable performance penalties.
If we could somehow know that our memory-mapped file on disk was previously left in a state where the OS didn't manage to flush all pages before operating system shutdown, we could just throw the entire file away at next startup. There would be a delay, but it would be totally acceptable since it would only occur after a power cut or similar event.
When the operating system boots, it knows that the file is possibly corrupt, because it knows the filesystem was not cleanly unmounted.
And finally, my question:
Is there some way to ask Windows if the file system was clean when it was mounted?
NTFS periodically commits its own logs, so there's a window in which a power failure could occur and NTFS would (correctly) state that the volume (as in, "NTFS DATA", not user data) is clean.
You will likely have to do what databases do, which is to lock your cache into physical memory so that you can control when writes reach the disk.

Is there a way to free the memory used by threads when debugging in Android?

I'm having some problems debugging an Android app which runs a string of memory-intensive operations on bitmaps. From Google's Debugging tips, I know that
The debugger and garbage collector are currently loosely integrated. The VM guarantees that any object the debugger is aware of is not garbage collected until after the debugger disconnects. This can result in a buildup of objects over time while the debugger is connected. For example, if the debugger sees a running thread, the associated Thread object is not garbage collected even after the thread terminates.
Unfortunately, this means that while my app runs fine in release mode, any memory-intensive thread running in debug mode is ignored by the garbage collector and kept around. More and more memory is used as more memory-intensive threads are created, and the app eventually crashes because it fails to allocate the memory it needs.
Is there any way to explicitly tell the garbage collector that these threads should be collected, or some other way around this issue?
I eventually solved this by spawning an AsyncTask rather than a Thread. It seems that AsyncTasks are cleared up more readily by the garbage collector and hence I was able to run the app in debug mode without issue.
An AsyncTask is the recommended way of spawning background operations. Any work to be done in the background should be placed in the task's doInBackground(Params...) method. AsyncTasks are usually intended to perform actions on the UI thread after completion, but you can avoid disturbing the UI thread by simply leaving the onPostExecute(Result) method empty or absent.

does opencl release all device memory after process termination?

On Linux I used to be sure that whatever resources a process allocates, they are released after process termination. Memory is freed, open file descriptors are closed. No memory is leaked when I loop starting and terminating a process several times.
Recently I've started working with opencl.
I understand that the OpenCL compiler keeps compiled kernels in a cache. So when I run a program that uses the same kernels as a previous run (or probably even the same kernels from another process), they don't need to be compiled again. I guess that cache is on the device.
From that behaviour I suspect that maybe allocated device-memory might be cached as well (maybe associated with a magic cookie for later reuse or something like that) if it was not released explicitly before termination.
So I pose this question to rule out any such suspicion.
kernels survive in cache => other memory allocations survive somehow ???
My short answer would be yes, based on this tool: http://www.techpowerup.com/gpuz/
I'm investigating a memory leak on my device and I noticed that memory is freed when my process terminates... most of the time. If you have a memory leak like me, it may linger around even after the process is finished.
Another tool that may help is http://www.gremedy.com/download.php
but it's really buggy, so use it judiciously.

What happens to shared memory if one of the processes sharing the memory is killed?

I was working on shared memory and this question came to mind, so I thought I would ask the experts:
What happens to the shared memory if one of the processes sharing it is killed? What happens if we do a hard kill rather than a normal kill?
Is it dependent on the mechanism we use for shared memory?
If it matters, I am working on Windows.
Provided at least one other thread in another process has an open handle to the file mapping, I would expect the shared memory to remain intact.

Memory is not becoming available even after freeing it

I am writing a simple memory manager for my application which will free any excess memory being used by my modules, so that it's available again for use. My modules allocate memory using alloc_page & kmem_cache_alloc and so the memory manager uses put_page & kmem_cache_free respectively to free up the memory which is not in use.
The problem I am facing is that even after I free the memory using put_page and kmem_cache_free, my modules are not able to get the freed memory back. I have written test code which allocates a lot of memory in a loop and, when out of memory, sleeps until the memory manager frees some. The memory manager successfully executes its free code and wakes up the sleeping process, since memory should be available again. Interestingly, the alloc_page / kmem_cache_alloc calls still fail to allocate memory. I am clueless as to why this happens, so I'm seeking help.
I figured out the problem: it was the API I was using. If I use __free_page instead of put_page to free the memory pages, everything works absolutely fine.
