I am writing a simple memory manager for my application which frees any excess memory being used by my modules so that it's available for use again. My modules allocate memory using alloc_page and kmem_cache_alloc, so the memory manager uses put_page and kmem_cache_free, respectively, to free memory that is not in use.
The problem I am facing is that even after I free the memory with put_page and kmem_cache_free, my modules are not able to get it back. I have written test code that allocates a lot of memory in a loop and, when it runs out, sleeps until the memory manager frees some. The memory manager successfully runs its free code and wakes up the sleeping process, since memory should now be available. Yet the alloc_page / kmem_cache_alloc calls still fail. I am clueless as to why this happens, so I am seeking help.
I figured out the problem: it was the API I was using. If I use __free_page instead of put_page to free the memory pages, everything works fine.
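For reference, a minimal sketch of the pairing that ended up working (the helper name is hypothetical): pages obtained with alloc_page() go back via __free_page(), and slab objects obtained with kmem_cache_alloc() go back via kmem_cache_free().

    #include <linux/gfp.h>
    #include <linux/mm.h>
    #include <linux/slab.h>

    /* Hypothetical helper: release a page and a slab object obtained
     * earlier with alloc_page() and kmem_cache_alloc(). */
    static void release_resources(struct page *page,
                                  struct kmem_cache *cache, void *obj)
    {
            if (page)
                    __free_page(page);  /* not put_page() for pages the module owns outright */
            if (obj)
                    kmem_cache_free(cache, obj);
    }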
I am writing a server app which I want to efficiently use ALL available physical RAM of the machine when possible. The plan is that it will allocate physical pages using AWE until it detects that 99% of physical memory is in use, stop while 1% is still free, and release physical pages it doesn't need whenever free physical memory drops below 1%.
However, when I put this plan into practice, Windows seems to think that any time 99% of RAM is in use it would be a good idea to free up more physical memory, so it starts paging all sorts of things out to disk, and my system crashes.
How can I tell Windows that it is OK to have 99% of RAM in use, and that it doesn't need to page things out to disk until usage reaches whatever its default perceived ideal level is (I guess something like 90%...)?
Note: Raymond says 'Unless you are designing a system where you are the only program running on the computer, this is a bad idea.'
In this server scenario, the app is basically intended to be the only one running on the computer. But unfortunately there are some OS/background tasks that need to run...
But certainly I don't expect there is any other process on the computer indulging in this 'use all but 1% of RAM' behavior...?
Update: I've done more experimentation and started to wonder if I'm asking the wrong question. My assumption that Windows is being overeager may be wrong. Perhaps the question should instead have been: 'how can I determine how much physical RAM my process can safely use without compromising overall responsiveness on the machine?'
You can't. The Windows memory manager runs at a lower level than your program and knows nothing about it (and even if it did, it has no reason to assume your program is the good citizen you claim it to be: what if your program crashes, or has an off-by-one error in a loop that mallocs? What about other programs that need memory while yours is running? What about the thousand other scenarios the people who wrote the Windows memory manager ran into while writing it?).
Don't try to be cleverer than Windows. A more productive use of your time would be to consider whether your application really needs to allocate 99% of physical memory up front.
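If you only want a rough signal to throttle against, rather than pinning pages until the machine is starved, GlobalMemoryStatusEx reports current physical-memory availability. A minimal sketch, with the caveat that this is a point-in-time snapshot and says nothing about how much is "safe" for your process to take:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        MEMORYSTATUSEX ms = { sizeof(ms) };   /* dwLength must be set before the call */

        if (GlobalMemoryStatusEx(&ms)) {
            printf("physical: %llu MiB total, %llu MiB available (%lu%% in use)\n",
                   (unsigned long long)(ms.ullTotalPhys >> 20),
                   (unsigned long long)(ms.ullAvailPhys >> 20),
                   (unsigned long)ms.dwMemoryLoad);
        }
        return 0;
    }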
Using the Gorelic plugin, I've noticed that the virtual memory on my Heroku dyno is steadily climbing even though the memory usage looks fine. The virtual memory never appears to exceed 1000 MB, but when it gets close I start to notice performance degradation.
I've used pprof to profile the memory usage of my app and it looks free of leaks. Also, everything I've read suggests that Go reserves large blocks of virtual memory and that it's nothing to worry about.
Is it possible that the large virtual memory usage is affecting performance on my 1x Heroku dyno? Do I just need to use 2x dynos because Go is a memory hog? Is it possible to configure Go so that it uses less virtual memory?
Thanks!
I have an MFC C++ application that usually runs constantly in the system tray.
It allocates a very extensive tree of objects in memory, which causes the application to take several seconds to free them when it needs to shut down.
All my objects are allocated using new and typically freed using delete.
If I just skip deleting all the objects, in order to quit faster, what are the effects if any?
Does Windows realize the process is dead and reclaim the memory automatically?
I know that not freeing allocated memory is almost sacrilegious, but thought I would ask to see what everyone else thinks.
The application only shuts down when either the users system shuts down, or if they choose to shut the program down themselves.
When a process terminates, the system will reclaim all of its resources. This includes releasing open handles to kernel objects and allocated memory. If you do not free memory before process termination, it has no adverse effect on the operating system.
You will find substantial information about the steps performed during process termination at Terminating a process. With respect to your question, the following is the relevant section:
Terminating a process has the following results:
...
Any resources allocated by the process are freed.
You probably should not skip the cleanup step in your debug builds though. Otherwise you will not get memory leak diagnostics for real memory leaks.
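As a minimal sketch of what that diagnostic looks like with the CRT debug heap (MFC debug builds produce a similar leak dump), asking the runtime to report blocks still allocated at exit:

    #include <crtdbg.h>
    #include <stdlib.h>

    int main(void)
    {
    #ifdef _DEBUG
        /* Ask the CRT debug heap to dump still-allocated blocks at process exit. */
        _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);
    #endif
        void *p = malloc(64);  /* deliberately never freed: reported at exit in debug builds */
        (void)p;
        return 0;
    }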
I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB of RAM and operates as a web server. PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I normally see something along these lines:
                 total       used       free     shared    buffers     cached
    Mem:       8059080    2277924    5781156          0        948     310852
    -/+ buffers/cache:    1966124    6092956
    Swap:      4194296          0    4092668
Obviously, sooner or later the free memory drops and the cached memory increases, and I assume there is nothing wrong with that since the OS decides to cache it.
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in used swap. Doesn't this mean that the server no longer has free memory and is resorting to I/O instead? How can I find out which processes cause this?
I am asking this question on Stack Overflow because if I ask my hosting provider, I am sure they would ask for more money to increase the RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
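As for finding out which processes the swapped pages belong to: on reasonably recent kernels, each process reports its own swap usage in the VmSwap line of /proc/<pid>/status. A minimal sketch that walks /proc and prints that line per process:

    #include <ctype.h>
    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        DIR *proc = opendir("/proc");
        struct dirent *e;

        if (!proc)
            return 1;

        while ((e = readdir(proc)) != NULL) {
            char path[64], line[256];
            FILE *f;

            if (!isdigit((unsigned char)e->d_name[0]))
                continue;                       /* only numeric entries are PIDs */

            snprintf(path, sizeof(path), "/proc/%s/status", e->d_name);
            f = fopen(path, "r");
            if (!f)
                continue;

            while (fgets(line, sizeof(line), f)) {
                if (strncmp(line, "VmSwap:", 7) == 0) {
                    printf("pid %s  %s", e->d_name, line);  /* e.g. "VmSwap:  1234 kB" */
                    break;
                }
            }
            fclose(f);
        }
        closedir(proc);
        return 0;
    }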
On Linux, I used to be sure that whatever resources a process allocates are released after the process terminates: memory is freed and open file descriptors are closed. No memory is leaked when I start and terminate a process several times in a loop.
Recently I've started working with OpenCL.
I understand that the OpenCL compiler keeps compiled kernels in a cache, so when I run a program that uses the same kernels as a previous run (or probably even the same kernels from another process), they don't need to be compiled again. I guess that cache is on the device.
From that behaviour I suspect that allocated device memory might be cached as well (perhaps associated with a magic cookie for later reuse, or something like that) if it was not released explicitly before termination.
So I pose this question to rule out any such suspicion.
kernels survive in cache => other memory allocations survive somehow???
My short answer would be yes, based on this tool: http://www.techpowerup.com/gpuz/
I'm investigating a memory leak on my device, and I noticed that memory is freed when my process terminates... most of the time. If you have a memory leak like mine, it may linger around even after the process has finished.
Another tool that may help is http://www.gremedy.com/download.php, but it's really buggy, so use it judiciously.
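Whatever the driver does at process termination, it is worth releasing device allocations explicitly before exit rather than relying on that cleanup. A minimal sketch (the helper name is hypothetical) of the usual release calls, in reverse order of creation:

    #include <CL/cl.h>

    /* Hypothetical helper: release OpenCL objects explicitly before the process exits. */
    static void release_cl_resources(cl_mem buf, cl_kernel kernel, cl_program program,
                                     cl_command_queue queue, cl_context ctx)
    {
        if (buf)     clReleaseMemObject(buf);
        if (kernel)  clReleaseKernel(kernel);
        if (program) clReleaseProgram(program);
        if (queue)   clReleaseCommandQueue(queue);
        if (ctx)     clReleaseContext(ctx);
    }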