Kernel is not clearing the cache memory on memory depletion

I am using linux-2.6.32 and found strange behavior: the kernel is unable to relinquish cache memory on memory depletion. When this happens, the system goes for a reboot. To alleviate the problem I clear the cache manually using echo 1 > /proc/sys/vm/drop_caches.
My question is: why is the kernel unable to clear the cache memory on memory depletion?
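As an illustration of the manual workaround, here is a small sketch (assuming a Linux /proc filesystem; the script only performs the actual drop when run as root, and sync is needed first because drop_caches discards only clean pages):

```shell
#!/bin/sh
# Print the current page-cache size, then drop it if we have the
# privileges to do so. drop_caches only discards *clean* pages, so
# dirty data must be written back to disk first.
cached_kb() {
    awk '/^Cached:/ {print $2}' /proc/meminfo
}

echo "cached before: $(cached_kb) kB"

if [ "$(id -u)" -eq 0 ]; then
    sync                                # write dirty pages back first
    echo 1 > /proc/sys/vm/drop_caches   # 1=pagecache, 2=dentries/inodes, 3=both
    echo "cached after: $(cached_kb) kB"
else
    echo "not root, skipping drop_caches"
fi
```

Under normal operation this should never be necessary: the kernel is supposed to reclaim clean cache pages automatically under memory pressure, which is exactly why the behavior described above is surprising.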

Related

Golang Virtual Memory - heroku

Using the Gorelic plugin, I've noticed that the virtual memory on my Heroku dyno is steadily climbing even though the memory usage looks fine. The virtual memory never appears to exceed 1000M, but when it gets close I start to notice performance degradation.
I've used pprof to profile the memory usage of my app and it looks free of leaks. Also, everything I've read suggests that Go will reserve large blocks of virtual memory and that it's nothing to worry about.
Is it possible that the large virtual memory usage is affecting performance on my 1x Heroku dyno? Is it just that I need to use 2x dynos because Go is a memory hog? Is it possible to configure Go so that it uses less virtual memory?
Thanks!

Effects of not freeing memory in a Windows application?

I have an MFC C++ application that usually runs constantly in the system tray.
It allocates a very extensive tree of objects in memory, which causes the application to take several seconds to free them when it needs to shut down.
All my objects are allocated using new and typically freed using delete.
If I just skip deleting all the objects, in order to quit faster, what are the effects if any?
Does Windows realize the process is dead and reclaims the memory automatically?
I know that not freeing allocated memory is almost sacrilegious, but thought I would ask to see what everyone else thinks.
The application only shuts down when either the users system shuts down, or if they choose to shut the program down themselves.
When a process terminates, the system will reclaim all resources. This includes releasing open handles to kernel objects and allocated memory. If you do not free memory during process termination, it has no adverse effect on the operating system.
You will find substantial information about the steps performed during process termination at Terminating a process. With respect to your question the following is the relevant section:
Terminating a process has the following results:
...
Any resources allocated by the process are freed.
You probably should not skip the cleanup step in your debug builds though. Otherwise you will not get memory leak diagnostics for real memory leaks.

Understanding CentOS Memory usage

I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB RAM and operates as a web server. PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I normally see something along these lines:
             total       used       free     shared    buffers     cached
Mem:       8059080    2277924    5781156          0        948     310852
-/+ buffers/cache:    1966124    6092956
Swap:      4194296          0    4092668
Obviously, sooner or later the free memory would drop and the cached memory would increase and I assume there is nothing wrong with that since the OS decides to cache it.
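For reference, the "-/+ buffers/cache" row in that output can be recomputed by hand from the first row, since buffers and cache are memory the kernel gives back on demand (a small sketch using the sample numbers above, all in kB):

```shell
#!/bin/sh
# Recompute the "-/+ buffers/cache" row from the raw `free` figures
# quoted above (all values in kB).
total=8059080
used=2277924
free=5781156
buffers=948
cached=310852

# Memory genuinely held by applications excludes buffers and cache,
# because the kernel reclaims those transparently under pressure.
app_used=$((used - buffers - cached))
app_free=$((free + buffers + cached))

echo "-/+ buffers/cache: used=$app_used free=$app_free"
```

This reproduces the 1966124 / 6092956 figures shown by free: the "real" free memory is much larger than the first row's free column suggests.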
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in the used swap memory. Doesn't this mean that the server has no free memory anymore and is using I/O instead? How can I find out which processes cause this?
I am asking this question to Stack Overflow users because if I ask my hosting provider, I am sure they will just ask for more money to increase the RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
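To answer the "which processes" part of the question: on kernels that expose a VmSwap field in /proc/<pid>/status (roughly 2.6.34 and later, so possibly not on every CentOS release), per-process swap usage can be listed with a sketch like:

```shell
#!/bin/sh
# List per-process swap usage in kB, largest first. Relies on the
# VmSwap field of /proc/<pid>/status; on older kernels the field is
# absent and nothing is printed.
for status in /proc/[0-9]*/status; do
    awk '/^Name:/   {name = $2}
         /^VmSwap:/ {print $2, name}' "$status" 2>/dev/null
done | sort -rn | head -10
```

A nonzero VmSwap for a long-idle daemon is exactly the benign situation described above, not a sign of memory exhaustion.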

flush Core Duo cache before reboot?

Suppose I'm writing to a RAM location on a Core Duo system through the L1/L2 cache.
Suppose I am going to write to a persistent location in RAM and panic the Linux kernel soon after that. The location is persistent, meaning that it won't be re-initialized during a CPU reboot and will be picked up after the reboot.
Will Linux flush CPU cache as a part of reboot/panic?
Will the CPU flush cache before rebooting?
Or should I do that manually? How?
Update: my cache is not write-through.
The question is, does the CPU spec define this behavior?
Probably the most appropriate way to do this would be to mark the page containing the persistent location(s) as non-cacheable. That way writes to the persistent location(s) would always bypass the cache (effectively write-through). Of course it may be that your cache is write-through anyway, so this may be redundant - you should check this first.
The cache may not be flushed, because a user, system engineer or IT support person may need to run a system diagnostic or debugger to diagnose the failure and dump the machine state. Whether the cache is flushed at startup depends on the type and version of the operating system and firmware in use: it may be a selectable BIOS option, and the cache would likely be invalidated at power-on, but not necessarily on a warm restart.
I guess this might come in handy :) http://lxr.linux.no/#linux+v2.6.30/arch/x86/kernel/reboot.c

Memory is not becoming available even after freeing it

I am writing a simple memory manager for my application which frees any excess memory being used by my modules, so that it is available for use again. My modules allocate memory using alloc_page and kmem_cache_alloc, so the memory manager uses put_page and kmem_cache_free respectively to free the memory which is not in use.
The problem I am facing is that even after I free the memory using put_page and kmem_cache_free, my modules are not able to get the freed memory back. I have written test code which allocates a lot of memory in a loop and, when out of memory, sleeps until the memory manager frees some. The memory manager successfully executes its free code and wakes up the sleeping process, since memory should be available again. Interestingly, the alloc_page / kmem_cache_alloc calls still fail to allocate memory. I am clueless as to why this is happening, so I am seeking help.
I figured out the problem; it was with the API I was using. If I use __free_page instead of put_page to free the memory pages, everything works absolutely fine.
