I just ported a microservice from C/C++ to Go. It works well, even faster than the C/C++ version, but I have a problem with memory.
When my program starts, it allocates about 4.5 GB of RAM and fills it with data from disk, processing the data while loading; then it runs for days (hopefully months) serving requests from that RAM. Unfortunately, after the processing and placement of data in RAM is finished, an extra 3.5 GB of RAM remains allocated by Go. I do not do any deallocations, only allocations, and I do not think my program really uses 8 GB at any point, so I suspect Go just acquires extra RAM because it "feels" I might need more soon, but I will not.
I read that Go does not expose any functionality to release unused RAM back to the system. I want to run more services on the same machine, saving as much RAM as possible, so wasting almost as much as I actually use feels wrong.
So how do I keep the memory footprint compact and avoid those empty 3.5 GB being held by Go?
Speaking of virtual memory (see "Go Memory Management" by Povilas Versockas, and RSS vs. VSZ), avoid running your program under Go 1.11.
See golang/go issue 28114: "runtime: programs compiled by 1.11 allocate an unreasonable amount of virtual memory".
Also discussed here.
Possibly related to:
CL 85888: runtime: remove non-reserved heap logic
issue 10460: runtime: 512GB memory limitation
Possible workaround: golang/go issue 28081
Most likely that is virtual memory that Go is using, not actual committed pages of RAM.
DeferPanic blog post: "Understanding Go Lang Memory Usage" (2014)
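To see for yourself whether the extra gigabytes are resident memory or just mapped address space, a minimal sketch like the following can help (assuming a reasonably recent Go toolchain; printMemStats is just an illustrative helper built on runtime.MemStats). debug.FreeOSMemory forces a GC and asks the runtime to return unused pages to the OS; whether RSS actually drops afterwards depends on the OS and the Go version.

package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// printMemStats prints a few runtime.MemStats fields that help tell apart
// address space the runtime has obtained from the OS (Sys, HeapSys) from
// heap memory actually in use (HeapInuse) or already returned to the OS
// (HeapReleased).
func printMemStats(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%s: Sys=%dMiB HeapSys=%dMiB HeapInuse=%dMiB HeapIdle=%dMiB HeapReleased=%dMiB\n",
		label, m.Sys>>20, m.HeapSys>>20, m.HeapInuse>>20, m.HeapIdle>>20, m.HeapReleased>>20)
}

func main() {
	// ... load and process the 4.5 GB data set here ...

	printMemStats("after load")

	// Force a GC and ask the runtime to hand unused memory back to the OS.
	debug.FreeOSMemory()

	printMemStats("after FreeOSMemory")
}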
Related
On some of our servers (SUSE and Red Hat 7 / HP 460c with 128 GB of RAM), free RAM is close to 0 due to heavy buffer and cache usage.
The cache and buffer memory fills up during backups with Symantec NetWorker.
Since buffer and cache are freeable memory (as far as I know), I was not worried about this: if an application needs more memory, the kernel will free buffer and cache to make RAM available.
But I was surprised to see that, some time ago, the kernel was using swap...
Could someone explain what the reasons might be for the kernel not freeing RAM from buffer/cache to avoid using swap?
Regards
Maximilien
Here is one important thing: some time ago, we did not have as much RAM as we do now, so we needed virtual memory to conserve it (and for other reasons, such as protection), and we relied on swap and do_page_fault() too.
As RAM has grown, we no longer need to focus on conserving it. Instead, we focus on speeding up access to the hard disk by using RAM as a cache; that is why we use buffers and cache now.
Just learned these 3 new techniques from https://unix.stackexchange.com/questions/87908/how-do-you-empty-the-buffers-and-cache-on-a-linux-system:
To free pagecache:
# echo 1 > /proc/sys/vm/drop_caches
To free dentries and inodes:
# echo 2 > /proc/sys/vm/drop_caches
To free pagecache, dentries and inodes:
# echo 3 > /proc/sys/vm/drop_caches
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
Does freeing them also remove the useful memcached and/or Redis caches?
--
Why am I asking this question? My Amazon EC2 server's RAM was filling up over the days, from 6% up to 95% in a matter of 7 days. I have to run a bi-weekly cron job to drop these caches; then memory usage falls back to 6%.
With some oversimplification, let me try to explain this in what appears to be the context of your question, since there are multiple possible answers.
It appears you are working with in-memory caching of directory structures. An inode, in your context, is a data structure that represents a file. A dentry is a data structure that represents a directory entry. These structures are used to build an in-memory cache that represents the file structure on disk. To get a directory listing, the OS can go to the dentry cache: if the directory is there, it lists its contents (a series of inodes); if not, it goes to the disk and reads it into memory so that it can be used again.
The page cache can contain any memory mappings to blocks on disk. That could conceivably be buffered I/O, memory-mapped files, paged-in areas of executables -- anything that the OS can hold in memory from a file.
Your commands flush these caches.
I am trying to understand what exactly are pagecache, dentries and inodes. What exactly are they?
user3344003 already gave an exact answer to that specific question, but it's still important to note those memory structures are dynamically allocated.
When there's no better use for "free memory", memory will be used for those caches, but automatically purged and freed when some other "more important" application wants to allocate memory.
No, those caches don't affect any caches maintained by any applications (including redis and memcached).
My Amazon EC2 server's RAM was filling up over the days, from 6% up to 95% in a matter of 7 days. I have to run a bi-weekly cron job to drop these caches; then memory usage falls back to 6%.
You are probably misinterpreting the situation: your system may just be making efficient use of its resources.
To simplify things a little bit: "free" memory can also be seen as "unused", or, more dramatically, as a waste of resources: you paid for it but are not making use of it. That is a very uneconomical situation, and the Linux kernel tries to put your "free" memory to some "more useful" use.
Part of its strategy is using it to avoid various kinds of disk I/O by means of various dynamically sized memory caches. Quick access to cache memory saves "slow" disk access, so that is often a useful idea.
As soon as a "more important" process wants to allocate memory, the Linux kernel voluntarily frees those caches and makes the memory available to the requesting process. So there's usually no need to "manually free" those caches.
The Linux kernel may even decide to swap out memory of an otherwise idle process to disk (swap space), freeing RAM to be used for "more important" tasks, probably also including to be used as some cache.
So as long as your system is not actively swapping in/out, there's little reason to manually flush caches.
A common reason to "manually flush" those caches is purely benchmark comparison: your first benchmark run may execute with "empty" caches and give poor results, while a second run will show much "better" results (due to the pre-warmed caches). By flushing the caches before every benchmark run, you remove the "warmed" caches, and so your runs can be compared with each other more fairly.
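If you want to check this on a live box rather than eyeballing "free" memory, here is a small sketch (Go, Linux only; it assumes a kernel recent enough to report MemAvailable in /proc/meminfo). MemAvailable already subtracts reclaimable page cache and slab memory, so on an EC2 instance like the one described above it should stay high even while "free" looks alarmingly low.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Reads /proc/meminfo and prints the fields that matter when judging memory
// pressure: MemAvailable already accounts for reclaimable page cache and
// slab memory, so it is a far better indicator than MemFree.
func main() {
	f, err := os.Open("/proc/meminfo")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()

	want := map[string]bool{
		"MemTotal": true, "MemFree": true, "MemAvailable": true,
		"Buffers": true, "Cached": true, "SReclaimable": true,
	}

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Lines look like: "MemAvailable:   12345678 kB"
		fields := strings.Fields(scanner.Text())
		if len(fields) < 2 {
			continue
		}
		name := strings.TrimSuffix(fields[0], ":")
		if want[name] {
			fmt.Printf("%-14s %s kB\n", name, fields[1])
		}
	}
}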
A common misconception is that "free memory" is important.
Memory is meant to be used.
So let's clear that up:
There is used memory, which is where important data is stored; if that reaches 100%, you're dead.
Then there is cache/buffer memory, which is used as long as there is space to do so. It is optional memory, mostly there to access disk files faster. If you run out of free memory, it simply frees itself and lets you access the disk directly.
Clearing cached memory as you suggest is in most cases useless; it means you are deactivating an optimization, so you will get a slowdown.
If you really run out of memory, that is if your "used memory" is high, and you begin to see swap usage, then you must do something.
HOWEVER: there is a known bug on AWS instances, with the dentry cache eating memory for no apparent reason. It is clearly described and solved in this blog post.
My own experience with this bug is that the "dentry" cache consumes both "used" and "cached" memory and does not seem to release it in time, eventually causing swapping. The bug itself can consume resources anyway, so you need to look into it.
I hate to bring an old thread back from the dead, but I've been dealing with memory issues lately on my Linux virtual machines. Unfortunately, even though the virtualization of computing machines is great and Linux memory and resource allocation has advanced superbly, conflicts occur when the hypervisor acts out what it calls "performance features".
VMware will actively send RAM that hasn't been "written or modified" recently to disk. When your disk is on a SAN, that means reading from that "RAM" now runs at 1 Gbps to 10 Gbps at best, if you have a really performant RAID and steady network access (ignoring the fact that the RAM of, say, 100 VMs is now all hitting the same SAN). DDR3 RAM operates at 25 Gbps+ on modern systems, so I assume you can see the problem with systems running at 1/25th to less than 1/2 of the anticipated speed.
The caches on my Linux systems then end up literally the same speed as the filesystem's disk I/O, meaning they do not help performance, and the hypervisor is actively pushing the OS's RAM into swap instead of the caches being dropped. This is a huge problem caused by VMware, not by Linux, but be aware that cloud infrastructure unfortunately does this kind of thing all the time. You can read more here: https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/perf-vsphere-memory_management.pdf. If you use VMware, you will surely notice the distinction between "allocated memory" and "active memory", and your VMs will always display a different amount than VMware does because of this distinction and treatment of the memory.
I have a long-running memory hog of an experimental program, and I'd like to know its actual memory footprint. Task Manager (on 64-bit Windows 7) says that the app is consuming 800 MB of memory, but the total amount of memory allocated, also according to Task Manager, is 3.7 GB. The sum of all the allocated memory does not equal 3.7 GB. How can I determine, on the fly, how much memory my application is actually consuming?
Corollary: What memory is the task manager actually reporting? It doesn't seem to be all the memory that's allocated to the app itself.
As I understand it, Task Manager shows the working set:
working set: The set of memory pages recently touched by the threads of a process. If free memory in the computer is above a threshold, pages are left in the working set of a process even if they are not being used. When free memory falls below a threshold, pages are trimmed from the working set.
via http://msdn.microsoft.com/en-us/library/cc432779(PROT.10).aspx
You can get Task Manager to show Virtual Memory as well.
I usually use perfmon (Start -> Run... -> perfmon) to track memory usage, with the Private Bytes counter. It reflects memory allocated by your normal allocators (new/HeapAlloc/malloc, etc.).
Memory is a tricky thing to measure. An application might reserve lots of virtual memory but not actually use much of it. Some of the memory might be shared; that is, a shared DLL might be loaded into the address space of several applications, but it is only loaded into physical memory once.
A good measure is the working set, which is the set of pages in the virtual address space that have been accessed recently. What "accessed recently" means depends on the operating system and its page replacement algorithm. In other words, it is the set of virtual pages that are mapped into physical memory and in use at the moment. This is what Task Manager shows you.
The virtual memory usage is the number of virtual pages that have been reserved (note that not all of these will actually have been committed, that is, had physical backing store allocated for them). You can add this to the display in Task Manager by clicking View -> Select Columns.
The most important thing, though: if you actually want to measure how much memory your program is using in order to decide whether to optimize some of it for space, choose better data structures, or persist some things to disk, Task Manager is the wrong approach. You should almost certainly be using a profiler.
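If you would rather read these counters from code than from perfmon or Task Manager, here is a rough sketch (Go on Windows, calling GetProcessMemoryInfo from psapi.dll; the struct mirrors the documented PROCESS_MEMORY_COUNTERS layout). WorkingSetSize corresponds to what Task Manager shows by default, and PagefileUsage roughly corresponds to the private/commit figure discussed above.

package main

import (
	"fmt"
	"syscall"
	"unsafe"
)

// processMemoryCounters mirrors the Win32 PROCESS_MEMORY_COUNTERS struct.
type processMemoryCounters struct {
	cb                         uint32
	PageFaultCount             uint32
	PeakWorkingSetSize         uintptr
	WorkingSetSize             uintptr
	QuotaPeakPagedPoolUsage    uintptr
	QuotaPagedPoolUsage        uintptr
	QuotaPeakNonPagedPoolUsage uintptr
	QuotaNonPagedPoolUsage     uintptr
	PagefileUsage              uintptr
	PeakPagefileUsage          uintptr
}

func main() {
	getProcessMemoryInfo := syscall.NewLazyDLL("psapi.dll").NewProc("GetProcessMemoryInfo")

	var pmc processMemoryCounters
	pmc.cb = uint32(unsafe.Sizeof(pmc))

	self, _ := syscall.GetCurrentProcess() // pseudo-handle for the current process
	ret, _, err := getProcessMemoryInfo.Call(uintptr(self), uintptr(unsafe.Pointer(&pmc)), uintptr(pmc.cb))
	if ret == 0 {
		fmt.Println("GetProcessMemoryInfo failed:", err)
		return
	}

	fmt.Printf("working set:             %d KiB\n", pmc.WorkingSetSize/1024)
	fmt.Printf("pagefile usage (commit): %d KiB\n", pmc.PagefileUsage/1024)
}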
That depends on what memory you are talking about. Unfortunately there are many different ways to measure memory. For instance ...
Physical Memory Allocated
Virtual Memory Allocated
Virtual Memory Reserved (but not committed)
Private Bytes
Shared Bytes
Which metric are you interested in?
I think most people tend to be interested in the "Virtual Memory Allocated" category.
The memory statistics displayed by Task Manager are not nearly all the statistics available, nor are they particularly well presented. I would use VMMap, the great free tool from Microsoft Sysinternals, to analyse the memory used by the application further.
If it is a long-running application and the memory usage grows over time, it is going to be the heap that is growing. Parts of the heap may or may not be paged out to disk at any time, but you really need to optimize your heap usage. In that case you need to profile your application. If it is a .NET application, I can recommend Redgate's ANTS profiler; it is very easy to use. If it is a native application, then the Intel VTune profiler is pretty powerful. You don't need the source code of the process you are profiling for either tool.
Both applications have a free trial. Good luck.
P.S. Sorry I didn't include more hyperlinks to the tools, but this is my first post, and stackoverflow limits first posts to one hyperlink :-(
Virtual memory was introduced to help run more programs with limited memory. But in today's environment of inexpensive RAM, is it still relevant?
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve the performance and program response times?
Is there any essential requirement for virtual memory in Windows apart from running more programs, as stated above? Something internal to Windows that we are not aware of?
Some pedantry: virtual memory is not just the pagefile. The term encompasses a whole range of techniques that give the program the illusion that it has one single contiguous address space, some of which is the program's code, some of which is data, and some of which are DLLs or memory-mapped files.
So to your lead-in question: yes, virtual memory is required. It's what makes modern OS's work.
Don't disable virtual memory. 2GB is not nearly enough to even consider this. Regardless, you should always keep virtual memory on even if you do have enough since it will only ever be used when you actually need it. Much better to be safe than sorry since NOT having it active means you simply hit a wall, while having it active means your computer starts swapping to the hard drive but continues to run.
Yes, because it's the basis of all on-demand paging that occurs in a modern operating system, not just Windows.
Windows will always use all of your memory, if not for applications then for caching whatever you read from your hard drive. Because if that memory is not used, then you're throwing your investment in memory away. Basically, Windows uses your RAM as a big fat cache to your hard drives. And this happens all the time, as the relevant pages are only brought into main memory when you address the content of that page.
The question is really what is the use of a pagefile considering how much memory modern computers have and what's going on under the hood in the OS.
It's common for the Windows Task Manager to show that not much physical memory is being used, yet you're seeing many page faults. Win32 will never allocate all of its physical memory; it always saves some for new resource needs. With a big pagefile versus a small pagefile, Win32 will be slower to allocate physical memory to a process.
For a few days now I've been using a very small pagefile (200 MB, fixed) in Vista with 3 GB of addressable physical memory. I have had no crashes or problems. I haven't tried things like heavy video editing or many different processes open at once. I wouldn't recommend having no pagefile, since then the OS can never shuffle pages out of physical memory, leading to the development of holes. A large pagefile is a fail-safe for people who wouldn't know how to manually increase the pagefile if a low-memory warning pops up or the OS crashes.
Some points:
The kernel will use some of the physical memory, and this will be shared through virtual memory mapping with all other processes; other processes will be in the remaining physical memory. Virtual memory makes each process see a 4 GB address space, with the OS in the upper 2 GB. Each process will need much less than 4 GB of physical memory; that amount is its committed memory requirement. When programming, memory can be reserved without being committed; it is committed later, when it is actually needed, and a physical page is typically assigned only when the memory is first written. Some memory is immediately committed by the OS for each process.
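To make the reserve-versus-commit distinction concrete, here is a minimal sketch (Go on Windows, calling VirtualAlloc from kernel32.dll; the constants are the documented MEM_RESERVE / MEM_COMMIT / PAGE_* values). Reserving costs only address space; the commit charge grows only for the part that is committed, and physical pages are assigned only once that memory is touched.

package main

import (
	"fmt"
	"syscall"
)

const (
	memCommit     = 0x00001000 // MEM_COMMIT
	memReserve    = 0x00002000 // MEM_RESERVE
	pageNoAccess  = 0x01       // PAGE_NOACCESS
	pageReadWrite = 0x04       // PAGE_READWRITE
)

func main() {
	virtualAlloc := syscall.NewLazyDLL("kernel32.dll").NewProc("VirtualAlloc")

	// Reserve 1 GiB of address space: this consumes virtual address range
	// but neither physical memory nor (to a first approximation) commit charge.
	base, _, err := virtualAlloc.Call(0, 1<<30, memReserve, pageNoAccess)
	if base == 0 {
		fmt.Println("reserve failed:", err)
		return
	}
	fmt.Printf("reserved 1 GiB at %#x (nothing committed yet)\n", base)

	// Commit one 4 KiB page inside the reservation: only now does the commit
	// charge grow; a physical page is assigned when the page is first written.
	page, _, err := virtualAlloc.Call(base, 4096, memCommit, pageReadWrite)
	if page == 0 {
		fmt.Println("commit failed:", err)
		return
	}
	fmt.Printf("committed one page at %#x\n", page)
}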
Your question is really about using a page file, not virtual memory, as kdgregory said. Probably the most important use of virtual memory is so that the OS can protect one process's memory from another process's memory, while still giving each process the illusion of a contiguous, flat virtual address space. The actual physical addresses can and will become fragmented, but the virtual addresses will appear contiguous.
Yes, virtual memory is vital. The page file, maybe not.
Grrr. Disk space is probably always going to be cheaper than RAM. One of my lab computers has 512 MB of RAM. That used to be enough when I got it, but now it has slowed to a crawl from swapping and I need to put more RAM in it. I am not running more software programs now than I was then, but they have all gotten more bloated, and they often spawn more "daemon" programs that just sit there doing nothing but waiting for some event while using up memory. I look at my process list and the "in-memory" column for the file explorer is 40 MB. For Firefox it's 162 MB. Java's "update scheduler" jusched.exe uses another 3.6 MB. And that's just physical memory; these numbers don't include the swap space.
So it's really important to save the quicker, more expensive memory for what can't be swapped out. I can spare tens of GB on my hard drive for swap space.
Memory is seen as cheap enough that the OS and many programs don't try to optimize any more. On the one hand it's great because it makes programs more maintainable and debuggable and quicker to develop. But I hate having to keep putting in more RAM into my computer.
A good explanation at
http://blogs.technet.com/markrussinovich/archive/2008/11/17/3155406.aspx
To optimally size your paging file you should start all the applications you run at the same time, load typical data sets, and then note the commit charge peak (or look at this value after a period of time where you know maximum load was attained). Set the paging file minimum to be that value minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for). If you want to have some breathing room for potentially large commit demands, set the maximum to double that number.
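A quick worked example of that formula, with invented numbers: if the peak commit charge you observe is 10 GB on a machine with 8 GB of RAM, the suggested paging file minimum is 10 GB - 8 GB = 2 GB, and doubling that gives a maximum of 4 GB.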
Virtual memory is much more than simply an extension of RAM. In reality, virtual memory is a system that virtualizes access to physical memory. Applications are presented with a consistent environment that is completely independent of RAM size. This offers a number of important advantages quite apart from the increased memory availability. Virtual memory is an integral part of the OS and cannot possibly be disabled.
The pagefile is NOT virtual memory. Many sources have claimed this, including some Microsoft articles. But it is wrong. You can disable the pagefile (not recommended) but this will not disable virtual memory.
Virtual memory has been used in large systems for some 40 years now and it is not going away anytime soon. The advantages are just too great. If virtual memory were eliminated, all current 32-bit applications (and 64-bit as well) would become obsolete.
Larry Miller
Microsoft MCSA
Virtual memory is a safety net for situations where there is not enough RAM available for all running applications. This used to be very common, and today, when you can have large amounts of system RAM, it is less so.
Some say to leave the page file alone and let it be managed by Windows. Some people say that even if you have a lot of RAM, keeping a big pagefile cannot possibly hurt because it will not be used. That is not true, since Windows does pre-emptive paging to prepare for spikes in memory demand. If that demand never comes, this is just wasted HDD activity, and we all know that the HDD is the slowest component of any system. Pre-emptive paging with big enough RAM is just pointless, and the only thing it does is slow down any other disk activity happening at the same time. Not to mention additional disk wear. Plus, a big page file means gigabytes of locked disk space.
A lot of people point to Mark Russinovich's article to back up their strong belief that the page file should not be disabled under any circumstances, and that so many clever people at Microsoft have thought it through so thoroughly that we, little developers, should never question the default Windows policy on page file size. But even Russinovich himself writes:
Set the paging file minimum to be that value (Peak Commit Charge) minus the amount of RAM in your system (if the value is negative, pick a minimum size to permit the kind of crash dump you are configured for).
So if you have a large amount of RAM and your peak commit charge is never more than 50% of your RAM, even when you open all your apps at once and then some, there is no need to have a page file at all. In those situations, 99.99% of the time you will never need more memory than your RAM.
Now, I am not advocating disabling the page file, but having it the size of your RAM or more is just a waste of space and unnecessary activity that can slow something else down. The page file gives you a safety net in those rare (with plenty of RAM) situations when the system does need more memory, preventing it from running out of memory, which would most likely make your system unstable and unusable.
The only real need for a page file is kernel dumps. If you need full kernel dumps, you need at least 400 MB of paging file. But if you are happy with minidumps, the minimum is 16 MB. So to have the best of both worlds, which is
virtually no page file
safety net of virtual memory
I would suggest configuring Windows for kernel minidumps, setting the minimum page file size to 16 MB, and setting the maximum to whatever you want. This way the page file will be practically unused, but will automatically expand after the first out-of-memory error to keep your system usable. If you do hit an out-of-memory issue, you should of course reconsider your minimum size. If you really want to be safe, make the page file's minimum size 1 GB. For servers, though, you should be more careful.
Unfortunately, it is still needed, because the Windows operating system has a tendency to 'overcache'.
Plus, as stated above, 2GB isn't really enough to consider turning it off. Heck, I probably wouldn't turn it off until I had 8GB or more.
G-Man
Since there will be no disk access if it is disabled and all the programs will be memory resident, will it not improve the performance and program response times?
I'm not totally sure about other platforms, but I had a Linux machine where the swap space had been accidentally disabled. When a process used all available memory, the machine basically froze for 5 minutes, the load average went to absurd numbers, and the kernel OOM killer kicked in and terminated several processes. Re-enabling swap fixed this entirely.
I never experienced any unnecessary swapping to disk; it only happened when I used all the available memory. Modern OSes (even 5-10 year old Linux distros) deal with swap space quite intelligently, and only use it when required.
You can probably get by without swap space, since it's quite rare for a single process to reach 4 GB of memory usage; with a 64-bit OS and, say, 8 GB of RAM it's even rarer. But there's really no point in disabling swap space: you don't gain much (if anything), and when you run out of physical memory without it, bad things happen.
Basically, any half-decent OS should only use disk swap (or virtual memory) when required. Disabling swap only stops the OS from being able to fall back on it, which causes the OOM killer to strike (and thus data loss when processes are terminated).
I'm trying to get a better understanding of how 32-bit Windows calculates the virtual bytes for a program. I am under the impression that Virtual Bytes (VB) measure how much of the user address space is being used, while Private Bytes (PB) measure the memory actually committed and reserved on the system.
In particular, I have a server program I am monitoring which, under heavy usage, climbs up to the 3 GB limit for VB. Around the same time the PB climb as well, but then quickly drop down to around 1 GB as the usage drops. The PB then tend to stay low, around the 1 GB mark, but the VB stay up around the 3 GB mark. I do not have access to the source code, so I am just using the basic Windows performance counters to monitor all of this. From a programming point of view, what memory concept do I not understand that makes this possible? Is there a good reference to learn more about this?
What you're reporting is most likely caused by the process heap. There are two pieces to a memory allocation in Windows. The first piece is the contiguous address space in your application through which the memory is accessed. On a 32-bit system not running with the /3GB switch, all your allocations must come out of the lower 2 GB of user address space. The second piece of the memory allocation is the actual storage for the allocation. This can be either RAM or part of the page file on the hard disk. The OS handles moving allocations between RAM and the page file in the background.
Most likely your application is using a Windows heap to handle all memory allocations. When a heap is created, it reserves 1 MB of address space for the memory it will allocate. Until it actually needs memory associated with this address space, no physical memory is used. If the heap needs more memory than 1 MB, it uses a doubling algorithm to reserve more address space, and then commits physical memory when it needs it. The important thing to note is that once a heap reserves address space, it never releases it.
Personally I found the following books and chapters useful when trying to understand memory management.
Advanced Windows Debugging, Chapter 6. This book has the most detailed look at the heap I have seen.
Windows Internals, Chapter 7. This book adds a bit of information not found in Advanced Windows Debugging; however, it does not give as good an overview.
It sounds to me like you have a garbage collector that's only kicking in once the memory pressure hits 1/3 (1 GB out of 3 GB).
As for the VB - don't worry! It's virtual! Honestly, nothing's been allocated, nothing's been committed. Focus on your private bytes - your real allocations.
There is such a thing as "virtual memory". It's a rather non-OS-specific concept in computer science, and Microsoft has also written about the Windows implementation of it.
A long story short, in Windows you can ask to reserve some memory without actually allocating any physical memory. It's like making some memory addresses reserved for future use. When you really need the memory, you allocate it physically (aka "commit" it).
I haven't needed to use this feature myself, so I don't know how it's used in real life programs, but I know it's there. I think the idea might be something like preserving pointers to some memory address and allocating the memory when needed, without having to change what the pointers actually point to.
Windows is notorious for having a variety of types of memory allocations, some of which are supersets of others. You've mentioned Private Bytes and Virtual Bytes. Private bytes, as you know, refers to memory allocated specifically to your process. Virtual bytes includes private bytes, as well as any shared memory, reserved memory, etc.
Even though in real life you only need to be concerned with private bytes and perhaps shared memory (Windows handles the rest, anyways), the Virtual Bytes count is often what users (and evaluators of your software) see and interpret as the memory usage of your process.
A good and up-to-date reference on the subject is the book titled Windows Via C/C++ by Jeffrey Richter, you should look for Chapter 13: "Windows Memory Architecture".