I'm using a Windows Server 2012 R2 machine.
I have set my heap size in an environment variable as follows:
ES_HEAP_SIZE
4g
After setting the heap size, I installed Elasticsearch as a Windows service using the command:
service.bat install
When I started the service, the Elasticsearch service took 4 GB as expected (checked in Task Manager).
After some time, the memory used by the Elasticsearch service came down to 1 GB.
Is this expected?
This is apparently a known issue with Elasticsearch on Windows.
Quoting from the link:
The 4gb committed heap size that you see in the node stats API is the amount of virtual memory that's reserved by setting ES_HEAP_SIZE (Xms), which is expected, even with bootstrap.mlockall disabled.
By enabling bootstrap.mlockall, we expect the call to VirtualLock() to lock the working set into physical memory, which happens initially (this is the memory you see in task manager), but eventually drops off.
I don't have a solid explanation for this yet, but I have observed that the more memory pressure the system is under (i.e. less free space available), the quicker the "drop off" occurs. It's as though Windows doesn't respect the fact that the pages in the working set are locked, and will release them when resources become low.
There's a lot of info out there that seems to indicate that VirtualLock doesn't guarantee pages won't be swapped, only reduces the odds; however, the documentation says nothing about this.
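To make the quoted mechanism concrete, here is a minimal sketch of the VirtualLock() pattern that bootstrap.mlockall relies on; this is not Elasticsearch's actual code, and the 64 MB region size and quota padding are illustrative assumptions:

/* Sketch: commit a region, raise the working-set quota, then lock the
 * pages into physical memory -- the same call chain the quote describes.
 * 64 MB is used here so the demo is cheap to run. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 64 * 1024 * 1024;

    /* The working-set quota must be big enough to hold the locked pages,
     * so raise it before calling VirtualLock. */
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  size + 16 * 1024 * 1024,
                                  size + 32 * 1024 * 1024)) {
        fprintf(stderr, "SetProcessWorkingSetSize failed: %lu\n", GetLastError());
        return 1;
    }

    void *mem = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (mem == NULL) {
        fprintf(stderr, "VirtualAlloc failed: %lu\n", GetLastError());
        return 1;
    }

    /* VirtualLock pins the pages in the working set; as noted above,
     * Windows may still trim them under memory pressure. */
    if (!VirtualLock(mem, size)) {
        fprintf(stderr, "VirtualLock failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Locked %Iu bytes; watch the working set in Task Manager.\n", size);
    getchar(); /* keep the process alive so the counters can be observed */

    VirtualUnlock(mem, size);
    VirtualFree(mem, 0, MEM_RELEASE);
    return 0;
}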
1 GB is the default heap size for Elasticsearch. So when you set the environment variable, you set the maximum amount of memory it's allowed to use. If you don't issue any queries, the memory usage may drop back to the default.
Related
I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB of RAM and operates as a web server. The PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I would normally see something along these lines:
                    total       used       free     shared    buffers     cached
Mem:              8059080    2277924    5781156          0        948     310852
-/+ buffers/cache:           1966124    6092956
Swap:             4194296          0    4092668
Obviously, sooner or later the free memory drops and the cached memory increases, and I assume there is nothing wrong with that, since the OS decides what to cache.
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in used swap. Doesn't this mean that the server has no free memory anymore and is resorting to disk I/O instead? How can I find out which processes are causing this?
I am asking Stack Overflow users because if I ask my hosting provider, I am sure they would just ask for more money to increase the RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
Hypervisors and Memory Management
I have been using virtual machines for years and never really had any issues. I have primarily used VMware's free single-host ESXi and had nothing but success. Because I have never had any issues, I have never delved much deeper. I have, however, always been wary of loading the system up, and I keep plenty of spare resources handy.
I have recently purchased a new server, and we have decided to give Hyper-V a try and see how it goes. We have a fairly small team but utilise lots of servers for testing and so on.
My question relates to memory and how much I need to leave free or available for the host machine to run appropriately.
Setup: Dell server, 24 cores, 48 GB RAM
When I run taskmgr in the Windows host instance, I see the following:
Physical Memory (Total): 49139 MB
Cached: 14933 MB
Available: 17743 MB
Free: 2982 MB
What exactly do these figures mean? What is the difference between free and available?
My server hardly ever uses any CPU resources and has 10 production servers running on it, without a single user complaint about the speed of the services.
Am I able to spin up another server with 2 GB of RAM, effectively leaving 982 MB free? Or am I starting to push my requirements a little?
Thanks for the help.
You shouldn’t use the host partition for anything other than Hyper-V (although you can run security and infrastructure software such as management agents, backup agents and firewalls). Therefore, that 2GB recommendation assumes you aren’t going to run any extra applications or server roles in the parent partition.
Hyper-V doesn’t let you allocate memory directly to the host partition. It essentially uses whatever memory is left over. Therefore, you have to remember to leave 2GB of your host server’s memory not allocated so it’s available for the parent partition.
Source
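As a side note on the free-versus-available question: "Available" in Task Manager counts both truly free pages and standby (cached) pages that can be repurposed immediately, while "Free" counts only the free/zeroed page lists. A minimal sketch of querying the available figure programmatically (the mapping to Task Manager's columns is my reading, not something the API documents in those terms):

/* Sketch: query physical-memory figures with GlobalMemoryStatusEx.
 * ullAvailPhys matches Task Manager's "Available" figure: free pages
 * plus standby (cached) pages the OS can hand out immediately. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) {
        fprintf(stderr, "GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Total physical:     %llu MB\n", ms.ullTotalPhys / (1024 * 1024));
    printf("Available physical: %llu MB\n", ms.ullAvailPhys / (1024 * 1024));
    printf("Memory load:        %lu%%\n", ms.dwMemoryLoad);
    return 0;
}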
In an application I'm working on, under certain conditions the memory usage will shoot through the roof, effectively locking up my computer. I don't think it's a memory leak, and there are no errors, it just needs too much memory. The memory usage jumps to 99% in Task Manager and Windows stops working, forcing me to reboot.
Is it possible to set a maximum amount of memory VS can use while debugging? I'm not looking for a way to make it run out of memory faster, I just want to keep some memory free so Windows can keep working.
Visual Studio 2010
Windows 7 64-bit
8GB RAM
C# .NET
Edit:
I'm not asking how to fix a memory leak. I'm trying to limit the memory used by the VS debugger. For example, my PC has 8GB RAM, but my application has to run on a PC with 2GB RAM. So I want to configure VS to only use 2GB. If the application tries to allocate 2.0001GB I want VS to tell it there is no more memory (probably causing a crash).
This isn't exactly the answer you were looking for, but it might help others, so I'm posting:
I would try the following:
1) Download Oracle Virtualbox
2) Download Disk2VHD.exe from Microsoft Sysinternals
3) Clone your system using Disk2VHD
4) Configure a VM with the memory restrictions you want.
In this way you can restrict the RAM and CPUs used by your task, and possibly recover more easily from the case you describe.
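If a full VM feels too heavyweight, another option is to launch the application inside a Windows job object with a per-process memory cap, so allocations beyond the cap simply fail. This is a sketch, not a Visual Studio feature; the 2 GB figure and the MyApp.exe path are placeholder assumptions:

/* Sketch: run a program under a job object whose committed memory is
 * capped at 2 GB. Allocations beyond the cap fail, roughly simulating a
 * smaller machine (note: commit and physical RAM are not the same thing). */
#include <windows.h>

int main(void)
{
    HANDLE job = CreateJobObject(NULL, NULL);
    if (job == NULL) return 1;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {0};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = (SIZE_T)2048 * 1024 * 1024; /* 2 GB cap */
    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return 1;

    /* Start the target suspended so it can be put in the job before it runs. */
    STARTUPINFO si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    TCHAR cmd[] = TEXT("MyApp.exe"); /* placeholder for the app under test */
    if (!CreateProcess(NULL, cmd, NULL, NULL, FALSE, CREATE_SUSPENDED,
                       NULL, NULL, &si, &pi))
        return 1;

    AssignProcessToJobObject(job, pi.hProcess);
    ResumeThread(pi.hThread);

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    CloseHandle(job);
    return 0;
}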
Two applications share memory via a memory-mapped file (MMF).
App A creates the MMF (about 1 GB); app B opens that MMF by name.
When I look at Windows Task Manager, A shows 1 GB of memory.
But after closing and relaunching app B several times
(or perhaps a day later? I'm not sure how to reproduce it),
A's memory in Windows Task Manager is below 1 KB.
My guess is that because app A doesn't do anything after creating the MMF, Windows thinks the MMF belongs to app B (just a guess).
My OS is Windows 2003 Enterprise x64, SP2.
Does anybody know the reason?
Thanks in advance.
A memory-mapped file is still part of your virtual address space. Use Perfmon to get reliable counters instead of Task Manager, which changes with each release of Windows. The Perfmon counter Process | Virtual Bytes (total VAS) is the most interesting one.
My understanding is that the 1 GB is reserved in the virtual address space, but physical memory is only actually allocated for pages that are touched. Memory-mapped files are implemented parallel to the Virtual Memory API, and both build upon the NT Virtual Memory Manager. See this article and diagram for an explanation.
Did you fill your entire file with data, or did you just allocate 1GB?
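Here is a minimal sketch of the pattern app A presumably uses, assuming a pagefile-backed named mapping (the name Local\DemoMMF and the sizes are illustrative); it shows that only touched pages show up in the working set Task Manager displays by default:

/* Sketch: create a 1 GB named pagefile-backed mapping, then touch only the
 * first 16 MB. The working set grows by ~16 MB while the full 1 GB stays
 * reserved in the virtual address space. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const unsigned __int64 size = 1ULL << 30; /* 1 GB */
    HANDLE map = CreateFileMapping(INVALID_HANDLE_VALUE, NULL, PAGE_READWRITE,
                                   (DWORD)(size >> 32), (DWORD)size,
                                   TEXT("Local\\DemoMMF"));
    if (map == NULL) return 1;

    unsigned char *view = (unsigned char *)MapViewOfFile(map, FILE_MAP_ALL_ACCESS,
                                                         0, 0, 0);
    if (view == NULL) return 1;

    /* Touch one byte per 4 KB page across the first 16 MB. */
    for (unsigned __int64 i = 0; i < 16ULL * 1024 * 1024; i += 4096)
        view[i] = 1;

    printf("Mapped 1 GB, touched 16 MB; compare the Task Manager columns.\n");
    getchar(); /* pause so the counters can be inspected */

    UnmapViewOfFile(view);
    CloseHandle(map);
    return 0;
}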
UPDATE:
Which column are you viewing in Task Manager?
The default column, Memory (Private Working Set), represents physically allocated memory that is private to the process.
You can add the column Commit Size to see the total amount of virtual address space allocated to the process.
Here is a summary of the various memory statistics you can see in Task Manager and what they mean.
It's because of working set minimization: Windows trimmed the pages from A's working set, so Task Manager's default column drops even though the memory is still committed.
Thanks for everyone. :)
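For anyone who wants to reproduce that working-set drop on demand, the trim can be requested explicitly; a small sketch (the API call is standard, but treating it as the cause in this exact scenario is the answer's hypothesis, not something I can verify):

/* Sketch: ask Windows to trim this process's working set. Passing -1 for
 * both sizes tells the memory manager to remove as many pages as possible;
 * the pages stay committed and are soft-faulted back in when touched. */
#include <windows.h>

int main(void)
{
    SetProcessWorkingSetSize(GetCurrentProcess(), (SIZE_T)-1, (SIZE_T)-1);
    /* Task Manager's default memory column for this process now drops to a
     * few hundred KB, even though every allocation is still valid. */
    Sleep(30000); /* leave time to observe the counters */
    return 0;
}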
We have some Win32 console applications running on Windows Server 2003 Service Pack 2 that regularly fail with this:
Error 1450 (ERROR_NO_SYSTEM_RESOURCES): "Insufficient system resources exist to complete the requested service."
All the documentation we've found suggests it is linked to the number of Free System Page Table Entries running out. We have 16GB RAM in these machines and use the /3GB Operating System switch to squeeze the Windows kernel into 1GB and allow our processes access to 3GB of address space. This drastically reduces the total number of Free System Page Table Entries, so combined with our heavy use of MapViewOfFile() it is perhaps not surprising that the kernel page table entries are running out.
However, when using Performance Monitor to view the Free System Page Table Entries counter, the value is around 36,000 on reboot and doesn't go down when our application starts. I find it hard to believe that our application, which opens many large memory-mapped files, doesn't have any effect on the kernel page table. If we can't believe the counter, it's much more difficult to test the effect of any system changes we make.
There is a promising Knowledge Base article, The Performance tool does not accurately show the available Free System Page Table entries in Windows Server 2003, but it says the problem has been fixed in Service Pack 1, and we are already on Service Pack 2.
Has anyone else struggled with or solved this issue?
Update: I have checked !sysptes in windbg (debugging the kernel) and the value matches the performance counter, around 36,000. I guess this is most likely to mean that there really are that many free page table entries and Windows is telling the truth. It does leave the question of why we're getting 1450 errors though, if the PTEs are not running out.
Further update: We never did get to the bottom of why the 1450 errors were occurring. However, instead we upgraded the OS on these servers to 64-bit Windows. This allows the existing 32-bit applications (without recompilation) to access a full 4GB of virtual address space, and lets the kernel memory area with those pesky Page Table Entries be as big as it likes too. I don't think we've had a 1450 error since.
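For anyone who wants to cross-check the Perfmon figure from code rather than the GUI, the same counter can be read through the PDH API; a sketch (the counter path is the English-locale name, which is an assumption if your system uses another locale):

/* Sketch: read "\Memory\Free System Page Table Entries" via PDH, the same
 * counter Performance Monitor displays. Link with pdh.lib. */
#include <windows.h>
#include <pdh.h>
#include <stdio.h>

int main(void)
{
    PDH_HQUERY query;
    PDH_HCOUNTER counter;
    PDH_FMT_COUNTERVALUE value;

    if (PdhOpenQuery(NULL, 0, &query) != ERROR_SUCCESS) return 1;
    if (PdhAddCounter(query, TEXT("\\Memory\\Free System Page Table Entries"),
                      0, &counter) != ERROR_SUCCESS) return 1;
    if (PdhCollectQueryData(query) != ERROR_SUCCESS) return 1;
    if (PdhGetFormattedCounterValue(counter, PDH_FMT_LONG, NULL,
                                    &value) != ERROR_SUCCESS) return 1;

    printf("Free System PTEs: %ld\n", value.longValue);
    PdhCloseQuery(query);
    return 0;
}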
Can you try the windbg command "!sysptes" to get System PTE information? I'm not sure if you can do this with live kernel debugging; you may have to get a memory dump.
I'm not sure why you assume that ERROR_NO_SYSTEM_RESOURCES is caused only by running out of free System Page Table Entries. As far as I know, such generic error codes are used for more than one resource type; in fact, the first Google hit suggests that running out of file cache memory can cause it too (a KB article on an XP bug that tripped this error mode).
In your case, I'd be checking the handle count. Another possible problem is address space fragmentation. If you want to create a 1 GB file mapping view, you need 1 GB of free address space, and it has to be contiguous. If you map a 1 GB file, an 800 MB file, and a 1 GB file, then close the 800 MB one and open a 900 MB file, the 900 MB file may not fit in the hole that's left.
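To test the fragmentation theory, you can walk the address space and report the largest contiguous free block; if it is smaller than the view you are trying to map, fragmentation rather than PTE exhaustion is the limit. A sketch (a 32-bit process is assumed, matching the /3GB setup above):

/* Sketch: scan the process address space with VirtualQuery and find the
 * largest contiguous free region. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    SIZE_T largestFree = 0;

    while (VirtualQuery(addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_FREE && mbi.RegionSize > largestFree)
            largestFree = mbi.RegionSize;
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    printf("Largest free region: %Iu MB\n", largestFree / (1024 * 1024));
    return 0;
}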
Microsoft has two ways to let its 32-bit OS deal with hardware that has 4 GB or more of RAM.
Option 1: the /3GB switch in Boot.ini, which is what you did.
Option 1 pros and cons:
(CON) This option takes 1 GB away from the normal 2 GB kernel area, making the OS struggle to meet the demands of both paged-pool allocations and kernel stack allocations. So a person might think that using the /3GB switch will help them, but really this option squeezes the 32-bit Windows OS into a slow death.
(CON) "But this gives my app 3 GB..." Wrong (hence this is a con). The catch is that ONLY applications that have been built by the vendor to be /3GB-switch aware (linked with the /LARGEADDRESSAWARE flag) can really use the extra 1 GB. Hence the whole use of the /3GB switch is a really bad joke on everyone.
Read this link for a much better write-up:
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
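Whether an executable can use the extra user address space is recorded as a flag in its PE header; here is a sketch that checks the running process for it (reading the headers through the module handle is standard, but this is a quick illustration rather than production code):

/* Sketch: check whether the running executable was linked with
 * /LARGEADDRESSAWARE, i.e. whether it can use the extra address space
 * that /3GB (or a 64-bit OS) offers a 32-bit process. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* The EXE's module handle is its load address, where the DOS and NT
     * headers live. */
    unsigned char *base = (unsigned char *)GetModuleHandle(NULL);
    IMAGE_DOS_HEADER *dos = (IMAGE_DOS_HEADER *)base;
    IMAGE_NT_HEADERS *nt = (IMAGE_NT_HEADERS *)(base + dos->e_lfanew);

    if (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
        printf("Large-address aware: can use more than 2 GB of address space.\n");
    else
        printf("Not large-address aware: limited to 2 GB of address space.\n");
    return 0;
}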
Option 2: use the /PAE switch in Boot.ini.
Option 2 pros and cons:
(PRO) This is really the only option if you have more than 4 GB of RAM. It tricks an application by placing the complete application memory footprint in RAM. Normally, only an application's "working set" is in RAM, and the rest of the application's memory requirements go into the Windows pagefile. What are an application's total memory requirements? That figure is called its "virtual size".
In my world, I deal with a big, fat Java-based IBM product. The server running the application has 16 GB of RAM. I simply added the /PAE switch and watched (thanks to Sysinternals Process Explorer) the application's paging requests go from 200 KB per second up to 4 MB per second.
Question: "Why"?
Answer: The whole application is in RAM.
Question: "Does the application know that it is completely running in RAM?
Answer: No - It is running that same old way that it was always run, "THINKING" that it's has part of itself as the "Working Set" memory living in RAM and the remaining application memory requirements go into Windows Pagefile.
Yes, it is that flipping GOOD.
Please Note: Microsoft has done a poor job telling anyone about the great Windows OS option. Duh
Try it and report back to stackoverflow....