On Varnish 4.1, when I use RAM as the caching backend, requests come in and after a while the server's RAM gradually starts to fill up; once it fills completely, the server crashes and then Varnish starts caching into RAM again.
I assigned the following variables in the systemd service file for varnish.service, but it still behaves the same way and crashes again:
LimitMEMLOCK=14336
MemoryLimit=13G
MemoryHigh=13G
MemoryMax=13G
How can I limit Varnish to a specific amount of memory that it cannot exceed?
#Version used:
Varnish 4.1
#Operating System and version:
Ubuntu 16.04
#Source of binary packages used (if any)
Installed from the official Ubuntu packages
You will have to limit both the malloc storage and the Transient storage,
i.e. as startup parameters: -s malloc,3G -s Transient=malloc,1G
In general, the RAM allocated to Varnish should not exceed 80% of the total RAM available on the system.
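As a sketch of where those arguments go with the packaged systemd unit on Ubuntu 16.04 (the listen/VCL/secret flags below are just the usual package defaults and may differ in your installed unit; only the two -s arguments matter here), edit the unit with systemctl edit --full varnish.service and then restart the service:

ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 \
    -f /etc/varnish/default.vcl -S /etc/varnish/secret \
    -s malloc,3G -s Transient=malloc,1G

Note that the -s sizes cap only the cache storage itself; per-object overhead and the worker threads add memory on top of that, which is why the 80% rule of thumb leaves headroom.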
I want to set the heap size for a Go application on my Windows machine.
In Java we used to provide -Xms settings as VM arguments in IntelliJ, but how do I provide a similar setting in Go and set a memory limit for the Go application?
I tried
<env name="GOMEMLIMIT" value="2750MiB" />
but it is not working.
We are using Go version 1.6.2.
Go 1.19 adds support for a soft memory limit:
The runtime now includes support for a soft memory limit. This memory limit includes the Go heap and all other memory managed by the runtime, and excludes external memory sources such as mappings of the binary itself, memory managed in other languages, and memory held by the operating system on behalf of the Go program. This limit may be managed via runtime/debug.SetMemoryLimit or the equivalent GOMEMLIMIT environment variable.
You can't set a hard limit, as that would make your app malfunction if it needed more memory.
To set a soft limit from your app, simply use:
debug.SetMemoryLimit(2750 * 1 << 20) // 2750 MiB
To set a soft limit outside of your app, use the GOMEMLIMIT env var, e.g.:
GOMEMLIMIT=2750MiB
But please note that doing so may make your app's performance worse, as it may force more frequent garbage collection and return memory to the OS more aggressively even if your app will need it again.
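For completeness, a minimal runnable sketch of the in-app approach (the 2750 MiB figure is just the value from the question; requires Go 1.19+):

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Set a soft memory limit of 2750 MiB for everything the Go runtime
	// manages (heap plus other runtime-owned memory).
	// SetMemoryLimit returns the previously configured limit.
	prev := debug.SetMemoryLimit(2750 * 1 << 20)
	fmt.Printf("previous soft limit: %d bytes\n", prev)

	// ... rest of the application ...
}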
I read somewhere that "a modern server has 144GB of RAM"; is all of that 144GB used as cache?
When we talk about a server's cache, does that mean the server's memory?
It all depends on the caching method used by the applications that run on the server. There are numerous caching methods, but two that are frequently used are persistent caching and in-memory caching.
With persistent caching, the application stores cached values somewhere intended to be “permanent”, such as the file system, a database, or similar.
With in-memory caching, the application uses memory (i.e. RAM, the 144GB in your question) to store data. With this method the data is intended to be semi-permanent and does not persist across reboots, application restarts, and so on.
When coding, if you allocate a new object, dictionary, list, or similar, those objects are stored in memory. Additionally, not all of a server's memory is available to the applications that run on it: the operating system and every other installed process use the same RAM. It is therefore common for a device with 4GB of RAM to have only about 2GB reasonably usable, as the other 2GB is used by the operating system. Of course, these numbers depend on a lot of factors.
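A toy illustration of the in-memory flavour (a minimal Go sketch; the memCache type and its methods are made up purely for illustration): every value lives only in the process's RAM and is gone after a restart.

package main

import (
	"fmt"
	"sync"
)

// memCache is a minimal in-memory cache: a map guarded by a mutex.
// Nothing stored here survives a process restart or a reboot.
type memCache struct {
	mu    sync.RWMutex
	items map[string]string
}

func newMemCache() *memCache {
	return &memCache{items: make(map[string]string)}
}

func (c *memCache) Set(key, value string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = value
}

func (c *memCache) Get(key string) (string, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	v, ok := c.items[key]
	return v, ok
}

func main() {
	c := newMemCache()
	c.Set("user:42", "Alice")
	if v, ok := c.Get("user:42"); ok {
		fmt.Println(v) // prints "Alice"
	}
}

A persistent cache would instead write those values to disk or a database so they survive restarts.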
So I have an Elasticsearch cluster running inside Kubernetes.
The machine it is running on has 30 GB of RAM and 8 cores.
Now, according to the rule of thumb, 50% of the RAM is what we set as the heap via ES_JAVA_OPTS, and the rest is left for file caching.
Here that would be 15 GB.
Also, in the Helm chart we have resource requirements specified like below:
resources:
  limits:
    cpu: 8
    memory: 15Gi
  requests:
    cpu: 8
    memory: 15Gi
My question is whether that 50% refers to the host machine's RAM (which is 30 GB) or to the limit specified in the Helm chart (15 GB).
Can someone explain how Kubernetes accounts for the RAM a pod uses?
Because if it is relative to the host and the file cache is not counted as utilisation of the deployed application, we are OK. But if it counts against the resource limits, I need to increase the limit to 30GB.
Edit:
The question here is: if an Elasticsearch node uses 50% of RAM as heap and 50% for file caching, and I set the heap to 15GB (50% of the RAM) on a 30GB machine, should I set the resource limit in the deployment template to roughly the 15GB the heap requires, or to close to 30GB (say 28GB) so that, per the rule, Elasticsearch is able to cache files?
This is a concern because if the pod exceeds the limit mentioned in the template at any given moment, Kubernetes restarts the pod.
So, in other words, I want to know whether the RAM used for file caching counts towards the overall memory usage of the pod or not.
Note: I am using instance storage as the primary storage for the ES data, as it is extremely fast compared to EBS.
Conclusion:
Keep the heap at half of the RAM in the system, and at half of whatever is mentioned in the resource limits (if any).
I am not an expert in k8s and Docker, but what I understand is that a Docker container uses the host's resources, and using a resource limit you can put a hard cap on the resources it can consume.
If you put a resource limit of 15GB, then overall your Docker container can consume 15GB of host RAM. Whether it will share the file system cache with the host or not depends on how you have configured your Docker volume.
Docker containers have the option of sharing the file system with the host using a bind volume, or of having their own data volume (which is ephemeral and not suited to ES, as it is a stateful application). With the first option the container should share the file system cache with the host, and you should not need to increase the resource limit further (recommended, since ES is stateful); with the second option, as the container will use its own file system, you have to allocate RAM for its file system cache and increase the limit towards 30 GB, but you also have to leave some room for the host OS.
A container will always see the node's memory rather than its own. In Kubernetes, even though you set a memory limit for a container, the container itself is not aware of this limit.
This affects applications that look up the memory available on the system and use that information to decide how much memory they want to reserve.
This is why you set the JVM heap size explicitly. Without it, the JVM picks the maximum heap size based on the host/node total memory instead of the memory actually available to the container (the amount you declared as the limit).
Check out this article about how limits work in k8s.
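As a sketch of how this usually looks in practice (assuming the value names from the official elastic/elasticsearch Helm chart, esJavaOpts and resources; the 28Gi/14g figures are only illustrative for a 30 GB node, leaving headroom for the OS), the heap is sized at roughly half of the container limit, not half of the node:

esJavaOpts: "-Xms14g -Xmx14g"   # roughly half of the container limit below

resources:
  requests:
    cpu: 8
    memory: 28Gi
  limits:
    cpu: 8
    memory: 28Gi

In general the page cache built for the container's files is charged to the pod's memory cgroup, but reclaimable cache is evicted under memory pressure rather than triggering an OOM kill, so it is the heap and other non-reclaimable memory that the limit really needs to cover.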
What should the SGA size be relative to the total RAM available on the system? I found that for SQL Server the guidance is to allocate 4GB or 10% of available RAM, whichever is higher.
It depends on how much memory you are going to assign to the different components of the SGA, such as the buffer cache and the shared pool, and on whether you are going to use dedicated server mode or shared server mode.
You should also consider whether other applications are going to run on the same server.
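For reference, a hedged sketch of how an SGA ceiling is typically set when using automatic shared memory management (the 8G figure is purely illustrative; choose a value that leaves room for the PGA, the OS, and any other applications on the server):

ALTER SYSTEM SET sga_max_size = 8G SCOPE=SPFILE;
ALTER SYSTEM SET sga_target   = 8G SCOPE=SPFILE;
-- sga_max_size is a static parameter, so restart the instance for it to take effect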
Asktom: SGA Size
I have a site hosted on Windows Azure shared websites. It just got suspended for going over the memory usage limit of 512MB/hour.
I do use .NET caching rather heavily (to prevent repeated calls to the database/external APIs, etc.).
Is that caching a no-no for shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that e.g. the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
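For example, a sketch of capping the default MemoryCache instance from web.config (the 250MB figure is only illustrative):

<configuration>
  <system.runtime.caching>
    <memoryCache>
      <namedCaches>
        <!-- Cap the default MemoryCache at 250 MB; 0 leaves the
             physical-memory percentage limit at its default heuristic. -->
        <add name="Default"
             cacheMemoryLimitMegabytes="250"
             physicalMemoryLimitPercentage="0"
             pollingInterval="00:02:00" />
      </namedCaches>
    </memoryCache>
  </system.runtime.caching>
</configuration>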
Even if you stop using the cache yourself, it can still be used by the framework/libraries. I have the same problem (interestingly, in free mode the memory limit is 1024MB, but in shared mode it is lowered to 512MB).
As far as I can see, the memory figure that Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15" />
  </caching>
</system.web>
Several days ago I set 300MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250MB.
But anyway, this is a very unclear, strange and "wrong" solution IMHO.
UPDATE
Got suspended again this morning. I have temporarily converted to standard mode with a small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8-fold (Gen0 is at 1800 collections instead of 250 in less than a day).
My current theory is that in "shared" mode websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to longer "garbage lifetimes" and higher memory consumption.
I have no access to my development machine right now to verify this, but I'm planning to convert the site to a web role in a cloud service ASAP, with an extra small instance (the cost is comparable to the shared website cost)...
It might be worth profiling with perfmon on your local machine first, to see whether it hits the limits under normal use, then look at configuring the logging on Azure and digging through that as well.
Also, making sure everything is precompiled and that you're not loading modules etc. that you don't need can really affect performance on Azure.
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.