Golang Virtual Memory - heroku

Using the Gorelic plugin, I've noticed that the virtual memory on my heroku dyno is steadily climbing even though the memory usage looks fine. The virtual memory never appears to exceed 1000M, but when it gets close I start to notice performance degradation.
I've used pprof to profile the memory usage of my app and it looks to be free of leaks. Also, everything I've read suggests that Go reserves large blocks of virtual memory and that it's nothing to worry about.
Is it possible that the large virtual memory usage is affecting performance on my 1x Heroku dyno? Is it just that I need to use 2x dynos because Go is a memory hog? Is it possible to configure Go so that it uses less virtual memory?
Thanks!
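On the last question: yes, within limits. The Go runtime honors the GOGC environment variable (settable on a dyno with heroku config:set GOGC=20), and the runtime/debug package exposes the same knob programmatically along with FreeOSMemory, which forces a collection and returns freed pages to the OS. A minimal sketch; the 20% target and five-minute interval are arbitrary placeholders, not tuned values:

    package main

    import (
        "net/http"
        "runtime/debug"
        "time"
    )

    func main() {
        // Collect when the heap grows 20% past the live set instead of
        // the default 100%. This trades GC CPU for a smaller heap.
        debug.SetGCPercent(20)

        // Periodically force a GC and return as much freed memory as
        // possible to the operating system.
        go func() {
            for range time.Tick(5 * time.Minute) {
                debug.FreeOSMemory()
            }
        }()

        // Placeholder server; substitute your real handler.
        http.ListenAndServe(":8080", http.NotFoundHandler())
    }

Note this shrinks the resident heap, not the virtual reservation; the large virtual mapping is mostly harmless, and what actually hurts on a 1x dyno is resident memory exceeding the 512MB quota and spilling to swap (the R14 warning in your Heroku logs).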

Related

Function that requires high CPU

I have a piece of code that I need it to run as fast as possible. It is written in C# and I cannot change anything about that code (and jeez I don't want to).
I've been given the task of making it run as fast and as cheap as possible.
So far I have tried DigitalOcean servers and found that they offer optimized servers. It runs much better there, though I still couldn't make it as cheap as I wanted.
So far I was able to run it in ~4s on a $160 DigitalOcean server (8 cores); it actually runs faster on my PC (16 cores).
Now I am new to all the cloud computing, but is there any cloud provider that can get me more CPU cores with less memory/networking for cheaper?
This code runs at the same speed on a 2GB machine; I don't need 32GB.
I read about AWS Lambda; is it really faster? I couldn't find anywhere that they publish the CPU core counts of their "serverless" servers.

Can I stop Windows from over-eagerly reclaiming physical memory?

I am writing a server app which I want to efficiently use ALL available physical RAM of the machine when possible. The plan is that it will allocate physical pages using AWE until it detects that 99% of physical memory is in use and stop while 1% is still free, and any time free physical memory drops below 1%, it will release physical pages it doesn't need.
However when I put this plan into practice, Windows seems to think any time it has 99% of RAM in use it would be a good idea to free up more physical memory, and so it starts paging all sorts of stuff to disk, and my system crashes.
How can I tell Windows it is OK to have 99% of RAM in use, and that it doesn't need to page stuff out to disk until usage reaches whatever its default perceived ideal level is (I guess something like 90%)?
Note: Raymond says 'Unless you are designing a system where you are the only program running on the computer, this is a bad idea.'
In this server scenario this is basically intended to be the only app running on the computer. But unfortunately there are some OS/background tasks that need to run...
But certainly I don't expect there is any other process on the computer indulging in this 'use all but 1% of RAM' behavior...?
Update: I've done more experimentation and started to wonder if I'm somewhat asking the wrong question. My assumption that windows is being overeager may be wrong. Perhaps the question should instead have been 'how can I determine how much physical RAM my process can safely use without compromising overall responsiveness on the machine'?
You can't. The Windows memory manager runs at a lower level than your program and knows nothing about your program (and even if it did, it has no reason to assume your program is the good citizen you claim it to be. What if your program crashes, or has an off-by-one error in a loop that mallocs? What about other programs that need memory while yours is running? What about the thousand other scenarios that the guys who wrote the Windows MM encountered when they were writing it?)
Don't try to be cleverer than Windows. A more productive use of your time would be to consider if your application really needs to allocate 99% physical memory up front.
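Regarding the updated question: rather than guessing a safe percentage up front, you can poll the memory manager's own numbers with GlobalMemoryStatusEx and grow or shrink the AWE allocation adaptively. A sketch of the query, shown in Go for brevity; the struct mirrors the documented Win32 MEMORYSTATUSEX layout, and which thresholds you react to is up to you:

    package main

    import (
        "fmt"
        "syscall"
        "unsafe"
    )

    // memoryStatusEx mirrors the Win32 MEMORYSTATUSEX structure.
    type memoryStatusEx struct {
        Length               uint32
        MemoryLoad           uint32 // rough % of physical memory in use
        TotalPhys            uint64
        AvailPhys            uint64
        TotalPageFile        uint64
        AvailPageFile        uint64
        TotalVirtual         uint64
        AvailVirtual         uint64
        AvailExtendedVirtual uint64
    }

    func main() {
        proc := syscall.NewLazyDLL("kernel32.dll").NewProc("GlobalMemoryStatusEx")

        var ms memoryStatusEx
        ms.Length = uint32(unsafe.Sizeof(ms)) // the API requires dwLength to be set
        if ret, _, err := proc.Call(uintptr(unsafe.Pointer(&ms))); ret == 0 {
            panic(err)
        }
        fmt.Printf("physical: %d MB total, %d MB available (%d%% load)\n",
            ms.TotalPhys>>20, ms.AvailPhys>>20, ms.MemoryLoad)
    }

Backing off as AvailPhys shrinks, instead of pinning a fixed 99%, leaves the memory manager room to keep other processes' working sets responsive.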

Understanding CentOS Memory usage

I am not an OS expert, and I am having trouble understanding my server's memory usage. I need your advice to understand the following:
My server has 8 GB RAM and operates as a web server. PHP, MySQL and Apache processes consume the majority of the memory. When I issue the command "free" after the system is rebooted, I normally see something along these lines:
                 total       used       free     shared    buffers     cached
    Mem:       8059080    2277924    5781156          0        948     310852
    -/+ buffers/cache:    1966124    6092956
    Swap:      4194296          0    4092668
Obviously, sooner or later the free memory drops and the cached memory increases, and I assume there is nothing wrong with that, since the OS decides what to cache.
What I don't understand is that about 1-2 days after the machine is rebooted, I see a slight increase in used swap. Doesn't this mean that the server has no free memory anymore and is using disk I/O instead? How can I work out which processes cause this?
I am asking this question on Stack Overflow because if I ask my hosting provider, I am sure they will just ask for more money to increase the RAM.
Thanks.
This is perfectly normal. When the machine starts up, a large number of services also start up. As they run their startup code, read their configuration, and so on, they dirty some pages of memory. Many of these services will never run again. By writing this data to swap, the operating system accomplishes two things:
First, if it ever does encounter memory pressure, it can discard the pages without having to write them first, since it has already written them. Second, it can discard the pages to make more free memory to enlarge the cache.
The alternative is to keep information that hasn't been touched in days in physical memory. And that just doesn't make sense.
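As for seeing which processes own the swapped pages: kernels from 2.6.34 on expose per-process swap as the VmSwap field in /proc/<pid>/status. A rough sketch that lists swap users, assuming read access to /proc; the output formatting is arbitrary:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        pids, _ := filepath.Glob("/proc/[0-9]*")
        for _, dir := range pids {
            data, err := os.ReadFile(filepath.Join(dir, "status"))
            if err != nil {
                continue // process exited, or permission denied
            }
            var name, swap string
            for _, line := range strings.Split(string(data), "\n") {
                if strings.HasPrefix(line, "Name:") {
                    name = strings.TrimSpace(strings.TrimPrefix(line, "Name:"))
                } else if strings.HasPrefix(line, "VmSwap:") {
                    swap = strings.TrimSpace(strings.TrimPrefix(line, "VmSwap:"))
                }
            }
            if swap != "" && swap != "0 kB" {
                fmt.Printf("%-8s %10s  %s\n", filepath.Base(dir), swap, name)
            }
        }
    }

A few megabytes spread across long-idle daemons is exactly the benign pattern described above; sustained swap-in/swap-out activity (the si/so columns in vmstat) is what would indicate real memory pressure.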

Hyper-V and Virtualisation memory management

Hypervisors and Memory Management
I have been using virtual machines for years and never really had any issues. I have primarily used VMware's free single ESXi host and had nothing but success. Because I have never had any issues, I have never delved much deeper. I have, however, always been very wary of loading the system up, and I keep a lot of spare resources handy.
I have recently purchased a new server and we have decided to give Hyper-V a try and see how that goes. We have a fairly small team but utilise lots of servers for testing etc.
My question relates to memory and how much I need to leave free or available for the host machine to run appropriately.
Setup: Dell server, 24 cores, 48GB RAM
When I run taskmgr in the windows host instance I see the following:
    Physical Memory (MB)
      Total:     49139
      Cached:    14933
      Available: 17743
      Free:       2982
What exactly do these figures mean? What is the difference between free and available?
My server hardly ever uses any CPU and has 10 production servers running on it, without a single user complaint about the speed of the services.
Am I able to run up another server with 2GB RAM, effectively leaving 982MB free? Or am I starting to push my requirements a little?
Thanks for the help.
You shouldn’t use the host partition for anything other than Hyper-V (although you can run security and infrastructure software such as management agents, backup agents and firewalls). Therefore, that 2GB recommendation assumes you aren’t going to run any extra applications or server roles in the parent partition.
Hyper-V doesn’t let you allocate memory directly to the host partition. It essentially uses whatever memory is left over. Therefore, you have to remember to leave 2GB of your host server’s memory not allocated so it’s available for the parent partition.
Source

Memory is not becoming available even after freeing it

I am writing a simple memory manager for my application which will free any excess memory being used by my modules, so that it's available again for use. My modules allocate memory using alloc_page & kmem_cache_alloc and so the memory manager uses put_page & kmem_cache_free respectively to free up the memory which is not in use.
The problem I am facing is that even after I free the memory using put_page & kmem_cache_free, my modules are not able to get that memory back. I have written test code which allocates a lot of memory in a loop and, when out of memory, sleeps until the memory manager frees some up. The memory manager successfully executes its free code and wakes the sleeping process, since memory should be available again. Interestingly, the alloc_page / kmem_cache_alloc calls still fail to allocate memory. I am clueless as to why this is happening, so I am seeking help.
I figured out the problem: it was the API I was using. If I use __free_page instead of put_page to free the memory pages, everything works absolutely fine.
