Limit the memory allocation in Go? - go

I'm looking for a way to limit the memory usage in Go. My Go application has to load a large amount of data into main memory, and I want to limit the maximum memory size of the process to a size specified by the user.
In C, I actually accumulate the sizes of the malloc'ed memory to do that, but I don't know how to do the same thing in Go.
Please let me know if there is a way to do it.
Thank you.

The Go garbage collector is not deterministic and it is conservative, so relying on the runtime.MemStats variable is not going to be accurate for your purpose.
Instead, cap your approximate memory usage by limiting the maximum size of data you allow to be loaded into the process at one time, based on the size specified by the user.
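That said, if you do want a rough runtime-side number to check against, a minimal sketch using runtime.ReadMemStats could look like the following; the 512 MB cap and the check itself are placeholders for whatever the user specifies:

package main

import (
	"fmt"
	"runtime"
)

// reportHeapUsage prints a rough snapshot of the heap as seen by the Go runtime.
// HeapAlloc counts bytes of heap objects that are live or not yet collected,
// so it is only an approximation of the process's real footprint.
func reportHeapUsage(limit uint64) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("heap in use: %d bytes (limit %d)\n", m.HeapAlloc, limit)
	if m.HeapAlloc > limit {
		fmt.Println("over the configured limit; stop loading more data")
	}
}

func main() {
	reportHeapUsage(512 << 20) // e.g. a user-supplied 512 MB cap
}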

Perhaps you want to use ulimit in conjunction with your Go code?
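On Linux, the process can also cap its own address space at startup with setrlimit instead of relying on the shell's ulimit. A hedged sketch (Linux-only; the 1 GiB figure is just an example):

//go:build linux

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Cap the virtual address space of this process at 1 GiB; allocations
	// beyond that will fail instead of growing the process further.
	limit := &syscall.Rlimit{Cur: 1 << 30, Max: 1 << 30}
	if err := syscall.Setrlimit(syscall.RLIMIT_AS, limit); err != nil {
		fmt.Println("setrlimit failed:", err)
	}
	// ... load data as usual; allocations past the limit will fail.
}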

You can do this via runtime/debug.SetMemoryLimit (Go 1.19+).
See here for the original proposal.
Take a look here for the GitHub issue.
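A minimal sketch; the 1 GiB limit is just an example value:

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Set a soft memory limit of 1 GiB for the whole Go runtime; the GC works
	// harder as the limit is approached, but it is not a hard cap.
	prev := debug.SetMemoryLimit(1 << 30)
	fmt.Println("previous limit:", prev)

	// The same limit can also be set without code changes through the
	// GOMEMLIMIT environment variable, e.g. GOMEMLIMIT=1GiB.
}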

Besides runtime.MemStats, you could use gosigar to monitor system memory.
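A hedged sketch, assuming gosigar exposes a sigar.Mem type with a Get method as described in its README; check the package documentation for the exact field names:

package main

import (
	"fmt"

	sigar "github.com/cloudfoundry/gosigar"
)

func main() {
	// Query overall system memory, as opposed to the Go runtime's own stats.
	mem := sigar.Mem{} // assumed gosigar type; see the package docs
	if err := mem.Get(); err != nil {
		panic(err)
	}
	fmt.Printf("total: %d, used: %d, free: %d bytes\n", mem.Total, mem.Used, mem.Free)
}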

Related

How could I make a Go program use more memory? Is that recommended?

I'm looking for an option similar to -Xmx in Java, that is, a way to assign the maximum runtime memory that my Go application can utilise. I was checking the runtime package, but I'm not entirely sure if that is the way to go.
I tried setting something like this with func SetMaxStack() (likely very stupid):
debug.SetMaxStack(5000000000) // bytes
model.ExcelCreator()
The reason I am looking to do this is that currently there is an ample amount of RAM available, but the application won't consume more than 4-6%. I might be wrong here, but this could be forcing GC to happen much more often than needed, leading to a performance issue.
What I'm doing
Getting a large dataset from an RDBMS and processing it to write out an Excel file.
Another reason why I am looking for such an option is to limit the maximum usage of RAM on the server where it will be ultimately deployed.
Any hints on this would be greatly appreciated.
The current stable Go (1.10) has only a single knob which may be used to trade memory for lower CPU usage by the garbage collection the Go runtime performs.
This knob is called GOGC, and its description reads
The GOGC variable sets the initial garbage collection target percentage. A collection is triggered when the ratio of freshly allocated data to live data remaining after the previous collection reaches this percentage. The default is GOGC=100. Setting GOGC=off disables the garbage collector entirely. The runtime/debug package's SetGCPercent function allows changing this percentage at run time. See https://golang.org/pkg/runtime/debug/#SetGCPercent.
So basically, setting it to 200 would supposedly double the amount of memory your running process's Go runtime may use.
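For instance, the same thing can be done from code via runtime/debug.SetGCPercent; the value 200 below is just an example:

package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Equivalent to running the program with GOGC=200: the next collection is
	// triggered when the heap has grown by 200% over the live data left after
	// the previous one, roughly doubling the memory the runtime may use.
	old := debug.SetGCPercent(200)
	fmt.Println("previous GOGC percentage:", old)
}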
Having said that, I'd note that the Go runtime actually tries to adjust the behaviour of its garbage collector to the workload of your running program and the CPU processing power at hand.
I mean that normally there's nothing wrong with your program not consuming lots of RAM: if the collector happens to sweep the garbage fast enough without hampering performance in a significant way, I see no reason to worry. Go's GC is one of the most intensely fine-tuned parts of the runtime, and it works very well in fact.
Hence you may try to take another route:
1) Profile the memory allocations of your program.
2) Analyze the profile and try to figure out where the hot spots are, and whether (and how) they can be optimized. You might start here and continue with the gazillion other intros to this stuff.
3) Optimize. Typically this amounts to making certain buffers reusable across different calls to the same function(s) consuming them, preallocating slices instead of growing them gradually, using sync.Pool where deemed useful, etc. (see the sketch after this list). Such measures may actually increase the memory truly used (that is, by live objects, as opposed to garbage) but lower the pressure on the GC.
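A minimal sketch of that last step, with made-up types and sizes, showing buffer reuse via sync.Pool and slice preallocation:

package main

import (
	"bytes"
	"sync"
)

// bufPool lets callers reuse buffers instead of allocating a fresh one per
// call, which lowers the allocation rate and therefore the GC pressure.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

// writeRow renders one row of cells; the types and formatting are made up
// purely to illustrate the buffer-reuse pattern.
func writeRow(cells []string) []byte {
	buf := bufPool.Get().(*bytes.Buffer)
	defer func() {
		buf.Reset()
		bufPool.Put(buf)
	}()
	for _, c := range cells {
		buf.WriteString(c)
		buf.WriteByte('\t')
	}
	// Copy the bytes out, because buf goes back into the pool.
	out := make([]byte, buf.Len())
	copy(out, buf.Bytes())
	return out
}

func main() {
	// Preallocate the result slice when the row count is known up front,
	// instead of growing it append by append.
	rows := make([][]byte, 0, 10000)
	for i := 0; i < 10000; i++ {
		rows = append(rows, writeRow([]string{"a", "b", "c"}))
	}
	_ = rows
}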

How to find the number of I/Os of a program in STXXL?

I am using STXXL. Can somebody help me find the number of I/Os (or blocks transferred) done by my program (or algorithm, or process)? I know how to restrict the memory usage of any particular process, but I don't know how to restrict the block size in STXXL or how to count the number of blocks transferred.
STXXL provides an I/O performance counter (see here), which stores various measured I/O data, including the number of blocks transferred.
If you are on Linux, blktrace will keep track of block I/O for you. I don't know about other systems.

Allocated memory growth on web servers

I am trying to create a web application where the allocated memory can at times grow exponentially, and I would like to estimate the maximum memory that can be allocated by an application on a web server at any given time so that it would still run normally. Any feedback would be appreciated. Thanks.
You can use a binary-search-style technique in your experiments. You have to run experiments to see how much memory you need: start with an educated guess and use binary search to increase or decrease it until you are satisfied. Is there anything specific you are asking?

Allocate a large (32 MB) contiguous region

Is it at all possible to allocate large (i.e. 32 MB) physically contiguous memory regions from kernel code at runtime (i.e. not using bootmem)? From my experiments, it seems like it's not possible to get anything more than a 4 MB chunk successfully, no matter what GFP flags I use. According to the documentation I've read, GFP_NOFAIL is supposed to make the kmalloc just wait as long as is necessary to free the requested amount, but from what I can tell it just makes the request hang indefinitely if you request more than is available; it doesn't seem to be actively trying to free memory to fulfil the request (i.e. kswapd doesn't seem to be running). Is there some way to tell the kernel to aggressively start swapping stuff out in order to free the requested allocation?
Edit: So I see from Eugene's response that it's not going to be possible to get a 32 MB region from a single kmalloc... but is there any possibility of getting it done in more of a hackish kind of way? Like identifying the largest available contiguous region, then manually migrating/swapping away data on either side of it?
Or how about something like this:
1) Grab a bunch of 4 MB chunks until you're out of memory.
2) Check them all to see if any of them happen to be contiguous, if so,
combine them.
3) kfree the rest
4) goto 1)
Might that work, if given enough time to run?
You might want to take a look at the Contiguous Memory Allocator patches. Judging from the LWN article, these patches are exactly what you need.
Mircea's link is one option; if you have an IOMMU on your device you may be able to use that to present a contiguous view over a set of non-contiguous pages in memory as well.

What's the Best Way to Measure Memory Use from a Program?

I've been trying to optimize the Windows program I am developing, trying to find the best data structures that will minimize memory use. It loads large blocks of data, so with huge files, it can use a large amount of RAM.
For memory measurement, I have been using GlobalMemoryStatusEx. See: http://msdn.microsoft.com/en-us/library/aa366589(VS.85).aspx
I believe this works for most flavors of Windows, from Windows 2000 right up to and including Windows Vista.
Is this the preferred way to measure memory use from within a program, or is there another, better way?
Addendum: I found the Stack Overflow question How to get memory usage under Windows in C++, which references GetProcessMemoryInfo.
I'll try that one out.
If you are trying to optimize your own program memory-wise, I suggest you use a memory profiler tool for that.
There are many out there; some are free, some are not. You will surely find the one you need.
Those tools are written specifically for what you need (and also for finding memory leaks), so it will be hard to match them by doing something like that on your own from within your own program :)
See the addendum in my question.
I use valgrind to track memory usage, as well as to profile code and detect memory leaks. The massif tool, I believe, tracks memory usage on the stack and heap.
