Hello world - what is a simple program to use 16GB of memory? - memory-management

How can I allocate a large amount of memory with 16GB ram? Please provide a simple C/C++ program as an example.
E.g.
#include <stdlib.h>

int main(void)
{
    // (10 gigabytes) / (4 bytes) = 2 684 354 560
    int *hugearray = malloc(2684354560ULL * sizeof(int));
}
...obviously that doesn't work.

malloc() does allocate the memory, but most OSes will only give you virtual address space until you actually read or write within that memory, at which point they start allocating backing physical memory or swap. You simply need to loop over the allocation, writing some garbage values into it.
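For example, here is a minimal sketch along those lines (assuming a 64-bit Linux-like system and a 4 KiB page size; numbers chosen to match the 10 GB in the question's comment). It allocates the block and then writes one byte per page so the kernel has to back it with physical memory or swap:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t bytes = 10ULL * 1024 * 1024 * 1024;   /* 10 GiB, computed in 64-bit arithmetic */
    char *p = malloc(bytes);
    if (!p) {
        perror("malloc");
        return 1;
    }
    /* Until this loop runs, most of the allocation is only reserved address
       space; writing one byte per 4 KiB page forces it to become resident. */
    for (size_t i = 0; i < bytes; i += 4096)
        p[i] = (char)i;
    puts("memory touched - check resident size with top or free");
    getchar();   /* keep the process alive so the usage can be observed */
    free(p);
    return 0;
}

Watching the process in top while the loop runs shows the resident set size growing, which is exactly the lazy-backing behaviour described above.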

The sample program works fine if you change the declaration from int to long.
I'm running Mint Linux on a 64-bit Intel-esque CPU with 16GB of memory.

Related

Why can't 64-bit Windows allocate a lot of virtual memory?

On a system with virtual memory, it should be possible to allocate lots of address space, more than you have physical RAM, and then only write to as much of it as you need.
On a 32-bit system of course there is only four gigabytes of virtual address space, but that limit disappears on a 64-bit system.
Granted, Windows doesn't use the full 64-bit address space; apparently it uses 44 bits. That is still sixteen terabytes, so there should be no problem with allocating e.g. one terabyte: Behind Windows x64's 44-bit virtual memory address limit
So I wrote a program to test this, attempting to allocate a terabyte of address space in chunks of ten gigabytes each:
#include <new>
#include <stdio.h>
#include <stdlib.h>

int main() {
    // Abort with a message as soon as operator new can no longer allocate.
    std::set_new_handler([]() {
        perror("new");
        exit(1);
    });
    for (int i = 0; i < 100; i++) {
        auto p = new char[10ULL << 30];   // 10 GiB per chunk
        printf("%p\n", p);
    }
}
Run on Windows x64 with 32 gigabytes of RAM, it gives this result (specifics differ between runs, but always qualitatively similar):
0000013C881C1040
0000013F081D0040
00000141881E2040
00000144081F1040
0000014688200040
0000014908219040
0000014B88226040
0000014E08232040
0000015088246040
0000015308252040
0000015588260040
new: Not enough space
So it only allocates 110 gigabytes before failing. That is larger than physical RAM, but much smaller than the address space that should be available.
It is definitely not trying to actually write to the allocated memory (that would require the allocation of physical memory); I tried explicitly doing that with memset immediately after allocation, and the program ran much slower, as expected.
So where is the limit on allocated virtual memory coming from?

How to Resolve this Out of Memory Issue for a Small Variable in Matlab?

I am running a 32-bit version of Matlab R2013a on my computer (4GB RAM, and 32-bit Windows 7).
I have a dataset (~60 MB) and I want to read it using
ds = dataset('File', myFile, 'Delimiter', ',');
And each time I get an Out of Memory error. Theoretically, I should be able to use 2 GB of RAM, so there should be no problem reading such a small file.
Here is what I got when I typed memory:
Maximum possible array: 36 MB (3.775e+07 bytes) *
Memory available for all arrays: 421 MB (4.414e+08 bytes) **
Memory used by MATLAB: 474 MB (4.969e+08 bytes)
Physical Memory (RAM): 3317 MB (3.478e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I followed every instruction I found (this is not a new issue), but my case seems rather odd, because I cannot even run a simple program now.
System: Windows 7 32 bit
Matlab: R2013a
RAM: 4 GB
Clearly your issue is right here.
Maximum possible array: 36 MB (3.775e+07 bytes) *
You are either using a lot of memory elsewhere in your system, or you have very little swap space (or both).

Why do we need external sort?

The main reason for external sort is that the data may be larger than the main memory we have. However, we use virtual memory now, and the virtual memory will take care of swapping between main memory and disk. Why do we need external sort then?
An external sort algorithm makes sorting large amounts of data efficient (even when the data does not fit into physical RAM).
While using an in-memory sorting algorithm and virtual memory satisfies the functional requirements for an external sort (that is, it will sort the data), it fails to achieve the non-functional requirement of being efficient. A good external sort minimises the amount of data read and written to external storage (and historically also seek times), and a general-purpose virtual memory implementation on top of a sort algorithm not designed for this will not be competitive with an algorithm designed to minimise IO.
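To make the contrast concrete, here is a minimal sketch of the classic two-phase external merge sort (sorted run formation, then a k-way merge). The run size, the cap on the number of runs, the use of tmpfile(), and the binary int32 stream on stdin are illustrative assumptions, and error handling is mostly omitted; this is not a tuned implementation:

#include <stdio.h>
#include <stdlib.h>

#define RUN_SIZE (1 << 20)   /* ints sorted in memory per run (assumed to fit in RAM) */
#define MAX_RUNS 64          /* illustrative cap on the number of runs */

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Phase 1: read the input in RUN_SIZE chunks, sort each chunk in memory
   with qsort, and write each sorted run to its own temporary file. */
static int make_runs(FILE *in, FILE *runs[])
{
    static int buf[RUN_SIZE];
    int nruns = 0;
    size_t n;
    while (nruns < MAX_RUNS && (n = fread(buf, sizeof(int), RUN_SIZE, in)) > 0) {
        qsort(buf, n, sizeof(int), cmp_int);
        runs[nruns] = tmpfile();              /* NULL check omitted for brevity */
        fwrite(buf, sizeof(int), n, runs[nruns]);
        rewind(runs[nruns]);
        nruns++;
    }
    return nruns;
}

/* Phase 2: k-way merge - repeatedly emit the smallest head element among
   the runs, refilling from the run it came from. */
static void merge_runs(FILE *runs[], int nruns, FILE *out)
{
    int head[MAX_RUNS], alive[MAX_RUNS];
    for (int i = 0; i < nruns; i++)
        alive[i] = (fread(&head[i], sizeof(int), 1, runs[i]) == 1);
    for (;;) {
        int best = -1;
        for (int i = 0; i < nruns; i++)
            if (alive[i] && (best < 0 || head[i] < head[best]))
                best = i;
        if (best < 0)
            break;   /* every run is exhausted */
        fwrite(&head[best], sizeof(int), 1, out);
        alive[best] = (fread(&head[best], sizeof(int), 1, runs[best]) == 1);
    }
}

int main(void)
{
    FILE *runs[MAX_RUNS];
    int nruns = make_runs(stdin, runs);   /* expects binary int32 records on stdin */
    merge_runs(runs, nruns, stdout);
    return 0;
}

The point of the structure is the IO pattern: every element is read and written roughly twice, once per phase, in large sequential blocks, which is exactly what paging an in-memory sort through virtual memory cannot guarantee.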
In addition to @Anonymous's answer that external sort is better optimised for less disk IO, sometimes using an in-memory sort with virtual memory is simply infeasible, because the virtual address space is smaller than the file.
For example, on a 32-bit system (there are still a lot of these), if you want to sort a 20 GB file, the 32-bit system only gives you 2^32 ~= 4 GB of virtual addresses, so the file you are trying to sort cannot fit in them.
This used to be a real issue when 64-bit systems were still not very common, and it is still an issue today for old 32-bit systems and some embedded devices.
However, even on a 64-bit system, as explained in the previous answers, the external sort algorithm is more optimised for the nature of sorting, and will require significantly less disk IO than letting the OS "take care of things".
I'm using Windows; in a command-line shell you can run "systeminfo", which gives my laptop's memory usage information:
Total Physical Memory: 8,082 MB
Available Physical Memory: 2,536 MB
Virtual Memory: Max Size: 11,410 MB
Virtual Memory: Available: 2,686 MB
Virtual Memory: In Use: 8,724 MB
I just wrote an app to test the maximum size of array I could initialize on my laptop.
public static void BurnMemory()
{
    for (var i = 1; i <= 62; i++)   // 1L << 63 would overflow, so stop at 62
    {
        long size = 1L << i;
        long gb = 4 * size / (1L << 30);   // array size in gigabytes (one int32 is 4 bytes)
        try
        {
            var arr = new int[size];
            Console.WriteLine("Successfully initialized an array of size 2^" + i);
        }
        catch (OutOfMemoryException)
        {
            Console.WriteLine("Reached the memory limit initializing an array of size 2^{0} int32 = 4 x {1} B = {2} GB", i, size, gb);
            break;
        }
    }
}
It terminates when it tries to initialize an array of size 2^29:
Reached the memory limit initializing an array of size 2^29 int32 = 4 x 536870912 B = 2 GB
What I get from the test:
It is not hard to reach the memory limitation.
We need to understand our server's capability, then decide whether to use an in-memory sort or an external sort.

How do I increase memory limit (contiguous as well as overall) in Matlab r2012b?

I am using Matlab r2012b on win7 32-bit with 4GB RAM.
However, the memory limit on the Matlab process is pretty low. From the memory command, I get the following output:
Maximum possible array: 385 MB (4.038e+08 bytes) *
Memory available for all arrays: 1281 MB (1.343e+09 bytes) **
Memory used by MATLAB: 421 MB (4.413e+08 bytes)
Physical Memory (RAM): 3496 MB (3.666e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I need to increase the limit to as much as possible.
System: Windows 7 32 bit
RAM: 4 GB
Matlab: r2012b
For general guidance with memory management in MATLAB, see this MathWorks article. Some specific suggestions follow.
Set the /3GB switch in the boot.ini to increase the memory available to MATLAB. Or set it with a properties dialog if you are averse to text editors. This is mentioned in this section of the above MathWorks page.
Also use pack to increase the "Maximum possible array" value by compacting memory. 32-bit MATLAB needs a block of contiguous free virtual address space for each array, which is where this first value comes from. The pack command saves all the variables, clears the workspace, and reloads them so that they sit contiguously in memory.
As for overall memory, try disabling the virtual machine, closing other programs, and stopping unnecessary Windows services. There is no easy answer for this part.

Linux memory overcommit details

I am developing software for embedded Linux and I am suffering system hangs because the OOM killer appears from time to time. Before going further, I would like to clear up some confusion about how the Linux kernel allocates dynamic memory, assuming /proc/sys/vm/overcommit_memory is 0, /proc/sys/vm/min_free_kbytes is 712, and there is no swap.
Suppose the embedded Linux system currently has 5 MB of physical memory available (5 MB free, and no usable cached or buffered memory). If I write this piece of code:
.....
#define MEGABYTE (1024*1024)
.....
.....
void *ptr = NULL;
ptr = malloc(6 * MEGABYTE);   // reserving 6 MB
if (!ptr)
    exit(1);
memset(ptr, 1, MEGABYTE);     // but touching only 1 MB of it
.....
I would like to know whether, when the memset call executes, the kernel will try to allocate ~6 MB or ~1 MB (or a multiple of min_free_kbytes) of physical memory.
Right now there is about 9 MB free on my embedded device, which has 32 MB of RAM. I check it by doing
# echo 3 > /proc/sys/vm/drop_caches
# free
total used free shared buffers
Mem: 23732 14184 9548 0 220
Swap: 0 0 0
Total: 23732 14184 9548
Setting aside the last piece of C code, I would like to know whether it is possible for the OOM killer to appear when, for instance, free memory is above 6 MB.
I want to know whether the system is really out of memory when the OOM killer appears, so I think I have two options:
Look at the VmRSS entries in /proc/pid/status for the suspicious processes.
Set /proc/sys/vm/overcommit_memory = 2 and /proc/sys/vm/overcommit_ratio = 75 and see if any process requires more physical memory than is available.
I think you can read this document. It provides three small C programs that you can use to understand what happens with the different possible values of /proc/sys/vm/overcommit_memory.
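For reference, here is a minimal sketch in the same spirit (my own illustration, not one of that document's three programs): it allocates 1 MB at a time and immediately touches it, so running it with different /proc/sys/vm/overcommit_memory settings shows whether malloc fails gracefully or the OOM killer steps in.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGABYTE (1024 * 1024)

int main(void)
{
    /* Allocate and touch 1 MB blocks until malloc refuses or the OOM
       killer terminates the process; compare runs with
       vm.overcommit_memory set to 0, 1 and 2. */
    for (size_t mb = 1; ; mb++) {
        char *p = malloc(MEGABYTE);
        if (!p) {
            printf("malloc failed after %zu MB\n", mb - 1);
            return 1;
        }
        memset(p, 1, MEGABYTE);   /* commit the pages, not just address space */
        printf("touched %zu MB\n", mb);
    }
}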

Resources