How to Resolve this Out of Memory Issue for a Small Variable in Matlab? - windows

I am running a 32-bit version of Matlab R2013a on my computer (4 GB RAM, 32-bit Windows 7).
I have a dataset (~60 MB) that I want to read using
ds = dataset('File', myFile, 'Delimiter', ',');
Each time, I get an Out of Memory error. In theory, I should be able to use 2 GB of RAM, so there should be no problem reading such a small file.
Here is what I got when I typed memory:
Maximum possible array: 36 MB (3.775e+07 bytes) *
Memory available for all arrays: 421 MB (4.414e+08 bytes) **
Memory used by MATLAB: 474 MB (4.969e+08 bytes)
Physical Memory (RAM): 3317 MB (3.478e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I followed every instruction I found (this is not a new issue), but my case seems rather odd, because I cannot even run a simple program now.
System: Windows 7 32 bit
Matlab: R2013a
RAM: 4 GB

Clearly your issue is right here.
Maximum possible array: 36 MB (3.775e+07 bytes) *
Either you are using a lot of memory elsewhere on your system, or you have very little swap space (or both).
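If freeing system memory and swap still leaves the contiguous limit too low, one workaround is to parse the file in smaller pieces instead of letting dataset pull it in all at once. The sketch below is only an illustration under assumptions that are not in your question: a purely numeric, three-column comma-separated file with no header, so the format string and chunk size would need to be adapted to your data.
fid = fopen(myFile, 'r');          % myFile as in your call to dataset
chunkSize = 10000;                 % rows per chunk (tune as needed)
fmt = '%f%f%f';                    % hypothetical format: three numeric columns
chunks = {};
while ~feof(fid)
    C = textscan(fid, fmt, chunkSize, 'Delimiter', ',');
    if isempty(C{1}), break; end   % nothing more could be parsed
    chunks{end+1} = [C{:}];        %#ok<AGROW> collect this chunk as a matrix
end
fclose(fid);
data = vertcat(chunks{:});         % note: this final step still needs one
                                   % contiguous block for the full matrix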

Related

RuntimeError: CUDA out of memory. Tried to allocate... but memory is empty

I'm trying to run the train file from this Unet with their default hyperparameters, batch size = 1.
I have a GTX 970 with 4 GB and made Windows use the integrated graphics.
When I run nvidia-smi, it says that the GPU memory is almost entirely free (52 MiB / 4096 MiB), reports "No running processes found", and PyTorch uses the GPU, not the integrated graphics.
I do not understand what is using the memory:
RuntimeError: CUDA out of memory. Tried to allocate 150.00 MiB (GPU 0; 4.00 GiB total capacity; 2.77 GiB already allocated; 72.46 MiB free; 2.82 GiB reserved in total by PyTorch).
GPU memory allocation is not done all at once. As the program loads the data and the model, GPU memory usage gradually increases until training actually starts. In your case, the program has already allocated 2.77 GiB and tries to get more memory before training starts, but there is not enough space left. 4 GB of GPU memory is usually too small for computer-vision deep learning algorithms.
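Reading the figures in the error message makes this concrete (just the numbers quoted above, nothing else assumed):
\[
2.82\ \text{GiB reserved} - 2.77\ \text{GiB allocated} \approx 51\ \text{MiB of slack in PyTorch's pool},
\qquad
150\ \text{MiB requested} > 72.46\ \text{MiB free on the device.}
\]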

Virtual Memory - Calculating the Virtual Address Space

I am studying for an exam tomorrow and I came across this question:
You are given a memory system with 2MB of virtual memory, 8KB page size,
512 MB of physical memory, TLB contains 16 entries, 2-way set associative.
How many bits are needed to represent the virtual address space?
I was thinking it would be 20 bits: since 2^10 is 1024, I simply multiply 2^10 * 2^10 and get 2^20. However, the answer ends up being 21 and I have no idea why.
The virtual address space required is 2 MB.
As you have calculated, 20 bits can accommodate 1 MB of VM space. You need 21 bits to accommodate 2 MB.
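Spelling out the missing step (nothing assumed beyond the 2 MB figure in the exam question):
\[
2\,\text{MB} = 2 \times 2^{20}\,\text{bytes} = 2^{21}\,\text{bytes}
\quad\Longrightarrow\quad
21\ \text{bits are needed to address every byte.}
\]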

Memory management by OS

I am trying to understand memory management by the OS.
What I understand so far is that in a 32-bit system, each process is allocated 4 GB of virtual address space [2 GB user + 2 GB kernel].
What confuses me is: is this 4 GB space unique for every process? If I have, say, 3 processes p1, p2, p3 running, would I need 12 GB of space on the hard disk?
Also, if I have 2 GB of RAM on a 32-bit system, how will it manage a process which needs 4 GB? [Through the paging file?]
[2 GB user + 2 GB kernel]
That split is a constraint imposed by the OS. On a 32-bit x86 system without PAE enabled, the virtual address space is 4 GiB (note that GB usually denotes 1000 MB, while GiB stands for 1024 MiB).
What confuses me is: is this 4 GB space unique for every process?
Yes, every process has its own 4 GiB virtual address space.
If I have, say, 3 processes p1, p2, p3 running, would I need 12 GB of space on the hard disk?
No. With three processes, they can occupy at most 12 GiB of storage in total. Whether that's primary or secondary storage is left to the kernel (primary is preferred, of course). So your primary memory plus some secondary storage would need to add up to at least 12 GiB to contain all three processes, but only if all of them really used their full 4 GiB range, which is very unlikely to happen.
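In that worst case, the arithmetic is simply:
\[
3 \times 4\ \text{GiB} = 12\ \text{GiB},
\qquad
\text{so with the 2 GiB of RAM you mention, at least } 12 - 2 = 10\ \text{GiB of secondary storage would be needed.}
\]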
Also, if I have 2 GB of RAM on a 32-bit system, how will it manage a process which needs 4 GB? [Through the paging file?]
Yes, in a way. You have the right idea, but the "paging file" is just an implementation detail: Windows uses one, while Linux, for example, uses a separate swap partition instead. So, to be technically correct, you would say that secondary storage (an HDD, for example) is needed to hold the remaining 2 GiB of the process.

How do I increase memory limit (contiguous as well as overall) in Matlab r2012b?

I am using Matlab r2012b on Windows 7 32-bit with 4 GB RAM.
However, the memory limit on the Matlab process is pretty low. When I run the memory command, I get the following output:
Maximum possible array: 385 MB (4.038e+08 bytes) *
Memory available for all arrays: 1281 MB (1.343e+09 bytes) **
Memory used by MATLAB: 421 MB (4.413e+08 bytes)
Physical Memory (RAM): 3496 MB (3.666e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I need to increase the limit to as much as possible.
System: Windows 7 32 bit
RAM: 4 GB
Matlab: r2012b
For general guidance with memory management in MATLAB, see this MathWorks article. Some specific suggestions follow.
Set the /3GB switch in boot.ini to increase the memory available to MATLAB, or set it through a properties dialog if you are averse to text editors. This is mentioned in this section of the MathWorks page above.
Also use pack to increase the "Maximum possible array" value by compacting memory. 32-bit MATLAB needs contiguous blocks of free address space for each array, which is where that first value comes from. The pack command saves all the variables to disk, clears the workspace, and reloads them so that they sit contiguously in memory.
As for overall memory, try disabling any virtual machines, closing other programs, and stopping unnecessary Windows services. There is no easy answer for this part.
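A rough sketch of the pack workflow (run it from the command prompt, since pack cannot be called from inside a function; largeTempVar is just a made-up stand-in for whatever you no longer need):
memory                 % note the current "Maximum possible array" value
clear largeTempVar     % hypothetical: drop variables you no longer need
pack                   % saves variables to a temp file, clears the workspace,
                       % and reloads them so free memory is less fragmented
memory                 % the contiguous limit is usually higher afterwards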

Can't use more than about 1.5 GB of my 4 GB RAM for a simple sort

I'm using a circa summer 2007 MacBook Pro (x86-64) with a 32KB L1 (I think), 4MB L2, and 4GB RAM; I'm running OS X 10.6.8.
I'm writing a standard radix sort in C++: it copies from one array to another and back again as it sorts (so the memory used is twice the size of the array). I watch it by printing a '.' every million entries moved.
If the array is at most 750 MB, these dots usually appear quite quickly; however, if the array is larger, the whole process slows to a crawl. If I radix sort in 512 MB blocks and then attempt to merge sort the blocks, the first block goes fast and then the process again slows to a crawl. That is, my process only seems to be able to use 1.5 GB of RAM for the sort. What is odd is that I have 4 GB of physical RAM.
I tried just allocating an 8 GB array and walking through it, writing each byte and printing a '.' every million bytes; it seems that everything starts to slow down around 1.5 GB and stays at that rate even past 4 GB, when I know it must be going to disk; so the OS starts writing pages to disk around 1.5 GB.
I want to use my machine to sort large arrays. How do I tell my OS to give my process at least, say, 3.5 GB of RAM? I tried using mlock(), but that just seems to slow things down even more. Ideas?
