In my Flink application, the memory configured by taskmanager.memory.managed.size shows as completely filled, i.e. 100% used (512 MB of 512 MB)

In my Flink application I see that Managed Memory is being utilized 100%, i.e. 512 MB of 512 MB. Even after increasing the size to 1 GB, it instantly shows consumption as 100%, i.e. 0.96 GB of 0.96 GB. I don't understand why this is happening; can someone please help me here?
I have tried this in both the UAT and Preview environments.

Related

Committed Bytes and Commit Limit - Memory Statistics

I'm trying to understand the actual difference between committed bytes and commit limit.
From the definitions below,
Commit Limit is the amount of virtual memory that can be committed
without having to extend the paging file(s). It is measured in bytes.
Committed memory is the physical memory which has space reserved on the disk paging files.
Committed Bytes is the amount of committed virtual memory, in bytes.
From my computer's configuration, I see that my Physical Memory is 1991 MB, Virtual Memory (total paging file size for all drives) is 1991 MB, and
the Minimum Allowed is 16 MB, Recommended is 2986 MB, and Currently Allocated is 1991 MB.
But when I open perfmon and monitor Committed Bytes and Commit Limit, the numbers differ a lot. So what exactly are Committed Bytes and Commit Limit, and how are they formed?
Right now in my perfmon, Committed Bytes is running at 3041 MB (sometimes it goes to 4000 MB as well) and Commit Limit is 4177 MB. So how are they calculated? I've read a lot of documents, but I still don't understand how this works.
Please help. Thanks.
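For what it's worth, the same two numbers perfmon shows can be read with the Win32 GlobalMemoryStatusEx call. Here is a minimal sketch (Python with ctypes, assuming Windows; the script itself is not from the question, only the counters are): the commit limit is reported as ullTotalPageFile, i.e. roughly physical RAM plus the current size of the paging files.

import ctypes
from ctypes import wintypes

class MEMORYSTATUSEX(ctypes.Structure):
    _fields_ = [
        ("dwLength", wintypes.DWORD),
        ("dwMemoryLoad", wintypes.DWORD),
        ("ullTotalPhys", ctypes.c_ulonglong),
        ("ullAvailPhys", ctypes.c_ulonglong),
        ("ullTotalPageFile", ctypes.c_ulonglong),   # commit limit
        ("ullAvailPageFile", ctypes.c_ulonglong),   # commit limit minus current commit charge (roughly)
        ("ullTotalVirtual", ctypes.c_ulonglong),
        ("ullAvailVirtual", ctypes.c_ulonglong),
        ("ullAvailExtendedVirtual", ctypes.c_ulonglong),
    ]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))

MB = 1024 * 1024
commit_limit = status.ullTotalPageFile // MB
committed = (status.ullTotalPageFile - status.ullAvailPageFile) // MB
print("Commit Limit:    %d MB" % commit_limit)   # about RAM + paging file size
print("Committed Bytes: %d MB" % committed)

When Committed Bytes climbs past physical RAM, the excess is backed by the paging file, and Windows will grow a system-managed paging file when the commit charge approaches the limit; that is presumably why the Commit Limit you observe (4177 MB) ends up larger than RAM plus the currently allocated paging file (1991 MB + 1991 MB).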

How can 8086 processors access hard drives larger than 1 MB?

How can 8086 processors (or later processors in real mode) access hard drives larger than 1 MB, when they can only address 1 MB of RAM (without expanded memory)?
Access is not linear (byte by byte) but by sector; a sector might be, for example, 512 bytes. The computer reads whole sectors into memory as needed, so the drive can be much larger than the processor's address space.
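A small sketch of the arithmetic (Python; the geometry numbers are just an example, not anything from the question). A BIOS-style cylinder/head/sector request asks the controller for one 512-byte sector at a time, and only that sector's buffer has to fit inside the 1 MB address space:

SECTOR_SIZE = 512                          # bytes per sector (typical)
CYLINDERS, HEADS, SECTORS = 1024, 16, 63   # example CHS geometry

capacity = CYLINDERS * HEADS * SECTORS * SECTOR_SIZE
print(capacity // (1024 * 1024), "MB addressable")   # ~504 MB, far beyond 1 MB of RAM

offset = 5000000                   # some byte on the disk we want
sector = offset // SECTOR_SIZE     # which sector to ask the controller for
within = offset % SECTOR_SIZE      # where the byte sits inside the 512-byte buffer
print("read sector", sector, "and look at byte", within)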

How to Resolve this Out of Memory Issue for a Small Variable in Matlab?

I am running the 32-bit version of MATLAB R2013a on my computer (4 GB RAM, 32-bit Windows 7).
I have a dataset (~60 MB) and I want to read it using
ds = dataset('File', myFile, 'Delimiter', ',');
Each time, I get an Out of Memory error. Theoretically, I should be able to use 2 GB of RAM, so there should be no problem reading such a small file.
Here is what I got when I typed memory:
Maximum possible array: 36 MB (3.775e+07 bytes) *
Memory available for all arrays: 421 MB (4.414e+08 bytes) **
Memory used by MATLAB: 474 MB (4.969e+08 bytes)
Physical Memory (RAM): 3317 MB (3.478e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I followed every instruction I found (this is not a new issue), but my case seems rather odd, because I cannot even run a simple program now.
System: Windows 7 32 bit
Matlab: R2013a
RAM: 4 GB
Clearly your issue is right here:
Maximum possible array: 36 MB (3.775e+07 bytes) *
The largest contiguous block MATLAB can allocate is only 36 MB, smaller than the ~60 MB file you are trying to load, so the arrays built while parsing it cannot fit. You are either using a lot of memory in your system and/or you have very little swap space.

How do I increase memory limit (contiguous as well as overall) in Matlab r2012b?

I am using MATLAB R2012b on Windows 7 32-bit with 4 GB RAM.
However, the memory limit on the MATLAB process is pretty low. Running the memory command, I get the following output:
Maximum possible array: 385 MB (4.038e+08 bytes) *
Memory available for all arrays: 1281 MB (1.343e+09 bytes) **
Memory used by MATLAB: 421 MB (4.413e+08 bytes)
Physical Memory (RAM): 3496 MB (3.666e+09 bytes)
* Limited by contiguous virtual address space available.
** Limited by virtual address space available.
I need to increase these limits as much as possible.
System: Windows 7 32 bit
RAM: 4 GB
Matlab: r2012b
For general guidance with memory management in MATLAB, see this MathWorks article. Some specific suggestions follow.
Set the /3GB switch in the boot configuration to increase the memory available to MATLAB, or set it with a properties dialog if you are averse to text editors; note that on Windows 7 boot.ini has been replaced by the BCD store, so the switch is applied with bcdedit /set IncreaseUserVa 3072 from an elevated prompt. This is mentioned in this section of the above MathWorks page.
Also use pack to increase the "Maximum possible array" value by compacting memory. 32-bit MATLAB needs blocks of contiguous free address space, which is where that first value comes from. The pack command saves all the variables to disk, clears the workspace, and reloads them so that they are contiguous in memory.
As for overall memory, try shutting down any virtual machines, closing programs, and stopping unnecessary Windows services. There is no easy answer for this part.

How to keep cache a second class citizen

OK in a comment to this question:
How to clean caches used by the Linux kernel
ypnos claims that:
"Applications will always be first citizens for memory and don't have to fight with cache for it."
Well, I think my cache is rebellious and does not want to accept its social class. I ran the experiment here:
http://www.linuxatemyram.com/play.html
step 1:
$ free -m
             total       used       free     shared    buffers     cached
Mem:          3015       2901        113          0         15       2282
-/+ buffers/cache:        603       2411
Swap:         2406       2406          0
So 2282 MB is used by cache and 113 MB is free.
Now:
$ ./munch
Allocated 1 MB
Allocated 2 MB
Allocated 3 MB
Allocated 4 MB
.
.
.
Allocated 265 MB
Allocated 266 MB
Allocated 267 MB
Allocated 268 MB
Allocated 269 MB
Killed
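For reference, the allocation loop behind that output is roughly the following, a minimal Python sketch of the munch idea (not the original program from that page): grab 1 MB at a time and touch it so the kernel has to hand over real pages, until allocation fails or the process is killed.

blocks = []
mb = 0
while True:
    blocks.append(bytearray(1024 * 1024))   # allocate and zero-fill (touch) 1 MB
    mb += 1
    print("Allocated %d MB" % mb)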
OK, Linux generously gave me another 156 MB, and that's it! So how can I tell Linux that my programs are more important than that 2282 MB of cache?
Extra info: my /home is encrypted.
More people with the same problem (these reports make the encryption hypothesis less plausible):
https://serverfault.com/questions/171164/can-you-set-a-minimum-linux-disk-buffer-size
and
https://askubuntu.com/questions/41778/computer-freezing-on-almost-full-ram-possibly-disk-cache-problem
The thing to know about caching in the kernel is that it's designed to be as efficient as possible. This often means things put into the cache are left there when nothing else is asking for the memory.
This is the kernel preparing to get lucky in case the thing in the cache is asked for again. If no one else needs the memory, there's little benefit in freeing it up.
I am not sure about the Linux-specific details, but a good OS will keep track of how many times a memory page was accessed and how long ago. If it hasn't been accessed much lately, it can be swapped out and the RAM used for caching.
Also, allocated but unused memory can be sent to swap as well, because programs sometimes allocate more than they actually need, so many memory pages would otherwise just sit there filling your RAM.
I found that if I turn off swap with
# swapoff -a
the problem goes away. With swap enabled, when I ask for more memory, Linux tries to move the cache to swap; the swap then fills up, and Linux halts the whole operation instead of dropping the cache, which results in "out of memory". Without swap, Linux knows it has no choice but to drop the cache in the first place.
I think it's a bug in the Linux kernel.
One of the links added to the question suggests that
sysctl -w vm.min_free_kbytes=65536
helps. For me, even with that 64 MB margin I can still easily get into trouble. I'm working with a 128 MB margin, and when the greedy cache reaches it the machine becomes very slow, but unlike before it doesn't freeze. I'll check with a 256 MB margin and see whether there is an improvement.

Resources