EMR - MapReduce memory errors - Hadoop

I'm getting this memory error on my reducers:
Container is running beyond physical memory limits. Current usage: 6.1 GB of 6 GB physical memory used; 10.8 GB of 30 GB virtual memory used.
So yeah, there's a physical memory issue that can be resolved by increasing mapreduce.reduce.memory.mb, BUT I don't understand why it happens.
The more data goes through the pipeline, the more likely this memory issue is to occur. The thing is that most of my reducers (about 90%) pass, and I expected memory to be freed as reducers finish, because their data should already have been written to disk.
What am I missing here?

In YARN, containers are pre-allocated with memory by default. In this case every reducer container gets 6 GB of memory regardless of the free memory across the cluster. The JVMs of these containers are not re-used or shared across containers (at least not in Hadoop 2), which means that if one reducer goes beyond its 6 GB limit, it will not grab resources from other, idle containers (if that was your concern).
Now, the fact that only a few reducers consistently go above their memory limit (given that 90% pass) hints at a possible skew in the data: those reducers are probably processing more input groups or more keys than the others.
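If the skew cannot be fixed on the data side, the usual workaround is to give the reducers more headroom. As a hedged sketch (assuming the job driver goes through ToolRunner/GenericOptionsParser so that -D options are honored; the jar, class, and path names here are made up), the container size and the heap can be raised together at submit time, keeping the heap at roughly 80% of the container:
hadoop jar my-job.jar com.example.MyDriver \
  -D mapreduce.reduce.memory.mb=8192 \
  -D mapreduce.reduce.java.opts=-Xmx6554m \
  input/ output/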

Related

Hive: changing mapper and reducer memory leads to huge difference in resource usage

I am quite confused with the 2 parameters:
mapreduce.map.memory.mb
mapreduce.reduce.memory.mb
At the beginning I set them both to 1 GB, then ran many Hive tasks in parallel, and I could see all nodes' CPU usage go above 95% while memory usage was about 50%.
However, when I run some tasks, this may occur:
Container is running beyond physical memory limits. Current usage: 1.0 GB of 1 GB physical memory used; 2.8 GB of 2.1 GB virtual memory used. Killing container.
Then I changed the two memory parameters to 2 GB each. The container error disappears now, but when I rerun the tasks in parallel everything becomes very slow and the CPU usage on all nodes is only 1%-5%.
Why is there such a huge difference in usage? How can I increase the CPU and memory usage?
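(One thing worth checking, as a rough illustration with assumed numbers since the node sizes are not given above: YARN hands out containers by memory, so if a node offers, say, 16 GB to YARN, then 1 GB containers allow about 16 concurrent tasks per node while 2 GB containers allow only about 8, which by itself cuts the achievable parallelism and CPU utilization.)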

Cassandra java process using more memory than its allocated max heap size (Xmx)

We have a Cassandra cluster running Apache Cassandra 3.11.4 on a set of 18 Unix hosts. Each host has 96 GB of RAM, and we have configured the heap size to -Xms64G -Xmx64G, but the top command (top -M) on the hosts shows the actual memory utilization is ~85 GB on average, i.e. much higher than the allocated heap (64 GB).
The memory usage trend is as follows: during startup of the Cassandra daemon, top -M already shows the process occupying ~75 GB, i.e. about 11 GB more than the allocated heap, and this utilization grows over time, reaching a maximum of ~85 GB within 3-4 hours and staying there. Meanwhile the heap utilization (~40-50%) is normal, GC activity is as usual, and minor GCs kick in as expected.
We have confirmed that the total off-heap memory used by all keyspaces is below 2 GB on each host.
We are unable to trace what else is consuming the RAM in addition to the allocated heap.
Besides the heap, Cassandra also uses off-heap memory, for example for keeping compression metadata, bloom filters, and a few other things. From the documentation (1, 2):
Compression metadata is stored off-heap and scales with data on disk. This often requires 1-3GB of off-heap RAM per terabyte of data on disk, though the exact usage varies with chunk_length_in_kb and compression ratios.
Bloom filters are stored in RAM, but are stored offheap, so operators should not consider bloom filters when selecting the maximum heap size.
You can monitor heap and off-heap memory usage via JMX, for example. (I've seen setups where the bloom filters alone occupied ~40 GB of RAM, but that was heavily dependent on the number of unique partition keys.)
Very big heaps are usually not recommended because they can cause long GC pauses, etc. It of course depends on the workload, but you can try 31 GB or lower (or just use the default settings). Plus, you need to leave memory for the Linux file buffers so the OS can cache frequently used files; that is the reason why, by default, Cassandra allocates only 1/4 of system memory for the heap.
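For a quick look without a full JMX setup (a hedged sketch; field names differ slightly between Cassandra versions, and the keyspace name below is made up), nodetool already breaks the off-heap usage down:
nodetool info                      # reports the node's Heap Memory (MB) and Off Heap Memory (MB) totals
nodetool tablestats my_keyspace    # per-table bloom filter, index summary, compression metadata and memtable off-heap usage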

setting up heap memory in jmeter for more than one concurrent script execution

Below is my scenario.
I have 2 test scripts: one might use 5 GB to 15 GB of heap memory, and the other might use 5 GB to 12 GB.
If I have a machine with 32 GB of memory,
while executing the first script can I assign Xms 1 GB and Xmx 22 GB (though my script needs 15 GB), and for the second script Xms 1 GB and Xmx 12 GB,
even though the sum of the maximums goes beyond 32 GB (the total memory)?
In the second case I assign like this:
for script 1: Xms 22 GB, Xmx 22 GB
for script 2: Xms 12 GB, Xmx 12 GB
Sum of the maximums: 34 GB.
Does it by any chance work like below?
If 12 GB is assigned for the first script, is this memory blocked for that process/script, and can I not use the unused memory for other processes?
or
If 12 GB is assigned for the first script, does it use only as much as it requires, so any other process can use the rest of the memory? If it works this way, I don't have to specifically assign heap for the two scripts separately.
If you set the minimum heap memory via the Xms parameter, the JVM will reserve this memory and it will not be available for other processes.
If you allocate more JVM heap than you have total physical RAM, it means your OS will go for swapping: using the hard drive to dump memory pages, which extends your computer's memory at the cost of speed, because memory operations are fast and disk operations are very slow.
For example, look at my laptop, which has 8 GB of total physical RAM, of which 1.2 GB is free. It means that I can safely allocate about 1 GB of RAM to Java.
However, when I give 8 GB to Java as:
java -Xms8G
it still works.
15 GB - still works.
And when I try to allocate 20 GB it fails, because it doesn't fit into physical + virtual memory.
You must avoid swapping, because it means JMeter will not be able to send requests fast enough even if the system under test supports it, so measure how much physical RAM is available and make sure your test does not exceed it. If you cannot achieve this on one machine, you will have to go for distributed testing.
Also, "concurrently" running 2 different scripts is not something you should be doing, because it's not only about the memory: a single CPU core can execute only one instruction at a time, while other tasks wait in the queue and are served by context switching, which is a fairly expensive and slow operation.
And last but not least, allocating the maximum heap is not the best idea, because garbage collections will then be less frequent but will last much longer, resulting in throughput drops; keep heap usage between 30% and 80%, as in the Optimal Heap Size article.
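As a minimal sketch of sizing the heap per run (assuming a reasonably recent JMeter whose startup script honors the HEAP environment variable; the test plan and result file names are made up), you can override the heap for a non-GUI run without editing the script:
HEAP="-Xms1g -Xmx12g" ./bin/jmeter -n -t script2.jmx -l results.jtl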

What does VIRTUAL_MEMORY_BYTES task counter mean in Hadoop?

The following excerpt from The Definitive Guide provides high-level details as shown below, but
what exactly is virtual memory referring to in this task counter?
How do I interpret it? How is it related to PHYSICAL_MEMORY_BYTES?
Following is an example extract from one of the jobs: physical memory is approximately 214 GB and virtual memory is approximately 611 GB.
1. What exactly is virtual memory referring to in this task counter?
Virtual memory here is used to prevent out-of-memory errors for a task: if the data does not fit in RAM (physical memory), the portion that does not fit is backed by virtual memory on disk.
So, while setting up a Hadoop cluster, one is advised to set vm.swappiness=1 to achieve better performance. On Linux systems, vm.swappiness is set to 60 by default.
The higher the value, the more aggressively memory pages are swapped.
https://community.hortonworks.com/articles/33522/swappiness-setting-recommendation.html
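For reference (a hedged sketch of the usual commands; how the setting is persisted can differ between distributions), the current value can be checked and lowered like this:
cat /proc/sys/vm/swappiness                            # typically prints 60 on a stock install
sudo sysctl -w vm.swappiness=1                         # apply immediately
echo "vm.swappiness=1" | sudo tee -a /etc/sysctl.conf  # persist across reboots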
2. How do I interpret it? How is it related to PHYSICAL_MEMORY_BYTES?
Memory pages are swapped from physical memory to virtual memory on disk when there is not enough physical memory.
That is the relationship between PHYSICAL_MEMORY_BYTES and VIRTUAL_MEMORY_BYTES.
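If you want to read these counters for a finished job from the command line (a hedged sketch; the job id below is made up, and the counters normally live in the org.apache.hadoop.mapreduce.TaskCounter group):
mapred job -counter job_1234567890123_0001 org.apache.hadoop.mapreduce.TaskCounter PHYSICAL_MEMORY_BYTES
mapred job -counter job_1234567890123_0001 org.apache.hadoop.mapreduce.TaskCounter VIRTUAL_MEMORY_BYTES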

Yarn container out of memory when using large memory mapped file

I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total). The reducer itself uses very little memory. To my knowledge, a memory-mapped file (FileChannel.map, read-only) also uses little heap memory, since it is managed by the OS instead of the JVM.
I got this error:
Container [pid=26783,containerID=container_1389136889967_0009_01_000002]
is running beyond physical memory limits.
Current usage: 4.2 GB of 4 GB physical memory used;
5.2 GB of 8.4 GB virtual memory used. Killing container
Here were my settings:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=4096
So I adjusted the parameters to this and it works:
mapreduce.reduce.java.opts=-Xmx10240m
mapreduce.reduce.memory.mb=12288
I further adjusted the parameters and got it to work like this:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=10240
My question is: why does the YARN container need about 8 GB more memory than the JVM heap size? The culprit seems to be the large Java memory-mapped files I use (each about 1.5 GB, summing to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be shareable by multiple processes (e.g. reducers)?
I use AWS m2.4xlarge instances (67 GB of memory), which have about 8 GB unused, so the OS should have sufficient memory. With the current settings, only about 5 reducers fit on each instance, and each reducer gets an extra 8 GB of memory. This just looks very wasteful.
From the logs, it seems that you have the yarn.nodemanager.pmem-check-enabled and yarn.nodemanager.vmem-check-enabled properties enabled in yarn-site.xml. If these checks are enabled, the NodeManager may kill containers that it detects exceeding their resource limits. In your case, physical memory exceeded the configured value (4 GB), so the NodeManager killed the task (running within the container).
In normal cases, the heap (defined using the -Xmx setting in the mapreduce.reduce.java.opts and mapreduce.map.java.opts configurations) is set to 75-80% of the total container memory (defined using mapreduce.reduce.memory.mb and mapreduce.map.memory.mb). However, in your case the Java memory-mapped files push the non-heap memory requirements above the heap requirements, and that's why you had to keep quite a large gap between the total and heap memory.
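As a quick sanity check on those numbers (plain arithmetic on the settings quoted above): 80% of a 4096 MB container would be roughly -Xmx3277m, whereas the working configuration keeps -Xmx2048m inside a 10240 MB container, leaving around 8 GB of the container budget for the memory-mapped files.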
Please check the link below; you may also need to tune the mapreduce.reduce.shuffle.input.buffer.percent property:
Out of memory error in Mapreduce shuffle phase
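For reference on the checks mentioned above (a hedged sketch; these values are the usual Hadoop 2.x defaults and are normally set in yarn-site.xml):
yarn.nodemanager.pmem-check-enabled=true   (default; kill containers that exceed their physical memory limit)
yarn.nodemanager.vmem-check-enabled=true   (default; kill containers that exceed their virtual memory limit)
yarn.nodemanager.vmem-pmem-ratio=2.1       (default; virtual limit = 2.1 x requested physical memory)
That ratio is also why the error above reports an 8.4 GB virtual limit for a 4 GB container (4 GB x 2.1 = 8.4 GB).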

Resources