Does Nifi use memory other than allocated Heap Memory? - apache-nifi

We have given 4 GB of memory to the heap by setting it in bootstrap.conf, and all repositories (Content, Provenance, and FlowFile) are configured to disk. But after the NiFi instance runs for a while, it starts using 10 GB of memory.
We are not able to find where the extra 6 GB of RAM is consumed. Please help.

Would the data be stored in RAM or in JVM memory? It looks like NiFi uses both the JVM heap and additional RAM in some fashion.
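For reference, the 4 GB heap limit described above is normally set through the java.arg entries in conf/bootstrap.conf; a minimal sketch (property names from a stock NiFi install, sizes from the question) looks like:
# conf/bootstrap.conf -- JVM heap settings
java.arg.2=-Xms4g
java.arg.3=-Xmx4g
Note that -Xmx only caps the Java heap; thread stacks, metaspace, and direct/native buffers are allocated on top of it, so the process size the OS reports can legitimately be larger than the configured heap.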

Related

setting up heap memory in jmeter for more than one concurrent script execution

Below is my scenario.
I have 2 test scripts: one might use 5 GB to 15 GB of heap memory, and the other might use 5 GB to 12 GB.
If I have a machine with 32 GB of memory,
while executing the first script, can I assign Xms 1 GB and Xmx 22 GB (though my script needs 15 GB), and for the second script can I assign Xms 1 GB and Xmx 12 GB,
even though the sum of the maximums goes beyond 32 GB (the total memory)?
In the second case I assign like this:
for script 1: Xms 22 GB, Xmx 22 GB
for script 2: Xms 12 GB, Xmx 12 GB
Sum of max: 34 GB.
Does it by any chance work like below?
If 12 GB is assigned for the first script, is this memory blocked for that process/script, and can I not use the unused memory for other processes?
or
If 12 GB is assigned for the first script, does it use only as much as it actually requires, so that any other process can use the rest of the memory? If it works that way, I don't have to specifically assign heap for the two scripts separately.
If you set the minimum heap memory via the Xms parameter, the JVM will reserve this memory and it will not be available to other processes.
If you're able to allocate more JVM heap than you have total physical RAM, it means your OS will resort to swapping: using the hard drive to hold memory pages, which extends your computer's memory at the cost of speed, because memory operations are fast and disk operations are very slow.
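As an illustration, the heap for each JMeter run can be set without editing the startup script by using the JVM_ARGS environment variable; the script and result file names below are placeholders, and the sizes match the requirements stated in the question:
JVM_ARGS="-Xms1g -Xmx15g" ./jmeter -n -t script1.jmx -l result1.jtl
JVM_ARGS="-Xms1g -Xmx12g" ./jmeter -n -t script2.jmx -l result2.jtl
Whatever you pass as -Xms is reserved up front, exactly as described above, so keep it modest unless you want that memory blocked for the whole run.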
For example, look at my laptop, which has 8 GB of total physical RAM:
It has 8 GB of physical memory, of which 1.2 GB is free. That means I can safely allocate 1 GB of RAM to Java.
However, when I give 8 GB to Java as:
java -Xms8G
it still works.
15 GB still works.
And when I try to allocate 20 GB, it fails because it doesn't fit into physical plus virtual memory.
You must avoid swapping, because it means JMeter will not be able to send requests fast enough even if the system under test supports it. So make sure to measure how much physical RAM is available, and make sure your test does not exceed it. If you cannot achieve that on one machine, you will have to go for distributed testing.
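A quick way to take that measurement on Linux (standard tools, nothing JMeter-specific) is:
# how much RAM is actually available before sizing -Xmx
free -h
grep MemAvailable /proc/meminfo
Size -Xmx well below the reported available memory so the OS and other processes keep some headroom.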
Also "concurrently" running 2 different scripts is not something you should be doing because it's not only about the memory, a single CPU core can execute only one command at a time, other commands are waiting in the queue and being served by context switching which is kind of expensive and slow operation.
And last but not the least, allocating the maximum HEAP is not the best idea because this way garbage collections will be less frequent but will last much longer resulting in throughput dropdowns, keep heap usage between 30% and 80% like in Optimal Heap Size article

What does VIRTUAL_MEMORY_BYTES task counter mean in Hadoop?

The following excerpt from The Definitive Guide provides high-level details, but
what exactly is virtual memory referring to in this task counter?
How do I interpret it? How is it related to PHYSICAL_MEMORY_BYTES?
Following is an example extract from one of the jobs. Physical is approximately 214 GB and virtual is approximately 611 GB.
1. What exactly is virtual memory referring to in this task counter?
Virtual memory here is used to prevent out-of-memory errors in a task when the data does not fit in RAM (physical memory). The portion of the data that did not fit in RAM is held in virtual memory (swap space on disk).
So, while setting up a Hadoop cluster, one is advised to set vm.swappiness = 1 to achieve better performance. On Linux systems, vm.swappiness is set to 60 by default.
The higher the value, the more aggressive the swapping of memory pages.
https://community.hortonworks.com/articles/33522/swappiness-setting-recommendation.html
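For completeness, a sketch of how that recommendation is checked and applied on a Linux host (standard sysctl usage; persisting the value assumes /etc/sysctl.conf is used for overrides):
# check the current value (60 by default on most distributions)
sysctl vm.swappiness
# apply the recommended value immediately
sudo sysctl -w vm.swappiness=1
# persist it across reboots
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf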
2. How to interpret it? How is it related to PHYSICAL_MEMORY_BYTES?
Memory pages are swapped from physical memory to virtual memory on disk when there is not enough physical memory.
That is the relation between PHYSICAL_MEMORY_BYTES and VIRTUAL_MEMORY_BYTES.

Neo4j Heap and pagecache configuration

I'm running Neo4j 3.2.1 on a Linux machine with 16 GB of RAM, yet I'm having issues with the heap memory, which shows an error every time.
In the documentation, we have:
Actual OS allocation = available RAM - (page cache + heap size)
Does it mean that if we configure it that way for my machine (for example, 16 GB for the heap and 16 GB for the page cache), then the allocation would be 0, causing the problem?
Can anyone tell me how to proceed with the best configuration to explore more of the capacity of the machine without having to face that heap error again?
In your example, you are trying to give all the RAM to the page cache, and also all the RAM to the heap. That is never possible. The available RAM has to be divided between the OS, the page cache, and the heap.
The performance documentation shows how to divide up the RAM.
As a first pass, you could try this allocation (given your 16 GB of RAM):
7 GB for the page cache
8 GB for the heap
That will leave (16GB - (7GB + 8GB)), or 1 GB for the OS.
But you should read the documentation to fine tune your allocations.
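As a sketch, that first-pass split corresponds to the following neo4j.conf entries (property names as in Neo4j 3.x; the sizes are the suggested starting values and should be tuned against your workload):
# neo4j.conf -- first pass for a 16 GB machine
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=7g
Keeping the initial and max heap equal avoids resize pauses, and the remaining ~1 GB is left to the OS.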

EMR - Mapreduce memory errors

I'm getting this memory error on my reducers:
Container is running beyond physical memory limits. Current usage: 6.1 GB of 6 GB physical memory used; 10.8 GB of 30 GB virtual memory used.
So yes, there's a physical memory issue that can be resolved by increasing mapreduce.reduce.memory.mb, BUT I don't understand why it happens.
The more data gets into the pipe, the more likely this memory issue is to occur. The thing is that most of my reducers (about 90%) pass, and memory should have been freed as reducers finished, because the data should have been written to disk already.
What am I missing here?
In YARN, by default, containers are pre-allocated with memory. In this case all reducer containers will have 6 GB of memory regardless of the free memory across the cluster. The JVMs of these containers are not re-used or shared across containers (at least not in Hadoop 2); that means if the memory of one reducer goes beyond its 6 GB limit, it will not grab resources from other free containers (if that's your concern).
Now, the fact that only a few reducers go above their memory limit (given that 90% passed) hints at a possible skew in the data, which means those reducers might be processing more input groups or more keys than the others.
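If you do decide to raise the limit while you investigate the skew, the usual pattern is to bump the container size and the reducer heap together, keeping the heap at roughly 75-80% of the container; the 8 GB figure below is only an illustrative step up from the 6 GB in the error:
mapreduce.reduce.memory.mb=8192
mapreduce.reduce.java.opts=-Xmx6553m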

Yarn container out of memory when using large memory mapped file

I am using Hadoop 2.4. The reducer uses several large memory-mapped files (about 8 GB total). The reducer itself uses very little memory. To my knowledge, the memory-mapped files (FileChannel.map, read-only) also use little memory (managed by the OS instead of the JVM).
I got this error:
Container [pid=26783,containerID=container_1389136889967_0009_01_000002]
is running beyond physical memory limits.
Current usage: 4.2 GB of 4 GB physical memory used;
5.2 GB of 8.4 GB virtual memory used. Killing container
Here were my settings:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=4096
So I adjusted the parameters to this, and it works:
mapreduce.reduce.java.opts=-Xmx10240m
mapreduce.reduce.memory.mb=12288
I further adjusted the parameters and got it to work like this:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=10240
My question is: why does the YARN container need about 8 GB more memory than the JVM size? The culprit seems to be the large Java memory-mapped files I use (each about 1.5 GB, summing to about 8 GB). Aren't memory-mapped files managed by the OS, and aren't they supposed to be shareable by multiple processes (e.g. reducers)?
I use an AWS m2.4xlarge instance (67 GB of memory) and it has about 8 GB unused, so the OS should have sufficient memory. With the current settings, there are only about 5 reducers available per instance, and each reducer has an extra 8 GB of memory. This just looks very stupid.
From the logs, it seems that you have enabled the yarn.nodemanager.pmem-check-enabled and yarn.nodemanager.vmem-check-enabled properties in yarn-site.xml. If these checks are enabled, the NodeManager may kill containers when it detects that they have exceeded their resource limits. In your case, physical memory exceeded the configured value (4 GB), so the NodeManager killed the task (running within the container).
In normal cases, the heap memory (defined using the -Xmx property in the mapreduce.reduce.java.opts and mapreduce.map.java.opts configurations) is set to 75-80% of the total memory (defined using the mapreduce.reduce.memory.mb and mapreduce.map.memory.mb configurations). However, in your case, due to the Java memory-mapped files, the non-heap memory requirement is higher than the heap memory, and that's why you had to keep quite a large gap between the total and heap memory.
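In other words, the container has to budget for the heap plus the mapped files plus some JVM overhead; for the working configuration above, the rough arithmetic (values taken from the question) is:
# per-reducer container budget
#   JVM heap (-Xmx)                    2 GB
#   memory-mapped files, once touched  ~8 GB (resident pages count against the container's RSS)
#   JVM/native overhead                a few hundred MB
# => container needs roughly 10 GB, hence:
mapreduce.reduce.java.opts=-Xmx2048m
mapreduce.reduce.memory.mb=10240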
Please check the link below; there may be a need to tune the property mapreduce.reduce.shuffle.input.buffer.percent:
Out of memory error in Mapreduce shuffle phase
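For reference, that property is set the same way as the other MapReduce settings above; its default is 0.70 (the fraction of the reducer heap used to buffer map outputs during the shuffle), and the lower value below is only an illustration of the direction of the tuning:
mapreduce.reduce.shuffle.input.buffer.percent=0.20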
