Suggestions needed for increasing the utilization of YARN containers on our discovery cluster - Hadoop

Current Setup
We have a 10-node discovery cluster.
Each node has 24 cores and 264 GB of RAM. Keeping some memory and CPU aside for background processes, we plan to use 240 GB of memory per node.
Now, when it comes to container setup, since each container may need 1 core, the most we can have is 24 containers per node, each with 10 GB of memory.
Clusters usually run containers with 1-2 GB of memory, but we are restricted by the cores available to us, or maybe I am missing something.
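For reference, the per-node numbers above would roughly correspond to the following yarn-site.xml settings (the values are assumptions derived from this post, not a dump of our actual config):

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>245760</value> <!-- 240 GB usable per node -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>24</value> <!-- one vcore per physical core -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>10240</value> <!-- cap container size at 10 GB -->
</property>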
Problem statement
As our cluster is used extensively by data scientists and analysts, having just 24 containers per node does not suffice, and this leads to heavy resource contention.
Is there any way we can increase the number of containers?
Options we are considering
If we ask the team to run their many Tez queries not separately but batched together in a file, then at most we keep one container in use at a time (a sketch of this is below).
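If the idea is to batch Hive-on-Tez queries so that a single Tez session (and its containers) is reused instead of each query spinning up its own, that might look like the following (the script name is just a placeholder):

hive --hiveconf hive.execution.engine=tez -f nightly_queries.hql    # all queries share one CLI session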
Requests
Is there any other way to manage our discovery cluster?
Is there any possibility of reducing the container size?
Can a vcore (as it's a logical concept) be shared by multiple containers?

Vcores are just a logical unit and not in any way related to a physical CPU core unless you are using YARN with cgroups and have yarn.nodemanager.resource.percentage-physical-cpu-limit enabled. Tasks are rarely CPU-bound; more typically they are network I/O bound. So if you look at your cluster's overall CPU and memory utilization, you should be able to resize your containers based on the wasted (spare) capacity.
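For example, if monitoring shows the CPUs mostly idle, one way to get more containers per node (a sketch with illustrative values, not a recommendation) is to allow smaller containers and, if you do schedule on CPU, to advertise more vcores than physical cores:

<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value> <!-- allow containers as small as 2 GB -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>48</value> <!-- oversubscribe vcores; reasonable when tasks are I/O-bound -->
</property>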
You can measure utilization with a host of tools; sar, Ganglia, and Grafana are the obvious ones, but you can also look at Brendan Gregg's Linux performance tools for more ideas.
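As a quick example, sar from the sysstat package can sample utilization at an interval (here every 5 seconds, 12 samples):

sar -u 5 12    # CPU utilization (%user, %system, %iowait, %idle)
sar -r 5 12    # memory utilization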

Related

High CPU usage on Elasticsearch nodes

We have been using a 3-node Elasticsearch (7.6) cluster running in Docker containers. I have been experiencing very high CPU usage on 2 nodes (97%) and moderate CPU load on the other node (55%). The hardware used is m5.xlarge servers.
There are 5 indices, each with 6 shards and 1 replica. Update operations take around 10 seconds even for updating a single field; the same goes for deletes. Querying, however, is quite fast. Is this because of the high CPU load?
2 out of the 5 indices continuously undergo update and write operations, as they consume from a Kafka stream. The sizes of the indices are 15 GB and 2 GB, and the rest are around 100 MB.
You need to provide more information to find the root cause:
Are all the ES nodes running in separate Docker containers on the same host, or on different hosts?
Do you have resource limits on your ES Docker containers?
How much heap is assigned to ES, and is it 50% of the host machine's RAM?
Does the node with high CPU hold the 2 write-heavy indices you mentioned?
What is the refresh interval of the indices that receive the heavy indexing requests?
What is the segment count and size of your 15 GB index? Use https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-segments.html to get this info (example calls below).
What have you debugged so far, and is there any interesting information you want to share to help find the issue?
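As a starting point, these standard cat/settings APIs will answer several of the questions above (run against any node; the index name my-15gb-index is a placeholder):

curl 'localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,cpu'    # heap, RAM and CPU per node
curl 'localhost:9200/_cat/shards?v'                                       # which node holds which shards
curl 'localhost:9200/_cat/segments/my-15gb-index?v'                       # segment count and sizes
curl 'localhost:9200/my-15gb-index/_settings?include_defaults=true&filter_path=**.refresh_interval'    # refresh interval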

MemSQL performance issues

I have a single-node MemSQL install with one master aggregator and two leaves (all on a single box). The machine has 2 cores, 16 GB of RAM, and the MemSQL columnstore data is ~7 GB (coming from a 21 GB CSV). When running queries on the data, memory usage caps at ~2150 MB (11 GB sitting free). I've configured both leaves to have maximum_memory = 7000 in the memsql.cnf files for both nodes (memsql-optimize does similar). During query execution, the master aggregator sits at 100% CPU, with the leaves at 0-8% CPU.
This does not seem like an efficient use of system resources, but I'm not sure what I can do to configure the system or MemSQL to make more efficient use of CPU or memory. Any help would be greatly appreciated!
If during query execution your machine is at 100% CPU (on all cores), it doesn't really matter which MemSQL node it is; your workload throughput is still bottlenecked on CPU. However, for most queries you wouldn't expect most of the CPU use to be on the aggregator, so you may want to take a look at the EXPLAIN or PROFILE output of your queries (example below).
Columnstore data is cached in memory as part of the OS file cache; it isn't counted as memory reserved by MemSQL, which is why your memory usage is less than the size of the columnstore data.
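For example (the table and filter are placeholders), PROFILE runs the query and records per-operator statistics that you can then inspect:

EXPLAIN SELECT COUNT(*) FROM my_table WHERE some_col = 42;
PROFILE SELECT COUNT(*) FROM my_table WHERE some_col = 42;
SHOW PROFILE;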
It turned out my database was coming from somewhere other than the current MemSQL install (perhaps an older cluster configuration), despite there being only a single MemSQL cluster on the machine. The Databases section in the Web UI displayed no databases/tables, yet my queries succeeded with the expected answers.
Dropping the database and reloading from the CSV remedied the situation. All core threads are now used during queries.

How can I evaluate my Spark application?

Hello, I just finished creating my first Spark application. Now I have access to a cluster (12 nodes, where each node has 2 Intel(R) Xeon(R) E5-2650 2.00GHz processors and each processor has 8 cores). I want to know what criteria help me tune my application and observe its performance.
I have already visited the official Spark website; it talks about Data Serialization, but I couldn't work out what it is exactly or how to specify it.
It also talks about "memory management" and "level of parallelism", but I didn't understand how to control these.
One more thing: I know that the size of the data has an effect, but all the .csv files I have are small. How can I get files of large size (10 GB, 20 GB, 30 GB, 50 GB, 100 GB, 300 GB, 500 GB)?
Please try to explain this clearly for me, because cluster computing is new to me.
For tuning your application you need to know a few things:
1) You need to monitor your application to see whether your cluster is under-utilized and how many resources are used by the application you have created.
Monitoring can be done with various tools, e.g. Ganglia. From Ganglia you can find CPU, memory, and network usage.
2) Based on your observations of CPU and memory usage, you can get a better idea of what kind of tuning your application needs.
From the Spark point of view:
In spark-defaults.conf you can specify what kind of serialization is needed, how much driver memory and executor memory your application needs, and you can even change the garbage collection algorithm.
Below are a few examples; you can tune these parameters based on your requirements:
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.driver.memory 5g
spark.executor.memory 3g
spark.executor.extraJavaOptions -XX:MaxPermSize=2G -XX:+UseG1GC
spark.driver.extraJavaOptions -XX:MaxPermSize=6G -XX:+UseG1GC
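The same settings can also be supplied per job on the spark-submit command line instead of spark-defaults.conf; for example (the application class and jar are placeholders):

spark-submit \
  --class com.example.MyApp \
  --driver-memory 5g \
  --executor-memory 3g \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf "spark.executor.extraJavaOptions=-XX:+UseG1GC" \
  my-app.jar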
For more details refer to http://spark.apache.org/docs/latest/tuning.html
Hope this helps!

How to set the VCORES in hadoop mapreduce/yarn?

The following is my configuration:
**mapred-site.xml**
map-mb : 4096, opts: -Xmx3072m
reduce-mb : 8192, opts: -Xmx6144m
**yarn-site.xml**
resource memory-mb : 40 GB
min allocation-mb : 1 GB
The VCores total shown for my Hadoop cluster is 8, but I don't know how it is computed or where to configure it.
I hope someone can help me.
Short Answer
It most probably doesn't matter if you are just running Hadoop out of the box on your single-node cluster, or even on a small personal distributed cluster. You just need to worry about memory.
Long Answer
vCores are used on larger clusters in order to limit CPU for different users or applications. If you are using YARN for yourself, there is no real reason to limit your containers' CPU. That is why vCores are not even taken into consideration by default in Hadoop!
Try setting your available NodeManager vcores to 1. It doesn't matter! Your number of containers will still be 2 or 4 ... or whatever the value of:
yarn.nodemanager.resource.memory-mb / mapreduce.[map|reduce].memory.mb
If you really do want the number of containers to take vCores into consideration and be limited by:
yarn.nodemanager.resource.cpu-vcores / mapreduce.[map|reduce].cpu.vcores
then you need to use a different Resource Calculator. Go to your capacity-scheduler.xml config and change DefaultResourceCalculator to DominantResourceCalculator.
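The relevant capacity-scheduler.xml property would look something like this:

<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>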
In addition to using vCores for container allocation, do you want to use vCores to really limit the CPU usage of each node? Then you need to change even more configuration to use the LinuxContainerExecutor instead of the DefaultContainerExecutor, because it can manage Linux cgroups, which are used to limit CPU resources. Follow this page if you want more info on this.
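As a rough sketch, switching the executor in yarn-site.xml looks like this (cgroups need additional setup, such as the cgroup hierarchy and mount settings, that is not shown here):

<property>
  <name>yarn.nodemanager.container-executor.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor</value>
</property>
<property>
  <name>yarn.nodemanager.linux-container-executor.resources-handler.class</name>
  <value>org.apache.hadoop.yarn.server.nodemanager.util.CgroupsLCEResourcesHandler</value>
</property>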
yarn.nodemanager.resource.cpu-vcores - Number of CPU cores that can be allocated for containers.
mapreduce.map.cpu.vcores - The number of virtual CPU cores allocated for each map task of a job
mapreduce.reduce.cpu.vcores - The number of virtual CPU cores for each reduce task of a job
I accidentally came across this question, and I eventually managed to find the answers I needed, so I will try to provide a complete answer.
Entities and their relations. For each Hadoop application/job, you have an Application Master that communicates with the ResourceManager about the available resources on the cluster. The ResourceManager receives information about the available resources on each node from each NodeManager. The resources are called Containers (memory and CPU). For more information see this.
Resource declaration on the cluster. Each NodeManager provides information about its available resources. The relevant settings are yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores in $HADOOP_CONF_DIR/yarn-site.xml. They declare the memory and CPUs that can be allocated to Containers.
Ask for resources. For your jobs, you can configure what resources are needed by each map/reduce task. This can be done as follows (this is for the map tasks):
conf.set("mapreduce.map.cpu.vcores", "4");
conf.set("mapreduce.map.memory.mb", "2048");
This will ask for 4 virtual cores and 2048MB of memory for each map task.
You can also configure the resources that are necessary for the Application Master in the same way, with the properties yarn.app.mapreduce.am.resource.mb and yarn.app.mapreduce.am.resource.cpu-vcores.
Those properties have default values in $HADOOP_CONF_DIR/mapred-default.xml.
For more options and default values, I would recommend taking a look at this and this.

Why is Hadoop MapReduce so slow and not using all of the available resources?

I am currently testing the performance of Apache Hadoop on a 9-node cluster, with each node having 4 GB of RAM and 2 CPUs, and I have found that when submitting a single job the resources of the cluster (RAM, CPU, network, disk I/O) are nearly unused.
What are the limiting factors that prevent MapReduce from using all the available resources?
