H2O starting on YARN not working - Hadoop

When I start H2O on a CDH cluster I get the following error. I downloaded everything from the website and followed the tutorial. The command I ran was:
hadoop jar h2odriver.jar -nodes 2 -mapperXmx 1g -output hdfsOutputDirName
It shows that containers are not being used. It's not clear which Hadoop settings these correspond to; I have assigned memory in every setting I could find. It's the 0.0 for memory that doesn't make sense, and why are the containers not using any memory? Is the cluster even running?
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://data-node-3:8042 Rack: /default, RUNNING, 1 containers used, 1.0 / 6.0 GB used, 1 / 4 vcores used
Node: http://data-node-1:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
Node: http://data-node-2:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
----- Queues -----
Queue name: root.default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 0.00
Maximum capacity: -1.00
Application count: 0
Queue 'root.default' approximate utilization: 0.0 / 0.0 GB used, 0 / 0 vcores used
----------------------------------------------------------------------
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1462681033282_0008'

You should set up your default queue to have resources available to run a 2-node cluster.
See warnings:
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
you ask for 1 GB per node (plus overhead) but there are no resources available in the YARN queue
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
you ask for 2 virtual cores but no cores are available in your default queue
Please check the YARN documentation - for example, the setup of the capacity scheduler and maximum available resources:
https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
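For illustration, a minimal capacity-scheduler.xml sketch that gives root.default usable capacity (assuming the Capacity Scheduler is in use; CDH clusters often run the Fair Scheduler through Cloudera Manager's dynamic resource pools, in which case the equivalent change is made there and the values below are only placeholders):
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value>   <!-- percentage of cluster resources the default queue may use -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>   <!-- hard upper limit for the queue -->
</property>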

I made the following changes in the Cloudera Manager YARN configuration:
yarn.scheduler.maximum-allocation-vcores = 8
yarn.nodemanager.resource.cpu-vcores = 4
yarn.scheduler.maximum-allocation-mb = 16 GB

Related

hadoop + how to rebalance the hdfs

We have an HDP cluster, version 2.6.5, with 8 data nodes; all machines run RHEL 7.6.
The HDP cluster is based on the Ambari platform, version 2.6.1.
Each data node (worker machine) includes two disks, and each disk is 1.8T in size.
When we access the data-node machines we can see differences in how full the disks are.
For example, on the first data node the usage is (from df -h):
/dev/sdb 1.8T 839G 996G 46% /grid/sdc
/dev/sda 1.8T 1014G 821G 56% /grid/sdb
On the second data node the usage is:
/dev/sdb 1.8T 1.5T 390G 79% /grid/sdc
/dev/sda 1.8T 1.5T 400G 79% /grid/sdb
On the third data node the usage is:
/dev/sdb 1.8T 1.7T 170G 91% /grid/sdc
/dev/sda 1.8T 1.7T 169G 91% /grid/sdb
and so on
The big question is: why doesn't HDFS rebalance the data across the datanodes and their disks?
We would expect roughly the same usage on all disks across all datanode machines.
Why does the used space differ between datanode1, datanode2, datanode3, and so on?
Any advice about HDFS tuning parameters that could help us?
This is critical, because one disk can reach 100% usage while others are only around 50% full.
This is known behaviour of the HDFS balancer in HDP 2.6; there are many possible reasons for an unbalanced block distribution.
With HDFS-1312, a disk balancer option has been introduced to address this issue.
The following articles should help you tune it more efficiently:
HDFS Balancer (1): 100x Performance Improvement
HDFS Balancer (2): Configurations & CLI Options
HDFS Balancer (3): Cluster Balancing Algorithm
I would suggest upgrading to HDP 3.x, as HDP 2.x is no longer supported by Cloudera Support.
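For reference, a minimal sketch of running the HDFS-1312 disk balancer (assuming your HDP 2.6 build ships it and that dfs.disk.balancer.enabled is set to true in hdfs-site.xml; <datanode-host> is a placeholder):
hdfs diskbalancer -plan <datanode-host>       # compute a plan for moving blocks between that node's disks
hdfs diskbalancer -execute <plan.json>        # run the plan file whose path the -plan step printed
hdfs diskbalancer -query <datanode-host>      # check the status of the running plan
Note that the classic hdfs balancer evens out usage between datanodes, while the disk balancer evens out the disks within a single datanode; the df output above suggests both kinds of skew.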

Hadoop multinode cluster too slow. How do I increase speed of data processing?

I have a 6-node cluster - 5 DN and 1 NN. All have 32 GB RAM. All slaves have 8.7 TB HDD; the DN has 1.1 TB HDD. Here is the link to my core-site.xml, hdfs-site.xml, yarn-site.xml.
After running an MR job, I checked my RAM usage, which is shown below:
Namenode
free -g
total used free shared buff/cache available
Mem: 31 7 15 0 8 22
Swap: 31 0 31
Datanode:
Slave1:
free -g
total used free shared buff/cache available
Mem: 31 6 6 0 18 24
Swap: 31 3 28
Slave2:
total used free shared buff/cache available
Mem: 31 2 4 0 24 28
Swap: 31 1 30
Likewise, the other slaves have similar RAM usage. Even if only a single job is submitted, any other submitted jobs enter the ACCEPTED state and wait for the first job to finish before they start.
Here is the output of the ps command for the JAR that I submitted to execute the MR job:
/opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir= -Dyarn.id.str= -Dhadoop.root.logger=INFO,console
-Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop
-Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console
-classpath --classpath of jars
org.apache.hadoop.util.RunJar abc.jar abc.mydriver2 /raw_data /mr_output/02
Are there any settings that I can change or add to allow multiple jobs to run simultaneously and speed up the data processing? I am using Hadoop 2.5.2. The cluster is in a PROD environment and I cannot take it down to update the Hadoop version.
EDIT 1: I started a new MR job with 362 GB of data and the RAM usage is still around 8 GB, with 22 GB of RAM free. Here is my job submission command:
nohup yarn jar abc.jar def.mydriver1 /raw_data /mr_output/01 &
Here is some more information :
18/11/22 14:09:07 INFO input.FileInputFormat: Total input paths to process : 130363
18/11/22 14:09:10 INFO mapreduce.JobSubmitter: number of splits:130372
Are there some additional memory parameters that we can use when submitting the job to get more efficient memory usage?
I believe you can set these in mapred-site.xml (their defaults are documented in mapred-default.xml).
The parameters you are looking for are:
mapreduce.job.running.map.limit
mapreduce.job.running.reduce.limit
0 (probably what it is set to at the moment) means unlimited.
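For illustration, a minimal mapred-site.xml sketch of those limits (the values are placeholders, and these properties only appeared in later Hadoop 2.x releases, so verify that your 2.5.2 build supports them first):
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>20</value>   <!-- run at most 20 map tasks of a single job at a time; 0 = unlimited -->
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>10</value>   <!-- run at most 10 reduce tasks of a single job at a time; 0 = unlimited -->
</property>
They can also be passed per job, e.g. yarn jar abc.jar def.mydriver1 -Dmapreduce.job.running.map.limit=20 /raw_data /mr_output/01, assuming the driver parses generic options via ToolRunner.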
Looking at your memory, 32 GB per machine seems too small.
What CPUs/cores do you have? I would expect quad CPUs / 16 cores minimum per machine.
Based on your yarn-site.xml, your yarn.scheduler.minimum-allocation-mb setting of 10240 is too high. It effectively means you have at best 18 vcores available. This might be the right setting for a cluster where you have tons of memory, but for 32 GB it's way too large. Drop it to 1 or 2 GB.
Remember, an HDFS block is what each mapper typically consumes, so 1-2 GB of memory for 128 MB of data sounds more reasonable. The added benefit is that you could have up to 180 vcores available, which will process jobs roughly 10x faster than 18 vcores.
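A minimal yarn-site.xml sketch of that suggestion (values are illustrative and should be adapted to your 32 GB nodes):
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>   <!-- smallest container YARN hands out: 2 GB instead of 10 GB -->
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>   <!-- largest single container a job may request -->
</property>
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>  <!-- memory each NodeManager advertises to YARN; leave headroom for the OS and Hadoop daemons -->
</property>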
To give you an idea of how a 4-node cluster with 32 cores and 128 GB RAM per node is set up:
For Tez: divide RAM by cores to get the max Tez container size.
So in my case: 128 GB / 32 cores = 4 GB.
TEZ and YARN settings: (configuration screenshots not reproduced here)
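If you want to turn that RAM/cores rule into configuration, a sketch using the usual knobs (tez.task.resource.memory.mb in tez-site.xml, and hive.tez.container.size if you run Hive on Tez; 4096 MB follows the 128/32 example above, so adjust for your own nodes):
<property>
  <name>tez.task.resource.memory.mb</name>
  <value>4096</value>   <!-- per-task Tez container size: RAM / cores -->
</property>
<property>
  <name>hive.tez.container.size</name>
  <value>4096</value>   <!-- keep Hive's Tez containers aligned with the same rule -->
</property>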

Spark Program running very slow on cluster

I am trying to run my PySpark job on a cluster with 2 nodes and 1 master (all have 16 GB RAM). I ran my Spark job with the command below:
spark-submit --master yarn --deploy-mode cluster --name "Pyspark" \
  --num-executors 40 --executor-memory 2g CD.py
However, my code runs very slowly; it takes almost 1 hour to parse 8.2 GB of data.
Then I tried to change my YARN configuration. I changed the following properties:
yarn.scheduler.increment-allocation-mb = 2 GiB
yarn.scheduler.minimum-allocation-mb = 2 GiB
yarn.scheduler.maximum-allocation-mb = 2 GiB
After making these changes, my Spark job is still running very slowly, taking more than 1 hour to parse 8.2 GB of files.
Could you please try with the configuration below:
spark.executor.memory 5g
spark.executor.cores 5
spark.executor.instances 3
spark.driver.cores 2
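For example, those values could be passed straight on the command line (a sketch assuming standard spark-submit flags; CD.py as in the question):
spark-submit --master yarn --deploy-mode cluster --name "Pyspark" \
  --num-executors 3 \
  --executor-memory 5g \
  --executor-cores 5 \
  --driver-cores 2 \
  CD.py
The idea, presumably, is that a few larger executors fit the 2 x 16 GB nodes better than 40 small 2 GB ones, most of which could never be scheduled anyway.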

YARN - why doesn't the task run out of heap space, but the container gets killed?

If a YARN container grows beyond its heap size setting, the map or reduce task will fail, with an error similar to the one below:
2015-02-06 11:58:15,461 WARN org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Container [pid=10305,containerID=container_1423215865404_0002_01_000007] is running beyond physical memory limits.
Current usage: 42.1 GB of 42 GB physical memory used; 42.9 GB of 168 GB virtual memory used. Killing container.
Dump of the process-tree for container_1423215865404_0002_01_000007 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 10310 10305 10305 10305 (java) 1265097 48324 46100516864 11028122 /usr/java/default/bin/java -server -XX:OnOutOfMemoryError=kill %p -Xms40960m -Xmx40960m -XX:MaxPermSize=128m -Dspark.sql.shuffle.partitions=20 -Djava.io.tmpdir=/data/yarn/datanode/nm-local-dir/usercache/admin/appcache/application_1423215865404_0002/container_1423215865404_0002_01_000007/tmp org.apache.spark.executor.CoarseGrainedExecutorBackend akka.tcp://sparkDriver#marx-61:56138/user/CoarseGrainedScheduler 6 marx-62 5
|- 10305 28687 10305 10305 (bash) 0 0 9428992 318 /bin/bash -c /usr/java/default/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms40960m -Xmx40960m -XX:MaxPermSize=128m -Dspark.sql.shuffle.partitions=20 -Djava.io.tmpdir=/data/yarn/datanode/nm-local-dir/usercache/admin/appcache/application_1423215865404_0002/container_1423215865404_0002_01_000007/tmp org.apache.spark.executor.CoarseGrainedExecutorBackend akka.tcp://sparkDriver#marx-61:56138/user/CoarseGrainedScheduler 6 marx-62 5 1> /opt/hadoop/logs/userlogs/application_1423215865404_0002/container_1423215865404_0002_01_000007/stdout 2> /opt/hadoop/logs/userlogs/application_1423215865404_0002/container_1423215865404_0002_01_000007/stderr
It is interesting to note that all stages complete; it fails only when saveAsSequenceFile is called. The executor is not using up the heap space, so I wonder what else is eating it up?
The Spark executor gets killed all the time and Spark keeps retrying the failed stage. For Spark on YARN, the NodeManager will kill a Spark executor if it uses more memory than the configured size of "spark.executor.memory" + "spark.yarn.executor.memoryOverhead". Increase "spark.yarn.executor.memoryOverhead" to make sure it covers the executor's off-heap memory usage.
Some issues:
https://issues.apache.org/jira/browse/SPARK-2398
https://issues.apache.org/jira/browse/SPARK-2468
You are actually running the container out of physical memory in this case:
Current usage: 42.1 GB of 42 GB physical memory used
The virtual memory isn't the bounding factor. You'll have to increase the heap size of the container, or increase spark.yarn.executor.memoryOverhead to give the YARN container some more headroom without necessarily increasing the executor heap size.
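As a sketch of that second option (values are illustrative, your_app.py is a placeholder, and in newer Spark releases the property is spelled spark.executor.memoryOverhead):
# reserve more off-heap room on top of the executor heap; YARN will then request
# a correspondingly larger container, so make sure the NodeManagers can still fit it
spark-submit --master yarn \
  --executor-memory 40g \
  --conf spark.yarn.executor.memoryOverhead=6144 \
  your_app.py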
I faced the exact same problem as the OP: all stages succeeded, and only at the time of saving and writing the results would the container be killed.
If the Java heap is exceeded you see OutOfMemory exceptions, but a container being killed is caused by memory other than the Java heap, which can be either the memoryOverhead or the application master memory.
In my case, increasing spark.yarn.executor.memoryOverhead or spark.yarn.driver.memoryOverhead didn't help, probably because it was my application master (AM) that was running out of memory. In yarn-client mode, the configuration to increase the AM memory is spark.yarn.am.memory; in yarn-cluster mode, it is the driver memory. This is how it worked for me.
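A sketch of the two cases (the 4g values are placeholders, and your_app.py stands in for the real application):
# yarn-client mode: the AM is a separate, small process, sized via spark.yarn.am.memory
spark-submit --master yarn --deploy-mode client \
  --conf spark.yarn.am.memory=4g \
  your_app.py

# yarn-cluster mode: the driver runs inside the AM, so size the driver instead
spark-submit --master yarn --deploy-mode cluster \
  --driver-memory 4g \
  your_app.py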
Here's a reference to the error I got:
Application application_1471843888557_0604 failed 2 times due to AM Container for appattempt_1471843888557_0604_000002 exited with exitCode: -104
For more detailed output, check application tracking page:http://master01.prod2.everstring.com:8088/cluster/app/application_1471843888557_0604Then, click on links to logs of each attempt.
Diagnostics: Container [pid=89920,containerID=container_e59_1471843888557_0604_02_000001] is running beyond physical memory limits.
Current usage: 14.0 GB of 14 GB physical memory used; 16.0 GB of 29.4 GB virtual memory used. Killing container.

running spark-ec2 with --worker-instances

Right, an absolute Spark noob talking here.
This is the command I'm running, expecting 3 workers:
./spark-ec2 --worker-instances=3 --key-pair=my.key --identity-file=mykey.pem --region=us-east-1 --zone=us-east-1a launch my-spark-cluster-G
However, in the AWS console only two servers are created (master and slave).
On the other hand, at:
http://myMasterSparkURL:8080/
I get the following info, which just doesn't add up:
Workers: 3
Cores: 3 Total, 3 Used
Memory: 18.8 GB Total, 18.0 GB Used
Applications: 1 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
and under workers it shows:
worker1 (port 8081) worker1IP:43595 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
worker1 (port 8082) worker1IP:53195 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
worker1 (port 8083) worker1IP:41683 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
Now if I click on the first one (the worker on port 8081) it redirects me to the worker page, but if I click on the other two (the workers on ports 8082 and 8083) it basically says page not found.
With high probability I'm assuming this is a bug in spark-ec2, but I'm not quite sure since I'm a noob here.
I've searched all over the place for someone with a similar issue, so I'd appreciate any suggestions that could give me some idea of why this is happening and how to fix it. Thanks.
The Spark version is spark-1.3.0.
You might want to change that invocation a little; this is how I have been creating clusters so far:
./spark-ec2 -k MyKey \
  -i MyKey.pem \
  -s 3 \
  --instance-type=m3.medium \
  --region=eu-west-1 \
  --spark-version=1.2.0 \
  launch MyCluster
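For what it's worth, my understanding of the spark-ec2 flags (worth double-checking against ./spark-ec2 --help for your version): -s / --slaves sets how many slave machines are launched, while --worker-instances sets how many worker daemons run on each slave (SPARK_WORKER_INSTANCES), which would explain one slave EC2 instance carrying three workers on the same IP. A sketch of the original command adjusted to get three separate machines:
./spark-ec2 --key-pair=my.key --identity-file=mykey.pem \
  --region=us-east-1 --zone=us-east-1a \
  -s 3 \
  launch my-spark-cluster-G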
