Right, an absolute Spark noob talking here.
This is the command I'm running, expecting 3 workers:
./spark-ec2 --worker-instances=3 --key-pair=my.key --identity-file=mykey.pem --region=us-east-1 --zone=us-east-1a launch my-spark-cluster-G
However, in the AWS console only two servers get created (a master and one slave).
On the other hand, at:
http://myMasterSparkURL:8080/
I get the following info, which just does not add up:
Workers: 3
Cores: 3 Total, 3 Used
Memory: 18.8 GB Total, 18.0 GB Used
Applications: 1 Running, 0 Completed
Drivers: 0 Running, 0 Completed
Status: ALIVE
and under workers it shows:
worker1 (port 8081) worker1IP:43595 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
worker1 (port 8082) worker1IP:53195 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
worker1 (port 8083) worker1IP:41683 ALIVE 1 (1 Used) 6.3 GB (6.0 GB Used)
Now if I click on the first one (the worker on 8081) it redirects me to the worker page, but if I click on the other two (the workers on ports 8082 and 8083) it basically says page not found.
With high probability I am assuming this is a bug in spark-ec2, but I'm not quite sure since I'm a noob here. I've searched all over the place for someone with a similar issue, so I'd appreciate any suggestion that gives me an idea of why this is happening and how to fix it. Thanks.
The Spark version is spark-1.3.0.
You might want to change that invocation a little. Note that --worker-instances controls how many worker processes run on each slave node, while -s (--slaves) controls how many slave machines are launched, which would explain why you see three workers on the same IP but only two EC2 instances. This is how I have been creating clusters so far:
./spark-ec2 -k MyKey \
  -i MyKey.pem \
  -s 3 \
  --instance-type=m3.medium \
  --region=eu-west-1 \
  --spark-version=1.2.0 \
  launch MyCluster
Related
I have a 6-node cluster: 5 DNs and 1 NN. All have 32 GB RAM. All slaves have 8.7 TB HDD; the NN has 1.1 TB HDD. Here is the link to my core-site.xml, hdfs-site.xml, yarn-site.xml.
After running an MR job, I checked my RAM usage, which is shown below:
Namenode
free -g
total used free shared buff/cache available
Mem: 31 7 15 0 8 22
Swap: 31 0 31
Datanodes:
Slave1:
free -g
total used free shared buff/cache available
Mem: 31 6 6 0 18 24
Swap: 31 3 28
Slave2:
total used free shared buff/cache available
Mem: 31 2 4 0 24 28
Swap: 31 1 30
Likewise, the other slaves have similar RAM usage. Even when only a single job is running, other submitted jobs enter the ACCEPTED state and wait for the first job to finish before they start.
Here is the ps output for the JAR that I submitted to execute the MR job:
/opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir= -Dyarn.id.str= -Dhadoop.root.logger=INFO,console
-Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop
-Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console
-classpath --classpath of jars
org.apache.hadoop.util.RunJar abc.jar abc.mydriver2 /raw_data /mr_output/02
Are there any settings that I can change/add to allow multiple jobs to run simultaneously and speed up the current data processing? I am using Hadoop 2.5.2. The cluster is in a PROD environment and I cannot take it down to upgrade the Hadoop version.
EDIT 1: I started a new MR job with 362 GB of data and still the RAM usage is around 8 GB, with 22 GB of RAM free. Here is my job submission command:
nohup yarn jar abc.jar def.mydriver1 /raw_data /mr_output/01 &
Here is some more information:
18/11/22 14:09:07 INFO input.FileInputFormat: Total input paths to process : 130363
18/11/22 14:09:10 INFO mapreduce.JobSubmitter: number of splits:130372
Are there additional memory parameters that we can use when submitting the job to make memory usage more efficient?
I believe you can set these in mapred-site.xml (mapred-default.xml just documents the defaults).
The parameters you are looking for are:
mapreduce.job.running.map.limit
mapreduce.job.running.reduce.limit
0 (probably what they are set to at the moment) means unlimited.
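A minimal sketch of what that override could look like in mapred-site.xml (the limit values here are only examples, not recommendations):
<property>
  <name>mapreduce.job.running.map.limit</name>
  <value>50</value>  <!-- cap on concurrently running map tasks per job; 0 = unlimited -->
</property>
<property>
  <name>mapreduce.job.running.reduce.limit</name>
  <value>10</value>  <!-- cap on concurrently running reduce tasks per job; 0 = unlimited -->
</property>
Capping how many tasks a single job may run at once leaves containers free for the other ACCEPTED jobs to start.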
Looking at your memory, 32 GB per machine seems small.
What CPUs/cores do you have? I would expect at least a quad CPU / 16 cores per machine.
Based on your yarn-site.xml, your yarn.scheduler.minimum-allocation-mb setting of 10240 is too high. This effectively means you only have, at best, 18 vcores available. That might be the right setting for a cluster with tons of memory, but for 32 GB it's way too large. Drop it to 1 or 2 GB.
Remember, the HDFS block size is what each mapper typically consumes, so 1-2 GB of memory for 128 MB of data sounds more reasonable. The added benefit is that you could have up to 180 vcores available, which will process jobs 10x faster than 18 vcores.
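For example, in yarn-site.xml (2 GB shown here as an assumed value; pick whatever fits your containers):
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>2048</value>  <!-- smallest container YARN will allocate; previously 10240 -->
</property>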
To give you an idea of how a 4-node cluster with 32 cores and 128 GB RAM per node is set up:
For Tez: divide RAM by cores to get the max Tez container size.
So in my case: 128 GB / 32 cores = 4 GB.
TEZ:
YARN:
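As a rough sketch of that sizing (values derived from the 128 GB / 32-core arithmetic above, not copied from an actual cluster), the relevant knobs would look something like this:
<!-- yarn-site.xml, illustrative values -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>131072</value>  <!-- 128 GB per node offered to YARN -->
</property>
<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>32</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>  <!-- max container = RAM / cores = 4 GB -->
</property>
<!-- tez-site.xml / hive-site.xml, illustrative values -->
<property>
  <name>tez.am.resource.memory.mb</name>
  <value>4096</value>
</property>
<property>
  <name>hive.tez.container.size</name>
  <value>4096</value>
</property>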
I am trying to run my PySpark job on a cluster with 2 worker nodes and 1 master (all have 16 GB RAM). I ran Spark with the command below.
spark-submit --master yarn --deploy-mode cluster --name "Pyspark"
--num-executors 40 --executor-memory 2g CD.py
However, my code runs very slowly; it takes almost 1 hour to parse 8.2 GB of data.
Then I tried to change the configuration in YARN. I changed the following properties:
yarn.scheduler.increment-allocation-mb = 2 GiB
yarn.scheduler.minimum-allocation-mb = 2 GiB
yarn.scheduler.maximum-allocation-mb = 2 GiB
After making these changes my Spark job still runs very slowly, taking more than 1 hour to parse the 8.2 GB of files.
Could you please try the configuration below:
spark.executor.memory 5g
spark.executor.cores 5
spark.executor.instances 3
spark.driver.cores 2
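If it's easier, the same values (which read like spark-defaults.conf entries) can also be passed straight on the command line; a sketch reusing the original submit command:
# mirrors spark.executor.instances/cores/memory and spark.driver.cores from above
spark-submit --master yarn --deploy-mode cluster --name "Pyspark" \
  --num-executors 3 \
  --executor-cores 5 \
  --executor-memory 5g \
  --driver-cores 2 \
  CD.py
With 40 executors of 2 GB each you were likely asking YARN for more memory than two 16 GB nodes can grant at once; three fatter executors fit the hardware better.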
When I start H2O on a CDH cluster I get the following error. I downloaded everything from the website and followed the tutorial. The command I ran was:
hadoop jar h2odriver.jar -nodes 2 -mapperXmx 1g -output hdfsOutputDirName
It shows that containers are not being used. It's not clear which Hadoop settings these correspond to. I have given memory in every setting I could find. The 0.0 for memory doesn't make sense, and why are the containers not using memory? Is the cluster even running now?
----- YARN cluster metrics -----
Number of YARN worker nodes: 3
----- Nodes -----
Node: http://data-node-3:8042 Rack: /default, RUNNING, 1 containers used, 1.0 / 6.0 GB used, 1 / 4 vcores used
Node: http://data-node-1:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
Node: http://data-node-2:8042 Rack: /default, RUNNING, 0 containers used, 0.0 / 6.0 GB used, 0 / 4 vcores used
----- Queues -----
Queue name: root.default
Queue state: RUNNING
Current capacity: 0.00
Capacity: 0.00
Maximum capacity: -1.00
Application count: 0
Queue 'root.default' approximate utilization: 0.0 / 0.0 GB used, 0 / 0 vcores used
----------------------------------------------------------------------
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
----------------------------------------------------------------------
For YARN users, logs command is 'yarn logs -applicationId application_1462681033282_0008'
You should set up your default queue to have resources available to run a 2-node cluster.
See warnings:
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (0.0 GB)
You ask for 1 GB per node (+ overhead), but there are no resources available in the YARN queue.
WARNING: Job virtual cores request (2) exceeds queue available virtual cores capacity (0)
You ask for 2 virtual cores, but no cores are available in your default queue.
Please check the YARN documentation, for example the capacity scheduler setup and maximum available resources:
https://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
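For reference, a minimal capacity-scheduler.xml sketch that gives root.default the whole cluster (this assumes the CapacityScheduler is the scheduler in use; the values are illustrative):
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>default</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.capacity</name>
  <value>100</value>  <!-- percent of cluster resources guaranteed to this queue -->
</property>
<property>
  <name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
  <value>100</value>  <!-- hard cap, in percent -->
</property>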
I made the following changes in the Cloudera Manager YARN configuration:
Setting Value
yarn.scheduler.maximum-allocation-vcores 8
yarn.nodemanager.resource.cpu-vcores 4
yarn.scheduler.maximum-allocation-mb 16 GB
I am running a micro instance in EC2 with 592 MB of available RAM.
Jenkins was crashing with out-of-memory build errors while running an UPDATE on a big SQL table in the backend.
Disk utilisation is 83%, with 6 GB of the 8 GB EBS volume used.
sudo du -hsx * | sort -rh | head -10
/
2.7G opt
1.5G var
1.2G usr
I found only 6 MB free with the command free -m, with these services running:
(i) LAMPP
(ii) Jenkins
(iii) Mysql 5.6
I stopped LAMPP and that created 70 MB of free space.
Then I closed Jenkins, which created 320 MB of free space.
Closing MySQL 5.6 brings it up to 390 MB of free space.
So about 200 MB of RAM is still being used with none of my services running.
Is 200 MB of RAM the minimum required for an Ubuntu micro instance running on Amazon EC2?
Nope, I believe it can run until memory is 100% used.
If a task requires more memory than is available, the task is killed.
To free up more memory, you can run this from your terminal:
sudo apt-get autoremove
I am getting this problem:
Failed with exception java.io.IOException:java.io.IOException: Could not obtain block: blk_364919282277866885_1342 file=/user/hive/warehouse/invites/ds=2008-08-08/kv3.txt
I checked that the file is actually there:
hive> dfs -ls /user/hive/warehouse/invites/ds=2008-08-08/kv3.txt
Found 1 items
-rw-r--r-- 2 root supergroup 216 2012-11-16 16:28 /user/hive/warehouse/invites/ds=2008-08-08/kv3.txt
What should I do?
Please help.
I ran into this problem on my cluster, but it disappeared once I restarted the task on a cluster with more nodes available. The underlying cause appears to be an out-of-memory error, as this thread indicates. My original cluster on AWS was running 3 c1.xlarge instances (7 GB memory each), while the new one had 10 c3.4xlarge instances (30 GB memory each).
Try hadoop fsck /user/hive/warehouse/invites/ds=2008-08-08/kv3.txt ?
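A slightly more detailed variant, using standard fsck flags, shows whether the block is reported missing or corrupt and which datanodes are supposed to hold it:
hadoop fsck /user/hive/warehouse/invites/ds=2008-08-08/kv3.txt -files -blocks -locations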