How to optimize and tune Hadoop cluster performance

I am not very familiar with Hadoop cluster configuration. I have recently integrated Apache Nutch with Apache Hadoop, and I have successfully crawled data and indexed it in Solr.
I have the following master and slave resources:
Master:
CPU: 4 cores
memory: 12 GB
hard disk: 37 GB
Slave1:
CPU: 2 cores
memory: 4 GB
hard disk: 18 GB
Slave2:
CPU: 2 cores
memory: 4 GB
hard disk: 16 GB
Slave3:
CPU: 2 cores
memory: 4 GB
hard disk: 16 GB
Slave4:
CPU: 4 cores
memory: 4 GB
hard disk: 50 GB
I have configured core-site.xml, mapred-site.xml, hdfs-site.xml, masters, and slaves.
Here is my core-site.xml :
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/My Project Name/hadoop-datastore</value>
<description>store data</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://master:54310</value>
<description>the name of default file system</description>
</property>
</configuration>
Here is my mapred-site.xml :
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
<description>host and port</description>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>10</value>
<description></description>
</property>
<property>
<name>mapred.map.tasks</name>
<value>20</value>
<description></description>
</property>
<property>
<name>mapred.tasktracker.map.tasks.maximum</name>
<value>8</value>
<description></description>
</property>
<property>
<name>mapred.tasktracker.reduce.tasks.maximum</name>
<value>8</value>
<description></description>
</property>
</configuration>
And here is my hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>default block replication</description>
</property>
</configuration>
And here is my conf/masters :
master
And finally my conf/slaves:
master
slave1
slave2
slave3
slave4
So far the story goes well: when I start the master and run the jps command, I see the following on the master:
19031 TaskTracker
18644 DataNode
18764 SecondaryNameNode
18884 JobTracker
13226 Jps
18506 NameNode
And when I run the jps command on all the slaves, I see the following:
4969 DataNode
5057 TaskTracker
5592 Jps
When I look at the master's Hadoop Map/Reduce administration page, I see the following Cluster Summary (Heap Size is 114.5 MB/889 MB):
Running Map Tasks: 8
Running Reduce Tasks: 8
Total Submissions: 1607
Nodes: 1
Occupied Map Slots: 8
Occupied Reduce Slots: 8
Reserved Map Slots: 0
Reserved Reduce Slots: 0
Map Task Capacity: 8
Reduce Task Capacity: 8
Avg. Tasks/Node: 16.00
Blacklisted Nodes: 0
Graylisted Nodes: 0
Excluded Nodes: 0
The problem: the procedure works fine with topN = 1000, but the master is under heavy load, with high CPU and memory usage, while top on the slaves shows low CPU and memory usage and a high CPU idle percentage.
I wonder whether this is natural and OK or not. I am looking for solutions and configuration changes that would let me share the load across all the slaves and make the procedure faster.
Any links, documentation, and solutions are very much appreciated.

Your master node is running a lot of services:
TaskTracker, DataNode, SecondaryNameNode, JobTracker, NameNode
Typically, in a decent-sized cluster the master would not run the DataNode service.
The NameNode and the SecondaryNameNode should be on different nodes; you can put the SecondaryNameNode on one of your data nodes.
The same goes for the TaskTracker: the master typically does not run one, i.e. you do not run MR tasks on the master.
On the other hand, for pure experimentation the setup you have is OK, and the CPU usage you are noticing is to be expected.
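As a minimal sketch of that separation (assuming Hadoop 1.x, where conf/masters only controls where the SecondaryNameNode starts and conf/slaves controls where DataNodes and TaskTrackers start), you could remove master from conf/slaves and move the SecondaryNameNode to a slave:
conf/masters:
slave1
conf/slaves:
slave1
slave2
slave3
slave4
After a restart, the master would then run only the NameNode and the JobTracker.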

Looking more deeply at the logs directory, I found an error about the version: it said this build was a 1.2.1 snapshot version. So I changed the server, installing the plain 1.2.1 release, and made the master and all slaves identical in version. That fixed my problem: now I happily have five nodes, equal to the number of my machines.
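A quick way to catch this kind of version mismatch (a sketch, assuming the hostnames above and hadoop on each node's PATH) is to compare the reported version on every node:
for node in master slave1 slave2 slave3 slave4; do ssh $node "hadoop version | head -1"; done
Every line should print the same release, e.g. Hadoop 1.2.1, with no SNAPSHOT suffix.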
And I really thank ... for his great help

Related

Why doesn't a datanode disappear from the Hadoop web UI when the datanode process is killed?

I have a 3-node HA cluster in CentOS 8 VMs. I am using ZK 3.7.0 and Hadoop 3.3.1.
In my cluster I have 2 namenodes: node1 is the active namenode and node2 is the standby namenode in case node1 fails. The other node is the datanode.
I just start all with the command
start-dfs.sh
In node1 I had the following processes running: NameNode, Jps, QuorumPeerMain and JournalNode
In node2 I had the following processes running: NameNode, Jps, QuorumPeerMain, JournalNode and DataNode.
My hdfs-site.xml configuration is the following:
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/datos/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/datos/datanode</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ha-cluster</value>
</property>
<property>
<name>dfs.ha.namenodes.ha-cluster</name>
<value>nodo1,nodo2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.nodo1</name>
<value>nodo1:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.nodo2</name>
<value>nodo2:8020</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.nodo1</name>
<value>nodo1:9870</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.nodo2</name>
<value>nodo2:9870</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://nodo3:8485;nodo2:8485;nodo1:8485/ha-cluster</value>
</property>
The problem is that since node2 is the standby namenode, I didn't want it to have the DataNode process running, so I killed it. I used kill -9 (I know it's not the best way; I should have used hdfs --daemon stop datanode).
Then I opened the Hadoop web UI to check how many datanodes I had. On node1 (the active namenode), the datanode section showed only 1 datanode, node3.
The problem is that the web UI of node2 (the standby namenode) looked like this (in case you can't see the image):
/default-rack/nodo2:9866 (192.168.0.102:9866) http://nodo2:9864 558s
/default-rack/nodo3:9866 (192.168.0.103:9866) http://nodo3:9864 1s
The node2 datanode hasn't sent a heartbeat for 558s, yet the standby namenode doesn't consider the node dead.
Does anybody know why this happens?
In your hdfs-site.xml, check the values of:
dfs.heartbeat.interval (determines the datanode heartbeat interval, in seconds)
dfs.namenode.heartbeat.recheck-interval (decides the interval at which to check for expired datanodes; together with dfs.heartbeat.interval, it is also used to calculate whether a datanode is stale; the unit of this setting is milliseconds)
Check here for the defaults and more info:
https://hadoop.apache.org/docs/r2.7.0/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml
There is a formula that determines when a node is declared dead:
2 * dfs.namenode.heartbeat.recheck-interval + 10 * (1000 * dfs.heartbeat.interval)
(the factor of 1000 converts the heartbeat interval from seconds to milliseconds). With the defaults this means:
2 * 300000 + 10 * (1000 * 3) = 630000 milliseconds = 10 minutes 30 seconds, i.e. 630 seconds.
Source: Hadoop 2.x Administration Cookbook (Packt) - Configuring Datanode heartbeat:
Datanode removal time = (2 x dfs.namenode.heartbeat.recheck-interval) + (10 x dfs.heartbeat.interval)
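So with the defaults, your node2 datanode will only be marked dead after 10.5 minutes, which is why 558s of silence is not enough. As a sketch, assuming you wanted dead datanodes detected after roughly one minute instead, you could lower the recheck interval in hdfs-site.xml (the values here are illustrative, not recommendations):
<property>
<name>dfs.namenode.heartbeat.recheck-interval</name>
<value>15000</value>
<description>15 s; 2 * 15000 + 10 * (1000 * 3) = 60000 ms = 1 minute</description>
</property>
<property>
<name>dfs.heartbeat.interval</name>
<value>3</value>
<description>seconds (the default)</description>
</property>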

HADOOP YARN - Application is added to the scheduler and is not yet activated. Skipping AM assignment as cluster resource is empty

I am evaluating YARN for a project. I am trying to get the simple distributed shell example to work. I have gotten the application to the SUBMITTED phase, but it never starts. This is the information reported from this line:
ApplicationReport report = yarnClient.getApplicationReport(appId);
Application is added to the scheduler and is not yet activated. Skipping AM assignment as cluster resource is empty. Details : AM Partition = DEFAULT_PARTITION; AM Resource Request = memory:1024, vCores:1; Queue Resource Limit for AM = memory:0, vCores:0; User AM Resource Limit of the queue = memory:0, vCores:0; Queue AM Resource Usage = memory:128, vCores:1;
The solution for other developers seems to have been to increase yarn.scheduler.capacity.maximum-am-resource-percent in the yarn-site.xml file from its default value of 0.1. I have tried values of 0.2 and 0.5, but that does not seem to help.
It looks like you did not configure the RAM allocated to YARN properly. This can be a pain in the ... if you try to infer/adapt from tutorials for your own installation. I would strongly recommend that you use tools such as this one:
wget http://public-repo-1.hortonworks.com/HDP/tools/2.6.0.3/hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
tar zxvf hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
rm hdp_manual_install_rpm_helper_files-2.6.0.3.8.tar.gz
mv hdp_manual_install_rpm_helper_files-2.6.0.3.8/ hdp_conf_files
python hdp_conf_files/scripts/yarn-utils.py -c 4 -m 8 -d 1 false
-c the number of cores you have on each node
-m the amount of memory you have on each node (GB)
-d the number of disks you have on each node
-bool "True" if HBase is installed; "False" if not
This should give you something like:
Using cores=4 memory=8GB disks=1 hbase=True
Profile: cores=4 memory=5120MB reserved=3GB usableMem=5GB disks=1
Num Container=3
Container Ram=1536MB
Used Ram=4GB
Unused Ram=3GB
yarn.scheduler.minimum-allocation-mb=1536
yarn.scheduler.maximum-allocation-mb=4608
yarn.nodemanager.resource.memory-mb=4608
mapreduce.map.memory.mb=1536
mapreduce.map.java.opts=-Xmx1228m
mapreduce.reduce.memory.mb=3072
mapreduce.reduce.java.opts=-Xmx2457m
yarn.app.mapreduce.am.resource.mb=3072
yarn.app.mapreduce.am.command-opts=-Xmx2457m
mapreduce.task.io.sort.mb=614
Edit your yarn-site.xml and mapred-site.xml accordingly.
nano ~/hadoop/etc/hadoop/yarn-site.xml
nano ~/hadoop/etc/hadoop/mapred-site.xml
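For instance, the output above would translate into properties like these (a sketch based on that sample profile; plug in the numbers the script generates for your own hardware, and carry over the remaining values from the output likewise):
yarn-site.xml:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4608</value>
</property>
<property>
<name>yarn.scheduler.minimum-allocation-mb</name>
<value>1536</value>
</property>
<property>
<name>yarn.scheduler.maximum-allocation-mb</name>
<value>4608</value>
</property>
mapred-site.xml:
<property>
<name>yarn.app.mapreduce.am.resource.mb</name>
<value>3072</value>
</property>
<property>
<name>mapreduce.map.memory.mb</name>
<value>1536</value>
</property>
<property>
<name>mapreduce.reduce.memory.mb</name>
<value>3072</value>
</property>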
Moreover, you should have this in your yarn-site.xml
<property>
<name>yarn.acl.enable</name>
<value>0</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>name_of_your_master_node</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
and this in your mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
Then, upload your conf files to each node using scp (assuming you have uploaded your ssh keys to each one):
for node in node1 node2 node3; do scp ~/hadoop/etc/hadoop/* $node:/home/hadoop/hadoop/etc/hadoop/; done
And then, restart yarn
stop-yarn.sh
start-yarn.sh
and check that you can see your nodes:
hadoop#master-node:~$ yarn node -list
18/06/01 12:51:33 INFO client.RMProxy: Connecting to ResourceManager at master-node/192.168.0.37:8032
Total Nodes:3
Node-Id Node-State Node-Http-Address Number-of-Running-Containers
node3:34683 RUNNING node3:8042 0
node2:36467 RUNNING node2:8042 0
node1:38317 RUNNING node1:8042 0
This might fix the issue (good luck).
Add the properties below to yarn-site.xml and restart dfs and yarn:
<property>
<name>yarn.scheduler.capacity.root.support.user-limit-factor</name>
<value>2</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.min-healthy-disks</name>
<value>0.0</value>
</property>
<property>
<name>yarn.nodemanager.disk-health-checker.max-disk-utilization-per-disk-percentage</name>
<value>100.0</value>
</property>
I got the same error and tried hard to solve it. I realized the resource manager had no resources to allocate for the application master (AM) of the MapReduce application.
I navigated in the browser to http://localhost:8088/cluster/nodes/unhealthy, examined the unhealthy nodes (in my case there was only one), and looked at the health report. I saw a warning that some log directories had filled up. I cleaned those directories, and my node became healthy and the application state switched from ACCEPTED to RUNNING. By default, YARN behaves like this if a node's disk is more than 90% full: somehow you have to free space and bring usage below 90%.
My exact health report was:
1/1 local-dirs usable space is below configured utilization percentage/no more usable space [ /tmp/hadoop-train/nm-local-dir : used space above threshold of 90.0% ] ;
1/1 log-dirs usable space is below configured utilization percentage/no more usable space [ /opt/manual/hadoop/logs/userlogs : used space above threshold of 90.0% ]
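A quick way to confirm this (a sketch, using the directories named in the health report above) is to check how full the underlying filesystems are and which logs are taking the space:
df -h /tmp/hadoop-train/nm-local-dir /opt/manual/hadoop/logs/userlogs
du -sh /opt/manual/hadoop/logs/userlogs/*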

Configure Yarn with Hadoop 2.7.4 resources issue

I have configured Hadoop 2.7.4 by following this tutorial. DataNode, NameNode and SecondaryNameNode are working properly.
But when I run YARN, the NodeManager goes down with the following message:
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved SHUTDOWN signal from Resourcemanager ,Registration of NodeManager failed, Message from ResourceManager: NodeManager from localhost doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the NodeManager.
My system has 8 CPUs and 8 GB of RAM. How do I configure YARN with these resources? I have found many similar questions, such as this one, but could not find any solution that solves my problem.
I had the same problem during a course. We were using Amazon virtual machines with 2 cores.
After various modifications to yarn-site.xml, we got our NodeManager running by setting the following properties:
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>4096</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>2</value>
</property>
In your case, you may need to declare 8 virtual cores.
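For an 8-core, 8 GB machine like yours, a sketch might look like this (the memory figure is an assumption that leaves roughly 2 GB for the OS and the HDFS daemons; adjust to taste):
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>6144</value>
</property>
<property>
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>8</value>
</property>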

What would happen if nodes in Hadoop change their IP address?

My Hadoop clusters do not work well because of the network conditions. What if I change the entire network, e.g. switch to another router, and thus change the IP addresses? Could the clusters still work after updating some configuration, or must I tear everything down and rebuild it?
Thanks in advance
It works once you change the IP addresses in the configuration. Why didn't you use DNS?
OK, that was not a good answer; let me apologize and give a better one.
If you need to change the configuration of a running cluster, you can decommission and recommission the data nodes.
Simply switching off a data node is not a good idea.
Data Node Decommissioning
The first step is to tell YARN you are going to remove some nodes; then you have to tell the same to the namenode.
I don't know if your system is configured for decommissioning; if it is, you have the key yarn.resourcemanager.nodes.exclude-path in your yarn-site.xml and dfs.hosts.exclude in your hdfs-site.xml.
hdfs-site.xml
<property>
<name>dfs.hosts.exclude</name>
<value>$YOUR_PATH/dfs.exclude</value>
<final>true</final>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.exclude-path</name>
<value>$YOUR_PATH/dfs.exclude</value>
<final>true</final>
</property>
Open the file $YOUR_PATH/dfs.exclude and add the hostnames / IP addresses of the nodes you need to stop.
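For example, a hypothetical exclude file retiring two nodes, one entry per line:
old-node1
old-node2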
Then execute:
yarn rmadmin -refreshNodes
hdfs dfsadmin -refreshNodes
Check in the web interface that the data nodes are decommissioning.
Data Node Commissioning
This works the same way as decommissioning.
yarn-site.xml
<property>
<name>yarn.resourcemanager.nodes.include-path</name>
<value>$YOUR_PATH/dfs.include</value>
<final>true</final>
</property>
hdfs-site.xml
<property>
<name>dfs.hosts</name>
<value>$YOUR_PATH/dfs.include</value>
<final>true</final>
</property>
Open the file $YOUR_PATH/dfs.include and add the hostnames / IP addresses of the nodes you need to add, then run:
yarn rmadmin -refreshNodes
hdfs dfsadmin -refreshNodes
Wait some time, then run:
hdfs dfsadmin -report
Now the hosts you added are in the list.
If your configuration is missing the keys above, you need to add them and then restart the namenode and the YARN resourcemanager.
Using this procedure you can take data nodes offline in a safe way.

Hadoop: no datanode started

I am following this tutorial.
http://hadoop.apache.org/docs/r2.4.0/hadoop-project-dist/hadoop-common/SingleCluster.html#Pseudo-Distributed_Operation
I got to this point and started the nodes.
Start NameNode daemon and DataNode daemon:
$ sbin/start-dfs.sh
But then when I run the next steps, it looks like no datanode is running (I get errors saying so).
Why is the datanode down? And how can I fix this?
Here is the log from my data node.
hduser#test02:/usr/local/hadoop$ jps
3792 SecondaryNameNode
3929 Jps
3258 NameNode
hduser#test02:/usr/local/hadoop$ cat /usr/local/hadoop/logs/hadoop-hduser-datanode-test02.out
Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
-m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 3781
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
hduser#test02:/usr/local/hadoop$
EDIT:
It seems I had this port number wrong.
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
Now that I have corrected it (i.e. made it equal to 9000), the namenode no longer starts up.
hduser#test02:/usr/local/hadoop$ jps
10423 DataNode
10938 Jps
10703 SecondaryNameNode
and I can no longer browse:
http://my-server-name:50070/
Hope this gives you some hint of what is happening.
I am a total beginner with Hadoop and I am kind of lost now.
[core-site.xml]
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
[hdfs-site.xml]
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
</configuration>
In mapred-site.xml I have nothing.
1. First stop all the daemons (namenode, datanode, etc.); you will have some script or command to do that.
2. Format the tmp directory: go to /var/cache/hadoop-hdfs/hdfs/dfs/ and delete all the contents of the directory manually.
3. Now format your namenode again.
4. Start all the daemons, then use the jps command to confirm that the datanode has started.
5. Now run whichever application you like.
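As a shell sketch of those steps, assuming a pseudo-distributed Hadoop 2.x install run from the Hadoop home directory and the data directory mentioned above (note that this wipes all HDFS data):
sbin/stop-dfs.sh
rm -rf /var/cache/hadoop-hdfs/hdfs/dfs/*
bin/hdfs namenode -format
sbin/start-dfs.sh
jps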
Hope this helps.
Add this configuration
conf/core-site.xml
<property>
<name>hadoop.tmp.dir</name>
<value>/var/lib/hadoop/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
conf/mapred-site.xml
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
conf/hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
stop hadoop
bin/stop-all.sh
change permission and remove temp directory data
chmod 755 /var/lib/hadoop/tmp
rm -Rf /var/lib/hadoop/tmp/*
format name node
bin/hadoop namenode -format
After 1 day of struggle, I just removed version 2.4 and installed Hadoop 2.2 (as I realized 2.2 is the latest stable version). Then I got it all working by following this nice tutorial:
http://codesfusion.blogspot.com/2013/10/setup-hadoop-2x-220-on-ubuntu.html?m=1
Something is not right with the document about 2.4 that I was reading, not to mention that it is not suitable for beginners, and it is usually beginners who stumble upon it.
Maybe your slave's data and your master's data are not in sync. Delete the data and name folders in ./hadoop/hdfs and recreate them, re-format the namenode, then start dfs.
