We are using HDP 2.7.1.2.3 with Ambari 2.1.2.
After finishing setup, every node's status was correct.
But one day Ambari suddenly showed that the NameNode was stopped (we did not change any configuration of Ambari or the NameNode).
However, we can still use HBase and run MapReduce,
so we think the NameNode status should be normal.
We tried to restart the NameNode and checked the ambari-server log.
It shows:
ServiceComponentHostImpl:949 - Host role transitioned to a new state, serviceComponentName=NAMENODE, oldState=STARTING, currentState=STARTED
HeartBeatHandler:657 - State of service component NAMENODE of service HDFS of cluster wae has changed from STARTED to INSTALLED
We don't understand why its status changed from "STARTED" to "INSTALLED".
On the NameNode side, we checked ambari-agent.log.
It shows one warning:
[Alert][namenode_directory_status] HA nameservice value is present but there are no aliases for {{hdfs-site/dfs.ha.namenodes.{{ha-nameservice}}}}
We think it is irrelevant.
Why does Ambari think the NameNode is stopped?
Is there any way we can fix this issue?
Run the command ambari-server restart from a Linux terminal on the Ambari server node.
Run the command ambari-agent restart from a Linux terminal on all the nodes in the cluster.
You can then run the command hdfs dfsadmin -report from the terminal as the hdfs user to confirm that all the nodes are up and running.
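Putting those steps together, a minimal sketch of the sequence (running the commands with sudo and switching to the hdfs user are assumptions for illustration):

# on the Ambari server node
sudo ambari-server restart
# on every node in the cluster
sudo ambari-agent restart
# then verify HDFS health as the hdfs user
sudo -u hdfs hdfs dfsadmin -report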
Related
I am new to Hadoop and I have installed Hadoop 2.7.2 on two machines, master and slave1. I have followed this tutorial. It was not mentioned in the tutorial, but I have also edited the JAVA_HOME and HADOOP_CONF_DIR variables in hadoop-env.sh. In the end I have Hadoop installed on both machines. On master, NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager are running, and on slave1, DataNode and NodeManager are running.
I am able to go to master:8088 in the browser, but when I go to http://master:8088/cluster/nodes, only the master node is listed there. I am not able to reach isci17:8088, and it is not shown as a live node. Why could that be?
Port 8088 is the ResourceManager web UI port, so if the ResourceManager is running on master you probably won't have it on the slave.
You should also be able to reach the NameNode web UI on port 50070 on your NameNode host, for example http://master:50070/, and the MapReduce JobHistory Server web UI at http://hostname:19888/.
If you have access to a terminal session, you can run the following command on each server as root (or via sudo) to see which ports are listening on each one:
sudo lsof -i tcp | grep -i LISTEN
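For example, to narrow the output down to the usual Hadoop 2.x web UI ports (8088 ResourceManager, 8042 NodeManager, 50070 NameNode, 50075 DataNode, 19888 JobHistory Server), a variant like this could be used; the -nP flags just keep the port numbers numeric so the grep matches:

sudo lsof -nP -i tcp | grep -i LISTEN | grep -E ':(8088|8042|50070|50075|19888)'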
You can also run Hadoop CLI commands that will give you information.
For example, you can run the following in a terminal session to check Hadoop's ports:
hdfs portmap
Other health checks on the command line:
hdfs classpath
hdfs getconf -namenodes
hdfs dfsadmin -report -live
hdfs dfsadmin -report -dead
hdfs dfsadmin -printTopology
Depending on whether the hdfs CLI command is on your PATH, you might have to locate the executable and run it as ./hdfs. Also, depending on the distribution/version, you might have to replace the command hdfs with the command hadoop.
If you want to see your cluster configuration, check your /etc/hadoop/conf folder along with /etc/hive/conf. You will find about 5-10 *-site.xml files. These configuration files contain your cluster's configuration, including the hostnames and ports.
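If you would rather not open the XML files by hand, a small sketch of reading individual values (the property name fs.defaultFS is standard; the /etc/hadoop/conf path assumes a packaged install):

# print a single configuration value
hdfs getconf -confKey fs.defaultFS
# or read it straight out of the site file
grep -A1 '<name>fs.defaultFS</name>' /etc/hadoop/conf/core-site.xml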
I have a two-node Hadoop 2.7.1 installation on Ubuntu 12.04 LTS. All the daemons are up and running after sbin/start-dfs.sh && sbin/start-yarn.sh, as confirmed by jps.
However, only the master node (it doubles as a slave) is doing work, while the slave node's DataNode sits idle.
The weird thing is that the tmp folder of the slave node is being populated with nm-local-dir (created by the ResourceManager, I presume).
All configuration seems fine and I have tried every trick the Internet can suggest, but to no avail.
Any ideas?
Delete the local datanode/namenode folders on the master node and on all the slave nodes, then run hdfs namenode -format.
I think the problem is an inconsistency in the clusterID between the master and the slaves.
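A sketch of how to check for and recover from such a clusterID mismatch; the storage paths below are placeholders, so substitute the dfs.namenode.name.dir / dfs.datanode.data.dir values from your hdfs-site.xml, and note that reformatting erases HDFS data:

# compare the clusterID recorded on each node
cat /path/to/namenode/dir/current/VERSION   # on the master
cat /path/to/datanode/dir/current/VERSION   # on each slave
# if they differ: stop the cluster, wipe the storage dirs on every node, and reformat
sbin/stop-yarn.sh && sbin/stop-dfs.sh
rm -rf /path/to/namenode/dir/* /path/to/datanode/dir/*
hdfs namenode -format
sbin/start-dfs.sh && sbin/start-yarn.sh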
We have set up a cluster with 6 slave nodes. I am trying to see how replication happens when one of the DataNodes dies.
I logged into one of the slaves and killed the DataNode using the kill -9 command. After some time the DataNode was restarted automatically and HDFS got back into a healthy status. I can verify this because the PID of the DataNode has changed.
I don't see any documentation on the above behavior of the DataNode. Is this an Apache Hadoop feature or a Cloudera CDH feature? Any reference to the documentation is appreciated.
As the PID of the DataNode has changed, I don't think it is a behavior of the DataNode itself. If you are managing your cluster using Cloudera Manager, there is an option for restarting the DataNode daemon if it fails (Automatically Restart Process). This option is set by default. When the DataNode process fails or is killed, the Cloudera SCM agent starts the DataNode daemon again because the automatic restart option is set.
To find the automatic restart option: choose the HDFS service -> go to the Configuration section -> search for "automatic restart".
This feature is available in the CM 4.X releases as well.
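For reference, a minimal way to observe this restart behavior from the shell on a DataNode host (grepping for the DataNode main class is just one way to find the process):

# note the current DataNode PID
pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode
# kill it, wait a moment, and check again; with automatic restart the PID comes back different
sudo kill -9 $(pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode)
sleep 30
pgrep -f org.apache.hadoop.hdfs.server.datanode.DataNode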
I installed Hadoop 2.2 on my Ubuntu box using this tutorial:
http://codesfusion.blogspot.com/2013/11/hadoop-2x-core-hdfs-and-yarn-components.html
Everything worked fine for me, and now when I open
http://localhost:50070
I can see the management UI for HDFS. Very good!!
But now I am going through another tutorial which tells me that there must be JobTracker and TaskTracker web UIs running at http://mymachine.com:50030 and http://mymachine.com:50060.
On my machine I cannot open these ports.
I have already run:
start-dfs.sh
start-yarn.sh
start-all.sh
Is something wrong? Why can't I see those UIs?
You have installed YARN (MRv2) which runs the ResourceManager. The URL http://mymachine.com:50030 is the web address for the JobTracker daemon that comes with MRv1 and hence you are not able to see it.
To see the ResourceManager UI, check your yarn-site.xml file for the following property:
yarn.resourcemanager.webapp.address
By default, it points to resource_manager_hostname:8088.
Assuming your ResourceManager runs on mymachine, you should see the ResourceManager UI at http://mymachine.com:8088/
Make sure all your daemons are up and running before you visit the URL for the ResourceManager.
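A quick way to verify this from the command line (the config path assumes a typical install, so adjust it to your HADOOP_CONF_DIR; mymachine.com is the hostname from the question):

# confirm the ResourceManager daemon is actually running
jps | grep ResourceManager
# see whether yarn.resourcemanager.webapp.address is set explicitly
grep -A1 'yarn.resourcemanager.webapp.address' /etc/hadoop/conf/yarn-site.xml
# fetch the UI; a 200 (or a redirect code) means the web app is up
curl -s -o /dev/null -w '%{http_code}\n' http://mymachine.com:8088/cluster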
For Hadoop 2 (aka YARN/MRv2), i.e. any Hadoop installation versioned 2.x or higher, it is at port 8088, e.g. localhost:8088.
For Hadoop 1, i.e. any Hadoop installation versioned lower than 2.x (e.g. 1.x or 0.x), it is at port 50030, e.g. localhost:50030.
By default, the Hadoop HDFS (NameNode) UI location is as below:
http://mymachine.com:50070
I have installed CDH3u5 on a 2-node cluster. Everything seems to run fine: all the services, the web UI, MR jobs, HDFS shell commands. However, interestingly, when I started the datanode service, it gave me an OK message saying that the datanode is running as process, say, X. But when I run jps, I do not see the label "DataNode" for the process. So the output looks like this:
17153 TaskTracker
18908 Jps
16267
The process ID 16267 is the DataNode process. All other checks have passed, so this seems weird. The same thing happens on the other node in the cluster. Any insight into this behavior, and whether it is something that needs fixing, would be helpful.
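As an aside, a couple of generic ways to confirm from the shell what an unlabeled jps PID belongs to (16267 is the PID from the output above; 50010/50075 are the default DataNode ports):

# show the full command line behind the PID
ps -p 16267 -o pid,args
# check whether that PID is listening on the DataNode ports
sudo netstat -tnlp | grep 16267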
Can you check the following and reply?
- the web interface for the NameNode, and what it shows there for live nodes
- the DataNode log files, to see whether there are any exceptions
- whether the DataNode is reachable via ping/ssh from the NameNode and vice versa
If all of the above look OK, I'm not sure what the problem is, but to fix it you can:
- stop all Hadoop daemons
- delete the temp directory pointed to in conf/core-site.xml (hadoop.tmp.dir) on both the NN and the DN
- format the NameNode
- start the daemons
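A sketch of that recovery sequence for a CDH3-era (Hadoop 0.20) install; the temp path is only an example, so use the hadoop.tmp.dir value from your core-site.xml, the start/stop script names vary by install, and note that reformatting erases HDFS data:

# stop all daemons
stop-all.sh
# on both nodes, clear the directory configured as hadoop.tmp.dir (example path)
rm -rf /var/lib/hadoop-0.20/cache/*
# reformat the NameNode, then bring everything back up
hadoop namenode -format
start-all.sh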