Datanode is not starting in hadoop-hbase start? - hadoop

I am running the following script to start all the hbase and hadoop processes in my hbase setup on a virtual machine.
#!/bin/sh
start-dfs.sh
start-yarn.sh
start-hbase.sh
#hbase-daemon.sh start rest
hbase-daemon.sh start thrift
Earlier all the processes used to run properly, but recently I force-shut-down my virtual machine without stopping the hbase and hadoop related processes, and after that my datanode process stopped coming up. Following a suggestion I found online, I then formatted my namenode. Now the namenode comes up properly, but the datanode process still does not. When I check the running Java processes with jps, the datanode process is missing:
4672 NodeManager
5474 ThriftServer
4098 NameNode
4408 SecondaryNameNode
5723 Jps
4555 ResourceManager
5372 HRegionServer
5246 HMaster
5182 HQuorumPeer
But earlier the DataNode process used to come up properly. Is it because I formatted my namenode? Do I need to change any config data or something else as well?
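A likely explanation, for what it's worth: formatting the namenode gives it a fresh clusterID, while the datanode's storage directory still carries the old one, so the datanode aborts at startup with an "Incompatible clusterIDs" error. A rough check-and-fix sketch, assuming default log and data locations (the paths below are assumptions; take the real ones from your hdfs-site.xml, and note that wiping the data directory discards the blocks stored in it):
# look for the clusterID error in the datanode log (path is an assumption)
grep -i "incompatible clusterids" $HADOOP_HOME/logs/hadoop-*-datanode-*.log
# if the IDs differ, clear the datanode storage dir (example default path)
rm -rf /tmp/hadoop-$USER/dfs/data
hadoop-daemon.sh start datanode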

Related

Hadoop Multi-Cluster Installation: Unable to see the data nodes despite seeing daemons running on them

I am trying to set up a multi-node hadoop cluster using Hadoop 3.0.0. There is no straightforward documentation on this, so I had to read a lot of blogs. I am at the point where, when I run start-all.sh, I see daemon processes appearing on the name node as well as the data nodes. However, when I go to http://namenode:9870 I see 0 live nodes.
To be more specific: when I run start-all.sh the startup output looks normal, and when I run jps I see the NameNode, SecondaryNameNode and ResourceManager processes running. On the data nodes, running jps shows DataNode and NodeManager running. Yet the web UI at the URL above still reports 0 live nodes.
Any guidance is greatly appreciated.
Thanks
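When the daemons are all running but the UI reports 0 live nodes, a frequent culprit is that the datanodes cannot reach the namenode's RPC address (for example, fs.defaultFS bound to localhost). A quick pair of checks, assuming hdfs://namenode:9000 as the fs.defaultFS value (substitute the host and port from your core-site.xml):
# on the namenode: how many datanodes have actually registered?
hdfs dfsadmin -report | grep -i 'live datanodes'
# on a datanode: can we reach the namenode RPC port at all?
nc -zv namenode 9000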

Name node is not displaying when I hit the JPS command

17223 JobTracker
16897 DataNode
17518 Jps
17451 TaskTracker
17129 SecondaryNameNode
8571 FsShell
Name node is not displaying
It seems you are using the same user to start all the daemons. If the namenode is not showing up in the jps output, the namenode daemon probably got killed or was not started properly. You can use the following command to check whether the namenode process is running:
ps aux | grep -i namenode
If it is not running, you may need to format your namenode before starting the HDFS service: stop all HDFS daemons using the stop-dfs.sh script, then format your namenode using the command below, and start HDFS using the start-dfs.sh script.
hadoop namenode -format
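Put together, the recovery sequence described above looks like this (be warned that formatting wipes the HDFS metadata, so the filesystem contents are effectively lost):
stop-dfs.sh
hadoop namenode -format
start-dfs.sh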
Go through the SO post below if you are hitting this situation:
Hadoop namenode needs to be formatted after every computer start
If you are looking to check all running JVMs on the host via 'jps', you need to run the command as root. Otherwise, 'jps' will only show JVMs running as your currently logged-in user.
Please see this link for more:
https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/1dlxmB_GVuU
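For example, on a host where the daemons run under a dedicated account such as hdfs, plain jps from your own account will miss them:
# lists only JVMs owned by the current user
jps
# as root, lists JVMs from all users, including the hadoop daemons
sudo jps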
You should always check the logs. :)

Hadoop - Restart datanode and tasktracker

I want to bring down a single datanode and tasktracker, so that some new changes I've made in my mapred-site.xml (such as mapred.reduce.child.java.opts) take effect. How do I do that? However, I don't want to bring down the whole cluster, since I have active jobs running.
Also, how can that be done while ensuring that the namenode does not copy the relevant data blocks of a "temporarily down" datanode onto another node?
To stop
You can stop the DataNodes and TaskTrackers from the NameNode's hadoop bin directory.
./hadoop-daemon.sh stop tasktracker
./hadoop-daemon.sh stop datanode
This script checks the slaves file in hadoop's conf directory to stop the DataNodes, and the same goes for the TaskTrackers.
To start
Again, this script checks the slaves file in hadoop's conf directory to start the DataNodes and TaskTrackers.
./hadoop-daemon.sh start tasktracker
./hadoop-daemon.sh start datanode
In Hadoop 2.7.2 the tasktracker is long gone. To manually restart the services out on the slaves:
yarn-daemon.sh stop nodemanager
hadoop-daemon.sh stop datanode
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager
SSH into the datanode/tasktracker machine and cd into the bin directory of hadoop.
Invoke
./hadoop-daemon.sh stop tasktracker
./hadoop-daemon.sh stop datanode
./hadoop-daemon.sh start datanode
./hadoop-daemon.sh start tasktracker
I'm not sure if restarting the tasktracker is required for the changes in mapred-site.xml to take effect. Please leave a comment so that I can correct my answer if needed.
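On the second part of the question: the namenode does not re-replicate a stopped datanode's blocks immediately; it only does so once the node is marked dead, which with default heartbeat settings takes roughly ten minutes, so a quick restart is normally safe. One way to confirm the node rejoined in time (a Hadoop 1.x-era command, matching this question's vintage):
# after restarting, confirm the datanode is counted as available again
hadoop dfsadmin -report | grep -i 'datanodes available'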

What is best way to start and stop hadoop ecosystem, with command line?

I see there are several ways we can start the hadoop ecosystem:
start-all.sh & stop-all.sh
These say they are deprecated and that start-dfs.sh & start-yarn.sh should be used instead.
start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh
hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager
EDIT: I think there have to be some specific use cases for each command.
start-all.sh & stop-all.sh : Used to start and stop hadoop daemons all at once. Issuing it on the master machine will start/stop the daemons on all the nodes of a cluster. Deprecated as you have already noticed.
start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh : Same as above, but these start/stop the HDFS and YARN daemons separately on all the nodes from the master machine. It is advisable to use these commands now over start-all.sh & stop-all.sh.
hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager : To start individual daemons on an individual machine manually. You need to go to the particular node and issue these commands.
Use case : Suppose you have added a new DN to your cluster and you need to start the DN daemon only on this machine:
bin/hadoop-daemon.sh start datanode
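If the new machine should also run YARN containers, you would start its node manager the same per-node way (a sketch using the yarn-daemon.sh script mentioned above):
bin/yarn-daemon.sh start nodemanager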
Note : You should have ssh enabled if you want to start all the daemons on all the nodes from one machine.
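For completeness, passwordless SSH from the master is typically set up along these lines (the user and hostname below are placeholders):
# generate a key pair on the master, if one does not already exist
ssh-keygen -t rsa
# copy the public key to each slave node
ssh-copy-id user@slave1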
Hope this answers your query.
From the Hadoop page:
start-all.sh
This will startup a Namenode, Datanode, Jobtracker and a Tasktracker on your machine.
start-dfs.sh
This will bring up HDFS with the Namenode running on the machine you ran the command on. On such a machine you would need start-mapred.sh to separately start the job tracker.
start-all.sh/stop-all.sh has to be run on the master node
You would use start-all.sh on a single node cluster (i.e. where you have all the services on the same node; the namenode is also the datanode and is the master node).
In a multi-node setup,
You would use start-all.sh on the master node, and it will start what is necessary on the slaves as well.
Alternatively,
Use start-dfs.sh on the node you want the Namenode to run on. This will bring up HDFS with the Namenode running on the machine you ran the command on and Datanodes on the machines listed in the slaves file.
Use start-mapred.sh on the machine you plan to run the Jobtracker on. This will bring up the Map/Reduce cluster with Jobtracker running on the machine you ran the command on and Tasktrackers running on machines listed in the slaves file.
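Both scripts read the slaves file to find the worker machines; a typical conf/slaves is just one hostname per line (the hostnames below are made-up examples):
# conf/slaves
slave1
slave2
slave3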
hadoop-daemon.sh, as stated by Tariq, is used on each individual node. The master node will not start the services on the slaves. In a single node setup this will act the same as start-all.sh. In a multi-node setup you will have to access each node (master as well as slaves) and execute it on each of them.
Have a look at this start-all.sh: it calls config followed by dfs and mapred.
Starting
start-dfs.sh (starts the namenode and the datanode)
start-mapred.sh (starts the jobtracker and the tasktracker)
Stopping
stop-dfs.sh
stop-mapred.sh

Hadoop installation - Datanode running, but not showing in JPS

I have installed CDH3U5 on a 2 node cluster. Everything seems to run fine, such as all the services, the web UI, MR jobs, and HDFS shell commands. However, interestingly, when I started the datanode service, it gave me an OK message that the datanode is running as process, say, X. But when I run jps, I do not see the label "DataNode" for the process. So the output looks like -
17153 TaskTracker
18908 Jps
16267
The process ID 16267 is the Datanode process. All other checkpoints have passed, so this seems weird. The same thing happens on the other node in the cluster. Any insight into this behavior, and whether it needs fixing, would be helpful.
Can you check the following and reply?
- the namenode web interface, and what it shows for live nodes
- the datanode log files, for any exceptions
- whether the datanode is pingable/SSH-able from the namenode and vice versa
If all of the above look OK, I'm not sure what the problem is, but to fix it you can (see the sketch below):
- stop all hadoop daemons
- delete the temp directory pointed to in conf/core-site.xml on both the NN and DN
- format the namenode
- start the daemons
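As a concrete sketch of those steps (the temp path below is Hadoop's default hadoop.tmp.dir of /tmp/hadoop-<user>; use the value from your core-site.xml instead if you have overridden it, and note that this destroys the HDFS contents):
stop-all.sh
# on both NN and DN: remove the temp directory from core-site.xml
rm -rf /tmp/hadoop-$USER
hadoop namenode -format
start-all.sh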
