YARN: No Active Cluster Nodes - hadoop

I successfully installed Hadoop and can see all the daemons running: the ResourceManager on the master node and the NodeManagers on the slave nodes. But when I check http://hadoop-master:8088/cluster, it shows no active cluster nodes.
I also checked YARN's logs; they show java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
But I don't know what I did wrong. Here is the configuration:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
</configuration>
As the image shows, there are no active nodes:
https://i.imgur.com/mbSCiKO.png
jps on slave nodes:
9011 DataNode
10093 NodeManager
10446 Jps
jps on Master node:
32546 ResourceManager
25176 NameNode
25643 SecondaryNameNode
17629 Jps

Check that the hostname in this configuration is correct for the machine it points to:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-master</value>
  </property>
</configuration>
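One way to verify this, as a rough sketch (the ports are the YARN defaults, and the tools used are an assumption about what is installed on your nodes): on each slave, make sure hadoop-master resolves to the master's real IP rather than a loopback address, and that the ResourceManager's resource-tracker port is reachable.
# run on each slave node
getent hosts hadoop-master        # should print the master's LAN IP, not 127.0.0.1 / 127.0.1.1
nc -zv hadoop-master 8031         # 8031 is the default yarn.resourcemanager.resource-tracker.address port
# run on the master: YARN should listen on the cluster address, not only on localhost
ss -tlnp | grep -E ':8031|:8088'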

Related

resource manager and node manager not starting

My question might be a duplicate; I searched for similar questions here, but they didn't solve my problem.
I'm new to Hadoop and I'm setting up a multi-node cluster with 1 master and 2 slaves. When I run the jps command on the master node, my terminal shows this:
3250 - DataNode
3090 - NameNode
4099 - jps
3498 - SecondaryNameNode
and when I run the jps command on the slave nodes, my terminal shows this:
3896 - DataNode
4684 - jps
4111 - SecondaryNameNode
According to this tutorial link, my master node should show this output:
Jps
NameNode
SecondaryNameNode
ResourceManager
and my slave nodes should show this:
Jps
NodeManager
DataNode
So on the master node there is no ResourceManager, and on the slave nodes there is no NodeManager.
Edit:
When I run the start-dfs.sh command, my terminal output shows this:
Starting namenodes on [HadoopMaster]
starting datanodes
starting secondary namenodes [farhan-master]
and when I run the start-yarn.sh command, my terminal output shows this:
starting resourcemanager
starting nodemanager
How do I solve this problem? Thanks in advance.

HBASE distributed mode on HADOOP stuck

I am trying to configure HBASE Distributed mode on 3 node hadoop cluster.
The problem is that when I run start-hbase.sh, my cursor gets stuck after printing:
hadoop@namenode1:/usr/local/hbase/bin$ start-hbase.sh
hadoop@namenode1's password: datanode2: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode2.out
datanode1: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode1.out
and it does not proceed any further; it is just stuck.
My hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode1:10001/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>hdfs://namenode1:10001/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>namenode1,datanode1,datanode2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
However, I did not install ZooKeeper separately.
My data nodes show:
4387 Jps
3978 DataNode
4332 HQuorumPeer
4126 NodeManager
The name node only shows:
hadoop@namenode1:/usr/local/hbase/conf$ jps
4832 ResourceManager
4676 SecondaryNameNode
4443 NameNode
5437 Jps
Please help me resolve this issue; I am stuck.
Now, when I press Enter at the stuck cursor, I get:
hadoop@namenode1:/usr/local/hbase/bin$ start-hbase.sh
hadoop@namenode1's password: datanode2: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode2.out
datanode1: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode1.out
namenode1: Connection closed by UNKNOWN
starting master, logging to /usr/local/hbase/logs/hbase-hadoop-master-namenode1.out
hadoop@namenode1's password: datanode2: starting regionserver, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-regionserver-datanode2.out
datanode1: starting regionserver, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-regionserver-datanode1.out
hadoop@namenode1's password: namenode1: Permission denied, please try again.
hadoop@namenode1's password: namenode1: Permission denied, please try again.
namenode1: Permission denied (publickey,password).
and then the name node shows:
hadoop@namenode1:/usr/local/hbase/bin$ jps
4832 ResourceManager
4676 SecondaryNameNode
5559 HMaster
5751 Jps
4443 NameNode
and datanode1 shows:
hadoop@datanode1:/usr/local/hbase/conf$ jps
4610 Jps
4502 HRegionServer
3978 DataNode
4332 HQuorumPeer
4126 NodeManager
and datanode2 shows:
hadoop@datanode2:~$ jps
2465 DataNode
2601 NodeManager
2922 HRegionServer
2794 HQuorumPeer
3054 Jps
It turned out that, first of all, I needed to set up ZooKeeper in distributed mode, and that I had not configured passwordless SSH to the node itself, so I ran:
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Thanks @Abhinav and everyone.
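For reference, a minimal sketch of enabling passwordless SSH to the local node (assuming the hadoop user and the default RSA key paths; adjust the names to your environment):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa     # skip if the key pair already exists
ssh-copy-id hadoop@namenode1                 # appends the public key to authorized_keys on the target
chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys
ssh hadoop@namenode1 exit                    # should log in without asking for a password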

jps command on name node shows secondary name node

I have Hadoop 2.7.1 and have configured a cluster consisting of three nodes.
When I call the jps command on the name node, I get the following output:
3234 SecondaryNameNode
3039 NameNode
9019 Jps
3382 ResourceManager
Calling the jps command on the secondary name node, the output is:
4720 DataNode
4826 NodeManager
4949 Jps
Calling the jps command on the data node, the output is:
4824 Jps
4587 DataNode
4701 NodeManager
Is this output right? Why does jps show SecondaryNameNode on the name node and DataNode on the secondary name node?
Isn't there a conflict?
It looks like you have used start-all.sh or start-dfs.sh to start the daemons and have not set the property dfs.namenode.secondary.http-address in hdfs-site.xml.
In that case, the SecondaryNameNode will be started on the same node where the start-dfs.sh (or start-all.sh) script is executed. To start it on a different node, add this property to hdfs-site.xml:
<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>secondary_namenode_hostname:50090</value>
</property>
Datanodes are started based on the hostname(s) listed in the slaves file.
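For example, the slaves file in the Hadoop configuration directory is just a list of worker hostnames, one per line (the names below are placeholders for your actual worker nodes):
slave1
slave2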
Alternatively, use hadoop-daemon.sh and yarn-daemon.sh scripts to start the specific HDFS and YARN services respectively on each node.
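As a rough sketch of that alternative (Hadoop 2.x script names, run as the user that owns the Hadoop installation):
# on the master node
hadoop-daemon.sh start namenode
yarn-daemon.sh start resourcemanager
# on the node that should host the SecondaryNameNode
hadoop-daemon.sh start secondarynamenode
# on each worker node
hadoop-daemon.sh start datanode
yarn-daemon.sh start nodemanager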

Slave's datanodes not starting in hadoop

I followed this tutorial and tried to set up a multi-node Hadoop cluster on CentOS. After doing all the configuration and running start-dfs.sh and start-yarn.sh, this is what jps outputs:
Master
26121 ResourceManager
25964 SecondaryNameNode
25759 NameNode
25738 Jps
Slave 1
19082 Jps
17826 NodeManager
Slave 2
17857 Jps
16650 NodeManager
The DataNode is not started on the slaves.
Can anyone suggest what is wrong with this setup?

Hadoop Multi Master Cluster Setup

We have a Hadoop setup with 2 Master nodes and 1 Slave node.
We have configured the Hadoop cluster. After configuring it, when we executed the jps command, we got the following output on the first master node:
13405 NameNode
14614 Jps
13860 ResourceManager
13650 DataNode
14083 NodeManager
On the second master node, the output is:
9698 Jps
9234 DataNode
9022 NameNode
9450 NodeManager
On the data node, the output is:
21681 NodeManager
21461 DataNode
21878 Jps
I feel my secondary node is not running. Please tell me whether this is right or wrong, and if it's wrong, what should the status of my nodes be? Please answer as soon as possible.
You can check the status of each NameNode by running the command below:
hdfs haadmin -getServiceState <serviceId>
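As a rough example, assuming an HA nameservice whose NameNode IDs are nn1 and nn2 (placeholders; use the IDs listed under dfs.ha.namenodes.<nameservice> in your hdfs-site.xml):
hdfs haadmin -getServiceState nn1    # prints "active" or "standby"
hdfs haadmin -getServiceState nn2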
