HBase distributed mode on Hadoop stuck

I am trying to configure HBase in distributed mode on a 3-node Hadoop cluster.
The problem is that when I run start-hbase.sh, the terminal hangs after printing:
hadoop@namenode1:/usr/local/hbase/bin$ start-hbase.sh
hadoop@namenode1's password: datanode2: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode2.out
datanode1: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode1.out
and it does not proceed any further; it just hangs.
My hbase-site.xml:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://namenode1:10001/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>hdfs://namenode1:10001/zookeeper</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>namenode1,datanode1,datanode2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
However, I did not install ZooKeeper separately.
My data nodes show:
4387 Jps
3978 DataNode
4332 HQuorumPeer
4126 NodeManager
The name node only shows:
hadoop@namenode1:/usr/local/hbase/conf$ jps
4832 ResourceManager
4676 SecondaryNameNode
4443 NameNode
5437 Jps
Please help me resolve this issue; I am stuck.
Now, when I press Enter at the hanging prompt, I get:
hadoop@namenode1:/usr/local/hbase/bin$ start-hbase.sh
hadoop@namenode1's password: datanode2: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode2.out
datanode1: starting zookeeper, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-zookeeper-datanode1.out
namenode1: Connection closed by UNKNOWN
starting master, logging to /usr/local/hbase/logs/hbase-hadoop-master-namenode1.out
hadoop@namenode1's password: datanode2: starting regionserver, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-regionserver-datanode2.out
datanode1: starting regionserver, logging to /usr/local/hbase/bin/../logs/hbase-hadoop-regionserver-datanode1.out
hadoop@namenode1's password: namenode1: Permission denied, please try again.
hadoop@namenode1's password: namenode1: Permission denied, please try again.
namenode1: Permission denied (publickey,password).
and then the namenode shows:
hadoop@namenode1:/usr/local/hbase/bin$ jps
4832 ResourceManager
4676 SecondaryNameNode
5559 HMaster
5751 Jps
4443 NameNode
and datanode1 shows:
hadoop@datanode1:/usr/local/hbase/conf$ jps
4610 Jps
4502 HRegionServer
3978 DataNode
4332 HQuorumPeer
4126 NodeManager
and datanode2 shows:
hadoop@datanode2:~$ jps
2465 DataNode
2601 NodeManager
2922 HRegionServer
2794 HQuorumPeer
3054 Jps

Actually, first of all I needed to set up ZooKeeper properly for distributed mode, and for passwordless SSH I had not configured passwordless SSH from the master node to itself, so I ran:
sudo cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
Thanks @Abhinav!
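For reference, a minimal passwordless-SSH setup for the hadoop user could look like the sketch below; the hostnames are taken from this cluster, while the key path and options are assumptions.
# run as the hadoop user on namenode1 (sketch; adjust hostnames and key type to your setup)
ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa   # generate a key pair if one does not already exist
ssh-copy-id hadoop@namenode1               # the master must also reach itself without a password
ssh-copy-id hadoop@datanode1
ssh-copy-id hadoop@datanode2
ssh namenode1 exit                         # verify: should return without prompting for a password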

Related

YARN: No Active Cluster Nodes

I successfully installed Hadoop and can see the ResourceManager running on the master node and a NodeManager on each slave node. But when I check http://hadoop-master:8088/cluster, it shows no active cluster nodes.
I also checked YARN's logs, which say java.net.ConnectException: Your endpoint configuration is wrong; For more details see: http://wiki.apache.org/hadoop/UnsetHostnameOrPort
But I don't know what I did wrong. Here is the configuration:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop-master</value>
</property>
As the image shows, there are no active nodes:
https://i.imgur.com/mbSCiKO.png
jps on slave nodes:
9011 DataNode
10093 NodeManager
10446 Jps
jps on Master node:
32546 ResourceManager
25176 NameNode
25643 SecondaryNameNode
17629 Jps
Check that the hostname is correct on each machine and that yarn.resourcemanager.hostname points to the master:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>hadoop-master</value>
</property>
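A quick way to verify this is sketched below; it assumes the standard etc/hadoop layout under $HADOOP_HOME and that the same yarn-site.xml is distributed to every node.
hostname -f                                                                     # on each machine: should resolve as /etc/hosts and yarn-site.xml expect
grep -A1 yarn.resourcemanager.hostname $HADOOP_HOME/etc/hadoop/yarn-site.xml   # confirm the value on every node
yarn node -list                                                                 # from the master, after restarting YARN: should list the NodeManagers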

Hadoop-Installation-Multinode

Hi all, I am trying to do a multinode Hadoop installation. Everything works fine except that my YARN NodeManager is not working. When I looked at the NodeManager log file, I found the following:
"org.apache.hadoop.yarn.server.nodemanager.NodeStatusUpdaterImpl:
Initialized nodemanager for null: physical-memory=-1 virtual-memory=-2
virtual-cores=-1"
I have no idea why it is not showing the actual memory and virtual cores. My VM has 8 GB of memory and 8 vCPUs. Because of the above values I am getting this error:
"org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Recieved
SHUTDOWN signal from Resourcemanager ,Registration of NodeManager
failed, Message from ResourceManager: NodeManager from SFeUbuntuVM2
doesn't satisfy minimum allocations, Sending SHUTDOWN signal to the
NodeManager"
Can someone help me out with this issue?
Check that you have:
SELinux disabled
firewall disabled
Then check your configuration files.
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>{your host name}</value>
</property>
After all that, format your namenode and start all the services again.
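For example, on a RHEL/CentOS-style VM those checks and the restart might look like the sketch below; the firewall command depends on your distribution (ufw on Ubuntu), so treat it as an assumption.
getenforce                        # should print Permissive or Disabled
sudo setenforce 0                 # temporarily set SELinux to permissive
sudo systemctl stop firewalld     # RHEL/CentOS; on Ubuntu: sudo ufw disable
hdfs namenode -format             # reformat the namenode (this wipes existing HDFS metadata)
start-dfs.sh
start-yarn.sh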

Error starting datanode on hadoop

I'm trying to run a hadoop cluster via Docker. I have one virtual machine as the namenode and another for the datanode, but the datanode gives me this error running start-dfs.sh:
namenode: namenode running as process 130. Stop it first.
The command jps on the datanode does not show the namenode running. Then I try to start it by hand, using:
hadoop namenode
And it fails with this error:
java.net.BindException: Problem binding to [namenode:9000] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
So far it seems that the namenode is not accessible or is not listening on port 9000. But the network setup is correct: if I execute this on the datanode:
telnet namenode 9000
It connects to the namenode correctly, and netstat -apn | grep 9000 on the namenode shows the incoming connection. If I shut down DFS on the namenode (stop-dfs.sh), the telnet command from the datanode fails with "Connection closed by foreign host."
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value> <!-- I have tried with 1 and 2 too -->
</property>
<property>
<name>dfs.namenode.datanode.registration.ip-hostname-check</name>
<value>false</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://namenode:9000</value>
</property>
</configuration>
Thanks!
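One thing worth checking in a containerized setup (an assumption on my part, not a confirmed fix for this question) is a stale pid file: the "running as process 130. Stop it first." message is printed by hadoop-daemon.sh when it finds an old pid file whose pid now belongs to an unrelated process inside the container. With the default HADOOP_PID_DIR the pid files live under /tmp:
ls /tmp/hadoop-*-namenode.pid /tmp/hadoop-*-datanode.pid   # see which pid files exist (paths assume the default HADOOP_PID_DIR)
rm -f /tmp/hadoop-root-namenode.pid                        # remove the stale entry for your user (the user name here is an assumption)
start-dfs.sh                                               # then retry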

HDFS NFS GateWay mount.nfs: Input/output error?

1. The errors are as follows:
[root@xx sbin]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /hdfs_y
mount.nfs: Input/output error
2016-03-10 15:12:06,350 WARN org.apache.hadoop.hdfs.nfs.nfs3.RpcProgramNfs3: Exception
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.authorize.AuthorizationException): User: hadoop is not allowed to impersonate root
at org.apache.hadoop.ipc.Client.call(Client.java:1475)
......
2. The Hadoop processes (Hadoop was started by the user "hadoop"):
[root@WEB-W031 sbin]# jps
6755 Nfs3
11199 Jps
6163 Portmap
7977 SecondaryNameNode
7720 NameNode
8217 ResourceManager
16762 org.eclipse.equinox.launcher_1.3.0.v20130327-1440.jar
[hadoop@WEB-W031 hadoop]$ jps
6755 Nfs3
7977 SecondaryNameNode
7720 NameNode
8217 ResourceManager
11239 Jps
3. Reference:
https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html#Allow_mounts_from_unprivileged_clients
Thanks very much!
You need to add these lines to your core-site.xml:
<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>*</value>
</property>
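After adding these proxyuser properties, the NameNode needs to pick them up before the mount will work; a sketch of one way to do that (assuming the NFS gateway and portmap daemons are already running, as the jps output shows):
hdfs dfsadmin -refreshSuperUserGroupsConfiguration                         # reload the proxyuser (impersonation) settings on the running NameNode
mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync localhost:/ /hdfs_y     # then retry the mount as root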

hadoop fs -mkdir failed on connection exception

I have been trying to set up and run Hadoop in pseudo-distributed mode. But when I type
bin/hadoop fs -mkdir input
I get
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Here are the details.
core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/grid/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://h1:9000</value>
</property>
</configuration>
mapred-site.xml
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>h1:9001</value>
</property>
<property>
<name>mapred.map.tasks</name>
<value>20</value>
</property>
<property>
<name>mapred.reduce.tasks</name>
<value>4</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobtracker.http.address</name>
<value>h1:50030</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>h1:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>h1:19888</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.http.address</name>
<value>h1:50070</value>
</property>
<property>
<name>dfs.namenode.rpc-address</name>
<value>h1:9001</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>h1:50090</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/grid/data</value>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
/etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.13 h1
192.168.1.14 h2
192.168.1.15 h3
After hadoop namenode -format and start-all.sh
1702 ResourceManager
1374 DataNode
1802 NodeManager
2331 Jps
1276 NameNode
1558 SecondaryNameNode
the problem occurs:
[grid@h1 hadoop-2.6.0]$ bin/hadoop fs -mkdir input
15/05/13 16:37:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
mkdir: Call From h1/192.168.1.13 to h1:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Where is the problem?
hadoop-grid-datanode-h1.log
2015-05-12 11:26:20,329 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = h1/192.168.1.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0
hadoop-grid-namenode-h1.log
2015-05-08 16:06:32,561 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = h1/192.168.1.13
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0
Why does port 9000 not work?
[grid@h1 ~]$ netstat -tnl |grep 9000
[grid@h1 ~]$ netstat -tnl |grep 9001
tcp 0 0 192.168.1.13:9001 0.0.0.0:* LISTEN
Please start dfs and yarn.
[hadoop@hadooplab sbin]$ ./start-dfs.sh
[hadoop@hadooplab sbin]$ ./start-yarn.sh
Now try using "bin/hadoop fs -mkdir input"
The issue usually comes up when you install Hadoop in a VM and then shut it down. When you shut down the VM, DFS and YARN also stop, so you need to start them each time you restart the VM.
First, try the command:
bin/hadoop dfs -mkdir input
If you have followed the Michael Noll post properly then you should not have any issue. I suspect that passwordless SSH is not working in your configuration; recheck it.
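A quick way to recheck passwordless SSH, as a sketch (h1 is the hostname used in this question):
ssh h1 exit          # should return immediately without asking for a password
ssh localhost exit   # pseudo-distributed mode also needs passwordless SSH to localhost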
The following procedure resolved the issue for me:
Stop all the services.
Delete namenode and datanode directories as specified in hdfs-site.xml.
Create new namenode and datanode directories and modify hdfs-site.xml accordingly.
In core-site.xml, make the following changes or add the following properties:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://172.20.12.168/</value>
</property>
<property>
  <name>fs.default.name</name>
  <value>hdfs://172.20.12.168:8020</value>
</property>
Make the following changes in hadoop-2.6.4/etc/hadoop/hadoop-env.sh file:
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
Restart dfs, yarn and mr as follows:
start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
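After that, a quick verification sketch (the port comes from the fs.default.name value above):
jps                          # NameNode, DataNode, ResourceManager and NodeManager should all be listed
netstat -tnl | grep 8020     # the NameNode should now be listening on the fs.default.name port
bin/hadoop fs -mkdir input   # retry the original command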
This command worked for me:
hadoop namenode -format
