Connection refused in Hbase Shell while Connecting HBase to HDFS - hadoop

I am trying to connect my HBase to HDFS. I have my HDFS namenode (bin/hdfs namenode) and datanode (bin/hdfs datanode) running. I can also start HBase (sudo ./bin/start-hbase.sh) and the local region servers (sudo ./bin/local-regionservers.sh start 1 2). But when I try to execute a command from the HBase shell it gives the following error:
cis655stu@cis655stu-VirtualBox:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT$ ./bin/hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.99.0-SNAPSHOT, rUnknown, Sat Aug 9 08:59:57 EDT 2014
hbase(main):001:0> list
TABLE
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hbase/hbase-0.99.0-SNAPSHOT/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2015-01-19 13:33:07,179 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
ERROR: Connection refused
Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:
hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'
Below are my configuration files for HBase and Hadoop:
hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<!--for pseudo-distributed execution-->
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.master.wait.on.regionservers.mintostart</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/teaching/14f-cis655/tmp/zk-deploy</value>
</property>
<!--for enabling collection of traces-->
<property>
<name>hbase.trace.spanreceiver.classes</name>
<value>org.htrace.impl.LocalFileSpanReceiver</value>
</property>
<property>
<name>hbase.local-file-span-receiver.path</name>
<value>/teaching/14f-cis655/tmp/server-htrace.out</value>
</property>
</configuration>
hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/yarn/yarn_data/hdfs/datanode</value>
</property>
<property>
<name>hadoop.trace.spanreceiver.classes</name>
<value>org.htrace.impl.LocalFileSpanReceiver</value>
</property>
<property>
<name>hadoop.local-file-span-receiver.path</name>
<value>/teaching/14f-cis655/proj-dtracing/hadoop-2.6.0/logs/htrace.out</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>

Please check whether your HDFS is available from the shell:
$ hdfs dfs -ls /hbase
Also make sure that you have set all the environment variables in your hadoop-env.sh / hbase-env.sh files:
HADOOP_CONF_LIB_NATIVE_DIR="/hadoop/lib/native"
HADOOP_OPTS="-Djava.library.path=/hadoop/lib"
HADOOP_HOME=/hadoop
YARN_HOME=/hadoop
HBASE_HOME=/hbase
HADOOP_HDFS_HOME=/hadoop
HBASE_MANAGES_ZK=true
Do you run Hadoop and HBase as the same OS user? If you use separate users, please check whether the HBase user is allowed to access HDFS.
Make sure that you have a copy of the hdfs-site.xml and core-site.xml files (or symlinks to them) in the ${HBASE_HOME}/conf directory.
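For example (a minimal sketch; it assumes Hadoop's configuration lives under $HADOOP_HOME/etc/hadoop, so adjust the paths to your layout):
$ ln -s $HADOOP_HOME/etc/hadoop/core-site.xml $HBASE_HOME/conf/core-site.xml
$ ln -s $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/hdfs-site.xml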
Also, the fs.default.name option is deprecated (it still works, though); consider using fs.defaultFS instead.
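For example, in core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>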
Do you use ZooKeeper? You've specified the hbase.zookeeper.property.dataDir option, but hbase.zookeeper.quorum and other significant options are missing. Please read http://hbase.apache.org/book.html#zookeeper for more information.
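For a single-node setup where HBase manages ZooKeeper itself, a minimal sketch in hbase-site.xml would look something like this (localhost and 2181 are the usual single-node values; adjust as needed):
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>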
Please add the following options to hdfs-site.xml to make HBase work correctly (replace the $HBASE_USER variable with the system user that runs HBase):
<property>
<name>hadoop.proxyuser.$HBASE_USER.groups</name>
<value>*</value>
</property>
<property>
<name>hadoop.proxyuser.$HBASE_USER.hosts</name>
<value>*</value>
</property>
<property>
<name>dfs.support.append</name>
<value>true</value>
</property>
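After changing the configuration, restart HDFS and HBase so the new settings take effect, e.g. (assuming the standard scripts from $HADOOP_HOME/sbin and $HBASE_HOME/bin are on your PATH):
$ stop-hbase.sh
$ stop-dfs.sh
$ start-dfs.sh
$ start-hbase.sh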

Related

Hadoop localhost:9870 browser interface is not working

I need to do data analysis using Hadoop. Therefore I have installed Hadoop and configured it as below. But localhost:9870 is not working, even though I have formatted the namenode every time I worked with it. Some articles and answers on this forum mention that 9870 is the updated port replacing 50070. I am on Windows 10. I also tried the answers on this forum, but none of them worked. The Java home and Hadoop home paths are set, and the paths to Hadoop's bin and sbin are set up as well. Can anyone please tell me what I am doing wrong here?
I referred to this site for the installation and configuration.
https://medium.com/@pedro.a.hdez.a/hadoop-3-2-2-installation-guide-for-windows-10-454f5b5c22d3
core-site.xml
I have set up the Java path in this xml as well.
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9870</value>
</property>
hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>C:\hadoop-3.2.2\data\namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>C:\hadoop-3.2.2\data\datanode</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
yarn-site.xml
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.auxservices.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
If you look at the namenode logs, it very likely has an error saying something about a port already being in use.
The default fs.defaultFS port should be 9000 - https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html ; you shouldn't change this without good reason.
The NameNode web UI is not what fs.defaultFS points at. Its default port is 9870, and it is defined by dfs.namenode.http-address in hdfs-site.xml.
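A sketch of a corrected core-site.xml, keeping the filesystem RPC endpoint on 9000 and leaving the web UI on its default port:
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
With that in place, the NameNode web UI should come up at http://localhost:9870, or at whatever dfs.namenode.http-address you configure in hdfs-site.xml.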
"need to do data analysis"
You can do analysis on Windows without Hadoop using Spark, Hive, MapReduce, etc. directly and it'll have direct access to your machine without being limited by YARN container sizes.

I can't proceed hadoop example

I put the README.txt file into HDFS and ran the jar command, but HDFS doesn't proceed any more. This is the last terminal screen.
I think the "SASL encryption trust check" or "Unable to find 'resource-types.xml'" messages are the problem, so I tried to insert
export HADOOP_SECURE_DN_USER=
into hadoop-env.sh, and to insert
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
into mapred-site.xml,
but it still didn't work.
The Hadoop version is 3.1.3.
The Java version is Oracle Java 1.8.0_212.
(Screenshots of hdfs-site.xml, core-site.xml, mapred-site.xml and yarn-site.xml were attached.)
Please help me...
This is the 8088 page. Is it the YARN UI?

Unable to start name node while configuring Hadoop for Lustre

I'm trying to integrate Hadoop with Intel Lustre. I have added the hadoop-lustre-plugin-3.1.0 to the hadoop-2.7.3/lib/native folder. Lustre is mounted at /mnt/lustre. I'm getting the following error when I start Hadoop using start-all.sh:
[root@master hadoop]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
17/04/06 17:36:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
Starting namenodes on [ ]
...
core-site.xml :
<property>
<name>fs.defaultFS</name>
<value>lustre:///</value>
</property>
<property>
<name>fs.lustre.impl</name>
<value>org.apache.hadoop.fs.LustreFileSystem</value>
</property>
<property>
<name>fs.AbstractFileSystem.lustre.impl</name>
<value>org.apache.hadoop.fs.LustreFileSystemlustre</value>
</property>
<property>
<name>fs.lustrefs.mount</name>
<value>/mnt/lustre/hadoop</value>
<description>This is the directory on Lustre that acts as the root level for Hadoop services</description>
</property>
<property>
<name>lustre.stripe.count</name>
<value>1</value>
</property>
<property>
<name>lustre.stripe.size</name>
<value>4194304</value>
</property>
<property>
<name>fs.block.size</name>
<value>1073741824</value>
</property>
mapred-site.xml
<property>
<name>mapreduce.job.map.output.collector.class</name>
<value>org.apache.hadoop.mapred.SharedFsPlugins$MapOutputBuffer</value>
</property>
<property>
<name>mapreduce.job.reduce.shuffle.consumer.plugin.class</name>
<value>org.apache.hadoop.mapred.SharedFsPlugins$Shuffle</value>
</property>
hdfs-site.xml
<property>
<name>dfs.name.dir</name>
<value>/mnt/lustre/hadoop/hadoop_tmp/namenode</value>
<description>true</description>
</property>
Is there any configuration that I have missed in configuration files?
As fs.defaultFS holds the Lustre-specific URI, the startup script is unable to determine the host on which the NameNode has to be started.
Add this property in hdfs-site.xml,
<property>
<name>dfs.namenode.rpc-address</name>
<value>namenode_host:port</value>
</property>
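For example, on a single-node cluster this could look like the following (localhost:8020 is only an illustration; 8020 is the conventional NameNode RPC port, so use whatever host and port fit your setup):
<property>
<name>dfs.namenode.rpc-address</name>
<value>localhost:8020</value>
</property>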

Cannot create directory /home/hadoop/hadoopinfra/hdfs/namenode/current

I get the error
Cannot create directory /home/hadoop/hadoopinfra/hdfs/namenode/current
while trying to install Hadoop on my local Mac.
What could be the reason for this? Just for reference, I'm putting my xml files down below:
mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/namenode </value>
</property>
<property>
<name>dfs.data.dir</name>
<value>file:///home/hadoop/hadoopinfra/hdfs/datanode </value>
</property>
</configuration>
core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/Cellar/hadoop/hdfs/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
I think my problem lies in my hdfs-site.xml file, but I'm not sure how to pinpoint/change it.
I'm using this tutorial, and "hadoop" in the file path is replaced by my username.
Possible error: misconfiguration of the hdfs-site.xml file
This happened to me when I was following a setup tutorial. The contents of the hdfs-site.xml for me was
<configuration>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/hadoop/data/nameNode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/hadoop/data/dataNode</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
Only then did I realize that the text hadoop in the above file corresponds to the user name, which in my case had to be replaced with hduser. When both occurrences of hadoop were replaced with hduser, the hdfs namenode -format command worked fine.
I had this problem too and it was a permission problem. I just did:
sudo chmod 777 /home/hadoop/hadoopinfra/hdfs/namenode/
and it works!
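A less permissive alternative is to give ownership of the directory to the user that actually runs HDFS, for example (hadoop:hadoop here is an assumption; substitute the user and group you start the NameNode with):
sudo chown -R hadoop:hadoop /home/hadoop/hadoopinfra/hdfs/namenode/  # adjust user:group to your setup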
In the step where you need to verify the hadoop installation, instead of 'hdfs namenode -format' use '/usr/local/hadoop/bin/hdfs namenode -format'
Found this answer from:
hadoop java.io.IOException: while running namenode -format
If you are not using any distro other than native Hadoop, then add the current user to the hadoop group and retry formatting the namenode.
sudo usermod -a -G hadoop <current-username>
In case of using third-party Hadoop distros such as Cloudera, Hortonworks or MapR, switch to the root user, then switch to the hdfs user; formatting the namenode will then succeed.
$ sudo -i
$ su - hdfs
$ hdfs namenode -format
Try the Hadoop command with sudo
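For example:
$ sudo hdfs namenode -format
(Though fixing the directory ownership or permissions as described above is usually preferable to formatting as root.)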

HMaster is not starting: HBase 1.1.2 with Hadoop 2.7.1

I have hadoop 2.7.1 installed and it's running successfully.
I tried to install HBase 1.1.2 by referring to this link:
https://archanaschangale.wordpress.com/2013/08/31/installing-pseudo-distributed-hbase-on-ubuntu/
Configuration :
hbase-env.sh:
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-i386
export HBASE_REGIONSERVERS=/usr/lib/hbase/hbase-1.1.2/conf/regionservers
export HBASE_MANAGES_ZK=true
hbase-site.xml:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:9000/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hduser/hbase/zookeeper</value>
</property>
</configuration>
And in .bashrc:
export HBASE_HOME=/usr/lib/hbase/hbase-1.1.2
export PATH=$PATH:$HBASE_HOME/bin
But when I try to start HBase using bin/start-hbase.sh, I get these errors:
Error: Could not find or load main class org.apache.hadoop.hbase.util.HBaseConfTool
Error: Could not find or load main class org.apache.hadoop.hbase.zookeeper.ZKServerTool
starting master, logging to /usr/lib/hbase/hbase-0.94.8/logs/hbase-hduser-master-master-VirtualBox.out
log4j:ERROR Could not find value for key log4j.appender.RFA
log4j:ERROR Could not instantiate appender named "RFA".
log4j:ERROR Could not find value for key log4j.appender.RFAS
log4j:ERROR Could not instantiate appender named "RFAS".
log4j:WARN No appenders could be found for logger (org.apache.hadoop.hbase.util.VersionInfo).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
I changed the folder that stores hbase.zookeeper.property.dataDir.
Here is the link to the Stack Overflow answer:
Link
