When I run start-hbase.sh, I get the following error:
localhost: starting zookeeper, logging to /usr/lib/HBase/bin/../logs/hbase-hduser-zookeeper-nkhl.out
localhost: java.io.FileNotFoundException: /home/hduser/zookeeperpropertydataDir/myid (Permission denied)
localhost: at java.io.FileOutputStream.open0(Native Method)
localhost: at java.io.FileOutputStream.open(FileOutputStream.java:270)
localhost: at java.io.FileOutputStream.<init>(FileOutputStream.java:213)
localhost: at java.io.FileOutputStream.<init>(FileOutputStream.java:162)
localhost: at java.io.PrintWriter.<init>(PrintWriter.java:263)
localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:162)
localhost: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:70)
starting master, logging to /usr/lib/HBase/logs/hbase-hduser-master-nkhl.out
OpenJDK 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
regionserver running as process 25123. Stop it first.
After this, hbase shell does open, but when I run list it throws this error:
ERROR: Can't get master address from ZooKeeper; znode data == null
Here is some help for this command:
List all tables in hbase. Optional regular expression parameter could
be used to filter the output. Examples:
hbase> list
hbase> list 'abc.*'
hbase> list 'ns:abc.*'
hbase> list 'ns:.*'
This is the jps output:
25123 HRegionServer
23975 SecondaryNameNode
23767 DataNode
24168 ResourceManager
26456 HMaster
26665 Jps
24297 NodeManager
23613 NameNode
ZooKeeper starts fine:
ZooKeeper JMX enabled by default
Using config: /usr/lib/zookeeper/conf/zoo.cfg
Starting zookeeper ... STARTED
My hbase-site.xml configuration:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54433/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hduser/zookeeperpropertydataDir</value>
</property>
<property>
<name>hbase.master.port</name>
<value>60010</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description> The port at which the clients will connect.</description>
</property>
</configuration>
This is my hbase-env.sh configuration:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre/
export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
export HBASE_MANAGES_ZK=true
Any help with this would be appreciated.
The HBase ZooKeeper daemon HQuorumPeer is not running. One possible reason is that the directory below does not exist or is not accessible, as the logs show:
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hduser/zookeeperpropertydataDir</value>
</property>
Make sure the file that ZooKeeper uses has been created and has the right permissions.
Using chmod to grant access to all users fixed my problem:
chmod -R 777 path/file_name
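If you'd rather not open the directory to everyone, a narrower fix is to create the dataDir up front and hand it to the user that runs HBase (a sketch, assuming HBase runs as hduser, as the logs above suggest):
# Create the directory named in hbase.zookeeper.property.dataDir
mkdir -p /home/hduser/zookeeperpropertydataDir
# Make it owned by the HBase user instead of world-writable
sudo chown -R hduser:hduser /home/hduser/zookeeperpropertydataDir
chmod 755 /home/hduser/zookeeperpropertydataDir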
I am unable to connect to HDFS on port 9000; I keep getting this error:
localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused
My hdfs-site.xml file is this:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hdfs/datanode</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-bind-host</name>
<value>0.0.0.0</value>
</property>
</configuration>
and my core-site.xml file is this:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
I have restarted the cluster multiple times, but I keep getting connection errors. This is what my /etc/hosts file looks like:
127.0.0.1 localhost
What am I missing?
Why are you using port 9000 in your config? fs.defaultFS should contain something like: hdfs://nameofcluster
Is this a single node instance? Sandbox? Are you running the command hdfs dfs -ls /?
I would first check:
Remove port from fs.default.name
iptables or Firewalls
hadoop.proxyuser.hdfs.hosts
hadoop.proxyuser.hdfs.groups
Ranger
Logs are exceeding 80% of disk
Make sure your /etc/hosts file contains the FQDN and your machine's public IP.
Get the IP with the ip a command and set it like: 192.166.6.6 abc.xxx.com
Delete the fs.default.name property from hdfs-site.xml
Configure a single-node cluster
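Before reworking the config, it can help to confirm what is actually listening (a sketch; the port follows the core-site.xml above):
# Is anything listening on the NameNode RPC port?
ss -tlnp | grep 9000      # or: netstat -tlnp | grep 9000
# Can the client reach the filesystem at all?
hdfs dfs -ls /
# Which Hadoop daemons are running?
jps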
I've created an HBase cluster in a Hadoop HA cluster. My region servers are failing to start with the following exception in the logs:
2017-09-12 11:41:32,116 ERROR [regionserver/my.hostname.com/10.10.30.28:16020] regionserver.HRegionServer: Failed init
java.io.IOException: Failed on local exception: java.net.SocketException: Invalid argument; Host Details : local host is: "my.hostname.com/10.10.30.28"; destination host is: "0.0.0.1":8020;
I'm pretty sure the problem is caused by the Hadoop HA configuration; I think HBase doesn't understand the nameservice and treats it as an IP address.
Excerpt from core-site.xml:
<property>
<name>fs.defaultFS</name>
<value>hdfs://001</value>
<description>NameNode URI</description>
</property>
Excerpt from hdfs-site.xml:
<property>
<name>dfs.nameservices</name>
<value>001</value>
</property>
My hbase-site.xml:
<configuration>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.rootdir</name>
<value>hdfs://001/hbase</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
</configuration>
Help?
It was a silly mistake: HBase was missing the path to the Hadoop configuration files. I simply added HADOOP_CONF_DIR to hbase-env.sh.
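For reference, the line would look like this in hbase-env.sh (the exact directory is an assumption; point it at wherever your core-site.xml and hdfs-site.xml live):
# Let HBase pick up the Hadoop client configs so it can resolve the
# HA nameservice "001" instead of treating it as a host address
export HADOOP_CONF_DIR=/etc/hadoop/conf   # adjust to your installation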
I tried to install Hadoop following this video:
https://www.youtube.com/watch?v=CtOhsZ0Sb1E&t=126s
When I run the last command,
start-all.sh
I get this message:
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: namenode running as process 6283. Stop it first.
localhost: starting datanode, logging to /home/myname/hadoop-2.7.3/logs/hadoop-myname-datanode-MYNAME.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: secondarynamenode running as process 6379. Stop it first.
starting yarn daemons
starting resourcemanager, logging to /home/myname/hadoop-2.7.3/logs/yarn-myname-resourcemanager-MYNAME.out
Error: Could not find or load main class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager
localhost: starting nodemanager, logging to /home/myname/hadoop-2.7.3/logs/yarn-myname-nodemanager-MYNAME.out
localhost: Error: Could not find or load main class org.apache.hadoop.yarn.server.nodemanager.NodeManager
My .bashrc file:
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/home/myname/hadoop-2.7.3
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib/native"
My hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Default block replication.
The actual number of replications can be specified when the file is created.
The default is used if replication is not specified in create time.
</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/myname/hadoop-2.7.3/etc/hadoop/hadoop_store/hdfs/datanode</value>
</property>
</configuration>
My core-site.xml:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/myname/hadoop-2.7.3/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:54310</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>
</configuration>
My mapred-site.xml:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>localhost:54311</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>
</configuration>
I have tried a lot of things, but the error is still there. Any ideas?
Add the following line to your .bashrc file:
export HADOOP_PREFIX=/path_to_hadoop_location
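Then reload your shell config and confirm the variable is visible (a minimal check; the path matches the .bashrc quoted in the question):
source ~/.bashrc
echo $HADOOP_PREFIX   # should print /home/myname/hadoop-2.7.3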
You have to include a yarn-site.xml file when configuring Hadoop:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
mapred-site.xml: also add this:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
I think you can resolve this issue by adding these properties.
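After adding both files, restart the YARN daemons and confirm that the ResourceManager and NodeManager appear (a sketch using the standard Hadoop 2.x scripts):
stop-yarn.sh
start-yarn.sh
jps   # ResourceManager and NodeManager should now be listed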
I created an HA cluster with one DataNode, an active NameNode, a standby NameNode, and three JournalNodes.
When I put a file to HDFS, I get this error:
put: Operation category READ is not supported in state standby
The put command:
./hadoop fs -put golnaz.txt /user/input
NameNode log:
at org.apache.hadoop.ipc.Client.call(Client.java:1476)
at org.apache.hadoop.ipc.Client.call(Client.java:1407)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229)
at com.sun.proxy.$Proxy15.rollEditLog(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:148)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:273)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.access$600(EditLogTailer.java:61)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:315)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$200(EditLogTailer.java:284)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:301)
at org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:415)
at org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:297)
2016-09-15 02:07:23,961 INFO org.apache.hadoop.ipc.Server: IPC Server handler 9 on 9000, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 103.41.177.161:45797 Call#11403 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category JOURNAL is not supported in state standby
2016-09-15 02:07:30,547 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol.rollEditLog from 103.41.177.160:39200 Call#11404 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category JOURNAL is not supported in state standby
Error in the SecondaryNameNode log:
ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category JOURNAL is not supported in state standby
and this is my hdfs-site.xml:
<configuration>
<property>
<name>dfs.data.dir</name>
<value>/root/hadoopstorage/data</value>
<final>true</final>
</property>
<property>
<name>dfs.name.dir</name>
<value>/root/hadoopstorage/name</value>
<final>true</final>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>ha-cluster</value>
</property>
<property>
<name>dfs.ha.namenodes.ha-cluster</name>
<value>NameNode,Standby</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.NameNode</name>
<value>103.41.177.161:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.ha-cluster.Standby</name>
<value>103.41.177.162:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.NameNode</name>
<value>103.41.177.161:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.ha-cluster.Standby</name>
<value>103.41.177.162:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://103.41.177.161:8485;103.41.177.162:8485;103.41.177.160:8485/ha-cluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.ha-cluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>3000</value>
</property>
</configuration>
You didn't mention anything about automatic failover (ZKFC and ZooKeeper). Without it, HDFS won't fail over automatically.
You may try the following: check whether both of your NameNodes are in the standby state, by looking at the NameNode web consoles (or using the getServiceState administrative command). If so, manually trigger the transition with the -transitionToActive command and tail the NameNode logs at the same time. If the transition fails, update your post with the NameNode logs.
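With the NameNode IDs from the hdfs-site.xml above (NameNode and Standby in the ha-cluster nameservice), the checks would look roughly like this; treat it as a sketch and adjust the log path to your installation:
# Check each NameNode's HA state; exactly one should report "active"
hdfs haadmin -getServiceState NameNode
hdfs haadmin -getServiceState Standby
# If both report "standby", promote one manually and watch its log
hdfs haadmin -transitionToActive NameNode
tail -f $HADOOP_HOME/logs/hadoop-*-namenode-*.log   # log location is an assumption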
I've integrated my Hadoop 2 and HBase 0.98 with Phoenix, and typing sqlline.py localhost starts the Phoenix shell. But when I try to run the Apache Phoenix example with this command:
psql.py /usr/local/phoenix/examples/WEB_STAT.sql /usr/local/phoenix/examples/WEB_STAT.csv /usr/local/phoenix/examples/WEB_STAT_QUERIES.sql
I get this error:
ERROR client.HConnectionManager$HConnectionImplementation: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
I use Hadoop 2.6 in single-node mode and HBase 0.98 in pseudo-distributed mode. I didn't explicitly install ZooKeeper; is it required to install ZooKeeper explicitly?
My HBASE_HOME/conf/hbase-site.xml file contains:
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://localhost:54310/hbase</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>localhost</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/home/hduser/hbase/zookeeper</value>
</property>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase</value>
</property>
<property>
<name>hbase.master</name>
<value>hadoop-master:60000</value>
</property>
</configuration>
and my running Java processes are:
7415 DataNode
7262 NameNode
9119 Jps
7605 SecondaryNameNode
7893 NodeManager
8704 HRegionServer
8544 HMaster
8475 HQuorumPeer
7763 ResourceManager
You should simply add the address of your server (here, localhost) to your command. Notice that in the command you've already run, sqlline.py localhost, you did give the server address.
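In other words, put the ZooKeeper quorum address first, just as with sqlline.py (a sketch using the paths from your question):
psql.py localhost /usr/local/phoenix/examples/WEB_STAT.sql /usr/local/phoenix/examples/WEB_STAT.csv /usr/local/phoenix/examples/WEB_STAT_QUERIES.sql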
Are you using the HDP distribution? IIRC it uses /hbase-unsecure for un-Kerberized clusters. I don't remember how that interacts with your zookeeper.znode.parent setting of /hbase.
Start the ZooKeeper CLI:
zkCli.sh (or perhaps some variant such as zookeeper-shell)
Query the existing root nodes:
ls /
The HBase root node is probably named hbase-unsecure.
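A minimal session to confirm the znode name might look like this (assuming ZooKeeper listens on localhost:2181, as in the hbase-site.xml above):
zkCli.sh -server localhost:2181
# inside the CLI prompt:
ls /
# if the listing shows hbase-unsecure instead of hbase, point
# zookeeper.znode.parent at /hbase-unsecure in the hbase-site.xml your Phoenix client uses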