I have installed Hadoop 2.2.0 and HBase 0.94.18 on Ubuntu 12.04. When I try to run the command
create 't1','c1'
in the HBase shell, I get the following error:
ERROR client.HConnectionManager$HConnectionImplementation:
Check the value configured in 'zookeeper.znode.parent'.
There could be a mismatch with the one configured in the master.
What's wrong?
A few things, in no particular order:
To start with, let the error display continue. It will retry 7 times and then exit. Before it exits, it will show the name of the exception that occurred; look it up. It probably says MasterNotRunningException.
Verify that the master is actually running with sudo jps. You should see an entry for HMaster. If not, start the hbase-master service (see the commands sketched after this answer).
Assuming you are going for pseudo-distributed mode, you may also want to check /etc/hosts to make sure the entries point to 127.0.0.1 and not 127.0.1.1.
For Cloudera installs, here is a guide on how to set up HBase in pseudo-distributed mode. It also includes instructions for installing hbase-master and zookeeper correctly.
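As a concrete sketch of the jps and /etc/hosts checks above (this assumes a package-style install, like Cloudera's, where the master runs as an hbase-master service; service names and log paths may differ for a tarball install):
sudo jps                                      # should list HMaster (and HQuorumPeer in pseudo-distributed mode)
cat /etc/hosts                                # your hostname should map to 127.0.0.1, not 127.0.1.1
sudo service hbase-master restart             # start/restart the master if HMaster is missing
tail -f /var/log/hbase/hbase-*-master-*.log   # watch the master log for the real exception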
You might also check zookeeper.znode.parent in hbase-site.xml to make sure it is right; its default value is /hbase.
Mine was set by default to /hbase-unsecure (hbase-site.xml)
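One way to see which parent znode the running master actually registered under (and therefore what the client's zookeeper.znode.parent must be) is to list the root znodes with the ZooKeeper CLI. This is just a sketch and assumes ZooKeeper is listening on localhost:2181:
# Lists the top-level znodes; expect to see /hbase (the default) or /hbase-unsecure
zkCli.sh -server localhost:2181 ls /
Whichever parent appears there is the value that hbase-site.xml on the client side has to match.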
We are trying to install Phoenix 4.4.0 on HBase 1.0.0-cdh5.4.4 (a four-node CDH 5.4.4 cluster) following this installation document: Phoenix installation
Based on that, we copied our phoenix-server-4.4.0-HBase-1.0.jar to the HBase lib directory on the master and on each region server, that is, into the /opt/cloudera/parcels/CDH-5.4.4-1.cdh5.4.4.p0.4/lib/hbase/lib folder on the master and the three region servers.
After that we restarted the HBase service via Cloudera Manager.
Everything seems to be OK, but when we try to access the Phoenix shell via the ./sqlline.py localhost command, we get a ZooKeeper error like this:
15/09/09 14:20:51 WARN client.ZooKeeperRegistry: Can't retrieve clusterId from Zookeeper
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/hbaseid
So we are not sure that the installation was done properly. Is any further configuration necessary?
We are not even sure whether we are using the sqlline command properly.
Any help will be appreciated.
After reinstalling the four-node cluster on AWS, Phoenix is now working properly.
It's a pity that we don't know exactly what was really happening, but we think that after several changes to our config we broke something that made it impossible for Phoenix to work.
One thing to take into consideration is that the sqlline command has to be executed with an IP that is in the ZooKeeper quorum, and this is something we were doing wrong, since we were trying to run it from the namenode, which was not in the ZooKeeper quorum. Once we ran sqlline.py from a datanode, everything worked fine.
By the way, the installation guide that we finally followed is Phoenix Installation.
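As a hedged sketch of what that looks like in practice: rather than pointing sqlline.py at localhost, pass it a host that is actually part of the ZooKeeper quorum (the hostnames and the Phoenix path below are placeholders for your own setup):
cd /path/to/phoenix-4.4.0/bin                 # wherever the Phoenix binaries were unpacked
./sqlline.py zk1.example.com,zk2.example.com,zk3.example.com:2181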
I have installed Hadoop and Hive on CentOS 5.8. Hadoop is working fine, but I am not able to start HiveServer2. Running the command $HIVE_HOME/bin/hiveserver2 gives no output. I have also checked, and no process is listening on port 10000, which is the default port. What could be the possible cause?
The problem was that my namenode had gone into safe mode. Turning off safe mode fixed the problem.
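A short sketch of how to check and clear that state, assuming the hdfs client is on the PATH and you are running as the HDFS superuser:
hdfs dfsadmin -safemode get        # reports whether safe mode is ON or OFF
hdfs dfsadmin -safemode leave      # force the namenode out of safe mode if needed
$HIVE_HOME/bin/hiveserver2 &       # then start HiveServer2 again
netstat -nltp | grep 10000         # and confirm it is now listening on the default port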
I followed the tutorial http://hadoop.apache.org/docs/r2.4.1/hadoop-project-dist/hadoop-common/SingleCluster.html and installed Hadoop 2.4.1 as a pseudo-distributed cluster. I created an Ubuntu VM using Oracle VM and installed Hadoop as described in the link. It was set up fine and able to run the examples. However, the job tracker URL is not working: :50030 gives page not found. I also tried netstat on the server and there is no process listening on port 50030. Do I need to start any other service? What are the possible reasons?
You need to execute this:
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
Otherwise the JobTracker won't start.
(In my case, $HADOOP_HOME is /usr/local/hadoop.)
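To verify the daemon actually came up, something like the following can help (a sketch only; note that in Hadoop 2.x the history server's web UI defaults to port 19888 and the YARN ResourceManager UI to 8088, rather than the old JobTracker port 50030):
jps | grep JobHistoryServer        # the history server process should be listed
netstat -nltp | grep 19888         # its web UI listens on 19888 by default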
Check the value of mapred.job.tracker.http.address in mapred-site.xml.
If the port is different, use that one.
Also check whether the JobTracker is running, and look at the JobTracker logs.
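A quick way to do both checks from the shell (a sketch; adjust the config path to wherever your mapred-site.xml lives, e.g. $HADOOP_HOME/etc/hadoop in Hadoop 2.x):
grep -A1 mapred.job.tracker.http.address $HADOOP_HOME/etc/hadoop/mapred-site.xml   # configured UI address, if any
netstat -nltp | grep 50030                                                          # is anything listening on the default port?
jps                                                                                 # is the daemon running at all?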
You need to open the following URL:
http://localhost:50030/
That is the JobTracker web UI.
I am trying to learn Hadoop and I've reached the HBase section of Hadoop: The Definitive Guide.
I tried to start HBase and got an error. Could someone give me a step-by-step guide?
opel@ubuntu:~$ zkServer.sh start
JMX enabled by default
Using config: /home/opel/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
opel@ubuntu:~$ start-hbase.sh
starting master, logging to /home/opel/hbase-0.94.20/logs/hbase-opel-master-ubuntu.out
opel@ubuntu:~$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.20, r09c60d770f2869ca315910ba0f9a5ee9797b1edc, Fri May 23 22:00:41 PDT 2014
hbase(main):001:0> status
14/06/02 22:40:44 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:45 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:47 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:49 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:51 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:55 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
14/06/02 22:40:59 ERROR client.HConnectionManager$HConnectionImplementation: Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.
ERROR: org.apache.hadoop.hbase.MasterNotRunningException: Retried 7 times
Here is some help for this command:
Show cluster status. Can be 'summary', 'simple', or 'detailed'. The
default is 'summary'. Examples:
hbase> status
hbase> status 'simple'
hbase> status 'summary'
hbase> status 'detailed'
Is there anything wrong?
I had the same problem. For me, the solution was to add the following property to hbase-site.xml (which in my case is found under the /usr/lib/hbase/conf directory):
<configuration>
<property>
<name>zookeeper.znode.parent</name>
<value>/hbase-unsecure</value>
</property>
</configuration>
But this is only for standalone mode. I still have no idea how to solve this problem when using an external ZooKeeper.
There won't be any problem with the configuration if you are using the Cloudera Manager VM.
The problem is that HMaster is not up. To resolve it, go to Cloudera Manager and restart the HBase service; that will resolve the issue.
When I had this problem, I was able to fix it by not using ZooKeeper.
If you're running HBase in standalone mode, then you don't need ZooKeeper. I was able to skip the ZooKeeper part by setting the hbase.cluster.distributed property to false:
<property>
<name>hbase.cluster.distributed</name>
<value>false</value>
</property>
Now I am able to play with HBase without ZooKeeper.
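A small sketch of applying and verifying that change, assuming start-hbase.sh and stop-hbase.sh are on the PATH:
stop-hbase.sh                      # restart HBase after editing hbase-site.xml
start-hbase.sh
jps                                # in standalone mode a single HMaster JVM hosts the master, region server and ZK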
In the Cloudera Manager page, go to Services -> hbase1 and start the service; the problem will be resolved. There is no need to set the /hbase-unsecure property.
This problem took me a whole night, and this is how I resolved it:
After starting Hadoop, go to http://localhost:50070/dfshealth.html#tab-datanode
You will see a list of the available datanodes in a table; you just need to add it to your hbase-site.xml, which for me looked like this:
<configuration>
<property>
<name>zookeeper.znode.parent</name>
<value>127.0.0.1:50010</value>
</property>
</configuration>
The best thing is to check your HBase logs; they will give you a clear idea of the error. In my case I was running Kafka + ZooKeeper and HBase on the same server, so whenever I tried to run hbase shell I kept getting the same error on the console. When I checked the logs I found
port is already in use
so I just changed the value of
hbase.zookeeper.property.clientPort
in the hbase-site.xml file and everything started running.
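A hedged sketch of how to confirm the default client port really is taken before picking a new one (2181 is ZooKeeper's default clientPort; in the case above, Kafka's bundled ZooKeeper already owned it):
sudo netstat -nltp | grep 2181     # shows the process already bound to ZooKeeper's default port
sudo lsof -i :2181                 # alternative way to identify the owner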
Open zookeeper/bin and run the command ./zkServer.sh start
After it executes successfully, run ./zkCli.sh
Then execute the command get /hbase-unsecure
If it returns null, then run create -s /testmaster "127.0.0.1:2222"
Also, edit hbase-site.xml by adding
<property>
<name>zookeeper.znode.parent</name>
<value>/testmaster</value>
</property>
PS: keep the value of the hbase.cluster.distributed property set to false.
Hope this solves your error.
I am new to Spark. I installed Spark using the parcels available in Cloudera Manager.
I have configured the files as shown in the link below from Cloudera Enterprise:
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/4.8.1/Cloudera-Manager-Installation-Guide/cmig_spark_installation_standalone.html
After this setup, I started all the nodes by running /opt/cloudera/parcels/SPARK/lib/spark/sbin/start-all.sh, but I couldn't get the worker nodes to run; I got the error below.
[root@localhost sbin]# sh start-all.sh
org.apache.spark.deploy.master.Master running as process 32405. Stop it first.
root@localhost.localdomain's password:
localhost.localdomain: starting org.apache.spark.deploy.worker.Worker, logging to /var/log/spark/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
localhost.localdomain: failed to launch org.apache.spark.deploy.worker.Worker:
localhost.localdomain: at java.lang.ClassLoader.loadClass(libgcj.so.10)
localhost.localdomain: at gnu.java.lang.MainThread.run(libgcj.so.10)
localhost.localdomain: full log in /var/log/spark/spark-root-org.apache.spark.deploy.worker.Worker-1-localhost.localdomain.out
localhost.localdomain:starting org.apac
When I run the jps command, I get:
23367 Jps
28053 QuorumPeerMain
28218 SecondaryNameNode
32405 Master
28148 DataNode
7852 Main
28159 NameNode
I couldn't run the worker node properly. Actually, I intended to install a standalone Spark setup where the master and worker run on a single machine. In the slaves file of the Spark directory, I gave the address as "localhost.localdomain", which is my host name. I am not familiar with this settings file. Could anyone please help me out with this installation process? Actually, I couldn't run the worker nodes, but I can start the master node.
Thanks & Regards,
bips
Please notice the error info below:
localhost.localdomain: at java.lang.ClassLoader.loadClass(libgcj.so.10)
I hit the same error when I installed and started Spark master/workers on CentOS 6.2 x86_64. After making sure that libgcj.x86_64 and libgcj.i686 were installed on my server, I finally solved it. Below is my solution; I hope it helps you.
It seems your JAVA_HOME environment variable is not set correctly.
Maybe your JAVA_HOME points to the system's embedded Java, e.g. java version "1.5.0" (the GCJ runtime, which is where the libgcj.so.10 frames in the error come from).
Spark needs Java >= 1.6.0. If you use Java 1.5.0 to start Spark, you will see this error.
Export JAVA_HOME="your java home path", then start Spark again.
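A minimal sketch of that fix; the JDK path below is only an example, so adjust it to your actual Java install and Spark parcel location:
java -version                                   # GCJ (libgcj) reports itself as java 1.5.0
export JAVA_HOME=/usr/java/jdk1.7.0             # point at a real JDK, 1.6 or newer
export PATH=$JAVA_HOME/bin:$PATH
# You can also set JAVA_HOME in conf/spark-env.sh so the daemons always pick it up
/opt/cloudera/parcels/SPARK/lib/spark/sbin/stop-all.sh
/opt/cloudera/parcels/SPARK/lib/spark/sbin/start-all.sh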