Apache Phoenix installation on Cloudera - Hadoop

I am facing a problem installing Apache Phoenix on Cloudera. I referred to http://crazyadmins.com/install-and-configure-apache-phoenix-on-cloudera-hadoop-cdh5/ and many other guides with the same approach.
My Cloudera version is 5.5. I'm getting errors on running the command:
./psql.py <hostname/localhost/IP>:2181 ../examples/WEB_STAT.sql ../examples/WEB_STAT.csv ../examples/WEB_STAT_QUERIES.sql
org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=35, exceptions:Fri Jan 05 16:09:08 IST 2018,RpcRetryingCaller{globalStartTime=1515148748159, pause=100, retries=35}.
ERROR org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation: Can't get connection to ZooKeeper: KeeperErrorCode = ConnectionLoss for /hbase

Can you connect to HBase? From the error message it seems that it is not up.
As for sqlline, you can run it from anywhere as long as you point it at the ZooKeeper quorum.
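For instance, a quick way to confirm whether ZooKeeper and HBase are actually reachable (a rough sketch, assuming the default client port 2181 and that the HBase tools are on the PATH):
echo ruok | nc <hostname> 2181    # a healthy ZooKeeper answers "imok"
hbase shell                       # then run: status 'simple'
If the ruok probe gets no reply, start (or fix) ZooKeeper/HBase before retrying psql.py.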

Related

HBase Master turning off after start. Setup for HBase on Hadoop for a single-node cluster DB on my local machine

I have installed Hadoop (2.9.1) and HBase (2.1) on my Linux machine with the appropriate configurations.
1) I start all the Hadoop components. Using jps, I am able to see all the components that are running. This step works fine.
2) When I start HBase, all the HBase components start as well. Using the jps command, I am able to see that the required components are running. However, within 10 seconds, HMaster turns off.
These are the contents of the HBase master log file:
The errors outlined below are pretty much the same for both the master and the regionserver log files.
2018-08-17 17:13:14,255 WARN [main-SendThread(localhost:2181)] zookeeper.ClientCnxn: Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1141)
I understand that there is some port/connection problem, but I don't quite know what changes to make or where to make them.
Thank you in advance for your guidance.
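A couple of quick checks narrow this down (a sketch, assuming a single-machine setup where HBase manages its own ZooKeeper):
netstat -tlnp | grep 2181                 # is anything listening on the ZooKeeper client port?
grep HBASE_MANAGES_ZK conf/hbase-env.sh   # run from the HBase install directory; should be true (or left at the default) for a local setup
If nothing is listening on 2181, the master has no quorum to register with and aborts shortly after starting.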

Why does Hive return FAILED: SemanticException...Unable To Instantiate

I have installed Hive, added it to PATH and am able to open it using the hive command in Terminal.
However, when I attempt to run a basic command such as
SHOW TABLES;
I am presented with the error:
FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient
The instructions I am following do not suggest that anything has to be instantiated.
For reference, I am using the book Hadoop: The Definitive Guide (4th Edition) and running it locally on my machine.
When running JPS the following services are running:
2528 DataNode
7232 RunJar
2441 NameNode
7401 Jps
2634 SecondaryNameNode
282
2842 NodeManager
2751 ResourceManager
I fixed it by removing the Derby metastore database files:
rm -rf $HIVE_HOME/bin/metastore_db
and then re-initializing the schema:
$HIVE_HOME/bin/schematool -initSchema -dbType derby
I was able to resolve this problem by initializing the schema. I am surprised it is not mentioned anywhere.
To initialize the schema:
Navigate to your Hive installation folder
[install folder]/bin/schematool -initSchema -dbType derby
Next you should receive some messages confirming the initialization:
Metastore Connection Driver : org.apache.derby.jdbc.EmbeddedDriver
Metastore connection User: APP
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.derby.sql
Initialization script completed
schemaTool completed
Start Hive.
Run a basic command such as SHOW TABLES; to confirm Hive is functioning.
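For example, a one-liner that exercises the metastore end to end (smoke_test is just a throwaway table name used here for illustration):
hive -e "CREATE TABLE IF NOT EXISTS smoke_test (id INT); SHOW TABLES; DROP TABLE smoke_test;"
If the schema was initialized correctly, this lists the tables instead of throwing the SemanticException again.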

HBase shell gives NativeException: java.lang.ExceptionInInitializerError

I have configured HBase on my local machine; below are my jps tasks:
$ jps
17389 HQuorumPeer
16554 TaskTracker
17894 Jps
16362 JobTracker
15786 NameNode
16078 DataNode
16267 SecondaryNameNode
But when I run
$ hbase shell
it gives me the following error:
NativeException: java.lang.ExceptionInInitializerError:
java.lang.reflect.InvocationTargetException
initialize at /home/rahul/hbase-1.2.4/lib/ruby/hbase/hbase.rb:42
(root) at /home/rahul/hbase-1.2.4/bin/hirb.rb:131
Can anyone help me solve this error? I have wasted several hours trying to fix it. Help is really appreciated.
Unfortunately this error is very generic and can occur for a number of reasons. I recently experienced this using the hbase command on version HBase 1.2.0-cdh5.16.1 when the wrong URI was configured in core-site.xml and hbase-site.xml (fs.defaultFS and hbase.rootdir respectively). The only way I diagnosed this was to try connecting programmatically via the Java API (e.g. by following https://www.baeldung.com/hbase), which gave me the full stack trace of the exception that caused the NativeException.
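A quick way to compare the two settings mentioned above (a sketch; the /etc/... paths assume a package/parcel CDH install, adjust for a tarball layout):
grep -A1 fs.defaultFS /etc/hadoop/conf/core-site.xml
grep -A1 hbase.rootdir /etc/hbase/conf/hbase-site.xml
The hbase.rootdir URI (scheme, host, and port) should point into the filesystem named by fs.defaultFS; a mismatch reproduces exactly this kind of opaque shell failure.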

Cloudera Manager Health Issue: NameNode Connectivity, Web Server Status

Below is a snapshot of the health issues reported on CM. The datanodes in the list keep changing. Some errors from the datanode logs:
3:59:31.859 PM ERROR org.apache.hadoop.hdfs.server.datanode.DataNode
datanode05.hadoop.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.248.200.113:45252 dest: /10.248.200.105:50010
java.io.IOException: Premature EOF from inputStream
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
at java.lang.Thread.run(Thread.java:662)
5:46:03.606 PM INFO org.apache.hadoop.hdfs.server.datanode.DataNode
Exception for BP-846315089-10.248.200.4-1369774276029:blk_-780307518048042460_200374997
java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.248.200.105:50010 remote=/10.248.200.122:43572]
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:165)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:156)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:129)
at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
at java.io.BufferedInputStream.read1(BufferedInputStream.java:258)
at java.io.BufferedInputStream.read(BufferedInputStream.java:317)
at java.io.DataInputStream.read(DataInputStream.java:132)
at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:192)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:414)
at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:635)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:564)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:103)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:67)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:221)
at java.lang.Thread.run(Thread.java:662)
Snapshot:
I am unable to figure out the root cause of the issue. I can manually connect from one datanode to another without issues, so I don't believe it is a network issue. Also, the missing block and under-replicated block counts keep changing (up and down) as well.
Cloudera Manager : Cloudera Standard 4.8.1
CDH 4.7
Any help in resolving this issue is appreciated.
Update: Jan 01, 2016
For the datanodes listed as bad, when I look at the datanode logs, I see this message a lot...
11:58:30.066 AM INFO org.apache.hadoop.hdfs.server.datanode.DataNode
Receiving BP-846315089-10.248.200.4-1369774276029:blk_-706861374092956879_36606459 src: /10.248.200.123:56795 dest: /10.248.200.112:50010
Why is this datanode receiving a lot of blocks from other datanodes around the same time? It seems that because of this activity the datanode cannot respond to the namenode requests in time and thus times out. All the bad datanodes show the same pattern.
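One way to see whether the cluster is genuinely re-replicating or just flapping in the report (a sketch, assuming shell access to a host with the HDFS client configured):
hdfs fsck / | tail -n 20        # the summary includes missing and under-replicated block counts
hdfs dfsadmin -report           # per-datanode state as the namenode currently sees it
Comparing these over a few minutes shows whether the block movement matches under-replication the namenode is actively trying to repair.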
A similar question was answered here:
hdfs data node disconnected from namenode.
Please check your firewall. Use
telnet ipaddress port
to check the connectivity.
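For example, from one of the nodes flagged as bad, using an address and port taken from the log lines above (8020 as the namenode RPC port is an assumption, adjust to your configuration):
telnet 10.248.200.105 50010     # datanode data-transfer port
telnet <namenode-host> 8020     # namenode RPC port
If either connection hangs or is refused only from some hosts, a firewall or routing rule is the likely culprit.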

HBase 0.20.6 Can not start master exception

I'm using HBase 0.20.6 with Hadoop 0.21.0 on Ubuntu 10.04 LTS and I get a "Can not start master" error. (The error, taken from the hbase-root-master-ubuntu.log file, is attached at the end of the post.)
Does HBase 0.20.6 work with Hadoop 0.21.0? And if it does NOT, is there a workaround?
What is the source of the problem?
Thanks for your time and consideration.
The log:
java.io.IOException: Call to localhost/127.0.0.1:54310 failed on local exception: java.io.EOFException
at org.apache.hadoop.ipc.Client.wrapException(Client.java:775)
at org.apache.hadoop.ipc.Client.call(Client.java:743)
at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
at $Proxy0.getProtocolVersion(Unknown Source)
at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:359)
at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:106)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:207)
at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:170)
at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:82)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1378)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:66)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1390)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:196)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:95)
at org.apache.hadoop.hbase.master.HMaster.<init>(HMaster.java:195)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:94)
at org.apache.hadoop.hbase.LocalHBaseCluster.<init>(LocalHBaseCluster.java:78)
at org.apache.hadoop.hbase.master.HMaster.doMain(HMaster.java:1229)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1274)
Caused by: java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
Fri Dec 24 14:02:12 EET 2010 Starting master on ubuntu
ulimit -n 1024
2010-12-24 14:02:13,267 INFO org.apache.hadoop.hbase.master.HMaster: vmName=Java HotSpot(TM) Client VM, vmVendor=Sun Microsystems Inc., vmVersion=17.1-b03
2010-12-24 14:02:13,268 INFO org.apache.hadoop.hbase.master.HMaster: vmInputArguments=[-Xmx1000m, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -XX:+HeapDumpOnOutOfMemoryError, -XX:+UseConcMarkSweepGC, -XX:+CMSIncrementalMode, -Dhbase.log.dir=/usr/lib/hbase/bin/../logs, -Dhbase.log.file=hbase-root-master-ubuntu.log, -Dhbase.home.dir=/usr/lib/hbase/bin/.., -Dhbase.id.str=root, -Dhbase.root.logger=INFO,DRFA, -Djava.library.path=/usr/lib/hbase/bin/../lib/native/Linux-i386-32]
2010-12-24 14:02:13,353 INFO org.apache.hadoop.hbase.master.HMaster: My address is ubuntu.ubuntu-domain:60000
2010-12-24 14:02:13,593 ERROR org.apache.hadoop.hbase.master.HMaster: Can not start master
There has been a discussion about this on the HBase users mailing list recently; I would suggest reading it:
http://mail-archives.apache.org/mod_mbox/hbase-user/201012.mbox/%3CAANLkTimA7UQZAiG0810mtHtGk30x8ejGs+n5+CF8GxQ1#mail.gmail.com%3E
As a summary, I would quote what Ryan Rawson of StumbleUpon mentioned on the list:
HBase 0.20.6 is likely to run well on hadoop 21. We have many patches
that help bolster durability on top of branch-20-append, and also some
may apply to hadoop 21.
What you are possibly running in to is using hadoop 20 jars in hbase
0.90 on top of hadoop 21. Try deleting the hadoop 20 jars and copying
in your hadoop 21.
Also consider running cdh3b2+, hadoop 21 is a panned release and no
one runs it nor expects it to be run in a production setting.
We are using the HBase 0.90 RCs with Cloudera's CDH3b3 via Debian packages. In case you want to consider it, please refer to its installation page for details. I would also recommend this page for installation on a cluster. Download the latest HBase 0.90 RC from here.
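The jar swap suggested in the quote looks roughly like this (a sketch, assuming HBase lives under /usr/lib/hbase and Hadoop under /usr/lib/hadoop; the jar names differ between 0.20 and 0.21, hence the globs):
ls /usr/lib/hbase/lib/hadoop*.jar                     # the Hadoop jars HBase shipped with
rm /usr/lib/hbase/lib/hadoop*.jar                     # drop the bundled 0.20 jars
cp /usr/lib/hadoop/hadoop*.jar /usr/lib/hbase/lib/    # copy in the jars from the running Hadoop 0.21
Restart the master afterwards and check the same log file to see whether the EOFException is gone.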
