Hive : The application won't work without a running HiveServer2 - hadoop

I am new to this field. I was trying the CDH 5.8 quick-start VM to work through some basic Hive/Impala examples.
But I hit an issue: when I open HUE it shows the error below. I searched for a solution but didn't find anything that resolves my issue.
Configuration files located in /etc/hue/conf.empty
Potential misconfiguration detected. Fix and restart Hue.
Hive The application won't work without a running HiveServer2.
I checked the service and it's up & running. I tried restarting the service & CDH, but it didn't help.
Hive Server2 is running [ OK ]
When I navigated to Hive and tried some commands, it gave me the error below.
Could not connect to quickstart.cloudera:10000 (code THRIFTTRANSPORT): TTransportException('Could not connect to quickstart.cloudera:10000',)
FOR Impala I am getting
AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
I tried starting hive --service metastore but got an error:
[cloudera@quickstart conf.empty]$ hive --service metastore
2017-03-03 05:37:14,502 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Starting Hive Metastore Server
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
I'm not sure what is wrong or if I need to change some config. Can anyone guide me towards the solution?

Your HiveServer2 requires the Metastore to be up and running. It seems your Metastore Server cannot start because port 9083 is already used by some other service. Check it:
netstat -tulpn | grep 9083
If something is using this port, you need to either change the port of your metastore in the Hive configuration or stop the application that already uses it.
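As a rough sketch of the second option, assuming your Hive version supports the -p flag for the metastore service (port 9084 below is just an arbitrary free port):
# See which process holds the default metastore port
sudo netstat -tulpn | grep 9083
# Either stop that process, or start the metastore on another port
hive --service metastore -p 9084 &
Note that clients then have to point at the new port via hive.metastore.uris.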

Related

Nifi putHiveStreaming Failed to connect to metastore uri

I'm facing issues with the PutHiveStreaming processor as it is not connecting to the Hive metastore. I am using kylo-cloudera-sandbox-0.9.1. Please help me with this, as I'm not able to figure out the issue.
It sounds as if your Thrift server may not be running on the sandbox. Start with a simple port test to see if it is up and running, and then proceed from there.
telnet localhost 9083
lsof -i :9083
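If nothing is listening on 9083, the metastore probably isn't running at all. A minimal sketch for starting it, assuming a packaged install like the Cloudera sandbox (service names can differ per image):
sudo service hive-metastore start
# or, for a tarball install:
hive --service metastore &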

Hive - issues while starting

I have been using Hive for some time now on Ubuntu, with Hadoop in pseudo-distributed mode. However, today, out of nowhere, I am getting an error while starting the Hive shell. I have not made any changes to the configuration at all -
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
The hive-metastore service is not running. You can start the service with the command below. This command is for installations made using packages.
service hive-metastore start
For tarball installations, you can start the hive metastore using the below command
hive --service metastore &
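To confirm the metastore actually came up, check that something is now listening on the default metastore port (9083, assuming you haven't changed it):
netstat -tulpn | grep 9083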

Beeline command issue

I am new to Hive and hopefully this is going to be an easy thing to solve
for someone with more experience, but I am having trouble doing it on my
own.
On my EC2 app server I am running the following command with no error:
beeline -u jdbc:hive2://master
This is working on Hive 13 which was installed through a bootstrap action
using the latest AMI version. 'master' is pointing to my EMR cluster
Then I downloaded the source for Hive 14 and built it. I have replaced my
/home/hadoop/hive directory with the package that was built.
However, if I try to execute the same command, I get an error:
scan complete in 6ms
Connecting to jdbc:hive2://master
Error: Could not open client transport with JDBC Uri: jdbc:hive2://master:
Cannot open without port. (state=08S01,code=0)
Beeline version 0.14.0 by Apache Hive
0: jdbc:hive2://master (closed)>
Running it with the port provided works correctly:
beeline -u jdbc:hive2://master:10000
I would like to be able to run the command without providing the
default port number.
Can anyone point me in the right direction?
Thanks,
Hive Beeline Connection in Two Modes:
1. Embedded Mode:
If both the Hive client and the Hive server are on the same machine, connect Beeline using the URL below:
!connect jdbc:hive2://
2. Remote Mode:
If the server is on one machine but the client is on another, you can connect Beeline using the URL below:
!connect jdbc:hive2://<host>:<port>
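As a minimal sketch of both modes from the command line (the host name master and port 10000 are taken from the question above):
beeline -u jdbc:hive2://                 # embedded mode, no separate HiveServer2 process needed
beeline -u jdbc:hive2://master:10000     # remote mode, explicit host and port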

java.net.ConnectException: Connection refused error when running Hive

I'm trying work through a hive tutorial in which I enter the following:
load data local inpath '/usr/local/Cellar/hive/0.11.0/libexec/examples/files/kv1.txt' overwrite into table pokes;
This results in the following error:
FAILED: RuntimeException java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
I see that there are some replies on SA having to do with configuring my IP address and localhost, but I'm not familiar with the concepts in the answers. I'd appreciate anything you can tell me about the fundamentals of what causes this kind of error and how to fix it. Thanks!
This is because Hive is not able to contact your NameNode.
Check whether your Hadoop services have started properly.
Run the command jps to see which services are running.
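A quick sanity check, assuming a standard pseudo-distributed Hadoop setup:
jps
# The output should list at least NameNode and DataNode; if NameNode is
# missing, Hive cannot reach hdfs://localhost:9000 and fails as shown above.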
The reason you get this error is that Hive needs Hadoop as its base, so you need to start Hadoop first.
Here are the steps.
Step 1: download Hadoop and unzip it
Step 2: cd #your_hadoop_path
Step 3: ./bin/hadoop namenode -format
Step 4: ./sbin/start-all.sh
Then go back to #your_hive_path and start Hive again.
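As an optional check before starting Hive, you can confirm HDFS answers (run from the same #your_hadoop_path as in the steps above):
./bin/hadoop fs -ls /    # should list the HDFS root without a ConnectException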
An easy way I found is to edit the /etc/hosts file. By default it looks like:
127.0.0.1 localhost
127.0.1.1 user_user_name
Just change 127.0.1.1 to 127.0.0.1, that's it. Then restart your shell and restart your cluster with start-all.sh.
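After that edit, the file would read roughly like this (user_user_name is the placeholder from the answer above):
127.0.0.1 localhost
127.0.0.1 user_user_name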
I had the same problem when setting up Hive.
I solved it by changing my /etc/hostname:
formerly it contained my user_machine_name;
after I changed it to localhost, everything went well.
I guess this is because Hadoop may want to resolve your hostname using the /etc/hostname file, but that pointed to your user_machine_name while the Hadoop service was running on localhost.
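A quick way to compare what the machine calls itself with what /etc/hosts resolves (standard Ubuntu file locations assumed):
hostname
cat /etc/hostname
cat /etc/hosts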
I was able to resolve the issue by executing the command below:
start-all.sh
This ensures that the Hadoop services have started.
After that, starting Hive was straightforward.
I had a similar problem with a connection timeout:
WARN DFSClient: Failed to connect to /10.165.0.27:50010 for block, add to deadNodes and continue. java.net.ConnectException: Connection timed out: no further information
DFSClient was resolving nodes by internal IP. Here's the solution for this:
.config("spark.hadoop.dfs.client.use.datanode.hostname", "true")

Error occurred when using HDFS to store the data of HBase

When I set the hbase.rootdir configuration in hbase-site.xml to the local filesystem, like file://hbase_root_dir_path, HBase worked OK. But when I changed it to hdfs://localhost:9000/hbase, HBase was also OK at the beginning. After a short time (usually a few seconds), however, it stopped working. Using the jps command I found that the HMaster had stopped, and of course I could not open the localhost:60010 web page. I read the log and found something wrong, like the following:
INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x13e35b26eb80001 type:delete cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/localhost,35320,1366700487007 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/localhost,35320,1366700487007
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2182. Will not attempt to authenticate using SASL (unknown error)
ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=person,,1365998702159.a5af90c23325829096517fb3b15bca17., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
WARN org.apache.zookeeper.ClientCnxn: Session 0x13e35b26eb80002 for server null, unexpected error, closing socket connection and attempting reconnect
I use the pseudo-distributed mode of HBase on Ubuntu 12.04 LTS.
In my /etc/hosts, I have already changed the IP of the hostname to 127.0.0.1, and my Hadoop safemode status is OFF. My Hadoop version is 1.0.4 and my HBase version is 0.94.6.1 (both are the latest stable releases); the HBase Reference Guide says hbase-0.94.x works fine with hadoop-1.0.x.
I think something about HDFS causes the problem, because it really works with the local filesystem. By the way, there is an hbase-x.x.x-security release; what's the difference between it and the hbase-x.x.x release, and do I need to use the security release?
Did you set your ZooKeeper quorum? It seems ZooKeeper is trying to connect to your localhost.
Try setting the addresses of the machines you want to use with the hbase.zookeeper.quorum property in hbase-site.xml. Also, if you're not managing your own ZooKeeper instance, make sure that in hbase-env.sh this line isn't commented out: export HBASE_MANAGES_ZK=true.
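A couple of quick checks, as a sketch (the ZooKeeper client port 2182 is taken from the log above; /etc/hbase/conf is an assumed config location and may differ on your install):
echo ruok | nc localhost 2182                            # prints imok if ZooKeeper answers
grep -n HBASE_MANAGES_ZK /etc/hbase/conf/hbase-env.sh
grep -A1 hbase.zookeeper.quorum /etc/hbase/conf/hbase-site.xml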
