My Hive shell hangs at "Logging initialized using configuration"
[cloudera@quickstart hive]$ hive
2017-03-01 08:23:50,909 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Logging initialized using configuration in file:/etc/hive/conf.dist/hive-log4j.properties
This is what the log file shows:
2017-02-28 08:56:34,685 WARN [main]: hive.metastore
(HiveMetaStoreClient.java:open(448)) - set_ugi() not successful,
Likely cause: new client talking to old server. Continuing without it.
org.apache.thrift.transport.TTransportException:
java.net.SocketTimeoutException: Read timed out at
org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
I also had this problem when I started the Hive CLI. I tried several ways to solve it, but nothing worked. When I changed the metastore database's version (for example, from MySQL 8 to MySQL 5.6, keeping the Hive version at 1.10), the problem was solved!
The log file already says "new client talking to old server", so you should change your metastore to a matching version. Keep in mind that your metastore's version must match your Hive version.
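A quick, hedged way to check for such a mismatch (assuming a MySQL-backed metastore and a Hive release that ships the schematool utility) is:

schematool -dbType mysql -info
# prints the schema version recorded in the metastore database and the
# version expected by the installed Hive binaries; they should match

schematool -dbType mysql -upgradeSchema
# if the database schema is older than the Hive binaries, this upgrades it in place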
Related
I am new to this field. I was trying the CDH 5.8 quick-start VM to run some basic Hive/Impala examples.
But I hit an issue: when I open HUE it gives the error below. I searched for a solution but didn't find anything that resolves my issue.
Configuration files located in /etc/hue/conf.empty
Potential misconfiguration detected. Fix and restart Hue.
Hive The application won't work without a running HiveServer2.
I checked the HiveServer2 service and it's up & running. I tried restarting the service and CDH, but that didn't help.
Hive Server2 is running [ OK ]
When I navigated to Hive and tried some commands, it gave me the error below:
Could not connect to quickstart.cloudera:10000 (code THRIFTTRANSPORT): TTransportException('Could not connect to quickstart.cloudera:10000',)
For Impala I am getting:
AnalysisException: This Impala daemon is not ready to accept user requests. Status: Waiting for catalog update from the StateStore.
I tried starting hive --service metastore but got this error:
[cloudera@quickstart conf.empty]$ hive --service metastore
2017-03-03 05:37:14,502 WARN [main] mapreduce.TableMapReduceUtil: The hbase-prefix-tree module jar containing PrefixTreeCodec is not present. Continuing without it.
Starting Hive Metastore Server
org.apache.thrift.transport.TTransportException: Could not create ServerSocket on address 0.0.0.0/0.0.0.0:9083.
I'm not sure what is wrong or whether I need to change some config. Can anyone guide me towards the solution?
Your HiveServer2 requires the Metastore to be up and running. It seems your Metastore server cannot start because port 9083 is already used by some other service. Check it:
netstat -tulpn | grep 9083
If something is using this port, you need to either change your metastore's port in the Hive configuration or stop the application that is already using it.
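For example, a rough sketch (the PID placeholder and the alternative port 9084 are illustrative, not taken from your system):

netstat -tulpn | grep 9083     # the last column shows the PID/program holding the port
kill <pid-from-netstat>        # stop it if it is not needed, or...

# ...start the metastore on a different port (most Hive versions accept -p)
hive --service metastore -p 9084
# and point clients at it via hive.metastore.uris in hive-site.xml, e.g.
#   thrift://quickstart.cloudera:9084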
I am using Hive for querying and data processing on my Hadoop main node, but I am not able to start Hive from the terminal; it takes far too long and never starts, as shown below.
#hive
Logging initialized using configuration in file:/etc/hive/2.3.4.7-4/0/hive-log4j.properties
You can look up the actual problem in the Hive logs.
Hive uses log4j for logging. By default logs are not emitted to the console by the CLI. The default logging level is WARN for Hive releases prior to 0.13.0. Starting with Hive 0.13.0, the default logging level is INFO.
The logs are stored in the directory /tmp/<user.name>:
/tmp/<user.name>/hive.log
Note: In local mode, prior to Hive 0.13.0 the log file name was ".log" instead of "hive.log".
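So, as a quick way to see what Hive is actually doing while it appears to hang, you can tail that log in a second terminal, or restart the CLI with logging sent to the console (both use the default locations/properties described above):

tail -f /tmp/$USER/hive.log
# or
hive --hiveconf hive.root.logger=INFO,console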
I used Amazon EMR to create an emr-4.0.0 cluster.
However, whenever I try to submit a spark application on it, it fails and gives the following error:
File does not exist: hdfs://ip-xx-xx-xxx-xx.ec2.internal:8020/user/hadoop/.sparkStaging/application_1441035668468_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
This is even though earlier in the log it uploads this exact same file without issuing any error message:
2015-08-31 15:43:29,070 INFO [main] yarn.Client (Logging.scala:logInfo(59)) - Uploading resource file:/usr/lib/spark/lib/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar -> hdfs://ip-xx-xx-xxx-xx.ec2.internal:8020/user/hadoop/.sparkStaging/application_1441035668468_0001/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar
(I've verified that the source file indeed exists at /usr/lib/spark/lib/spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar on the master machine).
The command I use is:
spark-submit --deploy-mode cluster --master yarn-cluster --class com.sundaysky.ads.spark.cluster.TrackingLogsAnalysis /tmp/oz/AdsTests-1.0-SNAPSHOT.jar
BTW, I've noticed that this uses Java 1.7 (even though it's the newest EMR version by Amazon), but I don't think that is relevant.
Do you have any ideas what the issue could be, or alternatively, how to debug the problem? I've tried many ways of adding parameters to the spark-submit command to get TRACE-level messages from yarn-client, but without success.
Thanks,
Oz
So, after talking to Amazon support, in case anyone ever comes across a similar issue:
The specific problem in my case was that my logic jar (not the spark-assembly-1.4.1-hadoop2.6.0-amzn-0.jar, which is provided by Amazon) was compiled with Java 8, while the machine only supported Java 7.
This was not reflected in the error log for the step, but rather in the stderr log for the step's container, where the following message appeared:
15/08/31 15:43:41 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread Exception in thread "main" java.lang.UnsupportedClassVersionError: com/xxxxxx/xxxx/xxxxx/xxxxx/MyClass : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
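If you want to confirm up front which Java version a jar was compiled for (a hedged sketch; the class name com/yourcompany/MyClass.class is a placeholder, while the jar path is the one from the spark-submit command above), inspect the class-file major version: 52 means Java 8, 51 means Java 7.

unzip -p /tmp/oz/AdsTests-1.0-SNAPSHOT.jar com/yourcompany/MyClass.class > /tmp/MyClass.class
javap -verbose /tmp/MyClass.class | grep "major version"
# "major version: 52" -> compiled for Java 8; the EMR nodes here only had Java 7 (51)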
If you encounter a similar problem, and the step's log files do not provide an answer, you should also look in the container's log:
Go to Amazon's EMR web page.
Click your cluster to open the Cluster Details screen
Near the "Log URI" there should be a folder icon, click it to open the logs
Go to "containers" and continue going down the one matching your task
Check the stderr.gz and stdout.gz for issues
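If you prefer the command line, the same container logs are written under the cluster's Log URI on S3, so you can pull them with the AWS CLI (bucket, prefix, cluster id, and container id below are placeholders; the application id is the one from the error above):

aws s3 ls --recursive s3://your-log-bucket/your-prefix/j-XXXXXXXXXXXXX/containers/
aws s3 cp s3://your-log-bucket/your-prefix/j-XXXXXXXXXXXXX/containers/application_1441035668468_0001/container_1441035668468_0001_01_000001/stderr.gz - | zcat | less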
HTH,
Oz
When I set the hbase.rootdir configuration in hbase-site.xml to the local filesystem, like file://hbase_root_dir_path, HBase worked OK. But when I changed it to hdfs://localhost:9000/hbase, HBase was also OK at the beginning; after a short time (usually a few seconds), however, it stopped working. I found with the jps command that the HMaster had stopped, and of course I could not open the localhost:60010 web page. I read the log and found something wrong, like the following:
INFO org.apache.zookeeper.server.PrepRequestProcessor: Got user-level KeeperException when processing sessionid:0x13e35b26eb80001 type:delete cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/hbase/backup-masters/localhost,35320,1366700487007 Error:KeeperErrorCode = NoNode for /hbase/backup-masters/localhost,35320,1366700487007
INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server localhost/127.0.0.1:2182. Will not attempt to authenticate using SASL (unknown error)
ERROR org.apache.hadoop.hbase.regionserver.handler.OpenRegionHandler: Failed open of region=person,,1365998702159.a5af90c23325829096517fb3b15bca17., starting to roll back the global memstore size.
java.lang.IllegalStateException: Could not instantiate a region instance.
WARN org.apache.zookeeper.ClientCnxn: Session 0x13e35b26eb80002 for server null, unexpected error, closing socket connection and attempting reconnect
I use the pseudo-distributed mode of HBase on Ubuntu 12.04 LTS.
In my /etc/hosts, I have already changed the IP of the hostname to 127.0.0.1, and my Hadoop safemode status is OFF. My Hadoop version is 1.0.4 and my HBase version is 0.94.6.1 (both are the latest stable releases); the HBase Reference Guide says hbase-0.94.x works fine with hadoop-1.0.x.
I think something about HDFS causes the problem, because it really works with the local filesystem. By the way, there is an hbase-x.x.x-security release; what's the difference between it and the hbase-x.x.x release, and do I need to use the security release?
Did you set your ZooKeeper quorum? It seems ZooKeeper is trying to connect to your localhost.
Try setting the addresses of the machines you want to use with the hbase.zookeeper.quorum property in hbase-site.xml. Also, if you're not managing your own ZooKeeper instance, make sure that in hbase-env.sh this line isn't commented out: export HBASE_MANAGES_ZK=true.
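A minimal sketch of that setup (the hostnames zknode1 and zknode2 are placeholders; for a pseudo-distributed install on one machine the value would simply be localhost):

In hbase-site.xml:
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zknode1,zknode2</value>
</property>

In hbase-env.sh (uncommented):
export HBASE_MANAGES_ZK=true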
I shut down my HDFS client while the HDFS and Hive instances were running. Now that I have logged back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It is giving me the error below:
ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise, I have to duplicate the effort of creating the entire database/schema once again.
I have resolved the problem. These are the steps I followed (a shell sketch of them is given after the steps):
Go to $HIVE_HOME/bin/metastore_db
Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
Delete the lock entries from db.lck and dbex.lck
Log out from the hive shell as well as from all running instances of HDFS
Re-login to HDFS and the hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
Now copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
Log out from all hive shell and HDFS instances
Re-login and you should see your old tables
Note: Step 5 may seem a little weird because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me.
Advantage: You don't have to duplicate the effort of re-creating the entire database.
Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks in advance.
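A shell sketch of the lock-file shuffle above (it assumes the embedded Derby metastore lives in $HIVE_HOME/bin/metastore_db, as in the steps, and that emptying the files is how you remove the lock entries):

cd $HIVE_HOME/bin/metastore_db
cp db.lck db.lck1          # back up the lock files
cp dbex.lck dbex.lck1
> db.lck                   # empty the originals to drop the stale lock entries
> dbex.lck
# ...log out, log back in, and try a DDL command (it may still fail at this point)...
cp db.lck1 db.lck          # restore the original lock files
cp dbex.lck1 dbex.lck
# ...log out of all hive/HDFS sessions and log back in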
I was told that we generally get this exception if the Hive console was not terminated properly.
The fix:
Run the jps command, look for the "RunJar" process, and kill it using the kill -9 command.
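For example (the PID 12345 is just an illustration of what jps might print):

jps | grep RunJar
# 12345 RunJar
kill -9 12345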
See: getting error in hive
Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?
For instance, if you're using MySQL to hold your metadata db, you will need to copy
mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
This fixed that same error for me.
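As a one-line sketch (assuming the connector jar has already been downloaded to the current directory):

cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/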
I faced the same issue and resolved it by starting the metastore service. Sometimes the service might stop if your machine is rebooted or goes down. You can start the service by running this command:
Login as $HIVE_USER
nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &
I had a similar problem with the Hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copied the db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Deleted the lock entries from db.lck and dbex.lck
4. Re-login from the hive shell. It works now.
Thanks
For instance, I use MySQL to hold the metadata db. I copied mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder and my error was resolved.
I was also facing the same problem, and figured out that I had both hive-default.xml and hive-site.xml (created manually by me). I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then started Hive, and it worked fine.
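Roughly (assuming the default conf directory under $HIVE_HOME):

mv $HIVE_HOME/conf/hive-site.xml $HIVE_HOME/conf/hive-site.xml-template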
Cheers,
Ajmal
I have faced this issue, and in my case it happened while running the hive command from the command line. I resolved it by running the kinit command, as I was using Kerberized Hive.
kinit -kt <your keytab file location> <kerberos principal>
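After kinit, you can confirm a ticket was actually obtained before retrying Hive:

klist    # should show a valid ticket for your principal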