Authentication failed, status: 503 error hortonworks HDP 2.4 - hadoop

I am getting the following error (through the command line as well as the web interface).
Useful info:
1. Hive, HDFS, and YARN services are up and running.
2. I can even get to the hive prompt through the command line and the web interface. The error occurs when I run show databases (or click the refresh symbol in the database explorer of the web interface).
3. I logged in as the root user and as the hdfs user.
4. I tried changing permissions to 755 on the directory /user/root.
Any help would be greatly appreciated.
------------------ start of error message (copied from the web-interface log)
Unable to submit statement. Error while processing statement: FAILED: Hive Internal Error: com.sun.jersey.api.client.ClientHandlerException(java.io.IOException: org.apache.hadoop.security.authentication.client.AuthenticationException: Authentication failed, status: 503, message: Service Unavailable) [ERROR_STATUS].

Step 1) Restart Atlas on Sandbox.
Step 2) Restart Hive services on Sandbox.
For me this resolved the issue.
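If you prefer the command line over the Ambari UI, the restarts can also be driven through the Ambari REST API. A rough sketch, assuming the sandbox defaults (admin/admin credentials, Ambari on port 8080, cluster name Sandbox; all of these are assumptions, adjust for your setup):
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Stop Atlas"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/services/ATLAS
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT -d '{"RequestInfo":{"context":"Start Atlas"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://localhost:8080/api/v1/clusters/Sandbox/services/ATLAS
Repeat the same two calls against .../services/HIVE to restart the Hive services.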
Cheers

Related

Error applying authorization policy on hive configuration

I am trying to launch hive from the command prompt on my Windows 10 machine, i.e.
C:\hadoop-2.7.1\hive-2.1.0\bin>hive
It throws the "Error applying authorization policy on hive configuration" error.
Here is the full error:
Error applying authorization policy on hive configuration: Couldn't create directory ${system:java.io.tmpdir}\${hive.session.id}_resources
What could be the problem?
Check whether the namenode and datanode services are running. On Windows, start them with sbin/start-dfs.cmd.
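If HDFS is up and the error persists, a fix that is often suggested for this exact message is to replace the unresolved ${system:java.io.tmpdir}/${hive.session.id} placeholders in hive-site.xml with literal, writable paths. A sketch (C:/tmp/hive is just an example directory, not something the question prescribes):
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>C:/tmp/hive</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>C:/tmp/hive/resources</value>
</property>
Create the directory beforehand, do the same for any other property still containing those placeholders, and restart hive.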

Running pig scripts giving me the error: You are a Hue admin but not a HDFS superuser (which is "hdfs")

I'm using the Cloudera QuickStart VM 4.7.
I'm unable to run pig scripts, as it throws the following error message:
Cannot access: /pigwordcount/wordcountinput.txt. Note: You are a Hue admin but not a HDFS superuser (which is "hdfs").
[Errno 2] File /pigwordcount/wordcountinput.txt not found
It says "file not found", but the files are already present in the directory user/cloudera/pigwordcount.
How can I resolve this issue?
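One thing worth double-checking: the path in the error (/pigwordcount/wordcountinput.txt) is an absolute HDFS path at the root of the filesystem, while the question says the file sits under user/cloudera/pigwordcount, i.e. the cloudera user's home directory. A quick way to see the mismatch (assuming the file really is under /user/cloudera/pigwordcount; adjust if not):
hadoop fs -ls /pigwordcount                # the path the script is asking for
hadoop fs -ls /user/cloudera/pigwordcount  # where the file apparently lives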

Hive jdbc connection is giving error if MR is involved

I am working on a Hive JDBC connection in HDP 2.1.
The code works fine for queries where MapReduce is not involved, like "select * from tablename". The same code shows an error when the query is modified with a 'where' clause or when we specify column names (which runs MapReduce in the background).
I have verified the correctness of the query by executing it in the Hive CLI.
I have also verified the read/write permissions on the table for the user under which I am running the Java JDBC code.
The error is as follows
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:275)
at org.apache.hive.jdbc.HiveStatement.executeQuery(HiveStatement.java:355)
at com.testing.poc.hivejava.HiveJDBCTest.main(HiveJDBCTest.java:25)
I also hit this exception today when submitting a Hive task from Java.
The error was:
hive_driver: org.apache.hive.jdbc.HiveDriver
hive_url: jdbc:hive2://10.174.242.28:10000/default
get connection success (Hive connection obtained successfully!)
java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
I tried executing the SQL directly in Hive and it worked fine. Then I looked at the log in /var/log/hive/hadoop-cmf-hive-HIVESERVER2-cloud000.log.out and found the cause of the error:
Job Submission failed with exception 'org.apache.hadoop.security.AccessControlException(Permission denied: user=anonymous, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
Solution
I used the following command:
sudo -u hdfs hadoop fs -chmod -R 777 /
This solved the error!
After the fix, the program output was:
hive_driver: org.apache.hive.jdbc.HiveDriver
hive_url: jdbc:hive2://cloud000:10000/default
get connection success (Hive connection obtained successfully!)
Heart beat
insert executed successfully!
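As an aside, chmod -R 777 / is a very broad change. Since the log shows user=anonymous being denied WRITE on /user, a narrower alternative (assuming anonymous really is the user the JDBC connection runs as, e.g. because no username was passed to the connection) would be:
sudo -u hdfs hadoop fs -mkdir /user/anonymous
sudo -u hdfs hadoop fs -chown anonymous /user/anonymous
or simply pass a real username when opening the JDBC connection so the job does not run as anonymous.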
If you use beeline to execute the same queries, do you see the same behaviour as you get while running your test program?
The beeline client also uses the open source JDBC driver and connects to the Hive server, which is similar to what you do in your program. The Hive CLI, on the other hand, has Hive embedded in it and does not connect to a remote Hive server by default. You can use the Hive CLI to connect to a remote HiveServer1, but I don't believe you can use it to connect to HiveServer2 (use beeline for HiveServer2).
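For example, something along these lines, where the host, user and query are placeholders to be replaced with your own values:
beeline -u jdbc:hive2://<hiveserver2-host>:10000/default -n <username> -e "select col1 from tablename where col1 = 'x'"
If beeline fails with the same MapRedTask error, the problem is on the server side rather than in your Java code.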
For this error, you can take a look at the hive.log and hiveserver2.log on the server side to get more insight into what might have caused the MapReduce error.
Hope this helps.
Cheers,
Holman

FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient

I shut down my HDFS client while HDFS and Hive instances were running. Now that I have logged back into Hive, I can't execute any of my DDL tasks, e.g. "show tables" or "describe tablename". It gives me the error below:
ERROR exec.Task (SessionState.java:printError(401)) - FAILED: Error in metadata: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.metastore.HiveMetaStoreClient
Can anybody suggest what I need to do to get my metastore_db instantiated without recreating the tables? Otherwise, I would have to duplicate the effort of creating the entire database/schema again.
I have resolved the problem. These are the steps I followed:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Log out from the hive shell as well as from all running instances of HDFS
5. Re-login to HDFS and the hive shell. If you run DDL commands, it may again give you the "Could not instantiate HiveMetaStoreClient" error
6. Copy db.lck1 back to db.lck and dbex.lck1 back to dbex.lck
7. Log out from all hive shell and HDFS instances
8. Re-login and you should see your old tables
Note: Step 5 may seem a little odd because even after deleting the lock entries it will still give the HiveMetaStoreClient error, but it worked for me.
Advantage: You don't have to duplicate the effort of re-creating the entire database.
Hope this helps somebody facing the same error. Please vote if you find it useful. Thanks in advance.
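For reference, steps 1-3 and 6 above in shell terms; a rough sketch that assumes an embedded Derby metastore_db and reads "deleting the lock entries" as emptying the lock files:
cd $HIVE_HOME/bin/metastore_db
cp db.lck db.lck1 && cp dbex.lck dbex.lck1   # step 2: back up the lock files
> db.lck                                      # step 3: empty the lock entries
> dbex.lck
# steps 4-5: log out, log back in, expect the error once more, then restore:
cp db.lck1 db.lck && cp dbex.lck1 dbex.lck   # step 6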
I was told that we generally get this exception if the hive console was not terminated properly.
The fix:
Run the jps command, look for the "RunJar" process, and kill it with the kill -9 command.
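For example (the RunJar pid will differ on your machine):
jps | grep RunJar          # note the pid printed in the first column
kill -9 <pid>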
See: getting error in hive
Have you copied the jar containing the JDBC driver for your metadata db into Hive's lib dir?
For instance, if you're using MySQL to hold your metadata db, you will need to copy
mysql-connector-java-5.1.22-bin.jar into $HIVE_HOME/lib.
This fixed that same error for me.
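In other words, something like the following, where the connector version depends on what you downloaded:
cp mysql-connector-java-5.1.22-bin.jar $HIVE_HOME/lib/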
I faced the same issue and resolved it by starting the metastore service. Sometimes the service gets stopped if your machine is rebooted or goes down. You can start it as follows:
Log in as $HIVE_USER, then run:
nohup hive --service metastore > $HIVE_LOG_DIR/hive.out 2> $HIVE_LOG_DIR/hive.log &
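To check whether the metastore actually came up, you can look for the process or its port (9083 is the default metastore port; adjust if you changed hive.metastore.uris):
ps -ef | grep -i '[m]etastore'
netstat -an | grep 9083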
I had a similar problem with the hive server and followed the steps below:
1. Go to $HIVE_HOME/bin/metastore_db
2. Copy db.lck to db.lck1 and dbex.lck to dbex.lck1
3. Delete the lock entries from db.lck and dbex.lck
4. Re-login from the hive shell. It works.
Thanks
For instance, I use MySQL to hold the metadata db, so I copied
mysql-connector-java-5.1.22-bin.jar into the $HIVE_HOME/lib folder.
That resolved my error.
I was also facing the same problem, and figured out that I had both hive-default.xml and hive-site.xml (created manually by me).
I moved my hive-site.xml to hive-site.xml-template (as I did not need this file), then
started hive, and it worked fine.
Cheers,
Ajmal
I have faced this issue, and in my case it occurred while running the hive command from the command line.
I resolved it by running the kinit command, as I was using a kerberized Hive:
kinit -kt <your keytab file location> <kerberos principal>

Impala on Cloudera CDH "Could not create logging file: Permission denied"

I installed Impala via a parcel in the Cloudera Manager 4.5 on a CDH 4.2.0-1.cdh4.2.0.p0.10 cluster.
When I try to start the service, it fails on all nodes with this message:
perl -pi -e 's#{{CMF_CONF_DIR}}#/run/cloudera-scm-agent/process/800-impala-IMPALAD#g' /run/cloudera-scm-agent/process/800-impala-IMPALAD/impala-conf/impalad_flags
'[' impalad = impalad ']'
exec /opt/cloudera/parcels/IMPALA-0.6-1.p0.109/lib/impala/../../bin/impalad --flagfile=/run/cloudera-scm-agent/process/800-impala-IMPALAD/impala-conf/impalad_flags
Could not create logging file: Permission denied
COULD NOT CREATE A LOGGINGFILE 20130326-204959.15015!log4j:ERROR setFile(null,true) call failed.
java.io.FileNotFoundException: /var/log/impalad/impalad.INFO (Permission denied)
at java.io.FileOutputStream.openAppend(Native Method)
...
at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:685)
at org.apache.hadoop.fs.FileSystem.<clinit>(FileSystem.java:92)
+ date
Complete StdErr Log
I'm unsure whether the permission issue is the cause of Impala not running, or whether something else crashes and the permission issue only comes up because the crash log cannot be written.
Any help would be great!
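One quick way to separate the two possibilities is to check whether the user the daemon runs as can write to the log directory at all (this assumes impalad runs as the impala user, which is the usual Cloudera Manager default):
ls -ld /var/log/impalad                        # check owner and permissions on the log dir
sudo chown -R impala:impala /var/log/impalad   # only if it turns out to be owned by someone else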
Run impala from debug binaries as described here:
https://issues.cloudera.org/browse/IMPALA-160
Seems to be related to the JVM and the kernel that ships with 12.04.1 LTS (Ubuntu).
Original Answer: https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/impala-user/4MRZYbn5hI0
