Error with DataStax Enterprise with Hadoop enabled and Kerberos enabled - hadoop

I have configured DSE with Hadoop enabled and Kerberos authentication, but I see the ERROR below in the log. I can execute dse hadoop fs commands and nodetool commands, but I cannot run MapReduce jobs.
The following is the log:
ERROR [TASK-TRACKER-INIT] 2014-02-07 20:45:03,813 TaskTrackerRunner.java (line 128) Hadoop Task Tracker caused an exception in state STARTING:
java.io.IOException: Cannot run program "/usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller" (in directory "."): error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
at org.apache.hadoop.util.Shell.startProcess(Shell.java:199)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:225)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:401)
at org.apache.hadoop.mapred.LinuxTaskController.setup(LinuxTaskController.java:137)
at org.apache.hadoop.mapred.TaskTracker.<init>(TaskTracker.java:1470)
at com.datastax.bdp.hadoop.mapred.TaskTrackerRunner.initService(TaskTrackerRunner.java:104)
at com.datastax.bdp.hadoop.mapred.TaskTrackerRunner.initService(TaskTrackerRunner.java:31)
at com.datastax.bdp.hadoop.mapred.ServiceRunner.run(ServiceRunner.java:121)
at java.lang.Thread.run(Thread.java:724)
Caused by: java.io.IOException: error=13, Permission denied
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:135)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1022)
... 10 more
ERROR [Thrift:8] 2014-02-07 20:45:12,624 TNegotiatingServerTransport.java (line 293) An error occurred during transport negotiation
com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
ERROR [Thrift:8] 2014-02-07 20:45:12,625 TNegotiatingServerTransport.java (line 524) Failed to open server transport.
com.datastax.bdp.transport.common.TTransportNegotiationException: An error occurred during transport negotiation
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:294)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
... 7 more
ERROR [Thrift:8] 2014-02-07 20:45:12,626 CustomTThreadPoolServer.java (line 219) Error occurred during processing of message.
java.lang.RuntimeException: Failed to open server transport: unknown
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:525)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:408)
at org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:193)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: An error occurred during transport negotiation
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:294)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.open(TNegotiatingServerTransport.java:192)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getTransport(TNegotiatingServerTransport.java:517)
... 5 more
Caused by: com.datastax.bdp.transport.common.TTransportNegotiationException: Improper authentication type requested. Requested auth: No authentication with service principal: FRAMED_TRANSPORT_FAKE_PRINCIPAL, Allowed auth: Kerberos
at com.datastax.bdp.transport.server.TNegotiatingServerTransport$Factory.getUnderlyingFactory(TNegotiatingServerTransport.java:485)
at com.datastax.bdp.transport.server.TNegotiatingServerTransport.handleTransportNegotiation(TNegotiatingServerTransport.java:286)
... 7 more
WARN [TASK-TRACKER-INIT] 2014-02-07 20:45:13,828 MetricsSystemImpl.java (line 200) Source name ugi already exists!
This is the task-controller:
-rwsr-x--- 1 root cassandra 40111 Jan 9 18:14 /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller
I am using
DSE 3.2.3
Java 1.7.0_25
I have properly configured the cassandra.yaml, dse.yaml, core-site.xml, mapred-site.xml, and /etc/default/dse files.

You should not start the daemon from the root home directory. This sounds odd, but try starting the daemon from a directory other than root's home directory.
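For example, a minimal sketch (the exact start command depends on your install; in DSE 3.x, dse cassandra -t starts a standalone analytics node):

cd /tmp                 # any directory the service user can traverse
sudo dse cassandra -t   # start the node so that "." is no longer root's home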

/usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller" (in directory "."): error=13, Permission denied
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1041)
The above clearly shows that the user doesn't have permission to run it. Change the permissions so that the file is executable by that user, using chmod.
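For example, a sketch assuming the task tracker runs as the cassandra user, as the listing in the question suggests (adjust owner and mode to your environment):

sudo chown root:cassandra /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller
sudo chmod 4750 /usr/share/dse/hadoop/native/Linux-amd64-64/bin/task-controller  # 4750 keeps the setuid bit the task-controller needs and grants group execute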

Related

Not able to run Oozie workflow in AWS EMR

We are trying to implement a POC running Oozie on AWS EMR. For security reasons I cannot post the workflow, but it is a simple example with only a rename action, which renames a file. The rest of the actions are the standard ones: start, end, fatal error, error handler, etc.
The same workflow worked fine on an EC2 instance, but when we try to run the Oozie workflow on EMR we get the following error:
2019-09-12 19:34:41,300 WARN ActionStartXCommand:523 - SERVER[<hostname>] USER[hadoop] GROUP[-] TOKEN[] APP[<WorkflowName>] JOB[0000006-190911195656052-oozie-oozi-W] ACTION[0000006-190911195656052-oozie-oozi-W#ErrorHandler] Error starting action [ErrorHandler]. ErrorType [ERROR], ErrorCode [EM007], Message [EM007: Encountered an error while sending the email message over SMTP.]
org.apache.oozie.action.ActionExecutorException: EM007: Encountered an error while sending the email message over SMTP.
at org.apache.oozie.action.email.EmailActionExecutor.email(EmailActionExecutor.java:304)
at org.apache.oozie.action.email.EmailActionExecutor.validateAndMail(EmailActionExecutor.java:173)
at org.apache.oozie.action.email.EmailActionExecutor.start(EmailActionExecutor.java:112)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:243)
at org.apache.oozie.command.wf.ActionStartXCommand.execute(ActionStartXCommand.java:68)
at org.apache.oozie.command.XCommand.call(XCommand.java:291)
at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:459)
at org.apache.oozie.command.wf.SignalXCommand.execute(SignalXCommand.java:82)
at org.apache.oozie.command.XCommand.call(XCommand.java:291)
at org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:283)
at org.apache.oozie.command.wf.ActionEndXCommand.execute(ActionEndXCommand.java:62)
at org.apache.oozie.command.XCommand.call(XCommand.java:291)
at org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:244)
at org.apache.oozie.command.wf.ActionCheckXCommand.execute(ActionCheckXCommand.java:56)
at org.apache.oozie.command.XCommand.call(XCommand.java:291)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.oozie.service.CallableQueueService$CallableWrapper.run(CallableQueueService.java:210)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: javax.mail.MessagingException: Could not connect to SMTP host: <hostname>, port: 25;
nested exception is:
java.net.ConnectException: Connection refused (Connection refused)
at com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:1961)
at com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:654)
When we check the application logs, we see the error below:
Launcher AM execution failed
java.lang.UnsupportedOperationException: Not implemented by the S3FileSystem FileSystem implementation
at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:216)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2564)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2574)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.oozie.action.hadoop.FSLauncherURIHandler.create(FSLauncherURIHandler.java:36)
at org.apache.oozie.action.hadoop.PrepareActionsHandler.execute(PrepareActionsHandler.java:86)
at org.apache.oozie.action.hadoop.PrepareActionsHandler.prepareAction(PrepareActionsHandler.java:73)
at org.apache.oozie.action.hadoop.LauncherAM.executePrepare(LauncherAM.java:371)
at org.apache.oozie.action.hadoop.LauncherAM.access$000(LauncherAM.java:55)
at org.apache.oozie.action.hadoop.LauncherAM$2.run(LauncherAM.java:220)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.oozie.action.hadoop.LauncherAM.run(LauncherAM.java:217)
at org.apache.oozie.action.hadoop.LauncherAM$1.run(LauncherAM.java:153)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1642)
at org.apache.oozie.action.hadoop.LauncherAM.main(LauncherAM.java:141)
Exception in thread "main" java.lang.UnsupportedOperationException: Not implemented by the S3FileSystem FileSystem implementation
at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:216)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2564)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2574)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2591)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:91)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2630)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2612)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:370)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:296)
at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:1060)
at org.apache.hadoop.io.SequenceFile$RecordCompressWriter.<init>(SequenceFile.java:1371)
Hadoop distribution: Amazon 2.8.5
Oozie version: Oozie 5.1.0
EMR version: emr-5.26.0
I would appreciate any guidance here.
The issue was resolved after we used an older version of Oozie, i.e. 4.3. No other changes were made, and it works fine. I had read in one of the AWS links that some people were not able to execute Oozie with 5.x versions. I will update the answer once we get a concrete reply from AWS.

hadoop distcp permission denied error while executing "distcp" command with a non-super user

I am trying to copy data between two clusters using hadoop distcp. The command will be executed by a user which doesn't have any permissions; only the superuser has all file permissions (777). Is there a way to solve this problem?
Command executed on the shell:
hadoop distcp hdfs://[cluster1]:9000/home/[usr1]/data1 hdfs://[cluster2]:9000/home/[usr1]/data2
Error:
ERROR [main] tools.DistCp (DistCp.java:run(162)) - Exception encountered
org.apache.hadoop.security.AccessControlException: Permission denied: user=[usr1], access=EXECUTE, inode="/tmp":[superuser]:supergroup:drwxrwx---
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:350)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:311)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:238)
at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1751)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1735)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkOwner(FSDirectory.java:1680)
at org.apache.hadoop.hdfs.server.namenode.FSDirAttrOp.setPermission(FSDirAttrOp.java:64)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setPermission(FSNamesystem.java:1777)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setPermission(NameNodeRpcServer.java:830)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setPermission(ClientNamenodeProtocolServerSideTranslatorPB.java:469)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:503)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:871)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:817)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1889)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2606)
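Going by the trace, the copy fails while traversing /tmp (mode drwxrwx---, owned by the superuser), where the MapReduce job stages its files. One possible fix, a sketch under the assumption that opening the staging area is acceptable in your cluster, is to have the superuser relax it once:

hadoop fs -chmod 1777 /tmp   # run as the HDFS superuser; 1777 (sticky) is the conventional mode for /tmp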

storm hdfs bolt not working in HDP 2.2

I have tried executing the storm-hdfs sample program. I created a jar file and copied it to the edge node of the cluster.
Then I executed the command below:
java -cp storm-.1.jar:lib/*:lib/log4j.properties test.storm.sample.StormSampleTopology
But I am getting the exception below:
17434 [Thread-11-hdfs_bolt] WARN org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
17448 [Thread-9-hdfs_bolt] WARN org.apache.hadoop.ipc.Client - Exception encountered while connecting to the server : java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name
17457 [Thread-11-hdfs_bolt] ERROR backtype.storm.util - Async loop died!
java.lang.RuntimeException: Error preparing HdfsBolt: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "xxxx2002.xx.xxxxx.com/11.11.111.221"; destination host is: "x-xxxxx.xxxx.com":8020;
at org.apache.storm.hdfs.bolt.AbstractHdfsBolt.prepare(AbstractHdfsBolt.java:96) ~[storm-hdfs-0.1.2.jar:na]
at backtype.storm.daemon.executor$fn__3441$fn__3453.invoke(executor.clj:692) ~[storm-core-0.9.3.jar:0.9.3]
at backtype.storm.util$async_loop$fn__464.invoke(util.clj:461) ~[storm-core-0.9.3.jar:0.9.3]
at clojure.lang.AFn.run(AFn.java:24) [clojure-1.5.1.jar:na]
at java.lang.Thread.run(Thread.java:744) [na:1.7.0_45]
Caused by: java.io.IOException: Failed on local exception: java.io.IOException: java.lang.IllegalArgumentException: Failed to specify server's Kerberos principal name; Host Details : local host is: "xxx2002.xx.xxx.com/11.11.111.221"; destination host is: "x-xxxx.xxx.com":8020;
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1472) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.Client.call(Client.java:1399) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232) ~[hadoop-common-2.6.0.2.2.0.0-2041.jar:na]
at com.sun.proxy.$Proxy14.create(Unknown Source) ~[na:na]
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295) ~[hadoop-hdfs-2.6.0.2.2.0.0-2041.jar:na]
My keytab file is located on the edge node itself, and I have also specified its location in the property file.
Please help me resolve this issue.
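"Failed to specify server's Kerberos principal name" usually means the HDFS configuration the bolt sees is missing the NameNode principal. A sketch of the property for hdfs-site.xml (the realm and the "nn" service name are assumptions; use your cluster's actual principal):

<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>nn/_HOST@MY.DOMAIN.COM</value>
</property>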

CDH5 Hue Hive — Beeswax Server: Error opening session: Failed to validate proxy privilage of hue for admin

I set up a Hadoop cluster secured with Kerberos, and Hive has Sentry enabled. I have a problem with the Hue - Hive (Beeswax) editor: Hue can't load data or information from Hive. In the hive-server2 log:
2014-04-03 11:36:39,814 WARN thrift.ThriftCLIService (ThriftCLIService.java:GetSchemas(364)) - Error getting catalogs:
org.apache.hive.service.cli.HiveSQLException: Invalid SessionHandle: SessionHandle [de47ccb1-0bf0-44f0-b15b-c07fd62b1134]
at org.apache.hive.service.cli.session.SessionManager.getSession(SessionManager.java:156)
at org.apache.hive.service.cli.CLIService.getSchemas(CLIService.java:222)
at org.apache.hive.service.cli.thrift.ThriftCLIService.GetSchemas(ThriftCLIService.java:359)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1433)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$GetSchemas.getResult(TCLIService.java:1418)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:603)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
2014-04-03 11:36:39,815 INFO thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(203)) - Client protocol version: HIVE_CLI_SERVICE_PROTOCOL_V5
2014-04-03 11:36:39,816 WARN thrift.ThriftCLIService (ThriftCLIService.java:OpenSession(212)) - Error opening session:
org.apache.hive.service.cli.HiveSQLException: Failed to validate proxy privilage of hue for admin
at org.apache.hive.service.cli.thrift.ThriftCLIService.getProxyUser(ThriftCLIService.java:556)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getUserName(ThriftCLIService.java:236)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getSessionHandle(ThriftCLIService.java:242)
at org.apache.hive.service.cli.thrift.ThriftCLIService.OpenSession(ThriftCLIService.java:206)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1313)
at org.apache.hive.service.cli.thrift.TCLIService$Processor$OpenSession.getResult(TCLIService.java:1298)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge20S$Server$TUGIAssumingProcessor.process(HadoopThriftAuthBridge20S.java:603)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:244)
at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.security.authorize.AuthorizationException: Unauthorized connection for super-user: hue from IP /10.199.91.97
at org.apache.hadoop.security.authorize.ProxyUsers.authorize(ProxyUsers.java:165)
at org.apache.hadoop.hive.shims.HadoopShimsSecure.authorizeProxyAccess(HadoopShimsSecure.java:585)
at org.apache.hive.service.cli.thrift.ThriftCLIService.getProxyUser(ThriftCLIService.java:552)
... 12 more
Can anyone help me?
Thank you
Is Hive impersonation turned on? When using Sentry it should be off, so that the Hive user can access the data according to Sentry privileges. This Hive with Sentry post covers it in more detail.
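In HiveServer2 that switch is hive.server2.enable.doAs; a minimal hive-site.xml sketch:

<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
</property>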

Server returns 403 during secondary namenode docheckpoint with namenode

I am configuring Hadoop on a cluster.
All nodes started successfully, but the secondary namenode failed doCheckpoint with the following log:
2011-10-25 11:09:07,207 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: Exception in doCheckpoint:
2011-10-25 11:09:07,208 ERROR org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode: java.io.IOException: Server returned HTTP response code: 403 for URL: https://name.node.http:50470/getimage?getimage=1
at sun.reflect.GeneratedConstructorAccessor24.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1491)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1485)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1139)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:234)
at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:183)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:364)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$3.run(SecondaryNameNode.java:353)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.downloadCheckpointFiles(SecondaryNameNode.java:353)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doCheckpoint(SecondaryNameNode.java:438)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.doWork(SecondaryNameNode.java:329)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode$2.run(SecondaryNameNode.java:288)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:337)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1110)
at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.run(SecondaryNameNode.java:285)
at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://name.node.http:50470/getimage?getimage=1
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1436)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2308)
at sun.net.www.protocol.https.HttpsURLConnectionImpl.getHeaderField(HttpsURLConnectionImpl.java:271)
at org.apache.hadoop.hdfs.server.namenode.TransferFsImage.getFileClient(TransferFsImage.java:175)
... 14 more
It seems the namenode rejects the request of the secondary namenode with HTTP error code 403.
Kerberos is configured with Hadoop, and authentication passes; the namenode accepts the request of the secondary namenode:
2011-10-25 11:27:40,033 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successfull for hadoop/secondarynamenode#MY.DOMAIN.COM
2011-10-25 11:27:40,100 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successfull for hadoop/secondarynamenode#MY.DOMAIN.COM for protocol=interface org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol
2011-10-25 11:27:40,101 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Roll Edit Log from 123.58.169.92
Does anyone know how that could happen? How can I fix it?
Thanks very much.
I think it's more appropriate to move my comment above here as an answer.
This error is caused by the _HOST macro in the secondary namenode principal in hdfs-site.xml: if dfs.secondary.http.address is not set in hdfs-site.xml, _HOST is resolved by whichever host uses it.
In this case the code runs on the namenode, so _HOST is parsed to the namenode address. Since a Kerberos principal is composed of a name, a hostname, and a realm, this yields a different principal, which is why authentication failed.
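A sketch of the corresponding hdfs-site.xml entry (the host is a placeholder; 50090 is the usual secondary namenode HTTP port):

<property>
  <name>dfs.secondary.http.address</name>
  <value>secondary.namenode.host:50090</value>
</property>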
