Facing issue in HiveServer2 service - hadoop

I am using HiveServer2 for connections, but after running a few jobs in Hive I get the error below in the Hive log.
2014-12-30 05:55:54,594 ERROR server.TThreadPoolServer (TThreadPoolServer.java:run(234)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:182)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
After this I am no longer able to connect to HiveServer2; only after restarting the HiveServer2 service can I connect again.
What is the root cause of this issue?
Thanks
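One hedged observation: a TTransportException thrown from handleSaslStartMessage, as in this trace, usually means a peer opened the socket but closed it (or sent non-SASL bytes) before the SASL negotiation finished, e.g. a load-balancer health check or a client using the wrong transport. For comparison, the handshake the server expects is what a plain JDBC client performs; a minimal Java sketch (hive-host, port, and user are placeholders, not values from this log):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class Hs2Smoke {
    public static void main(String[] args) throws Exception {
        // Even with hive.server2.authentication=NONE the wire protocol is
        // SASL PLAIN, so a raw socket probe that never sends a SASL start
        // message produces exactly the TTransportException in the log.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:hive2://hive-host:10000/default", "hive", "");
             Statement st = conn.createStatement()) {
            st.execute("SHOW TABLES");
        }
    }
}

Each logged TTransportException is per-connection noise; if HiveServer2 then stops accepting new connections until a restart, it is worth checking whether the TThreadPoolServer worker pool is exhausted (see hive.server2.thrift.max.worker.threads) via a thread dump.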

Related

java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure] Successfully fetched the content

I have a NiFi installation running on Linux which was working fine, and all of a sudden FetchSFTP started throwing errors.
My flow is ListSFTP - FetchSFTP - PutSFTP, and below is the error shown by the FetchSFTP processor.
FetchSFTP[id=908da67c-0181-1000-1830-fdbb76da7be8] Successfully fetched the content for FlowFile[filename=cfgcampaign_2022-06-25.csv] from etl12.kw.zain.com:22/data1/dw/ftpuser/Varicent_Files/ICM_CC/cfgcampaign_2022-06-25.csv but failed to rename the remote file due to net.schmizz.sshj.sftp.SFTPException: Failure: java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure - Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
And from the log:
2022-06-26 10:58:50,699 WARN [Timer-Driven Process Thread-4] o.a.nifi.processors.standard.FetchSFTP [FetchSFTP[id=908da67c-0181-1000-1830-fdbb76da7be8], StandardFlowFileRecord[uuid=d139d68c-f094-45f8-982d-ab4a1abaf264,claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1656230330691-548, container=default, section=548], offset=0, length=46140],offset=0,name=cfgcampaign_2022-06-25.csv,size=46140], etl12.kw.zain.com, 22, /data1/dw/ftpuser/Varicent_Files/ICM_CC/cfgcampaign_2022-06-25.csv, java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure] Successfully fetched the content for {} from {}:{}{} but failed to rename the remote file due to {}
java.io.IOException: net.schmizz.sshj.sftp.SFTPException: Failure
at org.apache.nifi.processors.standard.util.SFTPTransfer.rename(SFTPTransfer.java:785)
at org.apache.nifi.processors.standard.FetchFileTransfer.performCompletionStrategy(FetchFileTransfer.java:359)
at org.apache.nifi.processors.standard.FetchFileTransfer.lambda$onTrigger$1(FetchFileTransfer.java:313)
at org.apache.nifi.controller.repository.StandardProcessSession.commitAsync(StandardProcessSession.java:537)
at org.apache.nifi.processors.standard.FetchFileTransfer.onTrigger(FetchFileTransfer.java:312)
at org.apache.nifi.processor.AbstractProcessor.onTrigger(AbstractProcessor.java:27)
at org.apache.nifi.controller.StandardProcessorNode.onTrigger(StandardProcessorNode.java:1283)
at org.apache.nifi.controller.tasks.ConnectableTask.invoke(ConnectableTask.java:214)
at org.apache.nifi.controller.scheduling.AbstractTimeBasedSchedulingAgent.lambda$doScheduleOnce$0(AbstractTimeBasedSchedulingAgent.java:63)
at org.apache.nifi.engine.FlowEngine$2.run(FlowEngine.java:110)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: net.schmizz.sshj.sftp.SFTPException: Failure
at net.schmizz.sshj.sftp.Response.error(Response.java:140)
at net.schmizz.sshj.sftp.Response.ensureStatusIs(Response.java:133)
at net.schmizz.sshj.sftp.Response.ensureStatusPacketIsOK(Response.java:125)
at net.schmizz.sshj.sftp.SFTPEngine.rename(SFTPEngine.java:250)
at net.schmizz.sshj.sftp.SFTPClient.rename(SFTPClient.java:124)
at net.schmizz.sshj.sftp.SFTPClient.rename(SFTPClient.java:119)
at org.apache.nifi.processors.standard.util.SFTPTransfer.rename(SFTPTransfer.java:777)
... 16 common frames omitted
Can anyone help me fix this?
Regards,
Ben
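A hedged note rather than a definitive fix: the failing frame is SFTPTransfer.rename, i.e. the processor's Completion Strategy moving/renaming the remote file after a successful fetch. SFTP's generic "Failure" status is what many servers return when the rename target directory is missing, the target name already exists, or the user lacks write permission on the source directory. The Java sketch below reproduces the same sshj call outside NiFi to isolate the server-side cause (host, credentials, and paths are placeholders):

import net.schmizz.sshj.SSHClient;
import net.schmizz.sshj.sftp.SFTPClient;

public class SftpRenameCheck {
    public static void main(String[] args) throws Exception {
        try (SSHClient ssh = new SSHClient()) {
            ssh.loadKnownHosts();
            ssh.connect("sftp.example.com", 22);
            ssh.authPassword("user", "password");
            try (SFTPClient sftp = ssh.newSFTPClient()) {
                // A generic SFTP "Failure" on rename often means the target
                // directory does not exist or the target file is already there.
                if (sftp.statExistence("/data/archive") == null) {
                    sftp.mkdirs("/data/archive"); // hypothetical target directory
                }
                sftp.rename("/data/in/file.csv", "/data/archive/file.csv");
            }
        }
    }
}

If the rename fails here too, the problem is on the SFTP server side (permissions, quota, or an existing target file), not in NiFi.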

GeoMesa-Accumulo add-index job fails

I have a problem with GeoMesa failing when adding indexes; maybe someone knows where the problem is?
geomesa-accumulo add-attribute-index -u root -p xxx -c xxx_dev_test -a asset_id --coverage full -f telemetry_values
DEBUG Looking up Accumulo Instance Id in Zookeeper for 5000 milliseconds.
DEBUG You can specify the Instance Id via the command line or
change the Zookeeper timeout by setting the system property 'instance.zookeeper.timeout'.
INFO Running map reduce index job for attributes: [asset_id] with coverage: full...
ERROR Error encountered running attribute index command. Check hadoop's job history logs for more information.
I found that no jobs were created in Hadoop, so there are no job history logs, but in the tserver logs I found:
2021-01-25 12:32:05,129 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
2021-01-25 12:32:05,202 [rpc.CustomNonBlockingServer$CustomFrameBuffer] WARN : Got an IOException during write!
java.io.IOException: Broken pipe
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.thrift.transport.TNonblockingSocket.write(TNonblockingSocket.java:165)
at org.apache.thrift.server.AbstractNonblockingServer$FrameBuffer.write(AbstractNonblockingServer.java:414)
at org.apache.thrift.server.AbstractNonblockingServer$AbstractSelectThread.handleWrite(AbstractNonblockingServer.java:221)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.select(TNonblockingServer.java:206)
at org.apache.thrift.server.TNonblockingServer$SelectAcceptThread.run(TNonblockingServer.java:154)
hadoop 3.1
accumulo 1.9.3
geomesa-accumulo 2.4.0
any advice?
GeoMesa logs; it looks like the same error as in ZooKeeper:
2021-01-25 13:29:38,762 DEBUG [org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator] IOException thrown
java.io.IOException: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:760)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator$QueryTask.run(TabletServerBatchReaderIterator.java:367)
at org.apache.htrace.wrappers.TraceRunnable.run(TraceRunnable.java:57)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: java.nio.channels.ClosedByInterruptException
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:161)
at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:158)
at org.apache.accumulo.core.client.impl.ThriftTransportPool$CachedTTransport.flush(ThriftTransportPool.java:346)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:73)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:62)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.send_startMultiScan(TabletClientService.java:326)
at org.apache.accumulo.core.tabletserver.thrift.TabletClientService$Client.startMultiScan(TabletClientService.java:308)
at org.apache.accumulo.core.client.impl.TabletServerBatchReaderIterator.doLookup(TabletServerBatchReaderIterator.java:684)
... 6 more
Caused by: java.nio.channels.ClosedByInterruptException
at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:475)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.apache.thrift.transport.TIOStreamTransport.flush(TIOStreamTransport.java:159)
... 13 more
Here are more logs from GeoMesa; it seems there is some problem with job creation:
2021-01-25 13:54:36,873 WARN [org.apache.hadoop.mapred.LocalJobRunner] job_local1471203421_0001
java.lang.Exception: java.lang.NullPointerException
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:492)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:552)
Caused by: java.lang.NullPointerException
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:103)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper$$anonfun$setup$1.apply(AttributeIndexJob.scala:102)
at org.locationtech.geomesa.utils.io.WithStore.apply(WithStore.scala:37)
at org.locationtech.geomesa.jobs.accumulo.index.AttributeIndexJob$AttributeMapper.setup(AttributeIndexJob.scala:102)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:347)
at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:271)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Error from the mapred job:
Exception in thread "main" java.lang.NumberFormatException: For input string: "local1471203421"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:589)
at java.lang.Long.parseLong(Long.java:631)
at org.apache.hadoop.mapreduce.TypeConverter.toClusterTimeStamp(TypeConverter.java:111)
at org.apache.hadoop.mapreduce.TypeConverter.toYarn(TypeConverter.java:82)
at org.apache.hadoop.mapred.ClientServiceDelegate.<init>(ClientServiceDelegate.java:121)
at org.apache.hadoop.mapred.ClientCache.getClient(ClientCache.java:68)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:870)
at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:215)
at org.apache.hadoop.mapreduce.tools.CLI.getJob(CLI.java:660)
at org.apache.hadoop.mapreduce.tools.CLI.run(CLI.java:470)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
at org.apache.hadoop.mapred.JobClient.main(JobClient.java:1277)
Hadoop 3.1 does not support this feature; an update to 3.2 is needed.
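A further hedged observation: the job id job_local1471203421_0001 in the NullPointerException log is produced by Hadoop's LocalJobRunner, and the later NumberFormatException is the YARN client failing to parse that local id, so the client side appears to resolve mapreduce.framework.name to local instead of yarn. A minimal Java sketch for checking and forcing this (the property name is standard Hadoop; whether it resolves this particular setup is an assumption):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class FrameworkCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml/mapred-site.xml from the classpath, if present.
        Configuration conf = new Configuration();
        // Hadoop falls back to "local" (LocalJobRunner) when this is unset.
        System.out.println(conf.get("mapreduce.framework.name", "local"));
        conf.set("mapreduce.framework.name", "yarn"); // force YARN submission
        Job job = Job.getInstance(conf, "attribute-index-check");
        System.out.println("framework = "
                + job.getConfiguration().get("mapreduce.framework.name"));
    }
}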

Talend Bigdata & Hortonworks sandbox

I am trying to write data from Talend to Hortonworks sandbox Hadoop setup.
While executing the Talend job to send the data, it generates the following exception and warning in the console. The Talend job does create the file in the Hortonworks environment but is not able to write data to it.
[statistics] connecting to socket on port 3740
[statistics] connected
[WARN ]: org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in component tHDFSOutput_1
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[WARN ]: org.apache.hadoop.hdfs.DFSClient - DataStreamer Exception
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
[statistics] disconnected
[ERROR]: org.apache.hadoop.hdfs.DFSClient - Failed to close inode 17106
java.io.IOException: DataStreamer Exception:
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:697)
Caused by: java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Unknown Source)
at sun.nio.ch.SocketChannelImpl.connect(Unknown Source)
at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:192)
at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
at org.apache.hadoop.hdfs.DFSOutputStream.createSocketForPipeline(DFSOutputStream.java:1611)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1409)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1362)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:589)
Job test ended at 16:50 18/03/2017. [exit code=1]
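Hedged reading of the trace: UnresolvedAddressException inside createSocketForPipeline means the client reached the NameNode (hence the empty file gets created) but cannot resolve the datanode address the NameNode returned; with the Hortonworks sandbox this is typically its internal hostname. Adding that hostname to the client's hosts file and making the HDFS client connect to datanodes by hostname usually helps. A minimal Java sketch (host and paths are placeholders; dfs.client.use.datanode.hostname is a real HDFS client property):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SandboxWrite {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Connect to datanodes by hostname rather than their sandbox-internal IPs.
        conf.setBoolean("dfs.client.use.datanode.hostname", true);
        try (FileSystem fs = FileSystem.get(
                URI.create("hdfs://sandbox.hortonworks.com:8020"), conf)) {
            fs.copyFromLocalFile(new Path("data.csv"), new Path("/tmp/data.csv"));
        }
    }
}

In Talend the same property can usually be added in the component's Hadoop properties table instead of code.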

Getting “NoSuchMethodError: org.apache.hadoop.mapreduce.Job.setJar” error when building sample Kylin cube

I am trying to set up Kylin 1.6 on my Cloudera cluster (5.9). The setup was successful, but when I try to build the sample cube I get this error:
org.apache.kylin.job.exception.ExecuteException: org.apache.kylin.job.exception.ExecuteException: java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.setJar(Ljava/lang/String;)V
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:123)
at org.apache.kylin.job.impl.threadpool.DefaultScheduler$JobRunner.run(DefaultScheduler.java:136)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kylin.job.exception.ExecuteException: java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.setJar(Ljava/lang/String;)V
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:123)
at org.apache.kylin.job.execution.DefaultChainedExecutable.doWork(DefaultChainedExecutable.java:57)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
... 4 more
Caused by: java.lang.NoSuchMethodError: org.apache.hadoop.mapreduce.Job.setJar(Ljava/lang/String;)V
at org.apache.kylin.engine.mr.common.AbstractHadoopJob.setJobClasspath(AbstractHadoopJob.java:162)
at org.apache.kylin.engine.mr.steps.FactDistinctColumnsJob.run(FactDistinctColumnsJob.java:88)
at org.apache.kylin.engine.mr.MRUtil.runMRJob(MRUtil.java:92)
at org.apache.kylin.engine.mr.common.MapReduceExecutable.doWork(MapReduceExecutable.java:120)
at org.apache.kylin.job.execution.AbstractExecutable.execute(AbstractExecutable.java:113)
I am not able to find a solution for it. Can anyone please help me in this case?
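One hedged diagnostic: Job.setJar(String) exists in Hadoop 2.x, so a NoSuchMethodError at runtime usually means an older MapReduce jar is shadowing the CDH 5.9 ones on Kylin's classpath. This small Java probe, run with the same classpath Kylin uses, shows which jar the Job class is actually loaded from and whether that version has setJar:

public class JobClassProbe {
    public static void main(String[] args) throws Exception {
        Class<?> jobClass = Class.forName("org.apache.hadoop.mapreduce.Job");
        // Which jar was this class loaded from?
        System.out.println(jobClass.getProtectionDomain().getCodeSource().getLocation());
        // Throws NoSuchMethodException if this Job version lacks setJar(String).
        System.out.println(jobClass.getMethod("setJar", String.class));
    }
}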

Hive action fails in Oozie executed from Hue UI

I am executing an Oozie Hive action and it fails in the Hue UI with the following exception in the Hive logs:
2016-07-15 15:27:58,430 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:138)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
2016-07-15 15:27:58,432 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:739)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:736)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1651)
at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:736)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:268)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TTransportException: Unsupported mechanism type PLAIN
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:138)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 10 more
Kerberos is enabled on the Hadoop cluster, and all the settings are added in hive-site.xml.
I am also able to connect to HiveServer2 from the command line.
Please help me understand what is going wrong here.
It seems like you are trying to use plain authentication to connect to a secured environment:
Unsupported mechanism type PLAIN
I am not sure whether it is the same in Hue, but normally I would suspect that you need to set the Hive authentication method to KERBEROS.
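For reference, a minimal Java sketch of a Kerberized JDBC connection; the host and principal are placeholders and must match hive.server2.authentication.kerberos.principal from hive-site.xml. A client that omits the principal falls back to SASL PLAIN, which a Kerberized HiveServer2 rejects with exactly this "Unsupported mechanism type PLAIN":

import java.sql.Connection;
import java.sql.DriverManager;

public class KerberosHs2 {
    public static void main(String[] args) throws Exception {
        // Requires a valid Kerberos ticket (kinit) in the client credential cache.
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        String url = "jdbc:hive2://hive-host:10000/default;"
                + "principal=hive/_HOST@EXAMPLE.COM"; // placeholder principal
        try (Connection conn = DriverManager.getConnection(url)) {
            conn.createStatement().execute("SHOW TABLES");
        }
    }
}

In an Oozie Hive2 action the analogous step is attaching the hive2 credential so the launcher authenticates with Kerberos before connecting.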
